
Guests
- Karen Hao, longtime technology reporter who leads the Pulitzer Center’s AI Spotlight Series program for training journalists on how to cover artificial intelligence.
Extended interview with Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. The book documents the rise of OpenAI and how the AI industry is leading to a new form of colonialism.
Transcript
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman, as we continue our conversation with Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.
The House recently passed Trump’s so-called big, beautiful budget bill, which contained a provision that would prohibit any state-level regulations of AI for the next 10 years, in a major gift to the AI industry. Republican Congresswoman Marjorie Taylor Greene, the extreme conservative from Georgia, has criticized the provision, even though she voted for the bill. She wrote online, quote, “Full transparency, I did not know about this section… I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there,” she said.
To talk about this and much more, we continue our conversation with Karen Hao.
So, let’s just begin with this bill. It is astounding. It has shocked many, even though many others say your job as a congressperson or senator is to actually read the bill, even though it’s a thousand pages.
KAREN HAO: Yeah.
AMY GOODMAN: Explain what this provision is, and how you believe, Karen — I mean, you’ve covered this, these issues, for years for The Wall Street Journal — how it got in there.
KAREN HAO: Oh my god, how it got in there? I don’t know. I mean, these companies in Silicon Valley, they have been so aggressively pushing for some kind of approach to just ward off regulation, and this is like the ace in the hole. I mean, previously, Sam Altman, during the Biden administration, he came to the Senate and testified, and he said, “We welcome regulation.” But he specifically tried to shift the burden of regulation to future models that don’t yet exist. So, when the senators were asking about jobs and environmental impact and copyright, he said, “No, no, the thing that we have to worry about is rogue AI systems extricating themselves from data centers. That hasn’t happened yet, but keep your eye on the prize.” And so, all the senators shifted from thinking about current regulation to future regulation.
In the Trump administration, he’s changed tactics. So, he came back to the Senate, and he said, “I think we need a light-touch approach, because if we don’t, we will not be able to win against China. And we cannot have 50 different regulatory environments at the state level, because the compliance burden would be too high.”
I can’t — I don’t — I haven’t reported on the behind the scenes of how this provision went in, but it is not a coincidence that then that provision was introduced shortly after he gave that testimony at the Senate. And what senators who support the legislation have specifically called out is, “It’s not that we don’t want to regulate AI at all. It’s that we need a light-touch approach, and we cannot have” — so they’re echoing a lot of the sentiments that Altman gave. And, I mean, the implications of this are — this is a fast-moving technology. So much can happen in 10 years. To not be able to regulate any of it at the state level, I mean —
AMY GOODMAN: And talk about what you can see the states regulating.
KAREN HAO: One of the things that we have seen is that this technology is already having a huge impact on jobs, not necessarily because the technology itself is really capable of replacing jobs, but because it is perceived as capable enough that executives are laying off workers. And we need some kind of guardrails to actually prevent these companies from continuing to develop labor-automating technologies, and to shift them toward producing labor-assistive technologies.
AMY GOODMAN: What do you mean?
KAREN HAO: So, OpenAI, their definition of what they call artificial general intelligence is highly autonomous systems that outperform humans in most economically valuable work. So they explicitly state that they are trying to automate jobs away. I mean, what is economically valuable work but the things that people do to get paid?
But there’s this really great book called Power and Progress by MIT economists Daron Acemoglu and Simon Johnson, who argue that technology revolutions take a labor-automating approach not because of inevitability, but because the people at the top choose to automate those jobs away. They choose to design the technology so that they can sell it to executives and say, “You can shrink your costs by laying off all these workers and using our AI services instead.”
But in the past, we’ve seen studies that, for example, suggest that if you develop an AI tool that a doctor uses, rather than replacing the doctor, you will actually get better healthcare for patients. You will get better cancer diagnoses. If you develop an AI tool that teachers can use, rather than just an AI tutor that replaces the teacher, your kids will get better educational outcomes. And so, that’s what I mean by labor-assistive rather than labor-automating.
AMY GOODMAN: And explain what you mean, because I think a lot of people don’t even understand artificial intelligence. And when you say “replace the doctor,” what are you talking about?
KAREN HAO: Right. So, these companies, they try to develop a technology that they position as an everything machine that can do anything. And so, they will try to say, “You can use this — you can talk to ChatGPT for therapy.” No, you cannot. ChatGPT is not a licensed therapist. And, in fact, these models actually spew lots of medical misinformation. And there have been lots of examples of, actually, users being psychologically harmed by the model, because the model will continue to reinforce self-harming behaviors. And we’ve even had cases where children who speak to chatbots and develop huge emotional relationships with these chatbots have actually killed themselves after using these chatbot systems. But that’s what I mean when these companies are trying to develop labor-automating tools. They’re positioning it as: You can now hire this tool instead of hire a worker.
I mean, most recently, Sam Altman was speaking at a conference and said, “We originally said that these models were junior-level partners at a law firm, and now we think that they can really be more senior colleagues at a law firm.” What he’s saying is, “Don’t hire the junior-level partners, don’t hire the senior colleagues, and just use our AI models.” And we are already seeing the career ladder breaking, because many different white-collar — white-collar service industries, as well as other industries, are becoming convinced that they do not need to hire interns, they do not need to hire entry-level positions, that they just need these AI models. And new college graduates are struggling now to find job opportunities to help them get a foothold into these industries.
AMY GOODMAN: So, you’ve talked about Sam Altman, and in Part 1, we touched on who he is, but I’d like you to go more deeply into what — who Sam Altman is, how he exploded onto the U.S. scene testifying before Congress, actually warning about the dangers of AI. So that really protected him, in a way, people seeing him as a prophet. That’s a P-R-O-P-H-E-T.
KAREN HAO: Right.
AMY GOODMAN: But now we can talk about the other kind of profit, P-R-O-F-I-T.
KAREN HAO: Yeah.
AMY GOODMAN: And how was OpenAI formed? How is OpenAI different from AI?
KAREN HAO: OpenAI is a — I mean, it was originally founded as a nonprofit, as I mentioned. And Altman specifically, when he was thinking about, “How do I make a fundamental AI research lab that is going to make a big splash?” he chose to make it a nonprofit because he identified that if he could not compete on capital — and he was relatively late to the game. Google already had a monopoly on a lot of top AI research talent at the time. If he could not compete on capital, and he could not compete in terms of being a first mover, he needed some other kind of ingredient there to really recruit talent, recruit public goodwill and establish a name for OpenAI. So he —
AMY GOODMAN: A gimmick.
KAREN HAO: — identified a mission. He identified: Let me make this a nonprofit, and let me give it a really compelling mission. So, the mission of OpenAI is to ensure artificial general intelligence benefits all of humanity. And one of the quotes that I open my book with is this quote that Sam Altman cited himself in 2013 in his blog. He was an avid blogger back in the day, talking about his learnings on business and strategy and Silicon Valley startup life. And the quote is, “Successful people build companies. More successful people build countries. The most successful people build religions.” And then he reflects on that quote in his blog, saying, “It appears to me that the best way to build a religion is actually to build a company.” And so, he identified early on in his career that if you can give people some kind of higher purpose, some kind of higher belief, that that will be a way to rally more talent, rally more capital, even if you started in second place. And so, that’s kind of the origin story of why OpenAI ended up as a nonprofit first and set its sights on trying to transform AI development.
AMY GOODMAN: And so, talk about how Altman was then forced out of the company and then came back. And also, I just found it so fascinating that you were able to speak with so many OpenAI workers. You thought —
KAREN HAO: Yeah.
AMY GOODMAN: — there was a kind of total ban on you.
KAREN HAO: Yes. Yeah, exactly. So, I was the first journalist to profile OpenAI. I embedded within the company for three days in 2019, and then my profile was published in 2020 for MIT Technology Review. And at the time, I identified in the profile this tension that I was seeing, where it was a nonprofit by name, but behind the scenes, a lot of the public values that they espoused were actually the opposite of how they operated. So, they espoused transparency, but they were highly secretive. They espoused collaborativeness, but they were highly competitive. And they espoused that they had no commercial intent, but, in fact, they had just gotten a $1 billion investment from Microsoft, and it seemed like they were rapidly going to develop commercial intent. And so I wrote that into the profile, and OpenAI was deeply unhappy about it, and they refused to talk to me for three years.
But when I started working on the book, when I started reaching out to employees, current and former, I discovered that many employees actually really liked the profile. And they specifically wanted to talk to me, because they thought that I would do justice to the truth of what had actually happened within the company, and be able to get behind what the executives mythologized and narrativized about this technology and the course of this company, to the real heart of the matter.
And so, one of the things that you really have to understand about AI development today is that there are what I call quasi-religious movements that have developed within Silicon Valley. The concept of artificial general intelligence is not one that’s scientifically grounded. It is this idea that we can fundamentally recreate human intelligence in computers. And this idea has been around for actually a really long time. The field of AI was founded all the way back in the 1950s, and that was the original intent of the field: How do we recreate intelligence in computers? Can machines think? That was the famous question that British mathematician Alan Turing asked.
But we, to this day, do not have scientific consensus around even what human intelligence is. And so, to peg an entire research field and a technology to the basis of human intelligence is a very tricky endeavor, because there are no good metrics to assess: Have we actually gotten there yet? And there’s no blueprint to say what AI should look like, how it should work, and, ultimately, whom it should serve. And so, when OpenAI took up this mission of artificial general intelligence, they were able to essentially shape and mold what they wanted this technology to be, based on what is most convenient for them.
But when they identified it, it was at a time when scientists really looked down on even the term “AGI.” And so, they attracted just a small group of self-identified AGI believers. This is why I call it quasi-religious. Because there’s no scientific evidence that we can actually develop AGI, for the people who have this strong conviction that they will do it and that it’s going to happen soon, it is purely based on belief. And they talk about it as a belief, too. But there are two factions within this belief system of the AGI religion: There are people who think AGI is going to bring us to utopia, and there are people who think AGI is going to destroy all of humanity. Both of them believe that it is possible, it’s coming soon, and therefore, they conclude that they need to be the ones to control the technology and not democratize it.
And this is ultimately what leads to your question of what happened when Sam Altman was fired and rehired. Through the history of OpenAI, there’s been a lot of clashing between the boomers and doomers about who should actually —
AMY GOODMAN: The boomers and doomers.
KAREN HAO: The boomers and the doomers.
AMY GOODMAN: Those that say it’ll bring us the apocalypse.
KAREN HAO: So, utopia, boomers, and those that say it’ll destroy humanity, the doomers. And they have clashed relentlessly and aggressively about how quickly to build the technology, how quickly to release the technology. And ultimately, Altman is one that — he is really good at saying to people what they need to hear. And he will say different things to different people if he thinks they need to hear different things. So, when I asked boomers, “Is Altman a boomer?” they said, “Yes.” When I asked doomers, “Is Altman a doomer?” they said, “Yes.”
And so, the reason he was fired was because these boomers and doomers were fighting over how to ultimately determine the future of OpenAI, and therefore the future of what they saw as artificial general intelligence, and there was a loss of trust from a lot of the people about which — where Altman actually stood. And the board was more doomer-leaning. They felt, “Wait a minute. We thought Altman was more doomer-leaning, but now he seems more boomer-leaning, and he’s also saying all of these other things about the ownership of different entities that were supposed to be owned by OpenAI, but he actually owns them. And we’re discovering all of these other inconsistencies between what he’s told us and what is actually happening.” And that is why, ultimately, they decided to oust him, unsuccessfully.
AMY GOODMAN: And then explain what happened, why he wasn’t ultimately ousted.
KAREN HAO: He is very good at making himself the linchpin of people’s access to financial resources. So, when he struck the Microsoft deal, Microsoft —
AMY GOODMAN: And explain what that is.
KAREN HAO: Microsoft is one of the largest — was one of the largest backers of OpenAI, put $13 billion into the — into OpenAI, with a commercial deal where OpenAI — Microsoft would then get access to all of OpenAI’s technology.
AMY GOODMAN: Bill Gates.
KAREN HAO: Bill Gates was a big influence in striking that deal. Once he became impressed and enthralled by OpenAI’s progress, he was one of the ones that convinced Satya Nadella, the current CEO of Microsoft, to then greenlight these massive investments. Altman was a key orchestrator of that, and Altman was key to continuing to facilitate Microsoft’s ability to get access to these models.
Altman was also the key facilitator of a tender offer, which is something a lot of startups do, where they sell shares in the secondary market. It allows employees, who would otherwise only be able to access the financial value of their shares when the company goes public, to cash out earlier, while the company is still private. This one would have allowed some employees to cash out millions of dollars of their shares. And the tender offer had not closed yet when Altman was ousted, and he was also the linchpin of that tender offer closing. So, suddenly, all these employees were afraid that they would be out of money that they had potentially already committed to putting down for a down payment on a house.
He was also the linchpin of financial relationships with investors that had put money into OpenAI and were expecting return on that investment. And ultimately, because there was — there’s an aggressive AI talent war that’s happening within the industry, as well, many of OpenAI’s competitors thought, “Well, if Altman is gone, this company is weak. I can poach all of this talent.”
And so, many people also started worrying the company is getting so unstable that it could just collapse and dissolve, and it would disband all of this work that has been done, all of that financial value that could have been accrued. And so, the employees, Microsoft, investors, executives at the company all banded together to say, “We will not allow Altman to be fired. He has to be rehired and returned to the organization.”
AMY GOODMAN: So he returned. And I want to take this up until today, to, in January, the Trump administration announcing the Stargate Project, a $500 billion project to boost AI infrastructure in the United States. This is OpenAI’s Sam Altman speaking alongside President Trump.
SAM ALTMAN: I think this will be the most important project of this era. And as Masa said, for AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn’t be able to do this without you, Mr. President. And I’m thrilled that we get to. I think it’ll be an exciting project. I think we’ll be able to do all of the wonderful things that these guys talked about. But the fact that we get to do this in the United States is, I think, wonderful.
AMY GOODMAN: He also there referred to AGI —
KAREN HAO: Exactly.
AMY GOODMAN: — artificial general intelligence. Explain what happened here and what this is. And has it actually happened?
KAREN HAO: So, Altman, before Trump was elected, he already was sensing, through observation, that it was possible that the administration would shift and that he would need to start politicking quite heavily to ingratiate himself to a new administration. Altman is very strategic. He was under a lot of pressure at the time, as well, because his original co-founder, Elon Musk, now has great beef with him. Musk feels like Altman used his name and his money to set up OpenAI, and then he got nothing in return. So, Musk had been suing him, is still suing him, and suddenly became first buddy of the Trump administration.
So, Altman basically cleverly orchestrated this announcement, where, by the way, the announcement is quite strange, because the Trump — President Trump is not — it’s not the U.S. government giving $500 billion. It’s private investment coming into the U.S. from places like SoftBank.
AMY GOODMAN: Which is?
KAREN HAO: Which is one of the largest investment funds, run by Masayoshi Son, a Japanese businessman who made a lot of his wealth from the previous tech era. So, it’s not even the U.S. government that’s providing this money.
But Altman was very clever in that he positioned this as part of the Trump administration’s legacy. So, you get to announce this $500 billion new investment into the U.S. on the second day of your presidency. And that latches the Trump — President Trump’s legacy onto the success of this particular investment. And not only does that now create new pathways for OpenAI to continue accumulating extraordinary amounts of wealth and continue laying out extraordinary, vast pieces of computational infrastructure, it also gives him protection from Musk, because President Trump was one of the only figures at that point that could be a shield to the first buddy and his beef against the company.
AMY GOODMAN: And take that right through to now, that golf trip that Elon Musk was on, but so was Sam Altman —
KAREN HAO: Yes.
AMY GOODMAN: — to the fury of Elon Musk. And then a deal was sealed in Abu Dhabi —
KAREN HAO: Yeah. So —
AMY GOODMAN: — that didn’t include Elon Musk, but was about OpenAI.
KAREN HAO: Exactly. So, Altman has continued to try and use the U.S. government as a way to get access to more places and more powerful spaces to build out this empire. And OpenAI’s computational infrastructure needs are so aggressive that, you know, I had an OpenAI employee tell me, “We’re running out of land and power.” So, they are running out of resources in the U.S., which is why they’re trying to get access to land and energy in other places. The Middle East has a lot of land and a lot of energy, and they’re willing to strike deals. And that is why Altman was part of that trip, looking to strike a deal. And the deal that they struck was to build a massive data center, or multiple data centers, in the Middle East, using their land and their energy.
But one of the things that OpenAI has recently rolled out, they call it the OpenAI for Countries program, and it is this idea that they want to install OpenAI hardware and software in places around the world. It explicitly says, “We want to build democratic AI rails. We want to install our hardware and software as a foundation of democratic AI globally, so that we can stop China from installing authoritarian AI globally.”
But the thing that he does not acknowledge is that there is nothing democratic about what he’s doing. You know, The Atlantic’s executive editor says, “We need to call these companies what they are.” They are techno-authoritarians. They do not ask the public for any perspective on how they develop the technology, what data they train the technology on, where they develop these data centers. In fact, these data centers are often developed under the cover of night, using shell companies. Like, Meta recently entered New Mexico under a shell company named Greater Kudu LLC.
AMY GOODMAN: Greater Kudu?
KAREN HAO: Greater Kudu LLC. And once the deal was actually closed, and the residents couldn’t do anything about it anymore, that’s when it was revealed: “Surprise, we’re Meta. And you’re going to get a data center that drinks all of your freshwater.”
AMY GOODMAN: And then there was this whole controversy in Memphis around a data center.
KAREN HAO: Yes. So, that is the data center that Elon Musk is building. So, meanwhile, Musk is saying, “Altman is terrible. Everyone should use my AI.” And, of course, his AI is also being developed using the same environmental and public health costs. So, he built this massive supercomputer called Colossus in Memphis, Tennessee, that’s training Grok, the chatbot that people can access through X. And that is being powered by around 35 unlicensed methane gas turbines that are pumping thousands of tons of toxic air pollutants into the greater Memphis community. And that community has long suffered a lack of access to clean air, a fundamental human right.
AMY GOODMAN: So, I want to go to, interestingly, Sam Altman testifying in front of Congress last month about solutions to the high energy consumption of artificial intelligence.
SAM ALTMAN: We’ve talked a lot about the importance of energy to AI. Energy is just really important to quality of life. One of the things that seems to me the most consistent throughout history is, every time the cost of energy falls, the quality of life goes up. And so, doing a lot to make energy cheaper in the short term, I think this probably looks like more natural gas, although there are some applications where I think solar can really help. In the medium term, I hope it’s advanced nuclear fission and fusion. More energy is important well beyond AI.
AMY GOODMAN: So, that’s OpenAI’s Sam Altman. This is testifying before the Senate and talking about everything from solar to nuclear power —
KAREN HAO: Yeah.
AMY GOODMAN: — something that was fought in the United States by environmental activists for decades. So, you have these huge, old nuclear power plants, but many say you can’t make them safe, no matter how small and smart you make them.
KAREN HAO: This is one of the things — of the many things that I’m concerned about with the current trajectory of AI development. This is a second-order, third-order effect: because these companies are trying to claim that the AI development approach they took doesn’t have climate harms, they are explicitly evoking nuclear again and again and again, as though nuclear will solve the problem. And it has been effective. I have talked with certain AI researchers who thought the problem was solved because of nuclear. And in order to actually build more and more nuclear plants, they are lobbying governments to try and unwind the regulatory structure around nuclear power plant building. I mean, this is crazy on so many levels: they’re not just trying to develop the AI technology recklessly, they are also trying to lay down energy and nuclear infrastructure with this move-fast, break-things ideology.
AMY GOODMAN: But for those who are environmentalists and have long opposed nuclear, will they be sucked in by the solar alternative?
KAREN HAO: But that — so, data centers have to run 24/7, so they cannot actually run on just renewables. That is why the companies keep trying to evoke nuclear as the solve-all. Solar alone does not actually work, because we do not have sufficient energy storage solutions for that 24/7 operation.
AMY GOODMAN: We’re talking to Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. You mentioned earlier China. You live in Hong Kong.
KAREN HAO: Yes.
AMY GOODMAN: You’ve covered Chinese AI, U.S. AI for years.
KAREN HAO: Yeah.
AMY GOODMAN: Explain what’s happening in China right now.
KAREN HAO: Yeah, so — I have to sort of explain the dynamic between China and the U.S. first. China and the U.S. are the largest hubs for AI research. They have the largest concentrations of AI research talent globally. Other than Silicon Valley, China really is the only rival in terms of talent density and the amount of capital investment and the amount of infrastructure that is going into AI development.
In the last few years, what we have seen is the U.S. government aggressively trying to stay number one, and one of the mechanisms it has used is export controls. A key input into these AI models is the computational infrastructure, the computer chips installed in data centers for training these models. In order to develop the AI models, companies are using the most bleeding-edge computer chip technology. It’s like every two years, a new chip comes out, and they immediately start using that to train the next generation of AI models. Those computer chips are designed by American companies, the most prominent one being Nvidia in California. And so, the U.S. government has been trying to use export controls to prevent Chinese companies from getting access to the most cutting-edge computer chips. That has all been under the recommendation of Silicon Valley saying, “This is the way to prevent China from being number one. Put export controls on them, and don’t regulate us at all, so we can stay number one, and they will fall behind.”
What has happened instead is, because there is a strong base of AI research talent in China, under the constraints of fewer computational resources, Chinese companies have actually been able to innovate and develop the same level of AI model capabilities as American companies, with two orders of magnitude less computational resources, less energy, less data. So, I’m talking specifically about the Chinese company High-Flyer, which developed this model called DeepSeek earlier this year, that briefly tanked the stock market, because the company said that training this one AI model cost around $6 million, when OpenAI was training models that cost hundreds of millions, if not billions, of dollars. And that delta demonstrated to people that what Silicon Valley has tried to convince everyone of for the last few years, that this is the only path to getting more AI capabilities, is totally false. And actually, the techniques that the Chinese company was using were ones that existed in the literature and just had to be assembled. They used a lot of engineering sophistication to do that, but they weren’t actually using fundamentally new techniques; they were ones that already existed.
AMY GOODMAN: So, explain it further, because I think a lot of people just can’t get their minds around this. How do you do this training?
KAREN HAO: So, there’s a type of software called a neural network, which is essentially a massive statistical engine. It does lots and lots of sophisticated statistical computation to try and ascertain what kinds of patterns exist in data sets. So, typically, in the past, before we got to large language models, it would be doing something like looking at MRI scans and learning the patterns of what cancer looks like in an MRI scan. Now, with ChatGPT, what it’s looking at is: What are the patterns of the English language? What is the syntax, the structure, the figures of speech that are typically used? And then it uses those patterns to construct new sentences. That’s how generative AI works.
And the reason why it’s so computationally expensive is because it’s crunching the numbers for those patterns. And the more data you feed in, the more it has to crunch. We used to train these AI models on, you know, a powerful laptop, like maybe one computer chip. Maybe the richest academic labs, like MIT, would be training on a couple or a dozen computer chips. And companies like Google would be training maybe on a couple hundred computer chips. We are now talking about hundreds of thousands of computer chips training a single model. And that is what OpenAI says is necessary to build these technologies. And that is what DeepSeek proved wrong.
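To make that pattern-learning idea concrete, here is a minimal toy sketch in Python. It uses simple bigram counts rather than a neural network, and the tiny corpus is invented for illustration, but it shows the same basic move Hao describes: extract statistical patterns from text, then use those patterns to construct new sentences.

```python
# Toy "statistical engine": learn which word tends to follow which,
# then use those learned patterns to generate a new sentence.
import random
from collections import defaultdict

# Invented miniature corpus standing in for real training text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn the patterns: for each word, record the words that follow it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# Generate: repeatedly sample a plausible next word from the counts.
word = "the"
sentence = [word]
while word != "." and len(sentence) < 12:
    word = random.choice(transitions[word])
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the dog sat on the mat ."
```

A large language model does the same kind of pattern extraction, but over billions of documents and with a neural network instead of a lookup table, which is where the hundreds of thousands of chips come in.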
AMY GOODMAN: So, let me ask you something, Karen. The latest news, as you’re traveling in the United States, before you go back to Hong Kong, of Trump’s attack on academia, how this fits in? How could Trump’s attack on international students, specifically targeting the, what, more than 250,000, a quarter of a million, Chinese students —
KAREN HAO: Yeah.
AMY GOODMAN: — and revoking their visas —
KAREN HAO: Yeah.
AMY GOODMAN: — impact the future of the AI industry? But not just Chinese students, because what’s going on here now is terrifying students around the world.
KAREN HAO: Yes.
AMY GOODMAN: And because labs are shutting down in all kinds of ways here, U.S. students, as well, deciding to go abroad.
KAREN HAO: This is just the latest action that the U.S. government has taken over the last few years to really alienate a key talent pool for U.S. innovation. Originally, there were more Chinese researchers working in the U.S. contributing to U.S. AI than there were in China, because just a few years ago, Chinese researchers aspired to work for American companies. They wanted to move to the U.S. They wanted to contribute to the U.S. economy. They didn’t want to go back to their home country.
But because of what was called the China Initiative, an initiative from the first Trump era to try and criminalize Chinese academics, or ethnically Chinese academics, some of whom were actually Americans: based on just paperwork errors, they would accuse them of being spies. That was one of the first actions. Then, of course, the pandemic happened, and the U.S.-China trade escalations started amplifying anti-Chinese rhetoric. All of this, and now the potential ban on international students, has led more and more Chinese researchers to just opt for staying at home and contributing to the Chinese AI ecosystem.
And this was a prerequisite to High-Flyer pulling off DeepSeek. If there had not been that concentration and build-up of AI talent in China, they probably would have had a much harder time innovating around and circumventing the export controls that the U.S. government was imposing on them. But because they now have a high concentration of top talent, some of the top talent globally, when those restrictions were imposed, they were able to innovate around them. So, DeepSeek is literally a product of that continued alienation.
And with the U.S. continuing to take this stance, it is just going to get worse. And as you mentioned, it’s not just Chinese researchers. I literally just talked to a friend in academia that said she’s considering going to Europe now, because she just cannot survive without that public funding. And European countries are seeing a critical opportunity, offering million-dollar packages: “Come here. We’ll give you a lab. We’ll give you millions of dollars of funding.” I mean, this is the fastest way to brain drain this country.
AMY GOODMAN: I mean, many are saying, “The U.S.’s brain drain is their brain gain.”
KAREN HAO: Yes.
AMY GOODMAN: And this also reminds us of history. You have the Chinese rocket scientist Qian Xuesen, who, in the 1950s, was inexplicably held under house arrest for years, and then Eisenhower has him deported to China. He becomes the father of Chinese rocket science and of China’s entry into space.
KAREN HAO: Yeah.
AMY GOODMAN: And he said he would never again step foot into the United States, even though originally that was the only place he wanted to live.
KAREN HAO: Yes, and there was, I believe, a government official, a U.S. government official, who said that was the dumbest mistake the U.S. ever made.
AMY GOODMAN: We talk about the brain drain and the brain gain. OK, again, some more rhyming, the doomers and the boomers. I want to talk about what an AI apocalypse looks like, meaning how it brings us to apocalypse, but also how people say it could lead us to a utopia. What are the two tracks, trajectories?
KAREN HAO: It’s a great question. And I ask boomers and doomers this all the time: Can you articulate to me exactly how we get there? And the issue is that they cannot. And this is why I call it quasi-religious. It really is based on belief.
I mean, I was talking with one researcher who identified as a boomer, and I said — you know, his eyes were wide, and he really lit up, saying, “You know, once we get to AGI, game over. Everything becomes perfect.” And I asked him, I was like, “Can you explain to me: How does AGI feed people that haven’t — don’t have food on the table right now?” And he was like, “Oh, you’re talking about, like, the floor floor and how to elevate their quality of life.” And I was like, “Yes, because they are also part of all of humanity.” And he was like, “I’m not really sure how that would happen, but I think it could help the middle class get more economic opportunity.” And I was like, “OK, but how does that happen, as well?” And he was like, “Well, once these come — once we have AGI, and it can just create trillions of dollars of economic value, we can just give them cash payouts.” And I was like, “Who’s giving them cash payouts? What institutions are giving them?” You know, like, it doesn’t — when you actually test their logic, it doesn’t really hold.
And with the doomers, I mean, it’s the same thing. Like, their belief is — ultimately, what I realized when reporting the book is they believe AGI is possible because of their belief about how the human brain works. They believe human intelligence is inherently fully computational. So, if you have enough data and you have enough computational resources, you will inevitably be able to recreate human intelligence. It’s just a matter of time. And to them, the reason why that would lead to an apocalyptic scenario is humans, we learn and improve our intelligence through communication, and communication is inefficient. We miscommunicate all the time. AI intelligences, by contrast, would be able to rapidly get smarter and smarter and smarter by having perfect communication with one another as digital intelligences. And so, many of these people who self-identify as doomers say there has never been, in the history of the universe, a species that was able to rule over a more intelligent species. So they think that, ultimately, AI will evolve into a higher species and then start ruling us, and then maybe decide to get rid of us altogether.
AMY GOODMAN: But who are these beings?
KAREN HAO: Who knows? Yeah, like, that — I mean, that is one of the holes, again, in their argument, is: Do these beings have bodies? Are we building those bodies? Do we build them into robotics? Like, how do we actually get there? How are they getting control of the physical environment? And that is still a big question mark.
AMY GOODMAN: Who’s the AI guy who’s doing research in universal basic income to deal with all the jobs lost?
KAREN HAO: I mean, Sam Altman has done some of that research himself, yeah. So, he has long talked about publicly supporting universal basic income. But, I mean, he’s been talking about it for almost a decade, or maybe a decade at this point, and it hasn’t actually materialized. But he did, in fact, do a study out of his foundation, another nonprofit called OpenResearch, where they gave cash payouts to people and ran an A/B test: a control group that didn’t get the payouts, and a group that did. And they did find that the people who got the payouts were able to meaningfully improve their economic fortunes and invest more of their time in things that they wanted to do. So it did open up their opportunities.
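As a rough sketch of how such a randomized comparison can be analyzed, here is a minimal Python example. The group sizes, dollar amounts, and effect size are simulated placeholders, not figures from the OpenResearch study.

```python
# Minimal A/B-style analysis of a cash-transfer experiment:
# compare an outcome between a treatment group (received payouts)
# and a control group (did not). All numbers are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
control = rng.normal(loc=1000, scale=300, size=500)    # e.g. monthly spending
treatment = rng.normal(loc=1100, scale=300, size=500)  # control plus payout effect

effect = treatment.mean() - control.mean()             # average treatment effect
t_stat, p_value = stats.ttest_ind(treatment, control)  # two-sample t-test

print(f"estimated effect: ${effect:.0f}/month (p = {p_value:.4f})")
```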
But the thing is, that research is now done, and there’s still not any movement around, “OK, so, is this going to happen?” And at the end of the day, what they’re actually doing with UBI is reinventing the social safety net, but instead of elected government officials giving those payouts, it’s companies that are giving those payouts. So, it is inherently still an anti-democratic premise.
AMY GOODMAN: So, you notice I said, “Who is the AI guy?” And you said, “Sam Altman.” So, let’s talk about the lack of diversity in the artificial intelligence universe.
KAREN HAO: Yeah, yeah. I mean, it is hugely undiverse. It is mostly men. I don’t remember the exact stat currently, but back in the day, only around 15% were women. There are practically no Black researchers. They largely come from backgrounds where they have the privilege to think about these theoretical futures, like the boomers and the doomers. I mean, it is all based in theory. They don’t live in environments where they have to think about other types of existential crises, like being able to feed their kids, like being able to get their next paycheck. And so, it is an extremely insular environment. And I think this also goes to why they think AGI is possible and the world is fundamentally computational. It’s because they, as a community, are so homogeneous that they are highly predictable as a group, and their behaviors can be computed quite easily.
AMY GOODMAN: As we begin to wrap up, I’m wondering if you can talk about any model of a country, not a company, that is pioneering a way of democratically controlled artificial intelligence.
KAREN HAO: I don’t think it’s actively happening right now. The EU has had the EU AI Act, which is their major piece of legislation trying to develop a risk-based, rights-based framework for governing AI deployment.
But to me, one of the keys of democratic AI governance is also democratically developing AI, and I don’t think any country is really doing that. And what I mean by that is: AI has a supply chain. It needs data. It needs land. It needs energy. It needs water. And it also needs spaces that these companies must get access to in order to deploy their technology: schools, hospitals, government agencies. Silicon Valley has done a really good job over the last decade of making people feel that their collectively owned resources are Silicon Valley’s. You know, I talk with friends all the time who say, “We don’t have data privacy anymore. So, like, what is more data to these companies? Like, I’m fine just giving them all of my data.”
But that data is yours. You know, that intellectual property is the writers’ and artists’ intellectual property. That land is a community’s land. Those schools are the students’ and teachers’ schools. The hospitals are the doctors’ and nurses’ and patients’ hospitals. These are all sites of democratic contestation in the deployment — in the development and the deployment of AI. And just like those Chilean water activists that we talked about, who aggressively understood that that freshwater was theirs, and they were not willing to give it up unless they got some kind of mutually beneficial agreement for it, we need to have that spirit in protecting our data, our land, our water and our schools, so that companies inevitably will have to adjust their approach, because they will no longer get access to the resources they need or the spaces that they need to deploy in.
AMY GOODMAN: In 2022, Karen, you wrote a piece for MIT Technology Review headlined “A new vision of artificial intelligence for the people: In a remote rural town in New Zealand, an Indigenous couple is challenging what AI could be and who it should serve.” Who are they?
KAREN HAO: This was a wonderful story that I did, where the couple, they run Te Hiku Media. It’s a nonprofit Māori radio station in New Zealand. And the Māori people have suffered a lot of the same challenges as many Indigenous peoples around the world. The history of colonization led them to rapidly lose their language, and there are very few Māori speakers in the world anymore. And so, in the last few years, there has been an attempt to revive the language, and the New Zealand government has tried to repent by trying to encourage the revival of that language.
But this nonprofit radio station, they had all of this wonderful archival audio of their ancestors speaking the Māori language, that they wanted to provide to Māori speakers and learners around the world as an educational resource. The problem is, in order to do that, they needed to transcribe the audio so that Māori learners could actually listen, see what was being said, click on the words, understand the translation, and turn it into an active learning tool. But there were so few Māori speakers who could speak at that advanced level that they realized they had to turn to AI.
And this is a key part of my book’s argument, is I’m not critiquing all AI development. I’m specifically critiquing the scale-at-all-costs approach that Silicon Valley has taken. But there are many different kinds of beneficial AI models, including what they ended up doing.
So, they took a fundamentally different approach. First and foremost, they asked their community, “Do we want this AI tool?” Once the community said yes, then they moved to the next step of asking people to fully consent to donating data for the training of this tool. They explained to the community what this data was for, how it would be used, and how they would then guard that data and make sure that it wasn’t used for other purposes. They collected around a couple hundred hours of audio data in just a few days, because the community rallied support around this project. And only a couple hundred hours was enough to create a performant speech recognition model, which is crazy when you think about the scales of data that these Silicon Valley companies require. And that is, once again, a lesson that can be learned: there is actually plenty of research showing that, when you have highly curated small data sets, you can create very powerful AI models. And then, once they had that tool, they were able to do exactly what they wanted: to open up this educational resource to their community.
And so, my vision for AI development in the future is to have more small, task-specific AI models that are not trained on vast, polluted data sets, but small, curated data sets, and therefore only need small amounts of computational power and can be deployed in challenges that we actually need to tackle for humanity — mitigating climate change by integrating more renewable energy into the grid, improving healthcare by doing more drug discovery. I mean, these are task-specific challenges that AI could tackle if the systems were actually designed well. And —
AMY GOODMAN: How would you do that drug discovery?
KAREN HAO: If you designed — so, DeepMind, before it got caught up in this huge race to develop so-called everything machines, artificial general intelligence, they developed a system called AlphaFold, which is an AI system that was trained just on amino acid sequences and their protein structures, to then be able to predict with high accuracy, when you get an amino acid sequence, how that protein is going to fold, which helps enormously in understanding disease and in drug discovery. And it actually won the Nobel Prize in Chemistry last year. That is one of many, many applications where AI could accelerate drug discovery or improve healthcare. But that is a task-specific approach. They only trained it on amino acids and protein folding structures. They did not train it on the entire internet. So, that is an example of what can be accomplished if there is actually more thought put into what data we feed into these systems, how they are designed and how they are deployed.
AMY GOODMAN: And that MRI example that you use when it comes to detecting cancer.
KAREN HAO: Yeah.
AMY GOODMAN: How would that happen?
KAREN HAO: So, that’s also a task-specific approach that has been widely proven to work. If you take MRI scans of a specific type of breast cancer, and you feed that into your neural network software, it can then do the statistical calculations to identify with high precision even the earliest signs of that particular type of breast cancer, or any cancer you name, or Alzheimer’s. Any kind of disease that has medical imaging, if you just take a highly curated data set of those medical images, you can train an extremely high-performing AI model. And studies have shown that when you give it to a trained radiologist, they will actually be able to identify cancer or Alzheimer’s or whatever disease it is far earlier, with much higher accuracy, and therefore give patients earlier interventions.
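As a sketch of what such a task-specific model can look like in code, assuming PyTorch, here is a minimal convolutional classifier. The random tensors stand in for a curated, labeled scan data set, and the architecture is illustrative, not a validated medical model.

```python
# Minimal task-specific image classifier: a small CNN trained only on
# labeled scans for one condition, not on the entire internet.
import torch
import torch.nn as nn

model = nn.Sequential(                      # tiny CNN for 1-channel scans
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),             # two classes: disease / no disease
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in batch of 128x128 "scans" with labels.
scans = torch.randn(8, 1, 128, 128)         # placeholder for curated MRI images
labels = torch.randint(0, 2, (8,))          # placeholder radiologist labels

loss = loss_fn(model(scans), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```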
AMY GOODMAN: And where are they doing that?
KAREN HAO: In hospitals around the world. But, you know, unfortunately, there’s not enough investment going into this technology, because the public’s imagination is now captivated by the ChatGPTs of the world.
AMY GOODMAN: So, as we finally do wrap up, what were you most shocked by? You’ve been doing this journalism, this research, for years. What were you most shocked by in writing Empire of AI?
KAREN HAO: I originally thought that I was going to write a book focused on vertical harms of the AI supply chain — here’s how labor exploitation happens in the AI industry, here’s how the environmental harms are arising out of the AI industry. And at the end of my reporting, I realized that there is a horizontal harm that’s happening here. Every single community that I spoke to, whether it was artists having their intellectual property taken or Chilean water activists having their freshwater taken, they all said that when they encountered the empire, they initially felt exactly the same way: a complete loss of agency to self-determine their future. And that is when I realized the horizontal harm here is AI is threatening democracy. If the majority of the world is going to feel this loss of agency over self-determining their future, democracy cannot survive. And again, specifically Silicon Valley’s approach, scale-at-all-costs AI development.
AMY GOODMAN: But you also chronicle the resistance. You talk about how the Chilean water activists felt at first —
KAREN HAO: Yes, exactly.
AMY GOODMAN: — how the artists feel at first.
KAREN HAO: Yes.
AMY GOODMAN: So, talk about the strategies that these people have employed, and if they’ve been effective.
KAREN HAO: So, the amazing thing is that there has since been so much pushback. The artists have then said, “Wait a minute. We can sue these companies.” The Chilean water activists said, “Wait a minute. We can fight back and protect these water resources.” The Kenyan workers that I spoke to who were contracted by OpenAI, they said, “We can unionize and escalate our story to international media attention.”
And so, even these communities, which you could argue are the most vulnerable in the world and have the least amount of agency, were the ones that remembered that they do have agency and that they can seize that agency and fight back. And it was remarkably heartening to encounter those people, who reminded me that, actually, the first step to reclaiming democracy is remembering that no one can take your agency away.
AMY GOODMAN: Well, I want to thank you, Karen, for this incredible book. Karen Hao is author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, a journalist who formerly reported for the MIT Technology Review and The Wall Street Journal. She leads the Pulitzer Center’s AI Spotlight Series program for training journalists on how to cover AI. Your new name, Karen hěn Hǎo.
To see Part 1 of our discussion, you can go to democracynow.org. I’m Amy Goodman. Thanks so much for joining us.