
“Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World


The new book Empire of AI by longtime technology reporter Karen Hao unveils the accruing political and economic power of AI companies — especially Sam Altman’s OpenAI. Her reporting uncovered the exploitation of workers in Kenya, attempts to take massive amounts of freshwater from communities in Chile, along with numerous accounts of the technology’s detrimental impact on the environment. “This is an extraordinary type of AI development that is causing a lot of social, labor and environmental harms,” says Hao.

Transcript
This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman, with Juan González.

We turn now to the Empire of AI. That’s the name of a new book by the journalist Karen Hao, who has closely reported on the rise of the artificial intelligence industry, with a focus on Sam Altman’s OpenAI, the company behind ChatGPT. Karen Hao compares the actions of the AI industry to those of colonial powers in the past. She writes, quote, “The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers,” she writes.

Karen Hao’s book comes at a time when Republican congressmembers are trying to block states from regulating AI. The House recently passed Trump’s so-called big, beautiful budget bill, which contains a provision that would prohibit any state-level regulation of AI for the next decade, in a major gift to the AI industry. Republican Congresswoman Marjorie Taylor Greene has criticized the provision, even though she voted for the bill. She wrote online, quote, “Full transparency, I did not know about this section… I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there.” She says she will not vote for the bill if the provision remains in it when it comes back from the Senate.

To talk about this and much more, we’re joined by the journalist Karen Hao, whose new book, again, is just out, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. She is a former reporter at The Wall Street Journal and MIT Technology Review, and she leads the Pulitzer Center’s AI Spotlight Series, a program for training journalists on how to cover AI.

You should train everyone on how to understand AI, Karen. It’s an amazing book.

KAREN HAO: Thank you so much.

AMY GOODMAN: Why don’t you start by talking about the title, what exactly you mean? For a lay audience, for — I suspect a lot of our audience are Luddites, so you really have to explain — 

KAREN HAO: Yeah.

AMY GOODMAN: — what artificial intelligence, artificial general intelligence is.

KAREN HAO: So, AI is a collection of many different technologies, but most people were introduced to it through ChatGPT. And what I argue in the book, and what the title refers to, Empire of AI, it’s actually a critique of the specific trajectory of AI development that led us to ChatGPT and has continued since ChatGPT. And that is specifically Silicon Valley’s scale-at-all-costs approach to AI development.

AI models in modern day, they are trained on data. They need computers to train them on that data. But what Silicon Valley did, and what OpenAI did in the last few years, is they started blowing up the amount of data and the size of the computers that need to do this training. So, we are talking about the full English-language internet being fed into these models — books, scientific articles, all of the intellectual property that has been created — and also massive supercomputers that run tens of thousands, even hundreds of thousands, of computer chips that are the size of dozens, maybe hundreds, of football fields and use practically the entire energy demands of cities now. So, this is an extraordinary type of AI development that is causing a lot of social, labor and environmental harms. And that is ultimately why I evoke this analogy to empire.

JUAN GONZÁLEZ: And, Karen, could you talk some more about not only the energy requirements, but the water requirements of these huge data centers that are, in essence, the backbone of this widening industry?

KAREN HAO: Absolutely. I’ll give you two stats, on both the energy and the water. On the energy demand, McKinsey recently came out with a report that said, in the next five years, based on the current pace of AI computational infrastructure expansion, we would need to add to the global grid two to six times the amount of energy consumed annually by the state of California, and that will mostly be serviced by fossil fuels. We’re already seeing reporting of coal plants having their lives extended. They were supposed to retire, but now they cannot, to support this data center development. We are seeing methane gas turbines, unlicensed ones, popping up to service these data centers, as well.

From a freshwater perspective, these data centers need to be cooled with freshwater. They cannot be cooled with any other type of water, because it can corrode the equipment, it can lead to bacterial growth. And most of the time, it actually taps directly into a public drinking water supply, because that is the infrastructure that has been laid to deliver this clean freshwater to different businesses, to different homes. And Bloomberg recently had an analysis where they looked at the expansion of these data centers around the world, and two-thirds of them are being placed in water-scarce areas. So they’re being placed in communities that do not have access to abundant freshwater. So, it’s not just the total amount of freshwater that we need to be concerned about, but actually the distribution of this infrastructure around the world.

JUAN GONZÁLEZ: And most people are familiar with ChatGPT, the consumer aspect of AI, but what about the military aspect of AI, where, in essence, we’re finding Silicon Valley companies becoming the next generation of defense contractors?

KAREN HAO: One of the reasons why OpenAI and many other companies are turning to the defense industry is because they have spent an extraordinary amount of money in developing these technologies. They’re spending hundreds of billions to train these models. And they need to recoup those costs. And there are only so many industries and so many places that have that size of a paycheck to pay. And so, that’s why we’re seeing a cozying up to the defense industry. We’re also seeing Silicon Valley use the U.S. government in their empire-building ambitions. You could argue that the U.S. government is also trying to use Silicon Valley, vice versa, in their empire-building ambitions.

But certainly, these technologies are not — they are not designed to be used in a sensitive military context. And so, the aggressive push of these companies to try and get those defense contracts and integrate their technologies more and more into the infrastructure of the military is really alarming.

AMY GOODMAN: I wanted to go to the countries you went to, or the stories you covered, because, I mean, this is amazing, the depth of your reporting, from Kenya to Uruguay to Chile. You were talking about the use of water. And I also want to ask you about nuclear power.

KAREN HAO: Yeah.

AMY GOODMAN: But in Chile, what is happening there around these data centers and the water they would use and the resistance to that?

KAREN HAO: Yeah. So, Chile has an interesting history in that it was under a dictatorship for a very long time. And so, during that time, most public resources were privatized, including water. But because of an anomaly, there’s one community in the greater Santiago metropolitan region that actually still has access to a public freshwater resource, which services both that community and the rest of the country in emergency situations. That is the exact community that Google chose to try to put a data center in. And they proposed for their data center to use a thousand times more freshwater than that community used annually.

AMY GOODMAN: And it would be free.

KAREN HAO: And it — you know, I have no idea. That is a great question. But what the community told me was they weren’t even paying taxes for this, because they believed, based on reading the documentation, that the taxes that Google was paying was, in fact, to where they had registered their offices, their administrative offices, not where they were putting down the data center. So they were not seeing any benefit from this data center directly to that community, and they were seeing no checks placed on the freshwater that this data center would have been allowed to extract.

And so, these activists said, “Wait a minute. Absolutely not. We’re not going to allow this data center to come in, unless they give us a legitimate reason for why it benefits us.” And so, they started doing boots-on-the-ground activism, pushing back, knocking on every single one of their neighbors’ doors, handing out flyers to the community, telling them, “This company is taking our freshwater resources without giving us anything in return.”

And so, they escalated so dramatically that it escalated to Google Chile. It escalated to Google Mountain View, which, by the way, then sent representatives to Chile that only spoke English. But then, it eventually escalated to the Chilean government. And the Chilean government now has roundtables where they ask these community residents and the company representatives and representatives from the government to come together to actually discuss how to make data center development more beneficial to the community.

The activists say the fight is not over. Just because they’ve been invited to the table doesn’t mean that everything is suddenly better. They need to stay vigilant. They need to continue scrutinizing these projects. But thus far, they’ve been able to block this project for four to five years and have gained that seat at the table.

JUAN GONZÁLEZ: And how is it that these Western companies, in essence, are exploiting labor in the Global South? You go into something called data annotation firms. What are those?

KAREN HAO: Yeah, so, because AI, modern-day AI systems are trained on massive amounts of data, and they’re scraped — that’s scraped from the internet, you can’t actually pump that data directly into your AI model, because there are a lot of things within that data. It’s heavily polluted. It needs to be cleaned. It needs to be annotated. So, this is where data annotation firms come in. These are middle-man firms that hire contract labor to provide to these AI companies to do that kind of data preparation.

And OpenAI, when it was starting to think about commercializing its products and thinking about, “Let’s put text-generation machines that can spew any kind of text into the hands of millions of users,” they realized they needed to have some kind of content moderation. They needed to develop a filter that would wrap around these models and prevent these models from actually spewing racist, hateful and harmful speech to users. That would not make a very good, commercially viable product.

And so, they contracted these middle-man firms in Kenya, where the Kenyan workers had to read through reams of the worst text on the internet, as well as AI-generated text, where OpenAI was prompting its own AI models to imagine the worst text on the internet and then telling these Kenyan workers to detail — to categorize them in detailed taxonomies of “Is this sexual content? Is this violent content? How graphic is that violent content?” in order to teach its filter all the different categories of content it had to block.

And this is an incredibly common form of labor. There are lots of other different types of contract labor that they use. But these workers, they’re paid a few bucks an hour, if at all. And just like in the era of social media, these content moderators are left very deeply psychologically traumatized. And ultimately, there is no real philosophy behind why these workers are paid a couple bucks an hour and have their lives destroyed, and why AI researchers who also contribute to these models are paid million-dollar compensation packages simply because they sit in Silicon Valley, in OpenAI’s offices. That is the logic of empire, and that harkens back to my title, Empire of AI.

AMY GOODMAN: So, let’s go back to your title, Empire of AI, the subtitle, Dreams and Nightmares in Sam Altman’s OpenAI. So, tell us the story of Sam Altman and what OpenAI is all about, right through to the deal he just made in the Gulf, when President Trump, Sam Altman and Elon Musk were there.

KAREN HAO: Altman is very much a product of Silicon Valley. His career was first as a founder of a startup, and then as the president of Y Combinator, which is one of the most famous startup accelerators in Silicon Valley, and then the CEO of OpenAI. And there’s no coincidence that OpenAI ended up introducing the world to the scale-at-all-costs approach to AI development, because that is the way that Silicon Valley has operated in the entire time that Altman came up in it.

And so, he is a very strategic person. He is incredibly good at telling stories about the future and painting these sweeping visions that investors and employees want to be a part of. And so, early on at YC, he identified that AI would be one of the trends that could take off. And he was trying to build a portfolio of different investments and different initiatives to place himself in the center of various different trends, depending on which one took off. He was investing in quantum computing, he was investing in nuclear fusion, he was investing in self-driving cars, and he was developing a fundamental AI research lab. Ultimately, the AI research lab was the one that started accelerating really quickly, so he made himself the CEO of that company.

And originally, he started it as a nonprofit to try and position it as a counter to for-profit-driven incentives in Silicon Valley. But within one-and-a-half years, OpenAI’s executives identified that if they wanted to be the lead in this space, they “had to” go for this scale-at-all-costs approach — and “had to” should be in quotes. They thought that they had to do this. There are actually many other ways to develop AI and to have progress in AI that does not take this approach.

But once they decided that, they realized the bottleneck was capital. It just so happened Sam Altman is a once-in-a-generation fundraising talent. He created this new structure, nesting a for-profit arm within the nonprofit, to become this fundraising vehicle for the tens of billions, and ultimately hundreds of billions, that they needed to pursue the approach that they decided on. And that is how we ultimately get to present-day OpenAI, which is one of the most capitalistic companies in the history of Silicon Valley, continuing to raise hundreds of billions, and, Altman has joked, even trillions, to produce a technology that ultimately has a middling economic impact thus far.

JUAN GONZÁLEZ: And we only have about a minute left, but the relationship between Altman and other rivals, like Google or Elon Musk?

KAREN HAO: This rivalry is accelerating the development of this technology that, again, as I mentioned, has had this middling impact. We’ve seen, through the history of OpenAI, that all of the former executives who have left have created their own rivals. So, Elon Musk was one of the original co-founders; he created xAI. Ilya Sutskever was the chief scientist; he created Safe Superintelligence. Mira Murati was the former chief technology officer; she created Thinking Machines Lab. All of them are now competing to try and dominate this technology and try and make it in their own image.

AMY GOODMAN: And finally, this regulation within the so-called big, beautiful bill that says AI can’t be regulated by states for a decade?

KAREN HAO: This is absolutely going to enshrine the impunity of Silicon Valley into law. I mean, I call these the empires of AI because they are already quickly becoming the apex predator in the ecosystem, able to act in their own self-interest. And that bill, if passed, would certainly codify that.

AMY GOODMAN: Well, we’re going to do a Part 2, post it online at democracynow.org. Karen Hao is author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Tune in tomorrow on Democracy Now! when we speak with the British journalist Carole Cadwalladr about how to survive the broligarchy.

We’ve got three job openings: senior headline news producer, a director of audience and a director of technology, all full-time here in New York. You can learn more at democracynow.org/jobs.

Democracy Now! produced with Mike Burke, Renée Feltz, Deena Guzder, Messiah Rhodes, Nermeen Shaikh, María Taracena, Tami Woronoff, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo. I’m Amy Goodman, with Juan González, for another edition of Democracy Now!

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
