Artificial Intelligence History, How It Embeds Bias, Displaces Workers, as Congress Lags on Regulation

Web Exclusive, May 18, 2023

In Part 2 of our interview with Marc Rotenberg, executive director of the Center for AI and Digital Policy, we look at the history of artificial intelligence, concerns about how it embeds bias and discrimination, and how actors and writers say it could disrupt the entertainment industry, among others. This comes as Congress heard warnings this week from experts, including Sam Altman, CEO of OpenAI, the startup behind ChatGPT, at a hearing on the dangers of artificial intelligence — his company’s own product.

Transcript
This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman, with Nermeen Shaikh, as we bring you Part 2 of our conversation with Marc Rotenberg, executive director of the Center for AI and Digital Policy.

As more of the public becomes aware of artificial intelligence, or AI, the Senate held a hearing Tuesday on how to regulate it. California Senator Alex Padilla raised concerns about the fact that most research on AI has been conducted in English and neglected other languages.

SEN. ALEX PADILLA: My understanding is that most research in evaluating and mitigating fairness harms has been concentrated on the English language, while non-English languages have received comparatively little attention or investment, and that we’ve seen this problem before. I’ll tell you why I raise this. Social media companies, for example, have not adequately invested in content moderation tools and resources for their non-English — in non-English language. And I share this not just out of concern for non-U.S.-based users, but so many U.S.-based users prefer a language other than English in their communication. So I’m deeply concerned about repeating social media’s failure in AI tools and applications.

AMY GOODMAN: Also at the hearing, New York University professor emeritus of psychology and neuroscience Gary Marcus testified.

GARY MARCUS: One of the things that I’m most concerned about with GPT-4 is that we don’t know what it’s trained on. I guess Sam knows, but the rest of us do not. And what it is trained on has consequences for, essentially, the biases of the system. We could talk about that in technical terms, but how these systems might lead people about depends very heavily on what data is trained on them. And so, we need transparency about that. And we probably need scientists in there doing analysis in order to understand what the political influences, for example, of these systems might be. And it’s not just about politics. It can be about health. It could be about anything. These systems absorb a lot of data, and then what they say reflects that data, and they’re going to do it differently depending on what’s in that data. So it makes a difference if they’re trained on The Wall Street Journal as opposed to The New York Times or Reddit. I mean, actually, they’re largely trained on all of this stuff, but we don’t really understand the composition of that. And so we have this issue of potential manipulation. And it’s even more complex than that, because it’s subtle manipulation. People may not be aware of what’s going on.

AMY GOODMAN: So, we’re continuing with Marc Rotenberg, executive director of the Center for AI and Digital Policy. And these are all really critical points that were raised in the Senate hearing. But before you address them, everything from, for example, the fact that most of the research is done on English-language AI, give us the history of artificial intelligence.

MARC ROTENBERG: Well, it’s a great question, Amy. And I’ve been in this field now for many, many years. I remember, you know — wow, going back to the 1970s, and people were talking about whether a computer could beat a human chess player. And in the early days, there was a lot of focus on computers and chess. And that’s where I did my early work. And we used to design what we called AI systems; they were actually expert systems, big rule-based decision trees. And you could actually, with a computer, write a strong program, basically, that would evaluate, looking at a particular position, the various options, score the options, pick the best option, anticipate the best move, and go deeper and deeper into the decision tree based on your computing power. And those programs actually became quite strong. I was in Philadelphia, in fact, in 1997, when the IBM program, Deep Blue, beat the world chess champion, Garry Kasparov, which was a moment, you know? I mean, there is a computer beating a human in a very, you know, what some people would say, advanced activity.
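For readers who want to see what that kind of rule-based decision tree looks like in practice, here is a minimal, illustrative sketch in Python. It is not the code of Deep Blue or any real engine; the GameState interface (legal_moves, apply, evaluate, is_terminal) is a hypothetical stand-in, and real programs add hand-tuned evaluation terms and pruning. The point it illustrates is Rotenberg’s: every move the program chooses can be traced back through an explicit, inspectable search.

```python
# Illustrative sketch only: a minimal minimax search of the kind used in
# classic rule-based chess programs. The GameState object is hypothetical.

def minimax(state, depth, maximizing):
    """Return (score, best_move) after searching the decision tree to `depth`."""
    if depth == 0 or state.is_terminal():
        return state.evaluate(), None        # hand-written scoring rules stop the search

    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in state.legal_moves():         # enumerate the options at this position
        score, _ = minimax(state.apply(move), depth - 1, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Because every branch of the tree is explicit, a programmer can open the search, trace why a move was chosen, and patch in more expertise where the program is weak, exactly the property Rotenberg describes next.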

But this is the important point for your listeners to understand. Even at that time in the 1990s, when a computer could beat Kasparov, you could open the program. You could look at how the moves were produced. You could trace the decision tree. You’d, you know, add more grandmaster expertise on an endgame position if the computer was having difficulty with that. And it really was something within human control, as advanced as the technology was. But AI began to shift, and AI moved toward machine learning techniques. And I’m really not an expert there, but I can tell you, generally speaking, machine learning techniques involve a lot of data and a lot of processing within the program to try to assess how to find the best outcome.

So, I’ll tell you now about another chess match, which happened in 2017. And in that year, a new program based on machine learning, called AlphaZero, beat the reigning old-style computer program, called Stockfish, the one with the decision trees that you could open up and fix. And people looked at the outcome, and they were, like, shocked. AlphaZero had made, you know, brilliant moves. It had beaten the reigning computer champion, which was already better than the reigning human champion. But people could not prove the outcomes. They couldn’t figure out why the program made a particular move, because now the training data was so complex, and the procedures and techniques, you know, were so complicated.

And I think, for many people, this signaled real worry. You know, on the one hand, you have this tremendous sense of accomplishment, which I know the people at AlphaZero and DeepMind involved in that project did. And on the other hand, you walk away from that, and you ask yourself the question: What if one of these systems were used to advise the president in a global conflict or to draft, you know, legislation or to produce a medical diagnosis? Do you really want outcomes that can’t be proven, that can’t be replicated, that don’t follow all the traditional techniques of the scientific method?

And this concern, you know, looming in the background, let’s say, was recently accelerated because of ChatGPT, because now you saw the ability of advanced AI — let’s call it that, for the moment, advanced AI — to produce text, to produce speech, to produce images, to produce video, and even the people who created the systems couldn’t explain to you precisely how it happened.

And that’s why, for example, going back to the clips at the beginning, you know, Senator Padilla is absolutely right. I mean, how do we assess a system when we know already that there’s bias in the training data if it’s all English-language text? And Gary Marcus was absolutely right. I mean, he called this a perfect storm. You know, he said we have corporations racing off with new products, we have no guardrails in place, and we have an enormous risk, you know, to the public from these systems unregulated.

So, I’m using the chess example to help people understand there’s a way to do AI where you maintain control, promote innovation, have spectacular outcomes, and there’s a way to do AI that’s, you know, frankly, a bit scary.

NERMEEN SHAIKH: And what are the particular concerns? I mean, first of all, Marc, if you could explain artificial general intelligence and what the specific concerns are about that? I mean, is this AlphaZero — was that an example of artificial general intelligence? If you could just explain?

MARC ROTENBERG: Right. So, generally speaking, we have in the past thought about AI as helping to solve a particular problem, like when a radiologist is trying to interpret an X-ray. We now have AI-based techniques, based on a lot of training data, that do an extremely good job, you know, identifying cancer risks through examination of the digital image produced by an X-ray. And we would think of that, much like chess, as a particular application of AI.

The interesting question now is whether we can develop AI systems that can move across multiple domains and provide insight without training in any specific field. And, in fact, the model provided by AlphaZero is an example of this, because, you see, that program, although it was very good at chess, wasn’t really trained on the expertise of grandmasters. It wasn’t an old-style expert system. It was a system given only the rules of the game, which said, “How is this game played? OK, now I’ll go off, play myself 10 million times,” and it became extremely strong.
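To make the contrast concrete, here is a minimal, illustrative sketch of the self-play loop Rotenberg describes. The Game and Policy interfaces are hypothetical stand-ins; AlphaZero itself pairs a neural network with Monte Carlo tree search, which this sketch does not attempt to reproduce. What it shows is the shape of the approach: the system is given only the rules, improves by playing itself, and the resulting policy cannot be opened up and traced the way a hand-built rule tree can.

```python
# Illustrative sketch only: a generic self-play training loop.
# `game` and `policy` are hypothetical objects, not any real library's API.

def self_play_training(game, policy, num_games=10_000):
    """Improve `policy` purely by having it play `game` against itself."""
    for _ in range(num_games):
        state, history = game.initial_state(), []
        while not game.is_over(state):
            move = policy.choose_move(state)   # learned behavior, not hand-coded rules
            history.append((state, move))
            state = game.apply(state, move)
        outcome = game.result(state)           # win / loss / draw signal
        policy.update(history, outcome)        # adjust the policy from its own experience
    return policy
```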

We see this also with generative AI, which is, you know, trained on the text of the internet and trained on the text of trademark applications and trained on the text of great books. When you have that much information, you know, you can ask the question, “Who are the top five Renaissance painters?” and get a surprisingly good answer, and then ask the question, you know, “What are the best safety procedures if a house catches on fire?” and get another remarkably good answer. That’s what, you know, general AI begins to feel like, because it’s not tied to a specific application that someone has predetermined.

NERMEEN SHAIKH: Marc, let’s go back to Tuesday’s hearing and discuss what some of the ways are in which this technology could be regulated. Senate Judiciary Subcommittee Chair Richard Blumenthal warned also of the danger of repeating the mistakes of Section 230, two-three-zero, which protects internet companies from liability for illegal content posted by users and lets them remove legal but objectionable posts.

SEN. RICHARD BLUMENTHAL: When AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes — for example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all. Garbage in, garbage out — the principle still applies. We ought to beware of the garbage, whether it’s going into these platforms or coming out of them.

NERMEEN SHAIKH: Marc Rotenberg, if you could respond to what Senator Blumenthal said?

MARC ROTENBERG: Right. Well, I was very pleased that he said that. It was actually a view that was, you know, expressed across the aisle by members of both parties. The reference here is to a provision from a 1996 law that granted broad immunity to internet companies for the information that they published, and, frankly, profited from through the advertising model that developed over the years. I was involved in 1996 in the drafting of the legislation. At the time, it felt appropriate. The internet was still in its very early days, and there weren’t big tech companies that dominated, as is the case today.

But I think it became apparent to many of us — I know — over time that Section 230 needed to be reformed, if not repealed. One of the unfortunate consequences of that immunity provision was to allow the internet companies to gather a lot of the readership and revenue of traditional news organizations. So, paradoxically, in protecting the internet firms, it actually disadvantaged the news organizations that were trying to make their work available online. And I was very, very troubled by that.

But again, the good news is Senator Blumenthal and others have basically said, with regard to the AI industry, they’re not going to make that mistake again. And it’s actually very easy to see how the exact same problem could emerge, because these AI models, particularly the generative AI models, basically collect as much data from others as they can, and it’ll be very difficult, I think, to maintain independent businesses and organizations without a better approach to liability. So, that was a positive signal, and there was, I think, bipartisan support.

AMY GOODMAN: Let me ask you, Marc Rotenberg, about the WGA strike that’s going on and how this relates to AI. You have, for example — well, it’s entered its third week — the actress and computer scientist Justine Bateman posting a tweet thread detailing how artificial intelligence could disrupt the entertainment industry and talking about what actors can do to protect themselves. But talk about what the WGA’s demands of the studios are and why artificial intelligence is so central and threatening to jobs.

MARC ROTENBERG: Right. I have a little understanding of the WGA’s provisions — I’m sorry, position in the current negotiation. I did happen to see one of the negotiating points, and it concerned whether, in fact, generative AI techniques could be used, you know, to write scripts, for example. And I think the union, quite rightly, wants to draw a strong line there and say, “No, we don’t want our work to be replaced by machines.”

And we’re seeing it already in the news industry, actually. One of the news organizations — I forget the name, but they, you know, announced staff layoffs, because they see efficiency and economy, and then, frankly, more profitability, if they can use, let’s say, you know, one supervisor and 10 generative AI programs, as opposed to an editor and 10 writers. IBM, in fact, has recently announced that it’s going to be replacing a lot of its employees with programs it believes can take over those responsibilities.

Now, of course, there’s long been an interesting debate in the field about the impact of technology on employment. And there is a view that, you know, although some jobs are lost, new jobs will be created, and oftentimes they can be better jobs. I’m not an expert on that topic, but I will say this. The change that is taking place right now is happening so quickly, I really wonder if even those who believe that new jobs will come along eventually actually think that’s going to happen quickly enough. And the reading that I’ve done from many of the leading AI proponents — this includes, for example, Kai-Fu Lee in China and Eric Schmidt here in the U.S., both of whom want to keep government out of the way and want AI to be broadly deployed — is that they are anticipating high levels of unemployment. And many are beginning to talk about, you know, a growing need for universal basic income to assist people who are going to lose their jobs because of AI. So, to me, this moment actually looks a bit more pressing than the traditional debate over technology and employment. And I think it’s right for the Writers Guild to be raising these issues now. I suspect many others will be, as well.

AMY GOODMAN: And, you know, of course, there is a lot of alphabet soup here, from AI to WGA, which is, as you just said, Writers Guild of America. Nermeen?

NERMEEN SHAIKH: So, Marc, if you could explain also — I mean, there are multiple risks that people have pointed out about the way in which artificial intelligence could potentially compromise the democratic process, not just here with elections, but really all over the world. If you could explain how that could happen, and where we’ve already witnessed it?

MARC ROTENBERG: Right. So, this is another area where AI is moving very quickly. And I don’t think we have the adequate expertise or rules to respond to consequences, and particularly here in the United States, where, regrettably, you know, there’s growing distrust of institutions, growing distrust of news reporting, really an inability to agree on basic facts. You can amplify and accelerate those trends through AI techniques.

And let me briefly explain how that works. We might think, for example, the traditional concern is, you know, propaganda or false statements that are published to confuse people, to mislead people. There’s nothing new about that type of communication — disinformation, we could also call it. With AI techniques and with personal data, there is now the ability to target, profile, engage and groom individuals in a way that is highly efficient, highly persuasive and very difficult, actually, to anticipate or to assess. This is one of the issues, actually, that Gary Marcus raised with the committee this week. I mean, we have interactions with people, you know, over the phone, let’s say, or over Zoom. We can judge who we’re talking to and why they’re saying what they’re saying, and we have a certain innate ability to evaluate human behavior that helps us at least sort out truth and falsehood. With generative AI, it’s going to become increasingly easy and increasingly cheap to replicate human voice, human words, human images and video to persuade people.

So, Senator Blumenthal, in that clip you provided at the beginning, was actually quite gentle in his example, because he asked the program to simply say words that he was likely to have said, based on his prior statements. It actually sounded like him. But let’s imagine a different scenario, maybe from Senator Blumenthal’s opponent in the next race for Senate in Connecticut, who uses that exact same technology and has Senator Blumenthal say, in his voice, some truly outrageous things, right? The kinds of things that you say — that you hear, and you go, “Oh my god! Did he, like, really say that?” And then, of course, you know, Blumenthal will be put in this position of saying, “Well, that was not me. You know, that was a computer program generated by my opponent. I would never say such things.” And the people who now have these two conflicting statements are like, “Yeah, but, you know, it sounded a lot like you. Right?”

And that scenario, I think many people in the AI world are really concerned about. There’s an article, I think, just this week in The Atlantic by a philosopher at Tufts who talks about the creation of counterfeit people. Generative AI makes it possible to create, you know, counterfeit people. And we are going to have to develop the critical skills, as we maneuver through this digital environment, to be able to assess, you know, what’s true and what’s not.

AMY GOODMAN: You know, you can think of all sorts of things. For example, Donald Trump running for president — not that he cares about facts, but, you know, there’s tape of him decades ago saying how he is a complete believer in women’s right to choose an abortion. He said, “You know, I come from New York. So that is how I feel.” He can just say that’s deep fake technology. You have these latest Giuliani tapes that are coming out of a lawsuit against him, where the woman who’s accusing him of sexual assault recorded a number of things. He can just say, “This is a deep fake.” You have deep fake technology powered by artificial intelligence already interfering with democratic elections in Turkey. A video purporting to be a leaked sex tape of a presidential candidate was released on the internet ahead of last Sunday’s first round of balloting. He dropped out of the race, saying it was a deep fake: fake video, fake pictures. Your response?

MARC ROTENBERG: Well, Amy, your point is actually very important, because I was describing, of course, how AI techniques can be used to persuade people with false information, but also the presence of AI techniques can make it difficult to establish what is true, because others can then say, “Oh, well, you know, that video you created,” which is true, by the way, “was generated by AI, and therefore is something that people can dismiss.” So, when we get into an era when it becomes increasingly easy both to manufacture and persuade with synthetic materials generated by AI, as well as to diminish what is true by calling into question what we see in the digital world, it’s really a threat to public reason and to democratic institutions.

And this also — you know, I’ll just say I did a review not too long ago of a book by Eric Schmidt, Henry Kissinger and Daniel Huttenlocher at MIT, called The Age of AI. And they seemed generally upbeat about the possibility that AI would lead to better decision-making and support democratic institutions. And I read that book, and I looked at their examples, and I came away with, you know, almost the opposite conclusion. You know, they said this will take us beyond the age of reason, and I said this will take us back to an age of faith, where we will have to simply rely on the output of AI systems, and lose our ability to reason, debate and discuss within the context of democracy.

But without, you know, getting too down here, I again want to share with you and your viewers and your listeners, the Senate hearing this week was very good. It was really a milestone and a positive development in the United States. As you said at the beginning of this segment, the U.S. has lagged behind other countries. We actually produce an annual report called the “Artificial Intelligence and Democratic Values Index,” and we rate and rank countries based on their AI policies and practices and their alignment with democratic values. And we have been genuinely concerned about the United States over the last several years — too many secret meetings, too many meetings with tech CEOs, not enough progress on legislation. But I think we’re turning a corner, and I think the Senate hearing this week, which, as I said at the beginning, actually responded to some of the concerns my colleague Merve Hickok had raised at the House Oversight hearing in early March, is really good news.

NERMEEN SHAIKH: Marc Rotenberg, could you also explain what the concerns are that have been expressed about the use of artificial intelligence, facial recognition technology, biometrics in policing and immigration?

MARC ROTENBERG: Right. So, that’s a very important topic. And that actually takes us into one of the critical debates about AI policy, which is whether governments will have the ability to actually draw red lines, to create prohibitions on certain AI techniques that violate fundamental human rights. There is a tendency in the technology world, when a problem is identified, to say, “Oh, you know, you’re right there. Let’s see if we can fix that. And then we’ll go ahead with the system, now that we have addressed the problem.” I think what NGOs and human rights organizations are saying is that there are actually certain applications of AI that should not be allowed.

So, for example, the use of AI for facial surveillance. And I use that term, “facial surveillance,” basically, to distinguish it from the kind of face recognition we do when we open our iPhone, for example, which is an authentication technique within your control. That’s not really a problem. But when that recognition is not in your control but in the government’s control, then it’s a form of mass surveillance, because now not only do you have cameras on streets, but you have cameras on streets that have the ability to identify people, and not only identify people, but link their image and identity to a database that profiles them, and maybe even ranks them based on their allegiance to the government. So, of course, you know, that system now is widely deployed in China. It’s being considered in other countries. That type of facial surveillance, mass surveillance, we believe, should simply be prohibited. We believe biometric categorization, trying to sort people into groups based on biometric factors, should also be prohibited. The same goes for social scoring, which is, again, the effort to align a person’s identity with their support for the state.

And the good news here, if I may say a couple more words, is that, working with the international organization UNESCO on its Recommendation on the Ethics of Artificial Intelligence, which was adopted in 2021, we now have, at least in principle, 193 countries that have said AI should not be used for mass surveillance, and AI should not be used for social scoring. And so, when Senator Blumenthal this week at the beginning of the hearing said, “We need to establish transparency and accountability and limits on use,” you know, I was thinking, “Wow! That is a great moment,” because for him to say “limits on use” is actually responding precisely to the question you raised and the points that the NGOs and human rights groups have made: We actually need some prohibitions. The tech industry may say, “Well, we can be fair. We can be open. We can be responsible.” Those words are not good enough. We need some bright lines.

AMY GOODMAN: You know, when we were at Sundance a few years ago, there was this amazing documentary called Coded Bias, an exploration of MIT Media Lab researcher Joy Buolamwini’s discovery — she is Black — of racial bias in facial recognition by artificial intelligence, and how it’s used against people who are Black, people of color. Can you expand on that, Marc?

MARC ROTENBERG: Yeah, that’s actually a great film, and I’m a big fan of Joy. She launched an organization called the Algorithmic Justice League, which was really one of the first groups, through her work in particular, to call attention to the problem of embedded bias in AI.

And the remarkable fact she uncovered while she was doing her research at MIT was that, as a Black woman, these systems did not work so well in terms of identifying her by facial characteristics, but when she put a white mask on her face, then the recognition system began to work. And it was a very stark reminder, I think, about the type of bias that is built into many of these systems. And that bias, by the way, can come from many different sources. You know, part of it’s about the data. Part of it’s about the people who code the systems. Part of it’s about the structure and purpose of the companies that deploy the systems. It actually exists at many different levels.

And, you know, her movie, Coded Bias, is actually something that we show to the participants in our AI Policy Clinic, because we want people to understand what that problem looks like. And as I said earlier on, you know, while I share the concerns of those who are concerned about long-term risk, I think we need to deal right now with the way in which AI systems embed bias. And Joy and the Algorithmic Justice League have just done a tremendous job drawing attention to that problem.

NERMEEN SHAIKH: And, Marc, could you just elaborate on that — how AI already embeds bias, as you were saying, and as Joy, whom we interviewed at Sundance, pointed out. Could you explain exactly how that happens already? In what contexts is that technology used?

MARC ROTENBERG: Right. Well, it’s a big and complex field. And truly, I hope that you will have others on your program to talk about some of these issues. I’ve mentioned the AI Now Institute. I’ve mentioned the Distributed AI Research Institute. There are many people out there who have studied this topic more than I have.

But I will say, as a lawyer, one scenario, which I understand quite well, arises in the context of the criminal justice system and sentencing. And, you know, I mean, judges are overwhelmed, and they’re dealing with a lot of cases. And they get to a sentencing decision, and, you know, the prosecutor may say, “For this young white male, we recommend a six-month sentence,” and, an hour later, say, “For this young Black male, we recommend a nine-month sentence.” And the judge, you know, hears these two recommendations and asks, frankly, the obvious question, you know: What is the basis for the disparity in the sentencing recommendation? And the answer across the United States is coming back, increasingly, “Well, these recommendations come from a very sophisticated program that’s looked at a lot of data, that’s analyzed the two people in your court today, Your Honor. And based on some proprietary information, as well, which we actually can’t disclose, but with a lot of good reporting and statistical assessment, we can assure the court that if the aim is to reduce recidivism,” which is the likelihood of recommitting a criminal offense, “you know, the white offender gets the six-month sentence, and the Black offender gets the nine-month sentence.”

And, you know, what you see immediately in the scenario here that is of widespread concern is that, you know, maybe the nine-month sentence is justified, maybe the six-month sentence is justified, but we actually don’t have the ability to understand how those results were produced, or to contest those results. Perhaps they were wrong. Maybe there was an error even in the entry of the person’s name, or their age or their address or other factors that contributed to the scoring. And, you see, we’re back in the situation with the chess program in 2017, where we see an outcome, and we don’t know how it was produced.
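As a rough illustration of the opacity Rotenberg is describing, here is a hypothetical sketch, not any real sentencing tool. The class, feature names and weights are invented for illustration only; the point is that the court receives a single number with no account of which inputs drove it or whether those inputs were even entered correctly.

```python
# Hypothetical sketch: a stand-in for a proprietary risk-scoring tool.
# Nothing here models any actual product; it only illustrates why an
# opaque score is hard to contest in court.

class ProprietaryRiskModel:
    def __init__(self, weights):
        self._weights = weights   # undisclosed: treated as a trade secret

    def score(self, defendant):
        # The court sees only this number, not the factors behind it.
        return sum(self._weights.get(k, 0.0) * v for k, v in defendant.items())


model = ProprietaryRiskModel({"age": -0.02, "prior_arrests": 0.30, "zip_code_factor": 0.50})
defendant = {"age": 24, "prior_arrests": 1, "zip_code_factor": 0.8}   # invented values
print(f"Risk score: {model.score(defendant):.2f}")   # no explanation attached
```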

And I’ve actually spoken with federal judges about this problem. And they say, “Well, there’s a lot of talk about human-in-the-loop. And so, of course, we’re not going to simply accept the AI recommendation. You know, there’s going to be a person who’s going to make a final decision.” But the judges also admit that these systems are complex. They’re, you know, supported by experts. It’s difficult for the judge to contest the outcome. And, in fact, they become like a rubber stamp.

So, if I can share another thought with your listeners and viewers, I’ve really moved away from this concept of human-in-the-loop, although it’s very popular in the AI world. I think we actually need to say instead, “AI-in-the-loop, humans in charge.” And that means, coming back to this criminal sentencing example, if the judge is not satisfied that she can understand and justify the outcome, I think we just take the AI away. I don’t think we have a situation where we presume that the AI-generated output is going to be the correct output. And, of course, this example that I’m providing, in a field I know a little bit about, is happening with decisions regarding housing and credit, employment, immigration, hiring. It’s widely replicated across the country.

AMY GOODMAN: Marc, we don’t have much time, but we just wanted to end by asking about your organization, the Center for AI and Digital Policy.

MARC ROTENBERG: Well, thank you, Amy. And thank you again for the opportunity to be with you this morning.

We launched this organization because we thought there was an urgent need to align AI techniques with democratic values and to train the next generation of AI policy leaders. We’ve been teaching courses now for several years and provided certifications, I think, to almost 500 people in 60 countries. They have learned about the democratic policy frameworks, the UNESCO framework and some of the others. We hope that through the organization, we will be able to produce people who are well educated to engage in these debates and to put in place good rules. We’re not against technology, but we do think technology must always remain within human control. We support rule of law, democratic values and fundamental rights. That’s really our mission.

AMY GOODMAN: Well, Marc Rotenberg, we want to thank you so much for being with us, executive director of the Center for AI and Digital Policy. To see Part 1 of our discussion, go to democracynow.org. I’m Amy Goodman, with Nermeen Shaikh. Thanks so much for joining us.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
