How Do We Ensure an A.I. Future That Allows for Human Thriving?
"When OpenAI released its artificial-intelligence chatbot, ChatGPT, to the public at the end of last year, it unleashed a wave of excitement, fear, curiosity and debate that has only grown as rival competitors have accelerated their efforts and members of the public have tested out new A.I.-powered technology. Gary Marcus, an emeritus professor of psychology and neural science at New York University and an A.I. entrepreneur himself, has been one of the most prominent — and critical — voices in these discussions. More specifically, Marcus, a prolific author and writer of the Substack “The Road to A.I. We Can Trust,” as well as the host of a new podcast, “Humans vs. Machines,” has positioned himself as a gadfly to A.I. boosters. At a recent TED conference, he even called for the establishment of an international institution to help govern A.I.’s development and use. “I’m not one of these long-term riskers who think the entire planet is going to be taken over by robots,” says Marcus, who is 53. “But I am worried about what bad actors can do with these things, because there is no control over them. We’re not really grappling with what that means or what the scale could be.”
It seems as if people are easily able to articulate a whole host of serious social, political and cultural problems that are likely to arise from this technology. But it seems much less easy for people to articulate specific potential benefits on the same scale. Should that be a huge red flag? The question is: Do the benefits outweigh the costs? The intellectually honest answer is that we don’t know. Some of us would like to slow this down because we are seeing more costs every day, but I don’t think that means that there are no benefits. We know it’s useful for computer programmers. A lot of this discussion around the so-called arms race is a fear that if we don’t build GPT-5, and China builds it first, somehow something magical’s going to happen; GPT-5 is going to become an artificial general intelligence that can do anything. We may someday have a technology that revolutionizes science and technology, but I don’t think GPT-5 is the ticket for that. GPT-4 is pitched as this universal problem solver and can’t even play a decent game of chess! To scale that up in your mind to think that GPT-5 is going to go from “can’t even play chess” to “if China gets it first, the United States is going to explode” — this is fantasyland. But yeah, I’m sure GPT-5 will have some nice use cases. The biggest use case is still writing dodgy prose for search engines.
Do you think the public has been too credulous about ChatGPT? It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect. In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — it made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident, when the system falsely accused the law professor of misconduct, citing an article that was never written. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem.
So have Sam Altman or Satya Nadella been irresponsible in not speaking more clearly about the actual capabilities — or lack thereof — of their companies’ technologies? Sam has walked both sides of that fence — at times, I think, inviting the inference that these things are artificial general intelligence. The most egregious example of that in my mind is when he posted pictures and a tweet saying, “A.G.I. is gonna be wild.” That is inviting the inference that these things are artificial general intelligence, and they are not! He subsequently backed down from that. Also, around that time, he attacked me. He said, “Give me the confidence of a mediocre deep-learning skeptic.” It was clearly an attack on me. But by December, he started to, I think, realize that he was overselling the stuff. He tweeted that these things have trouble with truth. That’s what I was telling you back when you were making fun of me! So he has played both sides of it and continues to play both sides. They put out this statement about dealing with A.G.I. risk, inviting the inference that what they have has something to do with artificial general intelligence. I think it’s misleading. And Nadella is certainly not going around being particularly clear about the gap between people’s expectations and the reality of the systems.
And when Sam Altman said that ChatGPT needs to be out there being used by the public so that we can learn what the technology doesn’t do well and how it can be misused while “the stakes are low” — to you that argument didn’t hold water? Are the stakes still low if 100 million people have it and bad actors can download their own newly trained models from the dark web? We see a real risk here. Every day on Twitter I get people like: “What’s the risk? We have roads. People die on roads.” But we can’t act like the stakes are low. I mean, in other domains, people have said, “Yeah, this is scary, and we should think more about it.” Germ-line genome editing is something that people have paused on from time to time and said, “Let’s try to understand the risks.” There’s no logical requirement that we simply march forward if something is risky. There’s a lot of money involved, and that’s pushing people in a particular direction, but I don’t think we should be fatalistic and say, “Let’s let it rip and see what happens.”
What do you think the 2024 presidential election looks like in a world of A.I.-generated misinformation and deepfakes? A [expletive] show. A train wreck. You probably saw the Trump arrest photos. And The Guardian had a piece about what their policy is going to be as people make fake Guardian articles, because they know this is going to happen. People are going to make fake New York Times articles, fake CBS News videos. We had already seen hints of that, but the tools have gotten better. So we’re going to see a lot more of it — also because the cost of misinformation is going to zero.
You can imagine candidates’ dismissing factual reporting that is troublesome to them as being A.I. fakery. Yeah, if we don’t do something, the default is that by the time the election comes around in 2024, nobody’s going to believe anything, and anything they don’t want to believe they’re going to reject as being A.I.-generated. And the problems we have around civil discourse and polarization are just going to get worse.
So what do we do? We’re going to need watermarking for video. For text, it’s going to be really hard; it’s hard to make machines that can detect the difference between something generated by a person and something generated by a machine, but we should try to watermark as best we can and track provenance. That’s one. No. 2 is we’re going to have to have laws that are going to make a lot of people uncomfortable because they sound like they’re in conflict with our First Amendment — and maybe they are. But I think we’re going to have to penalize people for mass-scale harmful misinformation. I don’t think we should go after an individual who posts a silly story on Facebook that wasn’t true. But if you have troll farms and they put out a hundred million fake pieces of news in one day about vaccines — I think that should be penalizable. We don’t really have laws around that, and we need to in the way that we developed laws around spam and telemarketing. We don’t have rules on a single call, but we have rules on telemarketing at scale. We need rules on distributing misinformation at scale.
You have A.I. companies, right? The first one, Geometric Intelligence, I sold to Uber. Then the second one is called Robust.AI. It’s a robotics company. I co-founded it, but I’m not there any longer.
OK, so knowing all we know about the dangers of A.I., what for you is driving the “why” of developing it? Why do it at all, rather than lobby to shut it down?
Yeah, because the potential harms feel so profound, and all the positive applications I ever hear about basically have to do with increased efficiency. Efficiency, to me, isn’t higher on the list of things to pursue than human flourishing. So what is the “why” for you? Since I was 8 years old, I’ve been interested in questions about how the mind works and how computers work. From the pure, academic intellectual perspective, there are few questions in the world more interesting than: How does the human child manage to take in input, understand how the world works, understand a language, when it’s so hard for people who’ve spent billions of dollars working on this problem to build a machine that does the same? That’s one side of it. The other side is, I do think that artificial general intelligence has enormous upside. Imagine a human scientist but a lot faster — solving problems in molecular biology, material science, neuroscience, actually figuring out how the brain works. A.I. could help us with that. There are a lot of applications for a system that could do scientific, causal reasoning at scale, that might actually make a difference. I don’t think, however, that the technology we have right now is very good for that — systems that can’t even reliably do math problems. Those kinds of systems are not going to reinvent material science and save the climate. But I feel that we are moving into a regime where, exactly, the biggest benefit is efficiency: I don’t have to type as much; I can be more productive. These tools might give us tremendous productivity benefits but also destroy the fabric of society. If that’s the case, that’s not worth it. I feel that the last few months have been a wake-up call about how irresponsible the companies that own this stuff can be. Microsoft released its chatbot Tay, and it was so bad that they took it down after 24 hours. I thought, oh, Microsoft has learned its lesson. Now Microsoft is racing, and Nadella is saying he wants to make Google dance. That’s not how we should be thinking about a technology that could radically alter the world.
Presumably an international body governing A.I. would help guide that thinking? What we need is something global, neutral, nonprofit, with governments and companies all part of it. We need to have coordinated efforts around building rules. Like, what happens when you have chatbots that lie a lot? Is that allowed? Who’s responsible? If misinformation spreads broadly, what are we going to do about that? Who’s going to bear the costs? Do we want the companies to put money into building tools to detect the misinformation that they’re causing? What happens if these systems perpetuate bias and keep underrepresented people from getting jobs? It’s not even in the interest of the tech companies to have different policies everywhere. It is in their interest to have a coordinated and global response.
Maybe I’m overly skeptical, but look at something like the Paris climate accord: The science is clear, we know the risks, and yet countries are falling well short of meeting their goals. So why would global action on A.I. be feasible? I’m not sure this is going to fall neatly on traditional political lines. YouGov ran a poll — it’s not the most scientific poll — but 69 percent of people supported a pause in A.I. development. That makes it a bipartisan issue. I think, in the U.S., there’s a real chance to do something in a bipartisan way. And in Europe, people are already moving toward regulation. So I’m more optimistic about this than I am about a lot of things. The exact nature is totally up for grabs, but there’s a strong desire to do something. It’s like many other domains: You have to build infrastructure in order to make sure things are OK, like building codes and the UL standards for electrical wiring and appliances. People may not like the code, but they live with the code. We need to build a code here of what’s acceptable and who’s responsible. I’m moderately optimistic. On the other hand, I’m very pessimistic that if we don’t, we’re in trouble.
There’s a belief that A.I. development should be paused until we can know whether it presents undue risks. But given how new and dynamic A.I. is, how could we even know what all the undue risks are? You don’t. It’s part of the problem. I actually wrote my own pause letter, so to speak. We called for a pause not on research but on deployment. The notion was that if you’re going to deploy it on a wide scale, let’s say to 100 million people, then you should have to build a safety case for it, the way you do in medicine. OpenAI kind of did a version of this with their system card. They said, “Here are 12 risks.” But they didn’t actually make a case that the benefits of people typing faster and having fun with chatbots outweighed the risks! Sam Altman has acknowledged that there’s a risk of misinformation, massive cybercrime. OK, that’s nice that you acknowledge it. Now the next step ought to be, before you have widespread release, let’s have somebody decide: Do those risks outweigh the benefits? Or how are we even going to decide that? At the moment, the power to release something is entirely with the companies. That has to change.
This interview has been edited and condensed for clarity from two conversations.