This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Well, Casey, I’m having a little mystery this week —
We love a mystery on the show.
— because I was traveling last week, and I was at the Newark, New Jersey Airport. And I realized as I was leaving the airport that I had left my Apple Watch at the airport.
So at some point, you just look down at your wrist and the Apple Watch is gone.
Yes.
OK.
And then, of course, I went to the Find My app to try to locate my Apple Watch, and it had moved. So in the span of about an hour, I went from the Newark Airport to a vape shop in Kearny, New Jersey.
That’s not good.
No.
That means it probably was not accidental that this thing just disappeared off your wrist.
Yeah, maybe not. And I put it into lost mode, too. Have you ever done lost mode?
You put your life in lost mode.
Yes, your life is one big lost mode. But this is a thing that you can do where if someone picks up your Apple Watch, it’ll say, here’s my phone number. Please call me. Yeah. So the thief, whoever it is, has my number on the watch.
Have you thought about just changing the message to something really mean? Just be like, nice vapes you have there. What are you, 13? Cool vapes, bro.
Yeah, now give me back my watch.
Say, I’m watching you vape. That’s what I would change it to.
(MUSIC PLAYING)
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, Meta goes to trial in an antitrust case. Here’s what we’ve learned from the testimony so far. Then AI Snake Oil author Arvind Narayanan joins us to make his case that AI has been massively overhyped. And finally, it’s time for another round of HatGPT.
(MUSIC PLAYING)
Casey, we have a very exciting announcement to start the show this week.
Yes we do. Kevin, tell the people.
We are doing a live show. “Hard Fork” Live is coming to San Francisco on Tuesday, June 24.
Kevin, almost since the start of doing this podcast, we have been clamoring to get out there, to get listeners in a room with us and do what we do live, but maybe with some fun twists and turns, and we are finally ready to share that with the world.
Yes, this is going to be a very fun night at SFJAZZ. We have all kinds of surprises in store, including some very special guests. So come on out and hang out with us. Spend the night with us.
Yes, and here’s what I will say. If you want to have your bachelorette party at “Hard Fork” Live, we’ll take pictures with you. Think about it. Now, listen, I know that you’re already sold. So how do you get tickets? Well, it’s simple, gang. Go to nytimes.com/events/hardforklive and you can find the tickets right there. Or you can just click the link in the show notes. Yes, there will be “Hard Fork” merch at “Hard Fork” Live.
There may be dancing. There may be pyrotechnics.
Here’s the point. If you don’t go, you’re not going to know what happened.
Exactly.
And I think you’re going to want to know.
Yeah, it’s going to be great.
June 24, SFJAZZ. Tickets just went on sale. Snap them up because they’re not going to last forever.
Yes, buy your tickets today at nytimes.com/events/hardforklive.
Now, that’s enough of that, Kevin. Let’s get to the news.
Yeah, so the big story this week that we want to start with is that Meta is finally going on trial. This is an antitrust case that was brought by the Federal Trade Commission years ago, and this week the actual trial started in US District Court in Washington, D.C. This is a big case. It is one of the largest antitrust cases brought against a major tech company over the last decade. And it has potentially big consequences.
One of the remedies that the government is trying to push for here is that Meta would be forced to divest Instagram and WhatsApp, basically undoing those acquisitions that Facebook made many years ago.
Yeah, and while that might not be a super likely outcome here, Kevin, I do think it speaks to the existential stakes of this trial for Meta. They absolutely have to win, otherwise it will look like a completely different company.
Yes, so it’s a little too soon to try to handicap how the case is going. It’s only been a couple of days. Mark Zuckerberg and other Meta executives are still testifying. Basically, we’ve just gotten through opening statements and a little bit of testimony. But Casey, can we just review the history of this case and how we got to this point?
Yeah, so this is a case that was filed all the way back in December 2020 during the first Trump administration. The charge was that Meta had acted anticompetitively in building a monopoly in this space that the FTC calls personal social networking, and that one of the ways that they had illegally maintained it was by snapping up all their competitors and preventing them from growing into big independent companies, most famously, of course, with Instagram, which it bought in 2012, and with WhatsApp, which it bought two years later. So that was the original charge. And the judge, Kevin, throws it out.
And what’s the reason for throwing it out?
Well, the FTC had alleged that Facebook had a monopoly in this market. But when the judge read the complaint, he was like, you didn’t really offer much evidence for that. There was a stray statistic here or there. But he felt like the FTC had been lazy in bringing the case. So he said, I’m throwing this out. If you want to refile it, you can, but I’m not letting this thing go forward until then.
And then they refiled it.
They did after President Biden took office and the FTC got some new leadership. So Lina Khan became the new head of the agency. And the administration did decide to continue the case. And they went through and they added in a bunch of new stats and facts and figures trying to illustrate this idea that there really is something called a personal social networking market. And that Facebook, which would soon change its name to Meta, had a monopoly over it.
So this is clearly a case that spans the partisan gap. But can you just remind us: what is the basic crux of the argument here? What is the government’s case that Meta has built and maintained an illegal monopoly by acquiring Instagram and WhatsApp?
Well, the crux of the case is that according to the government, there is a market for something called personal social networking, which consists of apps that are primarily intended to help you keep up with friends and family. And besides Facebook, there are only four other apps in this market, Kevin.
What are those apps?
Those apps are Instagram, WhatsApp, Snapchat, and MeWe.
What is MeWe?
MeWe is an app that most people have absolutely never heard of. It is a little kind of Facebook alternative. Basically, it is not particularly popular. MeWe might say that that is because Meta has been acting anticompetitively and prevented it from growing, and I suspect Meta would say that MeWe has not been growing because it is just not that great of an app.
Yeah, it’s pee.
It’s a bit of a pee app, is MeWe.
Yeah, so I understand why market definitions are important in cases like these, essentially because my understanding is that in order to argue that someone has a monopoly over a market, you first have to define what the market is. And if you’re the FTC and you’re trying to make the case that Meta has an illegal monopoly, it can’t just be everything, every internet service, because clearly they would not have a monopoly over that. But in this one specific area, which we are defining as personal social networking, you really don’t have much competition.
Yes, that is the argument that they are making. And frankly, Kevin, I just don’t think it is that strong of an argument. And so as the first couple days of this trial have unfolded, it has been interesting to see the government trying to sketch out that case, but in my opinion, struggling to do so.
Well, let’s get into the analysis in a little bit. But first, let’s just talk about what has been happening with the trial so far. So my colleagues at “The Times,” Cecilia Kang, Mike Isaac, and David McCabe have been covering this, including actually going to the trial. And it seems like so far both sides are just laying out their opening arguments.
The FTC is trying to make the case that this is an illegal monopoly. They have all these emails and communications from various Meta executives going back many years, talking about the competitors that they are trying to neutralize by either acquiring them or copying their features and trying to use that to make the case that this is a company that has had a very clear anti-competitive strategy for many years. What has Meta been saying in their defense?
Meta has been saying, essentially, that the market that the government is suggesting they have a monopoly over is fake and has been invented solely for the purposes of this trial. On the stand, Mark Zuckerberg has been explaining how Meta’s family of products has evolved to include new features as the market has evolved. And really, Kevin, this is just a story about TikTok. And it speaks to why the fact that the federal government is generally so slow to bring antitrust actions winds up hurting it in a case like this.
Because the thing is, I would argue from roughly 2016 to 2021 or so, Meta does have a monopoly over what we think of as social networks. Snapchat gets neutralized. Twitter gets neutralized. YouTube is playing a different game. When it comes to sending messages to friends, Meta really does have the market locked up. But then along comes TikTok, this app out of China, which has a completely different view of what a social network could be. And the most important thing they decide is we don’t actually care who your friends and family are.
We’re just going to show you the coolest stuff we can find on our network. We’re going to personalize it to you, and you can enjoy it. And you don’t even have to follow anything if you don’t want to. We’re just going to show it. And this winds up transforming the industry. And Meta and everyone else has been chasing it ever since. And so if you’re the federal government, this is a problem because you’re trying to solve an essentially like 2016 era problem in 2025 when the world looks very different.
Yeah, I mean, the way I sometimes think about it is that Meta had a monopoly, but that they failed to maintain the monopoly. And the way that they failed to maintain the monopoly was that they didn’t buy TikTok when it was a much smaller but fast growing and popular app. And actually, it’s stranger than that because a big way that TikTok grew was by buying a bunch of ads on Facebook and Facebook apps. They essentially used Facebook to bootstrap a new social network that ended up becoming one of Facebook’s biggest competitors.
So I think if you take the long view here, it’s not that Meta has this long standing monopoly that it still maintains today. It’s like they had one and they let it slip away by failing to recognize the threat that TikTok posed.
And Zuckerberg has talked about this a bit on the stand this week. And what he’s essentially said is, look, TikTok just looked very different than what we were used to competing against because it was not really about your friends and family. And so we did miss it for that reason. But if you fast forward to 2025 and you open up TikTok, what is TikTok trying to get you to do? Add all of your friends. Send them messages.
All of these apps eventually wind up turning into versions of each other. But again, it’s creating this problem for the government. Because how do you successfully make the argument that Meta has still maintained this monopoly? Or how are you somehow able to convince a judge that Meta should be penalized for the monopoly that it had back when it did have one?
Yeah, I wanted to ask you about one thing that has come up so far in the trial. Every time there’s a big antitrust lawsuit between a tech company and the government, we get all these emails and these internal deliberations between executives talking about various strategy things. And I always find that to be the most interesting and revelatory piece of any of these trials.
Absolutely.
How these people talk to each other. What kinds of things they’re worried about. How they’re planning years in the future. And one of the things that came up in this trial already is that Mark Zuckerberg at one point argued that they might actually want to split Instagram off into a separate company. This was back in 2018. He considered spinning off Instagram from the core Facebook app, basically reasoning that the government might try to force us to do this anyway. But he also worried about something called the strategy tax of continuing to own Instagram. Can you explain what he meant by that?
Yeah, so this was really fascinating to me as well. In 2018, Instagram was really bedeviling Mark Zuckerberg. Instagram had been bought six years prior. It was still run by its founders, Kevin Systrom and Mike Krieger, and it had been afforded a level of independence that was really unusual for most acquisitions at big tech companies. But Zuckerberg starts looking at the numbers, and he just becomes convinced that Instagram’s growth and its cultural relevance is coming at the expense of Facebook. That Instagram seems younger, hipper, sexier. Facebook is starting to feel older and more fuddy-duddy.
And so he starts figuring out these ways of, say, if you share your Instagram photo to Facebook, getting rid of the little thing that says “shared from Instagram” on it. So some of this has been reported before. Sarah Frier wrote a great book about this called No Filter, but this was truly new. We did not know until this week that in 2018, Zuckerberg almost just pulled the ripcord and said, let’s get rid of this thing.
Yeah, and his argument was interesting. It wasn’t just that he thought that Instagram was cannibalizing Facebook’s popularity. It’s that he appeared to think that it would be more valuable as an independent company. He talks in these exchanges about how sometimes the companies that are spun off of big giants tend to be more valuable after the spin offs, and how spinning off Instagram might actually enable it to become more valuable. So the government is obviously trying to use this to say, look, this is such a good idea, breaking up Instagram and Facebook, that even Mark Zuckerberg thought it was a good idea back in 2018. Now I assume he would say something different today.
Yeah, and this gets at another antitrust argument that has been made over the past decade or so that has less grounding in legal precedent, but is still favored by some, including Lina Khan. And the basic idea is just that there is such a thing as a company that is too big. And one role that antitrust law can play is taking things that are very big and making them a little bit smaller. And a main reason to do that is exactly what you just said, that the pieces that you break up will be more valuable in the long run than if you clump everything together.
And by the way, I think you can make a good argument that Instagram would be one of those things. We have seen reporting that Instagram now makes up around half of Meta’s overall revenue. So first of all, you can imagine how devastating it would be to Meta if they just lost that in one fell swoop. But on the other hand, it does absolutely show that this network could survive and thrive on its own. And it’ll be interesting to see if the government makes that case.
So one other historical tidbit that has come out so far in this trial that I thought was fascinating and wanted to talk about with you was this exchange between Mark Zuckerberg and some other Facebook executives back in 2022, where Mark Zuckerberg pitched the idea of deleting everyone’s Facebook friends and basically having everyone start over with a fresh graph, a fresh slate of Facebook friends. What was this idea? Did it ever get close to fruition? And why is it coming up in the context of the antitrust trial?
So this was an idea that was floated in 2022, according to some reporting that Alex Heath did for The Verge. And the idea was that people weren’t using Facebook as much as they used to. I would imagine, particularly in the United States. And so Mark Zuckerberg floats the idea, why don’t we just delete everyone’s friends list and make them start over?
And the idea was that this would make it feel cooler and more interesting if you weren’t just hearing from a bunch of people you added 12 years ago and never talked to again.
Well, that’s the thing, is that I think one reason why Facebook started to feel stale was that you had built this network of people that was just everyone you’d ever made eye contact with. And so it was a pretty boring thing to browse because you didn’t actually care about a lot of the people you were seeing there. So what if you just had to start over and say, actually, I only care about these 20 people.
Yeah, I actually think this is a great idea.
Me too.
But why didn’t it happen?
Well, so there’s a moment in a Business Insider story about this where the head of Facebook, Tom Alison, apparently replied that he wasn’t sure the idea was “viable given my understanding of how vital the friend use case is,” which is a man trying to say as delicately as possible to his boss, one of the world’s richest men, you are out of your freaking mind.
(LAUGHS)
Tom Alison is like, my understanding of Facebook is that it’s important that your friends list exists because that’s actually the entire point of Facebook. But obviously, you’re the boss. But I just want to point that out.
Yes.
And so the idea doesn’t get followed up on.
It reminds me of those passages in books about Elon Musk where he just goes into the Tesla factory and he’s like, what if it went underwater like a submarine? And all the engineers have to be like, aye, sir, that’s not possible, given the laws of physics.
Exactly. There are these famous stories about how, if you were an Amazon employee, you would open up your email and there would be some terrible story from a customer, forwarded to you by Jeff Bezos with just a single question mark.
Yes.
All of a sudden, that’s all you do for the next month: figuring this out. So this is one of those stories. But what’s so funny, Kevin, is while on one hand, we can agree this would probably be disruptive to Facebook’s business, it does seem like a great idea that they should absolutely do.
Totally.
Yeah.
All right. So that is some of the spicy stuff that has come up at the trial so far. What are you going to be looking for as this trial goes forward to figure out how it’s going to go?
Well, number one, your colleague Mike Isaac reported this week that in one of the emails, the former chief operating officer of Facebook, Sheryl Sandberg, asked Mark Zuckerberg how to play “Settlers of Catan.” I need to know if she ever learned how and if she got any good at it.
That seems a little beside the point.
Well, that’s thing one. Thing two, though, Kevin, is, can the government actually make its case? Look, I never like to be on the side of sounding like I’m carrying water for a $1 trillion corporation. But I also believe in governments making good, solid arguments based on the facts. And again, while I think there is a great case that Meta acted super anticompetitively over the past decade (that’s just settled fact, as far as I’m concerned), I think it’s a lot harder to say they have a monopoly in a world where everyone is chasing after TikTok at a million miles an hour.
So the government has to somehow convince us that, no, no, no, TikTok is a very different thing, and that were it not for Meta’s continued anticompetitive behavior, MeWe would have a billion users. Otherwise, I don’t know how it is going to be able to prove its case.
Yeah, I mean, one thing that I’m looking at is the political dimension here because we know that Mark Zuckerberg has spent the last several months furiously sucking up to Donald Trump and people in the Trump administration trying to cast himself as a great ally and friend to the administration. And we also now know that part of the reason that he was doing that is to try to make this antitrust case go away.
And in fact, late last month, according to some reporting that came out recently, Mark Zuckerberg actually called the FTC and offered to settle this case for $450 million, which was a small fraction of the $30 billion the FTC was asking for. And that he, according to this report, sounded confident that the Trump administration would back him up. And that was one reason that he was willing to make this lowball offer.
Yeah, this is as close as you could have come if you’re Mark Zuckerberg to just giving the FTC the finger. $450 million in the context of this trial is nothing. And he knew it was nothing. And this was, I believe, him signaling to them, your case sucks, and I’m about to win.
Yeah, and there’s some really interesting backroom politics happening here that I don’t pretend to know the ins and outs of but maybe you know more about them. Essentially, my impression from the outside is that all of this flattery and kissing up to the Trump administration was actually seeming to work.
Yes.
The administration’s posture toward Meta was softening somewhat. And then when some hardcore MAGA folks got wind of this, they stepped in, and according to some reports, at least had some conversations with the president that resulted in him stiffening his spine a bit toward Meta. So explain what’s going on here.
Sure. Well, so there was a report in Semafor by Ben Smith, former “Hard Fork” guest, that Andrew Ferguson, who is now the chair of the FTC, and Gail Slater, who is an Assistant Attorney General in charge of antitrust enforcement at the Justice Department, went to meet with the president to try to say, hey, you have to please let this case go forward. And they were apparently successful in that. Prior to that, though, Kevin, as you note, Meta had transferred at least $26 million to the president: $1 million for the inauguration and $25 million to settle a lawsuit over the fact that they suspended him after January 6. And that seemed to be working.
The president and JD Vance started to criticize all of the European fines and fees that were being levied against Facebook for various infractions. It was really starting to become a plank of his trade war that, hey, we’re not going to let you fine our companies anymore, no matter what they did. So all of this was music to Mark Zuckerberg’s ears, and I suspect one reason why he might have thought, I bet I can get this antitrust case thrown out.
Yeah, and it turns out he couldn’t. And now he’s on trial. And he’s having to testify and go through all this evidence. And my feeling on this is, like, I think the FTC’s case is somewhat weak here for all the reasons that you laid out. But I think that it’s good that Meta has to have its day in court, that it can’t just buy its way out of this antitrust action, and that it will actually have to prove that it did not have an illegal monopoly, or at least cast some reasonable doubt on that.
Absolutely. And I think that no matter what happens in this case, Kevin, it has actually had a really positive outcome for the market for consumer apps in general.
What do you mean?
So look at the past five years or so, and look at some of the apps that have come to prominence since then. Look at a Telegram. Look at a Substack. Look at a Clubhouse, even, back during the heyday of that app. In the world of 2016, I’m very confident that Facebook would have been trying to buy all of those apps. But they couldn’t anymore. They just knew that all of those would be a nonstarter. And so what has happened is we have started to see other apps flourish.
There actually is oxygen in the market now. Companies can come in and compete and know that they’re not about to immediately get swept off the chessboard via a huge offer from a Meta or a Google or one of the other giants. Now, this has some problems for those companies. If you raise billions of dollars in venture capital, eventually those investors want to see a return. But if you’re just a consumer who wants to see competition in the market, and not have every app you use on your phone owned by one of four companies, you’ve actually had a pretty good go of it over the past few years.
Yeah, I think that’s a good point. Aside from the merits or lack of merits of this particular case, what it makes me realize is that bringing antitrust enforcement actions against the big tech companies is just so challenging, because the underlying marketplace and ecosystem change so rapidly. So the facts from 2016 or 2017 might not actually hold up by the time your antitrust case gets to trial years later; things may just look very different.
And as you’ve laid out, I think this is a big problem for the FTC here: the market for social networks, or even what Meta is, is very different now than it was even a couple of years ago. So do you think this has any implications for tech regulation writ large?
Well, on one hand, yes, I do think it means that when the FTC wants to bring an antitrust action, it needs to do it much more quickly than it did here. But on the other hand, Kevin, every case is different. I was thinking this week as I was writing, are there any implications here for, let’s say, the Google antitrust cases that have gone on? And as I stopped and reflected, I thought, you know what, I still think that Google does actually have a monopoly in search and search advertising. And I think the government is right to go in there and try to break that up. So every big company is different. But in this one particular case of Meta, Kevin, I do think that the tech world has moved on in a way that the FTC has not.
Yeah, it’s almost like the thing that made the case irrelevant, or at least not as urgent as it might have felt a few years ago, is not that Meta changed its ways. It’s just that social media as a category became much less delineated and much less relevant.
And let me just say, I continue to be surprised at how little a stir it made a couple of years ago, when Meta announced that they were basically going to be moving on from the friends and family model, that all of a sudden your feed was going to have a bunch of recommendations from creators and celebrities and stuff that you had never chosen to follow, but that an algorithm thinks that you might like. Meta — Facebook really did leave friends and family behind in a big way, at least as a priority. And the world just shrugged because the world had already moved on to TikTok.
Totally. Well, more to say there, but we will keep watching this trial. Casey, are you planning to show up at the courthouse?
I’m hoping to get called as a witness.
I wouldn’t call you to the stand. You’re too unpredictable.
I’m going to put the whole system on trial.
When we come back, a skeptical look at AI progress from Princeton’s Arvind Narayanan.
(MUSIC PLAYING)
Well, Casey, last week on this show, we had a conversation with Daniel Kokotajlo from the AI Futures Project about his new manifesto, AI 2027. And we got a lot of feedback on it.
Yeah, in fact, one of my friends messaged me and said that was a real bummer.
Yes, and so, much to our surprise, there was a new manifesto on the block this week. This one was much more skeptical of the fast takeoff scenario that Daniel and his co-authors suggested. It was written by two computer scientists at Princeton, and it is called “AI as Normal Technology.”
Yeah, and this really arrived at the right time for us, I think, Kevin, because for weeks, if not months now, listeners have been writing in saying, hey, we love hearing you guys talk about AI, but we would really appreciate a slightly more skeptical take on all of this, somebody who has not bought all the way into the idea that society is going to be completely transformed by 2028. And so when you and I read this piece from Arvind and Sayash, we thought, this might be the thing that our listeners have been looking for.
Yes, so this piece was written by Arvind Narayanan, who’s a professor of computer science at Princeton, and his co-author, Sayash Kapoor. And in this piece, Arvind and Sayash really lay out what they call an alternative vision of AI, basically one that treats AI not as some looming superintelligence that’s going to go rogue and take over for humanity, but as a type of technology like any other, like electricity, like the internet, like the PC, things that take a period of years or even decades to fully diffuse throughout society.
Yeah, they go through step by step. What are the conditions inside organizations that prevent technology from spreading at a faster pace? Why is that same dynamic likely to unfold here? And what does it mean that AI might not arrive in a superintelligent form for decades instead of just a few months?
Totally. And this is a very different view than we hear from the big AI labs in Silicon Valley. A lot of the people we’ve talked to on this show believe in something more like a fast takeoff, where you do start to get these recursively self-improving AI agents that can just build better and better AI systems. And Arvind and Sayash really say, hold on a minute, that’s not how any of this works.
And, Kevin, something that I really appreciate about this work is that Arvind and Sayash are not the skeptics who say that AI is all hype, that it isn’t powerful, that you can’t do cool things with it today. They also don’t think that its capabilities are going to stop improving anytime soon. These are not people who are in the “deep learning is hitting a wall” camp. They think it’s going to get more powerful. They just think that the implications of that are much different than what has been suggested. So to me it seems like a much smarter, more nuanced kind of AI skepticism than the sort that I sometimes see online.
Yeah, so to make his case that AI is a normal technology and not some crazy superintelligence in the making, let’s bring in Arvind Narayanan.
(MUSIC PLAYING)
Arvind Narayanan, welcome to “Hard Fork.”
Thank you. It’s great to be here. Great to chat with you after so many years of reading your writing.
Well, let’s start with the central thesis of this new piece that you and Sayash Kapoor wrote together. There’s a lot in it. It’s very long, and we’ll take some time to unpack some of the different claims that you make. But one of the core arguments is that AI progress, or the fast takeoff scenario that some folks, including former guests of this show, have envisioned, is not going to happen, because it’s going to be bottlenecked by this slower process of diffusion.
Basically, even if the labs are out there inventing these AI models that can do all kinds of amazing and useful things, people and institutions are slower to change. And so we won’t really see much dramatic transformation in the coming years. But to me, AI diffusion actually seems very fast by historical standards. ChatGPT is not even three years old. It has something like 500 million users.
Something like 40 percent of US adults use generative AI, which didn’t really exist even a few years ago. That just seems much faster to me than the proliferation of earlier technologies that you’ve written about. So how do you square the growing popularity and widespread usage of these apps, with the claim that it just is going to take a long time for this stuff to diffuse throughout society?
So I’m going to make a crazy-sounding claim, but hear me out. Our view is that it actually doesn’t seem like technology adoption is getting faster. So we’re well aware of that claim about 40 percent of US adults using generative AI. We discussed that paper in our essay. And I have no qualms with their methodology or numbers or whatever, but the way it’s been interpreted, it’s only looking at the number of users without looking at the distinction between someone who is heavily using it and relying on it for work, versus someone who used ChatGPT once a week to generate a limerick or something like that.
So the paper, to the authors’ credit, does get into this notion of intensity of use. And when they look at that, it’s something on the order of 1 hour per workweek, and it translates to a fraction of a percentage point in productivity. And that is actually not faster than PC adoption, for instance, going back 40 years.
I know you are not specifically responding with this piece to any other piece that’s come out, but I just can’t help thinking about the conversation that we had last week with Daniel Kokotajlo of the AI Futures Project, who has just spent the past year putting together this scenario of what he thinks the world will look like over the next few years. And part of his thesis is that we’ll start to have these autonomous coding agents that will automate the work of AI research and development, and will essentially speed up the iteration loop for creating more and more powerful AI systems.
I’m curious what you make of that thesis and where you think it breaks down. Is it that you don’t think that the systems will ever become that good and capable of that kind of recursive self-improvement? Or is it that you think that will happen, but it just won’t matter much because coding is only one type of job and one type of occupation, and we’ve got all these other ones. Where is the hole in that scenario?
Yeah. A lot of part two of our paper is devoted to this issue. And there is an interesting linguistic choice, I think, that you made. You alternately refer to them as highly capable and highly powerful AI systems. For us, those two are not equivalent. They are actually very different from each other. We don’t dispute — we completely agree with Dan that improvement in AI capabilities is already rapid and could be further accelerated with the use of AI itself for AI development.
For us, that does not mean that these AI systems will become more powerful. Power is not just a property of the AI system itself. It’s a property both of the AI system and the environment in which it is deployed. And that environment is something that we control. And we think we can choose to make it so that we’re not rapidly handing over increasing amounts of control and autonomy to these AI systems, and therefore not make them more powerful.
Now, there is an obvious counterargument that these things will make you so much more efficient that people will have no choice but to do so. We disagree. We have a lot of analysis in the paper for why, in most cases, it’s actually just not going to make business sense to deploy AI systems in an uncontrolled fashion, compared to the benefits that it will bring.
It might be worth pausing a bit and saying a bit more of the argument you make for why that is. What are some of the natural brakes that you see in organizations that prevent technology from spreading faster than it does today?
Sure. This is where we think we can learn a lot from past technologies. When we look at the history of automobiles, for instance, for the first several decades, vehicle safety was not even considered a responsibility of manufacturers. It was entirely on the user. And then there was a mindset shift. And once safety began to be seen as a responsibility of manufacturers, it no longer made business sense for them to develop cars with very poor safety engineering because whenever those cars caused accidents, there would be a negative PR consequence for the car company.
So this mindset shift realigned incentives so that safety becomes part of what the manufacturer is competing on. And that kind of mindset shift, I think, is important for AI, and that is something we can credit the AI safety movement with. Safety is now so indelibly associated with AI in most people’s minds. So that’s a good thing. That’s the first thing. Second, once you’re in a situation where the negative safety consequences are easily attributable to the party who is responsible for them, you can have regulation that sets a standard. That’s going to be much more feasible than a scenario where something bad happens and there’s no way to attribute it to the responsible party. So those are things we should be working on: how to make responsibility clearer, who is responsible for what. But those are things we can do. And if we get those things right, I don’t think it’s the case that companies will be forced to deploy AI in an unsupervised, uncontrolled manner.
Yeah, I mean, I see your point there, and I think you make a really instructive example in your paper about the difference between Waymo and Cruise, two self-driving car companies, one of which has a very strong safety record, Waymo, the other of which had a high profile incident in San Francisco and was forced to pull its robotaxis out of the city and essentially shut down, which was Cruise.
And you extend that to the logic of AI more generally, where you say that the companies that have the safe products will outcompete the companies with the unsafe products in the market. And I would love to believe that there is a self-correcting mechanism in the market that will filter out all the unsafe products. But what I observe is that sometimes, as the technology is getting more capable, the safety standards are actually moving in the other direction.
So just a few days ago, the FT reported that OpenAI is now giving its safety testers less time and fewer resources than it used to before releasing their models, in part because the pressure to get these models out the door and stay ahead of their competition has become so intense. So I guess given the safety standards that we’re seeing now in the industry, what makes you confident that the market will take care of these safety concerns before these unsafe products are put into people’s hands?
I’m not confident, and that’s not something we say in the paper. We don’t use the term self-correcting. We don’t think markets will self-correct. It will take constant work, I think, from society. And obviously, journalism plays a big role here. And regulators, we don’t try to minimize the role for regulation either. And yes, what we’re seeing with some of the safety windows decreasing is a problem. Definitely with you on that. But I think that is something that we have the agency to change.
The fact that safety testing happens at all before models are released. That is something very different with AI than with past technologies. That is something we accomplished together, the AI safety community and everyone else who had a stake in this to make this the expected practice for companies. And yes, it’s true that recently things have been trending in a negative direction. It’s important to change that.
But one last thing I want to say is that while I agree with Kevin’s concerns, it’s not maybe quite as concerning to me as some people would see it, because for us, a lot of the safety concerns come from the deployment phase as opposed to the development phase. And so while, yes, there is a big responsibility for model developers, a lot of the responsibility has to be shared by deployers so that we have defense in depth. We have multiple parties who are responsible for ensuring good safety outcomes. And I think right now the balance of attention is too much on the developers, too little on the deployers. And I think that should change.
Let’s talk about another aspect of safety, which is the idea of alignment. This idea in AI development that we should build systems that adhere to human values. And that if we don’t do that, there is some potential that eventually, they will go rogue and wreak havoc. You are very skeptical about the current approach to model alignment. Why is that?
Sure. So here’s the causal chain: AI systems will become more and more capable. And recall that gap between capability and power. As a result of being more and more capable, they will become more and more powerful. And that distinction has been elided in a lot of the alignment literature. And once you have these super powerful systems, we have to ensure that they are aligned with human values. Otherwise, they’re going to be in control of whole economies or critical infrastructure or whatever. And if they’re not aligned, they can go rogue and they can have catastrophic consequences for humanity.
Our point is that if you even get to the stage where alignment becomes super important, you’ve already lost. So in a sense, we want a stricter safety standard than a lot of the alignment folks do. We don’t think one should get to the super-powerful stage, and if you get to that stage, then tinkering with these technical aspects of AI systems is a fool’s errand. It’s just not going to work.
Where we need to put the brakes is between those increases in capabilities and saying, oh, AI is doing better than humans now, we don’t need human supervision, we’re going to put AI in charge of all these things. And that is something where we do think we can exercise agency. Again, that’s a prediction. We can’t be 100 percent confident of that. We outline in detail in the paper why we think we can do that. But certainly, it remains to be seen.
I just want to make sure I understand the claim, because right now the leading AI labs are all trying to give their models more agency, more autonomy, to allow them to do longer sequences of tasks without requiring a human to intervene. Their goal, for many of them, is to build these fully autonomous, drop-in remote workers that you could hire at your company and tell to go do something, and then come back a week or a month later and it’s done. Are you saying that is technologically impossible or implausible, or are you just saying that it’s a bad idea, and we should stop these companies from giving their models more autonomy without human intervention?
So it’s a bit of both. We’re not saying it’s technologically impossible, but we think the timelines are going to be much, much longer than the AI developers are claiming. To be clear, I agree with you, Kevin. You wrote recently, in your “Feeling the AGI” piece, that within perhaps a couple of years, AI companies are going to start declaring that they have built AGI. However, we don’t think what they’re going to choose to call AGI, based on their pronouncements so far, is the kind of AI that will actually be able to replace human workers across the whole spectrum of tasks in a meaningful way.
So first of all, our claim is that it’s going to take a long time. It’s going to take a feedback loop of learning from experience in real-world contexts to get to actual drop-in replacements for human workers, if you will. But our second claim is that even if and when that is achieved, for companies to put that out there with no supervision would be a very bad idea. We do think there are market incentives against that. But there also needs to be regulation.
One example of something that we suggest is, for instance, the idea of AI owning wealth. That is one way in which AI could accumulate more power and control. Those are all avenues where we have simple interventions, simply banning AI from owning wealth, for instance, that will ensure that humans are forced to be in the loop, forced to be in control at critical stages of the deployment of AI systems.
Yeah, we simply must not give ChatGPT an allowance. I will not hear of it in this house. Now, you brought up, Arvind, Kevin’s recent piece in which he argued that AGI is imminent. How wrong did you think that piece was?
I mean, so first of all, I agreed with him that companies are going to declare AGI. And I also agree with Kevin that some people, at least, are not paying as much attention to this as they maybe should. With all that said, there’s also an information asymmetry from the other side. A lot of the time, when AI developers claim that AI can replace this or that job, they’re doing so with a very narrow conception of what that job actually involves. And the domain experts in that job have a much better idea.
So a lot of the time, ignoring AI, I do think, is rational. So there is a gap in both directions. And what I wish for is better mutual understanding: better understanding from the public of where AI capabilities currently are, but also better understanding from AI developers of the real knowledge that everyday people have of their various different contexts, through which they can learn what the actual limitations of AI systems are.
Yeah, one of the themes that you get at in the paper is that this focus on catastrophic risk that the AI safety community often emphasizes risks taking the focus off of nearer-term risks. And those risks, as you describe them, include the entrenchment of bias and discrimination, massive job loss, increasing inequality, concentration of power, democratic backsliding, mass surveillance. And I guess I’m just really struck that even you, as somebody who has really been leading the charge saying that a lot of this AI stuff is overhyped, are also saying, my god, look at these terrible risks that are baked into the potential of these systems.
And just one small clarification. We do think that some of the interventions targeted against superintelligence risks could actually worsen these other kinds of risks that we care more about. We’re not making a distraction argument. That’s a very specific argument that I’ve made in the past on Twitter, but not in any of my more formal writing. I don’t make that argument anymore. We can certainly worry about multiple kinds of risks, but our real concern is that if we’re so worried about superintelligence that we decide that the answer to it is a world authoritarian government, then that is going to worsen all of these other risks.
And yes. So look, I mean, the second sentence of the paper is that “normal technology” isn’t meant to underplay this. Even electricity and the internet are normal technologies in our conception. And when we look at the past history of general-purpose technologies, there have always been periods of societal destabilization as a result of their deployment, which, even at decades long, is rapid in our view, and it’s hard for societies to adjust.
Most famously, the Industrial Revolution led to a mass migration of workers to cities where they lived in crowded tenements. Worker safety was horrendous. There was so much child labor. And it was as a result of those horrors that the modern labor movement came about. And so eventually, the Industrial Revolution lifted living standards for everybody, but not in the first few decades. All of that is very plausible with AI. We’re not necessarily saying that would happen, but I do think those are the kinds of things we should be thinking about and trying to forestall.
Let me move to an area where I think I really do disagree with you, but I want to see if I can understand your argument a little bit better.
Sure.
So you write about the idea of arms races in this piece. And one of the things you say is that there is no straightforward reason to expect arms races between countries over AI. You also say in the piece that you want to exclude from the discussion anything about the military. But I didn’t understand this, because to me, the military is typically the source of the arms race. And as these systems gain more capabilities, it is going to lead the United States and its adversaries to try to build systems faster, more capably, possibly less safely, in hopes of getting one over on their adversary. So help me understand what you are arguing about arms races and why you’re leaving the military out of it.
Yeah, so to be clear, when we say we exclude the military, we’re just straightforwardly admitting that could be an Achilles heel of the whole framework. And it is something we’re researching. We don’t think that military arms races are likely, but it’s not yet something we understand well enough to confidently put into the paper. We’re going to be exploring that in follow-up. So that’s what we mean by excluding military AI. But even outside the military, there are lots of arms races that have been proposed.
So for instance, one way in which this has been envisioned is in, let’s say, our court system or any other important application of decision-making. Maybe countries will find that it’s just so much more efficient and effective, let’s say, to put AI in charge of making all decisions about criminal justice. So that’s a kind of metaphorical arms race you can imagine, where there is a push to develop more and more powerful AI systems with less and less oversight. So it is that particular concern we’re responding to, and our point of view there is, first of all, this is not the kind of thing at which you can perform at a superhuman level.
The limitations are inherent and not related to computational capabilities. And even if it is somewhat more efficient, you can save on paying for the judiciary, for instance, I think the consequences for civil liberties, et cetera, are going to be domestically felt, and therefore the local citizens will rise up and protest against those kinds of irresponsible AI deployments. So it doesn’t matter if it gives you an advantage against another country in some abstract sense. It is not something that people will accept or should accept.
Yeah, I find that such an interesting and provocative argument, because it really flies against some of my priors here, which are that there are actually powerful market forces and demand forces pushing people toward less restricted models. I observe that in some of the most recent models they’ve released, OpenAI has relaxed some of the rules around what you can generate, like images of public figures. They’re reportedly exploring erotic role play that you can use their models for. There does seem to be this market force, at least in the US, that is pushing people toward these less restricted uses of AI models.
Or how about, Kevin, when you went to the Paris AI Action Summit, which had its roots in the idea of building safer AI. But you got there and it was basically a trade show. And it was the French government saying, hey, don’t count France out of the AI race. Here we come. So to me, I feel like we’re already seeing this competitive dynamic play out.
Yeah, but I want to see what that looks like from your perspective, Arvind, because I think my perspective is that we are already in something of an arms race. We have these export controls. We have people at the highest levels of government in both the United States and China, saying that this is like a definitive conflict of the next few decades. So what in your mind lowers the stakes here, or lowers the temperatures or takes us out of the category of an arms race?
So this is one of the other big ideas in the paper. We’re borrowing this from the political scientist Jeffrey Ding, and we’re adapting it a little bit. And his big idea is that geopolitical advantage comes less from the innovation in technology and more from the diffusion of that technology throughout the economy, the public sector, throughout productive sectors. And he says that America’s advantage over China right now is not primarily because of our capacity to out-innovate. Innovations travel between countries very easily. And we’ve seen that over and over.
And in any case, the innovation advantage in AI is like a few months at best. The real advantage is in diffusion, diffusion again being the bottleneck step that takes several decades. And that’s where the advantage really is. So we agree, and we talk about the implications of that. But we’re saying that the same thing also applies to risks. The risks, for us, are keyed not to the development of capabilities, but to the decision to put those models into consequential decision-making systems.
And while it is true that there is an arms race going on in development, we’re not seeing an arms race in the consequential deployment of these AI systems. And just to make that point very concrete, Kevin, I’m going to play devil’s advocate a little bit here. To be clear, I do think it’s bad if model developers skip safety testing altogether. But indulge me in this thought experiment. Let’s say model developers start releasing models with absolutely no safeguards whatsoever. So what? Let’s talk about it.
Yeah, what do you think happens next, Kevin?
I mean, the classic answer from the perspective of an AI safety person would be that a very bad person or group gets their hands on a model that is unrestricted and uses it to, say, create a novel pathogen or a bioweapon.
They can do that. Today, we have state-of-the-art models that have been released with open weights. And sure, they might have some safeguards, but those are actually trivial to disable. So that capability exists today. I think if that were the thing that’s going to lead to catastrophe, we would all be dead already. And that’s a risk that existed even before AI. A lot of the ways in which AI can help create pathogens are based on information that’s also available on the internet.
So this is something we should have been acting on all along, and we have been finding other ways to decrease that risk. One could argue that maybe those steps are not enough, but it is hard for me to see this as an AI problem, as opposed to just an existing civilizational risk.
I find that a bit glib. It makes me think of the debate over deepfakes and synthetic media. It was always true, I shouldn’t say always, but for the past, I don’t know, 30 years it’s been true, that you could take a photo of me and manipulate it into some nonconsensual nude image of myself. The capability exists. And until recently, it hasn’t even been illegal to do that in most places. But now you can do it instantaneously.
And so part of the danger of AI is not, does the capability exist; it is, how easy does it make the bad thing? And my guess is that what Kevin is worried about is that a future open-weights model is going to make it much easier for somebody to make a novel pathogen than the current state of the art, where you have to use the Google search engine, which famously has been getting worse for some time now.
So yeah, let me, if I may, quickly respond to that. I completely agree with you on the deepfakes concern, and I want to come back to that, especially nudification. Every time we’re asked about what we actually worry about with AI, that’s absolutely at the top of our list. And I think the way in which policymakers have been so slow to react to that has been shameful. I’ll come back to that in a second.
But where I disagree with you is with that as a model for how we should think about biorisk. Because with the nudification apps, friction matters a lot. If you decrease the friction, if you make it a little bit easier to access these apps, these high school kids are going to use them. And there has been an epidemic of hundreds of thousands of these kids using them. And it’s a real problem. It’s a huge problem. Biorisk is not like that. It’s not something that a bored teenager does. That’s something where someone’s trying to destroy the world or whatever.
For that kind of adversary, the friction is irrelevant. If they’re actually so keen on getting to that outcome, and it takes three extra clicks to get to something, that’s not going to stop them. So these are, for us, two very different kinds of risks. For deepfakes, we do think we should be putting more frictions in place: simple things like disallowing these apps on the App Store, or not allowing social media companies to profit off of advertisements for these apps. These are all things that it boggles my mind we have not done yet.
One of the things that I think was so useful about the scenario that Daniel Kokotajlo and his coworkers sketched out in AI 2027 is that it just made it very vivid and visceral for people to try to imagine what the near future could look like, if they’re right. Now, obviously, people will have many disagreements or quibbles with specific things that they project, but it was at least a scenario. I’m wondering if you could paint a picture for us of what the world of AI as normal technology will look like a few years from now.
Obviously, you don’t think that AI capabilities have hit a wall, so we will continue to get some new AI capabilities. Those capabilities will not be diffused throughout society. But what does the world look like in 2027 to you?
I mean, for me it’s a longer time scale. Maybe I’ll talk about the world 10 or 20 years from now. The world in 2027, for us, is still pretty much the world we’re in today. The capabilities will have increased a little bit, and the work hours of people using AI are going to have increased from, I don’t know, three hours per week to five hours per week or something like that. I might be off with the numbers, but I think qualitatively the world is not going to be different.
But a decade or two from now, I do think qualitatively the world will be different. And this is still a work in progress in our minds; we’ll expand on it in the book version of this paper. But one of the things we do say is that the nature of cognitive jobs is going to change dramatically. And we draw an analogy to the Industrial Revolution. Before the Industrial Revolution, most jobs were manual, and eventually most manual jobs got automated. In fact, back then, a lot of what we do wouldn’t even have seemed like work.
Work meant physical labor. That was the definition of work. So the definition of work fundamentally changed at one point in time. We do think the definition of work is going to fundamentally change again. We do think there will come a point where, just in terms of capabilities, not power, AI systems will be capable of doing, or at least mediating, a lot of the cognitive work that we do today. And because we think it’s so important that we don’t hand over power to these AI systems, and because we think people and companies will recognize that, a lot of what it means to do a job will be supervising those AI systems.
It takes a surprising amount of effort, I think, to communicate what we want out of a particular task or project to, let’s say, a human contractor, and we think the same thing is going to happen with AI. So a lot of what’s involved in jobs is just specifying the task. And a lot of what is going to be involved is monitoring the AI and ensuring that it’s not running amok. So that’s one kind of prediction that we make. That’s, of course, far from a complete description, but I think it’s already radical enough. So I’ll stop there.
Yeah, I just think that a casual observer of your work, in AI Snake Oil and in this new piece about how AI is normal technology, could come away with the impression that they don’t have to think about AI because it’s overhyped, it can’t actually do anything, and it’s not going to arrive in your life anytime soon. And I know that is not what you’re saying, because I have read your papers. But I think that is a view many people have about AI right now: that it’s just being hyped up by the circus masters of Silicon Valley, that it’s all smoke and mirrors, and that if you really dig an inch below the official announcements, what you find is that it’s all fake, and people don’t have to worry about it.
And I just wonder how it feels to be an AI skeptic in a landscape like that. Because I worry that these stories we’re telling people, about AI being overhyped and not all that powerful, are actually lulling them into a false sense of security. I was recently reading an article that Scientific American published in 1940 called “Don’t Worry — It Can’t Happen,” which was all about how the leading physicists and scientists of the day had looked into this question of whether you could do nuclear fission, split the atom, make an atomic bomb, and had basically concluded that this was impossible, and literally told readers that they should not be losing any sleep over this possibility.
Did Gary Marcus write that?
I don’t think he was born yet. But this was the scientific consensus among people outside the Manhattan Project and similar efforts: that this was just impossible. And as a result, I think people were afraid and surprised when it emerged that we actually did have atomic bombs in the making. And I worry about something similar happening with AI today, where we’re just telling people over and over again, you don’t have to think about this, you don’t have to worry about it, it’s not of immediate importance to you.
And I think if it does show up in people’s lives in a way that is shocking or unpleasant, I just worry that they’re going to be more surprised than they need to be. Do you worry at all about that?
So there are so many things to unpack there. I think I’ve been surprised by how often people have opinions about my work having read only the title, not even the subtitle, of my book. And the subtitle of AI Snake Oil is “what AI can do, what it can’t, and how to tell the difference.” The point is that not all AI is useless. The amount of hate mail I’ve gotten from people who don’t recognize this is interesting, but I guess that’s what the internet does.
And I think part of it is that there are these narratives: there’s the utopia narrative, there’s the dystopia narrative, and there’s the it’s-all-hype, nothing-to-see-here narrative. It’s so tempting to box people into one of those narratives, and we don’t fit into any of them, neither I nor my co-author, Sayash Kapoor. And it’s interesting being on social media and seeing the amount of audience capture. When I write something skeptical of AI, it gets 10 times or 100 times more engagement than when I write something pointing out improvements in AI capabilities, for instance.
So there’s a strong pull. And look, I’m doing what I can, which is to not succumb to that audience capture and to not give people only the half of the story they want to hear. But it’s just a structural problem with our information environment, for which I can’t, as an individual, be responsible, I think. At the same time, let me also say that I think part of the blame here has to lie with the hype, because these products are being hyped up so much that when people believe the hype for a bit, try things out, and find that it’s not what it’s been hyped up to be, it’s just very tempting to flip all the way to the other side of things.
So yeah, I mean, I wish for a more productive discourse, but I think that’s a shared responsibility for all of us, including the companies who are really setting the direction of the discourse.
All right, Arvind, thank you so much.
Thank you, Arvind.
Thank you for being a good sport.
Thank you. This has been great. Appreciate it.
When we come back: hold on to your hat. It’s time for hat GPT.
(MUSIC PLAYING)
Well, Casey, it’s time to pass the hat. Let’s get the hat.
(MUSIC PLAYING)
We are playing hat GPT today. That is, of course, our game where we pick tech headlines out of a hat and we discuss them, riff on them, analyze them, and then one of us says, stop generating.
Or sometimes we say it in unison. All right, Kevin, would you like to draw the first slip from the hat, or do you just want to pick up the one off the ground that I accidentally dropped?
I’ll pick up the ground one.
OK, great.
This is more like ground GPT. Well, with AI systems, it’s very important to ground them.
That’s true.
OK.
That’s an AI joke.
First item from the hat/ground: “Mark Zuckerberg, Elon Musk mocked in hacked crosswalk recordings in Silicon Valley.” So this one came to us from the San Francisco Chronicle. It also came to us via a listener to this show, Hannah Henderson, who wrote in about something that had happened down on the Peninsula, the south part of the Bay Area, where apparently crosswalk signals on several streets were hacked and the messages were replaced with messages mocking Meta CEO Mark Zuckerberg and Elon Musk.
Videos circulated on social media capturing the satirical messages, which were broadcast when pedestrians pressed the crosswalk buttons at intersections. Now, Casey, did you hear these?
I haven’t, Kevin, but I believe we have a clip so that we can hear them right now.
Yes, play the clip.
- archived recording 1
What?
- archived recording (mark zuckerberg)
Hi, this is Mark Zuckerberg, but real ones call me the Zuck. It’s normal to feel uncomfortable or even violated as we forcefully insert AI into every facet of your conscious experience. And I just want to assure you, you don’t need to worry, because there’s absolutely nothing you can do to stop it. Anyway, see y’all.
Wow, Kevin, I’ve heard of jaywalking, but jay-mocking? But we really shouldn’t joke about this. I heard that after he learned of this, Elon Musk had DOGE shut down the Federal Department of Crosswalks. So there could be a really bad outcome here.
Yes, so I assume this was just some flaw in the security features of these systems and that it will be quickly repaired. But I do think it points to a new potential source of revenue that I’ve been curious about for years, which is that we should just have way more sponsorship of stuff, like how you can sponsor a highway: you clean it up and pay some money, and you get your company’s name on the little sign next to the highway. I think we should allow that for everything: telephone poles, crosswalks, every piece of public infrastructure you should be able to sponsor. And if you hit the button to cross the street, it should say, I’m Jack Black. Go see the “Minecraft Movie.” Now cross the street.
I’m trying to think if you’ve had a worse idea than this, but I’m coming up empty. Stop generating.
OK.
All right, this next story, Kevin: Cuomo announces new housing plan with a hint of ChatGPT. The local New York news site Hell Gate first reported that the former New York governor and current New York City mayoral candidate, Andrew Cuomo, had released a 29-page housing plan that featured several nonsensical statements and a ChatGPT-generated link to a news article, raising questions about whether the campaign had used ChatGPT to write its housing plan.
Yes, I love this story because it is not only a case of AI usage run amok in the government, but it is also a case of people being caught out by the UTM code. Because Casey, do you know what a UTM code is?
A UTM code is a piece of text that you can append to the end of a URL, and it will often tell you, for example, what site referred you to the page you’re visiting.
Exactly. So my understanding of how this all went down is that Cuomo put up his housing plan and reporters started going through it. And one of the reporters at Hell Gate noticed that one of the links in the plan had a little UTM code at the end saying that the source of that article had been ChatGPT.com. That’s how they were able to start piecing together the fact that maybe the Cuomo campaign had had ChatGPT help them write this.
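For the curious, here is a minimal sketch in Python of what that kind of check looks like, pulling a utm_source tag out of a link. The URL and helper function below are hypothetical illustrations, not the actual link from the housing plan:

```python
# A minimal sketch of spotting a UTM referral tag in a link.
# The example URL is a hypothetical stand-in, not the real one.
from urllib.parse import parse_qs, urlparse

def utm_source(url: str) -> str | None:
    """Return the utm_source query parameter of a URL, if present."""
    params = parse_qs(urlparse(url).query)
    values = params.get("utm_source")
    return values[0] if values else None

link = "https://example.com/news/article?utm_source=chatgpt.com"
print(utm_source(link))  # prints: chatgpt.com
```

A tag like that travels with the link wherever it gets pasted, which is why it can work as a tell in a published document.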
Well, that is some really impressive sleuthing. But, Kevin, I think there was actually an easier way to realize that ChatGPT had written this housing plan, which is that it had six fingers. Now, some follow-up reporting from “The Times” reporter Dana Rubinstein revealed that the plan was written up by policy advisor Paul Francis, who said that he relies on voice recognition software after having had his left arm amputated in 2012.
He told “The Times,” “It’s very hard to type with one hand, so I dictate. And what happens when you dictate is that sometimes things get garbled.” He acknowledged using ChatGPT to do research, but said it “clearly was not a writing tool.” A campaign spokesman argued that the housing plan wasn’t written by ChatGPT, saying, “If it was written by ChatGPT, we wouldn’t have had the errors.”
Which I love. It’s like the new excuse for “I didn’t use ChatGPT” is “look at all the errors.” This has to have been done by a human, because the AI is smarter than that.
Yeah, well, any way you slice it, a really hard story for all the cuomosexuals still out there. Remember cuomosexuals from the pandemic? Stop generating.
That’s triggering my PTSD. OK, next story. Oh, this is a good one: DolphinGemma, or how Google AI is helping decode dolphin communication. On Monday, Google announced on their blog that, in collaboration with researchers at Georgia Tech and the field research of the Wild Dolphin Project, they were announcing progress on DolphinGemma, a foundational AI model trained to learn the structure of dolphin vocalizations and generate novel, dolphin-like sound sequences. Casey, have you used DolphinGemma to communicate with any dolphins yet?
I haven’t. And here’s why. I don’t think it’s any of my business what the dolphins are saying. How would you feel if some alien civilization just came in and decoded your language and analyzed all your thoughts? I don’t think you’d like it very much. So maybe some of these Google researchers ought to mind their own business.
Yeah, I like this. I like the use of AI to communicate with animals. It seems like a very good use of this technology. It’s also just wild that the same techniques that got you these large language models can also maybe help us start to decode the utterances of other species. And I actually did use DolphinGemma to talk with a dolphin the other day. You know what it told me?
What did it tell you?
It said, I’ve been trying to reach you about your car’s extended warranty, and I said, that’s enough out of you.
Here’s the best part about releasing a tool and telling people that it’s going to help decode dolphin language: if you’re wrong, how are they going to know? You’d be like, man, I don’t think a dolphin would say that.
All right, stop generating.
OK, you’re on the next one.
OK, next one. Oh my god, truly my favorite story of the entire week: the US Secretary of Education referred to AI as A1, like the steak sauce. From TechCrunch: US Secretary of Education and former World Wrestling Entertainment executive Linda McMahon attended the ASU+GSV Summit this week, where experts in education and technology gather to discuss how AI will impact learning. While speaking on a panel about AI in the workforce, McMahon repeatedly referred to AI as A1, like the steak sauce. Now, Kevin, you have to admit that’s a very rare way of pronouncing AI.
Well done.
I thought it was only medium. Now, do we actually have a clip of Linda saying this? Let’s play it. I call her Linda.
- archived recording (linda mcmahon)
I think it was a letter or a report that I heard this morning. I wish I could remember the source, but there is a school system that’s going to start making sure that first graders, or even pre-K, have A1 teaching every year, starting that far down in the grades. And that’s a wonderful thing. Kids are sponges. They just absorb everything. And so it wasn’t all that long ago that it was, we’re going to have internet in our schools. Now, OK, let’s see A1, and how can that be helpful? How can it be helpful in one-on-one?
Now, if your child is absorbing A1, you may want to take them to the hospital.
Casey, we are so cooked. This is the Secretary of Education saying that we need more A1 in our schools. I did love the A1 steak sauce brand’s corporate response to this. I’m usually not a big fan of corporate internet personalities, but this one: the A1 Steak Sauce company actually posted on Instagram, saying, “We agree it’s best to start them early.”
Yeah, big day for them. I can’t wait to find out that A1 donated $25 million to the inauguration before this little, quote, “accident.” But look, this is the sort of story that makes you wonder, maybe we should actually have a Department of Education.
It really just underscores the stakes of AI. Really high stakes conversation.
All right, Kevin. Stop generating.
OK.
Next one: How Japan built a 3D-printed train station in six hours. This one comes to us from “The New York Times.” Apparently, workers in rural Japan recently built an entirely new train station in six hours. The station will replace a significantly bigger wooden structure that has served commuters in this remote community for over 75 years. The new station’s components were 3D-printed off site over a seven-day period and then assembled on site; the new station measures just over 100 square feet. It is expected to be open for use in July. Casey, what do you think of the 3D-printed train station in Japan that was built in just six hours?
Well, if they built it in six hours, why do I have to wait until July to use it? That’s my main question. What do you think?
I like this. I like the idea of 3D-printing housing. Obviously, our friends and colleagues Ezra Klein and Derek Thompson have their new book “Abundance” out, talking about how we need to build new houses in this country. And we should say, the 3D-printed housing thing has not been totally successful in America, but the technology is there. I think we should start 3D-printing houses. I think we should 3D-print ourselves a new studio.
Sure. I mean, it’s worth a shot. Here’s what I like about this story. You read about countries doing things like this, and I feel like this is the thing that DOGE is convincing us that it’s trying to do: we’re going to make things so efficient. In my version of that, you’d get new public infrastructure built really quickly. But instead it’s just like, well, we’ve replaced your local Social Security office with a phone number that no one answers.
Yeah.
Yeah. Anyway, good job, Japan.
Good job, Japan.
Advantage Japan.
Yeah.
Is there one left, or is that it?
There’s one more.
All right, Kevin, and now the final slip in the hat: One Giant Stunt for Womankind. This is such a fun essay by Amanda Hess in “The Times.” I recommend everyone go read it. This was, of course, about Blue Origin’s all-women spaceflight this week, when some very famous women very briefly went up into, I guess you’d call it, the outer reaches of the atmosphere.
And Amanda writes, “If an all-women spaceflight were chartered by, say, NASA, it might represent the culmination of many decades of serious investment in female astronauts. An all-women Blue Origin spaceflight signifies only that several women have amassed the social capital to be friends with Lauren Sanchez.” Lauren Sanchez, of course, is the fiancee of Jeff Bezos, the guy who created Blue Origin. So, Kevin, what did you make of this spaceflight?
I mean, I think it’s an amazing publicity stunt for Blue Origin, which I had not thought of for more than about five minutes until this week, when Katy Perry and Gayle King and all of these famous women were thrust up into orbit in one of these Blue Origin rockets, and they got some good publicity out of it. What did you make of it?
Well, as a gay man, I’m always very interested in what Katy Perry is doing. And so when I found out she was going to space, I thought, this could be good. And indeed, Kevin, she told us in advance that she was going to make an announcement. And then she got up into space, and the live stream cut out. So I think we’re still trying to find out what that announcement is. But we may actually have a clip of exactly when the live stream cut out. Play it.
- archived recording 2
And Katy Perry did say that she was going to sing in space. I’m waiting for it.
- archived recording 3
I’m waiting for it.
- archived recording 2
One-minute warning. One-minute warning.
- archived recording 3
So that is Capcom indicating a one-minute warning for our astronauts to take in those last views before they get buckled back into their seats.
Now, according to several other passengers on the trip, Katy Perry did indeed break out into song, singing “What a Wonderful World,” and when she got back to Earth, she kissed the ground. What did you make of that?
I just, I’m still reeling from that clip. That’s the best Katy Perry has sounded in years. She may just want to release that. Forget “What a Wonderful World.” What did we just hear? That was great. Avant-garde.
Now, Casey, would you go to space if Jeff Bezos or Elon Musk offered you a spot on one of their rockets?
No. If Elon Musk offers you a spot on a rocket, that’s giving Bond supervillain. I will not be going. I’m barely getting into Teslas at this point. How about you?
What about Blue Origin? Would you take one of their flights?
I mean, under the right circumstances, I’m space curious. You have to admit, it would be a great story to tell, even if you only go up for a few minutes.
Yeah.
It could be a great story.
Yeah, totally. And you get to wear that valor for the rest of your life. You’re always an astronaut, even if you only go up for 10 minutes. I mean, this was a fairly short flight. My understanding is they didn’t have to do any maintenance of the craft. They were just there for the ride.
No, well, I think once Katy Perry started singing, people started looking around saying, we got to get this thing back on the ground. I really can’t deal with that much of this.
No, I want to go to space and I’ll tell you why.
Why is that?
Because I once read that you actually become taller in space. You can grow as much as a couple inches just because your spine elongates in the zero-gravity environment. I’m 5’10”, Casey. I’ve always wanted to be 6 feet. So I think going to space could get me there. And for that reason, I’ll go up.
Well, if we went up together, we’d come down and you’d be 6 feet. And I would be 6’7”.
Yeah, which would be terrifying.
Which it would be. Then I’d have an even harder time finding pants. Anyway, that’s what’s going on in space. Now, actually, one more question.
Yeah.
If you were an alien civilization and you found out that the United States had launched Katy Perry at you, would you consider that an act of aggression?
Yes. And I do think, I hope, this starts an international incident where the Soviet Union will start sending up its pop stars. Does the Soviet Union still exist? Whatever happened to them?
I’ve been meaning to ask.
And that’s hat GPT. Hats off to you, newsmakers. And actually, since people will be curious, I can assure you that no AI was used in the making of this segment.
(MUSIC PLAYING)
One more thing before we go. We are recording another episode of our hard questions series with a very special guest. I’m so excited to tell you guys who it is, and we just want to have really, really great questions for them. If you have not heard our hard questions segments before, this is our advice segment where we try to answer your most difficult moral quandaries, ethical dilemmas, etiquette questions that involve technology in some way.
What is going on in your life with technology right now that you might be able to use a little celebrity help on? Please get in touch with us. Write to us, or better yet, send us a voice memo, or even a short video of yourself asking your hard question, and we might answer it in this upcoming episode. Please send those to hardfork@nytimes.com.
(MUSIC PLAYING)
“Hard Fork” is produced by Whitney Jones and Rachel Cohn. We’re edited by Matt Collette and fact-checked by Ena Alvarado. Today’s show was engineered by Katie McMurran. Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell. Our executive producer is Jen Poyant. Video production by Roman Safiullin, Pat Gunther, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Huang Tam, Dahlia Haddad, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.
What if you got to the vape shop and the thief was actually, like, a suave gentleman thief dressed in a suit?
It’s just Lupin, from the French series.
Yes, exactly.
He’s like, I’ve been expecting you, Mr. Roose. You’re probably here about your Series 8 Apple Watch. I thought you’d never track me down.
(LAUGHS)