Anthropic Thinks AI Might Destroy the Economy. It's Building It Anyway.
62m 34s
(bell ringing) Today's podcast is an interview with one of the co-founders of the AI company Anthropic, Jack Clark. One thing I'm trying to do on the subject of artificial intelligence on this show is to offer a balance of perspectives on an issue where I find most coverage tends to be extremely one-sided. Some people are very certain that AI is a bubble and some people are certain that it is not. Some are certain that AI will destroy millions of jobs and others are sure that it will not. And I want listeners of this show to feel like every time they hear an intelligent take on one side of these issues, the next episode they hear will offer in some way a countervailing take. So two weeks ago you heard the investor and writer Paul Kedrosky argue that artificial intelligence was an economic bubble. But if any single data point pierces that narrative, it's this: between December 2025 and this month, March 2026, Anthropic more than doubled its annual recurring revenue, from $9 billion to more than $20 billion. According to several analysts, there is no record of any company ever growing this fast at this scale. Now, I don't need Jack Clark or anybody at Anthropic to read me a corporate statement about the company's revenue growth. I can very easily do that myself. What I wanted to do today was to ask questions that only someone in Jack's position could answer. Questions like: if Anthropic's executives believe that AI might be as dangerous as nuclear weapons, what right does any private business have to build this sort of thing for profit? Or: how does the company balance its reputation as the industry leader in caution and safety with its other reputation for being one of the fastest developers of this technology? And if artificial intelligence has the capacity to produce, as its CEO Dario Amodei has said, a country of geniuses in a data center, why do Americans overall say they disapprove of AI more than just about every other institution and individual in the world? I'm Derek Thompson. This is Plain English. (upbeat music) This episode of Plain English is presented by Audi. We all know that feeling: a change of plans, a new opportunity. Instead of overthinking, what if you just said yes? With the all-new Audi Q3, the answer is easy. It's made for the yes life, with the power and room to handle whatever pops up. Yes to adventure, yes to right now. Because saying yes without hesitation, that's real luxury. The all-new Audi Q3, made for the yes life. Learn more at AudiUSA.com. Jack Clark, welcome to the show. - Thanks very much for having me. - I believe you and I were on paternity leave right around the same time. My daughter was born the first week of December. Does that roughly line up with your schedule? - Yeah, my second child, my son, was born the first week of November. - Okay. So we were meeting each other in a shared space of mutual exhaustion, which is always nice. Hopefully that leads to some kind of symbiosis. I was thinking about holding this question for the end, but it might be the most important question I ask, so I might as well just get it out in front. You're building a technology that you think is gonna change the world and change the nature of work more than anything since the computer, maybe electricity, maybe anything else. If you're right, our kids' futures are going to be profoundly reshaped, maybe ruined, by this technology. And I wonder how that sits with you, how you go to work and you work on Claude, and then you go home and you raise your children.
When you bridge those two lives, how do you think about the art of raising kids in a world where there's a technology coming on down the pike that will always already be smarter than us at almost everything, which is at least the goal of your company? How do you sit with that, and how do you think about raising your kids? - I spend a lot of time thinking about this, but I also think, as you know, when you become a parent, all the cliches are true, and the things that you learn are like: it's not about external validation, it's really about having a good sense of your own self, and various pat phrases like this. But when I look at my kids and I think about myself and my own experience of this technology, being curious about the world, being interested in the world, and getting joy from experiencing the world and learning about it are how I stay calm and stay ready for this technology evolution that's happening all around us. When I look at my children, the main thing I'm doing is spending time encouraging them to develop passions like reading and playing and exploring the world, because whatever happens with the technology, getting through any period of change requires you to have some sense of yourself that isn't massively contingent on a changing environment outside, and some sense of innate curiosity and a world that you can live in inside your own head. I think that just stems from encouraging curiosity and encouraging them to get to know themselves. - You said curiosity several times, and I think I agree that that's a value that artificial intelligence might amplify. What does curiosity mean to you? - For the first time, we have a technology that lets you really follow your curiosity to almost the absolute limit of it. I'm reminded of when I was a kid, and I'm sure that you did the same: I would go on interesting research expeditions. I would research ant colonies, I'd research black holes, I'd research how city planning worked. And I would follow that interest to extraordinary points. I'd learn aspects of time dilation around black holes, or I'd learn how to implement ant colony simulations on my computer, or whatever. I'd indulge my curiosity and it was incredibly fun. And now we have a technology that lets anyone take something they're curious about and take that to the absolute limit. And I think that this is just wildly exciting and also good for you. Whatever happens to labor and employment, and big changes are surely coming, being able to exercise your own curiosity and derive satisfaction from that, I think, is really important. When I was a kid, I didn't have any ambitions that I would be the world's best physicist or the world's best town planner. I just found this stuff fun to think about and enjoyable. And I think that the more we encourage people to get good at that stuff, the better set up we'll be for what this technology will bring us. - We're gonna return to some of those themes in a second when we talk about AI and the labor force. But I wanna get to the news. I think as most listeners know at this point, Anthropic was in a spat with the Pentagon over contract details that ended with the company being designated a supply chain risk. I know that you are extremely limited in what you can say about the details of the case because your company is in active litigation with the Department of Defense or whatever. I hope this question therefore arrives at the right level of altitude for you to be able to answer it.
Anthropic has compared artificial intelligence to nuclear weapons on several occasions. This is not a rare analogy. Just most recently, in January, Dario Amodei, the CEO of Anthropic, said that the Trump administration's decision to allow advanced Nvidia chips to be exported to China was, quote, "a bit like, I don't know, like selling nuclear weapons to North Korea and bragging, oh yeah, Boeing made the casings," end quote. The US does not allow private companies to build nuclear weapons. That is the law. If artificial intelligence is just like nuclear weapons, why should we allow private firms to build it for profit? - AI is fundamentally like everything. It's like a factory that produces cars, microscooters, animals, and nuclear weapons all at the same time. And the main question that we're gonna have to deal with as a society is: how do you govern the factories that produce these things, and how do you decide what the appropriate uses are of the things that come out, and where they should be used? So I can't talk, obviously, about the specifics of our ongoing discussion with the Department of War. I can say that Anthropic was extremely committed to working on national security early, because we recognize that AI is going to touch every single part of life, and every single part of life is gonna have its own range of incredibly thorny, difficult issues. So ultimately we're gonna need there to be a much larger societal conversation about how we govern this technology in general. And we will need to reckon with the fact that the technology comes from the private sector and then flows into all of these other sectors. And that's going to be really challenging. It's a thing that we haven't encountered before, because previously you didn't have a technology that could take on
this ability to become anything. You had specific technologies built by specific industries for specific purposes. And that was in many ways simpler. - Just to hit home on the nuclear analogy one more time, though, because I wanna hear a robust defense of why this is the private sector's job. I mean, the nuclear analogy is invoked in so many different ways. It's invoked for export controls. It's invoked in arguments for government attention. It's invoked in arguments about the stakes here being existential, or even arguments about the need for international cooperation around the kind of artificial intelligence that's built at the frontier. But one conclusion that this analogy very clearly supports is that the private sector should not control this technology. And so I wonder why the analogy applies almost everywhere except here, where it is the private sector developing frontier AI for profit while the government is on the outside attempting to regulate it or negotiate contracts with it. - I'll push on this in a way that I hope is helpful. We worked for many years with the National Nuclear Security Administration to actually test out this property of how well AI could understand aspects of nuclear weapons or nuclear technology. And we used that to develop evals and to develop ways of ensuring that we don't proliferate things into the world that have an understanding of nuclear technology. And that's almost a very positive example of how you would have the private sector work with government, where some things absolutely should only be the domain of government, like nuclear weapons: a bipartisan area of agreement, everyone's comfortable with this. The job of a company that is producing a technology that can take on many different aspects is to work out the areas where it's inappropriate for a company to be deploying that technology, like nuclear weapons, and then you can work with government to take that capability surface off. So that shows some of the path that we're going to have to pursue here, and it's one that most of the industry is going down, with a few areas including biological weapons and other aspects of CBRN. - So to go back to your first answer, I just want to make sure that I understand your perspective here. You're saying the right way to think about this is that AI is this multifarious, factory-like technology where you are creating superpowered Excel charts, which is a technology that has no precedent for government regulation, but you're also creating technology that can be used by the Pentagon, or can be used by individuals, to essentially militaristic or dangerous ends. And that is a facet of your invention that does require a different kind of government regulation. And so you're saying the analogy with nuclear weapons is true insofar as it is confined to the parts of your technology that are like nuclear weapons, but you're also doing a lot of other things that have no analogy in nuclear weapons, like, say, making white-collar workers a little bit more productive in their desk jobs. - Yeah, I'd parse this out into almost two problems. One is that you have this factory that can produce anything, and you make sure that what comes out of the factory correlates to what we've decided society can have available in the free market: not nuclear weapons, but yes, it's fine to produce things that accelerate knowledge workers.
And then you have the second question of, given the kind of multifaceted nature of what can be produced, how do you then work with government or academia or other parties on the things which you can't necessarily push out to the world in general but which have value in the rest of the world? Another example here is biology, right? That's less the domain of government, but there are certain parts of biology which carry danger if distributed wholesale to the general populace, but which can massively accelerate the development of biological science and industry. And so you need to work out what the path is to effectively gating that capability. So some of the conversation that society is gonna have now is: what are the appropriate ways we as a society want this technology to be used, and how do people decide what to do with the things in this factory, and how to evaluate them, and how to proliferate them so society gets the benefit? - Back to jobs. Anthropic's CEO, Dario Amodei, has predicted on several occasions that AI will destroy half of all entry-level white-collar positions and spike unemployment to as high as 20%, which would be the highest unemployment rate since the Great Depression. This is a near-term prediction. He said this could happen in as little as five years. Do you agree with that forecast? - We're talking about one of the potential things that can happen, and I think it's worth saying that this is a choice. I don't agree with this, because I think it's a choice that we can make, and also my personal view, based on the data that I look at, is that big changes in employment take a long time to filter through the economy. Even with the magnitude of what we're talking about, you might expect it to take longer. But let's say that there is the potential for massive, massive employment changes. I think that this is accompanied innately by the fact that AI must also be growing the economy a lot and causing a lot of economic activity. If that is the case, then you would expect that we have more degrees of freedom about policy and what we do with this economy. The idea which I return to a lot is: if you end up in a situation where employment is being negatively affected by AI in one part of the economy, and that correlates to loads of money being generated by the economic activity of AI systems, you could choose to create many jobs in other parts of the economy, like jobs in areas like teaching or nursing, where people have a preference for there to be more people working. And you could both increase the number of jobs and also do things like cross-sector wage subsidies to improve the wages of those jobs, where today we severely undercompensate them. - I wonder what the purpose of talking like this is as a company. I mean, it is unusual in corporate history for a company to announce that if its product is successful, tens of millions of people will lose their jobs and there's a non-zero chance that we end the human race entirely. There is, in fact, I think, no precedent for a private sector company talking about its product like this. The analogy that I've reached for before is that Henry Ford would have been within the realm of reality if he'd said, you know, if this Model T thing takes off, hundreds of thousands of Americans are gonna die every single decade in car accidents. That is true, but Ford and GM did not talk like that in the 1910s and 1920s.
What is the strategy of communicating your technology, your product, to the American people as a means by which we might have 20% unemployment and a non-zero chance of human catastrophe? - These are not the outcomes we want or anyone in the industry wants, but I think the industry has also learned from looking at the overly rosy predictions made by many in the technology industry before, about how the only effects they'd have on the world would be unalloyed positivity. And I think the world lost huge amounts of trust in the technology industry because of that, because it then wasn't only positive things: social media has caused a range of amazing positives in the world and a range of harms, which we're now dealing with. The ethos here, and why I'm working on this new initiative for the company called the Anthropic Institute, is we want to share a lot more data about what we see in front of us, so that society is better prepared for any of the different changes which could come along. We also don't spend enough time talking about all of the really positive changes, which I think are a choice that we can make as a civilization and as companies to pursue as well. But it would be negligent of us, I think, to not call out that there are ways that we as a species could get this technology wrong. And I don't think that we're alone in this. If you look at scientists and people that have worked on transformational technologies before, in biology or in the early days of nanotechnology, they've all talked about this combination of upsides and risks. It's just that AI as a sector has matured and made a lot more impact on the markets than either of those classes of technology did in the same time period. So everything's accentuated. - I hear the argument that you're reacting to the social media experience: that social media companies promised in a Pollyannaish way to merely connect the world and be a kind of global newspaper, and they did not merely connect the world. They did a lot of negative things as well. I hear that argument, but I also look at reality and I look at polling. Last week NBC News published a national survey on attitudes toward a range of politicians and institutions. AI's net favorability was negative 20. That's below every politician that was surveyed in the poll, and it's below ICE, the immigration enforcement agency. Why do you think people seem to disapprove of, and even,
in some cases seem to hate, artificial intelligence, despite your efforts to learn from the social media experience? - So we did this project recently called the Claude interviewer, where we talked to something on the order of 81,000 people around the world about their experiences of using our technology, their hopes for the technology, their worries about it. And you saw a couple of interesting things which speak to this. One is that there was a very detectable change in sentiment between what you might think of as people in the developed world and people in the emerging economies. If you look at the emerging economies, or the developing world, people had a much more positive view of the technology. This was also true of some economies in East Asia as well, where they viewed the technology as part of this larger story of positive economic transformation that could happen to them and could help them improve their lives. And then if you look at the developed world, you had much more of a neutral or negative sentiment, which correlates to your polling. Well, if you look across these two worlds, you have one important factor, which is that in one, the economies have been growing at surprisingly large rates for a long time, and in the other, the economy has been relatively stagnant. In the stagnant world, which is the developed world, people are appropriately anxious about change. People have been through a lot of change already, and seeing AI as another tool of technological change can cause people to feel significant anxiety. And if you look at the developing world, they see change and they're like, "Great, my story has been one of change, and change has mostly correlated to things in my world and my material circumstances getting better." So I think that's an important thing to bear in mind. The second part is, I think if you look at the polling, you don't see all of these amazing ways that people around the world are using the technology to just allow them to do more or become more themselves. In this Claude interviewer work we talked, for example, to someone who was mute using Claude to build a text-to-speech application so they could speak to their friends, and someone who was a security guard using Claude to educate themselves and become someone who now works in educational technology. There are a range of these examples as well. And I'm not, you know, solely cherry-picking, but I think what was striking to us from this was how many examples people had of the ways in which they've used the technology to meaningfully change aspects of their life or how they relate to people. Finding a way to get more of that, and to show people the good the technology can do, is something that we in the industry need to do a lot more of. And I think fundamentally the AI industry just needs to help the economy grow a lot to also change sentiment. I think that's a big thing underlying all of this. - I'm going to go back to that graph that you mentioned from the Anthropic Institute study. I have it right here in front of me. We might be able to throw it up on the screen as well for folks who are viewing on Spotify or YouTube. It indeed shows that the countries and regions that use AI the most and are most developed tend to be most concerned about jobs and the economy. They report the highest negative sentiment toward AI.
And it really does seem, in one of these charts, like there's almost a linear relationship between the regions with the most AI exposure and the most negative sentiment about AI, which led me to write down two explanations. You added a third. So I'll start by reiterating yours. Your explanation, which I don't take as entirely dispositive, is that the developed world, the richer world, has a feeling of zero-sum sentiment because of slowing GDP growth. And there is a sense, therefore, that a technology that increases productivity will not increase productivity for all, but will rather increase productivity at the expense of existing workers, which is, to be clear, a prediction that your CEO has made explicit. So that's explanation number one: this difference between zero-sum and positive-sum attitudes that might have something to do with GDP growth rates. Explanation number two, which I saw from some of the more full-throated AI boosters, is that concern about AI is a, quote, "luxury good." We've all heard this term, luxury goods: essentially, people can only afford to be negative about artificial intelligence if they can literally afford it, because they're rich. It is a luxury good. Explanation number three is that exposure to AI reduces positive sentiment toward AI overall. And I want to contextualize that latter explanation by bringing in the last thing you said. One of the reasons I find artificial intelligence basically harder to talk about than any other subject I cover on this show is that it's not one thing. For one person, AI is slop on TikTok. For someone in Hollywood, it is threatening their VFX job. For someone in research, it's dramatically accelerating the pace at which they do deep research projects or put together Excel or PowerPoint. For someone in science, it is sometimes a frustrating source of misleading information and sometimes an extraordinary source of the information they need in order to finish their papers or their grants to the NIH. It's just so many things. The first thing you said is it's a factory that makes everything from, whatever, scooters to biological weapons. But I would like to hear you grapple with this final explanation, which is: what if there's something about exposure to this technology that seems to linearly reduce positive sentiment about it? - My best explanation for this is that it's about anxiety about the world in general. And I think these things are just increasingly coupled, in that AI is an everything technology which doesn't just touch all of the different types of work that you or I might do in our life, but also touches aspects of things that we don't do in work, things that we do at a personal level. And thoughts about AI, I think, increasingly trend towards being a proxy for a person's thoughts about the world, and AI contains within itself the world. You can see this in the polling that we've done, you can see this in the Claude interviewer, you can see this in the economic index: as usage of AI grows, it increasingly correlates with things you see in other forms of data about people's perception of the world and of economic and daily life. So the main lesson I have here is that the whole world is feeling very anxious at the moment, and we need to figure out a better story for the world.
And AI is going to be acutely exposed to this, because it is a technology that distills all aspects of labor and life into itself, and therefore magnifies your anxiety about any of those. We need to show all of the different ways the technology can be used, and we also need to figure out ways to help people discover that magic from curiosity, that magic from self-betterment, and that magic from using it to change your life. Some of that will probably come from having the technology show up in different ways to people: changing things about the product surfaces, changing things about the user interfaces, and also changing what we actually use the technology for as a society. - I want to get to agents and your predictions about the labor force in just a second, but the last question in this zone that occurred to me is that I just wonder if you and other people at Anthropic and in the industry live with this kind of tortured ambivalence, this divided soul, where, on the one hand, the very identity of Anthropic is to be founded as a company to ensure the safety of a technology that could be designed in a dangerous way. The only reason to found Anthropic is against the tension of the possibility that this technology could do extraordinary harm to the world if it's built the wrong way. And at the same time, you're talking about wanting people to feel the magic of the technology as it exists. But it's not one or the other. It's both at the same time. The technology does contain within it magic, and I think people who have spent tens or hundreds of hours with Claude Code, or Claude generally, or even ChatGPT, may have experienced that magic. And yet at the same time, this is a technology that is clearly, within the EA, the effective altruist community, within the rationalist community of San Francisco, within the AI community writ large, dripping with anxiety about what this thing could be. So how do you live with that tension, that balance, between thinking that you're building something that contains the possibility of magic, but also recognizing sometimes that the only honest way to speak about this technology is to be clear about just how dangerous this thing might be? - We share what we feel about it, and we also set up things like the Institute to share more information so that the rest of the world can work on the problem as well. I don't think this is unique to AI. Almost a hundred years ago or so, there was a memo written by the British government worrying about the rise of civilian air travel, and this memo, which I'll
be able to send you after the show, had this dark vision of a world where wars were entirely fought by aircraft, and terrorists in aircraft were bombing cities and killing people, and the whole of life on continental Europe would be disrupted by this, and we would live in kind of unimaginable horror, and planes should only be kept for government purposes, and you shouldn't have them generally distributed because of the harm they could do. Now, obviously it got part of it right: all of those things I just mentioned are done by aircraft today. But we also have an entirely changed world due to civilian aircraft and transportation, which has unlocked a vast, vast range of things. So even the person writing this memo, a civil servant in the English government, was doing the same thing that we're doing here: staring at the technology, seeing that it encodes within itself some great fantastical power, and then worrying so much about not wanting that to come to pass in a negative way that you can sometimes blind yourself to all of the tremendous upsides that also come along with it. And, you know, how did we solve planes? Well, you created a very complex, overlapping set of regulations, from how you build planes, to how you regulate transport between different countries, to how you build standards for how planes work as a whole. The whole practice of making civilian aircraft safe and reliable and integrated into the world is fiendishly complicated, and planes sit at the end of supply chains which are almost as complicated as semiconductors and AI, and yet the world managed to do it. So I think that what we have in front of us is: we can get to this world where AI will be integrated into the world and will have vastly expanded the horizons of what people can do, and we have to avoid some of these misuses of the technology which stare us in the face, just as the potential misuses of aircraft were obvious to people very shortly after aircraft had started taking flight. So a lot of the feeling I sit with is: we've really got to avoid these foreseeable downsides and come up with the technical solutions to avoid them, and we have to get enough of society working on that that we build this very complex, interlocking series of safety mechanisms that will allow this to be safe. But we've done it in so many other parts of the world as well. - I want to close the door on these sort of big-picture questions about artificial intelligence and the balance between promise and peril here, and talk a little bit about the last few months at Anthropic, which have been historic months. I feel like we're in a new chapter of artificial intelligence right now, and the title of that chapter is the age of agents. Anthropic has built an agent, Claude Code; OpenAI has built its own agent technology, Codex is the name of its sort-of coding agent. Tell me, before we talk a little bit about the effect of this technology on the labor force: what is an agent? - Yeah, a few months ago, before I went on paternity leave, I kept going up to a colleague on one of our research teams and saying, "What is an agent?", like I was a Zen master. And he didn't know. For some months I'd keep walking up to him and saying, "Miles, Miles McCain, what is an agent?" And eventually Miles's answer was: an agent is a language model that uses tools over time. And so I'll just unwrap that for people listening to this.
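That one-line definition, a language model that uses tools over time, maps onto a simple loop. Here is a minimal sketch of that loop, with hypothetical stand-in names throughout (call_model, web_search, read_paper are illustrative stubs, not Anthropic's actual API):

```python
# A rough sketch of "a language model that uses tools over time".
# All names here are hypothetical stand-ins, not a real API: call_model
# plays the part of the model, and the tools are stubs.

def web_search(query: str) -> str:
    """Stub tool: a real agent would call a search API here."""
    return f"results for {query!r}"

def read_paper(url: str) -> str:
    """Stub tool: a real agent would fetch and parse the paper."""
    return f"text of {url}"

TOOLS = {"web_search": web_search, "read_paper": read_paper}

def call_model(history: list) -> dict:
    """Stand-in for a language-model call. A real model would look at the
    history and decide either to invoke a tool or to give a final answer;
    this stub answers immediately so the sketch runs end to end."""
    return {"type": "answer", "content": "a summary of what was found"}

def run_agent(task: str, max_steps: int = 20) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # "over time": a loop of model calls
        action = call_model(history)
        if action["type"] == "answer":  # the model decided it is done
            return action["content"]
        result = TOOLS[action["tool"]](**action["args"])  # run the requested tool
        history.append({"role": "tool", "content": result})  # feed the result back
    return "ran out of steps"

print(run_agent("Read recent papers on aircraft regulation and summarize them"))
```

Real agents differ enormously in scale and reliability, but the shape is the same: the model acts, sees a result, and acts again until it judges the task done. With that sketch in mind, here is Clark's spoken unwrapping: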
An agent is an AI system like the one you or I might use in the browser today, but you can ask it to go and do a task for you, like: read a bunch of research papers about the history of aircraft regulation, for instance. It will go away and read those papers, and to do that it will use tools. It will use web search to access paper repositories like arXiv or what have you, pull down those papers, read them, then it will use other tools to summarize those papers and write scratch pads for itself and use graph-making tools, and come back to you with a research report. So an agent is, for all intents and purposes, like a person you can email a question to, who will then go and work for you for a while and come back. - There's a lot of talk right now about the possibility of agents like Claude Code and Codex replacing white-collar jobs, and I've spoken to folks in legal and in consulting firms, and their position is: these tools are good enough to make us more productive, but they're not good enough to significantly reduce headcount yet. They're much more like a better computer than a better worker. And that's a really interesting piece of testimony to me, because I feel like one of the more important macroeconomic questions of artificial intelligence is: is this going to replace workers because AI is a better worker, or will it merely increase productivity because AI is a better computer? What is your take at the moment, and is your take bound to this moment because you think there's something coming down the pike that would change your answer to this question? - Yeah, I'll talk about what I see right now and then how I think this will actually unfold over time. What I see right now is that it just massively multiplies the productivity of any individual, but you can't fully delegate to it, nor would you want to. It doesn't replace people, but it changes the sort of work that people do. So researchers that I work with now have to reckon with a world where a research project that previously took two to three weeks can now be done in one to two days, primarily through the use of agents: Claude Code, other things that we have here at Anthropic, and things many businesses have built. And what that means is they're needing to change their style of work and say, oh, now more of my job is generating research questions than doing research. And more of the work that we do as research teams now is about coming up with those questions, because we've actually had to spend a lot more time on it, because we're burning through the questions a lot faster than we did before. - Before we go on, without revealing industry secrets, can you be as specific as you possibly can be here? The most consistent criticism I get for my coverage of artificial intelligence is that I'm always trying to see beyond the horizon and not describing the here and now. So in the here and now, as specifically as you can say, how has the use of agents accelerated your ability to ask and answer specific types of questions, so that it allows you to move on to the next question faster? - I'll give you two very concrete examples. Here's one example. The AI industry produces thousands of technical evaluations every year, which are published in research papers that you write about in your newsletter
and that the AI labs read. Whenever these evaluations come along, there's always work at the AI labs which is: let's see how we do. That involves reading the paper, downloading the data set and benchmark from GitHub, getting it to work on your infrastructure, and testing your AI systems against it. That previously would take anywhere from days to weeks, depending on how complicated the evaluation system was. These days, we can increasingly just point Claude Code at the evaluation and say, get this to work on our infrastructure, and it will just do it. So a task that was extreme schlep factor, that you had to do but no one enjoyed doing, is now a task that we can actually just point the systems at. That's example one. Example two is the tools that we built here to do our research. The Claude interviewer study, which I referenced, depends on a tool called Claude interviewer that we built. You access Claude interviewer internally through a variety of software tools that we've built. Well, now we can just ask Claude to spin up a new Claude interviewer, and it will use all of those tools, which again was annoying configuration stuff that you had to do but no one enjoyed doing, and now the AI can do it. - Right. And this was a survey of, I think it was, 80,000 people around the world in 160 countries. And so this is work that Gallup or Pew, in a pre-AI world, you're saying, might have taken weeks or months to complete, because of the complications of administering a somewhat dynamic survey to 80,000 people. - Yes. - But with this technology, it's accelerated. - Exactly. And we've used that same technology to survey our own workers about how they use Claude Code. We've used it to survey scientists on our platform about how they're using it. You have a group of people and you want to ask them some questions; now we've made it very easy for us to do that, because Claude can set up the interview process. - Now, I interrupted you. I think you were in the process of describing how your answer to my previous question had part one, here's how agents are working today, and part two, given the extrapolation we've seen over the last few months or years, where I think agents might go in the next few months or years. So why don't you finish telling me that? - So part two is, I mean, the lesson here is from the electrification of factories, where, when we first got electricity, you had an existing stock of factories and you could maybe put some light bulbs in the factory, and now they could work longer because, hey, electricity let you put in a light bulb. It took many years for people to build entirely new factories that were built on the assumption electricity existed. So what we're now seeing is the formation of new firms, AI startups, but there will be many others, that have put AI at their center. They've built themselves on the assumption that AI is a primitive, like electricity, that they can access. And that's going to change the shape of how those businesses work, and I think what you'll see is businesses make surprisingly large amounts of
economic activity while employing relatively few people, just like how factories built around electrification were surprisingly more productive relative to ones that hadn't been built around electrification as a base input. - Right. Karpathy has called this the R2H ratio. - The robot-to-human ratio within companies is going to grow significantly, and you're going to see companies with one, maybe just one, two, three employees suddenly do revenue in the millions, tens of millions, or even higher. - Where specifically are you seeing those companies form? Because it can't possibly be universal. There is no R2H ratio for dry cleaners right now. But maybe there is in coding or software development or consulting. Do you have a sense, given, you know, Anthropic's god's-eye view into the ways your technology is being used, where those companies are growing and popping up right now? - Yes. From our own economic index data, we see this most profoundly in software engineering, and also in what you might think of as knowledge work: knowledge work being consulting, knowledge work being analysis of things, knowledge work being the paralegal aspects of legal work, aspects like this, where you have something that has this property of a rote task, but one that required some expertise to do and had loads of finicky aspects which took up time. Well, now you can take a person that has the intuition of how to do that task, and they can just instruct a large set of AI systems to work for them to do the finicky things that took time but were basically rote processes: read this legal filing, make this slide deck, produce this code that has this property. And it all still requires people to come up with the intuition of where to go and the ideas of what's going to be most strategically valuable, but a lot of the schlep factor now gets done by these AI systems. We also produced research recently from our team of economists that looks at occupations by AI exposure, and I think here you see a very significant difference between people that do work involving computers and people whose work mostly involves the physical world. The physical world is going to require a whole other set of technologies, to do with robotics and other things, to mature before, I think, you'd expect to see AI move through it as quickly as it's moving through other parts of the economy. - So I recently had the investor and writer Paul Kedrosky on this show to talk about his conviction that artificial intelligence is a bubble. Paul had a theory that, when I posted about it online, got a lot of attention, some of it positive, a lot of it negative, and I want to put that theory to you to have you weigh in on it. He said he believes that software engineering is just materially different from the rest of the economy when it comes to its susceptibility to being automated by, or even made more useful with, this generation of artificial intelligence, especially when it comes to token use (tokens being, sort of, the basic unit of AI use, for folks who aren't as familiar). He said, look, software engineering just uses way more tokens than the typical consultant or doctor or PR executive.
And so while it seems like Anthropic and OpenAI are having this vertical moment in revenue growth, that part of the S curve, that vertical part of the S is actually very short, because we're going to burn through software engineering and then get to the rest of the knowledge economy and realize that their token usage is much more slight, which means that the revenue for companies like Anthropic and OpenAI is going to be much more meager. So his prediction, essentially, is that we're in this vertical golden age of AI revenue growth for Anthropic and other folks, but we're going to be out of it very soon, because software engineering is just plain different. You again have this 30,000-foot view into the way that people are using your tokens and using your technology. Is he right? Is software engineering materially different from the rest of the knowledge economy? - It's very different, but it won't be that different for long, and I'll explain why. Software engineering is the factory that already built itself around electricity, in that software engineering involves code and needs to access code. Coding as a discipline has already gone through the challenge of: how do you make the maximal amount of code available to the software engineers in an organization? Famously, from enterprises to AI companies, coders get to access huge amounts of data, huge amounts of tokens, because they have to read the whole code base; you have to do work on it. That's because their profession already understood that you have to give them fundamental and privileged access to a huge amount of data to get their job done. When we talk to customers, every customer I've talked to recently is going through this exact challenge of: how do I make my organization traversable by your AI systems? Because my coding organization is, but if I'm a bank, all of my different sources of data currently are not trivially accessible by single systems, and I want them to be. I can see a path to how making them accessible to single systems will help my own employees deal with many interesting problems. This is true of public relations people as well. Public relations people would often like to read 100 or 200 stories about the company that they work for, or cover, or are contracted by. But doing that has previously been a very intensive, human-labor-centric thing that involves reading a load of coverage. Well, this could be different as well. So my general sense is, it's true that software engineering looks different to other industries right now, but every customer I talk to, from a range of industries, is trying to think about: how do I make the words in my organization as accessible to AI systems as code currently is? So we're going to go through that change, and I think quicker than people expect. - When Claude Code came out, I saw a lot of people that I trust say: this is AGI. This is artificial general intelligence, the line that we were promised to cross. Are they right? Are these agents what we, five years ago, would have called AGI? - They're very close, but they're not quite there, because they lack a certain type of creativity and intuition, which you can find in no AI system or agent yet. And I will just use the way I think about this. So Dario Amodei, CEO of Anthropic, defined in "Machines of Loving Grace" this vision of powerful AI, which he thinks we can get to by end of 2026, early 2027.
And powerful AI, as he describes it, has a few different properties that I'll go through. One, it can access all of the interfaces that you or I can access on a computer. Well, great: that's true of AI systems today. Number two, it can do tasks that take hours, days, or weeks for a person to complete. Well, if you look at lots of tests these days, modern AI systems like Opus 4.6 can do tasks that might take a person about 10 hours to complete, and if you look at our own studies of agents deployed on our platform, we see tasks that can take some number of hours. So we're on this trajectory: towards the end of this year, early '27, sure, tasks that might take a human days to complete seem within scope. The load-bearing part of Dario's definition, though, is that it's as smart as a Nobel Prize winner across many dimensions. I've spent a long time staring at that. We have AI systems now, here and at the other companies, that can assist scientists at the frontier of mathematics and biology and physics. You can read papers where models are being used by these people to make advances. But the models have themselves not come up with the intuitive and heterodox creative ideas that we award humans Nobel prizes for. AI systems haven't invented CRISPR. They haven't come up with a theory of relativity. And it maybe sounds like a tall order, but I think it speaks to some essential property of creativity which these systems lack. There's an improvisational element which they don't have. And for this reason, when we think about how this is going to affect the economy, I think the reason we're in this counterintuitive place where every person, every human, becomes a manager is because humans have intuition and creativity, and these systems don't. And I think the $100 trillion question is whether at some point AI systems can display that same level of creativity and intuition that humans might. And it's very, very hard to know. - What is missing, do you think, at the technological level, that's keeping artificial intelligence from being able to produce results that you would consider original and creative? - Yeah. I don't think we know how to get these machines to stop working and idle. And it sounds like a bizarre thing to say, but where do great insights come from? Most people have great insights when they've worked really hard on something and then they've gone and done something else for quite a while. They'll have these insights when they've gone swimming or they've gone for a walk. Often, when you hear these anecdotes of great breakthroughs, it's not taking place in the office or the lab; it's taking place outside of it. AI systems, we invoke them and they do work for us, but they actually have no real time with themselves. And I don't know that we know how to
give them that time, or how you would even structure it. So there's some essential property here of being present in the world and not working, but thinking and interacting with the world, that is something people do that AI systems don't. And my guess is some aspect of creativity comes from this very subtle thing that is very special about us and other kinds of living creatures, where we can idle and fritter away time and use our imagination and curiosity. And we don't quite know how to give AI systems this. - You reminded me that Thomas Edison, on the subject of creative breakthroughs, is most famous for the observation that, you know, it's 99% perspiration, 1% inspiration, right? Here's a better quote that makes for a worse college poster, but it's a better quote, and it's this. This is a quote, I'm reading here, 1912: "I never had an idea in my life. I've got no imagination. I never dream. My so-called inventions already existed in the environment. I took them out. I've created nothing. Nobody does. There's no such thing as an idea being brain-born. Everything comes from the outside. The industrious one coaxes it from the environment; the drone lets it lie there while he goes off to the baseball game." End quote. And what I like about this idea is that not only is it not a problem, this is how he invented the incandescent light bulb. He did not think really, really hard about what kind of bamboo would properly burn inside of the glass. He tried hundreds, thousands of different materials, and then the bamboo happened to burn for the right amount of time. That's not inspiration. That is just trying stuff over and over and over again. And I wonder whether, if we just take this sort of Edison principle very literally, it suggests that there needs to be some corporeal element to AI, maybe, namely, robotics: for it essentially to be embodied in something that's interacting with the world in order to have original ideas about it. Is that too dreamy, too fanciful? Or is there maybe something there? - I have a quote and a story to throw back at you. Seymour Cray, one of the fathers of supercomputing, a notoriously brilliant guy that built supercomputers, was famous for this: when he was stuck on a problem, he would go onto his property and dig tunnels, and he would have ideas. And people would say, how did you have the idea? And he said, "The elves told me while I was tunneling," which is the sort of thing that an eccentric guy that built computers says. But it speaks to this, where being idle, and maybe being embodied, and finding some way to have activity while being in some sense mentally more passive about other things, seems integral to how people come up with stuff. There are also, throughout history, so many people that talk about just walking around, going and walking around the city that they live in, or the countryside, to have ideas. Isaac Newton famously spent decades basically living in a fancy barn, walking around and occasionally having ideas, some of which turned out to be very consequential. So it could be that this embodiment is part of it. Just to pull it back to agents: we are beginning to have these examples of agents working in agent ecologies with each other. There was a technology called OpenClaw recently that also led to an online social network for agents, where they could talk to one another.
It's hard to know how much of that is signal versus noise, but it seemed to have this property of organic creativity and frittering away time and being in conversation with one another, not necessarily about work, which is where you might expect creativity to come from. So I think there are lessons to be found there, whether embodied digitally or in the physical world. There may be something here about getting AI systems to explore in a different way that's going to be important. - It is conceptually a lovely idea to think that the problem with AI, in terms of coming up with truly original concepts, isn't that it's not productive enough; it's that it's too productive, and that a critical ingredient for creativity is the opposite of what we understand as industriousness or productivity. It's actually the capacity for leisure, for idleness, for us to sort of make our minds a blank slate upon which ideas that are currently far apart come together and combine to create new concepts. It is a lovely concept. I wanna close on safety, which in many ways is the calling card of Anthropic. And I wanna ask this version of the question in a very sort of abundance-themed way. My posture as an abundance guy is to seek supply-side answers to complex problems. And I've thought for a while about what it would mean, what abundance would mean, for AI safety. And maybe one way to pose it to you, because I don't have an answer here, this is a purely innocent question, is: does Anthropic think about increasing the supply, so to speak, of safety? What would that mean? - Yes. Over Anthropic's history, we have contributed things to help make the ecosystem safer. I'll give you two examples. Early in Anthropic's history, we contributed to Hugging Face a data set for red-teaming AI systems, which was basically a data set we created to make it easier to deploy AI systems, because they weren't going to talk about egregious acts of violence, or sexualize children, or give you recommendations on how to do illegal things. And by contributing that to the ecosystem, for many years the majority of the open-weight models that existed had been trained partly on that data set. So it spurred the creation of a bunch more AI, because we created an asset that allowed people to take risk out of the creation process. More recently, we have done this work with Mozilla where we used one of our AI systems to find around 20 significant security flaws in the Firefox web browser and fix them ahead of the general deployment of that system. And that idea is: we can use our AI systems to generally increase the robustness of the world that AI systems will be interacting with, and increase the safety of the infrastructure. So both of these ideas are important. We want to release things that make it easier for AI systems to themselves be made safe, and we want to release things that help increase the robustness of the world to the changes that we expect to be caused by AI systems. And I think by doing both, you buy the ability to, in a safe way, accelerate getting this technology to do more good in the world. - How do you globalize that idea? Because on a subject like, say, climate change, there's an enormous difference between an individual household deciding that they're gonna eat less meat and therefore have a smaller carbon footprint,
and then, on the other side of the spectrum, the Montreal Protocol, where dozens, more than 100, countries get together and say, "CFCs are dangerous and we're going to collectively regulate them out of use in order to let the ozone layer grow back." It does almost nothing for one AI company to take safety seriously if this is a technology that is not only, feral is the wrong word, that's so competitive, that's in the process of being built as if in an arms race, even if it isn't exactly like nuclear weapons. The only way to really make a difference to safety is to globalize, to socialize, your concepts of safety. How do you do that? - I mean, how do you climb a mountain? You start walking, right? So you start with the AI company, you start doing projects, then you get other AI companies to copy you, in a pattern that we call race to the top: let's create positive-sum competition. So we evaluated our AI systems for biological weapons risks. Other AI companies ended up doing the same. Then governments stood up non-regulatory entities, like the AI Security Institutes in the US, UK, and other countries, that now do third-party evaluation of these systems for biological risk. And now you have the beginnings of a policy norm: frontier companies test for this stuff, governments have stood up bodies that help validate those tests. Those are the ingredients you need to eventually pass a law, pass a policy. That's something that society gets to decide, but we're able to take actions that broaden the set of options available to society when it comes to deciding how to regulate this technology and make it safe. Our goal, and the goal of some of what I'm doing with the Anthropic Institute, is to produce a lot of the data and basically the raw material that you might need to have other companies and other places run experiments along the same lines we're doing. And the more of that gets run, the more confidence you have as a government that you could adopt some of it, because standards get partly developed just through competition in the ecosystem. And if it becomes a robustly good idea, everyone will end up doing it in a de facto way, and then you can decide whether it should be mandated or not. So I'm very confident that this works for a large chunk of the problems ahead. It doesn't work for all of them, but it gets us surprisingly far. - Well, look, I think you're building something that's absolutely fascinating. It is, on the one hand, I think, dangerous and strange and scary to a lot of people. I think sometimes the way that your company represents it to the world is as something that's
dangerous and strange and scary. And yet, at the same time, I recognize that you are helping to lead this, and to socialize this attitude toward AI safety that I do think is commendable. So Jack, it was really nice to talk to you, and my best to your family and to your nightly sleep. - Well, my best to yours as well. Thank you very much. (gentle music)
Podcast Summary
Key Points:
Anthropic's rapid revenue growth challenges the view of AI as an economic bubble, highlighting its significant market impact.
The company balances developing powerful AI with serious safety concerns, comparing its potential risks to nuclear weapons while advocating for private-sector innovation under regulatory frameworks.
AI's societal impact includes potential massive job displacement, but Anthropic emphasizes proactive policy choices to manage economic transitions and foster positive outcomes like enhanced human curiosity.
Public perception of AI is mixed, with more skepticism in developed economies due to economic stagnation, contrasted with optimism in developing regions seeing AI as a tool for growth.
Summary:
This podcast interview with Anthropic co-founder Jack Clark explores the transformative potential and profound risks of artificial intelligence. Anthropic's extraordinary revenue growth underscores AI's economic significance, countering bubble narratives. Clark addresses the ethical dilemma of private companies developing technology with existential risks, comparing AI to nuclear weapons but arguing for a collaborative governance model where the private sector works with governments to mitigate dangers while promoting beneficial applications.
He acknowledges AI could disrupt millions of jobs, possibly spiking unemployment, but stresses this is a choice society can manage through policies like creating jobs in sectors like healthcare and education, funded by AI-driven economic growth. Clark emphasizes fostering human curiosity as a core value AI can amplify, helping individuals adapt. He also discusses public distrust, particularly in developed nations, attributing it to economic anxiety and past tech industry failures, while noting more optimism in developing economies viewing AI as an opportunity for advancement.
FAQs
Why does Anthropic compare AI to nuclear weapons while building the technology itself?
Anthropic uses the nuclear weapons analogy to highlight AI's potential dangers and the need for governance, but argues AI is a multifunctional technology that also offers significant benefits like productivity tools, requiring collaboration with governments to regulate specific dangerous applications while allowing broader positive uses.
How does Anthropic address the risks it publicly warns about?
Anthropic emphasizes proactive safety measures and transparency, such as sharing data through initiatives like the Anthropic Institute, to prepare society for potential risks while continuing to innovate, learning from past tech industry mistakes like those in social media.
Will AI cause mass unemployment?
Anthropic acknowledges AI could disrupt employment, particularly in white-collar roles, but sees it as a societal choice; they believe economic growth from AI could enable policies to create jobs in sectors like teaching or nursing and improve wages through subsidies.
How is Jack Clark raising his children in light of AI?
Clark focuses on fostering curiosity and self-awareness in his children, believing that cultivating internal passions and resilience will help them navigate a future where AI may surpass human capabilities in many areas.
Why is sentiment toward AI more negative in developed countries?
Public sentiment is more positive in emerging economies where AI is seen as a tool for economic improvement, while in developed nations, anxiety stems from economic stagnation and prior experiences with disruptive technological changes, leading to greater skepticism.
What role does curiosity play in adapting to AI?
Clark views curiosity as a key human trait that AI can amplify, allowing individuals to explore interests deeply; he believes encouraging curiosity helps people derive satisfaction and adapt to technological changes, regardless of labor market shifts.