Helen Toner and Emelia Probasco: National Security in the Age of Intelligence
43m 51s
The podcast "Age of Intelligence" explores AI's impact on various aspects of society, featuring guests from the Georgetown Center for Security and Emerging Technology (CSET). CSET focuses on the security implications of emerging technologies with a policy-oriented approach, emphasizing rigorous, data-driven work. The discussion highlights AI as a general-purpose technology with uncertainties surrounding its trajectory and its implications for national power. The dual-use nature of AI, geopolitical implications, and the US-China competition in AI development are also explored. International collaboration, especially within NATO, is seen as crucial for AI research and development. The conversation underscores the importance of understanding different perspectives in the AI research community, navigating the dual-use aspect of AI technologies, and fostering dialogue to bridge gaps in understanding and intentions among stakeholders.
Transcription
8765 Words, 48450 Characters
[MUSIC] >> Welcome to the Age of Intelligence, the podcast that considers how AI is rebalancing our world: national power, business competitiveness, and our own lives. I'm Fuse of Genio, an academic, an entrepreneur, and an advisor on AI for the past couple of decades. >> I'm Tim Gordon. I've been an advisor, an investor, and an entrepreneur in AI for the past decade. [MUSIC] >> We are very lucky to be joined today by Helen Toner. Helen Toner is the Interim Executive Director at Georgetown's Center for Security and Emerging Technology, CSET. She's previously researched the Chinese AI ecosystem, and was also on the board of OpenAI. We're also joined by Emelia Probasco. Emelia Probasco is a senior fellow at Georgetown's Center for Security and Emerging Technology. She's previously worked at the Johns Hopkins Applied Physics Laboratory, and also in the US Navy. >> Great to have you with us. Thank you for joining us. Can I ask you to start by just telling us a bit more about CSET, what its mission is, what it does, and whether there are any other organizations globally performing a similar role, or are you unique? >> Yeah, it's great to be on. Thanks so much for having us. CSET is the Center for Security and Emerging Technology. The basics are in the name: we work on the security implications of emerging technologies. What that looks like is, we're based inside Georgetown University, but we're really not an academic organization. We're very policy focused, very focused on what policymakers, decision-makers, and a broader set of stakeholders need to know about progress in emerging technology, and how it affects the kinds of decisions they're making. So we have a number of different teams working on different elements of that. Emelia works on our team focused on national security applications of AI.
We also have team members looking at workforce and talent issues, biosecurity, semiconductors, and a wide range of other related issues, and we're really laser focused on trying to find actionable insights, trying to shed light on these complicated issues, to help our audience make sense of what's going on. >> Thank you. >> And you also asked about comparable organizations globally. I think we do like to think that we're unique in some important ways. We really strive to make sure that our work is rigorous, evidence-driven, data-driven. We have a data science team and a large number of data resources that feed into our work. So in some ways, when I think of comparable organizations, there are consulting firms that will do some of the kinds of things that we do, but paid and more client-directed. There are think tanks out there, of course, who increasingly are focused on the kind of tech and national security issues that we identified when we were set up in 2019. Internationally, I believe there have been CSET-inspired organizations set up in at least the UK and in South Korea. So CETaS, the Centre for Emerging Technology and Security, is part of the Alan Turing Institute in the UK. And I'm actually blanking on the South Korean organization, but I think it's another mix of those same words and a similar acronym. And again, trying to help decision-makers in that context understand some of these same fast-moving emerging tech issues. >> Given the breadth of the topics that you study at CSET, and that the Center has been around for a number of years now, how can we position the themes that are more AI-specific today relative to, let's say, themes to do with other emerging technologies: cybersecurity, maybe, or other kinds of science and technology that predate CSET? Of course, people always think about nuclear or other things, right? How can we position AI relative to the others?
What are some common themes, common challenges, or common lessons? And what are some differences? Pick your favorite comparison, I don't know whether nuclear is the one or something else, that can help us position AI within the historic framework of the past, I don't know, whole century, maybe, if not longer. >> I can start, and then I'd love to hear, Emelia, if you want to share any thoughts. I think a really important piece for AI is that it is a very general-purpose technology. There's a lot of emerging tech that from a security perspective is highly relevant, whether that's cybersecurity or stealth technology in the later 20th century, or other things that are really security-specific, and it's obviously very important for security decision-makers to understand those topics. But I think the general-purpose nature of AI just adds a lot of complexity around who the relevant stakeholders are and what kinds of trade-offs you are making, where it's not specific to a military domain or to some other national-security-focused domain, but really also affects civilians, also affects consumer technology. It's more analogous to something like electricity in that way: electricity changed everything about how we live and had very important security implications, but those were far from the only implications. So I think that is really important for AI. And I think another piece that to me makes AI a little distinct from many other technologies that we are looking at at the moment is how much disagreement and uncertainty there is about the trajectory of the technology. Experts might often disagree about the pace, you know, is quantum computing five years away or 10 years away or 15 years away, but they'll often agree much more on what it will look like as we're succeeding, what kinds of markers we will see along the way.
And with AI, there's, I would say, a much more chaotic and mixed picture of who expects what, which again makes it very difficult from the perspective of, you know, a senator or a senior decision-maker with lots of different issues that they're trying to make sense of. They get a little bit of time to think about AI, and it can be really tough to understand what is going on: not just what the answers to the questions are, but what are even the questions you need to be asking and the problems you need to be solving. >> I'd love to come back to those questions later. Emelia, you've been doing all the military side, so, people very often say, oh, this is another nuclear moment, or they say this is like a Cold War. There are all these different narratives out there. Some of them make more sense, some of them make less sense. What's your take on all these narratives, and maybe how do they compare, on the military side at least? >> Sure. I mean, I don't find the nuclear comparison super helpful. Really, as a person working on national security applications of AI, the question is: what can I add AI to in order to make it faster, better, more precise? So the nuclear analogy doesn't quite hold up for me. Instead, every person in national security has to be asking themselves, where does AI factor in? And it's not, I should be clear, it's not just large language models. It's all the other types of AI: computer vision, good old-fashioned AI. All of these things have a lot of applicability across the entire spectrum of national security concerns. And so in that way, it's more comparable to electricity than to something specific like nuclear power or nuclear weapons, which invite a whole new set of thinking about how to do things like deterrence or protection for issues around that technology.
>> We want to go broader now, to the geopolitical implications of AI, which are massive: at the economic level, the economic impact, but sometimes also at the ideological level. There is a quote along the lines of whoever controls AI controls your ideology, and therefore your future, since it's a cultural product; but also, of course, the disruption to the defense sector, let's say. If you were to unpack the different topics around the geopolitics of AI, if I may call the theme this way, there is a lot of work on this, and people work on different themes, both at CSET and elsewhere. Which themes, which sub-communities within this broader topic would you point the audience to, and what are some of today's questions and competing theories? >> There's definitely a lot that falls under the geopolitics of AI to unpack. I think from our perch sitting in Washington DC, a key part of it in our work is the US-China competition: trying to understand how AI affects US-China competition, which has been one of the driving focuses, or probably the driving focus, of the US national security community for the past, what, eight years at this point. I think there's also interesting thinking being done on new AI capabilities, the potential national security implications of what AI can do itself. So this is thinking about, does it affect the range of capabilities available in cyber attacks, or there's a lot of concern about AI helping with creating bioweapons, things like this. So you're shifting the balance of power between different actors, taking attacks that currently require high sophistication and making those more accessible. And I think that is maybe a specific manifestation of a broader question around how AI affects national power.
So, you know, a few hundred years ago, national power was primarily determined by how many people you had and how well you could feed them. In the 20th century, that shifted to a much more industrial model, where it was really important how much steel you could produce, how many ships and planes you could build, and how many munitions, rather than how many fit young men you had. And I think a big question for the 21st century is to what extent AI will reshape what it means to have national power and what that looks like. And that, I think, is quite an open question. There are a lot of different takes on the implications it might have, but I think it really needs further exploration. What would you add, Emelia? >> Sometimes I think we think of AI as this thing that pops out and it's done. And in fact, it's layers of things that all work together. And so in thinking about the themes: what is the power that the model itself confers, and what sorts of models are we talking about? Then there's the compute. We talk a lot about the competition over the compute, because the models are sort of nothing without the compute, and the compute is not as interesting without the models. And then you have the delivery methods, or the apps: all the different ways and things it might do, the way we pair the compute, the data, and the models into some sort of final product. So all of those things are at play right now. And so we have lots of conversations about, is it the DeepSeek model, or is it the OpenAI or the Anthropic model? Is it Nvidia's compute, or is it somebody else's compute? So there is more than the US-China angle. That is certainly the thing that I spend a lot of time, and I think a lot of us spend a lot of time, thinking about, but there are lots of elements to the entirety of the AI stack. And we should be thinking about how we engage with other nations that can contribute and do interesting work at any point in the layers of the AI stack, or the software stack.
I think the other point I would just make, and I think you might have been getting to this a little in your question, is that there's a dual-use element. So I can use an LLM to complete certain actions in the military domain that might also be extremely useful in the healthcare domain. And the technology that is used to do targeting for the military is also very good at delivering relief supplies. Or the logistics system for the military might be very effective for Amazon. So it's very difficult to know, when we've created this sort of collection of technologies, what it could be used for, which is both exciting and, I think, uncomfortable, because then it starts to make you think about what you're building and the many different ways in which it can be misused. >> How useful is the analogy of the race between the US and China? You saw the front page of the FT a few days ago: one story was saying that China is going to be winning because they can build power stations far faster. Over the weekend, there's another story about how effectively things are going in America. So there's all this sort of building the machine god through vast amounts of capex and compute, and then perhaps it doesn't quite work out: the boom, the possible cause of a crisis. Is this a useful way of thinking about it? Or, given this is a technology that, as you say, will transform the 21st century, should we be thinking in different terms entirely? >> So personally, I think it is perfectly reasonable to think of this as being a competition. But I think the word race brings up much more specific connotations that I don't think are clearly right. Because if you think about a race, what defines a race is that you're trying to get to the finish line first, and whoever gets there first wins. And it doesn't matter if you're only half an inch behind: if you didn't win, you didn't win, and that's all that matters. It's conceivable that AI could work this way.
If there is some kind of future highly advanced AI system that somehow gives you some permanent advantage if you get there first. But I think it's not at all clear that that's what the competition will look like. Thinking in terms of a race, I think, narrows the set of possibilities that we consider and the set of strategies that we think about as being valid and valuable. So I do tend to push back on that race language. Even if you're very committed to this being a competitive situation, which I know some actors are more than others, I think it can still make sense to say: yes, it is a competition, but no, it's not, or probably isn't, a race. >> And you mentioned the sense that some of the other allies out there could perhaps play a role in this; it's not just a head-to-head battle. If you look around the world, what do you think Europe brings to the table? What do you think, say, India brings, with their tech stack and the quality of their resources? How do you look at the wider set of allies out there? >> Everybody has something to bring to this game, and it's just a matter of figuring out where and how. We've done studies at CSET in the past that looked at Europe and our allies and academic research. So if you think about all the research that's going on, particularly in both Europe and India, that research is critical. Much of it, the collaborations that American scientists are doing with overseas scientists, particularly in the EU, is of high impact. And so the research element, I think, is one where we can be working together. And then obviously in the national security domain, which is where I focus, we are starting to see more and more movement, particularly in NATO, to try and instantiate and operationalize AI. There will be different ideas about how to get that done. And I think this is one of the great strengths of NATO. There are different rules of engagement.
There are different notions of privacy. All of these things, I think, are important, and we need to struggle through them together to get to ideal outcomes. It should not just be the US and China having these discussions. I think looking at the different ways in which those countries can contribute is absolutely essential to finding the best possible solution for everybody. >> I love the point on the research. And I'll take the opportunity to ask a question that I'm facing as an academic. I actually find myself these days going to AI research conferences, and sometimes people say, oh no, we don't want these people who are using AI for the military; they should not be at the conference, they don't belong here. And I say to them: well, then nobody should come here in that case, because we all do this in some sense; it's all dual use. These debates people are starting to have are very interesting, and I'm sure physicists had the same debates in the 1950s, 60s, 70s. Should researchers, as academics, really think about this dual use as they do their research? What do you have to say to the research community, who may or may not realize they're actually doing military research, because it's all dual use, and how should they navigate it? That's an unfair question, and then I have a second question regarding the AI stack you mentioned, but let's tackle that later. I'm curious what you both actually think about advice to the research community. >> I think one of the most wonderful things about democratic societies is that people can have different opinions and lead perfectly productive and wonderful lives.
And so I am thrilled that there are researchers who do not want to work on military things, who have other passions and care about things like health and education, because those are the things that make societies wonderful. At the same time, I do not think that those who are interested in national security applications should be excluded, or their voices should somehow be sidelined because for some reason they are bad. That I find to be unacceptable, and frankly not democratic. So I understand that there is a level of discomfort for certain researchers with working on military topics, and to them I would say: fine, don't work on military topics. That is fine. That's what makes us great. But at the same time, it's very difficult to know, and this is the question beneath your question, how do you know that you're working on one and not the other? And I can't tell you the answer to that. It's just like if you work in finance: I can't tell you where the money goes once it gets printed; it gets used in lots of different ways. Or energy: it exists, and it serves lots of different functions. Even if you work on social media and you think you're just doing something for entertainment, you come to realize that it has misuses. And the best we can do is to work with institutions that we think share our principles, and then work on technologies that either create the better future that we're trying to create or control for the misuse as best we can. But there is no clean answer, and I completely respect everyone across the spectrum who's doing research on this topic. >> Helen, the same point around this dual-use kind of framing of researchers also follows for you, right? So how do you navigate it, or do you just accept it? >> One thing I would maybe add is this: in my time in the space, and I've been working on these topics for coming up on 10 years or so,
I have seen a lot of people with passionate opinions on both sides of this who seem to talk past each other a lot, or who seem to not necessarily understand where the other perspective is coming from. So I think, for example, there have been these long debates on lethal autonomous weapons systems at the UN, and often I think the advocates coming in wanting to ban those systems have really good intentions, and I really understand where they're coming from. But then they start trying to operationalize that into things that would actually affect military practice. Unlike Emelia, I don't have a military background; I don't have that lived experience. But when I speak with people who do, they help me understand why some of the things that advocates might be pushing for actually would have really unintended effects that they don't realize, or how it can be really difficult, as Emelia pointed out, to kind of cabin off what is military and in what way. What does lethality mean? What does autonomous mean? How would this affect existing practice? One suggestion that I keep coming back to in this space is that I think it's really important that we just remember that we have international humanitarian law, and that it tries to articulate a lot of the kinds of things people are worried about here. So if people are worried that military AI will be used to harm civilians, that's already contrary to international humanitarian law. And likewise for many other kinds of concerns people have. So I think those concerns are very valid, but I sometimes wish that they would be channeled into trying to strengthen and promote international humanitarian law, and trying to have that continue, rather than being made to be about AI, because AI can also be used in ways that support international humanitarian law.
So for example, looking at how you use AI to reduce civilian casualties is absolutely something that you could do. So that's just one perspective that I'm trying to bring to these debates as well. >> Can I add to that? Helen picked up on some of my favorite discussion topics here. I find that one of the things that's most frustrating as a person in this space, but that also helps me in a lot of conversations, is forcing people to get specific about what technology, or what exactly, they're worried about. And often I find that their worries are more informed by things that aren't real than by things that are real, or alternatively they have a false perception of what happens today. And this is particularly true in the military. I think folks somehow think things are magically all working together on the military side. I'm sorry to inform you, it's really hard, and there are a lot of Excel spreadsheets being thrown around, and PowerPoints, and terrible mistakes that no one wants to happen, but they don't have a better tool. And for so many of these tools, when you get into the specifics of exactly how they're being used, it becomes a little less scary. So I always try to encourage folks who are concerned to engage with me in a conversation about where their real concerns are, and about what the technology can and cannot do. >> And just maybe to close out that thought as well, I want to make sure that it's clear that neither Emelia nor I think that everything that is going on is fine and great, or that all the potential ways you could use AI are fine and great. There has been reporting about uses of AI in ways that absolutely contravene international humanitarian law, for example. And my point is not: that's all fine, nothing to worry about. My point is that the criticism there should be about the law and not about the AI. >> So what's your sense of the future of the diplomatic conversation around this?
So we've had the Bletchley Park conference, which basically was all about scary robots coming to eat us. And then we had the Paris conference, which was all about scary Americans coming to eat us Europeans. We've got the Indian conference, where the presumption is it's going to be about, you know, India coming in, the wonderful hallelujah. We've had the Geneva conversations between the Americans and the Chinese based around some of the military topics. Where do you think that's going? Is it really leading anywhere, or do you think it's all kind of doomed? >> I'm honestly not sure what to expect. I think the international summit series that was kicked off with Bletchley, and I love that summary of it, that's amazing, I think it was started with a real vision on the part of the British government for the specific niche that they wanted it to fill and the specific set of topics that they wanted it to cover. And I think that just wasn't shared by other governments, unfortunately. And so it has sort of spread out and gone in many different directions that I think maybe the folks who hosted the first summit weren't intending or hoping for. The US-China sort of Track 1 bilateral conversation doesn't seem to me to have much momentum behind it right now. There was that first meeting in Geneva in May last year where the two sides met and had, I think, sort of not terrible, but certainly not incredibly promising discussions. The G7 Hiroshima process has been sort of chugging along in the background, and seems to have been somewhat productive, at a minimum as a way of crystallizing some best practices and trying to articulate them, especially for those so-called frontier AI developers, the ones really pushing the frontiers and making especially advanced systems. That process seems to have made a little bit of headway in laying down some early shared views of that space. But yeah, I'm personally not sure.
>> I have a question about a broader related topic, which we also discussed in an earlier episode with Karen Hao, who wrote the wonderful book Empire of AI. As you know, Karen's argument is that we should be more intentional about how technology is developed, and should pay attention to negative externalities, whether for electricity, for energy, water, climate, children, whatever. And I'm wondering, and this is one question Tim and I also discussed with Karen: is there really any way to avoid negative externalities when a technology as powerful as AI is moving so fast and so unpredictably, as you said, Helen? And do we expect to see a model which is neither today's Silicon Valley move-fast approach, with the narrative of the race, the narrative that the US has to win or everything is doomed, and the quick quarterly results that Wall Street demands, nor, on the other side, the Chinese way? Is there another way that maybe we need to look towards, that you may even see shaping somewhere else in the world, and I mentioned Europe here as an example, another model that solves these problems? What do you think? Maybe Helen can give it a shot. >> I'm sure there is another model, and I don't know what it is. I think we are in a giant sort of collective discovery process of what this technology is, how we use it, and how we can build it. Something I'm very passionate about is trying to expand the range of people and the range of perspectives that are able to have input into how AI is developed and used. And I think that can include things as simple as, there's been some good progress this year in creating more transparency requirements for AI development.
So trying to give governments, give the public, give independent experts more ability to understand how the technology is being developed. I think it's also important to remember that in many ways we've been served pretty well in the past by reacting to things as they happen and as they come: seeing what kinds of problems come up and how you can rectify them. And a big question with AI is, to what extent is that a fine model, versus are there going to be potential harms that are going to be harder or impossible to rectify, and what might that look like? And I think that concern, that there are going to be such harms, is what motivates some of the more radical proposals in AI governance circles, like trying to do a pause, or having a massive global governance arrangement. I think those only make sense if you really think that there are going to be these sort of irrevocable issues. What do you think, Emelia? >> Yeah, I mean, I think that's right. And I'm sort of stuck on your question about externalities. Sometimes it's hard to fully appreciate the externalities as you're starting to get going. But that's why what Helen said about the mechanism of transparency matters: it's meant to bring more people into the conversation, to overall shift the power dynamics that shape how this technology gets deployed.
And this is a time when we do need other sources of pressure and power exerted than just one, just the military, or just the folks who are working on the development side. Having that transparency means being as excited about the wins and the amazing things that AI can do as you are about the harms and making sure we don't do them again. We have to embrace both sides at the same time, and allow for productive tension to exist as we try to get to an ideal outcome, which won't be easy or pretty necessarily, but is a struggle worth engaging in. >> Emelia, I've had a question for you on the actual deployment of AI in the military. I've always been very struck by how considered and responsible the concepts that have come out of the US forces actually appear to be around this conversation. And in the civilian world, we use a lot of the terminology, the whole human-in-the-loop concept and so on, and I think that came out of drone warfare. So do you like some of the thinking that's evolved from that starting place? Do you think that's still true in the military sphere? Or do you think the combination of the experience in Ukraine, the sort of forcing nature of some of the choices they have had to make in the face of reality, the increased sense of near-peer competition of the US with China, and possibly even the new warrior ethos we see at the Pentagon, has begun to change some of that sense that there's a real need to get on with the responsible AI type of things? >> I don't think so. I'm really happy to say I do not think it's changed the intuitions behind responsible AI. In fact, the group that did the Responsible AI Toolkit for the United States Department of Defense was highlighted in the most recent AI Action Plan.
So I was trained at the United States Naval Academy, where leadership and ethics and law are baked into you from the time you're 18. And the culture in the military, especially amongst the leaders, is that as we learn about international humanitarian law or international human rights law, we understand the burden of command and leadership. And so I think what you will find is that commanders themselves, the individuals who are tasked with deploying these systems, feel acutely that they do not want to make a mistake and they do not want to violate the law. They don't want to go to sleep at night knowing that they did the wrong thing. And I think sometimes that gets forgotten in these big conversations about the deployment of tech. So I do think that you will find there is a core that is very strongly indexed on being responsible in the use of AI. But you raised the point of how that changes when you feel like you're in a situation of existential threat, where you feel like you might die. Does that change the way that you look at the system? And I think that, in some regards, it has changed things in Ukraine, but it's not willy-nilly, we pressed a button and then all the swarms came out and won the war. That does not happen. What is happening in terms of autonomy, just to give you a very specific example of how this looks: you send up the drone, and then you lose comms, because they're attacking all the communications networks. So it becomes much harder. Most of the drones to date were piloted; there was a guy with a joystick that was sort of driving it in at the end. Well, when you lose comms, it gets much harder to drive it with a joystick. There are some workarounds, with wires, basically. But you have this thing called terminal guidance. That's at the very end, when you've found the target and you know that's the target.
And now you want to make sure you hit that target and nothing else. Terminal guidance is autonomy. You know, you can imagine it includes some amount of AI, but really not much. It's been around for a long time, and it just lets us hit the target we intended to hit. So I think, you know, these are the specifics of the conversation. Does it drive us to adopt autonomy? Yes, because they're jamming our communications and we can't pilot the thing in anymore. Does it lead to, you know, the Terminator? No, because nobody's comfortable with that Terminator sort of future. And nobody wants the thing to accidentally turn on their own people. So, you know, I'm very proud of what the military has done with responsible AI. I think they've really tried to push out these principles and instantiate them in different ways. And I don't think that's going to go away, because of the people who are actually in uniform. And they're people; we forget the people when we talk about the big picture, but that's a key lesson here. >> Of course, forces can come in different shapes and, if I may say, different personalities. So let me go back, Helen, to you. We don't want to get into all the drama that happened a couple of years ago with the board of OpenAI and the Sam Altman ousting and all that, but two questions. First of all, what would be some general lessons on governance that maybe you took out of it, personally and in general? And second, which relates also to the shifting environment these days, is the music changing recently? We hear about bubbles, we hear about bailouts, we hear about, you know, all sorts of new ideas coming out. How do you feel about both the lessons from that time and how things may have changed since then, with all these different perceptions and the changing music? >> Yeah, I think it's somewhat the same answer to both questions.
And I think it comes back to the point about broader participation that we were talking about earlier, which is, I think, for a long time the AI sector was a pretty small group of people and a pretty insular community, largely based in Silicon Valley and the San Francisco Bay Area. And I think a great thing over the last couple of years has been the increasing number and variety of people who are interested in AI, who are engaging in these conversations, who are helping to figure out, you know, what we want to do. And I think that included, maybe in sort of 2023 and 2024, I would say, lawmakers getting much more engaged and involved following the release of ChatGPT, which I think was a good development. It feels to me like this year, 2025, there's been also an especially notable increase in just public engagement and interest. Of course, the public also got interested after ChatGPT, but with some of these questions around data center buildouts or job impacts, things like that, I think people are feeling the impacts of the technology a little closer to home. And to the extent that that means they get more informed about AI, develop their own perspective, start to engage in these debates, I think that's a really valuable thing. >> So I guess one of the questions that keeps coming back to me is: do you believe we're in an AI bubble? Do you think this thing is about to go pop? >> I'm not a financial expert, so I defer to others on bubble dynamics, really, but something that I believe is true is that many of the most transformative technologies of the past couple hundred years have been associated with bubbles. And so it seems to me perfectly possible that we'll see some kind of financial readjustment, which could be very painful, potentially, depending on who loses money and which companies struggle.
But that's also perfectly compatible with expecting that the technology will have a huge impact on the world and will sort of carry on progressing. So I think sometimes the bubble question is like, do you think this is all fake and hype? And no, I don't. But am I confident that the level of investment that's going in right now is going to pay off? No, I'm not. So I think those two things can coexist. >> It's always a question of timing, and of what actually happens first: does it fail to get traction, or do we actually find ways to make it work? >> And also, which actors are affected. So, you know, which investors struggle? Is it just the VCs and the sort of early-stage companies that have promised massive returns in the next two years, and they have a bad day? Or are we looking at pension funds? Are we looking at some of the big tech companies really being hit? That would be a more systemic shock. >> Totally. Emelia, let me come to you. You mentioned actually that one of the challenges for the military is there's still a lot of reliance on Excel spreadsheets and PowerPoints and so on, and on trying to take these sort of legacy tech systems, that were probably put in place by some prime contractor back in the '70s, and, you know, put AI on top of them. Is that something that you see as being a real blocker between, if you like, the conversation we heard about the glories of AI in the military and the harsh reality that it's going to take a long time to make that happen? >> Yeah, I mean, I think everybody's actually struggling with this. There's this notion that you're just going to drop AI on your desk and then it's going to transform your whole world, and everyone is finding out that's not true. So yeah, the military, more than many organizations, has an enormous amount of legacy equipment that, you know, we have to work with. Maybe not quite as bad as NASA.
NASA's got to maintain computers from the '80s to operate old spacecraft, but it's comparable in some regards. So integrating with legacy technology, especially legacy weapons platforms, is pretty hard. And the Pentagon is also a massive bureaucracy that's composed of other large bureaucracies. So moving data throughout the Pentagon is a huge issue that folks have been working on for a long time. So do I think it's going to take a long time? Yeah, I do. I think it will take a while to get to fully scaled adoption of AI. I think we have some bright spots where we can see prototypes that worked so well that folks started to, for the military, pretty rapidly adopt them. Maven Smart System is the one we talk about most often. But there are still lots of, I call them, the AI adulting problems. So you get the prototype and you're really excited and you're playing with it and it does something really cool, and then you realize you have to pay for compute. That's an AI adulting problem. Or you need to figure out what your edge compute versus your cloud compute strategy is going to look like. These are all the nitty-gritty next-stage issues that the military is going to have to work through in order to go from some very successful prototypes to fully scaled operations. And I think everybody encounters that to some extent. >> Very much the everyday business experience of AI adoption. Let me go a little bit towards the future. What I find very exciting these days is the discussion about AI and science. Even when someone is talking about AI and science, you go, oh wow, okay, that's interesting: science, robotics, AI and the physical world. How do you see that playing out, AI and science as a marriage, let's say? And how do you see the dynamics between different regions?
The relative weaknesses and strengths, you know, manufacturing and industrial production in Europe, and then the whole story of the politics, of course, of rare earths and materials, all that stuff. How do you see that playing out? How do you see the power dynamics shifting as we go from AI on the internet and in the labs and language models, to AI in the physical world and in science? >> I think those are two pretty different questions, whether we're talking about AI for science or whether we're talking about AI and robotics. Of course, there are connections, you can use robotics for science, but I think AI for science is, you know, one of the most exciting stories with AI right now. And again, you know, it isn't always language models. So I think there would be little disagreement that the biggest, most exciting development in AI for science over the past 10 years has been AlphaFold, predicting the physical 3D structure of proteins based on their amino acid sequences. There seems to be agreement from a lot of biomedical specialists that that should unlock a lot of new things, but that kind of research takes time. So I think we're still yet to see exactly how much blossoms from that. And likewise, there are other directions. There's automated materials science happening, where, you know, labs will autonomously generate ideas for compounds and then mix them using relatively simple robots and test them automatically, and, you know, this concept of a self-driving lab is starting to get established. I think there's lots of exciting stuff there. I think most science tends to be sort of a common good, an internationally shared good, which is great and wonderful, and I hope that that continues. I think the question around robotics, and also, you know, advanced manufacturing, is a little different, in that it comes back more to these ideas of competition and national power and geopolitical balance.
And in my mind, that's an area where, I haven't, you know, looked into this deeply, but it looks like China has some pretty significant advantages in that space over a lot of other countries, not just the US, but many others. And so if it does turn out to be a really important source of advantage, then, you know, it remains to be seen how other countries adapt. One last thing to single out on the AI for science piece: I see this as a big place where other countries, non-US countries, non-China countries, have a lot of opportunities, because I think making use of AI for science and actually getting good results is often going to involve investing and dedicating time to a specific setting, to a specific field, not just kind of having the newest, shiniest, sort of hypothetically most advanced model. You know, those kinds of deployments and uses require a lot of elbow grease and legwork. And so countries sort of rolling out, whether it's, you know, challenge prizes or dedicated national efforts to solve particularly significant challenges, I think is a window of opportunity for a wider range of different actors. >> Not only the front end, let's say the military operational side, but also the back office may change the balance of power between different regions going forward. Do you see something happening there? >> I mean, so AI plus robotics, in my world, is drones. So that is, yeah, a source of substantial interest on all sides. One of the hardest areas has been the navigation of undersea drones, which is of deep importance to the US-China competition. And that is, you know, going to be an area where there will be a lot of focus. You had a question about back-office dynamics? I mean, yes, AI could make a huge difference for back-office dynamics. Like I said, the military is a massive bureaucracy and runs a health care system, as does the Chinese military; they have to run their own health care system.
There's a lot of health work that's going on in the background that could use AI. The other one that often gets talked about, which is not quite what we call the pointy end of the spear, and also not quite back office, is all the intelligence gathering. There's just a lot: when a commander or a politician is trying to make a decision, they're trying to integrate a vast amount of information that can swing from commodities prices all the way to fuel stocks, and integrating that amount of information is really difficult. And the notion is that AI can help, if not AI, then at least some better data science and data management. >> Helen, meanwhile, I'd love to hear, I know you're super busy, your mind is in a thousand different directions, but if there is one question that excites you these days, what is it? What are you obsessed with these days? >> The one question. I'm really fascinated to see what happens with open source AI models at the frontier. For a long time, the closed frontier models have been leading. Then, almost a year ago, when DeepSeek put out V3 and R1, it looked like that gap had mostly closed; then during 2025 it looked like the gap had opened again. Just a few days ago, we saw the release of Kimi K2 Thinking from the Chinese lab Moonshot, which again looks like the gap is potentially quite narrow. And I think there's a ton of strategic implications, not just for US-China, but also for who has access and what kind of safeguards you can put in place, whether it's going to be a small number of closed companies or whether there's going to be this broad open access. So that's something I've been thinking about a lot and trying to learn more about recently.
In Europe, the music changed after JD Vance's talk in Paris. Before that it was no open source; after that it was yes, open source, because maybe, like the Chinese, they realized: since we cannot compete on language models, let's open source and compete on the complementary assets. Do you see that as a strategy of Europe and China, to actually open source everything and undercut the investment theses of the Americans? >> I think open source is always a good tool for the follower or the trailing party, and not so much for the leader. So I think it will also be interesting to see, is there a natural dynamic where, if open source actors do start to have the most advanced models, do those actors then change their mind about open sourcing, or do they continue to open source when they are at the frontier or leading? That will be interesting to watch as well. >> So it's a leader-follower question, or the complementary assets question, a comparative advantage kind of thing? >> I think it's a little TBD. I think a lot of people's strategic thinking about open source AI draws from strategic thinking about open source software and the kinds of benefits that accrue to the providers of large open source software projects. And I think it's not clear. It may be that you get similar benefits from open sourcing AI systems, but it's somewhat different from traditional open source software, so we don't actually really know. >> Fantastic. So, closing question to both of you. It's very easy to make a scary case for a dire AI future, whether it's the concentration of power in a few corporate hands, whether it's the destruction of the middle class, whether it's a global war of one sort or another. You're both clearly working incredibly hard to try and avoid some of these worst-case scenarios. What's the one thing that you would really like
the audience to take on board to create a better future, and what would that better future look like? >> I think: taking seriously the possibility that AI becomes extremely advanced and powerful over the next 10 to 20 years. I think sometimes the debate has gotten so skewed that if you don't say something crazy is going to happen in two years, then that counts as a slow timeline. So I think the big thing for me would be taking seriously all the kinds of preparation we need to be doing if we are looking at the possibility of very advanced systems sort of in the 2030s. >> Yeah, I mean, my biggest ask, and I think it sounds like this is what Helen was saying as well: it's not just about the tech, it's about how the tech is adopted, the rules the organizations set around it, the way that they train and prepare people to adopt it. There are lots of human choices that are not necessarily technical, and having everyone engaged in those questions, I think, sets us all on a better path. >> Thank you both for a very useful conversation. >> Thank you. >> Thanks very much.
Podcast Summary
Key Points:
Introduction to the podcast "Age of Intelligence" discussing AI's impact on national powers, business competitiveness, and daily lives.
Guests Helen Toner and Emelia Probasco discuss their work at the Georgetown Center for Security and Emerging Technology (CSET).
CSET focuses on security implications of emerging technologies, with a policy-oriented approach and rigorous, data-driven work.
Discussion on how AI is a general-purpose technology with uncertainties about its trajectory and implications for national power.
Consideration of AI's dual-use nature, geopolitical implications, and the competition between the US and China in AI development.
Importance of international collaboration in AI research and development, especially in the context of NATO.
Summary:
The podcast "Age of Intelligence" explores AI's impact on various aspects of society, featuring guests from the Georgetown Center for Security and Emerging Technology (CSET). CSET focuses on the security implications of emerging technologies with a policy-oriented approach, emphasizing rigorous, data-driven work. The discussion highlights AI as a general-purpose technology with uncertainties surrounding its trajectory and implications for national power.
The dual-use nature of AI, geopolitical implications, and the US-China competition in AI development are also explored. International collaboration, especially within NATO, is seen as crucial for AI research and development. The conversation underscores the importance of understanding different perspectives in the AI research community, navigating the dual-use aspect of AI technologies, and fostering dialogue to bridge gaps in understanding and intentions among stakeholders.
FAQs
CSET stands for Center for Security and Emerging Technology, focusing on the security implications of emerging technologies and providing actionable insights for decision-makers.
There are organizations inspired by CSET in the UK and South Korea, such as CETaS at the Alan Turing Institute, focusing on emerging tech issues.
AI is a general-purpose technology impacting various domains, unlike technologies specific to military or security use, and brings complexity due to disagreements about its trajectory.
Geopolitics of AI includes understanding U.S.-China competition, new AI capabilities in cyber attacks and bioweapons, and how AI reshapes national power.
While AI competition is valid, viewing it as a race may limit strategic thinking and possibilities, emphasizing collaboration and engagement with diverse allies.
Researchers should be aware of dual-use potential in AI applications, engage with institutions sharing ethical principles, and aim to create a better future while controlling for misuse.