
AI diplomacy: Can 'peace tech' make the world less violent?

36m 0s


Transcription

6007 Words, 33478 Characters

In peace and peace tech, we actually have something that unites the political spectrum. The left loves the outcomes that these peace tech companies are trying to achieve. The right loves the machinery of capitalism and its efficiency. Welcome to Making Peace Visible, the podcast about peace, conflict and the media. I'm your host, Jamil Simon. In this episode, we ask if artificial intelligence can help prevent violent conflicts and make the world more peaceful. Our guest today is Brian Abrams, a venture capitalist working in the new field of peace tech. The idea of someone who is determined to make money from peace at this particular moment in history is, to me, a delightful and refreshing concept. It's in sharp contrast to the defunding of USAID's peacebuilding program, the evisceration of the US Institute of Peace, and the changing of the name of the Department of Defense to the Department of War. However, if you think about it from a meta level, it should not surprise anyone that peace could be profitable. Countries at peace are far more prosperous than countries in conflict. Peace is great for business in general. War may be good for a few businesses, but war is bad for the broader economy. Or, looking at it another way, peace is great for everyone who isn't in the war business, which is most of us. Given that war and violence cost the world $19 trillion a year, the world would reap a massive dividend by developing effective ways to reduce war and violence. Brian Abrams is a successful venture capitalist who's made money investing in several tech startups. He has a very strong track record; a couple of quick highlights: he's managed a billion dollars in assets and has sat on the board of directors of 15 companies. But recently, he pivoted to investing in what he calls peace tech. He launched a firm called Bee Ventures, and he plans to invest in technology that can be used to facilitate peace in one way or another. 
It's a fascinating journey, and I want to hear more about it. Well, thanks so much for having me, Jamil, I'm thrilled to be here. Brian, you and I are both working on peace at the moment, but we come to it from very different backgrounds and different approaches. My background is media production, international development, and peacebuilding, mostly in the nonprofit sector. You come from a venture capital background and tech background. Can you give me and our audience a peek into the venture capital world? What are the values and ambitions that drive people in that sector? I think at its best, venture capital and startups approach problems by asking a question, not necessarily presuming the answer, and then running experiments. In the case of peace tech, one of the themes we have is using artificial intelligence for negotiation, as an example. We're not presuming to know the answer, we're asking the question: could AI be used to better negotiate geopolitics, for example, or financial deals? I don't know. Let's see. Then we find several different startups, several different founders who have an idea, and they run an experiment. They try something. Maybe it works, maybe it doesn't, maybe part of it works, maybe part of it doesn't. Then they iterate, and they get that feedback, they discover what works, and they try it over and over and over again until the best solution bubbles up. One of the things we're most excited about with peace tech is applying this bottom-up approach of experimentation and discovery and iteration, something that just hadn't yet really been applied to the world of peacebuilding. Brian, how did you get started and build your career? It wasn't a straight line, but you learned from your mistakes. What are some tech innovations that you've helped launch in the past? You're right, it's a winding path. 
I love the old Yiddish saying, we plan and God laughs, so that's certainly been true of my life, my path. There was a chapter as a founder doing work in India, not terribly successfully, but I learned a lot, and then another chapter as an investor investing in companies in Israel. One interesting example of finding an early technology trend, investing in it, and watching it be successful was a company, really the first in the world, that was doing mobile cybersecurity. At the time, we all thought our smartphones were safe. We said, "Well, I have an iPhone, it can't be hacked." There was a smart, young technologist, a hacker, in Israel, who said, "I beg to differ. In fact, I've actually hacked your phone right now." I said, "Okay, you've got my attention. Where does this go?" He said, "Well, the good news is I'm moving from offense to defense, and I want to build software that protects our smartphones, so we don't get hacked the minute we walk into a coffee shop." Well, that's really interesting. We invested, I think, when the valuation was in the single digits, and it ultimately was acquired for $550 million. I've seen these trends. I look at the arc that cybersecurity followed over the last 15 years. I look at the arc that India followed over the last 25 years, since I first went there. The arc that Israel, as the quote-unquote "start-up nation," followed. The innovation is tremendous, and these are decade or multi-decade long cycles. I look at peace tech today, and we'll talk about it in a minute, as that next big cycle. So what was the spark that got you interested in developing peace tech? When I was investing in Israel, there was one particular company. 
Again, it was a cybersecurity company doing cloud data security back when the cloud wasn't even really a thing, and we really had no idea how to secure that data, but really smart, young people, they built this company, and then they got what is the dream of every founder team, and that's an offer to be acquired by a big multinational company. They were based a little bit outside of Tel Aviv, and they said, "Okay, we would like to sell. This is the moment we've been waiting for. It'll be a life changer for us, but we have a condition." And that is, while we've been building this company in Tel Aviv, we've also hired a team of our colleagues, our co-workers, in Ramallah. We are Israelis, and we have Palestinian teammates, and we will not sell this company unless you promise to not only keep, but really embrace, our Palestinian counterparts in Ramallah. And I thought, "Wow, this is the seed of something much bigger." It took some years to go from there to what we're doing now with Bee Ventures and peace tech, but I was inspired by that, and that's what we're trying to do now. Well, that's a profound spark, and it's so valuable. I know it's a brand new field, but can you share examples where peace tech has been used in the past? Where has it succeeded? Where has it failed? It is early, but we're seeing really interesting glimmers of what's possible. So let me tell you about a company called Anadir Horizon. It was founded by a former Harvard researcher who was running a program at Harvard called the Red Horizon program, essentially war gaming. They would spend a whole year planning a dozen different scenarios, and then over three days, they'd bring in very senior-level people, admirals, generals, diplomats, ambassadors, et cetera, and they would game out some conflict, for example, Ukraine. In January of 2022, they did, in fact, game out the question of what would happen, and what's the probability of Russia invading Ukraine. 
And by the way, everybody thought that probability was low, but the output of their simulation was actually pretty high. The next month, Russia actually invades Ukraine. So this Harvard researcher said it was a wake-up call to him: we need to do this not once a year, but once a day, and we need to run not 10 scenarios, but 10,000 scenarios. But he said the good news is we have this new technology called AI, and we have the ability to do that continuously. And so, this founder said, I'm going to leave Harvard to go create a startup that uses AI to try to prevent the next big war, like the one in Ukraine. And there are already some proof points there. I'll share one more with you, and I know you just did an episode recently on Venezuela. So this founder and I were having a conversation in early December 2025, and he said, Brian, have you been paying attention to what's happening in Venezuela? And I said, well, a bit, I know the US warships have been bombing or torpedoing or sinking various speedboats coming out of Venezuela, and he said, yeah, have you read anything about the likelihood of a land incursion, some sort of attack on land in Venezuela? I said, well, there are some markets, Kalshi, Polymarket, that actually have probabilities of this, people betting on it. He said, we ran it through our model, and our model says it's twice as likely to happen as people think it is. And sure enough, a month later, the US goes in and extracts Maduro. So again, these are early signals, but it shows that these concepts are working. So AI can actually, probabilistically, if not predictively, better assess the likelihood or risk of certain conflicts breaking out. And therefore, when these tools are more widely adopted, potentially prevent them. You know, speaking about Anadir, if they were trying to figure out what's going to happen in Iran, what would the AI do, and what would it generate for humans to use in making decisions? 
A big part of it is going from qualitative to quantitative. For example, if there is a big oil company that's trying to figure out what it is they want to do in the Middle East, and they're looking at Iran, wouldn't it be better for them if, instead of asking experts whether they think the regime in Iran is going to fall, they could instead look at an AI-powered model that says we think the probability is x percent, or we think the probabilities of A, B, and C are 23 percent, 42 percent, and 74 percent? Well, now they have actual numbers they can use to make a much better informed decision. And that can be true of Iran. We're now talking about Greenland and what's going to happen there. You know, there are all sorts of geopolitical situations around the world. Thailand and Cambodia is a very active situation. Most Westerners aren't looking at it. If our leaders, whether they're diplomats in government, people in the military, or private sector executives, could have quantitative, probabilistic information, I think it both helps them make better decisions and potentially affects the events that happen on the ground, i.e., prevents violent conflict. Well, where does the data come from to inform these tools? Yeah, in the case of Anadir, these are experts who have been studying violent conflict and war for decades, including one of the co-founders, who was very involved in the U.S.-Soviet nuclear détente in the '80s. So we're going back that far. But they're also bringing some really interesting new approaches. For example, psychologically profiling the leaders. So they're not just imagining a rational president of Russia or China or, for that matter, the United States; they are saying, let's understand the psychological profile of this leader. Let's understand what information is going to affect them, who that information comes from, and where that's going to carry the most weight. 
Even what time of day they might be making that decision, knowing that a decision made at 3 p.m. is very different from a decision made at 3 a.m. Those sorts of things, again, combined with AI, can now be replicated across not a handful of scenarios, but thousands and thousands of scenarios. And that can both help figure out what's most probable, and it can also look at edge cases that we, with our limited human brains, really just can't imagine. But AI can imagine the one-out-of-10,000 possibility. It can even figure out what the right sequencing would be: that if we do A, and then B, and then C. Exactly. So by quantifying all of this, by adding really thoughtful psychological profiles, by creating a map of all the different players and all the different levers of influence and all the different relationships among those players, it enables us to do something we could never do before without AI, and that is what creates all this possibility. What factors do you think will make peace tech economically viable? A big part of this, when we talk about peace being profitable, is that we're not trying to profiteer. We're not trying to extract money out of peace. What we're trying to do is utilize profits to catalyze peace. We're trying to say there is a model here, with startups and venture capital, that can do powerful things. And let me contrast it for a second with nonprofits. I think, first of all, there are a lot of nonprofits doing great work, and I have much admiration for them. Many times theirs is the best model for whatever it is they're doing. But nonprofits usually scale linearly when they grow. They might be at a million, and then they grow to 2 million, and then 3, 4, 5, et cetera. Startups usually scale exponentially. They go from 1 million to 2 million to 4 million to 8, 16, 32, and so on. 
And the thought here is that if we can create ventures that scale exponentially, they may be able both to be incredibly attractive investments and to be that much more relevant, powerful, and impactful in terms of their broader goals, in this case, peace-positive goals. So the company we were talking about earlier, Anadir Horizon, the AI war-gaming company, or, as they would describe it, AI for crisis simulation: they should be able to make lots of money selling that into the private sector, helping companies and corporations see ahead, see around corners, defend against threats, go after opportunities. They should make a lot of money. That revenue enables them to raise funding. That funding allows them to invest ever more in their technology, which then allows them to make ever better technology to apply in the field of geopolitics, to try to prevent a war over Taiwan, or the next October 7th, or wars like what we're seeing in Sudan. That is a mutually beneficial and complementary model that, again, hasn't really been tried at scale, but, as we envision it, could be the thing that moves the world. I think of the Archimedes quote, "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." What if AI and startup-level innovation is the lever and venture capital is the fulcrum? Could that move the world just a little bit? What do you think will make peace tech politically viable? I'm really hopeful on this. I think it's something that unites. I think if you look at people on the left, there's a long history of activism, pro-peace, anti-war activism, et cetera. But if you look on the right, even far to the right, we have a president in President Trump who wants to win the Nobel Peace Prize. In the middle, people just want something that everybody could agree on for once. In peace, and more specifically peace tech, we actually have something that unites the political spectrum. 
The right loves the machinery of capitalism and its efficiency, so we can all agree on it. People we are investing in were at the U.S. Institute of Peace or at USAID. They left because of the disruption last year, and they're now creating startups going after those same problems, startups that potentially could be even more impactful. I think it's something that can unite people, and among those who are involved in peace tech today, it does unite people from all over the spectrum. Who are the intended customers, for Anadir, for example? Well, there are the obvious ones, the government customers, and I'm talking about friendly governments. They have guardrails that would prevent them from selling to any governments they believe would not use it for good purposes. But also corporations, and the example I use all the time, and this is a hypothetical, is: imagine Jensen Huang running Nvidia, with its $5 trillion market cap and its incredibly complicated competitive landscape, where it has its historic competitors in Intel and AMD. It has customers like Microsoft who might also be competing with it, and it has seemingly a new startup out of China popping up every other month. And now it's also made investments in some of those competitors. It is an incredibly complicated landscape, and relying on experts or reports is just not enough. But if you could go to Jensen Huang and say, "Hey, I've got an AI-powered model that can simulate your landscape and help you see around corners, to spot the threat that you don't even know is coming, or to see an opportunity to invest in a competitor like Intel, which they've done, or invest in a customer like OpenAI, which they've done," and probabilistically determine whether that's a good idea and what it might lead to, and all the second and third and fourth order effects of that, again, how much better of a decision could you make? How many millions or billions of dollars is that worth? 
And if I'm right, or if we're right, that helps make money for the company. That helps raise money for the company. That helps them build out the technology, which, again, could potentially prevent a war somewhere in the world someday. You said that you see peace tech and defense tech as two sides of the same coin. Can you say more about that? Yes, I think they're both going after, again, this huge $19 trillion problem of war. The defense tech companies, and we've seen this burgeoning connection between Silicon Valley and Washington, D.C. in the defense space, are focused on winning wars. And we look at peace tech as trying to solve the same problem of war, but not by winning it, but by preventing it in the first place, or by mitigating it, or by resolving it. Which has the biggest dividends. Yes, absolutely. Nobody wins in war, really. Some people think they win, but really everybody loses. So we look at peace tech as going after the same problem, but with a different north star. And our hope in building this ecosystem is to inspire that would-be entrepreneur who has a great idea and thinks, well, I guess I need to be a defense tech company and sell to the Pentagon. And when we can get a hold of them, we say, wait, wait, wait, there's another model. It's called peace tech. You can go do the same thing, take your same idea, but do it to save people's lives. And that is something that usually inspires them. And they realize they've got all the same opportunity, all the same tailwinds, but they can do it with a higher purpose. That's great. One of the other companies you've invested in is called Transcend AI. The founder is a woman who has an interesting backstory about having worked for the US Institute of Peace. Tell us a little more about what's driving her. Sure. The company's called Transcend AI. The founder is a woman named Ola. She has an incredible personal story. She was born in Lebanon. She emigrated to Canada. 
She was actually in Lebanon when the war between Lebanon and Israel broke out in 2006. She managed to escape back to Canada, then found her way to the US, and found her way to the US Institute of Peace. Her life mission, having been a refugee from war, is to work to prevent and mitigate war. She was doing that at the USIP. When that shut down, she said to herself, "Well, I still have the same mission. If I can't do it in government, I'm going to do it in the private sector." And she immediately created this startup, again, that's using AI to do many of the same things that she did, but do them much better, faster, and more efficiently, and then potentially to go even bigger than that, to become a platform able to accomplish the same objectives they've always been working toward, but with a private-sector level of efficiency. And that is just such a win-win. And the last thing I'll say is this: Transcend is an example of a company that we think could be a peace contractor. We all know about defense contractors, and they make billions and billions of dollars making weapons of war. We think there is an opportunity for peace contractors, or peace tech contractors in this case, who are helping the government do what it wants to do, do it more efficiently, and do it for the purpose of not getting into violent conflict in the first place. That is part of why we see this as not just a venture capital fund or startups, but the early stages of an ecosystem, both public sector and private sector. When we spoke before, you talked about the digital twin concept and about applying it to societies. Can you explain that a little bit? I've never heard that term before. Sure. Let me tell you the story of a company called CulturePulse, which has an AI digital twin that's in use in four conflict zones around the world. 
But before I even do that, I have to share the story of the founder, or the two co-founders, but one of them, Justin Lane, was both an academic researching conflict zones and the history of war and various conflicts around the world, and someone working on the ground in Northern Ireland. He had these incredible experiences where he could be in a pub on the Unionist side and then go over to the Republican side and put those people together in a room. And he said, "I'm seeing amazing things, but I'm not able to accomplish as much when I'm publishing a paper about it at a university. I need to get closer to the ground. If I try to do that as part of an NGO or a nonprofit, I may be able to accomplish some things, but there are always funding constraints and all these things that make it hard to do. If I were to create a startup, I think I could do that faster, better, and at greater scale." So he founded a company called CulturePulse, and what they do is create digital twins of either organizations or civil societies. In Armenia and Azerbaijan, as an example, leading up to the conflict they had a few years ago around the Nagorno-Karabakh region, which they had been in conflict over for decades, maybe centuries, they were able to see what, on average, Armenian civil society cared about and what Azeri civil society cared about. They were able to determine that in Azerbaijan, people were thinking about things like history and moral rights, the things that they felt entitled to by virtue of their long history and their culture, which essentially indicated they wanted to take back this region that they felt was theirs. By contrast, you look at the Armenian side, and they were caring much more, at this point in time, about economic factors: essentially, how do I provide for my family? How do I give them a better life? Much less about the factors that were at play on the Azerbaijani side. 
So, sure enough, when you look at both those things, it explains what happened, which is Azerbaijan being more aggressive and Armenia putting their hands up and saying, that's fine. You can have it. We're not going to fight you over this. Of course, there are more complicated factors at play here, but they were able to see, with these digital twins, what was previously invisible. That's great. Yes, and they can apply that all over the place, and, by doing that, iterate much, much faster, so that by the end of a day, you could go through a dozen, two dozen, three dozen iterations that might otherwise take years, and that is just potentially a game changer for conflict. Right. I mean, the whole thing is helping people find common ground, and if you have a tool that accelerates that process, then that would be a real game changer. You know, it's funny, almost all the peace builders that I've ever spoken to have cited that where things shift in a negotiation is at dinnertime, when they're drinking beer and talking and showing pictures of their grandchildren and learning about each other's families. That's where they discover that they have more in common than they thought they did. I don't know if AI can simulate that kind of thing, but it's definitely resonant with people who've been through the peacebuilding process. You're 100% right, AI can't drink a beer. It's not going to be able to sit down at dinner and find common ground in people's families and histories and all that, 100%. But it might say to the mediators, this negotiation will be well served by sending everybody out to dinner tonight, or serving alcohol, or whatever it might be that eases things and reduces friction, and then the human beings are the ones who do it. I think it's also really important to respect the peace builders on the ground. A lot of times, people have big ideas, let's say in Geneva, and the people on the ground say, well, that's nice. 
I'm glad that worked on a whiteboard, you know, in Switzerland, but over here in Sub-Saharan Africa, it just doesn't work. And I think at its best, peace tech equips human beings with tools to do their job better, and then it respects the fact that they may want to just throw those tools away and trust their instinct. And that's okay. The people on the ground are going to know what's best. But if they have new tools that help guide them, or maybe give them an idea, or maybe give them a probabilistic sense of which choice, A, B, or C, is going to work better, that incrementally makes them more likely to be successful. No, I think it does. I mean, I think accelerating the analysis of the wants and needs and positions of both sides, if you can shorten that process, that's good. Do you have any examples of where these tools have been used to change policy? It's a great question. I think it is a goal of some of the companies in peace tech to find ways to effect better policy. Let me give you a hypothetical example, or a dream example that I imagine. I would love to see AI be used to help decision makers make better decisions in the room, so to speak. We talked about tools like Anadir. I'd also love to see AI help get policy turned into law better and more efficiently. I look at the way proposals or bills move through Congress, and I think this is wildly inefficient. Well, what if they could use AI that takes the work that lobbyists do and says, "Okay, lobbyist so-and-so, I know you have a relationship with Senator X, but that's actually not the right place to start. Instead, why don't you start with Congressperson Y, and then go to Congressperson Z, and then go to Senator X, and then go back to Congressperson Y again? If you do that, that sequencing is much more likely to lead to success. Now your bill has a 62% chance of succeeding, as opposed to the 27% chance it had before." 
What might be possible then, especially for good legislation, in terms of getting it done better, more efficiently, and seeing patterns and possibilities that we human beings sometimes don't see? Again, it's the human beings who have to do it, but maybe they open up that app on their phone that tells them, "Here's the best way to go about it." It's more that we haven't accomplished it yet, so I can't point to something that has been done, but I can tell you what I would like to see done, what I believe will be done. And I think if we fast forward five or ten years, that's going to be the case. Yeah. I mean, there are some organizations, like Rebuild Congress, run by a gentleman named Bruce Patton, that's what they focus on, and that could be a tool that they could use. These predictive tools using AI are exciting, and they seem to have real potential to influence the way governments, NGOs, and multinational corporations make decisions. Are there any other kinds of applications of AI on the horizon that could support peace? There are. I mean, there are so many different use cases. We're finding ever more good-hearted, brilliant people who are coming up with new ideas all the time. One example of something we're seeing, a theme we're very interested in, is using AI for negotiation. I think of William Ury. What would be possible in negotiations if we had an AI co-pilot that could tell us, you're going in this one direction, but you're going to do better if you go in this other direction? We've got a 37% probability of success here; we'll bump it up to 72% if you go there. That could be incredibly powerful, whether it's in geopolitics, the equivalent of Camp David, or at an investment bank trying to get some multi-billion dollar acquisition done. We don't have the answer to that yet. I happened to be talking this morning to a startup that's working on this very problem, but we're early in this space. It's very alive for you. 
It's so alive. Every hour of every day, I'm talking to people who are working on ideas toward these goals. I don't know who's going to get it right, but I know there are a whole bunch of founders out there working on it, and they're trying things, and to the best of them we're able to allocate a little bit of capital to enable them to do more of that. This is a work in progress, but it is definitely in progress, and I think if we check back in a couple of years, we're going to have a whole bunch of stories on that topic. Well, we'll certainly do that, I'd love to do that. Are there tools that ordinary people can use, for instance, to suss out misinformation or communicate better with an adversary? Absolutely. Before starting this fund, I made a personal investment in a company called ODR.com. That stood for Online Dispute Resolution, and this founder, a world-class expert in mediation, had the idea to use AI and technology to essentially take the small claims court and put it on the smartphone. And the idea was, most people going to small claims court don't have time to do it. They're working nine to five. They can't take a day off and go to court. Meanwhile, the people they're suing, who are usually corporations, or the insurance companies paying for the lawsuits, don't want this either. So if you could take these cases and allow people to adjudicate them at night or on the weekend on their smartphone, it's better for the individuals, especially working class people. It is actually better for the companies, because if you're suing them for a thousand bucks, they're happy to pay you $700 just to make it go away, because it's not going to be worth an hour of their lawyer's time. Make it more efficient and get it done. And that company was acquired by the AAA, the American Arbitration Association. 
So that's an example where we're not resolving war on a geopolitical scale, but we are resolving disputes at a micro, everyday, human level. And that's also very important. That is peacebuilding as well. That certainly is. Well, this has been great and very refreshing to talk about. Are there any things that I didn't ask you that you'd like to share? The only thing I would add is, someone may be listening and saying, all right, come on, you're telling me someone's going to build an AI-driven app that's going to create world peace? Get real. And my response to them is, look, it's not that we think all of this is going to happen everywhere at once, but we think there's a possibility that it can happen one place, one time. And if there's a possibility that one conflict, one time, could be resolved that might not otherwise have been resolved, that is worth the effort. That is worth the effort. And this new approach is something new under the sun. It's something that was never possible before in human history. This technology that we have at our disposal today is a sea change. It's a generational development. And there are new people joining this community of ours every single day. We have a little WhatsApp group for peace tech. A year ago, we had 20 people in that group. Today we have 220 people. This time next year, I think we'll have 2,000 people, and then 20,000 people years from now. It is happening. It is a snowball. It is underway. And I think it is irreversible. So this is not something we're trying to do. It's something we're doing. It's something that is happening, and it is going to scale up. Your listeners are hearing about it probably earlier than most, but I would be willing to bet that if we could look ahead a couple of years, almost everyone will have heard of it. And they'll be seeing the results on the ground. 
I look forward to talking a couple of years from now and getting an update on what's been happening, because this is very, very exciting and positive. Thanks so much for joining us today. Thank you. You can keep up with Brian Abrams on LinkedIn at the link in the show notes. We also have links in the show notes to more information about the peace tech companies we discussed in this interview. We're curious how you feel about AI technology as it relates to the work of peace building. If you're listening on Spotify, you can leave a comment below the episode description. If you use Apple Podcasts, leave a rating or review. Or you can email our team at makingpeacevisible.org. If you found this episode thought-provoking, please share it with a friend. An easy way to do that is to take a screenshot of your podcast player on your phone and send it in a text. Making Peace Visible is produced by Andrew Moraskin; the associate producer is Faith McClure. And I'm Jamil Simon. Thanks so much for listening, and talk soon.

Key Points:

  1. Peace tech is an emerging field that applies technology, particularly AI and venture capital models, to prevent violent conflict, contrasting with traditional defense tech focused on winning wars.
  2. It unites diverse political interests by aligning the left's peace outcomes with the right's appreciation for capitalist efficiency and scalability.
  3. Examples include AI-driven crisis simulation (e.g., Anadyr Horizon) to predict conflicts and quantify risks, and ventures like Transcend AI, founded by a former refugee, using AI for peacebuilding.
  4. The economic model leverages profitable, scalable startups to fund peace initiatives, arguing that peace is broadly profitable and war is economically detrimental.
  5. Data sources combine expert historical analysis with innovative inputs like psychological profiling of leaders to enhance AI predictions.


FAQs

What is peace tech, and why does it unite the political spectrum?
Peace tech involves using technology, like AI, to prevent violent conflicts and promote peace. It unites the political spectrum because the left supports its peace-building outcomes, while the right appreciates its alignment with capitalist efficiency and innovation.

How can AI help predict and prevent conflicts?
AI can simulate thousands of conflict scenarios to assess risks probabilistically, helping predict and potentially prevent conflicts. For example, it has been used to model events like the Russia-Ukraine war, providing early warnings that traditional methods might miss.
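The "simulate thousands of scenarios to assess risks probabilistically" approach described above can be illustrated with a toy Monte Carlo sketch. Everything here is invented for illustration: the `simulate_scenario` drift model, the tension scale, and the thresholds are hypothetical and are not how any real peace tech product works.

```python
import random

def simulate_scenario(base_tension, rng):
    # Toy model: tension drifts randomly each month with a slight upward bias;
    # the run counts as a crisis if tension crosses the arbitrary threshold 1.0.
    tension = base_tension
    for _ in range(12):  # twelve simulated months
        tension += rng.uniform(-0.10, 0.15)
    return tension > 1.0

def escalation_risk(base_tension, n_runs=10_000, seed=42):
    # Estimate P(crisis) as the fraction of simulated runs that escalate.
    rng = random.Random(seed)
    hits = sum(simulate_scenario(base_tension, rng) for _ in range(n_runs))
    return hits / n_runs

risk = escalation_risk(0.5)
print(f"Estimated escalation risk: {risk:.1%}")
```

The point of the sketch is the shape of the method, not the numbers: run many randomized futures from the same starting conditions and report a probability rather than a yes/no prediction, so that higher starting tension yields a higher estimated risk.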

How do peace tech startups scale compared with traditional nonprofits?
Peace tech startups can scale exponentially through venture capital, unlike nonprofits that often grow linearly. This allows for greater impact and profitability, as seen with companies selling AI crisis simulation tools to the private sector, which then funds further peace-building efforts.

Who are the primary customers for peace tech companies?
Primary customers include friendly governments and corporations. For instance, companies use AI models to navigate complex competitive landscapes, while governments apply them for geopolitical risk assessment and conflict prevention.

How does peace tech differ from defense tech?
Peace tech aims to prevent or resolve conflicts, focusing on saving lives and reducing violence, while defense tech focuses on winning wars. Both address the $19 trillion annual cost of war, but peace tech offers a proactive, life-saving alternative.

What inspired Brian Abrams to pivot to peace tech?
He was inspired by an Israeli cybersecurity company that insisted on including Palestinian teammates in a sale, showing how business can foster peace. This sparked his interest in leveraging venture capital to support technology that facilitates peace.
