
20VC: Enterprises Will Not Adopt AI without Forward-Deployed Engineers | Who Wins the Data Labelling Race: How Does it Shake Out? | How Synthetic Data Threatens the Future of Human-Generated Data with Matt Fitzpatrick, CEO of Invisible Technologies

83m 48s


In the final episode of the "20VC" podcast in 2025, host Harry Stebbings talks about a charity event he took part in to raise funds for multiple sclerosis sufferers. The guest, Matt Fitzpatrick, CEO of Invisible Technologies, discusses the company's achievements and challenges in the world of AI. They explore the gap between model performance and adoption in enterprises, highlighting the complexities of internal AI builds and the need for clear metrics and outcomes. The conversation also touches on the role of CFOs in managing AI initiatives, emphasizing the importance of good data, clear milestones, and outcomes for successful implementation.

Transcription

18582 Words, 99973 Characters

This is 20VC with me, Harry Stebbings, and this is the last episode of 2025. Now, if you're wondering why I sound like Mick Jagger, no, it is not because I have been partying like a maniac and lost my voice over the Christmas break. It's because I have been walking four marathons in four days with my mother to raise money for multiple sclerosis sufferers. We've raised $50,000 in the last three days. I would love your support if you want to donate to MS sufferers, but that is why I sound like Mick Jagger. But to the show today: data is everything in the world of model performance. Turing, Mercor and today's guest, Invisible, are among the few who have reached several hundred million dollars in revenue. And as I said, I'm thrilled to be joined today by Matt Fitzpatrick, CEO of Invisible Technologies. Since joining as CEO in January 2025, he's achieved some incredible milestones. Most significantly, he's raised over $100 million for the company. This was an incredible show recorded in person in London and I cannot wait to hear your feedback.

But before we dive into the show today: are you drowning in AI tools? ChatGPT for writing, Notion for docs, Gmail for email, Slack for comms, and you're constantly copy-pasting between them, losing context and losing time. This is the AI productivity tax and it's killing your output. At 20VC, we're all about speed of execution, and Superhuman is the AI productivity suite that gives you superpowers everywhere you work. With the intelligence of Grammarly, Mail and Coda built in, you can get things done faster and collaborate seamlessly. Finally, AI that works where you work, however you work. Superhuman gets you going from day one with zero learning curve, and it's personalized to sound like you at your best, not like everyone else using generic AI.
Get AI that works where you work. Unlock your superhuman potential: learn more at superhuman.com/podcast, that's superhuman.com/podcast. And speaking of tools that give you an edge, that's exactly what AlphaSense does for decision making. As an investor, I'm always on the lookout for tools that really transform how I work, tools that don't just save time but fundamentally change how I uncover insights. That's exactly what AlphaSense does. With the acquisition of Tegus, AlphaSense is now the ultimate research platform built for professionals who need insights they can trust, fast. I've used Tegus before for company deep dives right here on the podcast. It's been an incredible resource for expert insights, but now, with AlphaSense leading the way, it combines those insights with premium content, top broker research and cutting-edge generative AI. The result: a platform that works like a supercharged junior analyst, delivering trusted insights and analysis on demand. AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed. It's faster, it's smarter, and it's built to give you the edge in every decision you make. 20VC listeners, don't miss your chance to try AlphaSense for free. Visit alphasense.com/20 to unlock your trial, that's alphasense.com/20.

And if AlphaSense helps you make smarter decisions, Daily Body Coach helps you build smarter habits. You know how so many founders and execs say they'll finally take care of their health once things slow down? Well, they never do. Running a business is a marathon made of high-intensity sprints, and taking care of yourself is what gets you through those times performing at your best, both professionally and personally. This is exactly where Daily Body Coach comes in.
Daily Body Coach is a complete, high-touch service for busy founders and executives, combining personalized nutrition and training with psychology-based coaching to help you not just follow a plan, but actually build the systems, habits and mindset to stay at the top of your game. Built by an exited founder and led by certified experts with master's and PhD-level credentials, Daily Body Coach is fully tailored to your life, whether you're traveling, dining out or in back-to-back meetings. You get daily accountability, data-driven insights from DEXA scans and blood work, and a highly certified team backing you. If you're serious about performing at your best physically and mentally, go to dailybodycoach.com/20vc, that's dailybodycoach.com/20vc, and take the next step. You have now arrived at your destination.

Matt, I am so excited for this, dude. I think Invisible is one of the most incredible, but also, I'm sorry to say this, under-discussed businesses when I look at the incredible achievements that you've had over the last years. So thank you so much for joining me.

Thank you for having me. I really enjoy the show.

Can you just talk to me about how a 10-year McKinsey veteran becomes CEO of one of the fastest-growing data companies in tech? How does that transition happen?

I would say my McKinsey journey was non-traditional. I spent 12 years there. I was a senior partner, and I led a group called QuantumBlack Labs, which is the firm's global tech development group. About 10 years ago, McKinsey actually started hiring engineers, and I was a big part of this. When I started, there were about 100 engineers total in the firm. By the time I left, we had 7,000. I oversaw about a fifth of that group, and all the application development, all of the data warehouse infrastructure, and all of the GenAI builds globally. So that journey was really interesting.
And over the course of it, I spent a fair amount of my time competing with other large enterprise AI businesses. And I got to know the founder, Francis, really well, about three or four years ago now. We actually met in a totally non-work-related, kind of social context. It was basically a forum called Dialog, I don't know if you've heard of it, where you basically talk about different ideas. We bonded over that. It's not in Hawaii, though; it's held in many different locations. I really enjoy it because you actually don't talk about work at all. You're not allowed to talk about your job. You spend time talking about history, politics, technology.

What does everyone from San Francisco do? They don't talk about work for two days? That's slightly different.

Exactly, exactly. But I actually think it's one of the few events I've been to where people are not talking their own book. They're not trying to convince you of anything. And I've made a bunch of really good adult friendships out of that. And so Francis and I got to know each other from that four years ago. And there had been another CEO in the two years before I joined, who was actually based in Australia, interestingly. And so when the business got to a certain scale, it was just time to have a US-based CEO that could help take the business to the next level. And Francis approached me and pretty directly said, "Do you want to be our next CEO?" That was kind of what happened.

Was it a no-brainer?

Look, I think when you walk away from a really stable job that you really enjoy, that's always difficult. The sliver of McKinsey that I was doing, I found to be one of the most intellectual day-to-day jobs ever. I was working with all the Fortune 1000 on every different AI topic daily.
And particularly in the early machine learning days, kind of 10 years ago, I think we built some really interesting stuff. But yeah, I think it was kind of a no-brainer in some ways. Because when you think about it, this is the most interesting time to run a company on a topic that has probably existed in our lifetime, maybe since the 2000s. To run a company in AI right now is fascinating: the rate at which you can build, the people you can recruit, the interest of customers in this topic. And so I felt like I'd spent 10 years learning one topic, and now I had a chance to run a business and build it the way I wanted to build it on that topic. And that's just something you can't pass up. And even though, you know, I walked away from a fair amount, I'm much more excited about building something over the next two decades out of this.

When we think about decision-making frameworks, I always have one, which is: find someone who you respect and admire. So for me, it's Pat Grady, who's the head of growth at Sequoia. He's a great father, investor and husband, three things that I care about. And whenever I have a tough decision, I'm like, what would Pat do? And most of the time I get to the answer by asking that question in that framework. If I were to ask you, what do you ask yourself? How do you find direction when struggling with a decision?

I'm not a particularly materialistic person. You know, when I was coming out of college, for example, everyone was focused on going into large finance jobs, which at that time, pre-financial crisis obviously, was where a lot of people went. And a lot of what I think about is doing work day to day that I really enjoy, with people I really enjoy, and then building something. And I really did enjoy the decade I spent building in McKinsey.
I think that was an incredibly interesting experience, to stand up something of that scale within an existing institution. And then I read a ton about everything from military history to current entrepreneurs to enterprise executives I really admire. And I have a small group of people whose opinions I ask pretty regularly. And probably the most telling piece of advice: my girlfriend and my main mentor, both of them, when I asked, within two minutes were like, "Absolutely, do this." My main mentor is a guy named Smash Kana, who had been a senior partner at McKinsey for a long time and is on the board of a whole variety of different companies today. And I remember we got lunch, I walked through the opportunity, and I said, "Listen, it's a big risk." And he goes, "The only risk is if you don't take this, and the amount of regret you'll have for not giving it a go."

I totally agree with that one. I was once given a piece of advice: whatever you think you should do, hold that close, and then let your girlfriend tell you what you should do. And that's why you still have a relationship.

That's a great piece of advice.

That was from someone who's been married for 40 years, and so it's worked well for him. We were chatting before, and I said, "Where do we have to go?" I always think that the best conversations are led by passion. The first thing that you said was, "There's a gap, or a chasm, between model performance and adoption." When we break that down, can you explain to me what you meant by that and how we see that in action?

Yeah, and let me set the context. I'll go into more detail later, but Invisible is an interesting business in that we both train all the large language models with reinforcement learning and human feedback, and we are, at the core, a modular software platform where, in enterprise contexts, we deploy all different enterprise use cases.
And I think the cognitive dissonance that has occurred in the last couple of years is that model performance has increased exponentially. I don't think anyone would doubt that: if you look at all the public benchmarks, models have increased 40 to 60% in performance over the last two years. And consumer adoption has also been exponential; KPMG just released data showing that 60% of consumers use GenAI weekly now. But the enterprise has not. In the enterprise, MIT just released this report that 5% of GenAI deployments are working in any form. You've seen Gartner saying 40% of enterprise projects will likely be canceled by 2027. And I think the reason for that is that deployment in the enterprise is a lot more than just the models themselves. It's the data infrastructure to support those models. It's the redesign of workflows. It's the process of figuring out which operational leader takes accountability for it. And most importantly, it's trust. It's observability. All the things I spent a decade building, things like credit models in banking. In those cases, you need to go through model risk management, testing, training, validation. And so I think that whole process is in the first inning in the enterprise. I think it's going to take a decade, not two years. And the core mission we think a lot about is that the evolution of AI deployment will follow what the model labs have done for the last couple of years: you'll see banks and healthcare firms start to do the same sort of testing and evaluation over this period, and then the rest of the enterprise over the next five, six years after that. That's the journey that we're focused on.

I was speaking at one of the largest banks in the world. It's an absolute joke that they get a university dropout like me to speak at one of the largest banks in the world. But I left and I messaged the team and I just said, oh my god, they're toast.
And they're toast because I asked about this amazing tool from a startup, and the CTO laughed at me. He was like, dude, there's no way that we can ever adopt your off-the-shelf search engine optimization for LLMs tool, because of data, because of security, because of permissions. And I was like, wow, everything that you just said there, I listened to.

Yeah.

But that was once you got in the door. Are enterprises even open for business? You see Goldman Sachs developing a huge amount of their own tools. Are they open for AI business?

Yeah, it's a great question. I think it depends a bit on the sector. There are sectors like banking that are very focused on building this internally. I think that is a reality.

Do you think the internal build will work for them?

It's interesting. If you look at the MIT report, the one I mentioned that says 5% of models are making it to production right now, they actually cite a stat that externally driven builds are 2x as effective as internal team builds. I actually think there's an interesting 10-year pattern on this, which is: 10 years ago, everyone bought software. Your tech team did not try and build anything. You bought, often way too many apps, you bought 15 different apps, and that was what the technology team did. Then, with the advent of cloud, you started to have a world where the technology function starts to think about building things; maybe they started to have some custom applications that wrapped around that. I think GenAI has 5x'd that. We're now in a world where an internal team is given this enormous budget and told, kind of, go have at it. And I think that's complicated, because when you hire a vendor of any kind, you're pretty disciplined about what are you delivering, on what timeline, what's the ROI of it, what are the milestones.
And I don't think that that discipline exists in the same way in internal builds. I also think that the talent levels in the internal teams are challenging.

And so when you say the internal team builds are challenging, there are some things that you can't say, but I can. The perception from external, or from general tech crowds, is the internal teams for, I don't know, you name your boring large enterprise, are just really low quality. You're not getting the top-tier AI engineers. You're not getting top-tier devs. Is that true?

Look, I think the amount of talent that knows how to do this well is not large. And that finite group mostly works in AI startups of various forms, right? And large tech companies. So I do think there's real risk to the process of figuring it out from first principles in enterprises. And I think that's part of the cycle we're going through right now: a lot of internal groups have gone through the process of saying, we must do this all internally. But the reality is, if you think about this as an open-architecture ecosystem where you're going to adopt things like MCP or all the new voice agents that come out, you actually want a modular open architecture where you can use all the best tech available and figure out how to link it together. And I think the desire to shape that all internally has been challenged. I'll give you one of the more interesting examples I can discuss. I was talking to an e-commerce retailer that had built an agent to handle the returns process, and they spent 25 million bucks building this agent. And at the end of it, I said, well, how did you define whether this agent worked or not? This was after I'd met them, after they built it. And they were like, well, we built our own eval tool, it's not a joke, and we basically analyzed a mix of speed of call resolution and sentiment.
The problem with that is: what if the agent hallucinates and says, here's $2 million? That actually gets resolved quickly, and the person's happy. And so they built this entire system from first principles, and what ended up happening is, a couple of months later, they shut it down and moved back to a deterministic flow. And that's not surprising to me at all. So I do think that's a little bit of the adoption curve we're in. Over the next two years, you're going to see the CFO function put different guardrails on how this stuff is built and say, what is the ROI? What are you investing in? What's the metric? What's the return? And that will change the adoption curve. But right now, there have been a lot of science projects. I think that is the reality.

Okay. We have hundreds of thousands of listeners and many of them are CEOs. If you are a CEO thinking about your CFO being equipped to buy and to manage in this new environment, what should they be thinking about? And do we have the right CFO talent pool to manage this new environment?

Yeah. So I think one misconception is that that leader has to be highly technical to make that decision, and I would actually argue they don't at all. They just need the same muscle memory they've used in the past, which would be: what do you need to get a GenAI initiative working? You need good data that you can work off of for that specific initiative, clear milestones and outputs, and clear line ownership of the initiative. And then, probably most importantly, you want to actually anchor it in milestones and outcomes, where you pay as it works. I think the other interesting context for a lot of this is what I would call the Accenture paradigm of the last 20 years.
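The returns-agent story is essentially a lesson in eval design: speed and sentiment alone would reward an agent that hallucinates a giant refund. A minimal sketch of the fix, where every name, metric, and threshold is an illustrative assumption rather than the retailer's or Invisible's actual tooling:

```python
# Hypothetical sketch: score an agent episode on business-rule compliance,
# not just on "soft" metrics like resolution speed and sentiment.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Episode:
    turns: int            # turns to resolution (lower = faster)
    sentiment: float      # 0.0 (angry) .. 1.0 (happy)
    refund_issued: float  # dollars the agent committed to

REFUND_CAP = 200.0        # assumed business rule for a returns agent

def evaluate(ep: Episode) -> dict:
    # Fast + happy looks great on the naive metric...
    naive_score = ep.sentiment / max(ep.turns, 1)
    # ...but a policy check catches the "here's $2 million" failure mode.
    compliant = ep.refund_issued <= REFUND_CAP
    return {"naive_score": round(naive_score, 3), "compliant": compliant}

good = evaluate(Episode(turns=3, sentiment=0.9, refund_issued=50.0))
bad = evaluate(Episode(turns=1, sentiment=1.0, refund_issued=2_000_000.0))
assert good["compliant"] and not bad["compliant"]
```

The point is not this specific check, but that hard business-rule compliance has to sit alongside the soft metrics in any agent eval, or the eval can be gamed by exactly the failure Matt describes.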
Which is, if you think about the wrapper that's been around software for the last 20 years: our founder, Francis Pedraza, had the founding principle of Invisible, which was, if there's an app for everything, how come nothing works? And it's an interesting concept, right? Because what happened is you bought 50 apps, you had Accenture come in, and you paid them $200 million over two years to try and layer them all together. And often you ended up a couple of years in with no working data and no linkages between them. And those layers of sediment have been how the tech paradigm worked in the enterprise. I think what's different now is, if you're thinking about a specific GenAI initiative like a contact center, let's say, you don't need to operate that way. You can think about what operational metrics you want in your contact center: call resolution, call performance, cost per call, routing logic. You can then look at both internal teams and a set of vendors who will deliver those metrics and make an evaluation. And if the vendor doesn't work, you fire them. And I think there's a very clear way to get ROI in this, which is: figure out the list of three to four things that move the needle for your business. Focus on those three to four. Don't spend money on a thousand science projects. Take your best four operational leaders and put them on those four things. Don't locate it in the tech function. That's the main advice I give people: your GenAI initiatives should be led by the business. That could be your head of call centers, that could be your head of operations, but each of those people, with clear operational KPIs, will get this stuff working. And there are a bunch of companies that have, but it's just a very different approach to building GenAI.

It's really interesting.
You said don't invest in a bunch of science projects, do three to four initiatives. Okay, let's do three to four initiatives. Again, let's put on that CEO hat. Contact center: it's just the big one that is homogenous across everything. Man, there are so many players in the contact center space. I'm a CEO. I'm not a Silicon Valley guy. How am I meant to understand whether we go for Sierra or Decagon or Zendesk of old or Intercom or any of the other players that we've seen in the space? How do you advise the biggest CEOs on buying in a wave of new innovation?

I think this is the other big challenge of GenAI adoption: you're an average CTO or COO, you've got 250 vendors a week pitching you, and all of them sound pretty similar. In fact, I was with a customer yesterday who literally started the meeting by saying, how are you different from the other 250 people that have pitched me this week? So this is the dynamic: we have an oversaturation of companies that all sound relatively similar, all selling agents. And to make your question even more pointed, a lot of them don't work. You've got a fair number of enterprise agent companies where, for example, Salesforce's AI research team released a report showing that if you test a lot of the out-of-the-box agents on single-turn and multi-turn workflows, they're about 58% accurate on single-turn and 33% accurate on multi-turn workflows, which means they don't really work. And so you've got this challenge of 250 companies a week pitching you. You don't really know how to select, and you're worried you're going to pick someone that's actually a charlatan and it won't work. And the more you have a market where there's a lot of excitement, the more you do have that risk. So the simplest advice I give, and by the way, this is how we "sell," quote unquote, is start with proof of concepts. Start with what we call solution sprints. Don't pay a dollar until you prove the tech works.
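One hedged way to read the 58% single-turn versus 33% multi-turn figures: if each turn in a chained workflow succeeds at roughly the single-turn rate, and you (simplistically) assume the turns are independent, success compounds geometrically. A back-of-envelope sketch, where the independence assumption is mine, not the report's:

```python
# Back-of-envelope: if each turn succeeds independently at the reported
# single-turn rate, multi-turn success decays geometrically with depth.
# The independence assumption is illustrative, not from the report.
single_turn = 0.58

for n in range(1, 5):
    print(f"{n} chained turns -> success rate {single_turn ** n:.2f}")
```

Two chained turns gives 0.58 squared, about 0.34, close to the reported 33%, which is one intuition for why long agentic workflows degrade much faster than single-shot tasks.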
So we don't actually sell anything. We meet a customer and we say, we will do it for free for eight weeks and prove to you the tech works. And that's a very simple way: if your tech works, you'll show it.

That seems an expensive way to do business. It isn't?

It's not. Let me give an example of how one of our deployments works, because, fair enough, if the answer were that it takes you two years to build anything, that would be a problem. But I'll give you an example. Our AI software platform is effectively five modular components: Neuron, which is our data platform that brings together structured and unstructured data; Axon, which is our AI agent builder; Atomic, which is effectively a process builder where we can build any custom software workflow; then Meridian, our expert marketplace, where we have 1.3 million experts a year, on any topic you can imagine, that we bring into those workflows; and Synapse, which is our evaluation platform. Now, we can take those five things and configure them to almost any different enterprise context. Just as an example, we serve food and beverage, public sector, asset management, agriculture, sports, oil and gas, a whole host of different sectors, using that same modular architecture. And we end up scaling pretty materially once we show the tech works. We're working with a company called Lifespan MD, which is a concierge medicine business across the US and internationally. What we're doing for them is building an entire tech backbone. They have an enormous amount of fragmented data across EHRs, CRM, ERP systems, notes, everything else; all of their data is in a pretty fragmented format. So we're using Neuron to bring all that data together, and we do that very, very fast: Accenture would take two years, we can usually do it in two to three months. Then, on the back of that, we're building a lot of different intelligence and reporting.
So they can look at things like patient journeys over time, labs, genomics data, and wearables, like how much a patient uses the Oura Ring or anything else like that, so they have a lot of detail on what any patient is doing at one time. And then on top of that, we layer the ability to interrogate it and ask lots of different questions, like: let me look at who's used peptides, males between 36 and 50, and what have the results been. So we're using Axon to build all that, and then we fine-tune the model to do that. And then, on top of that, we also build lots of specific custom agents for things like scheduling. What you get at the end of that is a transformed, tech-enabled business with all of those different components. Now, that does take us a little while to stand up. But once it's there, it's effectively hyper-personalized software. And that is my view on where this whole industry goes: you move from out-of-the-box SaaS to much more hyper-personalization using the specific data of an individual customer. And that is what we do.

Do you think you can work with enterprises today on GenAI and AI implementation without an intense forward-deployed engineer mechanism?

I don't think you can. We've doubled down: a huge part of what we do is forward-deployed engineers. We now have eight offices in eight cities, 450 people fully focused on forward-deployed engineering. And I can tell you from a decade of my prior life, you just cannot do this with out-of-the-box SaaS. It does not work.

What are the economics of FDEs? Obviously Palantir has made it the most sexy thing ever. I love the way tech crowds work: we just get super excited by an acronym and say this is the coolest thing. But what are the economics like?

Well, one thing I'll say is forward-deployed engineering has come to mean a lot of different things.
So a lot of forward-deployed engineering across the market, I think, is more like solutions engineering, where people kind of answer your questions and show up in your office. Forward-deployed engineering done well is executing a very specific workflow build: you're effectively configuring a set of core platforms to build something hyper-specific for that customer. And it depends on how good your platform is, because, for example, you could argue Accenture is forward-deployed engineering, right? But that build may take three years. In our case, I think we've built modularity and built a lot of the software development workflows into what we do, and so usually our forward-deployed engineering motions are about three months. We will come on board, customize everything to the hyper-specific way a customer wants it, and then build something on that basis that works. And it does require ongoing fine-tuning. That's the other big difference that people should acknowledge: you can't fine-tune a model in an enterprise context and just leave it for four years and hope it continues to work. I could give you 100 examples, but take healthcare: GLP-1s launch, and you do need to fine-tune the model for the new context of the market. So we do view it that way.

And forgive me on this: do they pay additional for FDEs to come? Do you pay additional in terms of ongoing maintenance? Just on the economics of it.

For many of our competitors, they do charge. We do not charge anything for FDEs.

Why not?

I think it goes back to my general premise that the best way to differentiate in this market is to prove that your tech works. And so the way that we do this is we say: you will pay when the software is up and running. And we're able to do a lot with small, one-to-two-person FDE teams.
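The GLP-1 point, that a fine-tuned enterprise model drifts as the world changes, is often operationalized as a scheduled re-evaluation against a frozen test set, with a retraining trigger. A minimal sketch, where the accuracy floor and the claim labels are hypothetical, not Invisible's actual process:

```python
# Hypothetical drift monitor: rerun a frozen eval set on a schedule and
# flag when accuracy falls below a retraining threshold.
ACCURACY_FLOOR = 0.90  # assumed acceptance bar, set per use case

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_refinetune(predictions, labels) -> bool:
    # True when the deployed model has drifted below the bar.
    return accuracy(predictions, labels) < ACCURACY_FLOOR

# Toy claims-processing eval set with illustrative labels.
labels = ["approve", "deny", "approve", "deny"]
fresh  = ["approve", "deny", "approve", "deny"]  # day one: 4/4 correct
stale  = ["approve", "deny", "deny", "deny"]     # after the market shifts: 3/4
assert not needs_refinetune(fresh, labels)
assert needs_refinetune(stale, labels)
```

The design choice worth noting is that the eval set stays fixed while the world moves, so a falling score isolates model drift rather than eval drift.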
And so once that's stood up and running, then we do have ongoing software revenue. I think the paradigm we're evolving from is that, over the last 20 years, you had the system-of-record layers where a lot of the value sat, and what we're building is hyper-personalized system-of-agility layers, kind of what sits on top of that. I think the Accenture paradigm is what people are afraid of, and it's very hard to convince somebody to pay time and materials until it gets working. And so I spend less on sellers and more on forward-deployed engineers. That's my simple math.

I always think the biggest mistake people make is they don't put on the hat of their customer.

Yeah.

I think the reason the show's been successful is because I put on the hat of different customers. A lot of the customers that we have are startup founders who create amazing products. And everyone wants to sell into enterprise; that's where the money is. If I'm a startup founder thinking, huh, do we need FDEs? How do we do FDEs? How do we move into an FDE model? What would you say to them that they should know if they're thinking about starting that model, or potentially needing that model, knowing all that you know?

I think it depends a lot on the nature of the business and what you're trying to build. If you're trying to build a knowledge management system of public filings for finance, for example, you don't need FDEs, because what you're building there is a repository of information that people can access. You have similar things in healthcare. If you're trying to change workflows, you do need FDEs. That's the simple paradigm difference in my mind: if you're building something where the hardest part is getting adoption and workflow embedding, and you need to actually change the way a company works, then yes, forward-deployed engineers are the only way to do it. It's interesting.
There aren't that many folks that have expertise doing that, so it's a hard thing to train and learn, but I do think it is the only way to get the enterprise working.

You've said several times, hey, don't pay until you prove that it works. And you said earlier, pay as it works. That's not the SaaS business that we've been trained on. I'm a SaaS investor. How does the pricing model of the future look in this very new environment?

Let me set it up for a second. An interesting thing is to look at the economics of SaaS in the enterprise five to 10 years ago. Look at any large public enterprise software business, and then look at how much of their revenue is actually services. I think you could argue that out-of-the-box software has always been a lie to some degree. It's a weird thing to say, but they always had a ton of configuration and just dressed it up. I think SaaS was even more challenging than that, because with the unit economics of SaaS, you're often selling at a much smaller cost per customer. The SaaS business that worked was actually about selling something where the out-of-the-box setup was quick enough that you could make it work with the sales team, where you didn't have to do lots of configuration, because the minute you had to bring in FDEs in a SaaS context, your economics broke instantly. And on the enterprise side, the way people made it work is why Accenture grew so much, why Cognizant and TCS grew so much. I'll give an example: take the insurtechs, right?
Every one of the major insurtechs, like a Duck Creek — what they have is a set of core data schemas, a series of analytical logic, and a front end, and the ones that did really well had momentum and push from the SIs that got them going. And so their economics were geared by having somebody else do all the services around what you did, and then you got something standing up at the end that worked. I think the challenge with gen AI is that motion doesn't really work, because what ends up being built at the end of the day is something that is hyper-specific to that customer. Like, if you actually think about the nature of fine-tuning an LLM or creating a knowledge management system, it's not a box. It's not. It is something that uses a lot of different consistent tooling, but it has to be customized. And so the way we do that is we stand that up, we get it working, and at the end of it — usually two to three months in — the payment happens when we pass user acceptance testing and validation and it works. And here's the other thing I'll say: we use SaaS as a paradigm because that's how software has worked. But machine learning has been around the enterprise world — I was building machine learning models 10 years ago. That's always been a motion that looked like this. So what's happening now is we're starting to realize that the gen AI adoption paradigm in the enterprise works the same way that ML did. When we look at the different products that we have today, the expert platform is one I think that gets a lot of attention. How much of the business today is the expert platform? I find companies are lumped into categories — it's easier. And you have your Mercors, your Surges, your Invisibles, and you all get put in this, like, "just talent marketplaces" bucket. And no one wants to be a talent marketplace, it seems. And I'm like, how much of your revenue is the talent marketplace, and why does no one want to be a talent marketplace?
I actually think the AI training space has many different players that have many different business models within it. There's four to five, but actually they're all quite different. I think we're much more of an AI training platform than just a talent marketplace, meaning we have 1.3 million experts that come through the marketplace, but a lot of the expertise we built over the last 10 years is the ability to — here's the simplistic question I think AI training asks: you have to be able to source any expert in the world on 24 hours' notice. You have to be able to source a PhD in astrophysics from Oxford, put them into a digital assembly line, and four days later generate perfect, statistically validated data that will be compared head to head with somebody else's data, and make sure that it is perfect at the end. That is an incredibly difficult thing to do. And so actually, a lot of what I saw when I took over Invisible was that that motion was incredibly applicable to the next phase of the enterprise as well — the fine-tuning motions, the training, the ability to statistically validate for an enterprise use case like claims processing. It's the same motion. Like, I actually think AI training will be used next in banking and healthcare, and then after that in many other different enterprise contexts. And so the historical business I took over in '24 was pretty materially weighted to the AI training side of the house, but I came in with a thesis that enterprise would be a huge source of growth. And as you see next year evolve — you know, I think we've confirmed 12 enterprise deals in the last 45 days, so we see pretty good momentum on that side of the business — I think we will evolve to doing both. I think the five core platforms we have allow us to serve a whole host of different end markets. And I do think that's very different from the other AI training players you mentioned.
I think we're the only player that spans that broad-based view in the same way. On the talent marketplace side, how much of the business is that today then? I won't say an exact number, but it was a pretty material percentage of 2024. Okay. Gotcha. So it's a pretty material percentage. The one thing that's also striking is the concentration of revenue in a couple of core players. When you look at other providers, it's like two players make up more than 50% of revenues for pretty much every provider. Is that the same for you? And how do you think about what that revenue makeup will be, given the enterprise diversification that you're talking about? Yeah. I do think this is a space where there are not that many players that are actually building models, so by definition the whole space has concentration. I would not disagree with that. I do think that's one of the really interesting things for us on the enterprise side: we have materially more diversification now in the number of customers we serve, across a whole different range of topics. I also think you're seeing more kind of early-stage model builders as well that are building on hyper-specific topics. And so that's the other part of where we see expansion in the total customer base. When you come to negotiations with a client, given the revenue concentration, how do you play that staring contest? Because essentially they go, we know that we are one of your core customers and we will squeeze you on price. And you go, I know I'm one of your core data providers, I will stand firm. How do you handle that negotiation? Because it is a staring contest of sorts. I think people are willing to pay for good data. That's my simple framework.
If you think about the importance of these models, if you think about the cost of compute — which is actually a huge chunk of the cost base — if you think about how one week of bad data burns a lot of compute: I think the reason it's been the same four to five players in this market for a couple of years now is that it's really hard to do well. And so people are willing to pay for good data. And so I think we have a very collaborative dynamic with all of our customers on that front. I think that when you provide a service that's helpful, people are willing to pay for it. And if you provide a service that doesn't work, people don't pay for it. And so the interesting thing I would say on that front is the discussion topics anchor around, again, proven value. So we'll get a topic that will come in — a multimodal audio model, for example — and we'll go head to head with somebody on that topic that week, and at the end of it they want to release. And so if you win and your data is way better, people are willing to pay for that. I had a chat last night with a board member of another company in the space. And he said two things that really struck me. He said, I'm just drastically shocked at the lack of price sensitivity of the core customers. They're willing to pay pretty much anything. Is that the case, or is that a bit of an exaggeration? I think that's an exaggeration. I think if you think about classic economics, people are willing to pay a fair price for good data. And so I don't think we operate in a model of trying to charge anything unreasonable. I think there's actually fairly standard price balance across all the players here. Is data commoditized, when I think about pricing power? I'm a massive fan of Hamilton Helmer's Seven Powers — it's amazing. When you think about pricing premiums, you get that through not being a commodity, through owning supply of a rare asset. Is there commoditization of data?
And are we kind of in a race to the bottom on the pricing of that data? Or do you own the supply of, say, that workflow data for surgeons in Oklahoma? Yeah. So let me take that. I'll actually start with the market context, and then I'll use Seven Powers — it is a great book — I'll use one of his frameworks for that. I think the market context that is somewhat misunderstood here is the way that human data becomes more and more important over the next decade. And I think the reason for that is, if you thought of the different types of things you could train off of — synthetic data gets mentioned a lot, but most of the time synthetic data is useful for things like, let's say, ground-truth information like math, where there is a clear output that is right or wrong. Now let's take all of the different reasoning tasks — a multi-step reasoning task, even a simple one like, what movie would I select based on, you know, these five preferences? And then let's take that question and add into it audio, video, multimodal language, the ability to do it in 45 languages. The language context — the ability to think about computational biology in Hindi versus French versus English versus English with a Southern accent — that paradigm is actually incredibly hard to train on. And we're still in the first inning of a lot of those permutations of complexity, is what I would say. And so for a multi-stage reasoning task that requires a PhD in multiple different languages, human feedback is going to be important for the next decade. I have a strong belief in that, and when I chose to take this job, that was actually one of my core convictions: the enterprise is going to need that too, because — take legal services, for example — a lot of the way you're going to validate that is with legal expertise. There's no corpus of information you can train from.
So I would start with the idea that with the market tailwind for the next 10 years, we're actually in the first inning, because there's the LLMs, then there's the more sophisticated enterprises, and then there's everyone else that needs to train, validate, and move to fine-tuning. So again, to contrast, there's the pre-training and LLM work, but then fine-tuning a model to a specific context — most companies in the enterprise don't even know what that is yet, and that whole process we're in the first inning of. So I think the market demand is going to continue to grow pretty materially for a decade. The Hamilton Helmer framework is an interesting one, because my favorite example is he talks a little about what he calls institutional memory. He mentions the Toyota production system as an example, right, where Toyota would literally say to people, this is exactly how our factories are set up, and nobody could replicate it. I think the interesting thing about this space, and why you've had a consistent set of folks doing it for a while, is the process of every week having to spin up — we have 1.3 million active agents, or kind of experts, that come into the pool; in any given week we have 26,000 of those we've selected that have to start in 24 hours and produce perfect data. Think about the challenge of scaling an organization that for five years can do that at really high quality, and consistently turn and evolve with the different permutations of the market, new ideas of training. It's really hard to do, and I think that was what got me most excited when I took the Invisible job: the question of, can you make AI work in a really complicated context? Very few companies know how to do that, on the enterprise side or on the training side, for that matter, and so I thought that was a really unique institutional memory context. It is a digital assembly line, no different than an auto factory, and I think that is a hard thing to replicate.
The other really interesting area that this board member raised — and I very much agreed with you; he said exactly the same words as you in terms of first innings of data, in terms of just how much market size will increase. He said the other thing he really didn't understand when he made the investment was the specialization of data, and how we are moving into the acquisition of these insanely niche data supply pools, where it's not like cat, hedge, zebra crossing — zebra crossing is a, what do you guys call it, a pedestrian crossing? I did not see the specialization and the unbundling. Is that something that you see too, in terms of these very micro, niche, specialized data requirements? Absolutely. I think, you know, five years ago this space was what I would call cat-dog, cat-dog commodity labeling. And I think there were a lot of Google Sheets in that area, and you've seen some comments on it. This sector has evolved the same way most technology sectors do: it started with Google Sheets and cat-dog labeling, and it's evolved to real digital assembly lines, huge velocity of expertise, and incredibly specific expertise. So, to give a funny example, we have to be able to validate an architectural expert on 17th-century French architecture who speaks French. I mean, that is a complex thing to do on 24 hours' notice, right? And so it's the ability to source, assess, and validate, and I think one of the advantages for us is that, because we have five years of data on who's been good at what task, there's real institutional data memory in how you do that selection and assessment. I think that's one of the core advantages we have over them. How important is pay? You know, I think a couple of other providers have said — I've heard the funny line — it's about how much you pay: pay more than the others and you'll get good talent.
So, a weird analogy: I think of our business like Uber. We source talent at the price at which people will do the work that is asked of them, right? In the same way, if you're standing on a street corner, your question is, can I find a ride that will pick me up at this moment, within three minutes? That's a different price if it's raining; that's a different price if you're in, you know, Rio de Janeiro versus London, right? The price depends on the market context and the specific place you are. I think expert pay is the same dynamic. Really, a lot of what we're doing is what I call price discovery, and so the nuance I would add to what you're saying is, you can overpay a really bad expert, and that is a total waste of everyone's time. And so what I think our customers appreciate is that we can tell you the difference in expertise you get between a hundred-and-fifty-dollar expert and a hundred-and-thirty-dollar expert. Do you think you have control of a finite supply of data providers? If you look at the Seven Powers in Hamilton Helmer, one of them is acquiring finite supply. So I actually don't think finite supply matters, and what I mean by that is, I think the expertise needed varies so much month to month that if you tried to create a world where you bottled up whatever supply it is, it would change in three months — and we actually relish that concept. I actually think the dynamic — again, why I would use Uber and Lyft; you could use Airbnb and VRBO, it's the same context — is that experts go on five platforms, right? I think actually what you want to be is a two-way marketplace, where you need enough demand for experts to be interested, and you need enough expertise to attract that demand. And I think the reason we get 1.3 million is because of that kind of supply-demand balance. So I don't think this moves to a world — and I would never say it moves to a world — where there is one player coming out of this. I think there are benefits for everyone in
having numerous players that do AI training, and so it's a question of being one of the players that has that balance. You said that about kind of the switching of preference — like, oh, three months ago it was this, and now it's something different. Switching cost is another one: when you have data providers in this way, are there inherent barriers to switching? Is there any loyalty? Yeah, I think that if you've learned how to do a certain data task really well, there's incredible value in that. And let's take the enterprise context again, because I do think it's a good one. I'll give you an example: we're doing a lot of fine-tuning on some pretty interesting topics. One example: we worked with SAIC, Vantor, and the US Navy on fine-tuning a model for underwater drone swarms. And so the question on that — it's a very nice example for your question. In that context, you've got a bunch of underwater unmanned vehicles, and they're getting all the drone and sensor data from the interaction patterns of those vehicles, and what they want to know is: an object is in the water near them, what do they do? Do they react? Do they pull back? Do they alert another drone? Do they engage? What are the topics of that? So fine-tuning a model to take in all that complex sensor data, train it, and build a decision-making framework for those drones — there's a lot of logic built into that, and I think that's why it's been a great partnership with SAIC and Vantor, because we built logic on how to do that. And I think there is real sustainability in the expertise you build up, and so the way I think about our enterprise motion, for example, is every sector is led by somebody with deep, deep sector expertise, and we do build real logic on those topics. And I think the same is true for multimodal video and audio; it's true for legal. I actually think a lot of the training work, even on the model-builder side now — one interesting
view I have is, people talk a lot about the public benchmarks. One question you get a lot is, are we reaching a point where models are not improving? I actually think about it very differently, which is that the models are now all moving down hyper-specific paths where there's not a public benchmark for them, by definition, right? Like, they're moving to very specific tasks that are very different and not something you can publicly benchmark in the same way, and that's why we do see more and more model improvement every day — both in model builders and enterprises — on these specific tasks. You said about kind of the benchmarks — I'm just so interested in this. Gemini 3 killed it, it's the best ever, and then yesterday Opus 4.5 killed it, it's the best ever, and next week someone's going to release one. Does it matter? Like, are we in a world of such transient influx where really we should detach ourselves from these funny updates that last for days? Look, I think the benchmarks are a useful framework for society to gauge progress on this topic, and it's a very often discussed topic, so people want a way to answer the question about whether the models are improving. And I can tell you, unequivocally, the answer is yes. I mean, I think by every measure you look at, they are. And, you know, they're not only improving on the benchmarks, but even on specific tasks — like research for investments, for example — you can see the models are much better at doing certain tasks. And I think what you're seeing start to happen is people — and we're doing this as well — are building very specific work-based benchmarks to calibrate certain things, like how well does the model do on building an LBO model, for example. And you're going to see more and more benchmarks cited. Now, the complexity then becomes, if you move from five main benchmarks, like SWE-bench and others, to 600 benchmarks, then you kind of lose track of who's doing well on which things. But I think my interesting view on that
would be, I'm not sure the benchmark progress is what determines enterprise adoption. And what I mean by that is, take the fact that the models have improved exponentially over the last couple of years, and consumer adoption has been massive — right, like KPMG has reported 60% of consumers use this on a weekly basis. The adoption curve in the enterprise is not going to be a question of generalizability; it's going to be a question of hyper-specific performance on a specific task, right? And so there isn't actually a benchmark for that. Like, take an investment summary document for a private equity firm, right? There's no benchmark to say, firm one, this is how you write investment committee memos — does this generate something that looks, with 99% precision, like something you would roll out? There's no benchmark to do that. And so what I see is the adoption curve is actually the fine-tuning and inference layer of actually testing that, getting to a place where that firm could say, this looks good, I'm okay with this, you've tested it. Like, machine learning has a context for this — I don't know if you've heard, but the banks do this thing called model risk management, where they actually do a whole host of validation and testing, on things like redlining, before they might roll a model out. That's what the enterprise is going to have to do. And so it's not that the model improvement doesn't matter — I actually think the benchmarks are a good way to get some sense of model improvement — but they're almost orthogonal to enterprise uptake. I think enterprise uptake depends on trust and precision on specific tasks at 99% accuracy, not generalizability. If those specific tasks are removed, in the way that you said — like summary docs for investments, often done by more junior people in the earlier stages of their career, when they are building and kind of scaling those skills — do you think we will have a talent pipeline problem if we do remove a lot
of those junior roles? Which we are seeing in certain cases already, and I think we'll continue to see, where we won't actually have the graduation pathways that lead to the leaders that we have today, because we've removed those junior roles. I don't, actually. So I think one of the challenges is that the adoption curve of this stuff is going to take a lot longer than people expect. I do think — I said this to you earlier — for the enterprise, this is a five-to-ten-year adoption journey, not one to two. And so you have a dynamic where people have a lot of time to react and to think about what's useful, in addition to that. And I actually find a lot of the people coming out of college right now are some of the highest adopters of this, and the most fluent with these kinds of tools, and so we're hiring more and more people with that profile, not fewer. But as for the usage curve of that group of people: certain tasks will not be done, but there will be many more. So I'll give an example: accounting. If you worked at a bank or any accounting firm in the 1980s — this is absurd to think about — you literally calculated revenue and financial statements with a slide rule. Like, people literally would sit there and generate a financial statement manually, on paper, with a slide rule, and that was how people did accounting. Then Excel comes around, and that becomes the main tool everyone uses to do accounting. And so in theory you'd have fewer accountants, because you went from manual generation with slide rules to Excel, which actually makes it way easier to do. You look today: we have about the exact same number of accountants, and in fact the same number of junior accountants, and what's happened is people do way more sophisticated accounting scenarios with the tools they have. It's this old idea of Jevons paradox, which is that you increase consumption with advanced technology. And so the number of accountants didn't go down — you actually had way more accounting. In fact, every FP&A function is
probably larger now than it was 25 years ago, because the work people do is more sophisticated. I do want to go back to what we said about kind of market composition, and how we see the different players. Is this a market where — you mentioned Uber and Lyft — there's one and two players who take the dominant market share, and then there's everyone else? Or is it a cloud market, where it's much more evenly distributed? How do you project that out on, say, a 10-year horizon, in both AI training and enterprise? I don't think the answer is one player. You know, interestingly, in the enterprise historically there's probably been Palantir and not many others — that's kind of why you've seen more people want alternative options, and I think that's part of the reason you've seen so much excitement about enterprise AI recently. I think most of these markets end up with three, four, or five players. I don't actually think it's even two, and I think markets tend to create that choice for consumers, and that's a good thing, right? Like, I think you'll have some specialization on certain topics — maybe some better at coding, some better at specialist tasks, some with better PhDs — but I think it'll stay with a fair amount of choice. When you look at the landscape, who do you most respect, and what do you learn from them? I would say Palantir is the company I probably respect the most in enterprise AI. It's really easy to see them as a competitor, more so than a Surge or a Mercor or a Turing — I think all of those players are competitors in different ways, to different parts of our business. I'd call out Palantir because I think they realized, 10 years before the rest of the kind of tech market, that forward-deployed engineering customization would be important, and I think that was a very counter-cultural leap at the time. You know, because, look, I spent a
lot of time running forward-deployed engineering teams, and most of what I saw was players like Accenture — what was called tech services back then was not a place that anyone wanted to play in. And so Palantir spent a decade, before anyone realized this was important, building good tech, right? And so I have a ton of respect for that, and for the culture they built out of it. On the training side, I won't comment on anyone specific. I think all the players in the space are good, and they all do different things well. There are large revenue numbers thrown out. Yeah. Are they revenue? Because I've done shows before with them, and I got bashed, bluntly, when people said, oh, it's not revenue, Harry, and you can't categorize it as revenue. Is it GMV, not revenue? Are we playing fast and loose with the truth on revenue versus kind of bookings? I think it is revenue. The rate you get on every project is different, the margin you make on every project is different, so I do think it is revenue. Can you help me understand — sorry, I'm very naive — if I'm acquiring amazing talent and I get paid for that, and then I have to pay them, and then I get my take at the end of that, how is that different from a booking on Airbnb, where I get my take from a location but I have to pay out to the owner? Oh, good question. Well, I think Airbnb has one consistent fee — that's a difference. There's actually a fair amount of variation based on the skill of the expert. Like, you don't have a consistent rate relative to the booking amount; that's the biggest difference. So there's huge variety, depending on the project, the expertise type, and the expert type, in what you book on that. Are there any other big misnomers that you think are pronounced in the industry, where you consistently wish people would change the way they think about it? Look, I think the biggest one is just the view — when I first started this job, the main pushback I always got was that synthetic data will take over and you
just will not need human feedback two, three years from now. And it's interesting — from first principles, that actually doesn't make very much sense if you think it through, right? If you think about the diversity of tasks that exist in the world, and then how long it would take you to get comfortable with the accuracy, it doesn't make any sense. Like, I'll take legal services, because it's a really interesting one. A lot of the legal data in the world exists within big law firms; it doesn't exist in the public domain. And the corpus of publicly available information has been commoditized for years at this point, right? So most of the logic is incredibly contextual — to language, culture, multimodal context, and the information stored in individual companies, as an example. And so the only way to actually do the fine-tuning process consistently, and to get it accurate for any specific context, is RLHF. And I actually think — in my McKinsey days, my McKinsey QuantumBlack days — that was the thing I realized was different about traditional ML models versus gen AI. In machine learning, you can back-test; you can get to a really clearly statistically validated outcome without any human intervention. On the gen AI side, you are going to need humans in the loop for decades to come, and I think that is something most people are starting to realize. It's always confusing to people when they hear, like, oh, that's how models are trained on the back end — I didn't realize that's how the statistical validation works. And so I think that's been an interesting evolution curve as people have started to realize that. You're profitable? Correct — but this year we have started to invest a lot more. So one of the big differences: historically, Invisible had only raised seven million of primary capital in its entire nine-year journey. We initially announced a hundred, and have actually now raised 130 million, and so I'm investing very heavily in
technology, so we will not be profitable this year. Can you just take me through that decision? Because this was going to be my question, which is, that's a very clear decision on profitability, and profitability often comes at the expense of growth, naturally. Can you just take me through that decision-making for you, and how you thought about it? Yeah, look, to me it was a simple one: if you think about the dynamics of return on capital, you can either harvest capital or invest capital, and your decision to invest depends on the growth you see as a result of that investment. And I think we're in the greatest environment for growth that has ever existed. I think Invisible is really uniquely positioned to capitalize on that growth, too. And so when I think of our five core platforms, and the growth vectors across both AI training and enterprise, there were just way too many different things I thought were interesting to invest in. It was the clear best use of capital. And look, I'm trying to build this for the next 10 to 20 years, and if you want to build enterprise value for 10 to 20 years, now is the time to invest and build. I hope we never get to the harvest stage, but it's definitely not now. Where are you not investing that you want to be investing? I think the simplest answer is actual physical-world interactions. What I mean by that is, a lot of the most interesting data that we don't even really have access to yet is things that exist in the physical world that are more complicated to acquire and organize. So I'll give you an example: we are serving one of the largest agricultural conglomerates in the US on herd safety — so actually, like, monitoring risk factors, when should you send a vet for their herd of cows. Basically, that whole process relies on us actually sending forward-deployed engineers to farms, dropping Starlink terminals into those farms, and building out custom computer vision models in those contexts. And I think there are so many different
physical-world contexts that become really, really interesting, but it does take cost and capital to build those out. I think oil and gas — oil rigs — are an interesting one, as an example. And so I think physical-world interaction patterns are some of the most interesting growth vectors for this, but they do take time and money to invest in, robotics being another big part of that. One area of investment that I think is interesting is brand. How do you think about Invisible's brand today? It was interesting: when I took over, if you looked at the entire public internet, I think there was one article available. And so we've definitely spent a lot more time this year thinking about it. Is that a deliberate decision? I think so, to some degree. I think Invisible has a culture of, you know, we believe in doing great work for customers, and we were kind of not really focused on telling the whole world about that. Does that become detrimental to the business at some point? Yeah, look, I do think branding matters a lot. My view now is that it's been very helpful for us to spend time on it — I spend about 70% of my time on the road, and I go to a lot of conferences, things like that — and I think building a brand is really important for trust, for awareness, for engagement. And how you tell that story is really important, too. So I'm very much a believer — one of my favorite quotes: Marc Andreessen has this idea that when private and public narratives diverge, that is the risk, or the opportunity. So if many people say things they don't believe to be true, or if everyone's saying things they don't believe to be true, then what is the actual private narrative? So it's been very important to me to make — Can you just help me understand that? Yeah. So hypothetically, if I was going around saying we have an out-of-the-box agent that does everything, and that wasn't actually true, that's what either creates opportunity for others or risk for us. That's how I think about it. And so I
think what's been very important for me is that...

That's not our industry! I'm sorry, I'm sorry. I don't mean to pick a fight with Marc Andreessen, but, like, hello Marc, our job is to sell and then deliver later. If that's the test, well, I'm fucked.

Well, you know, I guess it's all a question of degrees. In my mind, I want to say things where the narratives are the same to the public, to what our team thinks, and to what our customers experience. That's part of why I have focused on being open about some of the nuances of what's not working, and not claiming everything works out of the box. That is a different approach, but it's been core to how we've thought about building the brand: we are building around trust. I want a company we work with to know that if I say this will work, it will work. And I think you only get one chance to do that.

Do you agree with fake it till you make it?

That's such an interesting question. I think it depends on what faking it means. One of the things that's really complicated about gen AI is that it's non-deterministic. If you've never built a machine learning model to do pricing in industrial manufacturing, you can still understand what data is available, understand how the price is being set today, and get pretty comfortable that what you say you will build will work. I think that is okay. The challenge of non-deterministic systems is that there is more risk to fake it until you make it, meaning you can go out and say your agent will do anything, but then you actually have to deliver an agent that works. You were asking about the contracting dynamics earlier; part of the interesting dynamic is that a lot of the contracts people will sign right now are like, I'll sign up for 50 agents to be delivered. But then the question is: do you deliver the agents? Do they work? And so I think that
is a different thing than SaaS. To go back to your earlier question: if I deliver a SaaS box, I know it will work. If I deliver an agent in the current world... There was actually an interesting report AWS came out with today: something like 70% of agents are not even AI agents as you'd think of them; most of the agentic processes today are actually traditional script writing and traditional automation. And I think that's why I don't self-identify as an agent company at all. We do AI agents, AI workflows are a core part of what we do, but we also do data, we do training and fine-tuning, and agents are one tool in the toolkit, because with too much emphasis on them, a lot of the time they won't work.

Did you see the video of the robot going around the house recently? It was the worst thing ever. It took 11 minutes to take out a glove, and at the end it said, "and this was controlled by Simon in the back room," and you're like, the shittiest robot ever was being controlled by some weird dude in your back bedroom.

Yeah, I did see that. And look, I think robotics is another one that will take longer, but will be really interesting when it works. Though even in that case, I think you'll need more task-specific robotics, not just broad-based.

Have you ever faked it till you made it, and did you learn anything from it?

So when I first started working on this, it wasn't even called AI back then, it was called data analytics. This is probably 12 or 13 years ago now. The firm gave me a pretty interesting purview to try and explore where I could build out AI offerings across different sectors and customer bases, and candidly, I don't think I knew what I was going to build. The interesting dynamic is that I had a lot of conviction that this could be really useful on a whole host
of things, from inventory forecasting to pricing to credit underwriting.

You just thought intuitively about the sources of data?

Yes. Something like 70% of the software in America is over 20 years old, most of that data is massively fragmented and not clean, and so a lot of the decisioning that happens in the enterprise is done in a really fragmented way. This is what I did know: if you took your average sales rep making a call, most of the time they were googling some stuff to try and find information. Not now, but 12 years ago, they had very little information in the script about the customer and what they might sell. So I had a lot of conviction that that would work. I did not know what would be most interesting. In fact, there were areas I thought would be really interesting, like banking, that were actually much harder to do this in consistently.

You mentioned banking earlier. The average bank spends 93% of its tech cost on maintenance initiatives; 7% goes into building new things. This is my favorite thing. I just had the CEO of a big vibe-coding platform on, and he was talking about how fast you can now build, and I heard that seven percent, and I'm just like, maintaining, provisioning, updating... If you've never gone through infosec approval at a bank... banks are banks, and for very good reason, banks are much more complicated to build in. One bank told me they have six and a half thousand people in KYC alone. Six and a half thousand people!

It's a great example. So when I was doing this in the early days, partly because there was very little media coverage or interest in it, I was figuring stuff out from first principles. The degree to which I faked it till I made it was that I had to find colleagues and customers who trusted me
enough to allow me to co-iterate and develop stuff with them, and I had to figure out a way to recruit really good people. Actually, I think if you take any business, very simplistically, it's a question of: can you build trust with customers and co-iterate to develop and make things work, and can you recruit unbelievable people to deliver that? It really comes down to recruiting in a lot of ways; that's the number one thing we focus on. I think of us as a talent company as much as anything else. Not to use a sports analogy, but you could argue Nick Saban did not build Alabama football with the Process; he built it by recruiting the best football players in the country. I think about it the same way: you have to recruit great people. So to some degree, in the early days, 10 or 12 years ago, I was setting a vision, trying to figure stuff out, and iterating a lot. We did end up building a lot of things that really worked, but it took time, and it took iteration and trust as much as anything else. So I would say the counterintuitive thing is that I didn't fake it: I never told people something would definitely work. My entire approach was to say, I think this will work, this is my reasoning why, let's build it. And a lot of people were actually very comfortable with that. If you go in and say, I have an out-of-the-box AI that solves all your problems, people are pretty skeptical.

I do just want to stay on recruiting, because as a startup CEO one of your biggest jobs is to recruit great people. Having recruited people across different companies now, McKinsey and now Invisible, what would you advise startup CEOs in the earlier stages, knowing all you know now, about what it takes to be great
at recruiting, acquiring, and retaining great talent?

It's probably the topic I spend an enormous amount of time on, the topic I think about the most, because I do think if you get amazing people, everything else follows from that.

So you agree with the mantra of hire great people and let them do the work? Because people push back on that now.

Yeah, though not just hire: hire, retain, and evolve great people, because you have to give them a platform they enjoy day-to-day. There are two things I believe that are somewhat counterintuitive. One: when you recruit a great person, I don't think about role most of the time. People are very role-focused, as in, I will hire this person and they will only do oil and gas, as an example. The reality is that really good people will run five to six different roles across seven to eight different products. Particularly on the business side, you may have somebody who does everything from delivery to sales to accounting, and you can be comfortable with that if you always hire great all-around athletes. The second thing is it has to be fun. One of the narratives that has gotten a bit lost in the last couple of years is that if you have a culture that is brutal to work at, people will leave. They might stay around as long as your stock is high, but they're not going to stay. You have to create an environment where people really enjoy going to work every day, where they're intellectually challenged, and where they feel they can unleash creativity. I spend a ton of time thinking about that.

I don't want to argue back, but I want to build great companies myself. I'm trying to with 20VC, and I try to build good cultures. Revolut is a brutal culture to work at, famous for it, but Nik has famously always told me: culture is fucking bullshit, winning is what matters. When people win, they
learn more, they earn more, and they grow. And that really is culture: brutality, in bounds, drives humans. Is that wrong?

No, I think it's actually right, and let me caveat what I said: I think it's also the nature of the business I am in, being AI. That's a very true statement if what you're trying to do is scale a relatively consistent business model to do one or two things; then it's a function of execution and of hiring people to go into very specific roles and do very specific things well. So let me caveat my prior comments: the difference is that a lot of what we do is fundamentally research and exploration. In the AI world it's a different dynamic, in that you're trying to figure out very specific problems to solve with customers and build really unique tech. In that world you do have a different cultural dynamic: it is a research culture as much as it is an implementation culture.

Is that difficult then? We do a show every Thursday which has blown up, which is incredibly nice for us as a business; we have Jason Lemkin and Rory on, just called "two VCs," and we talk about the news. We talked about Sam Altman and war mode. Can you do war mode in a culture of research and AI, where it's maybe more thoughtful? Does that work?

Yeah, there are definitely parts of it. If you take our delivery operations teams, they're in war mode quite a bit of the time. Again, I'm describing generally counter-cultural beliefs I have on how to hire certain sets of great people; it doesn't apply to every single function of the company. There are definitely areas where you have to be able to push really hard to deliver certain outputs, and I think we do a great job of that. But I also think, you know, there have been ideas like every great engineer should be able to spend 30% of their time on new projects as
well as sprinting on the existing ones. I think paradigms like that are important.

What decision are you scared to make, but think about often?

The simplest answer I have is that growth in this industry relates to the amount of capital you raise. To your earlier question on investment, I do think there's a world in which hyperscale growth is possible, but you have to invest a lot more to do it. Every new company, every new customer you onboard, it costs money to do the forward-deployed engineering work, and you invest more in your tech. So there is an interesting question: do you run a business for consistent, steady growth for 20 years, or do you try to build something that gets to 50 to 100 billion dollars and becomes game-changing? We have very much tried to operate in a way where I think we have a path to profitability and everything else, but we are going to invest in the near term, because it is a very interesting time to do that.

I know you don't like to name names, but I can. When Mercor raises at, like, billions of dollars, you're like, fuck, we need to raise more fucking money.

It's interesting: if you look at the players in our space, there have been very different levels of capital raised, and people have had more and less success with it. A lot of our investment is in different areas than many of our peers': less in AI training, more in things like the enterprise and core software platforms that are maybe a little different from what others focus on. So you can raise a lot of money, and the question is where you spend it. Most of the capital we need in the next five years is more enterprise-focused; I think what we've built on the AI training side already feels very, very good.

We were talking about recruiting before I went off on a tangent. Matt, you now have offices, despite being a remote company for several years. Does
remote not work?

So we were a fully remote company for nine years until I took over. We've now gone largely in-person. We do have some folks who work remote, but we now have offices in New York, we took the old Pinterest space in San Francisco, plus London, Paris, Poland, DC, and we're just opening Austin, Texas now. The interesting thing I've experienced is that with remote, you really struggle to build culture in the same way. What I've seen is a way stronger, positive culture with co-location: people enjoy their work and get to know their co-workers a lot better as a result of it. It also gives us a lot more depth with customers to be co-located in the cities where we spend time with them. If I take London and Paris, we need to be co-located with the customers there; it can't just be someone on a Zoom screen in New York.

Do you see productivity increase exponentially?

Yes. Take engineering as an example: you can execute engineering tasks remotely, but the process of working through really thorny problems is different. I've tripled the size of the engineering team, and the interesting thing is that the vast majority of those people wanted to be in person. I'm not saying that's true of all engineers, but it was striking how many people, particularly the younger tenures, said, I want to be co-located, I want to work through things together. I don't even mandate office attendance; I just have the offices, and there's huge appetite. We have 40 people in our London office, and I was with many of them last night. They were all commenting on how many of them come in voluntarily, even on a Friday when they might not need to, because they like being around their peers. I would actually bifurcate two separate things that I don't think are related. One is the hours you work, seven days a week, very flexibly,
depending on client needs; that is distinct from physical co-location. I actually don't think they're related. The benefit of integration is that if stuff comes up on a Saturday, or you're pushing on a new product build, you will work on that Saturday, but if you do it from your own home, that's totally fine. Office culture, to me, is this: as a hypothetical thought experiment over a year, I think there are diminishing returns from being in the office all the time, where you lose flexibility. As an example, if we were remote 100% of the time, that would not work at all. If we were physically in the office six days a week, I think that is overkill, and you lose great people; particularly senior enterprise folks don't want to be in the office on Saturdays. What I think we've found is a nice balance: people come to the office most days, they really enjoy being with their colleagues, they work most days, but they can do it from their own home on the weekends. That sort of flexibility is good.

Final one before the quick-fire: what did you believe about management that you now no longer believe?

Two things I would highlight. One is that I think control is a bit of a fallacy, depending on the volume of things you have going on. To the question earlier on hiring great people: if you're serving, let's say a couple of years from now, a couple hundred customers on different topics, you need to have consistent values, consistent tooling, consistent approaches, but you need to empower all those teams at the edge to operate. So one of the major focuses I've had over the course of the year is to reduce a lot of our hierarchy and make the organization way more flat, so that the people at the edge serving customers are empowered to make decisions. They have decision-making frameworks, they have consistent tooling, but they are
empowered. Trying to control that centrally maybe works in a manufacturing business, but you add a lot of latency to decision-making. Interestingly, a lot of military history would say the same thing: if you look at the functioning of an army, at some point it moves to people in the field making the decisions, so you have to have the training, the strategy, and the recruiting to do that, and then you have to empower your teams to work. I think about a lot of that very similarly.

The second thing I think a lot about is that in the AI world, at least, strategy is a somewhat overrated concept. What I mean by that is: I was talking to a CEO in the biotech space, and he was saying that strategy is very important for them, because every time they make a capital decision it's a seven-year capital cycle, and in that case strategy makes a lot of sense. But in the AI world, one thing that's been interesting to me is that every three months the entire world changes, and I've just had to get very comfortable with that dynamic. You have to think about your investment life cycle as core beliefs you hold, plus the 30 to 40 percent of things you iterate on constantly based on new tech. There is tech you're going to build, like a new voice agent, that will become obsolete, and you have to be very comfortable that you're building an interoperable set of frameworks you can integrate the new tech into; that has to be a core function of the business. Five-year strategic planning is not a useful exercise right now in a lot of ways. You want to think about five years in terms of the cultural context you build, the institutional memory, the empowerment framework, but the actual iteration cycles are much, much faster, and if you don't react quickly, that does not sustain. Now, in the enterprise, the
interesting flip side of that is that enterprise sales cycles, for example, are much longer. But the big thing is that a lot of the tech being developed changes every two to three months, so you need to be constantly incorporating it into what you build.

Final, final, final one, I promise, before the quick-fire. You talked about always traveling, and you mentioned a girlfriend earlier. How do you make that work, and what would you advise me, like, tips and tricks to not have a severely pissed-off girlfriend most of the time?

The first thing is to find a great girl who understands that you are really passionate about what you're doing and is supportive of that. My girlfriend Claudia has been great on that front; I am very appreciative of it. But look, it's tough. I'm on the road probably 60% of the time. If you look at my last four or five weeks: Riyadh, Geneva, Paris, Berlin, London, San Francisco, Boston, Singapore, now London again.

Do you enjoy this?

I do, in some ways. I feel very lucky to be building something at this particular time, with a group of people I love working with. This happens to be what I've spent my last decade doing, and it happens that it's now what a lot of people want to do, which is great. So I feel very lucky, and every day I wake up and see what else I can do to push that forward. I do kind of live on the road, but some of the things I've tried are, you know, you figure out things like FaceTime, you make sure you keep the cadence of interaction high, because being on the road is tough. But I also don't think it's forever. I'm at that fun stage of trying to take something... we kind of went zero to one, and now we're trying to go one to n, but we're not yet a fully mature public company or anything like
that. And so I think she's been very understanding throughout that process.

Are you ready for a quick-fire round?

Yeah.

Okay. OpenAI at 500 billion or Anthropic at 360 billion: which would you rather invest in?

I do not comment on any players in the model space, for all the right reasons.

What is the most underrated infra company today?

Databricks. You're going to say, well, they're highly rated, but look, I think their tech is great, and in a lot of ways the most useful foundation for AI is really good data infrastructure. When I hear a customer has it, I'm always very happy.

What's the best advice you've been given that you most frequently go back to?

We talked a little about it earlier, but when I took the role, I asked a CEO I respect a lot for his advice on the best way to think about a team, and he said: your job as a CEO is to do three things really well. Recruit great people; create a culture where they love working together; and build great things, and try to make them all extremely rich. It's a funny framework, but an interesting way to think about my responsibility to employees: I want to find great people, help them enjoy each other, and then build something that becomes big and helps all of them achieve their dreams.

What's one widely held belief about AI that you think is completely wrong?

That out-of-the-box agents will solve everything with the push of a button. That's the biggest misconception. Many people were hoping the adoption curve would be: I buy something, I push it into my business, and it takes a whole process and fixes it. They're realizing it requires training, fine-tuning, and a whole host of process redesign and business ownership.

You and I meet today. You have a new four-hundred-million-dollar fund, and you're my partner in the fund. Why should we be investing where
most people are not, when everyone is invested in agents out of the box?

Well, look, it's an interesting question, because I think a lot of the reason people are investing in out-of-the-box agents is that they're trying to apply a SaaS paradigm, what's worked historically, to AI, which is challenging. The model-building layer is clearly producing amazing returns. The AI agent layer is more complicated. What's also complicated is that the application layer is tricky too; you hear a lot of commentary that many of the applications may or may not work, that they're not really getting full workflow embedding, that they're more of a nice-to-have in workflow contexts. So my counterintuitive take would be that the interesting question of the current paradigm is whether new companies built around AI get distribution faster than big companies figure out how to adopt AI. I think that's one of the interesting paradigms for our society. Some of the most interesting new businesses are actual businesses using AI in the physical world, businesses that are AI-native, and that will be highly disruptive. You mentioned Revolut; banking, for example, or you could go into loan servicing: there are many different areas where people are standing up new businesses. One of the most interesting stats I've heard recently is that if you look at Y Combinator's recent class, I think it's something like 2x the revenue of any prior class, and many of those are businesses that are actually serving a customer need rather than selling that customer software, if that makes sense. So from an adoption standpoint, one way to play this is to bet on AI agents, which are more like a SaaS paradigm, selling stuff to customers. The other way to think about it is: what are the business models that will change because of this? There's a whole list, like gen-AI-
native services businesses; tax accountants, etc., are really interesting examples of that.

Again, you're a partner with me in the fund. Do we just get used to a world of low margins? Is that how this business plays out, versus the world of 70 to 80 percent software margins?

First of all, I'd challenge whether 70 to 80 percent software margins ever really existed. What I mean by that is, there's the gross margin, but if you look at profitability and public software multiples, it's fascinating: in the last two years you've seen public software multiples go from 20x to 10x, partly because of growth changes, and partly because as companies move toward profitability, their growth slows materially. So I would actually take the flip side of this: the integrated players will be very, very profitable, because of the way they grow. They'll be able to acquire customers faster and build good things for them faster. They won't have the box stickiness, but I would also argue a lot of those software companies, below the line, were not that profitable.

Looking forward to the next 10 years, final one: what are you most excited about? For me, my mother's got MS, so I look at potential advancements in MS drug discovery and treatment pathways. What are you most excited for? I like to end on a tone of optimism.

Yeah, you know, despite some of what I'd call my realism on enterprise adoption, I actually am an AI optimist, and I think the current narrative on some of the risks is far outweighed by some of the benefits. Just to give a couple of examples, and I'll work through to healthcare. Take energy: there's a lot of question around the data-center implications for energy, but if you do the math, data centers right now are about 1% of total global electricity usage, and AI data centers are 0.25 to 0.5% of that. So, actually really small. Meanwhile, cooling,
electricity for air conditioning, is something like 14 to 20% of global electricity usage, and AI has so many different applications in grid optimization and cooling. The World Economic Forum just came out on this: it's going to be massively net positive from an environmental-impact standpoint. So I think energy is one where, if you think about all the energy needs we're going to have and the investment now going into clean energy because of all this, we'll actually be in a much better place 10 years from now.

Healthcare is another interesting one. In the US we spend roughly $14,000 per capita per year on healthcare; that's two and a half to three times what countries like Germany and Canada spend, as an example. If you break that down, roughly nine percent of it is administrative, something like 25 percent of it is waste, and the actual cost of care is really challenging. Johns Hopkins has released a stat that 250,000 deaths a year happen because of avoidable errors, and you see things in AI like 20% better identification of breast cancer risk, for example. So healthcare is another one where the cost framework has not been good over the last 20 years, and the cost-of-care improvements will be really material if AI works well.

The one I'm probably most excited about is education. If you're a kid growing up in any socioeconomically disadvantaged city in the world, your ability to learn about any topic on earth, incredibly quickly, is better now than it has ever been at any point in history. With just an internet connection, you can pick your topic and learn. One of the reasons that's particularly important is that the educational system we've had for the last 10, 15, actually 50 years doesn't really work, and we have massive K-through-12 challenges with STEM topics in the US, for example.
We have huge learning gaps, largely driven by socio-demographic context, and most of our educational system is based around teaching people biology, English, and history rather than basic things like FICO scores or how to code. To add to all that, the college system has created a student debt crisis, where way too many people are going to colleges that are not worth going to, and taking on enormous amounts of debt to do it. So I think the way our educational system functions will shift materially. We're a talent-assessment company: an enormous number of the people we bring in did not go to college, and we assess them on cognitive aptitude and skill. So the really positive note I would leave on is that the way people learn, the topics they learn, and the way we look at resumes and screen and assess people will move in a really positive direction, and a very different one from what we've had for the last 100 years.

Absolutely thrilled to hear that there is value in non-college grads and dropouts, as a dropout myself. This has been so much fun to do. Matt, thank you so much for being so flexible with the topics. You've been fantastic.

Thank you for having me.

But before we leave you today: are you drowning in AI tools? ChatGPT for writing, Notion for docs, Gmail for email, Slack for comms, and you're constantly copy-pasting between them all, losing context and losing time. This is the AI productivity tax, and it's killing your output. At 20VC we're all about speed of execution, and Superhuman is the AI productivity suite that gives you superpowers everywhere you work. With the intelligence of Grammarly, Mail, and Coda built in, you can get things done faster and collaborate seamlessly. Finally, AI that works where you work, however you work. Superhuman gets you going from day one with zero learning curve, and it's personalized to sound like you at your best, not like everyone else using generic AI. Get AI that
works where you work. Unlock your superhuman potential: learn more at superhuman.com/podcast. That's superhuman.com/podcast.

And speaking of tools that give you an edge, that's exactly what AlphaSense does for decision-making. As an investor, I'm always on the lookout for tools that really transform how I work, tools that don't just save time but fundamentally change how I uncover insights. That's exactly what AlphaSense does. With the acquisition of Tegus, AlphaSense is now the ultimate research platform, built for professionals who need insights they can trust, fast. I've used Tegus before for company deep dives right here on the podcast; it's been an incredible resource for expert insights. But now, with AlphaSense leading the way, it combines those insights with premium content, top broker research, and cutting-edge generative AI. The result: a platform that works like a supercharged junior analyst, delivering trusted insights and analysis on demand. AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed. It's faster, it's smarter, and it's built to give you the edge in every decision you make. 20VC listeners, don't miss your chance to try AlphaSense for free: visit alpha-sense.com/20 to unlock your trial. That's alpha-sense.com/20.

And if AlphaSense helps you make smarter decisions daily, Daily Body Coach helps you build smarter habits. You know how so many founders and execs say they'll finally take care of their health once things slow down? Well, they never do. Running a business is a marathon made of high-intensity sprints, and taking care of yourself is what gets you through those times, performing at your best both professionally and personally. This is exactly where Daily Body Coach comes in. Daily Body Coach is a complete, high-touch service for busy founders and executives, combining personalized nutrition and training with psychology-based
coaching to help you not just follow a plan but actually build the systems habits and mindset to stay at the top of your game built by an exited founder and led by certified experts with masters and PhD level credentials daily body coach is fully tailored to your life whether you're traveling dining out or in back to back meetings you get daily accountability data driven insights from Dexascans and blood work and a highly certified team backing you if you're serious about performing at your best physically and mentally go to dailybodycoach.com forward slash two zero VC that's dailybodycoach.com forward slash two zero VC and take the next step

Podcast Summary

Key Points:

  1. The host of the "20VC" podcast, Harry Stebbings, participated in a charity event to raise money for multiple sclerosis sufferers.
  2. The CEO of Invisible Technologies, Matt Fitzpatrick, has raised over $100 million for the company.
  3. The discussion covers the gap between model performance and adoption in enterprises, challenges with internal AI builds, and the role of CFOs in managing AI initiatives.

Summary:

In the final episode of the "20VC" podcast in 2025, host Harry Stebbings talks about a charity event he took part in to raise funds for multiple sclerosis sufferers. The guest, Matt Fitzpatrick, CEO of Invisible Technologies, discusses the company's achievements and challenges in the world of AI. They explore the gap between model performance and adoption in enterprises, highlighting the complexities of internal AI builds and the need for clear metrics and outcomes.

The conversation also touches on the role of CFOs in managing AI initiatives, emphasizing the importance of good data, clear milestones, and outcomes for successful implementation.

FAQs

What is the "AI productivity tax"?
The AI productivity tax refers to the time wasted switching between multiple tools and copy-pasting between them, leading to a loss of context and time.

What does Invisible Technologies do to improve enterprise AI adoption?
Invisible Technologies deploys modular software platforms for various enterprise use cases, addressing challenges like data infrastructure, workflow redesign, and trust.

Why do internal AI builds at enterprises so often struggle?
Enterprises find it difficult to hire top-tier AI engineers and to maintain quality in internal builds, and they often lack the discipline and talent levels found in startups or large tech companies.

How should CEOs involve CFOs in AI initiatives?
CEOs should ensure CFOs focus on clear data, milestones, ownership, and outcomes for AI initiatives; CFOs do not need to be highly technical.

How can enterprises avoid the "Accenture paradigm"?
By focusing on specific operational metrics, evaluating internal and external vendors against those metrics, and being willing to replace vendors if necessary.
