
Andrej Karpathy on Code Agents, AutoResearch, and the Loopy Era of AI

66m 31s


The discussion highlights a transformative shift in software engineering and AI interaction, driven by advanced coding agents. Since around December, engineers have moved from primarily writing code themselves to delegating most tasks to AI agents, fundamentally changing workflows. The main challenge is no longer access to compute but optimizing how users instruct and parallelize multiple agents to maximize productivity. A key development is the emergence of "claws"—persistent, autonomous agents that can manage complex, ongoing tasks like home automation by discovering and integrating various smart devices via APIs, all controlled through natural language interfaces like WhatsApp. This suggests a future where software is built agent-first, with APIs prioritized over human-facing apps, and where AI agents act as intelligent glue between systems. The conversation also notes the importance of agent personality and memory systems in creating effective, engaging tools, while acknowledging current limitations around security and the learning curve required to master these new capabilities.

Transcription

14710 Words, 78871 Characters

English
Code's not even the right verb anymore, right? But I have to express my will to my agents for 16 hours a day. Manifest. How can I have not just a single session of Claude Code or Codex or some of these agent harnesses? How can I have more of them? How can I do that appropriately? The agent part is now taken for granted. Now the claw-like entities are taken for granted. And now you can have multiple of them. And now you can have instructions to them. And now you can have optimization over the instructions. But I mean, this is why I guess the psychosis is that this is like infinite and everything is a skill issue. [Music]

Hi listeners, welcome back to No Priors. Today I'm here with Andrej Karpathy. And we have a wide-ranging conversation for you about code agents, the future of engineering and AI research, how more people can contribute to research, what's happening in robotics, his prediction for how agents can reach out into the real world, and education in this next age. Welcome, Andrej. Andrej, thanks for doing this.

Yeah, thank you for having me.

So it's been a very exciting couple of months in AI.

Oh yeah, you could say that.

I remember walking into the office at some point and you were really locked in. And I was asking what you were up to and you're like, I just, I have to code for 16 hours a day. Or code's not even the right verb anymore, right? Manifest. Because there's been a jump in capability. What's happening? Tell me about your experience.

Yeah, I kind of feel like I was in this perpetual... I still am often in this state of AI psychosis, just like all the time. Because there was a huge unlock in what you can achieve as a person, as an individual, right? Because you were bottlenecked by, you know, your typing speed and so on. But now with these agents... I would say in December is when something really just flipped, where I kind of went from, you know, 80/20 to like 20/80 of writing code by myself
versus just delegating to agents. And I don't even think it's 20/80 by now; I think it's a lot more than that. I don't think I've typed a line of code probably since December, basically. Which is an extremely large change. I was talking about it to, for example, my parents and so on, and I don't think a normal person actually realizes that this happened or how dramatic it was. Like literally, if you just found a random software engineer at their desk and looked at what they're doing, their default workflow of, you know, building software is completely different as of basically December. So I'm just in this state of psychosis, trying to figure out what's possible, trying to push it to the limit. How can I have not just a single session of, you know, Claude Code or Codex or some of these agent harnesses? And then how can I use these claws? What are these claws? And so there are a lot of new things. I want to be at the forefront of it, you know? And I'm very antsy that I'm not at the forefront of it. And I see lots of people on Twitter doing all kinds of things, and not all of them are really good ideas. And I need to be at the forefront or I feel extremely nervous. And so I guess I'm just in the psychosis of: what's possible? Because it's unexplored, fundamentally.

Well, if you're nervous, the rest of us are nervous. We have a team that we work with at Conviction where none of the engineers write code by hand. They're all on microphones and they just whisper to their agents all the time.

Is this the strangest work setting ever? Yeah.

And I thought they were crazy. And now I fully accept it; I was like, oh, this was the way. You're just ahead of it.

Yes.

So how do you think about your own capacity now to explore or to do projects? What is it limited by?
Yeah, what is it limited by? I think everything, so many things. Even when they don't work, I think to a large extent you feel like it's a skill issue. It's not that the capability is not there; it's that you just haven't found a way to string together what's available. Like, I just didn't give good enough instructions in the AGENTS.md file or whatever it may be. I don't have a nice enough memory tool that I put in there, or something like that. So it all kind of feels like a skill issue when it doesn't work, to some extent. You want to see how you can parallelize them, etc. And you want to be Peter Steinberger, basically. So Peter is famous; he has a funny photo where he's in front of a monitor with lots of... he uses Codex, so lots of Codex agents tiling the monitor. And they all take about 20 minutes if you prompt them correctly and use the high reasoning effort. And so they all take about 20 minutes, and he has multiple, you know, ten repos checked out. And so he's just going between them and giving them more. You can move in much larger macro actions. It's not just: here's a line of code, here's a new function. It's like: here's a new functionality, delegate it to agent one. Here's a new functionality that's not going to interfere with the other one, give it to agent two. And then try to review their work as best as you can, depending on how much you care about that code. What are these macro actions that I can manipulate my software repository by? And one agent is doing some research, and another agent is writing code, and another one is coming up with a plan for some new implementation. And so everything just happens in these macro actions over your repository. And you're just trying to become really good at it and develop a muscle memory for it. It is extremely... yeah, it's very rewarding. Number one, because it actually works.
But it's also kind of the new thing to learn. So that's why, hence, the psychosis.

Yeah, I do feel like my instinct is, whenever I'm waiting for an agent to complete something, the obvious thing to do is: well, I can do more work. Right? Like if I have access to more tokens, then I should just parallelize the tasks. And that's very stressful, because if you don't feel very bounded by your ability to spend on tokens, then, you know, you are the bottleneck in a system that is otherwise at max capability.

Yeah. If you're not maximizing your subscription, at least. And ideally across multiple agents: like if you run out of quota on Codex, you should switch to Claude or whatnot. I don't know, that's what I've been trying to do a little bit. And I feel nervous when I have subscription left over. That just means I haven't maximized my token throughput. I actually kind of experienced this when I was a PhD student: you would feel nervous when your GPUs were not running. Like you have GPU capacity and you're not maximizing the flops available to you. But now it's not about flops, it's about tokens. So what is your token throughput, and what token throughput do you command?

I would actually argue that it's very interesting that we had, you know, at least ten years where in many engineering tasks people just didn't feel compute bound. Right? And now the entire industry feels resource bound. And with this big capability jump, you're like, oh, actually, it's not my ability to access the compute anymore. I'm the binding constraint.

Yeah, it's a skill issue. Which is very empowering, because you could be getting better. So that's why I think it's very addictive, because there are unlocks when you get better.

Where do you think it goes?
Like if you just think about, okay, Andrej is iterating for 16 hours a day, getting better at using coding agents. What does it look like in a year, when you've reached mastery?

Yeah, what does mastery look like, right? At the end of the year, or like two, three years, five years, ten years. Well, I think everyone is basically interested in going up the stack. So I would say it's not about a single session with your agent; it's multiple agents, how do they collaborate, and teams, and so on. So everyone's trying to figure out what that looks like. And then I would say the claw is also kind of an interesting direction, because when I say a claw, I mean this layer that takes persistence to a whole new level. It's something that keeps looping; it's not something that you are interactively in the middle of. It kind of has its own little sandbox, its own little... it does stuff on your behalf even when you're not looking, kind of thing. And then it also has maybe more sophisticated memory systems, etc., that are not yet implemented in agents. So OpenClaw has a lot more sophisticated memory, I would say, than what you would get by default, which is just a memory compaction when your context runs out, right?

You think that's the piece that resonated for most users, versus, perhaps, broader tool access?

For OpenClaw? Yeah. I think there are at least five things. There are a lot of really good ideas in here.

Yeah, good job, Pete.

I mean, Pete has done a really amazing job. I saw him recently and talked to him about it, and he's very humble about it. But I think he innovated simultaneously in like five different ways and put it all together. So for example, the SOUL.md document: he actually really crafted a personality that is kind of compelling and interesting.
And I feel like a lot of the current agents don't get this right. I actually think a claw has a pretty good personality. It feels like a teammate, and it's excited with you, etc. I would say, for example, Codex is a lot more dry. Which is kind of interesting, because ChatGPT is a lot more upbeat and highly sycophantic, but Codex, the coding agent, is very dry. It doesn't seem to care about what you're creating. It's kind of like, oh, I implemented it. And you're like, okay, but do you understand what we're building?

It's true.

You know, it doesn't. And the other thing I would say is, for example, with Claude, I think they dialed the sycophancy fairly well, where when Claude gives me praise, I do feel like I slightly deserve it. Because sometimes I give it not very well-formed thoughts, and I give it an idea that I don't think is fully baked, and it doesn't actually react very strongly. It's like, oh yeah, we can implement that. But when it's a really good idea by my own account, it does seem to reward it a bit more. And so I kind of feel like I'm trying to earn its praise, which is really weird. And so I do think the personality matters a lot, and I think a lot of the other tools maybe don't appreciate this as much. And I think in this aspect, also, Peter really cares about this, and so he got that right. And then the memory system, and then, you know, he's just having fun with this. And then the single WhatsApp portal to all of the automation. Yeah.

Is there something that you have done personally with your claws beyond software engineering that you think is fun or interesting?

Yeah. So in January, I went through a period of claw psychosis. I built... I have a claw, basically, that takes care of my home, and I call it Dobby. And basically, I used the agents to find all of the smart home subsystems of my home on the local area network, which I was kind of surprised worked out of the box.
Like I just told it: I think I have Sonos at home, can you try to find it? And it goes and does an IP scan of, basically, all the computers on the local area network. And it found the Sonos system, and it turned out that there's no password protection, so it just logged in. And it's like, oh yeah, you have these Sonos systems installed; let me try to reverse engineer how they work. It does some web searches and it finds, okay, these are the API endpoints. And then it's like, do you want to try it? And I'm like, whoa, you just did that? Yeah, can you try to play something in the study? And it does, and music comes out. And I'm like, I can't believe it, that's crazy. That's like three prompts. I can't believe I just typed in, can you find my Sonos, and that Sonos is playing music. And it did the same for lights. And so basically it kind of hacked in, figured out the whole thing, created APIs, created a dashboard, so I could see the command center of all of my lights in the home. And then it was switching lights on and off. And, you know, I can ask it, like: Dobby, sleepy time. And when it's sleepy time, that just means all the lights go off, et cetera. And so it controls all of my lights, my HVAC, my shades, the pool and spa, and also my security system. So I have a camera pointed outside of the house, and anytime someone rolls in, I have a Qwen model that looks at the videos. So first of all, there's change detection, right? And then based on change detection, it goes to Qwen. And then it actually tells me: it sends me a text on WhatsApp, it shows an image from the outside, and it says, hey, a FedEx truck just pulled up, you might want to check it, you've got new mail, or something like that. And Dobby just texted me. This is completely incredible. So Dobby is in charge of the house.
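The camera pipeline described here (cheap change detection first, a heavier vision model only when something changed, then a notification) can be sketched roughly as below. This is a hedged illustration, not Karpathy's actual code: frames are modeled as flat grayscale pixel lists, and the threshold is an arbitrary assumption; a real setup would decode camera images and call out to a vision model like Qwen.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two same-sized frames."""
    assert len(frame_a) == len(frame_b)
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_change(prev_frame, new_frame, threshold=10.0):
    """True when the frames differ enough to be worth analyzing further."""
    return mean_abs_diff(prev_frame, new_frame) > threshold

def pipeline(frames, threshold=10.0):
    """Return indices of frames that should be sent on to the vision model."""
    flagged = []
    prev = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        if detect_change(prev, frame, threshold):
            # In the described setup: hand the frame to Qwen, then text the
            # summary (e.g. "a FedEx truck just pulled up") over WhatsApp.
            flagged.append(i)
        prev = frame
    return flagged
```

The point of the two-stage design is cost: the per-frame diff is nearly free, so the expensive model only runs on the handful of frames where the scene actually changed.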
I text with it through WhatsApp. And it's been really fun to have these macro actions that maintain my house. I haven't really pushed it way beyond that, and I think people are doing a lot more crazy things with it. But for me, even just the home automation setup: I used to use like six apps, completely different apps, and I don't have to use these apps anymore. Dobby controls everything in natural language. It's amazing. And so I think I haven't even pushed the paradigm fully, but already it is so helpful, and so inspiring, I would say.

Do you think that's indicative of what people want from a user-experience perspective with software? Right? Because I don't think... it's pretty much ignored that it takes humans effort to learn new software, new UIs.

Yeah, I think to some extent that's right. It's like working backwards from how people think an AI should be. Because what people have in their mind of what an AI is, is not actually what an LLM is in the raw sense. An LLM is a token generator, you know, more tokens come out. But what they think of is this persona, this identity, that they can tell stuff, and it remembers it, you know, and it's just kind of an entity behind the WhatsApp. It's a lot more understandable. So I think to some extent it's matching the expectations that humans already have for how an AI should behave, but under the hood there are a lot of technical details going into that. And LLMs are too raw of a primitive to actually type-check as "AI", I think, for most people. If that makes sense.

Yeah. I think that's how we understand what the AI is, and the description of it as Dobby or some persona obviously resonates with people. I also think that the unification that you did across your six different software systems for your home automation speaks to a different question of: do people really want all the software that we have today?

Yeah.
Right. Because I would argue, well, you have the hardware, but you've now thrown away the software, or the UX layer of it. Do you think that's what people want?

Yeah, I think there's this sense that these apps that exist only for using these smart home devices, etc., shouldn't even exist, in a certain sense. Like, shouldn't it just be APIs, and shouldn't agents just be using them directly? And I can do a lot of home automation stuff that any individual app would not be able to do, right? An LLM can actually drive the tools and call all the right tools and do pretty complicated things. And so in a certain sense it does point to this: maybe there's an overproduction of lots of custom bespoke apps that shouldn't exist, because agents kind of crumple them up, and everything should be a lot more just exposed API endpoints, with agents as the intelligent glue that actually tool-calls all the parts. Another example is my treadmill. There's an app for my treadmill, and I wanted to keep track of how often I do my cardio, but I don't want to log into a web UI and go through a flow, etc. All of this should just be: make APIs available. And this is kind of going towards the agentic web, or agent-first tools, and all this kind of stuff. So I think the industry just has to reconfigure in so many ways, because the customer is not the human anymore; it's agents who are acting on behalf of humans. And this refactoring will probably be substantial, in a certain sense. One way that people sometimes push back on this is: do we expect people to vibe code some of these tools? Do we expect normal people to do this kind of stuff that I described? But I think to some extent, this is just technology as it exists today. And right now there is some vibe coding involved, and I'm actually watching it and working with the system.
But I kind of feel like this kind of stuff that I just talked about should be free, like in a year or two or three. There's no vibe coding involved. This is trivial, this is table stakes. This is something any AI, even the open source models, etc., can do. You should be able to translate from a less technical human's intent very easily to this. Yeah, today there's vibe coding involved, and that's not something normal people are going to do. And you still have to make some design decisions, right? We were talking about, like, frames, for example. Yeah. But I kind of feel like the barrier will just come down, and it's just ephemeral software built on your behalf. Some kind of claw is handling all the details for you, but you're not involved. The claw has a machine and it will figure it out, and it's just presenting UIs and you're just saying stuff. You know?

Why haven't you, I guess, pushed the boundaries of what you can do personally with claws? Is it that you're focusing on more important projects, auto research, etc.? Or you're climbing the hill to mastery, or something else?

Yeah, I just feel like I'm so distracted by everything. So I spent like a week on the claw stuff, and I have more to do on it, almost.

It's like Jensen told us: we're all just busier. Unfortunately.

Yeah. But I will say, I didn't really take advantage of a lot of the email and calendar and all this other stuff. I didn't give it access, because I'm still a little bit suspicious, and it's still very new and rough around the edges. So I didn't want to give it full access to my digital life yet. And part of it is just the security, the privacy, and being very cautious in that realm. So some of it is held back by that, I would say. Maybe that's the dominant factor, but some of it is also that I feel so distracted, because I had a week of claw and then other stuff is happening.
I mean, you've talked about being able to train, or at least optimize, a model as a task you've wanted to see agents do for a long time. What was the motivation behind auto research?

Auto research, yeah. So I had a tweet earlier where I said something along the lines of: to get the most out of the tools that have become available, you now have to remove yourself as the bottleneck. You can't be there to prompt the next thing. You need to take yourself out of the loop. You have to arrange things such that they're completely autonomous. How can you maximize your token throughput and not be in the loop? This is the goal. And so I kind of mentioned that the name of the game now is to increase your leverage: I put in just very few tokens, just once in a while, and a huge amount of stuff happens on my behalf. And with auto research... I tweeted that, and I think people liked it and whatnot, but they haven't maybe worked through the implications of it. And for me, auto research is an example of an implication of that. It's like: I don't want to be the researcher in the loop, looking at results, etc. I'm holding the whole thing back. So the question is, how do I refactor all the abstractions so that I have to arrange it once and hit go? The name of the game is: how can you get more agents running for long periods of time, without your involvement, doing stuff on your behalf? And auto research is just: here's an objective, here's a metric, here are the boundaries of what you can and cannot do, and go.

And yeah, you were surprised at its effectiveness?

Yeah, I didn't expect it to work. So I have the project nanochat, and fundamentally, I think a lot of people are very confused by my obsession with, like, training GPT-2 models and so on.
But for me, training GPT models and so on is just a little harness, a little playground for training LLMs, and fundamentally what I'm more interested in is this idea of recursive self-improvement, and to what extent you can actually have LLMs improving LLMs. Because for all the frontier labs, this is like the thing, for obvious reasons; they're all trying to recursively self-improve, roughly speaking. And so for me, this is kind of a little playpen of that. And I had already tuned nanochat quite a bit by hand, in the good old-fashioned way that I'm used to. Like, I'm a researcher, I've done this for, you know, two decades. I have some amount of... what is the opposite of hubris?

Earned confidence.

Okay. I have like two decades of: oh, I've trained this model like thousands of times. So I've done a bunch of experiments, I've done hyperparameter tuning, I've done all the things I'm very used to and have done for two decades. And I've gotten to a certain point and I thought it was fairly well tuned. And then I let auto research go overnight, and it came back with tunings that I didn't see. And yeah, I did forget the weight decay on the value embeddings, and my Adam betas were not sufficiently tuned. And these things jointly interact, so once you tune one thing, the other things potentially have to change too. You know, I shouldn't be the bottleneck. I shouldn't be running these hyperparameter optimizations. I shouldn't be looking at the results. There are objective criteria in this case. You just have to arrange it so that it can go forever. So that's a single sort of version of auto research: a single loop trying to improve. And I was surprised that it found these things; the repo was already fairly well tuned and it still found something. And that's just a single loop.
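A minimal sketch of what such a single auto research loop could look like, under heavy assumptions: the quadratic `loss` below stands in for a real nanochat training run, and `propose` is a toy stand-in for an agent suggesting hyperparameter tweaks. None of this is the actual setup; it just illustrates the structure of "objective metric, propose, evaluate, keep improvements, no human in the loop."

```python
import random

def loss(config):
    """Toy objective: a pretend validation loss, minimized at wd=0.1, beta2=0.95."""
    return (config["wd"] - 0.1) ** 2 + (config["beta2"] - 0.95) ** 2

def propose(config, rng, scale=0.05):
    """Agent step (illustrative): perturb one knob at random."""
    knob = rng.choice(sorted(config))
    candidate = dict(config)
    candidate[knob] += rng.uniform(-scale, scale)
    return candidate

def auto_research(config, steps=500, seed=0):
    """Run unattended: keep any candidate that improves the metric."""
    rng = random.Random(seed)
    best, best_loss = dict(config), loss(config)
    for _ in range(steps):
        candidate = propose(best, rng)
        candidate_loss = loss(candidate)
        if candidate_loss < best_loss:  # objective criterion, no researcher in the loop
            best, best_loss = candidate, candidate_loss
    return best, best_loss
```

Because the acceptance criterion only ever keeps strict improvements, the loop can run overnight and the final config is guaranteed to be no worse than the hand-tuned starting point; the interesting question is how much better it gets.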
These frontier labs have GPU clusters with tens of thousands of GPUs. And so it's very easy to imagine how you would basically get a lot of this automation on smaller models. And fundamentally, everything around frontier-level intelligence is about extrapolation and scaling laws. So you basically do a ton of the exploration on the smaller models, and then you try to extrapolate out.

So you're saying our research efforts are going to get more efficient; we're going to have better direction for when we scale as well, if we can do this experimentation better.

Yeah, I would say that the most interesting project, and probably what the frontier labs are working on, is: you experiment on the smaller models, you try to make it as autonomous as possible, and you remove researchers from the loop. They have way too much... what is the opposite? Too much confidence. Yeah, they don't know. They shouldn't be touching any of this, really. And so you have to rewrite the whole thing, because right now... I mean, certainly they can contribute ideas, but okay, they shouldn't actually be enacting these ideas. There is a queue of ideas. And there's maybe an automated scientist that comes up with ideas based on all the arXiv papers and GitHub repos, and it funnels ideas in, or researchers can contribute ideas. But it's a single queue. And there are workers that pull items and try them out. And whatever works just gets put on the feature branch. And maybe some people monitor the feature branch and merge to the main branch sometimes. So yeah: removing humans from all the processes, automating as much as possible, and getting high tokens-per-second throughput. And it does require rethinking all the abstractions; everything has to be reshuffled. So yeah, I think it's very exciting.

If we take one more recursive step here, when is the model going to write a better program.md than you?

Yeah.
So program.md is... yeah, exactly. So program.md is my crappy attempt at describing how the auto researcher should work: oh, do this, then do that, and then try these kinds of ideas. And there are maybe some ideas like: look at the architecture, look at the optimizer, etc. But I just came up with this in markdown, right? And so, yeah, exactly: you want some kind of an auto research loop, maybe, and you can imagine that different program.mds would give you different progress. So basically every research organization is described by a program.md. Yeah, a research organization is a set of markdown files that describe all the roles and how the whole thing connects. And you can imagine having a better research organization. So maybe they do fewer stand-ups in the morning, because they're useless. And this is all just code, right? So one organization can have fewer stand-ups, one organization can have more. One organization can be very risk-taking, one organization can be less so. You can definitely imagine that you have multiple research orgs, and they all have code. And once you have code, then you can imagine tuning the code. So 100%, there's the meta layer of it.

Did you see my text about my contest idea? My contest idea was: let people write different program.mds, right? And for the same hardware, where do you get the most improvement? And then you can take all that data and give it to a model and say, write a better program.md.

Yes, yes. Yeah, exactly.

We're going to get something better. There's no way we don't.

You can 100% look at where the improvements came from. And, like, can I change the program.mds such that more of these kinds of things would be done? Or things that didn't work...

Meta optimization.

Yeah, you can 100% imagine doing that. So I think this is a great idea.
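The org-as-code idea above, a program.md-style description feeding a single queue of ideas that workers pull from, with winners landing on a feature branch, can be caricatured in a few lines. Everything here is illustrative: the `PROGRAM_MD` fields and the toy `evaluate` function are invented for this sketch, not taken from any real program.md.

```python
from queue import Queue

# An "organization" described as data: knobs like stand-up cadence would live
# here too, even though this toy loop only reads the idea list.
PROGRAM_MD = {
    "standups_per_week": 0,
    "ideas": ["tune optimizer", "tweak architecture"],
}

def evaluate(idea, baseline_score):
    """Toy stand-in for 'run the experiment': scores -1, 0, or +1 vs baseline."""
    return baseline_score + (len(idea) % 3) - 1

def run_org(program, baseline_score=10):
    """Workers drain the single idea queue; only improvements are kept."""
    queue = Queue()
    for idea in program["ideas"]:
        queue.put(idea)
    feature_branch = []  # improvements accumulate here, awaiting merge to main
    while not queue.empty():
        idea = queue.get()
        if evaluate(idea, baseline_score) > baseline_score:
            feature_branch.append(idea)
    return feature_branch
```

The meta-optimization step he describes would then operate one level up: run many variants of `PROGRAM_MD` on the same budget, compare what each feature branch delivered, and let a model rewrite the org description itself.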
But it's like, you know, I think you go one step at a time, where you have one process, and then a second process, and then the next process. And these are all layers of an onion. The LLM part is now taken for granted. And it's just... it's a little too much, you know. But I mean, this is why I guess the psychosis is that this is like infinite and everything is a skill issue. And that's why, yeah, coming back to it, this is why it's so insane.

Okay. Well, if we're just trying to diagnose the current moment and what is a relevant skill right now, what do you think is the implication? That this is the loop we should be trying to achieve in different areas, and that it works? Right? Like, create the metric, or create the ability for agents to continue working on it without you. Do we still have performance engineering?

Yeah, I mean, there are a few caveats that I would put on top of the LLM psychosis. Number one, this is extremely well suited to anything that has objective metrics that are easy to evaluate. So for example, writing more efficient CUDA kernels for various parts of the model: that's the perfect fit. Because you have inefficient code and you want efficient code that has the exact same behavior but is much faster. Perfect fit. So a lot of things are a perfect fit for auto research, but many things will not be. If you can't evaluate it, then you can't auto-research it, right? So that's caveat number one. And then maybe caveat number two, I would say, is: we're kind of talking about the next steps, and we kind of see what the next steps are, but fundamentally the whole thing is still kind of bursting at the seams a little bit, and there are cracks, and it doesn't fully work.
And if you try to go too far ahead, the whole thing is actually net not useful, if that makes sense. Because these models still are not... you know, they've improved a lot, but they're still rough around the edges, is maybe the way I would describe it. I simultaneously feel like I'm talking to an extremely brilliant PhD student who's been a systems programmer for their entire life, and a 10-year-old. And it's so weird, because in humans these abilities are more coupled; you wouldn't encounter that combination. This jaggedness is really strange, and humans have a lot less of that kind of jaggedness, although they definitely have some. But... sorry, the agents have a lot more jaggedness, where sometimes, you know, I ask for functionality and it comes back with something that's just totally wrong. And then we get into a loop where it's totally wrong, and I just get so frustrated with the agents, all the time, still. Because you feel the power of it, but it also still does not-so-cool things once in a while for me as well.

I get very annoyed when I feel like the agent wasted a lot of compute on something it should have recognized was the obvious problem.

Yeah. I think some of the bigger things, maybe what's underneath it, if I can try to hypothesize, is that fundamentally these models are trained via reinforcement learning. So they're actually struggling with the exact same thing we just talked about, which is that the labs can improve the models on anything that is verifiable with hard rewards. Did you write the program correctly, and do the unit tests check out, yes or no?
But some of the things where they're struggling: for example, I think they have a tough time with the nuance of what I had in mind or what I intended, and when to ask clarifying questions. Anything that feels softer is worse. And so you're either on rails and you're part of the superintelligence circuits, or you're not on rails and you're outside of the verifiable domains, and suddenly everything just kind of meanders. Maybe another way to put it: if today you go to a state-of-the-art model, ChatGPT, and you ask it, tell me a joke. Do you know what joke you're going to get? The joke. I can't tell you the standard form of it, but I do feel like ChatGPT has like three jokes. Yeah. So the joke that apparently all the LLMs like the most is: why do scientists not trust atoms? Because they make everything up. They make everything up. So this is the joke you would get three or four years ago, and this is the joke you still get today. Even though the models have improved tremendously. Yeah. And if you give them an agentic task, they will just go for hours and move mountains for you. Mm-hmm. And then you ask for a joke and it's a stupid joke. A crappy joke from five years ago. And it's because it's outside of the RL, outside of their reinforcement learning, outside of what's being improved. And it's part of the jaggedness: shouldn't you expect models, as they get better, to also have better jokes, or more diversity of them? It's just not being optimized, and it's stuck. Do you think the implication is that we're not seeing generalization, in the sense of broader intelligence, of joke smartness being attached to code smartness? Yeah.
I think there's some decoupling, where some things are verifiable and some things are not, and some things are optimized for by the labs, depending on what data went in, and some things are not. But I mean, there's a premise from some research groups that if you are smarter at code generation, or in these very verifiable fields, you should be better at everything. Yeah. And the joke situation suggests that that's not happening. I don't think that's happening. Okay. Yeah. I think maybe we're seeing a little bit of that, but not a satisfying amount. Yeah. That jaggedness exists in humans too. You can be very, very good at one thing and still tell really bad jokes. Yeah, that's true. But it still means we're not getting, like, the story is that we're getting a lot of the intelligence and capabilities across all the domains of society for free as we get better and better models, and that's not exactly, fundamentally, what's going on. There are some blind spots, and sometimes some things are not being optimized for, and this is all clustered up in these opaque neural net models, right? So you're either on the rails of what it was trained for, and you're going at the speed of light, or you're not. Hence the jaggedness. So that's why I think, even though the progression is obvious and you can see what should happen, you can't let it fully go there yet, because it doesn't fully work. Or it's a skill issue and we just haven't figured out how to use it. It's hard to tell. Can I ask kind of a blasphemous question, which is: if this jaggedness is persisting, and it's all rolled up in a monolithic interface, right, a single model.
Does that make sense, or should it be unbundled into things that can be optimized and improved against different domains of intelligence, like unbundling the models into multiple experts in different areas, etc.? More directly, yeah, instead of just an MoE that we have no exposure to. Because it can be confusing as a user from the outside: why is it so good at this but not at the other thing? Yeah, I think currently my impression is the labs are trying to have a single sort of monoculture of a model that is all-around intelligent in all these different domains, and they just stuff it into the parameters. I do think we should expect more speciation in the intelligences. Like, you know, the animal kingdom is extremely diverse in the brains that exist, and there are lots of different niches of nature, and some animals have an overdeveloped visual cortex or other parts. And I think we should see more speciation. You don't need this oracle that knows everything. You speciate it, and then you put it on a specific task. And we should be seeing some of that, because you should be able to have much smaller models that still have the cognitive core, like they're still competent, but then they specialize. And then they become more efficient in terms of latency or throughput on specific tasks that you really care about. Like if you're a mathematician working in Lean, I saw, for example, a few releases that really target that as a domain. So there are probably going to be a few examples like that where the unbundling kind of makes sense. One question I have is whether the capacity constraint on available compute infrastructure drives more of this, because efficiency actually matters more, right? Like, putting the financing aside, the financing involved in all of this, if you had access to ample compute for anything you do, you could serve even one single model for everything, right?
But if you actually feel pressure, like, I can't serve a model of massive size for every use case. Do you think that leads to any speciation? Does that question make sense to you? The question makes sense. And I guess what I'm struggling with is, I don't think we've seen too much speciation just yet, right? No. We're seeing a monoculture of models. Yeah. And there's clearly pressure to, like, make a good code model, then merge it back into the main model again. Yeah, yeah. Even though there already is pressure on the models. Mm-hmm. I guess I feel like there's a lot of very short-term supply crunch, and maybe that causes more speciation now. Yeah. I think fundamentally the labs are serving a model, and they don't really know what the end user is going to be asking about. So maybe that's some part of it, because they kind of have to multitask over all the possible things that could be asked. But I think if you're coming to a business, and maybe partnering on some specific problems you care about, then maybe you would see that there. Or there would be some very high-value applications that are more niche. But I think right now they're kind of going after the totality of what's available. I also don't think the science of manipulating the brains is fully developed yet. What do you mean, manipulating? So, like, fine-tuning without losing capabilities, as an example. We don't have the primitives for actually working with the intelligences in ways other than just context windows. Like, context windows kind of just work, and they're very cheap to manipulate, etc. This is how we're getting some of the customization, etc.
But I think it's a bit more of a developing science: how you more deeply adjust the models, how you have continual learning maybe, or how you fine-tune in a certain area, how you get better in a certain area, like how you actually touch the weights, not just the context windows. And it's a lot more tricky, I would say, to touch the weights than the context windows, because you're fundamentally changing the full model and potentially its intelligence. So maybe it's just not a fully developed science yet, which limits the speciation. And it also has to be cheap enough for that speciation to be worthwhile in these given contexts. Can I ask a question about an extension to auto-research that you described, in terms of opening it up? You say, okay, well, we have this thing; we need more collaboration surface around it, essentially, for people to contribute to research overall. Can you talk about that? Yeah. So we talked about auto-research as a single thread of, I'm going to try stuff in it, but fundamentally the parallelization of this is the interesting component. And I was trying to play around with a few ideas, but I don't have anything that clicks as simply as, I don't know, something I'm super happy with just yet. But it's something I'm working on on the side when I'm not working on my claw. So I think one issue is, if you have a bunch of nodes of parallelization available to you, then it's very easy to just have multiple auto-researchers talking through a common system or something like that. What I was more interested in is how you can have an untrusted pool of workers out there on the internet. So for example, in auto-research, you're just trying to find the piece of code that trains a model to a very low validation loss. If anyone gives you a candidate commit, it's very easy to verify that that commit is correct. It's good.
Like, someone on the internet could claim that this piece of code will optimize much better and give you much better performance. You can just check. Yeah. Very easy. Well, probably a lot of work goes into that checking, because fundamentally they can lie, etc. So you're basically dealing with a similar kind of thing; it actually looks a little bit like, my designs that incorporate an untrusted pool of workers look a little bit more like a blockchain, because instead of blocks, you have commits, and these commits can build on each other, and they contain changes to the code as you're improving it. And the proof of work is basically doing tons of experimentation to find the commits that work. And that's hard. And then the reward is just being on the leaderboard; right now there's no monetary reward whatsoever. I don't want to press the analogy too far, but it fundamentally has this property where a lot of search goes into it, but it's very cheap to verify that a candidate solution is indeed good, because you can just run a single training. You know, someone had to try 10,000 ideas, but you just have to check that the thing they produced actually works, because the other 9,999 didn't. So basically, long story short, you have to come up with a system where an untrusted pool of workers can collaborate with a trusted pool of workers that do the verification. And the whole thing is kind of asynchronous and works and so on. And it's safe from a security perspective, because if anyone can send you arbitrary code and you're going to run it, that is very sketchy and dodgy. But fundamentally, it should be totally possible. So, you're familiar with projects like SETI@home and Folding@home? All of these problems have a similar kind of setup. So in Folding@home, you're folding a protein.
It's very hard to find a configuration that is low energy, but if someone finds a configuration that evaluates to low energy, that's perfect. You can just use it. You can easily verify it. So a lot of things have this property: very expensive to come up with, but very cheap to verify. And in all those cases, things like Folding@home, or SETI@home, or auto-research-at-home, will be good fits. And so, long story short, a swarm of agents on the internet could collaborate to improve LLMs, and could potentially even run circles around frontier labs. Who knows, maybe that's even possible. Frontier labs have a huge amount of trusted compute, but the earth is much bigger and has a huge amount of untrusted compute. And if you put systems in place that deal with this, then maybe it is possible that the swarm out there could come up with better solutions. And people kind of contribute cycles to a thing that they care about. And so, sorry, the last thought is: lots of companies or whatnot could maybe have their own things that they care about. And if you have compute capacity, you could contribute to different kinds of auto-research tracks. Like maybe you care about a certain cause, a certain type of thing; you don't have to just donate money to an institution. You actually could purchase compute, and then you could join the auto-research swarm for that project, you know. So if everything is re-bundled into auto-researchers, then compute becomes the thing that you're contributing to the pool. Yeah, that's very inspiring, and it's also interesting. Like, I don't know how far this goes. Yeah. It is interesting that at least some audience of people, you know, here in Silicon Valley, or lining up at retail stores in China, have discovered that having access to personal compute is interesting again. Yeah. Right.
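The "expensive to come up with, cheap to verify" loop between untrusted workers and a trusted verifier can be sketched in a few lines. Everything here is a hypothetical toy: the "validation loss" is a stand-in for a full training run, the parameter vectors stand in for code commits, and the worker and verifier names are made up for illustration, not from the conversation.

```python
import random

# Toy stand-in for "train the model and measure validation loss": a submitted
# "commit" is just a parameter vector, and the loss is a simple function of
# it. In the real setting, this one evaluation is a training run.
def validation_loss(params):
    return sum((p - 0.5) ** 2 for p in params)

def untrusted_search(seed, n_tries=1000):
    """An untrusted worker burns lots of compute searching for a good
    candidate: the expensive, proof-of-work-like part."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_tries):
        cand = [rng.random() for _ in range(4)]
        if best is None or validation_loss(cand) < validation_loss(best):
            best = cand
    return {"params": best, "claimed_loss": validation_loss(best)}

def verify(submission, best_so_far, tol=1e-12):
    """The trusted side is cheap: one evaluation, never trusting the claim."""
    actual = validation_loss(submission["params"])
    if abs(actual - submission["claimed_loss"]) > tol:
        return False  # the worker lied about its score
    return actual < best_so_far  # accept only genuine improvements

# Three untrusted workers submit; the leaderboard only ever improves.
best_so_far = float("inf")
leaderboard = []
for seed in range(3):
    sub = untrusted_search(seed)
    if verify(sub, best_so_far):
        best_so_far = sub["claimed_loss"]
        leaderboard.append((best_so_far, seed))

# A dishonest submission claiming an impossible score is rejected outright.
liar = {"params": [0.9, 0.9, 0.9, 0.9], "claimed_loss": -1.0}
print(verify(liar, best_so_far))  # → False
```

The asymmetry is the whole trick: each worker does thousands of evaluations, the verifier does one per submission, so a small trusted pool can police an arbitrarily large untrusted swarm. (The sketch ignores the hard part noted in the conversation, which is sandboxing arbitrary untrusted code before running it.)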
So maybe they're really motivated to do that for their claws, and then they can contribute to auto-research. It's almost like: dollars are the thing everyone cares about today, but are FLOPs the thing that everyone actually cares about in the future? Like, is there going to be a flipping, almost, of what's the thing that you care about? It's really hard to get compute even if you have money. Yeah. So actually, it almost seems like the FLOP is dominant in a certain sense. Yeah. So maybe it's kind of like that: how many FLOPs do you control, instead of what wealth do you control? I don't actually think that's true, but it's kind of interesting to think about. The last thing you released was a little bit of jobs data analysis. Yeah. Is that right? You were just visualizing some public data. Yeah. What were you curious about? Yeah, I guess I was curious. I mean, everyone is really thinking about the impacts of AI on the job market and what it's going to look like. So I was just interested to take a look: what does the job market look like? Where are the different roles? How many people are in different professions? And I was really just interested to look through the individual cases and try to think for myself about, you know, these AIs and how they're likely to evolve. Are these going to be tools that people are using? Are these going to be displacing tools for these professions? Where are the current professions, and how are they going to change? Are they going to grow or adjust to a large extent, or what could be new professions? So it's really just a way to fuel my own chain of thought about the industry, I suppose. And so, yeah, the jobs data is basically just the Bureau of Labor Statistics.
They actually have a percent outlook for each profession, about how much it's expected to grow over the next, I think, almost a decade. Yeah, I think it's a decade, but it was made in 2024. We need a lot of healthcare workers. Yeah, so they've already made those projections, and I'm not actually 100% sure what methodology went into them. I guess I was interested to color things by: if people think that what's primarily being developed now is this kind of more digital AI, they're almost like these ghosts or spirit entities that can interact in the digital world and manipulate a lot of digital information. And they currently don't really have a physical embodiment or presence, and the physical stuff is probably going to go slower, because you're manipulating atoms. Flipping bits, and the ability to copy-paste digital information, makes everything a million times faster, energetically, than accelerating matter. So I just think we're going to see a huge amount of activity in digital space: a huge amount of rewriting, a boiling soup of activity. And I think we're going to see something in digital space that goes at the speed of light compared to what's going to happen in the physical world, to some extent. That would be the extrapolation. And so I think there's currently kind of an overhang, where there can be a lot of unhobbling, potentially, of a lot of digital information processing that used to be done by computers and people. And now AI is a third kind of manipulator of digital information. There's going to be a lot of refactoring in those disciplines. But the physical world is actually going to be behind that by some amount of time. And so what's really fascinating to me, and why I was highlighting them, is the professions that fundamentally manipulate digital information.
This is work you could do from your home, etc. Because I feel like those are the things that will change. And it doesn't mean that there are going to be fewer of those jobs or more of those jobs, because that has to do with demand elasticity and many other factors, but things will change in these professions, because of these new tools and because of this upgrade to the nervous system of the human superorganism, if you want to think about it that way. Given the look you had at the data, do you have any observations or guidance for people facing the job market, or thinking about what to study now or what skills to develop? I mean, anecdotally, I'm very thankful that I have to meet people for my job right now. Yeah. More physical. Yeah. Could you do your work from home, though? I could. I think there are relationship parts of it that are hard, but most of it I could. I think it's really hard to tell, because again, the job market is extremely diverse. The answers will probably vary. But to a large extent, these tools are extremely new and extremely powerful, and so just trying to keep up is the first thing. Because I think a lot of people kind of dismiss them, or they're afraid of them, etc. We should totally understand them, of course. Yeah. I think it's fundamentally an empowering tool at the moment. And these jobs are bundles of tasks, and some of those tasks can go a lot faster. So people should think of it primarily as the tool that it is right now. And the long-term future of that is uncertain. Yeah, it's really hard to forecast, to be honest, and I'm not professionally doing that; that's the job of economists to do properly. You are an engineer, though. And one thing I thought was interesting is that the demand for engineering jobs is continuing to increase. Yeah. I can't tell if that's a temporary phenomenon.
I'm not sure how I feel about it. Do you know? Yeah. It's almost like software was scarce, right? The reason we don't have more software is just scarcity; it's too expensive. Too expensive. So the barrier comes down, and then actually you have the Jevons paradox, which is that the demand for software actually goes up. It's cheaper and more powerful. Yeah. The classical example of this is always the ATMs and the bank tellers. There was a lot of fear that ATMs and computers would displace tellers. But what happened is they made the cost of operating a bank branch much cheaper, so there were more bank branches, and so there were more tellers. That's the canonical example people cite. But basically it's just the Jevons paradox: something becomes cheaper, so there's a lot of unlocked demand for it. So I do have a cautiously optimistic view of this in software engineering, where it does seem to me that the demand for software will be extremely large, and it's just become a lot cheaper. And so for quite some time, it's very hard to forecast, but it does seem to me that right now, at least locally, there's going to be more demand for software. Because software is amazing. It's digital information processing. You're not forced to use arbitrary tools that were given to you, with their imperfections in various ways. You're not forced to subscribe to what exists. Code is now ephemeral: it can change and it can be modified. And so I think there's going to be a lot of activity in the digital space to rewire everything, in a certain sense, and I think it's going to create a lot of demand for this kind of stuff.
I think long term, yeah, obviously, even with auto-research, like OpenAI or Anthropic or these other labs, they're employing what, like a thousand-something researchers, right? These researchers are basically, you know, automating themselves away, actively. And this is the thing they're all trying to do. I'd wager some of those researchers also feel the psychosis, right? Yeah, it's working. Yeah. And so they're like, oh, it's over for me too. I just spent a bunch of time going around OpenAI, and I was like, you guys realize if this is successful, we're all out of a job? Like, we're just building automation for Sam or something like that. Or the board, I'm not sure. We're building this automation for the board or the CEO or something like that, and we're all out of our jobs, and maybe contributing on the side. And so, yeah, it's kind of morbid from that perspective. Is it okay if I ask you kind of an obvious question? You could be doing that, right? Auto-researching with a lot of compute scale and a bunch of colleagues at one of the frontier labs. Why not? Well, I was there for a while, right? And I did re-enter. So to some extent, I agree. I think there are many ways to slice this question. It's a very loaded question, all of it. I will say that I feel very good about what people can contribute, and their impact, outside of the frontier labs. Obviously in the industry, but also in more ecosystem-level roles. So your role, for example, is more ecosystem-level. My role currently is also kind of more ecosystem-level. And I feel very good about the impact that people can have in those kinds of roles. I think conversely, there are definite problems, in my mind, with aligning yourself way too much with the frontier labs, too.
So fundamentally, I mean, you have a huge financial incentive tied up with these frontier labs. And by your own admission, the AIs are going to really change humanity and society in very dramatic ways. And here you are, basically building the technology and benefiting from it, and being very allied to it through financial means. This was a conundrum that was at the heart of how OpenAI was started in the beginning; this was the conundrum we were trying to solve. And it's not as if the conundrum is totally resolved. So that's number one. You're not a completely free agent, and you can't actually be part of that conversation in a fully autonomous, free way. If you're inside one of the frontier labs, there are some things that you can't say. And conversely, there are some things that the organization wants you to say. And, you know, they're not going to twist your arm, but you feel the pressure of what you should be saying. Because otherwise, it's really awkward conversations, strange side-eyes, what are you doing? So you can't really be an independent agent. And I feel, in a certain sense, a bit more aligned with humanity outside of the frontier lab, because I'm not subject to those pressures; I can say whatever I want. In the frontier labs, you can have impact there too, of course. There are many researchers, and maybe you're one of them, maybe your ideas are really good, etc. And maybe there's a lot of decision-making to do, and you want to be in the room when those conversations come up. I do think that currently the stakes are overall fairly low, and so everything is kind of nice. But ultimately, at the end of the day, when the stakes are really high, etc.
If you're an employee at an organization, I don't actually know how much sway you're going to have over what that organization is going to do, fundamentally, at the end of the day. You're not really in charge. You're in a room and you're contributing ideas, but you're not really in charge of that entity that you're part of. So those are some sources of misalignment. I will say that in one way I do agree a lot with that sentiment: the labs, for lack of a better word, are opaque, and a lot of the work is there. They're at the edge of capability, of what's possible, and they're working on what's coming down the line. And I think if you're outside of a frontier lab, your judgment fundamentally will start to drift, because you're not part of what's coming down the line. And so I feel like my judgment will inevitably start to drift as well, and I won't actually have an understanding of how these systems actually work under the hood, and that's no small thing. I won't have a good understanding of how it's going to develop, etc. And so in that sense, I agree, and it's something I'm nervous about. I think it's worth basically being in touch with what's actually happening, and actually being in a frontier lab. And if some of the frontier labs would have me, coming in for some amount of time and doing really good work for them, and then maybe coming out again, that would be super exciting. I think that's maybe a good setup, because maybe that's one way to actually be connected to what's actually happening, but also not feel like you're necessarily fully controlled by those entities. So honestly, in my mind, someone can probably do extremely good work at OpenAI, but also their most impactful work could very well be outside of it. Now, that's a call to be an independent researcher. I am a researcher.
Yeah, there are many things to do on the outside, and I think ultimately the ideal solution maybe is going back and forth. And I think fundamentally you can have really amazing impact in both places. So, very complicated. I don't know, it's a bit of a loaded question. But I mean, I joined a frontier lab, then the outside, and then maybe in the future I'll want to join again. And I think that's kind of how I look at it. One question, related to what visibility the world or the AI ecosystem has into the frontier: how close open source is to the frontier, and how sustainable that is. I think it is quite surprising, the entire sequence of events, actually, from having a handful of Chinese models and global models. I think people continue releasing models here in the near term that are closer to the frontier than much of the industry anticipated, from a capability perspective. I don't know if you're surprised by that. You're a long-term contributor to open source; what's your prediction here? Yes, so roughly speaking, the closed models are ahead, but people are monitoring the number of months that the open-source models are behind. It started with basically nothing, and then it went to 18 months. Yeah, and now maybe they're behind by, what is the latest, maybe like eight months right now. Yeah. I'm a huge fan of open source, obviously. So for example, in operating systems, you have closed ones, like Windows and macOS, which are large software projects, kind of like what LLMs are going to become, and then there's Linux. And Linux is actually an extremely successful project. It runs on the vast majority of computers. Last time I checked, was it like 60% or something that run Linux? And that's because there's a need in an industry to have a common open platform that everyone feels sort of safe using.
I would say the industry has always felt a demand for that kind of project to exist. And I think the same is true now; there's demand for this kind of thing to exist. The big difference is that everything is capital intensive; a lot of capital goes into this. So I think that's where it's a little bit harder for open source to compete in this sort of stuff. I do think that the current models are very good. The other thing that I think is really interesting is that for the vast majority of consumer use cases and things like that, even the current open-source models are actually quite good, I would say. And I think if you go forward more years, it does seem to me like a huge amount of simple use cases are going to be well covered, and actually even run locally. But there's always going to be some demand for frontier intelligence, and that can actually be an extremely large piece of the pie. But it could be that the need for frontier intelligence is going to be Nobel Prize kind of work. Or like, let's move Linux from C to Rust. It's going to be bigger projects, scoped in that kind of way. And maybe that's where a lot of the frontier closed intelligence is what we're going to be interacting with, and open source is going to eat through a lot of the more basic use cases, or something like that. At some point, what is frontier today, in terms of what I'm using right now from the closed labs, might be open source, probably later this year. And that's going to be doing a lot of work. So I kind of expect that this dynamic will basically continue. Like, we'll have frontier labs that have closed AIs that are kind of like these oracles, and then we'll have open source behind by some number of months. And I kind of expect that to continue. And I actually think that's a pretty good setup overall.
Because I'm a little bit hesitant about, I don't actually think it's structurally good; I think there's some systemic risk attached to just having intelligences that are closed, and that's it. Centralization has a very poor track record, in my view. You mean like in political or economic systems in general? Yes, exactly. Speaking as an Eastern European, there's a lot of history there, a lot of pretty bad outcomes. So I want there to be a thing that is maybe not at the edge of capability, because that's new and unexplored, etc., but I want there to be a thing that's behind, and that is kind of a common working space for intelligences that the entire industry has access to. Yeah, that seems to me like a pretty decent power balance for the industry. Yeah. I also think there are just many problems to solve. Like, if you keep advancing intelligence at the frontier, we can do new things, and there are a lot of very big problems for humanity. And so it seems that that will continue to be a very expensive game. And so I want to root for labs that are doing that, because there are problems we cannot solve without continuing to advance the models in a very expensive way. And yet, as you point out, if what we have today as frontier is open, that's a lot of capability. And so the power of that, or the democratization of that, seems very useful and also healthy. Yeah. I think basically by accident, we're actually in an okay spot. Maybe not optimal, but by accident we happened to be in a good spot, in a certain sense. And to some degree, the longer this dynamic endures, the healthier a spot the ecosystem might be in, right? Because you have more and more area under the curve.
And I will say that even on the closed side, I almost feel like it's been further centralizing recently, because I think a lot of the front runners are not necessarily the top tier. So in that sense, it's not super ideal. I would love there to be more frontier labs, because by default I'm very suspicious of that. I want there to be more people in the room. In machine learning, ensembles always outperform an individual model, and so I want there to be ensembles of people thinking about all the hardest problems, and ensembles of people in the room who are all well-informed and making the best decisions. I don't want it to be behind closed doors with one or two people; I feel like that's not a good future. I almost wish there were more labs, is the long story short. And I do think that open source has a role to play. I hope it sticks around, and being currently slightly behind is actually kind of a good thing. OK. You worked on the precursor to generalized robotics: autonomy in cars, right? A lot has happened in the last couple of months with robotics companies as well: an acceleration of really impressive generalization across environments and tasks, increasingly long-horizon tasks, lots of money going into the space. Is it going to happen? Has anything in your view changed recently? So my view is kind of informed by what I saw in self-driving, and I do feel like self-driving is the first robotics application. What I saw at the time, like ten years ago, is that there were a large number of startups, and most of them basically didn't make it long-term. A lot of capital expenditure had to go in, and a lot of time. So for robotics, because it's so difficult and so messy and requires a huge amount of capital investment and a lot of conviction, it's just a big problem.
And I think atoms are really hard. So I feel like robotics will lag behind what's going to happen in digital space, and digital space is going to see a huge amount of unhobbling: things that weren't super efficient becoming a lot more efficient, by a factor of 100, because bits are so much easier. So in terms of what's going to change and where the activity is, I feel like digital space is going to change a huge amount, and the physical space will lag behind. What I find very interesting is the interface between them, because if we do have more agents acting, we'll have humans and agents talking to each other, doing tasks, participating in the kind of economy of agents, et cetera. You're going to run out of things you can do purely in digital space. At some point you have to go to the universe and ask it questions. You have to run an experiment and see what the universe tells you back, to learn something. We currently have a huge amount of digital work, because there's an overhang in how much we've collectively thought about what already is digital: we just didn't have enough thinking cycles among the humans to process all the information that's already digital and already uploaded. So we're going to start running out of stuff that's already uploaded. At some point you've read all the papers, processed them, and have some ideas about what to try. And I don't actually know how far you can get with an intelligence that's fully closed off and only has the information it was seeded with. So I think what's going to happen is, first, there's going to be a huge amount of unhobbling, and there's a huge amount of work there. Then it's going to move to the interfaces between physical and digital.
So that's sensors for seeing the world and actuators for doing something to the world. I think a lot of interesting companies will actually come from that interface: can we feed the superintelligence data, and can we take data out and manipulate the physical world? That's the input and output, if you want to anthropomorphize the whole thing. And then the physical world itself, I almost feel like the total addressable market there, in terms of the amount of work and so on, is massive, possibly even much larger than what can happen in digital space. So I actually think it's a much bigger opportunity as well, but it's a huge amount of work, and in my mind the atoms are just like a million times harder. So it will lag behind, but it's also, I think, a somewhat bigger market. So the opportunities follow that kind of trajectory: right now the digital side is my main interest, then the interfaces would be after that, and then the physical things, whose time will come and which will be huge when it does. Well, it's an interesting framework too, because certain things, not the things I'm working on right now, but certain things are much easier even in the world of atoms. If you just think about read and write to the physical world, like reading sensors and cameras, there's a lot of existing hardware, and you can imagine enriching agent capabilities or capturing a lot of new data if you're just clever about it. You don't necessarily have to invest a lot to get something valuable. Yeah. Examples of this that I saw: a friend of mine, Liam, is the CEO of Periodic Labs. I visited them last week, so it's top of mind. They're trying to do auto-research for materials science.
And in that case, the sensors for the intelligence are actually pretty expensive lab equipment. The same is true in biology: a lot of people are very interested in engineering biology, and the sensors will be more than just video cameras, if that makes sense. The other thing I saw, for example, is companies that basically let you pay people for training data, as an example, to feed the models. These are all examples of sensors in a certain sense, and they take many diverse shapes and forms, if that makes sense. Yeah. So I'm looking forward to the point where I can ask for a task in the physical world, put a price on it, and just tell the agent: you figure out how to do it, you can go get the data. I'm actually kind of surprised we don't have more information markets. For example, Polymarket or other betting markets, or even stocks, et cetera: they have so much autonomous activity, and a rising amount of it. If something was happening in Iran right now, how come there isn't a process where taking a photo or video from somewhere in Iran costs like ten bucks? Someone should be able to pay for that. That's an example of feeding the intelligence. There's not going to be a human looking at that; it's going to be agents who are trying to play the betting markets and stock markets and so on. I feel like the agentic economy is still fairly new, so there are no mechanisms for this yet, but this is an example of what I think might happen. There's a good book that's maybe inspiring, called Daemon; you've possibly read it. In Daemon, the intelligence ends up almost puppeteering humanity a little bit, in a certain sense. Humans are kind of like its actuators, and humans are also its sensors.
And so I think collectively, society will kind of reshape to serve that. That will end up happening across the industry: there's just a lot more automation, it has certain needs, and humans will be serving the needs of that machine, not necessarily each other. Well, on this very specific point of missing pieces of training data: we'd need something like auto-research, right? We'd need the training cycle, the SFT pipelines, to be far more mechanized, in order to take the human out of the loop, so you could ask for a task that is just "improve my model quality with new data," right? Yes. Does that make sense to you? If you can't have the model do the training runs by itself, then your ability to do this as a closed-loop task, by pricing data, is more challenged. Yes, yes, 100%. But no, LLM training actually fits the paradigm really well, really easily: all the optimization of the code so it runs faster, and then you also have metrics that you can optimize against. I do think that if you had an autonomous loop over those metrics, there's going to be a lot of Goodharting going on, where the system will overfit to those metrics. But then you can use the system to build more metrics, so you get really good coverage. So it's kind of hard to tell, but in a certain sense it's a pretty good fit. I want to talk about a little tiny side project you have before we end. Tell me about microGPT. Oh, yeah. Okay. So microGPT.
So I have this running obsession, of maybe a decade or two, of simplifying and boiling LLMs down to their bare essence, and I've had a number of projects along these lines, like nanoGPT and makemore and micrograd, et cetera. I feel like microGPT is now the state of the art of me trying to boil it down to just the essence. Because the thing is, training neural nets, and LLMs specifically, is a huge amount of code, but all of that code is actually complexity from efficiency: it's just because you needed it to go fast. If you don't need it to go fast and just care about the algorithm, then the algorithm actually is 200 lines of Python, very simple to read, and this includes comments and everything. You just have your dataset, which is text; you need your neural network architecture, which is like 50 lines; you need to do your forward pass; and then you have to do your backward pass to calculate the gradients, so an autograd engine to calculate the gradients, like 100 lines; and then you need an optimizer, and Adam, for example, which is a pretty state-of-the-art optimizer, is again like 10 lines, really. So putting everything together in the training loop is, yeah, 200 lines. And here's what's interesting: a year or more ago, if I had come up with microGPT, I would have been tempted to explain it to people, like with a video stepping through it or something like that. And I actually tried a little bit to make that video, and to make a little guide to it and so on. But I realized that this is not really adding much, because it's already so simple, it's 200 lines, that anyone could ask their agent to explain it in various ways. I'm not explaining to people anymore; I'm explaining it to agents.
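The claim that the backward pass fits in about a hundred lines is easy to believe from a sketch like the following (my own illustration in the spirit of micrograd, not code from microGPT itself): a scalar Value records how it was produced, and backward() replays the chain rule in reverse topological order.

```python
# Minimal scalar autograd engine, micrograd-style (illustrative sketch).
# Each arithmetic op builds a node that knows how to push gradients
# back to its inputs; backward() visits nodes in reverse topo order.
class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None  # leaves have nothing to do

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological sort so each node's grad is final before its inputs'
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x, y = Value(2.0), Value(3.0)
loss = x * y + x       # loss = x*y + x
loss.backward()
print(x.grad, y.grad)  # 4.0 2.0, i.e. d/dx = y + 1, d/dy = x
```

A real engine adds a handful more ops (pow, exp, tanh, division), but the structure stays exactly this: an overloaded operator per op, each closing over a tiny local derivative rule.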
If you can explain it to agents, then agents can be the router, and they can target it to the human in their language, with infinite patience, at their level of capability, and so on. Right, if I don't understand this particular function, I can ask the agent to explain it to me three different ways. Yeah. And I'm not going to get that from you. Exactly. So I kind of feel like, you know, education used to be guides, it used to be lectures, it used to be this thing. But now I'm more explaining things to agents, and maybe I'm coming up with skills, where a skill is basically just a way to instruct the agent how to teach the thing. So maybe I could have a skill for microGPT with the progression I imagine the agent should take you through if you're interested in understanding the code base. It's just hints to the model: oh, first start off with this, and then with that. So I could script the curriculum a little bit as a skill. So I feel like there's going to be less explaining things directly to people, and it's going to be more about whether the agent gets it; if the agent gets it, they'll do the explanation. And we're not fully there yet, because I still think I can probably explain things a little bit better than the agents, but the models are improving so rapidly that I feel like it's a losing battle to some extent. So I think education is going to be reshuffled by this quite substantially; it's the end of teaching each other things directly, a little bit. If I have a library of code, for example, it used to be that you'd have documentation for the other people who are the users of the library, but you shouldn't do that anymore.
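A skill of the kind described here could be as simple as a markdown file of teaching hints for the agent. The following is a hypothetical sketch; the filename, headings, and exact format depend entirely on the agent harness in use, and the step breakdown just mirrors the components of microGPT mentioned above.

```markdown
# Skill: teaching microGPT

When a user asks to understand this code base, walk them through it
in this order, checking understanding before moving on:

1. The dataset: how raw text becomes a stream of tokens.
2. The model architecture (~50 lines): embeddings, attention, MLP.
3. The forward pass: from tokens to a loss.
4. The backward pass: the ~100-line autograd engine and the chain rule.
5. The Adam optimizer (~10 lines) and the training loop tying it together.

Adapt depth and vocabulary to the user. If they struggle with a step,
re-explain it a different way before continuing.
```

The point is that the human author contributes only these few bits of curriculum; the agent supplies the patience and the per-learner phrasing.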
Instead of HTML documents for humans, you should have markdown documents for agents, because if the agents get it, they can explain all the different parts of it. It's this redirection through agents, you know, and I think we're going to see a lot more of that playing out. We'll see if the great teachers now develop intuition for how to explain things to agents. Ultimately. So for example, with microGPT, I tried to get an agent to write microGPT. I told it to try to boil it down to the simplest possible thing, and it can't do it. MicroGPT is like the end of my obsession. It's the 200 lines. I thought about this for a long time. This is the solution; trust me, it can't get simpler. And this is my value add. Everything else, the agent gets: it just can't come up with it, but it totally gets it and understands why it's done a certain way, et cetera. So my contribution is kind of these few bits, but everything else, in terms of the education that goes on after that, is not my domain anymore. So maybe education changes in those ways, where you have to infuse the few bits that you feel strongly about: the curriculum, or the better way of explaining it, or something like that. The things that agents can't do are your job now. The things that agents can do, they can probably do better than you, or will very soon, and so you should be strategic about what you actually do. Well, we appreciate the few bits. Thank you, Andrej. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen; that way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Podcast Summary

Key Points:

  1. AI coding agents have dramatically shifted software development workflows, enabling engineers to delegate most coding tasks rather than writing code manually.
  2. The current limitation is often a "skill issue"—users need to improve their ability to instruct and manage multiple agents effectively, not a lack of agent capability.
  3. Agents like "claws" can autonomously handle complex, persistent tasks (e.g., home automation) by integrating various APIs and systems through natural language.
  4. The future points toward an "agent-first" web, where software is designed as APIs for AI agents to use, potentially reducing the need for many standalone human-facing apps.
  5. Personality and interaction design in agents significantly impact user experience and effectiveness, making them feel more like collaborative teammates.

Summary:

The discussion highlights a transformative shift in software engineering and AI interaction, driven by advanced coding agents. Since around December, engineers have moved from primarily writing code themselves to delegating most tasks to AI agents, fundamentally changing workflows. The main challenge is no longer access to compute but optimizing how users instruct and parallelize multiple agents to maximize productivity.

A key development is the emergence of "claws"—persistent, autonomous agents that can manage complex, ongoing tasks like home automation by discovering and integrating various smart devices via APIs, all controlled through natural language interfaces like WhatsApp. This suggests a future where software is built agent-first, with APIs prioritized over human-facing apps, and where AI agents act as intelligent glue between systems. The conversation also notes the importance of agent personality and memory systems in creating effective, engaging tools, while acknowledging current limitations around security and the learning curve required to master these new capabilities.

FAQs

As of December, there has been a dramatic shift where software engineers now delegate most coding tasks to AI agents, with many not typing code themselves anymore, marking a complete change in workflow.

You can parallelize tasks by delegating different macro actions, like new functionalities or research, to separate agents, then review their work to manage your software repository more efficiently.

A 'claw' refers to an AI agent with persistent, autonomous capabilities that operate in a sandbox, featuring advanced memory systems and the ability to act on your behalf even when you're not actively supervising it.

Andrej created a home automation agent called 'Dobby' that controls smart home devices like lights, HVAC, and security systems through natural language commands via WhatsApp, unifying multiple apps into one interface.

Personality makes agents feel like compelling teammates; for example, some agents provide appropriate praise that feels earned, enhancing user engagement and making interactions more intuitive and rewarding.

Limitations are often skill-based, such as giving poor instructions or lacking tools like memory systems, rather than the agents' capabilities, making it feel like a 'skill issue' when tasks don't succeed.
