The Lazy Generation? Is AI Killing Jobs or Critical Thinking?
65m 57s
Can automation and critical thinking coexist in the future of education and work? Today on Digital Disruption, we're joined by Bryan Walsh, the Senior Editorial Director at Vox. At Vox, Bryan leads the Future Perfect and climate teams and oversees the podcasts Unexplainable and The Gray Area. He also serves as editor of Vox's Future Perfect section, which explores the policies, people, and ideas that could shape a better future for everyone. He is the author of End Times: A Brief Guide to the End of the World.
Transcription
Hey everyone. I'm super excited to be sitting down with Bryan Walsh, editorial director at Vox Media.
When it comes to the intersection of technology, education, and politics,
Bryan is an absolute guru who's been writing on this topic for years and has strong views on how these forces will shape our future.
I want to understand what impact AI and emerging tech is going to have on the future of work and the future of education.
Is it going to disrupt entry-level work, more senior roles, or both?
And what we as people, workers, and leaders need to do to be ready. Let's find out.
Bryan, thanks so much for joining today. Super excited to have you on the show here.
I wanted to just start by diving a little bit into what I'll call your AI impact forecast, I guess.
So you've been covering AI, new tech, education impact on society.
What do you see playing out on the horizon in the next handful of years?
It's funny, right before I came on to record this, I was in an editorial meeting and we were discussing when we start to see the impact of AI on jobs.
And I think one thing I've noticed there is there's kind of like a gap in the thinking around those sort of near-term forecasts.
You can go and read something like AI 2027 if you want the sort of extreme version of what could happen really quickly.
But when it comes to what we'll see in the short term, that's tougher. I think one thing is we're already starting to see adoption.
The question is really, will that adoption stick? A lot of the companies I'm seeing that are trying this out,
they're struggling, from what I've noticed, to really integrate it into their workforces.
I think for one thing, the thing with AI is like it doesn't come with an instruction manual.
And one of the things that's helped me best understand how it works and so forth is just that, you know, it doesn't work like a computer program.
You kind of have to figure it out as you go. And I think that's asking a lot for companies at this moment.
But as these tools start to get better and better, as we start to see some real success cases, especially going outside, you know, Silicon Valley where there's just a natural sort of tendency to adopt it as quick as possible.
Then you'll start to see the impact both in terms of productivity, but also in terms of jobs as well.
And, you know, what I'm really looking at over the next few years, I think a lot of it might come down to how the economy itself does.
I could see a situation where you get much more rapid adoption, ironically, if we see a significant economic downturn.
And what I mean by that is that, you know, when we're talking about automating jobs, even the most cold-hearted CEO generally doesn't want to just start firing their human workforce willy-nilly and replacing them with AI.
But if you have a situation where you're forced to make job cuts for structural reasons, suddenly at that point, it starts to make a lot more sense to be experimenting with what you can do with automation, how much it can begin to fill the gap.
And if we see that happen, I think that could actually accelerate things even more than any sort of technological leap, just the fact that the economy would need it.
Corporate bosses would feel that they could take advantage of that. And that's where you start to see real turnover.
And the other thing I'm looking at really in the short term is what's going to happen with the youngest workers.
We're already beginning to see a lot of noisy data that indicates, you know, maybe this most recent cohort of grads is having a harder time finding jobs than perhaps they should, even given the fact that the economy is still pretty healthy from a jobs perspective.
If that really starts to stick, that will be really interesting, because I think it's logical that the first rung of the ladder to be automated will be the ones who are least experienced, where often when you're bringing them on, you're kind of doing half an apprenticeship anyway.
If you can start to do that more cheaply with automation, with AI, then, you know, you'll do that, but then of course that raises the question of exactly what those workers do.
And then that raises the further question of what happens in the future with your company, right, if you just sort of stop that new layer of growth for jobs.
I don't have a good answer for that yet. I don't think anyone does, but that's kind of what we're looking at right now.
You know, that part's really interesting to me because, you know, we can talk about them as the cheapest resources or the least experienced resources that are easiest to automate.
But you know, the other thought I've been playing with is that they're also kind of the bottom layer of the pyramid, so to speak, right, like they're sort of the foundation.
And as you said, they're that growth layer because they grow into the rest of the organization, and I don't know, maybe it is a little bit too rose-colored for me to say they're the leaders of the future.
But what happens to that cycle if you start to pull some of those people out and I guess when you think about this or based on the conversations you're having, is it like a wholesale sweep of that layer?
Or is it just, hey, we're going to need fewer of these people than we have in the past and maybe that even fits better demographically with the size of that cohort?
Right. Yeah, no, I think I don't see it happening wholesale overnight.
I don't think the tools are ready and I really don't think those who need to adopt the tools are ready either.
You know, I think there's a disconnect there that would make sort of this happening overnight just be really difficult to imagine.
Of course, there are other structural reasons why, you know, younger workers may have a harder time getting jobs, the economy itself is shifting.
Of course, you know, very hard to tell what will happen with things like trade policy, inflation, all those things are kind of muddying the picture.
But, you know, it is one of the biggest questions I wonder about, you know, because it's sort of unusual.
Often in past technological transformations, I would say like the younger people were the ones who were leading the way.
They were the ones taking advantage of new tools and whether it was things like podcasting or things like, you know, publishing on the internet.
What have you, like, that was where it was coming from.
And so this idea that weirdly this newest wave of technology could come for those younger workers first.
It's kind of disquieting.
And I think it's not something anyone in policy really has an answer for.
Like I'll say it again, like it's just a little baffling to me, given all the energy, all the investment, all the attention that's been put into AI, including with some of the sort of most extreme scenarios.
But at the same time, it seems like you have more people thinking seriously about the possibility of, you know, an AI apocalypse than you do really thinking about what if this is, you know,
a very powerful automation technology, you know, comparable certainly to the IT wave in the '90s, but I think even going back further than that, like looking at the industrial revolution, perhaps.
Okay, what happens then?
Like, how do we plan for this?
I mean, you might have people thinking about universal basic income, things like that, but there's so many steps that have to sort of unfold before you get there.
And I look at this and I see a bit of a vacuum.
You know, we at Vox, there's a piece we did a couple of months ago looking at the idea of drop-in remote workers, basically, like trying to figure out,
what are the jobs that seem most likely to get automated first, and you can kind of begin to figure that out based off, well, you know, how easy is it to do your job with a computer, right?
Remotely, and break that down into tasks, and you'll see that, if you look at it that way, for someone like a journalist, a fair amount of it could be, though for a sort of internet journalist like myself, it's not quite there yet.
We try it, I spar with it a bit, but, you know, I think you have this race between the tools getting better and better, getting more accessible.
At the same time, like the bottlenecks tend to come up in companies themselves in policy and then in people as well.
But I'll go back to saying that if you start to see, separate from all this, real economic dislocation, just a natural downward cycle or something that's a product of trade policy, inflation, take your pick.
Then that becomes something that can be a spur to faster change.
But of course, that'll also be happening in a moment where, you know, it'll already be a bad economy.
So I'm a little worried as to what that will look like and how we'll come out of that politically as well.
You know, related to all that, one of the pieces I've been trying to sort of unravel for a long time is something I view as a paradox.
And maybe it's not, I'm curious on your thoughts, but we've already talked about there's sort of these two competing forces within AI and AI adoption.
On the one hand, you have, you know, CEOs or organizations viewing this as a force for, you know, somewhere between productivity gains and downsizing, right, efficiencies.
And then on the other hand, you have individuals who are workers or they're using it, you know, using it for their own small business.
And there's this talk track of, well, it's going to make you more productive.
It's going to make you more employable.
But there's just an inherent friction there, right?
Because one of the things I sense is that there's a reluctance to adopt this technology if people believe that it's also going to, in some way, replace them or cost them their job.
Do you, I mean, first of all, do you buy that framing?
And how do you see this whole, you know, these two modes kind of coming together and then playing out?
No, I think that's a very smart framing. I think, you know, there is a lot of ambient fear around this technology.
I know it within my own workforce, the people I work with.
I mean, I think we've already had an experience going through the internet and the really wrenching changes that meant, and has meant, for media.
And now you have AI, and it's much easier to focus, I think, on the potential downsides, because, you know, a CEO might hear productivity and think, oh, that's great, growth.
Someone else hears productivity and thinks you can do more with fewer people.
And at the end of the day, if we're really talking about this, I mean, that's kind of what has to happen, right?
You know, like it doesn't create really significant productivity gains unless you're getting into a situation where each worker can do a lot more.
And while the hope and what we've seen, I'm sure, you know, in past automation waves is that over time, that's exactly what happens.
You see economic growth, you see dislocation in the short term, especially in certain classes.
But overall, people find new jobs and new economies and usually making more money and the whole economy gets better off.
Now, the downside there is like that short term pain can be quite wrenching.
And for people who are caught in it, it can mean lifelong economic loss.
So just because the country gets better doesn't mean you yourself get better.
And I think it'll be easier for us to point to those examples of people being hurt by this than the sort of broader improvements you might get.
And I think, you know, again, it's going to really depend on the kind of company and the kind of business we're talking about.
You know, in some, I think, you know, software, for instance, we're already seeing that; anything to do with Silicon Valley, I think, you know, there'll be far fewer barriers to adoption.
They'll be eagerly doing it, in fact.
But if you look at other areas like, you know, including sectors that really need productivity growth, something like education, for instance, or health care, you know, you're starting to see it some in health care.
I think, you know, first off with really cool tools around things like ambient listening, the ability to capture conversations in doctors' offices, turn that into usable data, really free them up from some of the really onerous paperwork that they have to do.
You'll start to see that, but at the same time, like, will you start to see AI diagnosticians?
That I doubt.
And frankly, you know, something like the medical profession has a lot of ability to sort of throw up barriers as they have in other areas.
So what you'll get is this very spiky kind of wave of adoption and effects.
And I think, you know, how that breaks down, it'll partly come down to technology.
But really, I think politics and the control there will actually matter more in terms of the power of different kinds of lines of business, different kinds of workers as well.
So let me maybe put an even finer point on that, because I think we're already getting to what I wanted to talk about next, which is, you know, broadly, who do you see as being the winners and losers of this disruption?
I think the winners, I mean, to start with the obvious ones, the companies that are making AI, I think, are definitely going to be winners without too much trouble.
They won't all be winners.
And looking at the situation now, I think, you know, we've had this saying around Vox that 2025 is the year of AI productization.
I think that's a word.
Basically, we're taking this from a technology to something many people can actually use.
And I think what you see in what ChatGPT has been doing, in what's come out of Google, is really a focus on trying to meld this with existing tools.
Ultimately, the companies that are able to do that best, that are able to lock that in, are the ones that will succeed.
So something like ChatGPT, I think, you know, is already beginning to get close to kind of like Coca-Cola status, I suppose, right?
Like in terms of they're becoming the kind of default.
And that leaves, say, a company like Anthropic, you know, they're making really great technical products, but, you know, kind of struggling.
And I wonder how long, quite honestly, they'll be able to remain independent.
And then, of course, Google, you know, for all the challenges they've had, has this huge base to go from, has Google Workspace, all those things.
So I think those companies that can lock into products will win.
You know, I think beyond that, when it comes to the more individual level, I think, yeah, there will be a benefit to being a first adopter.
You know, I think whether you're talking about someone who does work like I do, or someone who works in consulting, something like that, all those kinds of fields.
Individually, if you're able to sort of master this technology, yeah, you can put yourself forward, you can become more productive.
And you can, I mean, it's kind of a cliche, I think, but like this idea that it's not the AI that will take your job, but the person who can understand the AI and use the AI.
That is true to a certain extent.
So I think, you know, again, going back to those people and those companies that work in fields that can be done remotely, that can be automated.
They should be the winners overall, although a lot of people in those industries might suffer quite a lot of dislocation.
In terms of losers, I think, you know, again, like the companies that can't sort of lock their AI into products will struggle.
Those who really struggle internally to figure out how to use these tools will, too.
So I think it comes down to how nimble you can be around understanding the tools, figuring out how to work them into your workflow.
And at this point, that's really an open question.
And, you know, it took years for the IT revolution to really start to show up in productivity statistics.
I mean, it was like a joke for the longest time. I think it's Erik Brynjolfsson, who's at Stanford now,
who talks about this sort of J-curve of productivity, right, where it takes time and investment to figure out these new technologies, so you don't see it immediately.
And then once you get around that curve, it starts going up.
I think we might be at the point where that is starting to happen.
Still, lots of things could get in the way, like we still could have technological barriers.
We could have a situation where something goes really wrong, you know, like a Three Mile Island situation for AI.
I don't think we're at the stage where that's likely just because it hasn't been integrated enough to the real economy.
And if that were to happen, that can throw the brakes on things really fast.
I think it's very hard for both politicians and for just ordinary people to imagine, you know, how something could be dangerous, how something could be a threat, until you've actually seen it in the real world.
So there could be a million movies about dangerous AI or even more so a million think tank reports.
We won't really see that until something bad happens, whether that's cyber security, something like that.
So that's sort of out there too is something that could break it.
We could just hit technological barriers to, you know, whether that's in terms of the ability to power these data centers, which here in the United States where I am, like, I think that's a real question.
I don't know if we can do that.
You know, that's not something that can be fixed with AI.
That's something that has to be fixed with policy.
So that could be a barrier.
Then of course, you just run into situations around training data, or some kind of, you know, scaling law begins to break down.
And suddenly we'll look back in a few years' time being like, we were way too optimistic about how fast it would actually happen.
There's so much in that answer that I want to dig into.
And my mind is going in a thousand different places.
But maybe maybe let's start with the Three Mile Island comment, because that's really interesting to me.
And I think it's, you know, part of that ambient fear that a lot of people feel.
There's so much conversation now about, you know, AI from everything as a tool to make you 5% more productive to, you know, going to have a doomsday event that wipes out the human race.
When you think about how, you know, some sort of catastrophe befalls us with AI at some point, something that makes us reflect on that.
Do you have a perspective on how that might unfold and how that probably won't unfold just based on, you know, all the conversation happening around it?
Yeah, I think in terms of how it might unfold.
You know, I worry about the offensive use of AI, and that's sort of tied into, you know, pre AI cyber security issues.
Like I think it's well known at this point that, you know, countries like China, countries like Russia, probably have pretty good penetration into like physical infrastructure in the US.
Now that's probably going to be AI-enabled, not necessarily AI itself, but if we really start to see something that a computer can do that causes real-life fatalities.
You know, let's imagine a sewer system being shut off, a power plant being destroyed, something like that.
That would be, even if it's not necessarily directly from AI, that would be something I think would really freeze progress.
I think as well, the possibility of really ramped-up AI misinformation,
you know, misleading people in a dangerous kind of way.
You know, when I see things like the Veo image and video generation, you know, it's getting so good that it just feels like it almost,
it's shocking to me that it hasn't happened yet, where we haven't had a real situation where a fake video really convinced people.
Now, you know, in some part, maybe that's because we have in the back of our heads and in general, we just don't trust media of all kinds as much as we used to.
But as it gets better and better, I worry about that too.
The real fear I have is, and this has come up recently in some of the sort of safety cards around Claude, for instance,
the ability to really enable some kind of bio terrorist attack, that's a scary thought.
You know, I think it's very unlikely still, because even if you have a model that can tell you exactly how to do it,
actually doing it, it's not that easy, thankfully.
But, you know, as we start to see that, like, there's no kidding around the fact that AI is an enabling technology in a lot of ways.
You know, it reduces the skill set needed to do all kinds of things.
Like, for me, personally, it reduced the skill set I needed to do maintenance on my air conditioner, because I could just ask ChatGPT what to do.
The flip side of that is, it could reduce the sort of skill set you need to do something very dangerous or very bad.
And then I also think, I worry, this is a longer-term concern, but, you know, about a certain kind of skill erosion that happens as we become too dependent on this.
And that's where it's interesting to go back to young people, because the class that's graduating now,
I believe they were either sophomores or freshmen when ChatGPT first came out.
And we've seen the impact this has had on college. Like, one of my favorite stats around ChatGPT is where you can see, like, usage go up during the school year and then come summer it goes down, which gives you a sense of how a lot of people are using this.
You know, I do wonder long term what that will do to us, you know, becoming too dependent on that.
I already feel myself a little bit like that.
Like, I don't really Google anymore, I use ChatGPT a lot.
I ask questions and, you know, I'm coming in there with like 20 years of experience of doing it mostly on my own.
But it's amazing how fast, if you don't keep using those skills, they can kind of degrade.
And if you're starting with someone, you know, it's a little similar to like the idea of like losing that first rung on the job ladder.
If you didn't get it in college either, like, I do wonder what kind of skills these students will have.
Maybe they'll just be really good at using AI.
Yeah, I don't know.
Yeah.
But I don't think that's automatically the case.
I think, you know, especially for the kind of things you do in school, it's much easier to use it as a crutch and a substitute than it is to be somehow enabling yourself to do more.
So let's talk about education for a minute, which we've kind of naturally come into.
My sense is the entire education system is under threat in a way I certainly haven't seen in my lifetime.
And it's a, you know, it's a very many headed threat at this point, it feels like, but certainly AI is one of the big components there.
Do you have any sense of, you know, what are the best universities and colleges doing differently?
What are the best professors and teachers doing?
Like, has anything started to emerge about what to do or even, you know, lessons about certainly what not to do?
Because it just, it just seems like there's so much fog of war here.
The number one thing not to do, I've seen, is nothing.
If you don't change how you were educating people pre-ChatGPT, you're not going to be educating them.
Because if you're giving them things like papers that can be done at home, there's really no way to get around the fact that it will be almost logical for them to use ChatGPT.
And this is not just in, you know, this is in all universities, at all levels. I think we're seeing this just because the shortcut is there.
So, you know, I think that's one thing.
And then, you know, there was hope that you could use detection tools, but those are not reliable.
You're too likely to get false positives or, increasingly, to miss things, and kids are getting smart enough that they can kind of go in and disguise the AI.
There are even tools to do that, apparently.
So you have to change that.
I mean, some of the most basic things that I've seen professors do is go to in-class assignments, blue books.
I don't know if you had these when you were in school, but I remember having to open up a booklet that had an essay question and write it out with a pencil.
I feel really sorry for the college professor who had to grade my handwriting, but like, you know, there was no way to use automation tools for that.
I think there's also really smart professors.
I mean, one I really look to is Ethan Mollick, who's a business professor at the Wharton School of the University of Pennsylvania.
From the start with ChatGPT, he's really integrated it into his work with his business school students.
And that's smart because they will be using this and it would be strange for them not to know how to use it.
So I think, you know, the ones who are being creative about how we can both ensure that like when you're testing students in one way or another, you're actually testing them, not their tools.
That's good.
But I think also, especially if you're dealing with whether it's a business school or other areas where you're likely using AI more and more in the future, if you can integrate that into your education system, I think that's great.
And then, you know, interestingly, like if you look outside the kind of North American college system, there's a lot of potential.
And I do like to focus on the positive things that we can see out of this.
A lot of potential for automated one-on-one tutoring, especially in the global South.
There's been a number of really interesting research studies, I think, in Nigeria around that.
These are places where there simply aren't enough teachers, aren't enough tools.
That's where something where, you know, you're essentially going from zero to something, and we can only expect those tools to get better.
That's really promising to me because I think there's a huge education gap around the world that AI can help close.
Even here in the United States, where, you know, you mentioned all the attacks that are happening on education from whether it's the White House or the lingering impact of the pandemic.
You know, one issue I have been writing about for a long time has been this lingering learning loss we've experienced since the pandemic.
It's very serious.
This is overlapping with all the things we're seeing with automation.
One of the best ways to solve that is through personalized tutoring.
And of course, there aren't enough human tutors to go around.
Maybe AI can help in that way.
So I think it's a mix of some sort of old school ways of just getting off the computer all together with assessment, but also definitely trying to sort of integrate it in a creative way.
I think just doing the lecture and the take-home paper, you know, that won't do anything for anyone anymore.
Right. So it's a combination of actually, you know, coming to terms with the fact that students are using AI and we need to make sure they know how to use it properly, as well as making sure we don't lose the actual critical thinking
and understanding skills that are so important in having a track that focuses on that.
I mean, more important than ever, I would argue.
I mean, like, yeah, it's like, this is when you need, you know, the critical understanding to know what's real and what's not.
You know, that will become ever more important.
You know, so you have to have that.
And, you know, it's a lot easier said than done.
I think schools have been trying to teach critical thinking for years, really, ever since we've moved beyond just more of an auditory, kind of rote method of learning.
It's like, oh, let's teach them how to think.
It's not as simple as that.
It requires really talented teachers, which is a whole other story.
But, you know, there is a more positive vision of this where, you know, you can personalize education.
That's always been something that's lacked in certainly the US educational system.
You know, it's just not meant to teach individuals; it's meant to teach people in classes, sort of.
If we can get beyond that, if there are ways to use these tools to really personalize, while also sort of keeping students from the natural tendency to try to do less.
That could be really great.
But, you know, we're not, I have not seen a lot of examples of that so far.
And right now, whether you're looking at test scores or any sort of measurement like that, the educational picture in the United States is not great.
When you, you know, looking into your crystal ball, if you look at the whole education system 10 years from now, you know, do you think it's going to be five, 10% different from how it is right now?
Or is it going to be 80% different?
Like how great is the need for it to reinvent itself?
And, you know, to what degree do you think the ability is there to capitalize on that?
I mean, the educational system is gigantic.
And it's like steering a battleship.
So the idea of really rapid, like 80% change in a decade is just fundamentally hard to imagine, at least on the whole.
You know, I think you'll have individual professors, individual schools that really will change a lot in order to keep up.
You know, I would bet on something like 20% maybe. You know, I think one of these days we'll figure out that you have to do testing differently.
That has to happen.
So that's one thing.
Then I think, yeah, there's also another part going on here when you're looking at education, especially higher education here in the US, and I think elsewhere in North America: you're facing a demographic shift that's going to be really dislocating for colleges, even if
nothing else like we're talking about was happening. You will simply have far fewer students, and a lot of colleges, especially more expensive, smaller ones, will go out of business.
It's already happening.
There just aren't enough customers.
And if you, you know, if you start to cut off the supply of students from outside North America, then you really start to see some problems.
So I think you'll see fewer colleges.
I think perhaps you'll see some scale up, like where you see a sort of flight to quality.
And, you know, at the most elite universities in the United States, there are far more students who could go there, who are smart enough and who could definitely succeed there, than they have spots for.
And one could argue that that's a mistake.
Like, you know, we should be trying to ramp up the ones who are best at this.
You know, so I think that might begin to happen.
I do think as well, like there might be a situation where people start to rethink college.
That's already begun to be the case.
You know, you're starting to see, especially coming out of the pandemic, there was sort of a bit of a decline in enrollment.
That's reversed for now, which I wonder, quite honestly, if it's a bit of a leading economic indicator.
Anytime you start to see kids go more into grad school or law school, that's usually a sign that they don't see a great job market and it's a good place to hide out for a couple years.
But, you know, I think we'll start to ask how valuable colleges are, how much you really need that.
Like, I can't imagine you'll suddenly see a widespread feeling of, well, these degrees mean nothing.
But I do think that it will be less the kind of fundamental norm that maybe it was, you know, for most of the 21st century.
And students might have more choice for that.
Perhaps there are things like trade schools or they can have more career focused tracks.
Again, right now, probably more people go to college than really benefit from it.
You can see that in the fact that a lot of people in the United States start college and don't finish, which often then leaves them with the worst of both worlds.
They don't have the degree, but they often have the debt.
Maybe that will begin to change.
If you start to see companies being more open to not just looking at the credential of a college education for everything, that could shift things.
But then, you know, again, we were just talking about, you know, wiping out entry level jobs.
So what would that mean for it?
I have really a hard time knowing, but certainly like if you're less confident in the premium you'll get from your college education, you are probably less likely to pay up for it off the bat.
So I think you'd see fewer going. What do they do then? You know, I mean, if you're thinking of jobs that are more automation-proof, like anything that has to be done in person.
The electricians we're going to need to, you know, keep this grid going, things like that.
Historically, there haven't been enough of those workers.
Maybe that changes.
We talk a lot about a manufacturing wave in the United States.
Maybe. I'm actually pretty skeptical of that for a lot of reasons, one of which is, like, we actually have a lot of open manufacturing jobs
right now that haven't been filled.
But again, you know, give it a few years and perhaps that'll start to change people's opinions around that.
Just following that thread for a second.
And again, I want to deliberately put a fine point on it.
But if you were talking to someone who is 20 or, you know, the parent of someone who's 20, what, you know, what's your kind of best advice or guidance for, you know, what the next handful of years of their, you know, their life, their career, their education should look like so they can best position themselves for success.
I mean, I would say a lot of the same rules still apply.
Like I think like skill in STEM will be useful for a long time.
It's possible, like, you know, just coding is not going to pay off the way it used to.
And that's already the case with those jobs, even if they haven't declined, and you've seen a lot of big tech companies really sort of pulling back from the huge amounts of hiring that they were doing in the pandemic.
I think that won't be as good as it used to be, but you'll still put yourself, I think, in a better position if you can sort of immerse yourself in that world.
You're more likely to have the skills to figure out what comes next, I think, you know, so if I would just, you know, I would recommend that they do that.
Not everyone has that aptitude. I'm an English major, personally.
I love books, but I could not in good conscience recommend
that to any 20-year-old friend of mine, you know. I think, you know, it's also a matter of.
Curiosity. How can you demonstrate that? How can you feed that?
You know, these are kind of boilerplate answers, but there's a reason why they've worked.
You know, you think about like, who do you want to hire? Like, what are you looking for? I think about that sometimes.
And I'm looking for, you know, eagerness to work. I'm looking for curiosity.
I'm looking for a willingness to try things, experimentation. Maybe it becomes less about, you know, what specifically you knew, and more about what kind of person you can become.
If that makes sense? How do you demonstrate that in the future?
That you're someone who is flexible, that you're someone who, you know, wants to grow, wants to learn continuously?
Because that, you know, will be even more necessary than it used to be, I think.
Yeah, no, that, that makes sense. And it resonates with me, too.
And I don't know if you found this in your career, Bryan.
I'm coming to terms with it more and more, that it just feels like skills are a lot more teachable than attitude.
Like, as you said, like people who come in with curiosity, with drive, with like a real willingness to, you know, roll up their sleeves and try things.
It's just, to me, that's so much more valuable in someone coming into the workforce than I know how to use this exact, you know, program or system.
And yeah, it feels like, you know, that's only going to become more pronounced.
I wish there were a better way to identify that. I mean, that's something, you know, quite often, like when you were doing job interviews,
I found, you know, what you find yourself doing is assessing the person as they are.
Like, what have they done? What have they learned? Where did they go to school? What was their last job?
When you really need to do is figure out, how can I predict who this person will become while they're with me?
You know, and, you know, then sort of experimenting with what are the questions you asked for that?
How do you sort of indicate that? You know, maybe one day AI will help me just identify that off the bat, I don't know.
But, you know, that's what I think, you know, if you're an employer, that's what you should be looking for.
Really thinking about that, because the past will tell you less than it used to, I think, because it will resemble the future even less than it has before.
It's a bit circular.
So just, just following that thread for a minute, do you have any thoughts on what the workplace of the future looks like?
What the organization of the future looks like? And, you know, what skills are going to be more and less important as, you know, organizations try and succeed and thrive in this new world?
I think, in terms of skills that will be important, I think management skills, actually.
I think, you know, in some ways, these tools might give us the ability to manage larger groups, managing AIs to a certain extent.
So I think figuring out how one can do that best, you know, how do you measure that? How do you get better at it?
I think that's really important, because it's, you know, when we're talking about rapid change, some of your workers are the ones who are going to have to adapt to that.
But they will do it much more effectively if they have a management team that's engaged and able really to work with them.
So sort of those kind of, you know, we call them people skills.
I guess sometimes they're called soft skills, but there's no doubt they're really important.
I think anyone who's been in the workforce for any time knows that, because you've seen the difference between someone who has those skills and someone who doesn't.
And it's a whole lot more pleasant to work under someone who has them than under someone who doesn't.
So that's one thing. I think in terms of what the workplace will look like, you know, it's funny, like I spent years writing about remote work.
Is it here to stay? Is it not here to stay? I mean, hybrid.
You know, I think we've settled into some kind of in between, you know, the idea of going back to office parks full time and all the rest.
That doesn't make sense. And I have a hard time believing that, you know, the preponderance of AI tools is going to arrest that.
I think it'll just accelerate it more.
That said, like, again, for the same reason I just talked about people skills, I think companies that put in the effort to create workplaces that people want to be in at least some of the time, and are very mindful about how that time is used, will benefit.
You know, one thing you can't do anymore is this kind of, well, everyone's going to show up and it'll all kind of work itself out.
No, you got to be a lot more deliberate in terms of building that workforce in terms of managing that workforce.
So I think you'll still see something hybrid, but the companies that succeed will be the ones that figure out how to use that hybrid, in-person time really effectively and also figure out ways to keep people tethered even when they're not there.
So that I think that's one thing.
I think it will be smaller.
You know, I think I could see smaller teams because small teams able to do more.
You know, so the idea of massive workforces that seems less likely, you know, I'm very interested to see what will happen with new businesses and startups.
You know, I mean, because it's a lot easier, I'm sure you know, like to institute new rules to try new things when you're starting from scratch.
You know, in my 20-plus years of being a journalist, like, I started my career at Time Magazine, which, when I started there, had already been around for like 65 years, I think, at that point.
They certainly had ways of doing things.
And, you know, there was definitely a challenge of changing when things have been that established, when people have been there that long.
Then I've worked at a place like Axios, you know, sort of a new media startup, newsletter business, like everything is being invented from scratch.
And those are people who were taking the lessons from those older, more established companies they'd left, applying what they wanted to keep, jettisoning what they didn't, and, you know, it was a lot more nimble in that way.
So I think, you know, I'm very curious, like, will startups, you know, I know there was a trend where, you know, we were going for the really lean startup, you know, like really few workers.
That seems possible.
I've also seen some companies having pulled back from that. Companies like Klarna talked a big game about, you know, being able to automate almost their whole workforce, but it turns out there are some complexities there.
But that said, like, I think the most ambitious, you know, idea is something like, when's the first one-person billion-dollar startup?
I, you know, I want to say that's impossible.
But is it? You know, I mean, a world where that's possible is a world where we're going to have really fundamental change, I think, even if it's only in a few sectors that we can do that.
Let alone what that means in terms of everything else, you know, the share of capital and so forth and so on.
So, you know, I think smaller, more nimble, I think, and the importance of management, you know, the course of getting the most out of the people you have, that's going to be the sort of formula of success in the future.
So, coming back to this, you know, this notion of winners and losers, and you touched on this already, but one of the big debates we're hearing about, and maybe it's a false debate, is whether AI and some of these emerging technologies are going to disrupt the big players here, right?
And create the opportunity for, you know, whether it's a one-person billion-dollar company, or just, you know, a wave of upstarts, like we saw with the advent of the web and of social media.
Or whether these tools actually better position the incumbents and make them stronger and help them get farther ahead and, you know, grow their moats.
Those are two very different narratives. Do you have, you know, a prediction or a perspective on, you know, which is the more likely outcome?
You know, I think, I do think that there will be a bigger incumbent advantage now under the rules of AI than was the case in, say, Web 2.0, Web 3.0, you know, and the reason is because these are huge capital intensive industries, right?
I mean, like, it requires a lot of money, as we've seen with ChatGPT and others, to scale up an AI company.
It is not like getting Facebook off the ground, right?
You know, and of course, like, you also have to, you know, the cost of running these models, cost of getting customers, cost of serving customers is really expensive.
It doesn't scale in the same way that it did, I think, when you were just adding social media users.
So that tells me that those companies that have that capital will have a bit of a moat, since they have more resources to play with.
That said, like, I think it'd be absurd to think that nothing will change, right?
I mean, like, Google is a good example. You know, it is not in the position it was a few years ago.
You know, yes, does it still have mostly a lock on search?
Yes. Is search changing in a way that maybe it can't control? Also true.
You know, so that's, that was like, I think of that as the monopoly that seemed totally unbreakable.
Yeah, it was. And then you can look at a company like Apple that should be succeeding, you know, has been on the front line of every sort of consumer facing technological revolution.
And it's really struggling. And that comes down to who can do this better and who can't, you know, and that really comes down to individual decisions in the part of CEOs.
Who do you hire to run your AI system? What are you trying to do? What are your sort of standards?
So we could get a situation where Apple and Google, two of the most powerful tech companies of, basically, forever, you know, really lose market share significantly.
And I would not have predicted that they seemed invulnerable. But, you know, we're seeing a lot of institutions that seemed invulnerable, proven more vulnerable.
You know, Harvard University, I would have bet would be around long after the United States was gone, maybe not, you know.
So I think, you know, one hand, yes, like the resources matter. But at the same time, I think, you know, there will be winners and losers, even within the big giants here.
And then there will be room for new companies to make inroads as well.
Well, and it sounds like what you're saying, and, you know, I'll be a little bit flippant with this, but even within, you know, the grand horse race here, we can expect a lot of position changing among the runners.
Yeah. I mean, look, what if, I don't think it's too likely, but what if Anthropic, you know, in six months' time, really gets to AGI? I mean, yeah, all bets are off then. Then we start to sort of change the math on everything, and all of these predictions kind of go out the window, right?
And it is a scientific challenge, first and foremost, right, like, yes, you need the horsepower in terms of just data center, you need the energy, you need all those things, none of which are cheap.
But, you know, there might be someone out there who makes a breakthrough. And that's the situation where if you can make that breakthrough, it then metastasizes, it sort of multiplies itself beyond, I think, what we've seen in the past from any single one.
So there's a reason why, you know, if you ask me for career advice, my career advice would be become an AI scientist, I suppose, because they're going to benefit, and, you know, the war for that talent will really matter because it can really decide the difference.
Like, I think, quite honestly, like, you know, who Apple chose to run AI may be a major problem for their company going forward. Again, a company that seemed to just be sitting on top of the world.
And they face other challenges. I mean, obviously, you know, as manufacturing, tariffs and all the rest, but it just shows how those decisions can make a huge, huge difference.
Right. So, yeah, said another way, like, literally, everyone is vulnerable here.
And if you're talking about, like, this kind of massive technological shift, if you're talking about an intelligence explosion, like that actually being achieved, then nothing is safe.
Yeah, then I feel, you know, I think people like, again, like the AI 2027 writers have done a really great job of sort of thinking through something that's very hard to think through, you know, bit by bit, that scenario.
But, you know, in reality, like, I think I would, I would hesitate to really make any prediction at that point. I don't know. Because then you're talking about something that becomes a geopolitical object.
You're talking about power shifts, you know, on an international scale. It kind of makes, like, I'm going to predict what happens with the market next year feel not that important in comparison.
Yeah. So, you know, we've talked about geopolitical shifts as this technology expands, and the resource requirements are somewhere between exponential and asymptotic.
You mentioned earlier in our conversation, the word policy a few times. What, what role do you think policy plays, you know, blanket, whether it's with the big tech firms, whether it's with any given sector, whether it's with anywhere from national security to just, I don't know,
I guess kind of securing and easing the path forward for, you know, citizens of the US and beyond.
It's a really hard question. I mean, here in the United States, at least, because, you know, we should not underestimate the fact that it is just really, really difficult to get regulation right when you're dealing with emerging technology.
You know, it's almost like a paradox. When that technology is, you know, still small, you have the power to regulate it, you know, because it hasn't gotten to the point of, like, a Facebook or Meta, you know, where they have so much market share that they can kind of push back on regulation.
You know, we were at that stage with them not too long ago.
The problem is, when it's that small, you also don't know where it will go. You don't know how to regulate in a way that won't smother it, which I don't think anyone really wants, but also in a way that will be safe.
And so, like, period, that is just a really hard challenge.
On top of that, you have the fact that, you know, the ideal version of this would be something Congress does, and that's not really within its capability anymore, both because of the way Congress is run and also because, frankly, there's just very little scientific understanding there.
You know, they've gotten better, like it's not quite as bad as the old days of, you know, a series of tubes or whatever with the internet, but it's still, you know, quite hard to do, you know, and at the same time, like,
there's no clarity in terms of, like, what are you regulating for, right? Are you regulating for safety?
All right, there have been efforts at that. They haven't really succeeded.
You know, even California's bill didn't ultimately get through the governor's veto.
Are you regulating for worker protections? Because that comes with its own dangers.
It comes with the fact that some classes might get favored and others not.
Are you regulating for what the economy should look like on the other side of this?
The main tool that the government has been using around that has been antitrust law.
That's a really crude instrument, I think, you know, and also, the very fact that we just talked about, where even the big incumbents are under threat here, kind of dampens, to my mind, the argument that you can use that, because I don't think that's the case.
So, you know, what I'm hoping for really is, you know, maybe this is something that's almost a new political movement, and it probably still has to wait for us to really see this happen for it to really come into being, but, okay, what economy do you want?
All right, like, try to regulate towards that, not try to regulate in the sense of stopping individual things, but rather, like, try to imagine the political economy that you would ideally want, and how do we sort of create the rules that can get us there?
And that is going to mean thinking about really hard questions like, what happens if we start to see huge numbers of entry-level white-collar jobs disappear?
Okay, what do you do with those people?
You know, what does it mean if, you know, fewer people have to work?
You know, those are good questions to have on one hand, but it is a wholly different society than one we've all lived in forever, basically.
And, you know, I see smart thinkers on the outside sort of thinking about this, but that hasn't really filtered into national policy yet.
And then on top of that, frankly, you know, we're just, things are a bit messy, at least in the U.S. government right now.
And it's caught between the strange, on one hand, more tech-right kind of pedal-to-the-metal accelerationism, such that I think in the budget bill that was being debated recently,
they had stuck in a rule that said no state or local AI regulation for 10 years, which I'm pretty sure is not actually legal, based off budget reconciliation rules, but I could be wrong.
So you have that, but then you also have a government that's very nostalgic, you know, that I could see totally flipping on this.
And I don't know how that's going to work its way out, but the ideal version of that is that there's some sort of effective synthesis between those two voices.
In reality, it seems more like ping-pong.
And, you know, I wish I knew better what would happen after that.
But I think the reality is, as we see this all the time, you just don't really get regulation right before the fact.
Like, you can do it on a hindsight basis; it's really, really hard to do it in advance, to do it without getting it wrong in one way or another.
And it's so interesting to get your perspective on this, Bryan.
And, you know, to me, it's so obvious, like, I really hear your natural curiosity about all of this and kind of the, huh, you know, there are these factors, and you're asking these really big questions.
In terms of your perspective and just your outlook, are you more kind of optimistic or pessimistic when you look at this future economy, this future political economy?
Are you looking at it with, you know, fear and trepidation?
Are you looking at it like, wow, there's this, you know, this unlimited potential to unlock?
You know, how do you sort of see that shaking out?
You know, I look at it in this way, and my view is informed by thinking about history, right?
So we face, putting AI aside for a second, a number of, you could call them, existential challenges, right?
Climate is one, we still have old ones like nuclear weapons, we have the resurgence of conflict.
You know, we will have demographic challenges that are quite serious in the future, or they're coming right now for some countries.
We face things like that before, you know, back in the early 20th century, there was a real shortage of fertilizer.
Then Haber and Bosch came along to create that process.
Suddenly we had artificial fertilizer, and without that, we would not have the population we have now.
So that was a technological sort of leap that enabled us to kind of get ahead of that challenge.
I think where I tend to come down is, and that's partially because I'm somewhat pessimistic about effective political change,
I do place a lot of hope in technological change to help us get past those challenges we face.
So, you know, with something like AI, I feel like I sort of lean on the side that we can't afford not to pursue it,
that we face challenges that require scaled-up intelligence, and that is how we'll get there.
Now, you know, if you want to talk to me in 15 years' time when I'm in the human-robot war, you know, you can remind me of what I said then.
But I do think, you know, I feel optimistic in that sense that intelligence historically has been a good thing for humankind.
It brings bad things too, but mostly it's been good. Being able to sort of scale it up in this way could be massively good for society.
Now, what I fear is really the -- it's not the AI fear, it's the humans, right?
Like, I fear the destabilizing effect that really rapid change could bring about.
Not just in terms of the political economy, but on an international scale.
We're at a moment now where, like I said, conflict is at a higher rate than it's been before, where, you know, we have a serious superpower confrontation or rivalry between the United States and China.
Historically, when you see that happen, bad things tend to occur, and I worry about that.
And I worry that, like -- and this is something that's very involved in the AI 2027 paper -- that this could be the instigation for conflict on a much bigger scale.
Because suddenly it will seem really essential, because there's a certain first-mover advantage for countries if they're the ones who can achieve this.
That sounds great on their end, but when you destabilize with that degree, you invite counteraction, and that strikes me as very hard to control.
And that's kind of what I worry about, like on the big-scale worries.
Like, I feel both optimistic, you know, most days, about what this can do on a sort of sector-by-sector basis for human beings.
I hope that, like, we can use this in a way that creates value for everyone, ultimately, even through a period of pain and adjustment.
But I do worry that, on an international scale, that kind of sort of race or conflict really could get out of control.
And that's the kind of thing that could happen without AI at all.
You know, that's been the case in the past. But I worry that this is the kind of thing that can take an already destabilizing international situation and really spin it further.
And then no one knows what happens after that.
Right.
That first-mover advantage with AI -- that's become one of the overarching narratives around the technology in general.
And I've heard it time and time again that it has to be an arms race.
Sorry, it doesn't have to be an arms race, but it has to be a race of some sort because it's winner-take-all.
And this is how all these, you know, massive organizations, and to some degree, you know, nation-states are justifying this investment, this building of, you know, data centers.
There was kind of an implication there that that's right and that you buy into that.
I just want to -- I want to confirm, because I also hear people tell me the other side of it, which is, you know, that's BS.
That's just cover so that they can get more investment, so that they can try and get ahead.
You know, where's the truth?
Yeah. I do lean toward this sort of first-mover advantage idea.
I do think, if we assume this technology is as powerful as it can be, I suppose.
And another part of it is how rapidly it progresses.
Then that, to me, just makes the first-mover advantage all the greater, right?
You know, like, I think back to the early days of the Cold War with what became an arms race there.
Once, you know, the U.S. for a period of time was the only nuclear power out there.
At the same time, like, it was limited in what that meant.
But like, if AI, if we're talking about super intelligence, then, you know, the difference between the day before and the day after seems massive to me.
And you can be only a couple of months behind, but you might be behind permanently.
You know, that -- and really, I could be going out on a limb there, because that has not been the way it usually works.
Like, there's never been something that is so dominant that it just freezes power dynamics in one place.
Maybe this could be.
I also understand that, you know, it makes sense -- I do think there's a certain amount of cynicism that goes into it,
whether it's companies or others who are kind of using this idea of a race, whether it's to stave off regulation,
whether it's to get more resources.
Yeah, like, I can see that. Like, certainly there is a benefit to them that goes beyond the real geopolitical questions at stake here.
But, you know, and I just think they're probably more right than they are wrong there.
And that's a scary thought, because then, you know, if you go back to the nuclear arms race question, like, that was a state race, right?
Like, there wasn't, I don't know, a GE with its own nuclear arsenal it was developing.
It was the United States and the Soviet Union. It was a nation-state thing.
That's not the case here. It's this weird mix, right?
All the sort of bigger players in the West, at least, are all private, or they're not sort of state-owned.
Maybe it's a little different elsewhere, certainly state-influenced in a place like China.
How does that work out, honestly?
Because those companies have their own interests that are separate from the countries they happen to be based in.
You have new players like the Middle East, like what role will they play?
What they can offer is a tremendous amount of capital and the ability to generate a lot of electricity.
I don't know how that'll work out.
You know, so while I really do worry -- I think this is real -- that the race idea pushes speed past safety,
at the same time, I think there's real truth behind that framing.
There's truth behind the idea that it might be a winner-take-all race.
And that's a dangerous dynamic. I wish I knew a better way out of that.
I'm hoping smarter people than me are thinking about that, but it does certainly look like that's the case.
Yeah, it's interesting.
And it's so -- as you said, there's so much uncertainty and so many unknown unknowns at this point that it can really shake out so many different ways.
You know, Brian, we've covered in this conversation so many different aspects, so many big questions, and so many big kind of potentialities.
We're in a space right now where there's so many what-ifs, there's so many big questions.
And there's also so many big answers, like so many big statements about like this is what the future is going to look like.
I'm curious whether it's AI hype or just, you know, kind of boosterism around anything that's going to happen in the next handful of years.
Are there any narratives you're hearing out there right now that you're just like, this is BS?
Like, this is not going to come true. This is, you know, clearly somebody with their own agenda that, you know, you challenge.
Right. I certainly think that, like, the people who are knee-jerk, this-will-never-happen --
like, those are the people I really don't believe.
It could be the case. I just find it unlikely, and I think it tends to view AI through the lens of past failed technological transformations, you know, like the metaverse, whatever,
or maybe crypto, however you feel about that -- though that's certainly come back.
So I find that unlikely. I think the ones that are dangerous are the ideas that there will be no pain and no suffering along the way.
Like, that just seems not possible to me. There is no way that a transformation this grand cannot bring extreme pain and dislocation in the shorter term.
You know, so if you're saying that, if you're like a real sort of accelerationist and like there's no downsides here, I don't know what world you're living in.
You know, I think beyond that, on the one hand, like, it really is helpful when people are doing forecasts to be specific, to create, like, here is a vision of how things can be, because that's the thing you can sort of test against.
You can sort of falsify. At the same time, if someone's telling you that is the way things are actually going to be, then almost certainly they're going to be wrong.
It's funny, like, thinking back to, you know, let's go back, like, 15 years, 2009 or so, if you ask someone, like, what will social media do to the world?
You probably would have gotten a lot of talk about, you know, revolution in Iran, you know, or Egypt, the Arab Spring.
That's what, you know, it's going to connect us all, it's going to sort of drop all these boundaries of states, like, that didn't quite happen.
So there are things we could be predicting now that we will just be fundamentally wrong about -- the nature of the technology itself and how it will be used by people.
You know, I mean, that's something we don't often think about, I think, is that it's not just how the technology is developed, it is then how it's implemented.
And that's going to be different from place to place, person to person.
So I think, I like to think in scenarios, I like to think in different possibilities.
I think that's the best way to think about this: not as, like, one way, but as a set of different possibilities. There's one where things get very out of control, very destructive, very fast.
There is one where progress is much slower, whether for technological reasons or for political resistance.
And there's probably like a good scenario where there is some short-term pain, but the benefits are really big.
That's where you get that 10, 15% growth a year, which is just off the charts really.
And ultimately, that would be a much better world to live in, I think, than what we have now.
Well, and it sounds like the unifying factor across your, you know, the scenarios you've got in your mind is still that it's going to be a bumpy ride.
And there's going to be some pain and there are going to be some losers.
And so with that in mind, you know, in your mind, do we have a responsibility collectively to minimize some of that?
Or to try and figure out how we can make it less painful for some of the people who are going to be disrupted here?
And if we do, you know, does that fall on individuals? Does it fall on organizations? Does it fall on governments?
Like, what does that look like if we're going to minimize that to some degree?
Yes, we do, I think. And also, I think we need to, you know, there's a self-interest element here, too.
If I were in charge of an AI company, I would be thinking about this.
I would be trying to figure out what role can we play in reducing that pain?
Because if you don't, in a democratic system like this one, you can be subject to a serious backlash.
I mean, just, you know, look at some of the other companies in Silicon Valley to see how that can work out.
So, yeah, I think that's the case that you do need to think about that.
There has to be real action there. I think that will come and should be coming from the companies themselves.
Ultimately, we'll have to be political. You know, I'd love for a new political movement to come out of this.
I don't know what it would look like.
But if this is as important as that -- you know, the same way the industrial revolution led to massive political changes in terms of how we're organized,
new parties, things like that -- we might need something along those lines.
Ultimately, we may need to even reimagine what it means to be a person, right?
If work and economic role is no longer central to who we are, we're going to need to figure out something else to fill that gap.
And we don't really have that, at least in the United States, at this moment.
That is exciting because it opens up a whole kind of space to figure out what it means to be a person.
It's also, like, if we can't fill that space, I think scary things can flow into that vacuum.
Right, right. So it's a time for reflection and individually and collectively figuring out what life looks like in this kind of assisted world.
I do, yeah.
Yeah, yeah. Brian, before I forget, I did want to ask you about journalism and what this means for journalism, which, you know, I know it's like one lens here out of many.
But, you know, in some of the conversations I'm having on this show, I hear everything from: this technology, you know, AI, generative AI, is going to get rid of 90% of journalists
because, you know, organizations are just going to, you know, fall in love with, you know, I'll say again, somewhat flippantly like, oh, we just have AI spit out another top 10 list
or just create, you know, slop content that people will read and that's the new face of journalism.
Or on the other end, you know, if you're an investigative journalist and you're doing research, using something like deep research to help you do this in a way that's much cheaper and more scalable than it has been in many decades
and it actually is kind of a force multiplier for journalists. What's your outlook as someone who's kind of in the, you know, eye of the storm here?
Two things. One, absolutely it could be used to sort of amplify the power of individual journalists. I think especially when it comes to those who can process information quickly, you mentioned something like an investigative reporter can use that to aid.
Yes, there's no doubt about that. The bigger problem is less really the tools themselves than where does it go? You know, what we face in journalism is an audience problem above all else.
I mentioned I started my work at our magazine back when it was in paper form. Our competition was, like, the other magazines, for the most part, or other things you could read.
Now anyone who's on the internet or publishing is competing against everything else. And it's Netflix against this podcast against the video game or whatever.
And so I do worry that the issue with AI, because it's an explosion of content, is that it will make it even harder to find an audience. And that, ultimately, is what we need to do -- if we don't have the audience, nothing we're doing really matters.
That's been a problem that's been ongoing for a long time, before AI really came on the scene.
I guess my hope could be that, like, there could be tools that identify better sources, that begin to sort of discern quality.
You know, Google was never very good at that, given the entire world of search engine optimization, you know, engineering.
I hope that maybe, if these become, you know, artificially generally intelligent or even superintelligent, they can tell the difference between good news and poorly done news.
But those are actually really separate issues for me. I think at the end of the day, these are questions that have existed in the business for quite a long time.
There's no easy solution to them.
You know, again, I hope that maybe these tools will be used to help identify higher-quality sources. I think there's already some evidence that, you know, they can sort of raise the epistemic quality by working with users.
They can also be used against that, in that way. I guess my hope is that, like, it can be used as an educational tool that ultimately supports higher-quality sources of information -- which would ideally include us.
Well, that's really interesting. And it's something I didn't think of that I wish I had, which is the audience problem in some ways being more pressing than the AI one.
And for me, you know, as a podcast host, it's interesting, because you touched a little bit on epistemology, on truth in journalism, and how that's at odds with sensationalism.
And how do you write the headline? How do you get the click? How do you compete in the economy of attention? And, you know, what does that mean for journalists and what does that mean for truth?
And I mean, my sense is we're kind of at a point of no return there, right? Like you can't just go back and convince people that they need to stop clicking on the, you know, exciting headline and all those things.
A lot of them are sort of leftover from business models that don't exist anymore. You know, from the geographic monopoly that news sources had within a limited geographic spread -- that's gone. The internet changed that forever.
You know, I think the future -- it's a little boring, but, like, it's more of a one-to-one relationship between a writer and their audience. That's why things like Substack have taken off in the way they have, where you're following less a brand than you are individuals, and you can establish trust that way.
I think that's the future to a certain extent. The problem is, it's inherently more limited, you know -- like, the idea of a mass media in that way won't come back in the same sense. And what that means for the body politic hasn't been great so far.
Again, you know, there's some sort of long-shot hope that AI can play some role in filtering out information -- that'll be really important. But will that outweigh the slop it will be used to create, just flooding the bandwidth everywhere? I'm kind of skeptical.
Well, and you used a word that's really interesting to me, which is trust, right? Because if we're in this kind of, you know, flood of slop and low quality -- you know, quantity over quality --
Is there an opportunity for, you know, whether it's individuals or Substack to build trust and that in some way becomes the inoculation against all this misinformation, disinformation.
No, I think so. Yeah, absolutely. That's already happening. Again, it's a scale question. We're talking about orders of magnitude smaller scale in terms of who you're reaching. So yeah, that's, you know, a bit troubling.
And also, you know, there's this concern that you could kind of cut off the economy that generates the information that you then use to train on -- that these models run on. I look at Google and what it's doing with AI summaries, and I can't help but think it's kind of eating its own seed corn, because
the link economy is what makes the advertising economy that gives them all that money, and they seem to be hell-bent on destroying that, even if they probably feel they're in an existential fight over AI and therefore can't afford not to win it.
Yeah. Yeah. A lot to digest a lot to think about. Brian, I wanted to say thank you so much for joining us today. It's a super interesting conversation. I really appreciate your insights.
Thank you very much. Great to be here.
Key Points:
Discussion with Brian Walsh on the impact of AI and emerging tech on work and education.
Uncertainty around AI's near-term impact on jobs and workforce integration.
Concerns about the potential effects of AI on young workers and the future economy.
Winners and losers in the AI disruption, with emphasis on adaptability and productization.
Considerations on the potential risks of AI misuse and misinformation, including cyber security threats.
Summary:
In a conversation with Brian Walsh, the focus was on how AI and emerging technologies will influence work and education. There are uncertainties surrounding the immediate impact of AI on jobs, particularly in terms of workforce integration challenges faced by companies. Concerns were raised about how young workers might be affected and the potential economic implications. The discussion also touched upon the winners and losers in the AI disruption, highlighting the importance of adaptability and productization for success. Furthermore, there were considerations regarding the risks associated with AI misuse, particularly in offensive cyber activities and misinformation campaigns. The conversation underscored the need for proactive measures to address these challenges and ensure responsible AI adoption.
FAQs
AI and emerging tech are expected to disrupt both entry-level work and senior roles, requiring workers and leaders to adapt.
Companies struggle with integrating AI into their workforces due to the lack of an instruction manual and the need to figure out how to use it effectively.
Younger workers may face challenges in finding jobs as automation may target entry-level positions, potentially affecting the future job market.
Companies adept at integrating AI into products are likely to benefit, while those unable to adapt may struggle. Individuals who can master AI technology may have a competitive advantage.
Potential dangers include offensive use of AI leading to cyber threats and misinformation. Barriers such as technological limitations and policy issues could also hinder AI adoption.
A catastrophic event involving AI, like cyber attacks on critical infrastructure or AI-enabled misinformation, could impede progress and raise concerns about the ethical use of AI.