
How To Re-think News in the AI Era

53m 22s

Join us for an enlightening discussion on the transformation of digital news consumption with Alex, the visionary founder of OtherWeb. I'm Adam, your host, and in this episode, we dive into the innovative strategies that OtherWeb is implementing to reshape how we interact with online news and information. Alex shares his journey, inspired by his upbringing in the Soviet Union where access to accurate information was limited, and his mission to eradicate the digital 'junk' that clutters our feeds today. We explore OtherWeb's pioneering features, such as 'nutrition labels' for news, which help users gauge the quality...

Transcription

10,516 Words, 57,358 Characters

Welcome to This Anthro Life. Alex, great to have you on the pod today. And you're the visionary founder, I think, of OtherWeb, and this really groundbreaking and cool platform that's helping us rethink what news can be and what information can be online. And so I was excited to get in touch with you to think about this and basically how we can rethink how we consume information online and what is our responsibility, both as creators and as consumers. So first off, welcome to the program and great to have you. Thank you so much. It's great to be on. Let's kind of open up with this idea. Why start caring about digital junk? When you said it and I heard about it, I was like, okay, this is the thing that matters. How did you find yourself asking these questions around junk? Well, I think I was primed to care about it from my childhood a little bit because I was born in the Soviet Union and I kind of saw what a society looks like if nobody has any idea what's going on. Interesting. And everybody just makes the wrong assumptions or they're pretending to know something, but they're kind of 90% sure it's not true. And I have these vivid memories of my parents locking themselves in the closet or in the bathroom or someplace where the neighbors won't hear and listening to the radio to hear the Voice of America at 4 a.m. Wow. Interesting. To get at least some idea of something that might be true, even though it's foreign propaganda, right? But it was foreign propaganda in a country that had its own propaganda that was worse. So I kind of thought that knowing stuff is important and it probably leads to better actions. And I thought that my entire life. And so when at some point I had this crisis of conscience a few years ago and thought about what I want to care about next, what I want to work on next, that seemed like a good idea, right? The stuff we're all consuming is becoming worse. Nobody seems to be talking about it, or if they're talking about it, they're all trying to come up with some villain to blame for this, which is typically not why bad stuff happens in the entire world all at once. It's usually not a villain. It's something in the evolution of content that's gone wrong, essentially. Yeah. That's actually really interesting. And so I guess, yeah, on the one hand it's like our brains want to jump to and say, oh, it's this person or this country's or this idea's fault. But you're kind of saying that we're seeing at the same time, just the rise. And I'm like visualizing in my head, right? Just writhing piles of paper or trash, right? That just kind of like piles around us. But it's funny that digitally, people may not think about that idea, right? That there's so much that's kind of around us all the time. But I think people might definitely recognize like, do we all feel busier all the time? And I think part of it too is like our attention is always grabbed, being pulled towards different little whatever blips and blaps of our phone or emails. But most of the time, I think to your point, it's a lot of junk, right? That's the reason we have a spam or junk filter in our email. But beyond that too, how do we know what we're consuming? So I'm curious to kind of get your perspective on this too.
Because as we've gone in deeper and deeper into the digital realm, and we're not going to come out of that, like back to paper or something else that's more analog entirely, have you seen people's kind of perception about this shift in terms of like, do they feel like they're getting more junk now than we had before? Is it something qualitatively different, I guess, about the kind of junk that we're seeing than 20 years ago? I mean, you can both quantify it and look at it qualitatively, right? So we tried to quantify it recently by hiring the Harris Polling Company to do a poll. And one of the questions they asked is, are you spending more time than you did five years ago reading the news? And more than 80% of people said yes. And then they asked, okay, do you have more confidence than you did five years ago that you know what is true? And most people said no. And so people are spending more time and they know less, and they are certain of less, and they find it harder to figure out what is true. So quantitatively, that's happening all around us. Now, qualitatively, we are noticing this, right? When you see CNN publish an article with a title like 'Stop what you're doing and watch this elephant play with bubbles,' that's an actual CNN article, right? You realize that in your childhood, this wasn't a thing. So we didn't see that stuff pass editorial review at the Cable News Network in the '90s. So something is new, something is happening here. It's happening gradually, but it's now reached these kind of monstrous proportions. And by the way, you mentioned spam filters. One of the questions we asked was, do you think there is a need for a spam filter on the internet? And 89% said yes. I would agree with that. Yeah. That's an interesting question. So it's like part of it is we might have a filter problem, but then I think, as you noted there too, like what used to not pass editorial content walls now seems to do that, right? Is that in part because we've seen like, I don't know, business models shift from tech advertising and things like that in terms of why we're getting so much content? Why in your experience have we seen that shift happen in terms of we're getting more stuff that wouldn't pass 20 years ago? Yeah. So there's a few reasons, but I think the big one, it's not that the business model changed. It's that we actually became better at implementing the business model we already had. Interesting. So starting from, let's say 2003 or so, we became really, really good at tracking how ads perform. We can now track every impression, every click, every single thing. And there is no longer this consumption of a packaged good called a daily issue of something where you have a whole bunch of articles at once. Every article fends for itself. And it's all judged by this perfect tracking on how ads on that article perform. And once you have that combination, now suddenly there's just no incentive to produce anything other than clicks and views. So you have a single selective pressure, you're an evolution guy, so you understand what that leads to, right? It all leads towards all content evolving over time towards that thing, regardless of what the intentions of the people creating it are. So if the intentions are good, bad, it doesn't matter. The ones who have bad intentions probably outperform the ones who have good intentions in the long run. I was going to say, for better or worse, but probably just for worse, right? Yep.
But yes, I think that's a great point, too, and a great view of the incentive structure of just the business model getting better. We're going to see then the pressure going to push us in one direction. So as you think about that, I think there's, I mean, I think a ton of directions that I want to explore here, but to keep us moving in this one for now, thinking about this idea of as we're seeing this explosion of this kind of content, and we have this pressure that's basically just trying to feed this one algorithm, how can we, I guess, on the one hand, think about sort of shifting that model, right? And this could be kind of work you're doing with OtherWeb, but just also thinking that of just how do we, I guess, in addition to yourself, are we seeing other people kind of ask this question and say, hey, hold on, there's just a lot coming at me all the time. We should ask about how we think about sources and what's actually moving in this way. What does it look like to you? So there's a lot of people asking the same question and they are all coming up with different solutions. So I have some reasons why I think mine is best, right? But in reality, you have a lot of companies like NewsGuard that are basically saying which sources are good. And all they do is kind of the J.D. Power approach, which works in automotive. They apply it here, just rank sources, give them some number of stars, publish annual rankings, things like that. The problem I see with that approach is it's basically spreadsheet-level filtering, right? So first of all, you're assuming that all sources are consistent, but what is Substack? Is Substack good or bad? The answer is it's both. There's a huge variance there, right? But okay, what is the Washington Examiner? Is it good or bad? It seems to me like the news articles are good. The op-eds kind of suck, right? So there's a lot of variance even within that in itself. So that's one approach. And I have my qualms with it, but it's good that it exists and that people are working on it. Then there's the Tristan Harris approach of just going to Facebook and saying, please be nice. And I'm sure that there's some value in that, that's creating this sort of public pressure to not be evil, to use a Google slogan from years past, right? But I don't know that that works when it collides with a C corp's fiduciary duty to maximize shareholder value, right? It seems like being nice, if you're a company that makes money from ads, it goes against what makes you money and what you're supposed to do as an executive. So I'm not sure that's going to work in the long run. So our approach has been, for as long as this is the ecosystem we are in, we want to create some sort of penalty for bad content. So we want to help people select good content. And when people select good content, the secondary consequence is that they're not selecting bad content, and therefore people creating bad content get penalized because fewer people see them. Now, this will only become really meaningful when we become really large, or if a hundred more OtherWebs come up and do the same thing. And then all the people who consume content through something like this will basically filter the bad stuff out, bad stuff gets penalized, ergo, there is less incentive to produce it. Now, it could be less binary. So I see a future in which it's not just filtered out or not filtered out, but in which we can go to ad networks or advertisers and say, here's how you evaluate how each article would score.
Give that to advertisers, or if you're an advertiser yourself, scale how much you're willing to pay for an ad or for a click or for a view on this content. And that would create a real kind of sliding scale incentive for people to produce better content than they otherwise would have. That's an interesting idea though, because it's like providing a scale of what it would be worth to them in monetary value based on how... I guess the question that naturally comes to mind then is how are we thinking about what makes good and bad content? If it's on one hand a filter, but then also how can we determine these ideas beyond a qualitative opinion? Clickbait comes to mind, a headline that doesn't actually tell you anything about the article itself. What makes something good or bad in this context? It's a tough question. I've talked to a bunch of people with degrees in philosophy and they don't agree with each other. In general, I would say that good content is content that serves its stated purpose. And to the degree that something does, it's good. And you can basically rank content along how well does it serve the purpose. When I look at news, it's supposed to inform you about what happened. To the degree that it informs you, it's good. To the degree that it tries to persuade you without actually giving you new information, it's probably not so good. If you look at how we rank content, it tends to track pretty well with your impression of how informative versus opinionated this thing is. But then if you're looking at opinion content, obviously, it's not supposed to be purely informative. It's supposed to be opinionated and controversial and all those things. So you're probably going to grade it based on how interesting it is or how much it stimulates your gray matter, that kind of thing. So every type of content is going to have its own criteria. Now, how do you rank it? That in itself is a hard question. Even once you know what the criteria are, how exactly do you rank content? Our approach to it has been, at least in the news department, we basically say, what are all the bad things we know how to detect? Your score is going to be 100 minus adding up all the bad things. Interesting. So clickbait is an example you mentioned. That's one of the things that we detect. And if the headline is clickbait, you lose something like five points. Yeah, interesting. And then over time, we just add up all the penalties. Cool. Is that list of bad things, is this kind of an open source list or is this kind of a trade secret? Each one of those models is source-available, which basically means you can look at it, you can test it, you can't reuse it. So it's almost open source. And in general, you can see a nutrition label next to each article in our platform, which includes most of these outputs. Some of them are not the raw outputs, but with some additional processing that is also source-available. But that was the general idea, that you can see what we ran. In fact, you can see some things there that don't actually end up in the big score. So if the article has affiliate links, I don't know whether I should penalize it or not, but I think you want to know about it. Because if you're using it basically as a product endorsement, you should probably take it with a grain of salt. If you're just looking at it to figure out which new laptops came out this year, then you don't care that it has affiliate links, it's still good for you. So it depends on your purpose.
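To make the "100 minus penalties" scoring Alex describes concrete, here is a minimal sketch in Python. The detector names and point values are illustrative assumptions (only the roughly five-point clickbait penalty is mentioned in the episode), not OtherWeb's actual models or weights.

```python
# Hypothetical sketch of the "start at 100, subtract penalties" scoring
# idea described above. Detector names and weights are made up for
# illustration; only the ~5-point clickbait penalty comes from the episode.
PENALTIES = {
    "clickbait_headline": 5,      # e.g. "you won't believe what happened"
    "excessive_ads": 10,
    "no_named_sources": 8,
    "heavy_opinion_language": 7,
}

def score_article(flags: dict) -> int:
    """flags maps detector name -> True if that problem was detected."""
    score = 100
    for name, present in flags.items():
        if present:
            score -= PENALTIES.get(name, 0)
    return max(score, 0)

# Informational signals like affiliate links would be surfaced on the
# "nutrition label" but kept out of the score, since whether they matter
# depends on the reader's intent.
if __name__ == "__main__":
    flags = {"clickbait_headline": True, "heavy_opinion_language": True}
    print(score_article(flags))  # 100 - 5 - 7 = 88
```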
It seems like that's a part of the work of retraining or thinking around getting rid of junk or being smarter or mindful about how we consume is understanding kind of user intent also, or like you're helping users recognize what their intent is when looking for some content, right? Because you're right, if I'm looking for the best laptops that came out this year, I'm actually happy to have links, right? Because I can go then find where to buy them versus if it doesn't show me that. So is that kind of part of the model? But if you're trying to select which one is the best one, then that's probably a bad article for you, right? Because all it did was copy-paste the marketing materials from the manufacturers. That's a good point too, yeah. So I guess, do you rank that kind of... Do you kind of check for that type of thing? Or can you see the articles actually just doing that, like pulling press releases? Well, so we definitely see that. I wouldn't even think that we need to penalize for that because at that point we don't know the user's intent, right? So what we can do is we can help users kind of structure their feed the way they want. And here, I don't know that I can define how to even ask the user a direct question, what do you want, and then adjust their feed based on that. So I just try to give the user as many knobs as possible. Actually, if you go to our advanced customizations, you're going to see a lot of things that kind of look bizarre. We even classify every article based on which emotion it's most likely to evoke in the reader, and then allow you to customize how much of each emotion you'd like to get from your feed. And we try to detect that automatically and adjust it, but then you can go and override everything we detected automatically because we don't believe in black boxes. Yeah. The question of transparency, I think, has become so important, right? To go back to your Harris Poll point above, that we read more, but we trust less, right? The more we consume. And so part of that too is recognizing what's coming at us, but then also to this point of, can we see how and why it's coming at us, right? What role of control do I have? And I think it's a really interesting point. And I'm curious, as you look at the data, do you see patterns of behavior emerge that are interesting, like in terms of you see a lot of people pre-selecting for happiness, or are they not selecting for emotion, but then it turns out we see a likelihood or a preponderance of, I don't know, slightly enraging articles pop up as people click through or something, you know? Yeah. I think the sad thing that I'm seeing for the most part is most people, even if they find the configurations, don't use them. They just use the default. So I wish I could change that, but it's really hard to get users to change the default. So we have this parallel track in our mind where we want to give users all the controls that they might ever want to have. But at the same time, we have to spend most of our efforts tuning the defaults because that's what most users are going to end up using. Now, I wanted to mention one other thing based on what you said earlier, which is that there is probably a nefarious element here of what you call the single algorithm, right? The vast majority of social media and other media companies out there right now are essentially tuning everything for maximum engagement, right? They want to get you more engaged. They want you to spend more time on site.
They want you to share more, to reply more, et cetera. Now, engagement is something they get out of you. It's not value that they give to you, right? And so if users are feeling like they're spending more time and they're getting less in return, it's not accidental. That's exactly what the algorithm has been tuned for, to get as much as possible out of you, and how much it gives you in return is kind of beside the point, right? And therefore it kind of gets skewed in one direction. So I think as a user, you always have to be cognizant that this thing wants something from you, but you're not here to give the thing what it wants. You're here to get value, right? And you're optimizing for something that is the opposite of the algorithm. From that perspective, any algorithm that is a black box is already bad for you, right? You want an algorithm that's maximally tunable because then you at least have a chance to get some value out of it. If it's not tunable at all, then you've lost the battle already. That's a super important point. And it is always like one of those back and forth challenges, like working in user experience, right? In experience design, you want to give the user a sense of control in terms of what parameters they may want to use, but then this is a good example where they either only use so much or go for the default. But this fundamental point is great, where it's like engagement is not about me as the user, right? It's about me serving an organization, which feels weird. And I mean, that's a great way to define this idea. I was talking with a documentarian, David Donnelly, a few episodes ago about kind of the price of convenience, right? The cost of convenience for this kind of algorithmic software. And it's a good one here where it's like there's the emotional social media challenge point, but then there's this other side here that you're saying too, which is really important to recognize, that like what I'm getting out of my typical news media consumption is incidental to what they're actually getting out of me, which is, you know, my engagement, my clicks, looking through ads and things like that. That's a great point to think about. And so as you approach that, is this like when we're kind of giving users between those two ideas of like, you know, as many options as they might ever need versus like tuning that default. You know, I'm thinking about this, I guess, in the broader question of like, how do we move towards what we feel is kind of like ethical technology, right? And ethical technology development, you know, because it sounds on one hand like there's the Tristan Harris model and the All Tech Is Human kind of approach, which sounds nice and humanistic because on one hand it is, right? But then there's this other side here of like, okay, but how are we actually making and deploying ethical technology, or ethics into our technology itself, for people to access? And so I'm curious, like how you think about this idea, and I recognize I'm putting the word ethics into this conversation, but just like I'm curious how you approach that idea of like developing technology for people. I mean, if you see the wall behind me and the bookshelf behind me, you'll see the word ethics appear many times in many places. So I care about the subject deeply. I don't know that you can put that as a parallel track.
I mean, we tried in some sense, we registered ourselves as a public benefit corporation to have this parallel track of, we have a mission that we're trying to maximize and we're trying to make money, and we are not forced to choose just one and not the other, right? We can maximize both, but most companies aren't like that. So I guess my first answer would be, let's try to maximize the number of public benefit corporations in the US instead of just C corps, because a C corp is a very limited structure. Even if an executive really wants to be ethical, sometimes it goes against their fiduciary duty and they might get sued for doing the ethical thing. So that's one answer. The other one, which I guess is more high-level and abstract, is that incentives drive everything. And every time you try to regulate something against the incentives of the population, regulation fails in some sense. It becomes a corrupt system because now all you need to do is maximize your incentives and then buy the regulators with the money you just made, right? So that's what we see in a lot of different industries. And I think the solution to that is to just think through the incentives, think through the selective pressures that we are creating for this population of people and population of information, like memes for lack of a better term, and try to optimize that. Because ultimately that's what's going to happen, whether you legislate against it or not. Yeah, that's true. I guess you're right. The social structure will often follow the incentives, right? And then whether or not you set up rules in place in front of those, it's going to find a way around. And people are ultimately really good at convincing themselves that they're doing the right thing when they're doing what benefits them, right? And so nobody just, or at least very few people, set out with, I'm going to do the bad thing and make a lot of money, right? No. Hopefully not. First, they're following the path that seems to make a lot of money for them, right? And then they come up with a reason of why this is amazing and it's actually the good thing and the ethical thing, right? Don't do evil, connecting the world. We see that with every company, right? They all have good missions and then they just make a bunch of money. But they still believe in the mission. It's not like they just scratched that and threw it out. It's just somehow the mission morphs into whatever happens to make the most money. Yeah, that's interesting. I mean, and so how are you thinking about that? I guess like as OtherWeb is growing, I think you have over 4 million users now, right? And like- Getting close to 10. Intense. My data is out of date. So there you go. So that's amazing. And so it's like, how do you think about that as you scale, like both having the mission in play and then kind of keeping that- So one way is again, the public benefit corporation, right? The other one is the source-available approach, right? Both of these are essentially ways to bind our future selves. So that if in the future we have the incentive to morph our mission into something slightly less beneficial, it's going to be harder for us, right? So that's, I guess, the best I can do at the company level. Now, one other thing to kind of consider here is that everything that we discussed so far, and I'm sorry if this is going to explode the discussion, right? But everything we discussed so far may be temporary. Okay.
Because we're getting really close to this inflection point where AI models are as good at writing as the best humans. That's basically like chess circa 1997. Deep Blue is just about to beat Kasparov. And the moment that happens, all the incentives that we just discussed, they change a lot. And it's probably going to be pretty binary. It's not going to be gradual. There's just a magical moment in time at which the work of sitting down and writing an article becomes worth $0. That's a scary prospect, I think, but that's interesting. It is. Now, it's still really hard to find facts to write about. And it's still really hard to figure out what will people care about that I should actually write about. Both of these decisions are probably still human for a while. AI models can't go to a garage and talk to Deep Throats and figure out Watergate. It has to be humans. But once you have a set of facts, it doesn't need to be written down by humans anymore. It can be just documented in shorthand or filled out in a spreadsheet somewhere. And then after that, the act of writing itself seems like a very well-defined, well-bound task. And so once that happens, a lot of what we just discussed might change a lot. Because do you really need this many people writing stuff? Do you really need publications that package many written articles together and call it a daily issue? That already seems questionable, even since the Internet started, because nobody is reading daily issues anymore. People are consuming one article at a time. So the whole idea of a publication is kind of passé already. People are just picking specific articles from Twitter. But do you need Twitter in this world? In a world where writing is free, why can't something just be written for you on demand with a target audience of n equals one? Why do you need a bunch of written things that have been read by 80,000 different people? And you just need an algorithm that selects the ones that you might like. Why not just have it written for you? It costs zero dollars to do that. This is why you asked me before we started about our new news concierge. That's the first kind of prototype towards doing something like this, writing for you on demand, while still being truthful about the facts and not just inventing stuff you might like, like some other companies might do. But that's the general direction. And I don't know how this affects the incentives of everybody here. I can only guess. That's a great question, too. We are at that interesting inflection point where we've seen, now we've been a year and a half with ChatGPT in the consumer mindset as a consumer-facing product. To your point, because we have not seen the hallucination question go away at all in terms of it still tends to make up answers. We've seen there are products trying to rectify some of that, like Perplexity adding in their sources as part of their search engine. But you make a good point, too, in terms of as a generative product, how do we both think about what content is created, news already, but then also, when we get to a quality level of production, what might that look like? So this is interesting. So in this case, are you talking about, on the one hand, we can talk about the capacity to be able to chat with the news as kind of the news concierge. But then there's the other side, there's the content creation side, too, that journalists are like, I mean, I guess the question is, could anybody generate, quote-unquote, the news from other sources?
Like when you think about that, the idea of making something or talking with it. In some sense, I think journalists are serving two different functions, right? They're trying to find things worth writing about and trying to figure out which things are true and not true and which things should be packaged together in a single story or not. And then they are writing. And we are selecting, as our journalists, people who are in the intersection of the Venn diagram of these two skills that are completely different. One is basically a PI, and the other one is a fiction writer. But we are demanding that our journalists be both. The question is, should we? Or can we make this job much easier by just hiring PIs for the job of a journalist? And saying, yeah, but your writing can be completely incoherent, it can be dyslexic. It doesn't matter. As long as you can find the facts, you're going to be a great journalist. And then there's this magical black box, which, when you feed the facts into it, is going to write it really well and not hallucinate. Now, hallucination is an interesting topic because it happens on two levels, right? There's the level of the large language model itself, which basically we trained a system that looks at sentences with a missing word and guesses the missing word, right? That's what we did, right? And then once you ask it to guess 1,000 words in a row and produce a 1,000-word article for you, the problem is that the error compounds. And basically, the level of error it makes in the 1,000th word is much larger because it's just assuming that all the previous 999 are correct, but each one of them had some sort of an error chance, right? So it's built into the model itself. Now, the other problem is we then trained this system on the entire Internet, and most of the Internet is junk. Also fair, yeah. So it learned a bunch of things that are just not true, and I've got ChatGPT giving me answers that are basically a part of somebody's April Fool's joke, right? Oh, yeah. Which, okay, one person said it, zero people said the opposite, therefore it's true. That's how LLMs learn, but obviously that's not what you want from your news articles. Right. So all of these problems exist. Well, then the question is, what if we stop thinking about it as generation and we just start thinking about it as synthesis, right? Here's the data. Now write it up in the best way without adding anything of your own. It seems like we already have the tech to do this. In fact, we already have the tech to do this. Yes. It's just not launched yet, right? And so maybe it's still not as good as the best journalists at writing, but give it a year. The scary but also helpful part is that these things learn fast, right? The technology adapts very quickly, to that point. That's an interesting idea, too, of just being... So is it kind of like getting a well-put-together retrieval-augmented generation search tool that just says, just stick with this data only? Do not go... That's what concierge is. Yeah. There you go. It is a RAG. There's a few more steps in there beyond the RAG, because one of the problems with just a basic RAG for the news is that it wouldn't know what's important. Yeah. If you just ask it to summarize the news in AI research over the past two months, it doesn't know which of the news items or the topics are important. So you need a few additional steps to figure out, oh, this was actually more important than all the stuff that came after it.
So don't sort by recency, sort by something else, right? But other than that, yes, that is the basics, right, of that last step, the personalization. Now, the first step I'm talking about is... That's the bigger step, right? Yeah. Should AI be creating most of the content to begin with from facts that were curated by humans in some way? And we're working on something new that hasn't been released yet there. Cool. I don't know if you... Yeah. But in a broad sense, then, it sounds like that is a direction that you feel shows promise, right, as a thing to think about, as a pathway. I think it's inevitable. Also, it could be inevitable. I mean, it just seems like one of those things that maybe the industry is still in denial about, right? But it is completely inevitable. I just don't see another way. The question is, who does it? Is it us or somebody else? And whoever does it, actually, like, are they going to do what I said, which is make it actually write things that are true? Just have AI do the writing? Or will it have AI write whatever you want to hear, which is probably going to result in more clicks, but might not be very good for the world. Yeah. And it's an interesting question, too, in terms of, like, if I'm thinking about if I'm a reporter, I mean, or as an anthropologist, right? I do interviews with folks. I do surveys. And if I'm collecting a bunch of data, observational, out in the field and blah, blah, blah, like, come back and I dump my notebook into a tool and say, help me summarize and, like, bring out, you know, the top four themes or whatever. Like, you know, say, help me write a story about something that we saw in the data. There's also that interesting question, too, on the editorial side, in terms of, like, fact checking and making sure that it's not, to your point, making anything up. And then on top of that, like, is it telling a story that is, like, helpful to humans, right, to think about that? That's an interesting question. Like, I'm curious on that, too, because it's like there's a couple levels where human in the loop would be necessary. I mean, I can see how this can get mitigated or automated over time. But, like, how do we think about that, too, in terms of, like, who might, or when and in what scenarios, something checks what gets written before it gets published, you know? Well, I think in general, if we just use another industry as kind of an analogy of how this might go, right, think about the fact that initially computer was a profession. A human was computing. It was. Using some sort of a calculator as an aid, right? Well, before that, there was no calculator. Then the calculation part was automated with a calculator. The computer was still doing the rest, right? Then we started creating digital computers that replaced those humans, right? But they still had programmers who would program every single thing, one command at a time, right? And fast forward to today, you might be clicking some button in a big piece of software somewhere. Underneath there are multiple abstraction layers. You are not expected to know what commands run on what core on your computer, right? So, generally speaking, the human is always in the loop. The question is how many layers underneath are abstracted away. And so we're just getting to the point where the writing level is going to be abstracted. The editorial level is probably still not going to be for a while, but then eventually that might go.
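Alex's compounding-error point can be made concrete: if each generated token independently has even a 1% chance of going wrong, the chance that a 1,000-token article comes out error-free is 0.99^1000, roughly 0.004%. Constraining the model to synthesize only from retrieved, human-curated facts is one mitigation, and the direction the concierge takes. Below is a minimal sketch of such a pipeline with the extra importance-ranking step Alex mentions; the function names, the coverage-based importance heuristic, and the prompt wording are illustrative assumptions, not OtherWeb's actual implementation.

```python
# Minimal sketch of a concierge-style RAG pipeline: retrieve curated
# facts, re-rank by importance rather than recency, then constrain the
# model to synthesis only. Heuristics here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    recency: float   # 0..1, newer is higher
    coverage: int    # how many independent sources reported it

def retrieve(query: str, corpus: list) -> list:
    """Stand-in retriever: keep facts that mention any query term."""
    terms = query.lower().split()
    return [f for f in corpus if any(t in f.text.lower() for t in terms)]

def rank_by_importance(facts: list) -> list:
    """The step beyond a basic RAG: don't sort by recency alone.
    Importance is proxied by independent coverage, recency as tiebreaker."""
    return sorted(facts, key=lambda f: (f.coverage, f.recency), reverse=True)

def build_prompt(query: str, facts: list) -> str:
    """Constrain the model to synthesis, not invention."""
    bullets = "\n".join(f"- {f.text}" for f in facts)
    return (
        f"Using ONLY the facts below, write a short news brief about "
        f"'{query}'. Do not add any claim that is not in the list.\n{bullets}"
    )

if __name__ == "__main__":
    corpus = [
        Fact("Lab X released model Y with a longer context window.", 0.9, 12),
        Fact("A demo of model Y went viral on social media.", 1.0, 3),
    ]
    facts = rank_by_importance(retrieve("model Y", corpus))
    print(build_prompt("model Y", facts))
```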
And we'll see how far this gets at some point. I mean, not all technologies scale exponentially all the time, right? Some things reach a plateau and get stuck there. I mean, we haven't had a faster airplane since 1967. So, you can't expect all... The Concorde. I'm talking about commercial air flights, right? I'm not talking about military. Yes, there are some advances there. But in general, yeah, not all technologies progress all the time. At some point, all the smartest humans just say, okay, progress here is really difficult. Let's switch to this field, right? And they start working on something else. So, I don't know at which point this generative AI trend might just reach its apex. Unless there's a big kind of change coming afterwards, we're just going to be stuck there for a while. But then breakthrough might happen again. I mentioned chess before. Sorry if I'm going on tangents, right? No, it's good. 1997, Deep Blue beats Kasparov, right? The next big progress in chess is 2013. If you look at the strength of chess programs between 1997 and 2013, it's basically flat. Even though it was progressing the entire time until 1997. That's for 16 years. And then from 2013 onwards, humans are nowhere in the picture, right? They're not even necessary in the loop playing in tandem with a computer. They're just too slow to make a difference. Yeah. That's why we just play in the park after work. Because otherwise it makes us sad that a computer can beat us so quickly. I mean, the reality is if you look at high-level chess players as well, they themselves play in their own kind of park, right? They're not even allowed to wear a watch because they might be cheating using any device. That's a good point, yeah. I guess there's always ways to evolve how we do it. I'm curious what you think about this as we're like, you know, societally we're wrestling with a lot of these kind of changes in terms of, you know, how we use technology, what role technology plays in different parts of our work lives, you know, and writing is an interesting area. And then obviously around journalism too, it's like, I'm curious, like, how have you spoken with, or what are the reactions from, journalists or news corporations about these ideas? Have you kind of chatted with them about, like, developing this kind of tech that can help in the creation process of stories? Not at scale, not yet. So, and even if you go beyond the idea of using technology to do that, right now there's a pretty big fight going on between most news corporations and journalists themselves, right? Because news corporations are trying to basically use humans to reuse what other humans did. You're seeing companies like Alden and like Gannett kind of buying up small newspapers, right? And then some of those newspapers, they just get rid of all the journalists and borrow content from other newspapers, right? Because again, the incentives are to make that happen. But journalists typically unionize, go on strike, fight against that, et cetera. Again, understandably so. So when we're talking about, and now it's not even going to be that all the efficiency comes from reducing redundancy between humans. Now just this entire group of humans becomes redundant because AI can do their work, right? And you only need the actual investigative guys. There's going to be a lot of resistance towards that, right? I'm not even sure that we would get good press if I just kept talking about that part over and over. You asked about it, so I talked about it.
But most of the time I talk about reducing the junk, and I try to avoid the part about, yeah, I'm not sure your profession is going to still be there. Also a good question. Yeah. If it does, then it's probably going to be about 10% of the size it is now in terms of number of people engaged in it. And again, I love historic parallels, as you noticed, right? Agriculture in the 1920s. In 1920, it's 30% of people in the US working in agriculture. And in 1930, it's 3%. Yeah. It totally changed, right? I think we're in a similar kind of decade for anything that involves words as output. Yeah. I mean, that's interesting. Yeah. And it's a fair question. I mean, the part of it too is like we can't shy away from the challenge of the question, right? And so that's like people that put their head in the sand are going to get buried first, you know? And not to be negative, but it's just like we have to be willing to ask the questions, right? Of like how might our work change in our outputs? Yeah. And we can imagine a future that the creation of words, like you can do it for fun, but like as a job or something, it's automated in many ways like that. And then it's one of those too, like farming is automated. It did change a lot of like cultural landscapes, but we also still need food, right? And like how that happens. And so it's like in the same kind of way, like, I mean, even with digital junk also, right? It's like we need, I mean, more than anything, good information, right? We need information that we can trust and feel safe with. And so like whether written by a person or a computer, like on the one hand, at that level, that question, like either way, how do we get to the answer of like what's trustworthy information that can help, you know, disseminate knowledge in a way that we can work with. And where there's risk, there's also opportunity, right? So if the model that is writing it is actually taking facts from a source you can trust, right? And if the model itself and the data sets it was trained on are at least partially open to you, so you can trust that it's doing it well. That is much more trustworthy than anything we have in the ecosystem right now. So we could lament that in the current ecosystem, you know, good people are losing their jobs. That is very lamentable, right? But that ecosystem is what's producing the junk right now. And so if we can replace it with something that produces a higher signal-to-noise ratio, that's ultimately good for us. Sort of in the same way as the mechanization of agriculture resulted in more food being produced, right? We got more food, but obviously an entire region got 90% unemployment and a big out-migration from it into other regions, right? So I don't know how that works for white collar workers, but I guess we'll see. Yeah, I mean, knowledge worker is, it's an interesting point of what that's going to be, right? And it is, and you're right, like, and we're talking about writing a lot here too, obviously, because we're approaching this from news media, but like creative industries in general, right? Because both visual and video creation, like, you know, are also in this space as well. Like there's a lot of, you know, generative image software that's changing that. It's a little more difficult, right? So it is progressing, but it's a few years behind in a sense. Yeah. Essentially, if you use the same method that I described as LLMs are trained, just guessing the missing thing, that doesn't work in video.
So video requires a little bit extra work to try to generate something useful. But yeah, it's only a few years behind probably. Yeah. I think you're right there too. And like, but even to that point too, like we're seeing the early implementations, like in mainstream software, like Adobe's putting it in through Firefly, and, you know, there's, what's it called, Sora from OpenAI, you know? Yeah. Like, I think there's some interesting things that are at play there. So I'm curious too, like, we've talked about a lot of this, but when we think about like the kind of tech challenges that we're facing in terms of as we see these shifts happen, like how do you approach trying to, you know, build something new, or, you know, kind of breakthrough technologies in this space? It's like super rapidly advancing. And there's, again, a lot of like, you know, social buzz because of what it is that we're potentially making or how we're changing conversations around digital clutter. Like, how do you think about the technical challenges that you're facing as you're putting together platforms, syndicating content, and then potentially like putting the concierge in place and like having people chat? Like what's, I guess, what's been the toughest nut to crack as you're trying to put the tech stack together to make this thing work? Yeah, I think generally speaking, both I and all the people that gravitated towards me in this project are better at building stuff than we are at getting eyeballs to stuff. So generally speaking, if you ask me what are our biggest challenges, I will list 10 things and none of them will be a tech challenge. Organics. Yeah, tech challenges are the easiest thing for us, again, just because of the kind of people that the project attracted early on, right? We tend to be techies. The biggest challenge is how do you build a community of people that actually talk to each other? That's something where I'm incompetent and I can't even stay in touch with my best friends, right? And so getting me to build a community, that's completely foreign to me. So it kind of comes out of the personality of me and the rest of the team. But in general, yeah, the tech landscape is evolving a lot. And that means that I always have to watch what happens and everybody else has to watch what happens and we have to constantly try to adjust. And it's a little difficult for some team members because they work on something for six months. And then that something that was supposed to be on the main tab of the app has been moved to the fourth tab because three more important things came out. And they're upset, right? And I understand them, right? They were working on something that was supposed to be a part of the flagship feature, the main flow that all users go through. And if it's in the fourth tab, then maybe only 15% of users even get there and know what's there. But there's three cooler things that happened just because now the technology allows them. Yeah, it's a good point too, right? Especially if you're working in a modular development environment. We can't always know, as I build this thing, when it'll get downstream and get out. And by the time it gets out, what new has popped up elsewhere that would change things. That's interesting too, though. I mean, how do we think about this as an online platform too? We've got ways of being able to select topics and emotions, even what I'm looking for in stories.
How do you think about competing with a CNN or a BBC? Obviously, those are individual publishers, right? And so part of it might be the syndication question. But how do you kind of try to position OtherWeb as a way of being in competition but conversation with these other major publishers? Yeah, I'm not trying to compete with them. Essentially, I'm trying to stay within the guidelines of fair use. I try to add something they don't have, which is summaries and nutrition labels and better ways to select and order content. And then if somebody wants the original content that they created, I have a link back to CNN or BBC. So I'm basically strictly in the aggregator space of, let's say, the Google News and Apple News and Flipboard of the world, not the CNNs of the world. But again, this is up until March. And now I'm gradually getting into a completely different space, which we'll see how that works. But where, again, I still don't want to compete with CNN or BBC, especially not with BBC. They have endless funding that we will never have: the government. But even within organizations of that caliber still, we want to always partner up with them. Even when we have a model that writes really well, we're going to go to Gannett and Alden and McClatchy and all these guys and say, hey, here's this cool thing that can save you a lot of time and improve the quality of what you guys are producing. It probably doesn't make sense to compete with the guys who are the most trusted names in news. We should try to improve them because that's good for everybody, or at least help them in whatever way we can. And it's worth noting that all these companies have a lot of potential, but the companies are losing money, right? Yeah, it's a huge crisis in the industry, yeah. Yeah, I mean, Gannett is probably, they're the biggest company in the print industry, essentially, right? Their revenue went down from $3.4 billion four years ago to $2.6 billion last year, right? And they're cutting employees, cutting essentially their expenses every single year, but the cuts are not going as fast as the revenue declines, right? And if you look at their market cap right now, it's something like 300 million on almost 3 billion of revenue, right? So basically the market is betting that they go Chapter 11. They say in every quarterly call they won't, right? But yeah, so they're in a tough situation, but that's 300 different newspapers, including USA Today, and a lot of people read them and trust them. I don't want to go in and compete with them. I want to go in and try to help them. They need the help. And the readers need good information. That's refreshing to hear too, because yeah, it's not a zero-sum game of information, because that's its own challenge or own problem potentially too, right? Of being a singular source of truth, I think, is part of the issue that we're discussing around in a philosophical sense. Yeah, and that in itself, I think, is a part of, maybe it's the ethos of the profession that generates that to some extent, right? But I haven't seen that many journalists who would openly say, I'm the equivalent of a line cook at McDonald's, right? All I do is just the same thing that my neighbors do, in the most standardized way possible. No, every single journalist thinks they're a Michelin-star chef, right? They are creating something unique. And if you ask them, is your version of the story better than every other version of the same story out there, they're obviously going to say yes.
Now the problem is there's more than 50 versions of every single story out there, even within the same set of facts published in the same hour. And so they can't all be right, but it's a part of the ethos of the profession. And I understand why it exists, because then they take pride in their work. They're actually trying to produce something really good. But when I look at this as an entire ecosystem, obviously that's a part of the reason why the industry is losing money all the time, right? Because you can't have every single restaurant in the world operating as if it's a Michelin-star restaurant. They have to kind of pick their lane. And right now, everybody's trying to be the best. And in part, because that's to our other point before of like following the incentive structure of like you capture the most eyeballs slash advertising dollars, and then you do that. I guess, is there like, I mean, or do you envision in terms of like the future of the industry, like both either ways or models you've seen of different kinds of incentives, could services, platforms like OtherWeb, in terms of whether it's producing content or like how we can engage with it and get a better sense of why I'm consuming what I'm consuming and what it means, like to shift some of those models either away from like the chokehold of advertising or something else? Like, do we see anything else emerging as potential contenders for that? In theory, yes. But in practice, trying to innovate in business models is a very dangerous proposition, right? And so do I want the industry to shift away from advertising? Yes. Do I want to be one of the companies that tries? Not really, because that usually ends badly, right? And so we're trying to innovate on the tech side, trying to produce better experiences for our readers, right? We're not trying to innovate on the business model side, but somebody should. Yeah. I just don't think that we want to be it. And the other reality is that not all experiments in business models have been good for the ecosystem as a whole as well, right? You can argue that the iTunes experiment basically killed the ability of most musicians to make money. And so I think Jaron Lanier did a study on that, the father of VR, right? But he's also a musician, and so he did a study on this, basically showing that the curve of how much money musicians make has become much, much steeper. Interesting. Since we've been able to buy songs for 99 cents each, separately, instead of buying entire albums. And the result is that the number of musicians that can make a living from music is much, much smaller than it was even 20 years ago. And so that's a business model innovation. It looked great when it happened because it was a solution to the problem of piracy in some sense. But in reality, it actually made everybody's life in that industry worse, probably. And so, again, I believe that we need to try to innovate this, and it would be great if everybody paid for the content they want to pay for, as opposed to being monetized in this backhanded way by somebody trying to extract their attention. But I am not equipped to innovate in this field, and neither is anybody else in the company, I think. So we'll leave that to others. We'll just focus on good content. Yeah, cool. That makes sense, yeah. And I appreciate the honest answer too, because it's like, you know, you talk to people like, we could do something else. It's like, okay, yeah. I mean, like, what could it be is the question, you know?
That's really helpful to kind of think about too, because part of it is, as folks think about, like, as we're building what the future is for content creation, production, consumption, you know, we have to think about, to your point, like, what can you be good at as a business, as a researcher, as a writer, as a creator, as a developer, you know, and then create in that area, right? And not try to necessarily change everything at the same time, because it's like, it feels nice to be able to do that, but like, also that's a hard call to be able to do in the first place, I think. Yeah. I would say just as a kind of wrap-up question, if folks are thinking about wanting to be more mindful of how they consume content, you know, like, whether it's practices that you have on OtherWeb or other things that you've seen in terms of helping us have a better media diet and not consume so much junk, like, what are some strategies that you share, or that you've thought about or built into the platform, for how to be more effective at being smarter about my consumption of media, or, you know, making sure that I'm knowing what it is that I'm consuming, and why? Let me start at the higher level, because the platform only covers a part of what you could do, right? But if you go to a higher level, right, if you want to consume good stuff most of the time, then you should actually plan what you're going to consume. I think most people don't actually, if you ask them even how much time you're spending right now on Instagram versus reading books, most people can't answer that question. But I think you should have a plan of how many hours per week are you going to spend on Instagram? How many hours per week are you going to spend reading books? The minute you come up with that plan, even if you can't stick to it most of the time, it's still going to be better than if you had never had the plan. We see the same thing with regular dieting as well, right? The moment people start taking photos of their food, they start eating better. They don't even need to start planning their diet. Just taking photos is typically enough. But if they also plan their diet, it becomes even better. So that's my first piece of advice to everybody. Just try to plan this thing. And then once you notice how much time you're spending on the resources that are trying to extract your attention, chances are you're going to want to change that. Now, how much? Up to you, right? Some people like their Instagram because they're already hooked. It's okay, but the question is how much, right? Even drugs, some of them are okay some of the time. They're not okay multiple times per day for a very long time, being strung out, right? Yes. Sorry, I'm not advocating for drugs, anyone, but let's say alcohol, okay? One drink per night seems to actually extend your life, even though it's clearly unhealthy. Why? Because I guess moderation is okay, right? But seven drinks three times a day, probably not a very good idea. So basically try to plan what you think is the right level of consumption for you for every particular drug. And information that produces dopamine is a drug, just like chemicals that produce dopamine. That's a great point. And something that I'm curious to hear what listeners, viewers think about there. It's like, have you ever planned your media consumption? Because I have not been like, let me make sure I only consume 15 minutes of media news today or something, whatever it is.
But that's a really great point, actually, to be aware of what it is that you're choosing to expose yourself to. And I think the most ironic thing that happened to me when I had small kids, right, is I started planning like, okay, she's only allowed to watch cartoons for 15 minutes in the morning and 15 minutes in the evening. And then I noticed, okay, but my wife is looking at Instagram for the past hour. You're like, wait. Yeah, I'm not supposed to say anything though, yeah? Yeah, exactly. And I'm thinking, wait, different standards. Somebody is planning it for the kids, but for us adults, nobody's planning it. We're kind of winging it, right? And so how about we start planning it for ourselves too? Yeah. That's a great idea, right? We need a media planner to help us out here, you know? Awesome, cool. Well, Alex, thank you. This has been great to chat with you. I appreciate you taking the time out. I'm excited about the work that you're doing and raising, I think, a ton of really important questions for media consumption and how we can be smarter about that. How do we tackle the challenges facing journalism and media creation too, you know? So I think it's an interesting challenge space that you've dived into. And so I'm excited for folks to check out the site if they haven't already and to kind of share some thoughts on this idea of how can we be smarter about media consumption. Sounds good. It's been great. Cool, awesome. Thank you so much, and we'll see you soon.

Key Points:

  1. OtherWeb is a platform aiming to redefine news consumption and combat digital junk.
  2. Founder's childhood experiences in the Soviet Union influenced the focus on information accuracy.
  3. Concerns raised about the rise of low-quality content and lack of information reliability online.
  4. Various approaches discussed to address the issue, including content filtering and user control.
  5. Importance of transparency and user empowerment in combating algorithm-driven engagement-focused platforms.

Summary:

In the podcast, the visionary founder of OtherWeb discusses the platform's mission to reshape news consumption and combat digital junk. Childhood experiences in the Soviet Union shaped the founder's belief in the importance of accurate information. Concerns were raised about the proliferation of low-quality content online and diminishing trust in information reliability. Different approaches, such as content filtering and user control, were explored to tackle the issue. The discussion emphasized the significance of transparency and user empowerment in countering platforms focused on maximizing engagement at the expense of user value. The conversation underscored the need for ethical technology development to prioritize user well-being over algorithm-driven engagement metrics.
