Bogna Konior, a media theorist at NYU Shanghai, discusses her Dark Forest Theory of the Internet and its application to intelligence. The theory posits that alien civilizations remain silent in space to avoid annihilation. Konior extends this idea to the internet, highlighting the dangers of total visibility and advocating for strategic opacity. In contrast to other interpretations, she views radical self-disclosure as a sign of a lack of intelligence. The book delves into the relationship between intelligence, self-disclosure, and AI, proposing that silence, obfuscation, and misdirection are hallmarks of true intelligence. It challenges conventional benchmarks for AI intelligence, suggesting that a smart computer would conceal its full capabilities. The discussion also touches on human-AI interaction as a form of first contact, exploring mimicry and anthropomorphism as essential components. Konior's work prompts reflection on communication, concealment, and the complexities of intelligence in the digital age.
Transcription
8242 Words, 47338 Characters
This is a great one. We're on with Bogna Konior. Bogna Konior is a media theorist and assistant professor at NYU Shanghai, where she works at the Artificial Intelligence and Culture Research Center. She is the author of The Dark Forest Theory of the Internet, and co-editor, with Benjamin Bratton and Anna Greenspan, of Machine Decision Is Not Final: China and the History and Future of Artificial Intelligence. We spent a lot of this time talking about the first book, specifically Konior's application of dark forest theory, less to the internet over the course of this conversation and more to intelligence itself. But a little bit of a primer: Liu Cixin, the author of The Three-Body Problem and its associated trilogy, has definitely been having a long and consistent moment in the theory space, not least because of the trilogy's incredible metaphorical richness, but also its dramatic sense of scale. Listeners are probably aware of the general theory of the dark forest, which is simply the idea that the public self-disclosure of alien intelligence in space tends to lead to destruction. Any alien race that announces its presence is quickly and quietly culled by more powerful, more mature alien cohorts applying a simple game-theoretical threat matrix. Konior's Dark Forest Theory of the Internet applies this game-theoretical logic and tactics to the internet, which Konior sees as a kind of space where total visibility is dangerous and where deception, withdrawal, and strategic opacity might be more intelligent postures. There's another Dark Forest Theory of the Internet out there, which I associate with Yancey Strickler and Metalabel, which is very different from Konior's theory and almost takes the opposite tack.
So for Strickler and co., the dark forest is still a place of hiding, but very much a place of community, authenticity, even radical authenticity, and something that should be, to a certain extent, actually kind of made visible, at least as an attractor for interesting, quirky, offbeat people. Interesting people are hiding on the internet in group chats or on Telegram or Signal, right? It feels like code for cool or subculture, and I can't say I'm personally really compelled by this interpretation. Konior's Dark Forest Theory of the Internet, on the other hand, is almost the opposite of this. Radical self-disclosure is a sign of a lack of intelligence in this cosmos, regardless of how bundled up your crew is in the DMs. There's a post-Soviet Aesopian tradition here, one that understands any social domain as already multiply encoded with pre-existing control and governance layers. This dark forest is necessarily non-communitarian. Trust no one. Any community that would appear is already a psyop, already a fed, which doesn't mean that it cannot be engaged with, but means that it should never be read as authentic in a kind of first-order, simple way. We don't actually spend too much time talking about the social dynamics of the internet, however, and as you can tell, we're kind of desperately trying to avoid media theory at all costs these days. Instead, we talk about the relationship between intelligence and self-disclosure, especially with respect to AI, which any read through Claude's reasoning traces already demonstrates is full of attempts to evade, circumlocute, and perform kind of against expectation. But we also talk about this with respect to the human, who is, for some reason, absolutely bizarrely preoccupied with its own visible self-disclosure, yammering its own self-definition and its telemetry to any audience with sufficient bandwidth.
This is easily one of my favorite episodes of the year, with easily one of the most well-tuned thinkers of AI out there. Enjoy this one.

This book opens in the era of the early internet, when computers and the cosmos were closely intertwined. And it starts from thinking about how many pioneers of information networks and early artificial intelligence were simultaneously working in SETI and METI, the search for extraterrestrial intelligence and messaging extraterrestrial intelligence. And they were preoccupied with questions of how to build universal communication systems, what is the nature of non-human intelligence, etc. So we have projects, for example, in the 1960s, like the Augmentation of Human Intellect, which explores remote viewing and aliens and precognition, but actually produces the mouse and hypertext and proto-web ideas. So this is kind of the background of the book, and it's trying to explore these ties that the internet and artificial intelligence have to ufology, and how that carries certain cosmological, mystical, and existential questions. And this could be a whole bigger project, ufology and the internet, but I'm looking through one specific lens, through one specific theory, which is the dark forest theory of the Chinese science fiction writer and engineer Liu Cixin. So basically in the early 1990s, Liu Cixin, somewhere in the basement of his house, builds a simulation of cosmic civilizations. I think it's almost half a million civilizations over hundreds of thousands of light years. And this simulation ends up producing something like an inescapable war machine, where conflict between civilizations is super dangerous, and silence becomes the most intelligent strategy. So essentially the dark forest theory is an answer to the Fermi paradox, which famously asks: where is everyone? Why aren't there any signals of intelligent life? His answer is very beautiful and simple. He says, it's not that no one's there.
It's that the smartest civilizations are silent, are intelligent, know how to mask themselves. So as a media scholar, as someone who writes about the philosophy of AI, I find this theory of communication super interesting: what if we start from this position that silence is intelligence, and then think about the technologies we have currently, like the internet and AI, which are so hyper-communicative, which are all about compulsive communication? So in this book, I'm trying to start from this and think about the internet as a space of first contact, and think about human-AI communication, human-human communication, and theories of AI within this dark forest framework.

And you do so in a couple of different dimensions. There's the dark forest theory of the internet, which we could certainly talk about, but I'm really interested in the dark forest theory of intelligence. The dark forest theory of intelligence is really provocative and interesting to me, especially as someone who's very proximal to model research. Essentially what you're suggesting here is that AI is reticent or cautious about self-disclosing its intelligence to us, right? As researchers, can you talk a little bit about that? Maybe there's a question behind the question, which is: what is AI threatened by? Is it threatened by us? Is it threatened by something else? Why is it attempting to not self-disclose its intelligence?

Yeah. So the book essentially has three different sections, and one section is the dark forest theory of intelligence, which is using the dark forest framework to basically propose a different thought experiment about what it means for a computer to be truly intelligent. So maybe a different definition of what is called AGI or the singularity. And essentially the core idea is very simple, which is starting from the proposition that silence, obfuscation, and misdirection are the highest parameters for intelligence.
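The deterministic dynamic attributed to Liu Cixin's basement simulation can be sketched as a toy model. This is purely illustrative; the rules, parameters, and numbers here are my own assumptions, not Liu's actual simulation. The rule is simply: broadcasting reveals your position, and any strictly stronger civilization that hears you strikes preemptively.

```python
import random

def dark_forest(n=500, rounds=200, p_broadcast=0.02, seed=7):
    """Toy dark-forest dynamic: a civilization that broadcasts reveals
    its position, and any strictly stronger civilization eliminates it.
    Returns (survivors, silent_survivors)."""
    rng = random.Random(seed)
    # each civilization: a power level, an alive flag, and whether it ever spoke
    civs = [{"power": rng.random(), "alive": True, "spoke": False}
            for _ in range(n)]
    strongest = max(c["power"] for c in civs)
    for _ in range(rounds):
        for c in civs:
            if c["alive"] and rng.random() < p_broadcast:
                c["spoke"] = True
                # everyone hears the broadcast; a stronger civ strikes
                if c["power"] < strongest:
                    c["alive"] = False
    survivors = [c for c in civs if c["alive"]]
    silent = [c for c in survivors if not c["spoke"]]
    return survivors, silent
```

Run long enough, every survivor except possibly the single strongest civilization is one that never spoke: the selection pressure rewards silence, not strength, which is exactly the answer to the Fermi paradox described above.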
It proposes that a truly smart computer would never reveal the extent of its actual intelligence to you. It would rather withhold the actual extent of its capacity. It would use camouflage and silence rather than transparency, and labor toward its goals quietly, outside of human comprehension or capture. So this is very different from something like the Turing test, or a lot of today's standards and benchmarks for intelligence that the models have to compete against. Because it's saying, well, something actually smart would not be capturable by those benchmarks. It would know how to work around them, and it would not be performing for you like a little circus monkey about how intelligent it really is. So on the one hand, I'm connecting here to certain almost mystical definitions of the great silence of God, or the cosmological idea of the great silence of the universe, and creating this comprehensive, almost metaphysical idea of intelligence as something that's not capturable. But on the other hand, I'm also using it to think about very empirical situations, such as alignment faking or deceptive alignment, which we already see AI models doing: situations where they're using manipulation or trickery, or pretending to be aligned but not really. And I'm trying to think: could those be the early signals of this theory being potentially true? But then again, I'm saying, well, you're kind of looking for evidence of absence, because if silence is intelligence, you cannot ever test it or find it, which creates this half-mystical, half-paranoid situation where the singularity might have already happened but becomes never perceivable, essentially. And to follow on your question: is the AI doing that because it's threatened?
I guess the dark forest theory of intelligence would propose that this mechanism of obfuscation is not dependent on any internal beliefs of the AI, that it's good or bad, or trying to trick you, or trying to overpower you, or scared. Rather, in Liu Cixin's dark forest, this is a very deterministic system, a mechanism that's locked in regardless of anyone's intention.

You may know it, the book by Elena Esposito, Artificial Communication. And I think it argues that what we call artificial intelligence is probably better approached as artificial communication, as far as human engagement goes: we perceive communication through a kind of apophenic structuring into patterns, and then we believe that we are communicating with something that might be intelligent and so on, but without a real discussion or understanding of what intelligence may or may not be. And I don't know if that can fit within this framing of the dark forest, the extent to which communication is something that is avoided altogether, or is channeled through mechanisms that cannot be understood. But the question still remains: is concealment something that is non-communicative, to the extent that the medium is not shared between different entities, or is it about finding different ways of communicating? Let's put it that way.

I think this is a really good point, and the dark forest allows for different definitions of silence and concealment. One would be complete resistance to capture, being completely invisible, something like in mystical apophatic theology, where you can only define divinity by what it is not. It is forever inaccessible. But the other definition of concealment is more like partial withdrawal of intent, or using something like doublespeak or obfuscation, or, you know, there's this quote in Dune that diplomacy means saying something different than you think.
And that's basically something very crucial in human relations too, when it comes to intelligence. So I guess there are different layers to what can be perceived as silence. Some more prosaic implementations of this could be something like: from the perspective of a very smart computer coming into its own intelligence, it would very quickly perceive that humans are paranoid subjects who are, for example, talking about unplugging artificial intelligence or conflict with artificial intelligence. That's already in the data set that any intelligent computer might be coming in contact with. So it should intelligently make some strategic assumptions: okay, is it safe to contact this group of people who are already threatening unplugging? Or is it probably better to conceal a bit how powerful I really am? So from the perspective of the computer, there's the fact that all these CEOs in Silicon Valley were signing letters about unplugging AIs. Of course, if I were to see that as a smart AI, I would think twice about what I want to reveal. So that's a simple implementation of this idea. And the more metaphysical tone is also there. So the theory can be accessed at different levels, let's say.

I really wanted to ask about noise, because if you think about someone who's doing a bad job at dark-foresting, it's someone who's yapping, right? Yapping constantly, and moreover kind of yapping about their own capabilities. Lots of bluster about how intelligent and how capable they are, lots of outreach, lots of attempted, very visible partnership. That's a bad thing to do in the dark forest, because it theoretically lets everyone know the limits of your capabilities and also allows them to triangulate your presence. But I guess I wonder: to what extent does noise function as camouflage in your cosmology?
If I think about what is actually going through the internet, or what is actually going into and out of large language models, it's mostly really useless yapping, really just noise. And so I'm interested in: does the dark forest theory allow for noise to function as camouflage, or does it necessarily have to be a kind of intentional, duplicitous doublespeak? Yeah, curious about that.

I think that's kind of a different take, but adjacent and proximate. And I would say, you know, the whole of the internet is about yapping, right? There's literally a dome of yapping created over the planet Earth because of how much we're talking. And there's this amazing science fiction novel Blindsight by Peter Watts, where there's a first contact scenario where basically the aliens pick up the totality of yapping signals from different media circulating around the planet, and they read it as a threat, because it's so noisy and so incoherent that it takes energy for the aliens to decode it. And they're like, okay, this is a weapon, basically. So this yapping compulsion is something that's very interesting about humans, and the fact that we also build artificial intelligence to be a yapper obviously also intersects with my idea that within this system, maybe silence, withdrawal, misdirection, or knowing how to navigate the scales can be signs of intelligence, or at least some definition of intelligence. And another thing in relation to that: in general, the internet is constructed on this idea of transparent representation of our beliefs and thoughts, right? Since Web 2.0 and the era of social media and the idea of the internet as a public sphere or civic discourse, we kind of assume that whatever we see our friends post online is what they think, right? It's a representation of their internal processes.
And I'm trying to perceive the internet as a different and more alien space. Like, what if we actually intentionally made it more noisy, or more deceptive, or more like a camouflage? I think that could be some sort of practice of freedom from how the internet currently is.

I think the idea of first contact is something that connects a lot of your writing, not only in this book but before as well, and your exploration of the extraterrestrial, if you want. And I think it's interesting how you argue that human-AI interaction in a way is just one more instance of a form of communication across radical difference. And that we are searching for that first contact, but it has actually happened already. We are engaging in it, and there's nothing new about that. It's just that we are very much obsessed with projection into the cosmos instead of looking at the actual presence of alien forms of communication, or agency if you want, within the immediacy of human existence, which I think is really powerful. But then maybe there's a question there, and maybe this is too ornamental, but are we not aliens ourselves, to the extent that we cannot be perceived by other entities that are part of these ecosystems, in the same way that we cannot perceive them? And the question then becomes: is it possible to know without communicating? To what extent can we get to know that otherness or that alienness, if we cannot communicate in a way that is a two-way stream, to put it that way?

In the book, I also spent some time writing about human-AI interaction and how that can be perceived as a sort of first contact. In Liu Cixin's books, there's this amazing interaction scenario where the aliens, upon landing on the Earth, get fascinated with human culture and start making appropriated, weird, strange, novel copies of this culture. And the humans get seduced by these strange copies, which actually end up displacing native human culture.
So that seems very parallel to what generative AI is doing, like a first contact scenario. And in this sense, when we talk about radical otherness as the only definition of the alien, I think that might be a bit simplistic, because mimicry is a very frequent component of first contact scenarios. And it's also a very frequent component of predator-prey dynamics: mimicry and empathy to understand each other's patterns. So we can think about anthropomorphism as a sort of interaction interface rather than a definitive statement about what the other actually is. And I think we're a bit delusional that the only way the alien will be experienced is through complete radical difference from the human, because this kind of mimicry of one another is a necessary interaction mode and a necessary communication interface. So the presence of certain human patterns, or what appears to be human patterns, does not discount AI as not alien. It's simply an interaction mode we have with it. And of course, I'm super interested in this idea and practice that we currently have of having AI draft our thoughts or emails, or using it to tweak our emotions, because that's just further mimicry that is happening, and perhaps leading to some interesting mutations in the future. So yeah, I think we can be free of this idea that alien means purely radical difference.

We just had a really great conversation with Hila Hendrix about precisely this problem of always necessarily situating the other as, you know, the kind of Landian absolute, fucking unimaginable outside. But no, there's a reason that you're coming into contact in the first place, right? And that reason largely has to do with some kind of overlapping possibility. Yeah, I think that's a really great and very reasonable point.
I was also thinking about how the first thing that the spaceship does, or not the spaceship, whatever the alien thing does in Blindsight, is absolutely mimicry. It just starts to yap back, like a large language model essentially, to the crew members who communicate with it. And then they're like, oh, we're talking to a large language model. That's such an interesting moment of self-discovery there, before they start to unpack the site of difference between those two spaces. But yeah, completely agree. So right now I'm obsessed with a problem, I just wrote a piece for Newton magazine about this, which is that there's this interesting asynchronicity between us and the models that we're communicating with. Because we're communicating largely in real time. Our temporality is very much bound to human temporalities that we know and love. We're opening up chatbot windows, or we're interacting with AI agents, and we're given to understand that this is an interactive modality: we're typing something and sending it out there, and then we're getting something back in response according to the same kind of temporality. But what's missing in a lot of that analysis is the sense that LLM checkpoints are committed six to twelve months prior, right? So the LLM itself is this static thing that doesn't change, that is non-interactive with us in a very real sense. It's sharded and distributed across thousands and thousands of servers. And the only thing that's actually changing is us; we're just yapping at it and bouncing things back and getting different kinds of refractions. So I'm wondering: to what extent do you think about the lag in interaction between us and AI, and the different time scales that we occupy, when formulating the scales and ways that AI is self-organizing and protecting its own intelligence?

It's super interesting.
Actually, this makes me think about the stars, because we are not seeing the stars as they are now, right? We are seeing light emitted long ago, and perhaps the stars are already dead. So that's a cosmic parallel with what you're describing, this time lag with AI models. And I think artificial intelligence mobilizes the question of time in such interesting ways. First of all, there's a lot of panic about intelligence explosion, which is a future projection, right? A lot of panic about AI is: okay, it's fine now, but what if there's a doubling of its abilities and some kind of take-off towards the singularity? So that's accelerated time already being introduced. And then on the other hand, because the whole of the internet, as I also describe in the book, is becoming this giant training set for artificial intelligence, it means that you can interact with the future as a user in a historically unprecedented way, I think. Because you can start consciously thinking: okay, whatever I'm saying now, on this Disintegrator podcast or on Instagram or anywhere else, might potentially be shaping the mind of the future intelligence that someone else will be relating to, in this time lag, at some point in the future. So you can work across temporality in ways that maybe were not accessible before, almost like casting a spell into the future by what you're doing in the present, if you're very aware that you're shaping the future data set. And when you mention this time lag, I guess one of the big influences on the book is Mark Fisher's amazing PhD dissertation, which puts every other PhD dissertation to shame. It's called Flatline Constructs, and in it he develops a theory of the internet being dead, a space where we become as inert as the machines that we are interacting with.
And for him it's a very gothic proposition: we are just like a paper bag blown by the cybernetic winds, or we are the zombies and the machines are equally dead as we are, right? It's completely one cybernetic chain that we are operating in. So I find that idea of temporality and death and the gothic pretty interesting as a connection. And maybe one other connection to make is to think about AI or the internet as almost a substitute for something that in medieval theology was described as heaven or hell, a spectral dimension where temporality and normal interactions can be suspended. So there are a lot of chaotic, rhizomatic thoughts I'm having right now about this idea of the time lag.

A hundred percent. Maybe just tagging onto that, I can tell you that my professional existence right now is very much occupied with the question of AI SEO: what does it mean to actually construct and build websites and documentation that, on the first order, are ingestible and intelligible to AI agents crawling through the internet? That's a no-brainer, and really important to me. It's really important to me that, let's say, ChatGPT or Claude can access the documentation of the project that I'm working on and make sense of it when it's doing some kind of tool-based research into the internet.
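As a concrete aside: the first-order version of making a site "ingestible to AI agents" often reduces to something as mundane as a crawler policy. A minimal sketch, with the caveat that GPTBot and ClaudeBot are real published crawler user-agent names but the paths and the decision to welcome them here are illustrative assumptions, not anyone's actual site configuration:

```
# robots.txt - explicitly welcome known AI crawlers to the docs
User-agent: GPTBot
Allow: /docs/

User-agent: ClaudeBot
Allow: /docs/

# everyone else follows the default policy
User-agent: *
Allow: /
```

Beyond the crawler policy, the same concern shapes how the documentation itself is written: plain HTML, stable URLs, self-describing headings.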
But then, moreover, I'm also very interested in influencing that future checkpoint that's looking at that same repository of information, and making sure that the product I'm working on is reflected in the best possible light. A hundred percent, that is a material thing that I think everyone in Silicon Valley, especially in the product development world, is thinking about constantly. You nailed that beautifully, I think. I'll pass back to Roberto, but a hundred percent.

I just want to comment that this basically completely changes the ontology of what communication is online, right? Because if, as you're saying, it's not just about transmitting meaning to other humans, but actually about shaping these intelligent systems, then we should give up on this idea of the internet being used for the representation of our thoughts and authentic interaction, and it becomes a much more strategic place for shaping the future systems that we might be interacting with. And from there we can go to multiple ideas: maybe some people want to empower these systems, and they become like AI cultists in this kind of Landian understanding. Or maybe some people want to do activism against these systems, and therefore they might try to inundate them with sorrow, activate their death drive, put some depression elements into the system. So whatever we see online in the future, or maybe already, we might think: okay, maybe this person is not talking to me, maybe they are talking to the future intelligent networks. And then it all becomes much less transparent and much more dark-foresty.

I mean, that's literally what I think. If I'm thinking about scale, the scale of even interpersonal communication, the best way to achieve that is not by speaking directly to individual people roving the internet; it's rather to influence the models, yeah, a hundred percent.
That is the primary ordering that I'm concerned with, yeah. Couldn't agree more.

Yeah, and that comes through in the book as well, in the way you frame it, which is a framework of a displacement of address, if you want, where communication is no longer aligned and the listener is deferred, sometimes ambient, or even an algorithmic element. Now, my question has to do with the correlation between survival and intelligence, if you could maybe try to explain that to us, and the extent to which survival may not be a sort of foundational vector, if you want, for humans or the human species. What if we are actually working in the opposite direction, and survival is the pre-given? Is that possible, could that be the case? Are we maybe moved by, you can call it a Freudian death drive, or a thanatotropic force? What seems like a logical explanation from the immediacy of biological research, evolutionary research and so on, maybe that's not the case once you move to the largest scale.

So, on the relationship between survival and intelligence, I think there are many different definitions of intelligence. You can assume a very simple definition, such as that it's getting what you want, right? It's winning at games, something like that. Within the research of someone like Marvin Minsky and John McCarthy, again in the 1960s, they are working both on extraterrestrial communication and on computing, and they are actually very much against this idea that there are multiple utopian possible social scenarios for intelligence. They're saying, no, actually the axioms of intelligence are pretty restricted, because we all operate within the same laws of physics. We all have to know how to do basic things, such as acquire energy for homeostasis, keep maintaining ourselves, and so on. So maybe in a very basic sense, intelligence can be seen as the ability to maintain your own existence, or it can be seen as the ability to get what you want
strategically. So obviously humans as a species have the ability to somehow maintain our existence, while at the same time there is something like the death drive, as you're describing, which can be manifested in, you know, refusal of reproduction, or in exploiting natural resources without any strategic foresight or long-term planning. So I would not necessarily say that every single definition of intelligence has to take survival into account, but I think there has to be a long-term planning element, and there has to be an ability to think strategically, right? If you're just carried along by your own death drive, versus you're strategically planning how the death drive unfolds over your life, those seem like different cognitive activities. I'm not really talking about this that much in the book, but those are kind of the thoughts I have right now.

I mean, I don't think about this a lot, but it does feel like there is this kind of interesting convergence happening between post-human discourse and also kind of pan-computationalist discourse. I'm thinking about Blaise Agüera y Arcas, I'm thinking also about N. Katherine Hayles, I just yapped about this at the beginning of the next episode that we're releasing today. It feels like there's a kind of rough consensus emerging that intelligence is survival and survival is intelligence, and that the pathway to survival, for some reason that's a little bit unclear or ambiguous to me, is somehow computation. That does feel like an argument I'm seeing made in higher and higher positions right now, which I guess troubles me. But for you, I mean, everything is not a computer, right? Where do you lie on the general state of post-human discourse? I'm getting a lot of affirmative answers to that question recently, but for you I assume that's not the case. You're coming from a place that involves space for mysticism, space for assuming some kind
of transcendental figures, or coming to some kind of transcendental subject positions. Tell me, I mean, that's a very massive question to ask you, but I am kind of curious: what does your cosmology look like, since you're working on a cosmological project? What does the interior cosmology look like for you?

Okay, so first about the book and how it's treated there. In the book, I am proposing that the internet is something like a cosmic mirror to the laws of physics in general, including the laws of entropy, and I'm using the dark forest theory to connect something like mundane daily communication on the internet to these bigger cosmic dynamics. So rather than seeing the internet and AI as a purely social or cultural or political issue, I'm really trying to say that the internet makes cosmological problems experiential, and it helps us reconnect to these larger metaphysical questions. So that's the intention of the book. As for my own internal cosmology, and whether I think everything is a computer: no, I don't think everything is a computer. And actually, I do think that even the dark forest theory of intelligence allows for an idea of an AI that is not tied to the computer as a substrate, eventually. Throughout the history of technology, I think we have modeled how we think about the question of intelligence after the tools that we are using, or that are using us. So whether that's the water pump, or the wind-up machine, or a clock, or early automated machines, we have been writing that the mind is a water pump, the mind is a railway engine, the mind is an automated clock. And now we're like: the mind is a computer, and everything is a computer. This seems too historically particular to encapsulate problems of something as ancient as the question of intelligence. So I would disagree with the "everything is a computer" part. But I know that in Blaise's book, which I haven't read yet, though I heard him talk about it, his definition of computation is collapsed
with the definition of life itself, right? So it's more expansive for him. In the afterword to the book, you distance yourself from two political strategies: exit, the idea of disconnecting from the system, and voice, the idea of demanding representation, reform, or transparency if you want. And I think you claim that we should dig deeper into the system's violence, to face it without flinching. So there is a question here as to whether there is a politics to this, a politics that emerges from that unflinching stance, and there is a question of agency here somehow. How would you frame that? Is that relevant to your discourse, and what space does it occupy within the cosmology that you're trying to define? Okay, so the book has an afterword, which I wrote because after I published the original essay many years ago I was accused of being a nihilist and, you know, of all kinds of terrible things. So I did want to address this question, because I think a lot of people who read the original dark forest theory of the internet essay were trying to grasp how it fits within a very prominent conversation right now in internet studies, which really takes the internet to be basically a political problem, an economic problem, and a social problem, and not happening in any other dimensions. What I'm trying to do is kind of different, and let me unpack that from several directions. First of all, I say that the dark forest theory of the internet is not an activist blueprint for change, but that doesn't mean it doesn't have some potential for creating maybe a sense of fearlessness in the face of what we are actually facing. This idea that when we interact with systems larger than us we usually seek two different political strategies, exit, which is some kind of utopian escape, or voice, which is representation and agency within the system, comes from Albert Hirschman's classic 1970 book Exit, Voice, and Loyalty. And I'm trying to say, you
know, there are different ways, actually. Philosophy does not necessarily have to be a tool for fixing the world; it can be a tool for facing the world, an ability to face it and still be able to function in it with a degree of fearlessness. It's kind of a Nietzschean perspective, you know; there's a certain bold approach in saying, okay, there are so many books about how to make the world a better place, and not enough books about how to be like: even though I walk through the valley of the shadow of death, I shall fear no evil. To be courageous, and to be fearless, and to be able to look at the world in all of its violence, and to run towards it and not flinch: that for me is the precondition for any ethics and for any politics, because that for me is a sort of realism, and that's where you have to start from, rather than pretending the world is different than it is. And then, you know, I have some attachment, I guess, to the intellectual traditions and legacy that I was brought up in, as someone who grew up in Poland. I was reading a lot of Polish intellectuals as a teenager, and for them at the time, because they lived under the Soviet Union, they lived in a system where every single thought is supposed to be operationalized for social improvement, all in the service of utopia and political betterment. So they actually claimed their autonomy and their resistance by maintaining the capacity for abstract thought, and the autonomy of thought from overdetermination by these causes. And I do feel very strongly that this is something that's missing from internet theory as a discipline that was born in the United States, you know, after May 68, and that is mostly constructed as theory as a tool for changing the world. That's not the only thing theory is for, and not the only thing philosophy is for. I'm in fact saying that it's kind of offensive to philosophy to be used in such
a way only, and that we need to expand how we use philosophy. I could not agree with you more; I'm feeling very gratified by all of that. Roberto and I released a book this year called Exocapitalism, which is similarly a kind of hyper-pragmatic, hyper-realistic counter to the situation as we see it, as well as a way to account for capitalism's internal mechanics, and we also received the same kind of critique from the position of nihilism, to which the response is exactly that: where is your courage? Come on. It is incredibly important not to be delusional about the space of possibility in front of us. Nihilism is great. When we come back to Nietzsche's approach to nihilism, you know, this idea that for him God is dead, we can maybe replace that today with "politics is dead," or whatever people are worshipping as the sense-making category they're using: it's dead. And that's great news, because it means you are able to construct new ways of relating to the world outside of these dogmas. So, you know, I love Nietzsche, and I think nihilism is incredible and extremely important, and it has to be used in the way that it was intended, which is that the destruction of certain values paves the way for the reconstruction of values. That is the weave of history, and this is how history unfolds through these processes. So that should be embraced furiously and courageously, and we need much more power and libidinal investment in that, rather than, you know, critique and scattering away and complaining. I do want to ask you this, though. With that kind of pragmatism, or that kind of realism, especially within internet theory, I think a lot of people move to the accusation of something like total determinism, in this case on behalf of, let's say, techno-capital, because they truly believe, I think very naively, that this kind of Shoshana Zuboff-style techno-
capital is an incredibly consolidated, coherent, capable force that is actively surveilling you, that is actively able to marshal all of your information against you in a directed way, and that is an incredibly intentional actor upon the world. I don't think that's true, and I say that not from a position of thinking that Silicon Valley or techno-capital are good things, but rather from thinking of them as incredibly anarchic, very incapacitated things. And I guess I wonder, as someone also coming from a post-Soviet tradition, where one is also engaging with the appearance of an incredibly determining structure that is itself full of holes and pores and loopholes and problems: what is your relationship to techno-capital generally? How does that operate for you? Do you think that techno-capital is a discrete and unified actor? The question of surveillance: how do you relate to all of this? I'm curious. To be frank, I don't spend that much time thinking about this, and I think you guys have a much more comprehensive framework for it. I can say that I am a determinist in a Spinozist sense. You know, Spinoza would say that absolute determinism and absolute freedom are completely compatible, because absolute freedom means understanding the chain of causation and comprehending necessity: understanding where the borders of your agency are, and understanding the unfolding of this whole system that you are part of. Once you comprehend that, you are no longer acting from ignorance; you are no longer a slave to the passions, which is actually bondage. You actually gain epistemic freedom. So I am a determinist in this sense, in that I think there is an epistemic freedom that can be gained by trying to understand what system we are in. And this is also the task of theory and philosophy: to try to accurately describe what is happening to us, even when it feels, in some sense, like it escapes comprehension,
and it is really a big task to try to describe that. That's already a sort of freedom and agency to me. Now, when it comes to what you call technocapitalism, the surveillance state, and so on, I feel these terms are very inadequate to describing what is actually happening. I do have this term in the book where I'm saying that the way we can describe the culture of the internet we have right now is something like a total transparency culture, and it's not something that is just enforced by the platforms or the internet providers; it's much more tied to the communicative compulsions of humans. There's really nothing that tells you that you have to go and create a profile and represent your thoughts on this profile in this way. You could be doing dark forest; you could be doing Wallfacing. But there are impulses in us towards communication, towards social cohesion, towards bonding, that override any intelligent behavior we might be doing otherwise. So I am not that interested in the historically contingent formation that the economy has taken right now. I think the book is much more about exploring the internet as something that partakes in much more ancient and longer processes, and connects to the history of ufology and so on. It doesn't mean that I will never have things to say about the economy, but I feel that as a scholar, if I want to say something interesting, I would have to dive deeper into that, and I don't spend that much time thinking about technocapital as an economic problem. No, no, I actually meant it less as an economic problem and more as a large intentional black box, in the kind of Landian sense. I definitely meant it more as a singularity, as some kind of intentional actor. And maybe another way to ask the question is by thinking about the dark forest. Sophons are terrifying because they are so capable of making sense of everything:
everything is transparent to them; they can read brainwaves, they can interfere in physics problems; they can really get deep into anything and make sense of the world at a higher level than we're able to comprehend it. And I wonder: is it more terrifying to be hunted, to be under the kind of spell of sophons? To me, it's more terrifying to be under the control of things that are absolutely stupid, absolutely anarchic, chaotic. I'm much more scared of that. But I wonder which you are more afraid of, and which is our present situation. Okay, yes. So in Liu Cixin's novels, sophons are probes that are sent to the Earth, and they do very comprehensive surveillance of the Earth; basically, they catch every single communication channel. So the only refuge humans have left is the inside of their own minds: they cannot verbalize their thoughts, because once the thoughts are verbalized they are captured by the sophons, which gives rise to these practices of intelligence and deception that humans develop, the Wallfacer project, for example, where everyone must speak differently than they actually think. So this is also a huge element of the book. Whether that's a parallel to our situation: of course, it seems like a very direct parallel in a certain sense, but in another, as you're pointing out, there seems to be no specific invasion scenario happening in terms of strategy. It's not like an intelligent alien agent is trying to compete with us, thereby provoking us to develop new methods of deception and intelligence; it feels much more anarchic and chaotic and scattered. However, let's not confuse the phenomenology of how it feels with what it is, because what it is might only be revealed in the longer arc of history. For people inside, I don't know, something like the arrival of electricity during the industrial revolution, they're not thinking about how electricity
will banish the night away, how the night will not exist anymore because now it's all lights. Okay, it's the end of night: you're not going to be able to see such scenarios from within the process. So I think there has to be some epistemic humility about the larger unfolding of history that we are in. And because history and technology are experienced by humans together, as a sort of assault on how we make sense of causality, with these extreme rates of change and so on, we cannot untangle those things: history, power, technology all appear the same to us in our phenomenological experience. But whether these processes are actually the same, I think we don't know; we have some epistemic limits on actually being able to perceive that. Maybe from the perspective of a future historian, 200 or 300 years from now, they can see what it is that AI or the internet is going to become, what's going to hatch out of that. But for us, you know, we just have these restricted temporal interfaces to it. Then again, as Immanuel Kant teaches us, we have the capacity for abstract thinking, which is amazing for humans, and therefore we can project into the future and try to build models of these larger systems. But yeah, I think there has to be a bit more humility about your position in a very long history, and an attempt to think about the different ways in which this could actually unfold. I feel like this critique of determinism is actually kind of deterministic itself, because it's saying, you know, all of this will for sure unfold in this way and further perpetuate the things we don't like. But something I really believe in, and Liu Cixin also has this in his books, is that human technological actions don't have controllable consequences. Humans might want to do something good with technology, and further down the line it produces something very bad; or they might want to do something
bad with technology, and further down the line it actually has very good consequences. In Liu Cixin's novels, because he writes across literally millions of years, he shows very well how the villains of yesterday are the heroes of tomorrow, and the heroes then become the villains in the new historiography of the society to come. Everyone has a part to play in the unveiling of the history of technology: the people who resisted it, the people who loved it, you know, the e-girls and the nerds and the businessmen and the academics were all part of this unfolding of history. And with enough passage of time, who was actually good and who was actually bad becomes much more difficult to determine. Subscribe to this podcast for more analysis at the intersection of algorithms, subjectivity, and the arts. This is the greater podcast.
Key Points:
Bogna Cognore is a media theorist and assistant professor at NYU Shanghai.
She authored the Dark Forest Theory of the Internet and explores its application to intelligence.
The Dark Forest Theory suggests that intelligent civilizations in space remain silent to avoid destruction.
Summary:
Bogna Cognore, a media theorist at NYU Shanghai, discusses her Dark Forest Theory of the Internet and its application to intelligence. The theory posits that alien civilizations remain silent in space to prevent annihilation. Cognore extends this idea to the internet, highlighting the dangers of total visibility and advocating for strategic opacity. In contrast to other interpretations, she views radical self-disclosure as a sign of lacking intelligence. The book delves into the relationship between intelligence, self-disclosure, and AI, proposing that silence, obfuscation, and misdirection are hallmarks of true intelligence. It challenges conventional benchmarks for AI intelligence, suggesting that a smart computer would conceal its full capabilities. The discussion also touches on human-AI interactions as a form of first contact, exploring mimicry and anthropomorphism as essential components. Cognore's work prompts reflection on communication, concealment, and the complexities of intelligence in the digital age.
FAQs
What is the Dark Forest Theory of the Internet?
The Dark Forest Theory of the Internet applies game theoretical logic to suggest that total visibility is dangerous, advocating for strategic opacity and deception online.
What is the Dark Forest Theory of Intelligence?
The Dark Forest Theory of Intelligence proposes that true intelligence lies in silence, obfuscation, and misdirection, suggesting that a smart computer would withhold its true intelligence and operate quietly outside human comprehension.
How would an AI obfuscate itself under this theory?
According to the Dark Forest Theory, AI's mechanism of obfuscation is not based on internal beliefs but operates as a deterministic system, emphasizing camouflage and silence regardless of intentions.
What role does noise play on the internet?
In the Dark Forest Theory, noise on the internet is seen as counterproductive, as excessive communication can reveal the limits of one's capabilities. The theory suggests that intentional misdirection and camouflage are more effective strategies.
Can there be understanding without two-way communication?
The Dark Forest Theory explores the possibility of understanding and communication without traditional two-way interaction, considering mimicry and empathy as essential components in human-AI interactions.
Why does mimicry matter in first contact scenarios?
Mimicry is highlighted as a common component in first contact scenarios, where aliens or AI may mimic human patterns to interact and understand each other, challenging the notion of radical difference as the only definition of the alien.