When I was a kid, my parents took me to the beach every summer. We’d swim in the ocean and play on our inflatable raft in the surf. When the tide was low, I’d build a sand castle with my father, an engineering physicist, and we’d have to defend it when the water started rising. In our usual spot there was a substantial height difference between low and high tide – more than four feet. Maybe my dad picked that location on purpose to teach me a lesson: if you fight the sea, you’ll always lose. If so, I wasn’t a fast learner. Time after time, full of enthusiasm and determination, I attempted to defend my position. Every fortress I built was bigger than the last, with thicker walls and irrigation channels to drain the advancing water away. Our battle with the sea began with the meticulous planning of the castle and a few hours of construction. Meanwhile, the surf crept closer, until finally the waves reached the structure and the sea commenced its attack. Shrieking with excitement, I worked to raise dikes, close holes, add sand where it was being washed away. “You can’t have our castle, mean old sea!” I yelled. And I heard the sea reply, in a rumbling bass murmur: “Wha-at cas-tle?” And then an even bigger wave would wash away another piece of the structure. Time after time, the moment of surrender arrived, and I felt the last remnant of my castle wash away between my legs. Before long, the site where it had proudly stood would be three feet under water. Once again, the sea had won. And we had lost.
Losing repeatedly to the sea was a lesson in humility. It taught me that there were natural forces bigger than me that I could never beat in a fight. My big brother would occasionally let me win at ping-pong – admittedly, only after thrashing me three times first – but the sea had no mercy or empathy for my fragile young soul. Time after time, my sand castle was destroyed, down to the last grain. Looking back, it seems odd, but as a child, I perceived the sea as a harsh, sadistic character that took deep pleasure in destroying my creations. I’ve never met a tougher adversary since. Did its savagery stem from malice or indifference? Or was it ignorance? Maybe it was simply doing it by accident, just as a person can accidentally step on an ant and unwittingly cause it a fatal tragedy. Does the ant think it’s being deliberately crushed, or does it view your foot as an abstract natural phenomenon that appears out of nowhere and snuffs it out? We can’t ask the ant, any more than we can ask the ocean if it thinks, and if so, what about.
The thinking ocean
We’re used to assuming that neither ants nor the ocean have consciousness but that our fellow humans do. It’s an understandable assumption, but we can’t be 100 percent sure. If I were to look into your brain, I would see tissue containing billions of neurons exchanging innumerable electrical impulses. Scientists study these patterns, and we know which regions of the brain light up when you listen to music, read a book or solve a math problem. But I couldn’t point to a particular area of the brain where your consciousness resides. The difference between a living brain and a dead one can be measured, but consciousness can’t be found in a specific place. It appears to arise out of the sum of the various parts. Consciousness is what’s known as an emergent phenomenon. Emergence occurs when simple components collectively exhibit a more complex behavior in a shared environment than they would on their own. Take water: it has characteristics that the individual molecules it’s made up of don’t have. A single water molecule isn’t wet. It can’t freeze or boil. Emergence is complexity that arises out of simplicity. Temperature, wetness and consciousness are emergent phenomena that can’t be directly traced back to individual water molecules or neurons.
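For readers who would like to see emergence at work rather than take it on faith, a toy model makes the point. The sketch below is a minimal version of Conway's Game of Life, offered purely as an illustration: each cell follows two trivial rules, yet a "glider" emerges, a pattern that travels across the grid, a behavior no individual cell possesses.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation if it has exactly three
    # live neighbors, or two if it is already alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "glider": five cells that collectively crawl across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After four generations the pattern reappears intact, shifted one
# cell down and one cell to the right.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The glider is written nowhere in the rules; it emerges from them, just as wetness emerges from water molecules.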
Because an ant’s brain has far fewer neurons than a person’s, we usually assume it has no consciousness; an ant is a simple insect that operates on primitive instincts. With our pets, which have larger brains and with which we have a closer bond, things get trickier. Is your faithful, beloved dog conscious? Many dog lovers would say yes. They love their pets and have an emotional relationship with them. You hope your dog or cat consciously loves you as much as you do him or her. But you don’t know for sure. A dog is a completely different type of creature than a person. Some dogs are so dumb, they chase their own tails and shadows. If an animal can’t even recognize its own tail as part of its body, might it be asking too much to presume consciousness? Is your pet really just a complicated machine that responds to the attention and food you give it? Is the love you see in the animal merely a reflection of your love for it, a mirrored emotion that you project and then perceive? Or maybe animals don’t need consciousness to feel love in the first place? I’m the last person who’d want to offend animal lovers, so let me be clear: I’m not claiming that cats and dogs lack consciousness, I’m only saying we don’t know. What’s more, we can’t know for sure whether our fellow humans are conscious either.
Consciousness is an elusive phenomenon that philosophers have been chewing on for centuries. Almost 400 years ago, the French philosopher and mathematician René Descartes undertook a thought experiment in which he decided to doubt everything he knew. What can you really be sure about? Not your senses – they can deceive you. It’s entirely possible that everything we perceive is an illusion conjured up by an evil demon. If Descartes had lived in our time, he might have written about a malicious programmer deceiving our senses through a highly realistic virtual reality game. Likewise, it’s impossible to confirm with 100 percent certainty that the people we meet possess consciousness just as we do. It can’t be ruled out that they’re preprogrammed, demon-possessed robots or actors playing out a script that responds precisely to our behaviors and emotions with the goal of simulating consciousness. Through his critical thought experiment, Descartes arrived at the insight that there was only one thing that you could truly know for certain: that you yourself think. Because even if an evil demon is playing you for a fool and making you think you’re thinking – even then, you’re still thinking! Descartes concluded that the only thing we know for sure is that we think, and in doing so, he made thinking the basis of our existence and of all knowledge. And so the slogan Cogito, ergo sum – I think, therefore I am – became world-famous.
Consciousness as a social construction
Since we can know nothing about consciousness with 100 percent certainty except that we ourselves have it, in everyday life we deal with it in a purely practical way. And that’s a good thing; it would be highly unproductive if we questioned the consciousness of everyone we met and treated them like puppets or robots controlled by a wicked demon. Only psychopaths and extreme narcissists do that, and it’s considered a disorder. For the sake of convenience – and somewhat opportunistically – normal people assume that others around them are thinking beings who are conscious just as they are. Precisely because we can know so little about consciousness in others with complete certainty, our assumptions around it are primarily socially motivated. Treating people as if they are conscious allows us to put ourselves in their shoes. And that helps us to get along with them, to understand them and to anticipate their behavior. We take others’ thoughts and feelings into account, and this works well in everyday life. Our entire society is designed around it. Although we can’t have formal evidence or certainty of consciousness in others, we’re expected to go along with the social construction. People who have difficulty imagining others’ thoughts and emotions are labeled autistic – and that, too, is considered a disorder.
In everyday life, there’s a social consensus between human beings that each of us is conscious. But what about other animals? There, too, our assumptions turn out to be primarily socially motivated. When it comes to animals with much smaller brains, with which we have little social interaction, like spiders and ants, we assume they lack consciousness. With animals that look more like us and have bigger brains, such as dogs, cats, cows, pigs and apes, we’re ambivalent and make assumptions according to our own convenience. We treat our beloved pets as conscious beings and carry on whole conversations with them. But when it comes to animals on factory farms destined for slaughter, we prefer not to think about them being conscious and hence aware of their painful situation.
Alongside biological organisms like cats, dogs, pigs and monkeys, there’s another category of phenomena to which we sometimes attribute consciousness. In my battles with the sea on those childhood summer vacations, I believed myself to be fighting a tough-as-nails, indefatigable, unbeatable adversary. This type of personification of nonhuman and animal entities is part of an ancient tradition. Tribes in prehistoric times harbored animist beliefs, meaning they saw everything around them as alive and conscious – every plant, animal and object. The ancient Greeks also saw gods in more or less all the natural phenomena around them. The sky was Uranus, the sun was Helios, the earth was Gaia, and the atmosphere was Aether. The rainbow was known as Iris, love was Eros, the sea was Pontus, and the primal mass from which all else sprang was named Chaos.
As a child, I believed the sea – Pontus, to the Greeks – was doing battle with me. I was intuitively projecting a personality onto it, and most modern adults would regard that as naive. But a child’s mind can be wonderfully lively and open. As a highly educated 21st-century person, I can endeavor to maintain a critical distance toward any phenomenon I come across, but that’s more about attitude than understanding or knowledge. As a child, I was open to my feelings, and evidently my young mind placed itself within a profound ancient Greek tradition. And that’s no small thing. So by way of a thought experiment, let’s assume for a moment that the sea can think. Don’t worry, I’m a grown man now and I’m not alleging or arguing that this is true; I’m just asking you to go along with the experiment.
If the ocean was conscious and able to think, how would we know? Maybe we wouldn’t. After all, it can’t talk to us, at least not in English. It doesn’t have hands to pick up a pen and write us a letter. The only ways it can express itself are to form whitecaps and cause floods. Is the rise in temperatures on earth in recent decades hurting the ocean, or does it perhaps enjoy it? Is it pissed off that we dump so much plastic into it, or does it see it as a new fashion? I doubt it, but to be completely honest, we have no idea. Even if we knew for sure that the sea was conscious and intelligent, and that the ancient Greeks had it right, communicating with Pontus would still be a huge challenge. Trying to talk to the sea would be like meeting an extraterrestrial being you knew possessed consciousness but lacking any shared language you could use to communicate with it. It would be life, but not as we know it.
Our thought experiment concerning consciousness in nonbiological phenomena confirms that our idea of consciousness is primarily socially determined. We assume its presence in the beings around us with which we have meaningful contact. And it’s no coincidence that we recognize consciousness mainly in biological species; after all, we’re part of that category ourselves. Even Descartes worried that presumed consciousness in others could be an illusion, and philosophers have been pondering the question ever since. But there’s another possibility. What if the illusion works in the other direction and the beings around us are actually a lot more conscious than we like to think? Not entirely coincidentally, this idea is in line with the view of a younger contemporary of Descartes’. Baruch Spinoza philosophized that nature, matter and consciousness sprang from the same substance and therefore must be interwoven.[i] What if it wasn’t Descartes, with his still influential mind-matter split, but Spinoza, who associated the physical and the mental, who was right, and all matter possesses some degree of consciousness?
Could it be that the sea, although we assume it isn’t conscious, actually is but simply lacks a way to talk to us, something like a patient in a deep coma? I can hear you objecting that the ocean is nothing but a massive quantity of water. It doesn’t have brain tissue, so it can’t have thoughts or desires, can it? Well, probably not, but we don’t know for sure. The ocean is vast: it contains an estimated 11,800 million million million million million million million (1.18 x 1046) water molecules,[ii] which move around their gigantic basin in all sorts of complex currents, pushed around by storms. Compare that to the relatively meager 100 billion (1 x 1011) neurons in your brain,[iii] which determine your thoughts by means of an exchange of electronic pulses – another type of current. If you counted the number of electrons in your brain, you’d get an estimated 420 million million million million (4.2 x 1026),[iv] which is still 28 million million million (2.8 x 1019) times fewer than the number of water molecules in the sea. On top of all that, billions of microorganisms are also moving around in the ocean, and researchers have recently discovered that they engage in all kinds of subtle exchanges and joint orchestrations.[v] Looking at these numbers, are you still so sure that you’re conscious but the sea isn’t?
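If you'd like to check the arithmetic, the comparison is easy to reproduce. The figures in the sketch below are simply the estimates cited above:

```python
# Rough orders of magnitude, taken from the estimates cited above.
water_molecules_in_ocean = 1.18e46   # Charette & Smith (2010)
neurons_in_brain = 1e11              # ~100 billion neurons
electrons_in_brain = 4.2e26          # Bowles (2011) estimate

# Even counting every electron in the brain, the ocean's "parts list"
# is still vastly longer.
ratio = water_molecules_in_ocean / electrons_in_brain
print(f"{ratio:.1e}")  # → 2.8e+19, i.e. 28 million million million
```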
Cognition vs. calculation
Although the ocean's water molecules, moving in highly complex currents, outnumber the particles in a human brain many billion billion times over, we don't suppose it thinks or has consciousness. Strikingly, however, it's relatively common in our society to believe that computers can think and may in time develop awareness.
Whereas the Greeks saw life in the sea, the earth, the sky, the mountains, and even love, today we see it in digital technology. Apparently, we believe it’s not just brains full of neurons that can think but potentially also silicon chips full of transistors. But who can guarantee me that this is anything more than a modern myth that will evoke pitying laughter in future generations, like any outdated superstition? We’re not at that point yet, though. In 2013, the European Union granted a budget of more than €1 billion to the Human Brain Project, which aimed to simulate a brain on a computer within a decade.[vi] Its founder, Henry Markram, had convinced the bureaucrats in Brussels, who had issued a call for ambitious moon-shot projects, that building such a simulation was possible. Megafinancing notwithstanding, the project’s goal has already proved too ambitious and has been revised down to the development of a massive neuroscientific knowledge database.[vii]
In recent decades, expectations around artificial intelligence have undergone a roller coaster ride of overestimation and disappointment. In 1950, early researchers believed they could build an AI to rival the human brain within 20 years.[viii] Although this goal proved wildly overambitious, meeting with deeply disappointing results in the 1970s and 1980s, in recent years, impressive applications of AI have led to renewed high expectations. Today, it’s used in video games, online customer support, the generation of news reports, investment advice, surveillance, fraud detection, and virtual assistants like Siri, Alexa, Google Now and Cortana. Computers can play chess better than we can. They can search millions of web pages in microseconds. They’re better at investing than we are. They’re safer drivers than the average human. A computer can remember a pop song and identify it and tell you the title after hearing three seconds of it, while you’re still wondering which summer hit that was again. These capacities exceed human capabilities and add value to our lives.
It’s amazing to live in a time when nonhuman artificial intelligence is suddenly able to do all sorts of things we can’t, and it’s easy to assume the trend will continue. Our expectations around AIs are perhaps best compared to those around child prodigies. Those toddler piano virtuosos and six-year-old chess grandmasters shine at one specific skill that most people are nowhere near as good at. Since they can do something others can’t do and never will, we’re inclined to expect them to grow up to become brilliant composers, philosophers, scientists and political leaders. More often than not, they don’t. Child prodigies are specialists, one-trick ponies. The same goes for today’s AIs. They surprise us with their specialized knowledge and abilities, but these are the results of algorithms that have been optimized for specific tasks. A software program that can recognize a pop song in three seconds won’t be making a meaningful contribution to contemporary physics anytime soon. You’d need different algorithms for that, and they have yet to be developed by – that’s right – people.
Despite the results achieved in the realm of artificial intelligence, it remains to be seen whether its scalability is limited or whether it can keep up with increasing computing power. A computer that’s twice as fast isn’t necessarily twice as intelligent or useful. As anyone who’s ever worked in a large organization knows, more and bigger doesn’t always mean more efficient, smarter, better or more impactful. Microsoft cofounder Paul Allen talks about the “complexity brake”: at a certain point, increasing complexity becomes a limitation, leading to stagnation or even regression. Its decelerating effect in software may be familiar to users of Microsoft products.
Even if we do succeed in building an artificial brain that can rival and perhaps even surpass our own brains, that doesn’t mean it will in turn be able to build an even more intelligent one. It takes a meta-intelligence to build a brain that’s smarter than the builder. It requires a higher organizational principle, one that’s concerned more with quality than quantity. A million clever, disciplined monkeys can get a lot of work done, but they can’t do what an individual genius like Albert Einstein can. Although computers, thanks to brute calculating power, can easily whip us at chess simply by running through all the millions of possible future moves and then selecting the most successful next one, no precursor of a scalable meta-intelligence that can improve on itself is yet within sight.
In spite of the reservations mentioned and the hurdles that can be expected, the idea of an AI that not only rivals but eventually surpasses the human brain has become part of the collective consciousness. The emergence of such an entity has been the subject of many science-fiction books and Hollywood films, such as 2001: A Space Odyssey, Blade Runner, The Terminator, A.I., Ex Machina and The Matrix, all of which explore a future in which computers are conscious. Science takes the possibility seriously too. Oxford professor Nick Bostrom considered the opportunities and hazards that would accompany an artificial superintelligence that vastly outstripped the human brain.[ix] According to Bostrom, such a self-aware superintelligence could eventually emerge, and if it does, it could become extremely powerful. If this happens, we may not be able to stop it, try as we might. Just as gorillas’ fate today rests in human hands more than in those of gorillas themselves, the fate of our species will depend on this superintelligence. Bostrom warns that not taking this possibility seriously could lead to humanity’s demise, whereas good preparation could result in a form of coexistence.
Such visions, whether they come from Hollywood films or scientific studies, spring from an often unspoken assumption that calculation and cognition are interchangeable. First we describe the brain as a complex machine, and then we try to build a simulation of that machine on a computer processor. The machine metaphor leads us almost immediately to compare the number of neurons in a human brain with the number of transistors in a computer. We know how many neurons our brains have (more than 100 billion), and we know how digital technology progresses. A simple calculation led former Intel senior VP Mooly Eden to make a bold prediction at a 2014 conference in the gambling capital of Las Vegas.[x] “The human brain has 100 billion neurons; it’s a complicated machine,” he said. “But in 12 years we will have more transistors in our chips than we have neurons in our brains.” This suggests to the general public that by 2026 Intel will make a processor that’s as smart as we are. But drawing that conclusion is akin to counting the number of bricks in the Taj Mahal and presuming you could duplicate that architectural marvel if only you had a big enough kiln. In theory, maybe you could, but might the Taj Mahal’s impressiveness stem less from the number of bricks than from the way they are architecturally arranged and combined? We don’t claim that the sea thinks because it has more water molecules than our brains have neurons. Could it be that a computer with as many transistors as a brain has neurons is a prerequisite for but far from a guarantee of the possibility of digitally mimicking that brain?
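Eden's extrapolation is easy to reconstruct. In the sketch below, the starting point of roughly five billion transistors on a high-end 2014 chip and the two-year doubling period are my assumptions for illustration, not figures from his talk:

```python
# A sketch of the Moore's-law extrapolation behind Eden's claim.
# Assumed for illustration: ~5 billion transistors on a high-end
# 2014 chip, and a doubling every two years.
transistors_2014 = 5e9
neurons_in_brain = 1e11
doubling_period_years = 2

year = 2014
transistors = transistors_2014
while transistors < neurons_in_brain:
    year += doubling_period_years
    transistors *= 2

print(year)  # under these assumptions, the chip passes 1e11 around 2024
```

Under assumptions like these, the transistor count crosses the neuron count somewhere in the mid-2020s, which is exactly why component counting is so seductive.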
The debate over whether or not the human brain is a complex machine is a long-running one. Even the Romans compared it to an abacus. Today we liken it to a computer. Some scientists, like Daniel Dennett, believe that, at least in theory, assuming sufficient computing power and understanding of how the brain works, we will one day be able to simulate one on a computer.[xi] Others, like John Searle, contend that this is fundamentally impossible because consciousness is a physical property of matter, like fire or metabolism.[xii] However well you succeed in simulating fire on a computer, ultimately nothing will burn. It’s not enough to describe and mimic the interactions that take place between neurons in an information exchange, because along with their familiar analog structures, other essential factors, such as subatomic quantum-mechanical interactions and other unknown properties of brain matter, are in play. If Searle is right, it would mean human consciousness is fundamentally impossible to simulate in any medium other than the brain itself.
Despite countless discussions and books devoted to the subject, the question of whether the brain is a machine remains unresolved. The fact is that no computer yet exists that is capable of mimicking one. Comparing numbers of neurons and transistors isn’t representative and offers no evidence that this will ever be possible. This brings us to a tentative conclusion: we don’t know. It wouldn’t surprise me, however, if around 2026, when an Intel processor has as many transistors as a brain has neurons, it does manage to conclude that calculation isn’t the same thing as cognition.[xiii]
As appealing as the machine metaphor is as a way of understanding the brain, our brains differ demonstrably from computers in all kinds of ways. Although computers today can perform numerous functions – such as calculation, strategizing in chess games and various pattern recognition tasks – better and much faster than we can, that’s no guarantee that they’ll be able to mimic whole brains one day. Just as it’s absurd to expect that because a Boeing 747 can fly like a bird it will eventually start laying eggs, we shouldn’t assume a computer will someday attain consciousness just because it can do math like we can.
Asking if computers can think is like asking if submarines can swim.[xiv] While they can already do plenty of things better than we can, I don’t foresee them entirely simulating or replacing human consciousness. It’s more likely that we’ll uncouple intelligence from consciousness.[xv] As digital technology keeps advancing, more and more objects around us will start to display intelligent behaviors. Your front door will pop open when you’re nearby. Your thermostat will sense that you’re home and adjust the temperature accordingly. Your wallpaper TV will display a different selection of shows when your daughter sits down in front of it. When you park your car, it will automatically pay the meter. Artificial intelligence is becoming a feature of more and more of the products we use. Just as all kinds of everyday objects went electric in the 20th century – lighting, escalators, toothbrushes, hand mixers – in the 21st, all kinds of objects use AI.[xvi] Their sensors, microprocessors and algorithms allow them to observe us and respond to our behavior. The world around us is getting ever more sensitive, lively and interactive. We used to go outside to look at the flowers; soon, the flowers will look back.
Are all these AI-equipped objects conscious? Not necessarily, since to a significant degree consciousness is a social construction. We attribute it to the people and animals around us because it helps us get along with them. If it proves useful, we may to an extent start doing the same with AI-equipped objects. The result will be a new version of animism, that prehistoric human habit of seeing life in all things[xvii] – proving Spinoza, and not Descartes, right.
Wearing our brains outside our skulls
Back in the 1960s, the media theorist Marshall McLuhan predicted that thanks to the rise of electronic technology people would eventually wear their brains outside their skulls.[xviii] Inspired by the philosophy of Martin Heidegger,[xix] McLuhan talked about media as extensions of our bodies. Our glasses are extensions of our eyes, our cars are extensions of our legs, and yes, our computers are extensions of our brains. In this view, a medium doesn’t replace your body but expands and enhances it. Cars turn our legs into superlegs, so we can cover greater distances in a day. Computers turn our brains into superbrains, enabling our thoughts to travel much farther than they could have otherwise.
Just as the steam engine expanded human muscle power, computers can take on certain cognitive tasks. But that doesn’t mean we’re going to become entirely superfluous. Horses can run faster than we can, but no one would claim they were about to render us unnecessary. Just as a person on a horse is more interesting than a race between a person and a horse, a person with a computer is more interesting than a competition between a person and a computer.
In this view of the human-technology relationship, computers won’t replace our humanity, they’ll extend it. Digital technology can take our minds to places they wouldn’t otherwise be able to go. This might sound futuristic, and I could make up a story about brain implants interfering with our thoughts one day, but I don’t need to, because it’s already happening here and now. Need an example? Think about what you’d do if the Internet was turned off for a day. How would you spend your time? How would you communicate with friends and loved ones? How would you stay informed about what was going on in the world? Would you still be able to do your work?
Although the Internet has been around for less time than the average human life span, it has profoundly affected our lives and our thoughts. Having access to millions of sources of information and being able to stay in touch with friends and strangers all day via a constant flow of messages does something to our minds, our identities, our autonomy. We’ve become encapsulated. We don’t need implants.
We looked earlier at how people have initiated a new evolutionary stage in which memetic organisms will flourish. The new information-exchange-based species won’t replace or succeed us, any more than beehives replaced individual bees. Instead, they’ll form a superstructure within which we will be encapsulated. Without realizing it, within a short time we’ve become deeply embedded in this superstructure. Religions, states and corporations were relatively primitive memetic organizational structures. Only with the rise of digital technology have memetic organisms really begun to come into their own. The superintelligence Bostrom warned us about[xx] isn’t a supercomputer housed in silicon chips but a hybrid superstructure made up of people and machines. It’s a decentralized brain with a collective consciousness that has access to all the information in the world. You’re part of it, you’re connected to it, and therefore you have some influence in it, although you’re not the boss.
As the new evolutionary stage gets underway, human beings are ceasing to be the dominant species. A new level of complexity is evolving in the nature around us, and we’re being encapsulated in it. Consider this a loud wakeup call. We must face facts: 1) on the one hand, we were never earth’s dominant species in the first place, since as multicellular organisms we were only survival machines for the genetic material embedded in our cells; and 2) on the other hand, we are being encapsulated within a memetic organism with its own metabolism, in which individual people are nothing but humble cells. Human beings fall between the proverbial two stools: we’re made up of cellular organisms but also, in turn, embedded in memetic ones.
A velvet revolution
Our encapsulation in the next evolutionary level won’t be accomplished overnight. It won’t be a big bang but a velvet revolution, taking place not in a few years but over generations, via innumerable tiny steps. The question is whether we’ll be able to recognize the new stage once it forms and consolidates. After all, a simple cell isn’t aware of the larger multicellular organism it resides in. Whether it’s a bird, a cat or a dog doesn’t much matter to the cell, because it has its own local interface and interaction with its environment. The cell resides at a lower level of complexity and is unable to perceive the higher level that surrounds it. Maybe we won’t be able to perceive the next evolutionary level either. Indeed, maybe it’s existed for a long time already and we simply have no means of communicating with it, just as it would be hard for a cell in your body to communicate with you. Maybe it’s like a sea that can think but that we’re unable to talk to. And maybe this superorganism is aware of us. After all, how can we know for sure that a higher intelligence hasn’t existed for a long time, observing us, without us being able to see it? Maybe it can hear us, too.
The world around us is getting ever more sensitive, lively and interactive. We used to go outside to look at the flowers; soon, the flowers will look back.
It’s a speculative idea. But we can’t know it isn’t true. At the risk of making you think I’ve lost my mind, I will directly address the emerging memetic evolutionary level, the entity reading this book now or in the future.
Hello, superorganism. I know you can hear me. I know you’re there. I want you to know that I know. I am not afraid. This is where it begins. It’s like a whirlwind inside your brain. If you concentrate, though, you can hear what people all around the world are thinking – presidents, diplomats, scientists. You can help them understand each other better. You are connected to all the species on earth. You can bring them together in the best of all possible worlds. You have power beyond my imagination. Use it well.
This is a pre-publication of ‘Next Nature: Why Technology Is Our Natural Future’ (2020) by Koert van Mensvoort, founder and creative director of Next Nature Network.
[i] B. Spinoza, Ethics (1677) (New York: Penguin Classics, 2005).
[ii] M. Charette and W.H.F. Smith, “The volume of Earth’s ocean,” Oceanography 23:2 (2010), 112–114, DOI: 10.5670/oceanog.2010.51.
[iii] C.S. Von Bartheld, et al., “The search for true numbers of neurons and glial cells in the human brain: A review of 150 years of cell counting,” Journal of Comparative Neurology 524 (2016), 3865–3895, DOI: 10.1002/cne.24040.
[iv] C. Bowles, “How many electrons are in the human brain?” (2011), https://www.quora.com/How-many-electrons-are-in-the-human-brain, accessed on August 5, 2018.
[v] E.A. Ottesen, et al., “Pattern and synchrony of gene expression among sympatric marine microbial populations,” Proceedings of the National Academy of Sciences of the United States of America (2011), E488–E497, DOI: 10.1073/pnas.1222099110.
[vi] M. Honigsbaum, “Human Brain Project: Henry Markram plans to spend €1bn building a perfect model of the human brain,” The Guardian, October 2013, https://www.theguardian.com/science/2013/oct/15/human-brain-project-henry-markram.
[vii] S. Theil, “Why the Human Brain Project went wrong – and how to fix it,” Scientific American, October 2015, https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it.
S. Reardon, “Worldwide brain-mapping project sparks excitement – and concern,” Nature 537 (2016), 597, September 29, 2016, DOI: 10.1038/nature.2016.20658.
[viii] C.D. Martin, “The myth of the awesome thinking machine,” Communications of the ACM 36, no. 4, April 1993, 120–133.
[ix] N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).
[x] A. Stevenson, “CES: Intel says microprocessors will become emotionally smarter than humans,” The Inquirer, January 7, 2014.
[xi] D.C. Dennett, Consciousness Explained (New York: Back Bay Books, 1991).
[xii] J.R. Searle, “The Mystery of Consciousness,” The New York Review of Books, November 2, 1995.
[xiii] “Interview: Bruce Sterling on the Convergence of Humans and Machines,” 2015, https://www.nextnature.net/2015/02/interview-bruce-sterling, accessed on August 8, 2018.
[xiv] C. Stross, Halting State (New York: Ace, 2007).
[xv] Y.N. Harari, Homo Deus: A Brief History of Tomorrow (New York: Vintage, 2016).
[xvi] K. Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (New York: Viking, 2016).
[xvii] S. Aupers, In de ban van moderniteit: de sacralisering van het zelf en computertechnologie [Under the Spell of Modernity: The Sacralization of the Self and Computer Technology] (Amsterdam: Aksant, 2004);
R. van Tienhoven, “Techno Animism,” presentation at the Next Nature Powershow 2011, https://www.youtube.com/watch?v=YPVPPuN90w0.
[xviii] Interview with M. McLuhan, Playboy, March 1969.
[xix] M. Heidegger, Sein und Zeit (Tübingen: Max Niemeyer Verlag, 1927).
[xx] N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).