Robots, AI and the Existential Fear of Humanity
Will mankind be destroyed by its own creations?
In 1921, a science fiction play called Rossumovi Univerzální Roboti (Rossum's Universal Robots), written by Czech playwright Karel Čapek, had its world premiere in Hradec Králové.
The story is set on an island where a factory mass-produces humanoid robots, assembled from artificial flesh, blood, bones, and veins on an assembly line. At first, these robots have no thoughts or emotions of their own. One day, however, a scientist alters the formula for some of them, imbuing them with human feelings. The change proves catastrophic: the upgraded robots launch a rebellion aimed at the extinction of human beings.
This play not only coined the term “robot” but also, in Kathleen Richardson’s terms, captured the “annihilation anxiety” of the human race.1
The existential fear of humanity, unlike other fears, is shaped by science and technology. It is not only articulated through the theoretical frameworks of modern science but is also closely bound up with the development of science itself.
In premodern times, when witchcraft and religion dominated belief systems, our existential fear, if any, was attributed to supernatural powers. The Mesoamerican Aztecs believed that prehistoric human beings had been annihilated four times before the emergence of current humanity: first by jaguars, then by a hurricane, then by a fiery rain, and finally by a great deluge. Two people eventually survived the deluge and became the ancestors of present-day humanity.2
In a common Christian belief, the Deluge caused the near extinction of human beings, while the Second Coming is not so much the extinction of the human race as a rebirth in the Kingdom of God. Although the ancient legends of many other civilizations developed similar narratives about great floods, very few envisioned an end of the world yet to come. In the Buddhist worldview, with its six realms of rebirth and existence, the end of the world or the extinction of humanity makes little sense at all.
It is the advent of modern science that has propagated myriad forms of eschatological doctrine across the world. The birth of modern science in the West likely bears the imprints of Judeo-Christian eschatology, and social theories that originated in the West likely encompass variations or undertones of it as well. To name a few, Marxist eschatology invites us to reflect upon human beings situated in the teleological development of history, and Walter Benjamin’s “messianic materialism” seeks the messianic restitution of history, i.e., the full, revolutionary actualization of the past’s historically unrealized and oppressed potentialities.3 However, such doctrines rarely postulate that there might be an end of the world.
The existential fear of human extinction characteristically manifests itself in imaginary catastrophic scenarios. Visions of devastating disasters reinforce our values and beliefs in an extreme manner. The public have thus become increasingly alert to the boundaries between science and humanism.
To keep morally controversial explorations at bay, research ethics have gradually been established across scientific disciplines. Gruesome experiments, such as the two-headed dog experiments conducted by Soviet scientist Vladimir Demikhov in the 1950s, would provoke public indignation nowadays. Scientific research that threatens to disrupt human morality may even face litigation. The genome editing of human embryos led by Chinese scientist He Jiankui, who claimed to have created the first genetically edited human babies, was not only prohibited but also earned the scientist a three-year jail sentence. By contrast, Dolly, a female Finn Dorset sheep and the first clone of an adult mammal, was far less controversial.
From this point of view, the existential fear of humanity is also a concern about the potential loss of humanism and the likely disruption of morality. Even when the experimental subjects are nonhuman, the humanism extended to animals and plants still embodies an anthropocentric hierarchy of life, and any challenge to that hierarchy potentially threatens human existence.
“Annihilation anxieties are produced by an analytical position that rejects ontological separations, combined with radical anti-essentialism—when humans and nonhumans become comparable,” says British social anthropologist Kathleen Richardson, whose research is focused on computing ethics, particularly in the realm of robots and artificial intelligence (AI).4
Kathleen Richardson’s conception of existential fear is grounded in the human confrontation with nonhumans. The anxiety over the replacement of humans by robots in the play Rossumovi Univerzální Roboti was a reflection on “the violence of World War I (WWI) and the unprecedented destruction of human life mediated by machines,” wrote Richardson.5 This is understandable, since upgraded killing machines such as cannons, howitzers, Feldhaubitzes, mortars, field guns, and machine guns significantly increased the efficiency of killing during WWI. Against this backdrop, the anthropologist added, machines not only reduced humans to nothing but also erased the differences between humans and nonhumans.
Clearly, the human perception of nonhumans is far from unambiguous. Humans oppose the genome editing of human embryos yet find the cloning of a sheep acceptable, while the two-headed dog experiment easily sparks outrage. This illustrates that the existential fear is not merely a resistance to the dehumanization of humans but also a response to challenges posed to the anthropocentric order of things.
In Steven Spielberg’s film A.I. Artificial Intelligence (2001), the robot child David develops a profound attachment to human feelings. He not only aspires to become a real boy loved by his adoptive mother, Monica, but also takes the story of Pinocchio as a serious guide in his pursuit of a return to family love after Monica abandons him. Although the robot poses no threat to human society, this reflection on human feelings still unveils the values attached to an anthropocentric hierarchy and the anxieties associated with the potential loss of those values.
David’s story also envisions the ethical perplexities and insecurities that arise when humans encounter humanoid robots. Humans are not yet emotionally or morally prepared for the advent of humanoids: they are uncertain about their ethical relationship with them and have not yet assigned them a place in the anthropocentric hierarchy. Until humans establish a set of moral rules for coexisting with robots, this disruption of existing morality can itself trigger existential fear.
Sora, OpenAI's video generator, astonished the world this February by generating videos that resemble multiple universes parallel to ours. Some applauded it, while others were deeply concerned. As industrial robots continue to displace humans in manufacturing and service industries, the question of whether AI will replace human intelligence and cause mass unemployment is once again hotly debated.
Optimists believe AI will augment rather than replace human intelligence.6 What is actually taking place, in their view, is that humans with AI are replacing humans without AI.7 By contrast, pessimists contend that AI could lead to human extinction. Alarmist warnings include the weaponization of AI, AI-generated misinformation that destabilizes society and undermines collective decision-making, digital authoritarianism, and the enfeeblement of human intelligence. Among those who have voiced such fears are the heads of OpenAI and Google DeepMind as well as AI experts, including the so-called “godfathers of AI.”8