At a symposium sponsored by NASA in 1993, science fiction writer Vernor Vinge postulated that within thirty years, we would create a sentient artificial entity with superhuman intelligence. “Shortly after, the human era will be ended,” he concluded. This event, which he termed “the singularity,” would change the balance of power on this planet, as humans would no longer be the smartest beings in the world.
“When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale,” Vinge told his scientist-filled audience. It would be the end of an era, perhaps the longest any species has known, and Vinge pronounced it officially over: “Now… we are entering a regime as radically different from our human past as we humans are from the lower animals.”
In Our Final Invention: Artificial Intelligence and the End of the Human Era, journalist and documentarian James Barrat investigates the progress being made toward computers that mirror the intellectual capacities of human beings (Artificial General Intelligence, or AGI) and toward those that far surpass them (Artificial Superintelligence, or ASI). Barrat paints a picture of a process of technological development so thoroughly imbricated in the technological and economic architecture of human society that it is practically unavoidable.
Even more alarming is the blithe indifference evinced by AI promoters such as Ray Kurzweil, whose borrowing of Vinge’s “singularity” gives it a much more positive spin. Most of the technologists to whom Barrat spoke assumed either that super-intelligent AI would be (or could be programmed to be) completely benign, or that the question didn’t really matter, since there is nothing that can be done about it.
The dangers related to the drive toward AGI/ASI have occurred to at least some in technology-related fields. In 2000, Bill Joy, a co-founder of Sun Microsystems, published an article in Wired under the title “Why the Future Doesn’t Need Us.” Assessing the possible changes that advanced AI might make in human society, Joy wrote, “I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.”
Joy’s solution was for human beings to voluntarily find some way to step back from the precipice, to resist the temptation to develop superhuman intelligence, at least until we can find some plausible way of reining in the concomitant threats. But his suggestion mainly illustrates the gravity of the threat: given the current structure of human civilization, it is utterly unrealistic.
Others who have recognized the potential threat to humanity from predatory AI, such as the technology writer Eliezer Yudkowsky and the Machine Intelligence Research Institute (MIRI), with which he is associated, have proposed that we try to engineer AI in such a way as to make it more sympathetic to its human creators. One of the more alarming tidbits related by Barrat, with regard to his interactions with technologists, is the frequency with which Asimov’s Three Laws of Robotics are cited as a way that the dangers of AI to human beings could be mitigated.
But even a cursory reading of Asimov’s work (to say nothing of its extensions by other authors) illustrates the incredible difficulty of operationalizing those laws. Yudkowsky, who, to his credit, is not prone to such simplifications, coined the term “Friendly AI” (FAI) to conceptualize an approach that would try to imbue superintelligent AIs with some sort of human values or human sympathy (as opposed to Asimov’s rote algorithmic rules).
Of the many problems associated with this, two seem particularly daunting. The first is that one of the characteristics of ASIs is likely to be the capacity to learn, and thereby to evolve. So, even if we managed to build human sympathy into the original programming architecture, there is no guarantee that it would survive the subsequent developmental iterations that the ASI itself might generate. The second stems from anthropomorphism: humans, and those working in technology no less than the rest, have a tendency to anthropomorphize everything from their pets to their cars. This leads to a tendency, as Barrat describes it, to view ASI as somehow homologous with normal human consciousness (as if that never had any negative characteristics), and thus to believe that ASI will be something like us.
Unfortunately, there is every likelihood that an entity a thousand times more intelligent than we are would view us in the way that we view creatures whose intelligence we dwarf, like squirrels and insects. It has often been noted, both in discussion of colonialism and in speculations about possible contact with extraterrestrials, that in meetings between groups at widely differing levels of technological development, the group at the lower level tends to fare quite badly. There is no reason to believe that this is not an apt model for our interactions with an ASI. Generally speaking, the people working toward the singularity (Barrat designates them “Singularitarians”) don’t recognize that this is a problem.
The promoters of advanced AI and neural nets tend to take a two-pronged approach to justifying their work: AI has the potential to create a human utopia, and, whatever its outcome, there’s no stopping it. The question then becomes what the likely outcomes for human beings are. At the most general level, one might want to know exactly what it is that human beings are supposed to do if not productively employed. Marx’s analysis of class structure was based on the premise of an increasingly profound division between the owners of capital and the proletariat, comprising those who have only their labor power to sell on the market. The technological developments described above raise the prospect that the bottom will drop out of the market for human labor power altogether.
What will it mean to be human after this happens? Labor has always seemed to be both a source of, and a barrier to, human happiness. We seek leisure, but too much of it causes stagnation. Under capitalism, the need for human labor was, at the very least, a bargaining chip which those who didn’t possess the means of production could use to ensure their own survival. The utopias of the Singularitarians seem never to answer the question of what is to become of mankind when most of its members lose their raison d’etre.
Marx’s most extensive statement of what he envisioned in a post-revolutionary society runs as follows: “[I]n communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”
That hardly seemed practicable when Marx wrote it (and of course he never bothered to speculate further), but the parallel speculations from the right seem hardly more realistic. There is no way back to the world of small farming and artisan niche production, even if the Pollyanna-ish version of that world that populates the imaginary of the libertarian right ever existed. With no means of generating income, the vast majority of human beings would become superfluous, even dangerous, since their one unavoidable characteristic would be to tax the non-renewable resources of the planet.
Eric Drexler, who mostly limits himself to uncritical cheerleading for the possibilities of nanotechnology, did speculate at one point in the 1980s that governments might eventually decide to dispense with their citizenries. There is a whole range of other possibilities along these lines, all of which come down to the problem of what the mass of humanity will be “good for” when those in the top 0.1% of the income distribution no longer need their labor power at all.
But even these alarming scenarios pale before the prospect that some sort of ASI will eventually unilaterally decide that it has a better use for our atoms than we do. It appears markedly unlikely that the necessary energy will be put into the development of FAI before something much, much smarter than human beings arises in some laboratory somewhere. Those who see Asimov’s laws of robotics as the answer seem unaware of how those laws usually turned out in Asimov’s own stories.
The underlying problem is that both the drive toward automation and that toward AGI/ASI are being undertaken in the context of a set of political and economic institutions specifically designed to prevent any sort of common human decision-making about the future of the species. Despite the efforts of MIRI and related groups (who are mostly viewed by the Singularitarians as a bunch of mildly deranged tree-huggers), only a serious transformation in the structures of human governance has the slightest prospect of redirecting the future, and even that prospect is threadbare.
Capitalism creates conditions whereby human beings don’t have the opportunity to parse the risks that they face collectively. Rather, those risks are left up to the vicissitudes of the market and the logic of capital. Absent some sort of collective assertion of social goals, capitalism seems set to eat itself and to consume humanity in the bargain.