COEVOLUTION: ARTIFICIAL HUMANS
Thus far, the history of computing has followed a trajectory of ever greater integration and interaction between humans and machines. Fashioning our tools to fit us ever more closely—in line with millennia of human practice—
we have not previously considered developing tools that were unsuited to our anatomy or intellect, instead remaining guided wholly by the limits of our biology. But now the advent of AI may persuade at least some of us to contemplate a reverse mission: Now that our tools appear to outpace our capabilities—as AI already sometimes does—might we consider engineering ourselves so as to maximize the tools’ utility and thus ensure our continued participation in shared endeavors like those outlined in previous chapters?
Biological engineering efforts designed for tighter human fusion with machines are already underway. Starting with physical interconnects by means of chips in the human brain,1 they seek a faster, more efficient way to bridge biological and digital intelligence. Forging such links could augment our ability to communicate with machines, challenge them on their own terms, ensure that the knowledge gathered by AI is ultimately passed on to humans, and convince AI of the worth of humans as equal partners.
Indeed, not only could attempts to construct such “brain-computer interfaces” bolster humanity’s effort to integrate with machines, but neural engineering may be only an intermediate phase of transition toward actual symbiosis. Achieving true parity with AI would likely require steps that go beyond individual modification. For instance, a society might attempt to design a hereditary genetic line customized for amenability to collaboration with AI. Such new interconnections between biological and artificial intelligence could sidestep, or consign to the past, human inefficiencies in the absorption and transmission of knowledge.
But the dangers—ethical, physical, and psychological—of such a course may well outweigh the benefits. If we succeed in revising our biology (likely through the use of AI), humans may lose a baseline on which to ground our future thinking about possibilities or perils that we might confront as a species. But if we do not acquire such new capacities, we might put ourselves at a disadvantage in coexisting with our creation. As things look now, extreme self-redesign may not be necessary—and indeed we authors think it generally undesirable. But the choice between alternatives that seem fanciful now may soon need to be confronted as real.
Meanwhile, in trying to navigate our role when we will no longer be the only or even the principal actors on our planet, we might enlarge our thinking with a look to the history of biological coevolution itself. Charles Darwin wrote at length about the curious process by which species reciprocally affect each other’s evolution.2 Though he never used the word in his writing, Darwin was among the first to recognize that coevolution is a major force organizing life on Earth.
The genomes of interacting species are linked; they change in response to each other over time. The long, slender beaks of hummingbirds, for instance, and the long funnels of certain flowers have together grown to ever more extreme dimensions in response to each other’s needs. While religious leaders in Darwin’s day believed that such custom adaptations were proof of divine design, Darwin provided evidence of another explanation.
And coevolution may not be unique to earthly species. In astrophysics, one theory proposes that the growth of galaxies and their central black holes is itself a coevolutionary process, with black holes and galaxies developing in an interdependent way no different from that of hummingbirds and flowers.3 Moreover, in the sense that coevolution involves multiple parties designing new internal arrangements in response to each other, it is similarly to be found in the marriages of people, the platforms of political parties, and the relations of nations—as in, for example, the offensive and defensive evolutions that ultimately stabilized nuclear dynamics during the Cold War.
Perhaps coevolution is the rule, then, and stasis the exception? If so, it must be asked whether the lack of change thus far in the human species, despite the birth of AI, is itself a natural development. And if not, what should be our response? Should we pursue accelerated human progress at all costs, whether out of loyalty to the concept of evolution or out of apprehension of its alternative?
Some fear that, with the arrival of a technology with “superior” intelligence, we are facing our own extinction. What to do? If that possibility is nothing more than a logical side effect of coevolution running its course, should we rebel, or not? As the French philosopher Alain says, “It is the sea herself who fashions the boats, choosing those which function and destroying the others.”4 To survive in that case, we would have to learn, as in the past, how to build better boats. In this scenario, AI functions first as our main threat and then, ideally, as our partner.
If we take this approach, however, then in trying to mitigate the risks of one technology we would paradoxically be heightening the risks of another. Biologically—or, worse, genetically—something could go awry. Speciation could cause the human race to split into multiple lines, some vastly more powerful than others. While in some cases difference might be desirable—for example, in the creation of a group of humans biologically engineered for space—in other cases it could further entrench inequalities along existing fault lines within and among human societies.
Altering the genetic code of some humans so that they become superhuman carries with it other moral and evolutionary risks. If AI itself is responsible for the augmentation of human mental capacity, it could create in humanity a simultaneous biological and psychological reliance on “foreign” intelligence. It is not clear how, after intimate physical entwinement and intellectual commingling, humans could easily overcome that reliance so as to challenge or divorce ourselves from machines if needed. As has been the case with other technologies, adoption and integration can result in a dependence difficult to untangle.
Perhaps most concerning would be our collective ignorance: we may not even realize that we have merged. And if we do realize it, could ordinary humans even recognize or identify a defect—or a defection—in a human with machinelike abilities? Let us suppose that safety concerns could demonstrably be allayed; nevertheless, the mental shift attendant upon humanity’s self-redesign in service of an intimate partnership with or dependency on silicon-based tools would remain an extreme development. To quote Tolstoy again: “Without control over the direction, there is less regard for the destination.”5 Wherever technology takes us, that is where, willy-nilly, we would go. Or, as has been observed before, “A nation which does not shape events through its own sense of purpose eventually will be engulfed in events shaped by others.”6 Moreover, if we have modified humans so dramatically as to be unrecognizable, have we really saved humanity? To erase all our imperfections and palliate all our deficiencies might be to disregard the value of the human project. “Upgrading” ourselves biologically might backfire, becoming a greater limitation on ourselves.
Given the heavy risks, the pathway of evolving humans to suit AIs cannot be our current preference. We must seek an accessory or alternative way to thrive in the age of AI. If we are unwilling or unable to become more like them, we must, while we are able, find ways to make them more like us. Toward this end, we need to apprise ourselves more fully not only of the essential and evolving nature of AI but also of humanity’s own nature, and we must attempt to encode these understandings in our machines. If we are to entwine ourselves with these nonhuman beings and yet retain our independent humanity, these efforts are essential.