Humans and Our Artificial Clones

In his 2017 book Life 3.0, Max Tegmark wrote of the almost limitless achievements we could realize through AI, but also of the great perils we would face if and when AI-based machines reached a state of autonomy from human control. Hardly six years later, we are starting to freak out over the latest advances of AI in one single area: the production of human-like written language.

As you may have read, more than 1,000 tech leaders and researchers sent out an open letter warning of the profound risks that unchecked development of AI poses to humanity and calling for a moratorium so that we have time to contemplate the potential consequences.* In a very insightful op-ed piece in The New York Times, Yuval Harari and his co-authors laid out the risks of letting AI technologies supplant humans in the creation of language, culture, and civilization. In another online piece in the Times, the linguist Noam Chomsky and his co-authors ridiculed the idea that AI programs like ChatGPT and Bard are true substitutes for human language, and expressed apprehension at the thought that a mechanical synthesis of already-produced information, devoid of emotion and moral judgment, could be confused with human language. On the other side of the debate, there is no shortage of people who, full of excitement and anticipation, can’t wait to see what the future of AI will bring to humankind.

As I have written in other posts, innovation and technology have been both a boon and a bane for humanity. One thing we can say with certainty is that our innovative prowess has not lived up to its most hyped promises. Leisure from work is still a luxury, the pace of life has become more maddening, poverty and sickness remain pervasive, and the geometrically increasing complexity of human life consumes ever greater amounts of energy and natural resources, threatening the climate and the survival of other species. So how, then, do we approach the promise and threat of AI?

First, I think we are past the point of losing our autonomy to machines. Our lives are already so embedded in the world of machines that I am not sure we can call ourselves masters. Try to imagine life without the technologies with which we have surrounded ourselves, and I bet a shiver will run down everybody’s spine. We already live in a symbiotic world of humans and machines. They may be our creations, but we can’t do without them. That means we long ago surrendered our autonomy to our many Frankensteins.

Second, questions as to how AI will affect the economy, jobs, politics, and education, though important, are in my opinion second-order questions. The questions we ought to be asking as we evaluate the promises and risks of AI are these: Will AI be safe for the climate? Will AI be safe for the biodiversity of species? And most importantly, will AI be safe for the essence of human nature? Though a livable climate and a biodiverse ecosystem are extremely important, the most consequential question we need to contemplate and ultimately answer is this: what do we want our future as humans to be like?

My first concern here is the impact of the AI world on our evolution as a species. Living in an environment of humanoid robots and artificial brains will be unlike any environment we have encountered thus far. How will our cognitive and emotional makeup respond and adapt to it? Reason and emotions evolved to foster cooperation among humans as a means of improving our chances of survival. Will this cooperation, and sociality in general, erode when human beings start to rely on cooperation with AI creatures instead? Think, for example, of children raised by AI nannies. Do we have any scientific or otherwise reliable method of making predictions about these and other questions of similar relevance to our future as authentic humans rather than hybrids of humans and machines?

The voices of skepticism and alarm about the effects of AI suggest that we need to develop the tools to check and control advancements in AI that have the potential to put us at existential risk. To come up with a plan of response, we can look at how we coped with two other life-transforming developments: nuclear physics and genetic engineering.

Nuclear physics gave us the promise of plentiful and clean atomic energy but also the potential to destroy human life. In the 1960s and 1970s, the activism of the then still-young baby boomers of the world, along with the logic of safe containment, resulted in treaties among the nuclear powers that established limits on the development and quantity of nuclear weapons. And the proliferation of atomic energy plants has been checked in many countries through local public resistance.

Genetic engineering has also presented us with deep moral questions regarding the possibility of molding human life in ways we have deemed to belong to the exclusive realm of nature and God. Since 1971, researchers and academics working in genetics have initiated several self-imposed moratoria that halted further applications of genetic engineering. They have also drafted rules of safe conduct in research to minimize unintended consequences.*

Regulating and checking the development of AI, though, is not going to be as easily achieved. Unlike nuclear power, which was mostly developed in state-funded national labs, or genetics, which was developed in academic labs, AI is backed by for-profit mega-firms (Google, Microsoft, Meta, and others) that have strong incentives to monetize their inventions. Another difficulty is that the immediate effect of AI advances is increased convenience in carrying out a variety of tasks, which numbs our urgency to think about the ultimate consequences down the road. Nor do I sense a widespread interest among the public at large that would mobilize activists to demand a governance model of checks and balances on AI. And yet the risks and dangers are real and call for action. So we do need to mobilize governments, AI innovators, and the public on an international scale to find the right path forward.

In the end, it comes down to drawing a line between the freedom to develop new knowledge, driven by our human curiosity, and the necessity of applying it wisely for the good of humanity.

*The recent letter on AI echoes the pronouncement on standards and limits in genetics research of the 1975 Asilomar Conference in California. An informative account of how research and applications in genetic engineering have been contained within some bounds can be found in Matthew Cobb’s book As Gods: A Moral History of the Genetic Age.

Author: George Papaioannou

Distinguished Professor Emeritus (Finance), Hofstra University, USA. Author of Underwriting and the New Issues Market. Former Vice Dean, Zarb School of Business, Hofstra University. Board Director, Jovia Financial Federal Credit Union.
