The Real Debate About AI

According to Stuart Russell, author of Human Compatible, the standard model of technology has so far worked as follows: we build a machine, we give it an objective to carry out, and off it goes to do its job.  And one more very important thing: when we want to stop it, we turn it off.  Artificial intelligence, however, can upend this model, because down the road it has the potential to learn how to disable its own off switch.

The current debate about the place and future of AI in human civilization, however, seems mostly to treat AI as part of the standard model.  It is true that groups of scientists, researchers, and policy makers have expressed grave concerns about the existential risk AI poses to humanity, but their voices seem to be drowned out by the numerous news articles concerned with the usual pedestrian matters that surround each new technological breakthrough.  That is, the articles we read daily on AI first and foremost promote the great contributions AI can make in medicine, public health, education, and numerous other fields, arguably all of them valuable.  They also bring out the potential risks, as AI tools can replace people in numerous jobs or be abused or misused for nefarious and criminal purposes.  This is the type of debate that has surrounded every past technological innovation, dividing people into technology enthusiasts and Luddites who abhor technological advances.  AI, however, has a new potential we need to address.

This potential is that our own creation can develop the capacity to no longer listen to us and to turn against our interests.  It is with this in mind that we need to pay attention to the voices of skepticism.  Russell published his book in 2019, yet it remains relevant, especially after OpenAI’s introduction of its chatbot ChatGPT.  Russell’s main objective is to alert us to the risk of humans losing control over their AI creations.  This can happen when we build superintelligent AI tools (software and robots) that eventually learn to become autonomous from any human control.

Russell does not advocate that we shut down all research on AI.  Instead, he lays out his principles for building provably beneficial AI machines which, even if they achieve autonomy, will continue to serve humanity.  But for this to happen, AI machines must be trained to align their preferences and objectives with those of their human masters.  That means AI machines must be trained to be truly altruistic toward humans.  The core element of this altruism is that AI machines will be incapable of placing their own survival over the interests of humans.

Building altruistic AI machines, however, poses significant problems for their creators.  To acquire preferences and objectives that align with ours, AI machines must learn from us.  That means we have to open up to them our most private secrets.  Even so, there is the problem that humans are not perfectly rational – far from it – whereas AI machines will be built to operate as rational thinkers.  How can they make sense of our idiosyncratic emotions, thinking processes, and decisions?  Furthermore, as Russell writes, we are unpredictable and often uncertain of our own preferences.  None of these challenges makes the building of altruistic – as opposed to self-preserving – AI machines a certain success.

Equally skeptical about the coexistence of humans and machines is another pioneer of AI, Mustafa Suleyman, co-founder of DeepMind, whose AlphaGo program beat the world champion of the game Go in 2016.  Like Russell, Suleyman does not believe that we should hold back research and innovation in AI.  In his 2023 book, The Coming Wave, Suleyman advocates for the containment of AI so that it does not escape human control beyond the point of no return.  Drawing on his experience at DeepMind, Suleyman warns that we have no way of knowing when containment has failed and AI comes to dominate humans as a superior artificial species.  That is what he and Russell call our gorilla problem: just as we humans emerged from a common ancestor to dominate our fellow primates, AI can emerge from us to dominate us as a superior, albeit artificial, species.

All this may be dismissed as fanciful talk.  But that is exactly why people like Russell and Suleyman worry about the state of the present debate about AI.  Russell sees three strains in this debate.  First, there are the denialists, who make various excuses to dismiss the severity of the AI problem.  Second, there are the deflectors, who recognize the problem but claim there are more serious problems to fix in many aspects of human life, making the ultimate risk posed by AI a second-order problem.  Lastly, there are the over-simplifiers, who try to assure us the problem will be solved because, after all, they as experts know so.

Meanwhile, AI research and tools have started to come out without a solid framework of checks and monitoring.  The European Union seems to be the only governmental authority to set regulatory boundaries around AI research and applications based on the principle of “do no harm to humans.”  The Biden administration recently issued a declaration of wishes and admonitions, but without any regulatory or enforcement bite.  China, Russia, and other international players have been reluctant to impose concrete road maps.

At the same time, the beneficiaries of this foggy landscape are the big data aggregators (Meta, Google, Microsoft, and a few others), who have a huge advantage over their competitors for a very clear reason.  As mentioned above, AI assistants serving humans must learn a great deal about them, which means only the big data aggregators hold the data for such training.  Moreover, given the ability of these aggregators to compel users to surrender data, our privacy will be invaded far more than it is now.  In their effort to maximize the alignment of preferences and objectives between AI machines and humans, these aggregators will become ever more expansive in the types of data they try to pry from us.  So we need to set serious boundaries around our privacy, and we also need to make the playing field of AI much more level to avoid dominance by a few big players.

Mustafa Suleyman makes a great point we need to heed: superintelligence will be the last innovation humans make.  After that, super AI will be able to do everything.  This raises an important and momentous question.  What human capabilities do we wish or need to maintain before we lose our drive and skills to create, to imagine, to compute, and to relate to others?  Do we wish to retain any degree of agency for ourselves?  That is what the real debate about AI ought to be about.

Author: George Papaioannou

Distinguished Professor Emeritus (Finance), Hofstra University, USA. Author of Underwriting and the New Issues Market. Former Vice Dean, Zarb School of Business, Hofstra University. Board Director, Jovia Financial Federal Credit Union.
