Until several years ago, I had difficulty readily recognizing that AI stood for artificial intelligence. Not anymore. References to AI have become ubiquitous in the press and in books. The immediate interest in AI concerns how it will improve everyday living and how it will affect jobs and employment. As important as these effects may be, AI is destined to impact human lives in even more profound ways.
AI is part of the fourth industrial revolution, which includes nanotechnology, quantum computing, biotechnology and the internet of things. As a reminder, the previous industrial revolutions, starting in the 18th century, were associated with the steam engine, electricity and telecommunications, and, most recently, digital technology.
AI, however, may be emblematic of a new stage in human development, which the MIT physicist Max Tegmark calls Life 3.0. Tegmark thinks of life “as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprint for its hardware.” Life 1.0 was the stage of simple life forms whose hardware (arrangements of atoms) was replicated by information carried in their DNA, with no new learning after birth. The next stage, Life 2.0, emerged as the hardware of Life 1.0 evolved and eventually produced species, above all the human species, that could learn after birth and design new software (ways of processing information), giving rise to social patterns, culture and the advancement of knowledge. In Life 3.0, humans, with the aid of super technology (AI), will become masters of their destiny, able to redesign both their hardware and their software and to take control of the evolutionary process. It is also the stage at which AI machines have the potential to become autonomous.
The difference between human intelligence and AI is that humans rely on biological matter to develop intelligence and learning, whereas AI is non-biological. The big insight driving the development of more human-like AI machines is that intelligence and learning, whether human or artificial, are products of physical matter, such as atoms, and of information, which is itself collected and analyzed by matter. In the brain, this is accomplished through neural networks. Both matter and information processing obey physical laws. Therefore, the ability to understand and apply the physical laws that allow human brains to develop intelligence and master learning can lead to the design of machines that are also intelligent and capable of learning. Google's AlphaGo program is already an example of a machine that can learn largely on its own. Its prowess was demonstrated when it defeated the world's top players of the ancient Chinese board game Go.
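To make the idea of a machine that learns concrete, here is a minimal sketch of a tiny neural network that discovers a rule purely from examples via gradient descent. It is an illustration only, not AlphaGo's method; the task (the XOR rule), the network size and the learning rate are arbitrary choices for the example.

```python
# A minimal sketch of machine learning (illustrative only, not AlphaGo):
# a tiny neural network that learns the XOR rule purely from examples.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the outputs the network should learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (an arbitrary illustrative value)
for step in range(10000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error,
    # i.e. gradient descent on the mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]: the rule was learned
```

Nothing in the program states the XOR rule explicitly; the behavior emerges from examples and a lawful update procedure, which is precisely the point made above about learning as physical information processing.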
The challenge, a rather scary one, is what happens when, through its ability to learn, AI develops into superhuman AI. Tegmark envisions a multitude of potential scenarios describing human life with AI. Here are some of the more interesting ones:
- Libertarian utopia: AI machines and humans peacefully coexist in a system that recognizes legal and property rights for both.
- Benevolent dictator: Adverse impact of AI on human welfare is avoided through the abolition of property and the institution of a guaranteed income for all.
- Gatekeeper: AI becomes the regulator of new advances and sets limits; whether those limits are good for humans depends on AI's goals.
- Protector god: AI behaves as benevolent supporter of humans.
- Enslaved god: AI is controlled by humans who use it for good or bad ends.
- Conqueror: AI, unbound, prevails and gets rid of the now-redundant humans.
- Reversion: Humans put an end to technological progress and eliminate AI.
These possibilities raise the question: how will humans manage the power of AI? If AI has the potential to learn and develop its own goals, how do we protect ourselves? The safest way is to design AI with goals compatible with ours. But how do we decide what those goals are? Are we going to decide collectively, say, through a UN-type arrangement? What happens if one country adopts a more aggressive AI model that threatens the wellbeing of other countries? China's determination to advance its AI technology betrays its apprehension about falling behind in such a critical area.
More important, besides forming its own goals, an autonomous AI may acquire the capacity for consciousness. Those who, on religious or philosophical grounds, believe in the dualism of matter and soul (as two separate entities) will have difficulty accepting the idea that AI machines can develop consciousness, that is, the ability to have subjective experience. Evolutionary biologists, neuroscientists and analytical philosophers, though, are approaching consciousness as the product of physical processes obeying physical laws, and theory and experiment already provide some support for this view. If consciousness is the way information feels when it is processed in certain ways, and that processing follows physical laws, then it could be possible to develop AI with that potential. If and when AI advances to the point of having consciousness, the line between humans and AI machines will become extremely blurred.
Humans do not have a good record of keeping up with the consequences of new technologies, and the result has often been anxiety and upheaval. AI may be the most critical challenge humans have yet faced. AI that can learn on its own and form goals may still be many years away or, for now, may exist only in the imaginative minds of scientists. But so, at one time, were many other advances, such as cloning. Understanding the potential power and the implications of AI is a critical step toward planning and preparing for a human life with AI.
Note: Max Tegmark's book is titled LIFE 3.0: Being Human in the Age of Artificial Intelligence. To follow developments in AI relevant to the human future, go to http://futureoflife.org
My next post will deal with the more immediate concerns arising from AI with regard to jobs and employment, and some possible solutions.
A very concerning subject. Are we losing our humanity by adopting artificial logic? Is there space for imperfection, or must all the imperfect (the infidels) of this new artificial-intelligence religion conform or be eliminated?
Our more important concern today is how data gathered from social media platforms are being used: monetized or weaponized. Only humans can determine whether AI will dominate us; either we control it or it controls us. There are no clear answers, but human greed and an insatiable hunger for power point to the latter.