Peter Diamandis, in his book Abundance, argues that technological progress could plausibly lead to a heavenly future for humanity on earth, in which an abundance of goods and services, at near-zero cost, becomes a reality for the whole planet. He is probably right. We are now at the elbow of the exponential curve, beyond which technology will begin to progress at speeds never observed before. Diamandis argues that robotics and artificial intelligence will very soon bring the production costs of almost all goods and services, such as food, energy, transport, and housing, close to zero. Technologies like blockchain promise to decentralize control over money. Medicine and bioengineering will continue to extend the average lifespan without limit and to improve its quality. Genetics will understand the mechanisms of life and will be able to create and manipulate it, as it is already starting to do with CRISPR gene editing. Cybernetics will blend the biological and the digital, creating augmented senses and hybrid beings, while understanding the human brain will allow the development of brain-computer interfaces, or even the transfer of a person's consciousness to the cloud, with obvious repercussions for ethics and philosophy.
As utopian as it may seem today, all this could even reduce to zero the need for humans to work. Machines and artificial intelligence will do, "for free", almost all the work that today only humans can do, producing the goods and services we enjoy.
All this is not a rehash of old ideas from the visionary science fiction authors of the '60s, ideas based mostly on creative imagination. Today, technological startups, laboratories, and research centers are already bringing to light embryos of these projects, whose progress will intertwine with and feed all the others, until we reach what Ray Kurzweil called The Technological Singularity.
The Technological Singularity
Kurzweil borrowed the word singularity from cosmology, coining The Technological Singularity to define a technology-driven event unique in the history not only of humankind but of the entire universe. It is not a specific day on the timeline; it is a moment, in the near future, when technological progress will become so fast that it becomes self-feeding, autonomous, and almost independent of the will of man.
Ray Kurzweil is now a Director of Engineering at Google, but he has spent his life working with technology: he invented the first omni-font OCR system, text-to-speech synthesis for the blind, and the CCD flatbed scanner, and even ventured into music synthesis, developing the synthesizers that bear his name, such as the Kurzweil K250.
Nevertheless, when he is introduced at the beginning of one of his conferences or interviews, the one thing that is never omitted is the accuracy of his predictions. He predicted the fall of the USSR, because technology would decentralize the control of information. He predicted that by the year 2000 a computer would beat the human world chess champion, and that the Internet would explode in popularity within a few years of reaching maturity.
For the future, Ray Kurzweil has predicted that by 2029 artificial intelligence will be able to pass the Turing Test and, above all, that The Singularity will take place a little more than two decades from today, by 2045.
The quality of life in the Western world is constantly growing. Although our skewed perceptions tell us otherwise, we live in the best present since man has been on earth. All quality-of-life indicators are higher today than in the past, from average wealth to human rights, from longevity to the spread of democracy.
If we look at Ray Kurzweil's Law of Accelerating Returns, we can only imagine an even better future, one that follows the exponential trajectory that all progress has followed so far, from the creation of the universe to the present day. The fascinating description of the six epochs that Kurzweil gives in his books shows us how solid this trajectory is when read backward, and how close the exponential explosion is when read forward.
The reason for this exponential trajectory can be summed up in one concept: progress in specific areas such as information technology, communications, transport, bioengineering, and energy is intertwined with progress in all the others, so each field constantly increases the speed of the rest. When the rate of progress is proportional to the progress already made, the resulting curve is exponential.
It is natural that these predictions appear excessively optimistic to us, because our brains think linearly rather than exponentially. An example of this common error is the forecasting of the International Energy Agency, which in recent years has systematically underestimated the share of solar power in electricity production.
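The gap between the two mental models can be sketched in a few lines of Python. The numbers below are invented for illustration (they are not IEA data): a linear forecaster extends last year's absolute increase, an exponential forecaster compounds last year's relative growth, and over a decade the two diverge dramatically.

```python
def linear_forecast(current, annual_increase, years):
    """Extend last year's absolute increase linearly into the future."""
    return current + annual_increase * years

def exponential_forecast(current, growth_rate, years):
    """Compound last year's relative growth rate into the future."""
    return current * (1 + growth_rate) ** years

# Suppose a quantity sits at 100 units and grew by 30 units (30%)
# over the last year. Ten years out, the two mental models diverge:
linear = linear_forecast(100, 30, 10)              # 100 + 30*10 = 400
exponential = exponential_forecast(100, 0.30, 10)  # 100 * 1.3**10 ≈ 1379

print(f"linear: {linear}, exponential: {exponential:.0f}")
```

Both forecasters agree on year one; by year ten the exponential outcome is more than three times the linear one, which is roughly the shape of the error a linear mind makes when judging an exponential process.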
The other scenario
The scenario envisioned by Diamandis and Kurzweil is only one of the scenarios that the technological explosion promises. Another possible scenario is one in which things do not go so well.
Many scholars, scientists, futurists, and industrialists are raising the alarm about AI. It is reasonable to think that this emerging tool will soon acquire such great power that optimists fear it as a weapon in the hands of unscrupulous men, while pessimists fear it as a conscious living being, infinitely more intelligent than any human and therefore able to subdue the human species.
No one can predict whether an AGI (Artificial General Intelligence) will one day acquire consciousness. The reason is simple: we have not yet agreed on what our own consciousness is. Some think that consciousness emerges once a cognitive system (biological or digital) exceeds a certain threshold of complexity; others say it has a quantum basis instead. Whatever generates a conscious being, given the speed of development we have witnessed in recent years, in my personal opinion it is not unreasonable to think that an autonomous, self-aware AGI is no more than two decades away.
For this reason, scholars are beginning to think about how to regulate research on artificial intelligence in the years to come, in order to contain this potential and harness it within the ethical boundaries of our time.
AI is not only powerful but also extremely versatile, a tool that can be applied to a wide variety of problems, from optimizing passenger flows in an airport to forecasting financial markets, the weather, or the geographical spread of viruses.
But it can also be used for less benevolent purposes. It is easy to imagine the role it will play in the endless race between cyber-crime and cyber-security, in financial fraud, or in the analysis of big data for persuasive purposes.
In fact, if we make artificial intelligence collaborate with big data, then we will have an unimaginable power in our hands.
Yuval Harari reasonably expects that in the short term, big data and artificial intelligence will reach a level of accuracy that will enable them to know us better than we know ourselves.
Today, scrolling through our social network feeds, we already see ads for products that match our preferences, or news about topics we often dwell on. Given the constant growth in the production of individual data tracking our preferences, it is not unreasonable to think that in the coming years the understanding these systems have of us may become a manipulative lever that leads us to non-spontaneous behavior.
Big data also has another undesirable effect when used by authoritarian regimes. Tracking citizens' behavior through big data and artificial intelligence can easily be used for repressive purposes, to keep under control the hotbeds of political and social dissent that ignite in countries with little respect for human rights.
Ethical genetic editing
Technology is not just about algorithms. Another significant advance that will bring many headaches along with its benefits is genetics. 2019 is the year in which the chequered flag was waved on the CRISPR circuit, starting the race toward human genetic editing. First up was China, with twins born with a genetic modification intended to make them resistant to HIV; within a few months, other countries announced their own genetic editing experiments on humans.
Much more than other fields of research, genetic editing brings with it countless ethical implications that man will have to face for the first time. Soon, we may be able to "order" a child with the traits we would like him or her to have, or we will have the technological power to give birth to an in vitro "spare" twin, just to have a backup of organs, blood, and marrow for our first child.
Here too, the scientific community working in genetics is busy laying ethical foundations for scientific and technological progress that are in line with the ideas of good and evil that humanity has developed so far.
A new Cold War
Today, in 2019, the two antagonists, the United States and China, are teasing each other on the trade front and, given the historical thoughtfulness and harmony of the two governments, it is not impossible that the competition will shift from Huawei to customs duties, embargoes, and military espionage.
An escalation of low blows that, at best, will divide the globe in two: two separate Internets, two separate international trade circuits, two separate scientific research communities, two Declarations of Human Rights.
If that is the best of scenarios, the worst is one in which the escalation does not stop at a "separated under the same roof" break-up, but leads to military conflict.
A conflict that, at least initially, will not involve gunpowder, because a much more interesting weapon is available to both sides. A weapon that does not blow up military installations and does not directly kill people, but that collapses economic empires by destroying the credibility of planetary brands, alters political results, empties the virtual coffers of cryptocurrencies, and messes up the automated supply logistics of primary and secondary goods.
Artificial intelligence will immensely enhance the hacking techniques that may be used to scorch the earth around the enemy.
As a consequence of this potential danger, our societies will strengthen their security systems, which will become much more complex, closed, and invasive, eroding spaces of our freedom.
A major debate is currently underway on the neutrality of technology. I am convinced that technologies like the three mentioned above, like science itself, have no intrinsic character of good or evil, but acquire it from their users.
The classic metaphor in this philosophical diatribe is the knife. A knife is only a tool, but it can be used for good or evil.
My opinion is that science is neutral, but the products derived from it may not be. Research produces an advancement of knowledge, but a product developed on that research can be designed with good or bad intent. The Internet is neither good nor bad, but a social network designed to keep users glued to it may not be.
Artificial intelligence research produces awareness and new capabilities, but a product based on that knowledge can be maliciously developed to influence user behavior. Knowledge of artificial intelligence can create benevolent products that solve problems and improve human life, or malevolent products that harm someone for the benefit of others.
The expansion of knowledge must go on; it is the product that must be kept under control. It is not an ethical AI that must be developed; rather, an ethical industry must be built and ethical users must be educated. Trying to hard-code an ethical DNA into technology would target the wrong problem, and it would also be useless for another reason.
Today, unfortunately, we do not live in a globally interconnected political system capable of regulating the entire community of humankind and its sub-communities. There is no body with global sovereignty over technology. On the contrary, in the Donald Trump era, the few global networks of commercial and industrial collaboration between superpowers have reversed course and may have started a process of hardening.
Suppose one of these political blocs starts a process of moral self-regulation of artificial intelligence, drawing up an ethical manifesto to which all the industry under its jurisdiction must adhere. Such regulation will not necessarily be adopted by others. Each nation is free to regulate, within its borders, technological research and its industrial or military applications.
Consequently, in a competitive and non-cooperative system like the one we live in, no nation or group of nations will be inclined to set boundaries on development, knowing that its antagonists (political, commercial, industrial, or military) will continue freely, without any moral or ethical constraint.
Therefore, it makes little sense to try to stem the potential dangers arising from the misuse of technologies by acting on the technology itself. Instead, I think it is essential to focus on what will really make the difference when such technological power becomes available to man: man himself.
The warnings that have been issued about the dangers of artificial intelligence only trigger the usual unfounded panic and direct the general paranoia of public opinion toward the wrong threat.
How many times have I heard that the Internet has ruined the world because children no longer play in the street and because they are at the mercy of pedophiles?
Paranoia only slows down progress.
Just as the Internet has not ruined the world, artificial intelligence is not the danger for the future.
Man will benefit from these technological advances only if he can create robust and collaborative social structures on the whole planet.
When The Technological Singularity arrives, man will have to welcome it with a stable network of international collaboration that fosters mutual development, instead of a scatter of small and large regimes in constant cyber conflict. He will need an educational system adequate to the speed of the times, one that educates minds toward adaptability rather than stiffening them in the patterns of the last century. He will need a higher environmental awareness that gives us the tranquility to look at the future, rather than the anxiety of fixing the damage done in the past.
I believe that today, rather than trying to give an ethical heart to artificial intelligence, it is worth focusing attention on the lever that will make the difference between an Orwellian future of global techno-regime and a free future of techno-abundance: politics.
I find emblematic of all this the moment when Daenerys Targaryen (from Game of Thrones; spoiler alert), on her mighty dragon, stood on the walls of King's Landing deciding what to do.
We have two decades to think about what to do: whether we want to succumb to anger and selfishness, creating a tomorrow of conflict, scorched earth, and protective walls, or listen to the wise Tyrion and get off the dragon, renouncing domination and enjoying the victory of a prosperous and free future.
This cannot be achieved by compensating for our lack of ethics by stuffing some of it into our tools, but by improving our organization and our set of values. It can only be achieved by improving the quality of our politics.
illustration: Dennis Schäfer