There’s a lot of commotion about artificial intelligence lately. Many movies, TV shows, books, and podcasts have explored the subject, from both fiction and non-fiction perspectives. Some people downplay A.I., but its most extreme proponents appear to share a fascination with its sheer power and the profound effects it will have on society.
Whether A.I. has already been achieved is debatable. A computer, IBM’s Deep Blue, beat the world chess champion Garry Kasparov back in 1997, and more recently DeepMind’s AlphaGo did the same to Go champion Lee Sedol, in a game even more complex and challenging than chess. So in some ways, artificial intelligence already surpasses our own.
But there are aspects of our brains that can’t yet be sufficiently mimicked, and may not be for a long time. These include creativity and emotions, even though there are programs that can make primitive art. The human brain is so complex that neuroscientists have barely begun to scratch the surface. Some critics of A.I. argue that we need to fully understand our brains before we can design robots as intelligent as humans. But this may not be a requirement, since aspects of our brains have already been rudimentarily modelled, with impressive results. Many A.I.s perform human tasks with no detailed understanding of the brains they imitate. Beyond chess and Go, these programs show their abilities in all of our electronic devices. In some ways, our smartphones are smarter than we are.
The types of A.I. that experts and artists most often analyze and make predictions about are genius robots and supercomputers. These appear in movies like Ex Machina and Transcendence, and in books like Isaac Asimov’s I, Robot. Extreme proponents of artificial intelligence tend to follow two opposing ideologies about its future. One centers on “the singularity,” a term popularized by Ray Kurzweil in his book The Singularity Is Near. People following his line of thinking hold an essentially utopian view: technology will keep evolving until A.I. monumentally changes the world and improves everyone’s lives. It could have the same kind of profound effects as language, the printing press, and the internet. Related advancements like nanotechnology and biotechnology could take us to a new level of global prosperity. Since many of Kurzweil’s past predictions have proven correct, these forecasts carry considerable weight.
On the opposite end of the spectrum, some A.I. experts make doomsday predictions. This idea is expressed in movies like the Terminator series: computers will become so intelligent that they make us their slaves, or possibly kill us all. Programs will conclude that our species is the biggest threat to the universe, and that we therefore need to be controlled or eradicated. Doomsday proponents accept the utopian notion that artificial intelligence will be a momentous historical achievement. However, they think that this power will construct a dystopian future rather than a heavenly one.
Philosophers and computer scientists also make convincing arguments for this view. In his book Superintelligence, the Swedish philosopher Nick Bostrom analyzes A.I. research and expert opinion, and makes a strong case for the dangers of this development. For instance, since we cannot program common sense, it’s borderline impossible to anticipate every way that a superintelligent entity could misinterpret its directions. If you tell an artificial intelligence to make as many paper clips as possible, it could use the entire world’s resources to do so, ruining the planet in the process and leaving no raw materials for running societies.
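To make the paper-clip point concrete, here is a minimal toy sketch in Python. It is not anything from Bostrom’s book, and every constant and function name in it is invented for illustration: a greedy maximizer scored only on paper clips consumes everything, while the same maximizer with an explicitly programmed reserve stops short of ruin.

```python
# Toy illustration of a misspecified objective: whatever we forget to
# include in the score simply doesn't matter to the optimizer.
# All numbers below are invented for the sake of the example.

WORLD_RESOURCES = 1_000.0   # hypothetical units of raw material
CLIPS_PER_UNIT = 10         # hypothetical clips made per unit consumed

def naive_objective(units_consumed: float) -> float:
    """Reward ONLY the number of paper clips -- nothing else counts."""
    return units_consumed * CLIPS_PER_UNIT

def constrained_objective(units_consumed: float, reserve: float = 400.0) -> float:
    """Same goal, but heavily penalize dipping into a reserve society needs."""
    penalty = max(0.0, units_consumed - (WORLD_RESOURCES - reserve)) * 1_000
    return units_consumed * CLIPS_PER_UNIT - penalty

# A greedy maximizer just picks the consumption level with the best score.
candidates = [u * 10.0 for u in range(101)]        # 0, 10, ..., 1000 units
print(max(candidates, key=naive_objective))        # 1000.0 -- uses everything
print(max(candidates, key=constrained_objective))  # 600.0  -- leaves the reserve
```

The catch, of course, is that the second objective only works because someone thought to write the penalty in; Bostrom’s argument is precisely that we can’t anticipate every penalty a superintelligence would need.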
Human morality also evolves, as shown by the near-ubiquitous acceptance of slavery throughout most of history. So there are almost inevitably things most people do today that will be seen as immoral in the future. This is why it might be impossible to program morality into machines: we have never fully understood it ourselves. With so many important, complicated problems left unsolved, A.I. could conceivably wipe out the entire human race. Such doomsday scenarios have been voiced by Elon Musk, the entrepreneur behind Tesla, SpaceX, and SolarCity. Innumerable people unquestioningly agree with him because they worship his intelligence and success.
I was convinced by the utopians, until the dystopians persuaded me instead. But recently, I started thinking about the subject more thoroughly, and from more angles. On a recent episode of the Tim Ferriss podcast, in which Ferriss dissects the habits and practices of world-class performers to help people improve their lives, the investor Naval Ravikant made me doubt the doomsday predictions.
Ravikant brought up a point that I had heard Michio Kaku make before. Kaku is a genius theoretical physicist who built a particle accelerator in his parents’ garage when he was a teenager, just like Sheldon from The Big Bang Theory; the difference is that he did it in real life. His proposition is that Moore’s Law is expected to drop off. The law observes that roughly every 18 months, the number of transistors on an integrated circuit doubles, which means that devices get twice as complex every year and a half. The reason this trend is expected to slow down, and perhaps stop, is that more circuits require more power. Adding transistors can improve a computer, but computer scientists have not solved the problem of using more of them efficiently, without an ever-increasing amount of energy. Power has limits regardless of the number of transistors, so if the energy-efficiency problem is not solved, a device will eventually have so many transistors that there is insufficient power to use them all. Combined with our limited understanding of neuroscience, this means it could be a long time before superintelligent robots either save or destroy us. It might never happen at all.
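To see why that doubling runs into a wall so quickly, here is a rough back-of-the-envelope sketch in Python. The starting year and transistor count are illustrative round numbers I chose, not real chip specifications; only the doubling-every-18-months rule comes from the paragraph above.

```python
# Back-of-the-envelope projection of Moore's Law, assuming a strict
# doubling every 18 months. Base figures are assumptions for illustration.

BASE_YEAR = 2016
BASE_TRANSISTORS = 2_000_000_000   # assume ~2 billion transistors to start
DOUBLING_PERIOD_YEARS = 1.5        # the 18-month doubling period

def projected_transistors(year: int) -> float:
    """Transistor count if the doubling trend held without interruption."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (2016, 2022, 2031):
    print(year, f"{projected_transistors(year):,.0f}")
# 2016 -> 2,000,000,000
# 2022 -> 32,000,000,000     (4 doublings)
# 2031 -> 2,048,000,000,000  (10 doublings)
```

A thousandfold increase in fifteen years is exactly the kind of exponential growth that, without matching gains in energy efficiency, slams into the power limits Kaku describes.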
So will superintelligent A.I. lead to our salvation or our doom? It’s hard to say, and the outcome may land somewhere between those extremes. Past developments like the printing press and the internet neither eradicated nor saved us; each came with advantages and disadvantages. The internet brings us closer together and can make us smarter, but it also divides us and makes us dumber, leading us to clickbait, cat videos, fake news, and echo chambers. Regardless of the consequences, a lot of artificial-intelligence experts agree that the technological gods will not emerge until around 2130 at the earliest, if at all. So contrary to popular belief, science fiction will probably not be manifested in reality until most of us are dead. When or if that day comes, I wouldn’t be surprised if we are neither elevated to a new level of consciousness nor eradicated by our own creations. Or maybe both will occur, and messiah A.I.s will fight demonic ones. Who knows? Regardless of the ramifications, it’s fascinating to imagine how this will change humanity. Will A.I. Jesus save us? Will A.I. Satan destroy us? Or are there so many unknown factors that the outcome is impossible to predict? Only time will reveal the truth.