The Checkered History of Foretelling the Future
Soothsayers haven't scored well over the years, and AI Soothsayers are likely no exception
These are the words of one of the most renowned researchers in the field of artificial intelligence:
In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.
There have been a lot of predictions lately about the future of AI and society, some positive and some negative. Some very negative.
So the question we’re faced with, one that’s appeared time and time again over the years, is this: what is the likelihood that any particular expert or group of experts is correct when attempting to predict the future?
Depending on your inclinations, you may consider the AI expert's quote above a positive or a negative prediction. But if it's even remotely correct, it would seem quite reasonable to accept the dire warnings of the AI Dystopians as at least worthy of consideration and their sense of urgency as completely appropriate.
Omniscient Prognostication
In 1814 the polymath Pierre-Simon Laplace published A Philosophical Essay on Probabilities, in which he described a thought experiment speculating that an entity with a sufficiently vast intellect could determine the position of, and the forces acting on, every atom in nature and thus predict:
…the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
The entity of this thought experiment, now referred to as Laplace's demon, could thus use its knowledge of the universe to predict the future with supreme accuracy.
Laplace's demon represents the extreme end of probabilistic forecasting, for if any such entity actually existed, it would be able to predict the future with 100% accuracy. There are many arguments against the possibility that such an entity could ever exist, including questions about whether the universe itself is completely deterministic. Regardless of the prospects for Laplace’s demon, the odds for the rest of us accurately predicting the future are decidedly poor.
Predicting the future is hard. We don't know the exact state of every particle in the universe, and we don't have Laplace's demon to fill us in. With our merely human level intelligence, we're always operating with incomplete information and imperfect intellect.
Yet this hasn't stopped many of us from trying, and history is littered with far more people who’ve failed miserably than people who’ve succeeded.
Magnifying Ignorance
Adding to our ignorance of the future is our use of, and susceptibility to, logical fallacies. People both offer and accept Appeals to Accomplishment and Authority, even though expert knowledge of the present barely suggests, and in no way guarantees, expert knowledge of the future. People frequently insist on making Hasty Generalizations when they have little to support their predictions. Often they commit the Mind Projection Fallacy, suggesting that the truth of their prediction is self-evident to anyone who’s properly informed.
On top of this, there are the ever-present cognitive misfires that cloud our thinking about the future and those who predict it. Studies of the Overconfidence Effect show that there's usually quite a wide gap between what people think they know and what they actually know, and the effect has been shown to be even more pronounced in people who consider themselves to be experts.
This is not to say that they are delusional in their self-assessment of expertise, but merely that while they know more than a non-expert on a particular subject, they probably don't know as much as they think they know. When it comes to knowledge of the future, this knowledge gap widens considerably.
Unfortunately, the Overconfidence Effect is only one of many evolutionary traits that plague our quest for reasoned and rational thought. Most humans feel some degree of suspicion and apprehension when faced with uncertain circumstances, which certainly makes sense given our evolutionary history. Proto-humans were much more likely to survive if they were worried about the potential of being attacked by dangerous animals than if they walked around blissfully oblivious to the real potential for danger.
At least on a small scale, false positives tend not to be as dangerous as false negatives when it comes to predicting threats. Unfortunately, when inappropriately scaled up, this tendency has led to extreme handwringing through the years over future disasters that never came to be.
Too Many People
Overpopulation, for example, has been a popular worry dating back to at least the 18th century, when the English cleric, scholar, and economist Thomas Robert Malthus warned of population problems. In 1798 Malthus published An Essay on the Principle of Population, in which he warned that humanity was forever doomed to be caught in what has come to be called a Malthusian trap. The trap is a kind of Catch-22 in which any increase in agricultural production causes population growth. This growth inevitably outpaces the increased agricultural production and results in famine, which then limits the population again. Thus, humanity is doomed to experience continual cycles of population growth and famine from which it can never escape.
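To make the dynamic concrete, here is a minimal, purely illustrative sketch in Python. The numbers and function name (malthusian_trap) are my own assumptions chosen for demonstration, not anything Malthus proposed; the only feature carried over from his argument is that population grows geometrically while the food supply grows arithmetically, so growth eventually outruns the food supply and famine pins the population to whatever can be fed.

```python
# A toy illustration of the Malthusian trap described above.
# Assumptions (illustrative only, not Malthus's own figures):
#   - population grows geometrically at 2% per year when food is plentiful
#   - food production grows arithmetically, feeding 10 more people each year
#   - when population exceeds what the food supply can support, famine
#     knocks it back down to the supportable level

def malthusian_trap(years: int = 200,
                    population: float = 100.0,
                    food_capacity: float = 120.0,
                    growth_rate: float = 0.02,
                    capacity_increase: float = 10.0) -> list[tuple[int, float]]:
    """Return (year, population) pairs for a crude Malthusian cycle."""
    history = []
    for year in range(years):
        population *= 1 + growth_rate       # geometric population growth
        food_capacity += capacity_increase  # arithmetic growth in food supply
        if population > food_capacity:      # famine: fall back to capacity
            population = food_capacity
        history.append((year, population))
    return history


if __name__ == "__main__":
    for year, pop in malthusian_trap()[::20]:
        print(f"year {year:3d}: population ~ {pop:,.0f}")
```

Run long enough, the exponential curve always catches the straight line, after which population simply tracks the food supply: the trap.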
Although worries about the Malthusian trap subsided somewhat with the Industrial Revolution and the mixed bounty of progress it provided, fears of runaway overpopulation began to grow dramatically with the environmental movement that hit its stride in the late 1960s and 1970s. Biologist Paul Ehrlich wrote in his highly influential 1968 book The Population Bomb:
The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate...We must have population control at home, hopefully through a system of incentives and penalties, but by compulsion if voluntary methods fail. We must use our political power to push other countries into programs which combine agricultural development and population control.
Speculation on overpopulation has continued to shift back and forth over the years since Ehrlich's book was published. Of course, hundreds of millions of people did not actually starve to death in the 1970s due to overpopulation. The world population growth rate appears to have peaked sometime between 1962 and 1968 and has been on a downward trend ever since. The production of mass quantities of food has become increasingly industrialized and globalized, and the problems we find in feeding people are almost exclusively political rather than technological or environmental.
Currently, most predictions of world population, such as those from the United Nations, project that it will peak sometime later this century at between 8.6 billion and 10.4 billion people before declining slightly. Yet many people still fear overpopulation and consider it our unavoidable future.
Not Enough Resources
Tied closely to those fears of overpopulation and starvation in the 1970s were fears of resource depletion. In 1970 Scientific American published an article by Harrison Brown, a chemist at the National Academy of Sciences, in which he estimated that humanity would run out of lead, zinc, tin, gold, and silver before 1990 and copper shortly after 2000.
And the same year that Ehrlich's book was published, a loosely affiliated group of intellectuals began meeting in Rome to ponder a holistic approach to the many problems they believed were tied to humanity's effect on the planet. This group of scientists, high-level government officials, and business leaders began calling themselves the Club of Rome and hired a group of MIT scientists to create computer models for predicting the future. In 1972 they published The Limits to Growth, which made a number of dire predictions, including that the world would likely run out of copper, gold, silver, lead, mercury, tin, tungsten, zinc, natural gas, and petroleum by the year 2000.
The 1970s also gave us the worldwide oil crisis, in which oil production dipped severely and oil prices skyrocketed. Despite overt political causes, these shortages led to much scientific speculation on the limits of the world's oil supplies, and this filtered into political thinking. In a 1977 speech to the nation, then-President Jimmy Carter stated:
Unless profound changes are made to lower oil consumption, we now believe that early in the 1980's the world will be demanding more oil than it can produce…I know that many of you have suspected that some supplies of oil and gas are being withheld from the market. You may be right, but suspicions about the oil companies cannot change the fact that we are running out of petroleum.
This was not an idea President Carter cooked up on his own but instead was one promoted by many experts.
Chasing the Wrong Demons
Perhaps more importantly, Carter went on to prescribe a solution to this looming disaster, stating:
Too few of our utility companies will have switched to coal, which is our most abundant energy source.
Carter, a conservationist at heart, thus recommended that the U.S. "increase our coal production by about two-thirds to more than 1 billion tons a year." And here we see a less obvious issue that arises with extreme and unfounded alarmism. As has become more apparent over the decades since Carter's speech, coal combustion produces more greenhouse gases than the combustion of any other fossil fuel. Unfortunately for a conservation-minded individual like Jimmy Carter, the unsupportable fear of running out of oil led him to promote a fuel source significantly worse for the environment and for society.
The anti-nuclear movement of the 1970s and 1980s fell into the same trap. Fear of both a nuclear meltdown and the inadequate disposal of nuclear waste brought the nuclear industry to a nearly complete standstill. Unfortunately, the alternative was burning significantly more fossil fuels and pumping a huge volume of greenhouse gases into the atmosphere. It's indisputable that the number of premature deaths caused directly and indirectly by the use of fossil fuels dwarfs all deaths ever caused by nuclear power. One could certainly argue that more nuclear power plants might have resulted in more nuclear disasters and potentially more deaths than fossil fuels, although this is statistically unlikely given the historical record of nuclear power plants and fossil fuel production and use.
The point, however, is that unfounded alarmism in one area can lead to blindness in other areas.
Many of these predictions fail because they are False Dichotomies: they insist on all-or-nothing scenarios. In projecting future possibilities, they neglect to consider technological and societal progress, middle-ground outcomes, or the direct effect that knowledge of the potential problems may have on the future of those problems.
They also neglect the "unknown unknowns," a phrase that former U.S. Secretary of Defense Donald Rumsfeld infamously used to describe the lack of evidence available for Iraq's having weapons of mass destruction. The unknown unknowns have been responsible for many future predictions running afoul of the actual future. They can make things turn out better or worse than expected, and unfortunately there’s usually no way to know in advance which way it will go.
Aggregating Unknowns
So what should we think about the scary conclusions of the AI Dystopians, including those who are experts in the field of AI, such as Stuart Russell and, more recently, Geoffrey Hinton? The quote at the beginning of this post predicting AGI in three to eight years was from Marvin Minsky, a computer science professor at MIT, co-founder of its AI laboratory, and a seminal figure in the history of artificial intelligence.
The quote appeared in a 1970 article in Life magazine titled "Meet Shaky (sic), the first electronic person: The fascinating and fearsome reality of a machine with a mind of its own." Shakey was an early robot developed at what was then the Stanford Research Institute; it could (very slowly) navigate through a field of simple obstacles with a degree of autonomy.
Minsky certainly wasn't alone in his optimism about the progress of artificial intelligence. Herbert Simon, a professor and Nobel laureate at Carnegie Mellon University and an expert in computer science, cognitive psychology, and political science, predicted in his 1960 book The New Science of Management Decision that:
Technologically, as I have argued earlier, machines will be capable, within twenty years, of doing any work that a man can do.
The history of computer science is littered with such optimistic predictions. When it comes to more modern predictions of AGI's arrival, there has been somewhat more reluctance to engage in such bold forecasts, at least until recently; that changed with the arrival of LLM systems like GPT-4 and LaMDA.
Expert predictions about future events in areas with many unknowns have not had a very high success rate. In fact, the success rate in well-established areas isn't so hot, either.
Moreover, aggregating expert opinions about such future events does not greatly increase the accuracy of the prognostication, much as gathering together ever-greater numbers of medieval theologians won't yield more accurate information about the number of angels that can dance on the head of a pin.
It's certainly worth stating explicitly that any new day may bring a breakthrough leading to eventual human-level or greater artificial intelligence. But intelligence like ours is complex, and our knowledge of its terrain and boundaries is limited. Predicting its arrival is full of unknown unknowns.
The less exotic warnings, such as the spread of misinformation and bias or the loss of jobs due to LLM-based AI, are certainly based on fewer unknowns, if not fewer fallacies and cognitive biases. They're certainly topics worthy of discussion. But it's also worth considering, as part of that discussion, whether we are in fact chasing the right demons, or whether the demons exist at all.