Hope, Hype, and Fear on the Road From AI to AGI: An Intro to SynthCog
SynthCog explores the intersection of AI, AGI, and society
Things seem to be looking kind of grim for humanity. A scan of the news as we start the 21st century's third decade turns up a slew of reports on existential threats piling up to wipe us out. On the list are old standards like nuclear war, global pandemic, and an asteroid hit. Then there’s the up-and-comer and perhaps now champion, climate change. But while global pandemic has notched up a rung due to the Covid-19 outbreak, some are proposing a new contender for the number one spot.
Although still a hazy concept to most of the general public, this new contender seems to be surging in popularity if judged by the hyperbolic headlines in the news, the number of articles featuring the Terminator movies' silver-skulled robot, and pronouncements by tech and science personalities that we're heading to Armageddon at the hands of our future creations. A new end is nigh and its initials are AI.
In The Beginning
Since its first use in a 1955 Dartmouth College paper, the term artificial intelligence, or AI, has proven to be slippery, mysterious, and burdened by an overabundance of hype and misunderstanding. In that original paper, titled A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, four highly regarded researchers in the burgeoning fields of computer science and information theory proposed that a "2 month, 10 man study of artificial intelligence be carried out during the summer of 1956."
The topics to be tackled included designing software that would: simulate the higher functions of the human brain, use natural language, mimic the functionality of neurons, improve itself, form abstractions from specific examples, and exhibit creativity. The paper confidently claimed in the opening paragraph:
We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. (McCarthy et al. 1955)
Needless to say, the research problems proved somewhat more difficult than originally anticipated. For most of the next 50 years, very little progress was made in any of these areas. The few scattered references to artificial intelligence in the media during those years usually referred to movie robots rather than anything in the real world. What little press did appear regarding artificial intelligence was generally positive, portending a possible future technology that might enable us to solve many of our problems and create things never before possible.
Some felt that it would trigger a massive shift in society during an event referred to as the technological singularity, in which technological advances accelerate so rapidly that it's impossible for us to foresee or even conceive of what happens beyond it. Accompanying such advances would be the flowering of prosperity and health for the human species, a utopia of abundance.
But this started to change during the first decade of the 21st century as a growing unease about the future began to permeate Western society, perhaps spurred on and reinforced by the attacks of 9/11, growing acceptance that Earth was undergoing global climate change, and the rise of new technology leviathans in the business world. Articles and books began to appear exhibiting a growing concern about the possibility of machines vastly more intelligent than a human and the problems this might portend for society.
Meanwhile, after many false starts, the practical field of artificial intelligence began racking up some significant accomplishments starting in the early 2000s. Growing in parallel with hopes and fears of the future has been a surge of optimism as machine learning techniques and hardware innovations have coalesced in new and unforeseen ways.
The capabilities of this new technology are impressive and seem eerily intelligent to us in ways that previous technology hasn’t, and this has led to a shift in the use of the term AI to refer to the narrower capabilities of this new technology. The term artificial general intelligence is now commonly used to refer to broader capabilities such as those outlined by the Dartmouth scientists in 1955.
Splashy AI technology demonstrations have led to a flurry of lucrative dot-com-style startups and company rebranding efforts, with AI figuring prominently in many an investment prospectus and mission statement if not always in truly practical innovations or applications.
Today, a few years into the third decade of the 21st century, technological unease commingled with these AI accomplishments has engendered a concern around artificial intelligence that has swelled into gnawing anxiety for many and abject fear for some. In the press and online, the term artificial intelligence is accompanied by the phrases existential threat and technological unemployment as often as by investment opportunity. The influence of giant tech companies has become uncomfortably more apparent as they feed on the personal data of the masses and seem to manipulate not only our personal habits but the currents of society itself.
Given this ominous situation, it seems that it would be helpful if the general public had a better understanding of not only what the threat or opportunity may be, but also what is actually meant by the term artificial intelligence. An existential threat is a threat to our very existence, and while a nuclear war or an asteroid hit is a fairly concrete danger, artificial intelligence is still a very nebulous concept to most people. What knowledge exists among the general public is more likely to be based on science fiction movies or online shopping suggestions than on any basic understanding of the actual technology.
Artificial intelligence can be found in everything from your thermostat to your refrigerator to your car, in your Walmart online shopping experience and your recommendations on Netflix and Spotify. So is this the AI that's going to lead to killer robots at some point in the near future? Is this the same AI that's going to use humanity as batteries while ensconcing us all in a virtual world à la The Matrix movies? Is that kind of AI, the kind that is actually an existential threat, even possible, and if so, how much longer do we have to enjoy our freedom before the machines enslave or obliterate us?
To many these questions sound cartoonish, but to a growing number they sound very real and too important to ignore.
SynthCog is short for Synthetic Cognition, a term that I prefer to artificial intelligence or artificial general intelligence (something I’ll explain further in a future post). This blog is an examination of contemporary and future artificial intelligence and how it’s perceived by and affects our society. The nature of today's AI technology can be addressed in a straightforward way despite the deluge of hype and misinformation provided by marketing departments and mildly informed journalists. There’s a lot going on in this area lately, and it’s worth exploring the dichotomy between the amazing things it can accomplish and the not always obvious limitations it operates under.
The future possibilities of AI, however, are significantly more difficult to address. First, it’s important to identify what is meant by AI in such a context, specifically a context in which it operates at or beyond the level of a human being in at least the intellectual tasks that humans engage in. As mentioned above, this is typically called AGI, for artificial general intelligence, although the terms AI and AGI have become increasingly jumbled through misconception and, on occasion, seemingly intentional misdirection. The term intelligence itself is a difficult one to pin down and is worth examining in more depth in a future post.
There are many more unknowns than knowns in the functioning of the brain, and consequently in any strategy for replicating it. Given this state of ignorance, projections of possible outcomes resulting from such a future technology are highly speculative at best. But it’s important to discuss them as long as this caveat is acknowledged, for most AI scientists agree that such technology will very likely be possible at some point and will undoubtedly have a significant effect on human society.
There is plenty of discord in both the public and professional arenas on the topic of artificial intelligence and what it portends for our future. Given this, it's exceedingly difficult for anyone to filter through all the tangled threads of debate and grasp the nature of the issue and the validity or speciousness of the points being made.
The topic of artificial intelligence has unfortunately been plagued with more than its fair share of arguments that are overly skeptical or not skeptical enough, belief systems wielded as if they were scientific tools, and wishful and fearful thinking manifested in equal measure. Scientists and other leading voices in the field have on occasion let their scientific rigor drift into pseudoscientific folly and their empirical mindset evaporate into the vapor of magical thinking. The impulse to shock and awe has overshadowed healthy skepticism, and critical analysis has often been plastered over with a pastiche of logical fallacies, cognitive biases, misdefined terms, and poorly formed arguments.
The press has eagerly gorged on the tasty clickbait morsels, gee-whiz proclamations on machine learning's boundless potential or terrifying and lurid speculations concerning the rise of superintelligent machines. Little effort seems to have been expended in exploring the foundations or likelihood of either line of thought. Instead of taking on the best arguments of their ideological opponents in this debate, many have chosen to ignore them and counter only the weakest arguments. There has been a tendency to talk past one another and to dismiss or vilify those with opposing points of view.
There are a multitude of opinions and perspectives surrounding the pursuit of AI and AGI. In many ways the discourse reflects the brain itself, a Möbius strip of rationality and illusion that twists and turns and folds back on itself. Ironically, the brain's logical and cognitive shortcomings all come into play in the hype and heightened emotions surrounding this discussion, a discussion both deeply rooted in and highly influenced by the nature of human thought itself.
This blog is an attempt to rein some of this in, to form a solid basis of discussion, and then take the best arguments from all sides and examine them critically. A growing number of people are familiar with some of the speculation regarding the dangers of artificial general intelligence and superintelligent machines, but I’ll explore the foundations on which much of this speculation is based and examine how sound those foundations are.
The goal is to find a path through the forest of fallacies, biases, and hype that currently exists and to try to separate the knowns from the unknowns. Forming a more complete picture of the current state of artificial intelligence and the growing commotion surrounding it is crucial in weighing not only whether such a technology might eventually challenge our own intelligence but also whether we should rejoice in or dread its doing so.