Using AI to Scare Yourself and Others Redux
The Future of Life Institute's open letter to pause AI development
In my last post I described an op-ed written in 2014 that kicked off the cycle of modern anxiety about the dangers of AI/AGI. On March 22, 2023 a similarly themed open letter was released to the public that again warned of the dangers of AI/AGI and called for a 6-month moratorium on training AI systems more powerful than OpenAI's GPT-4. The letter was released on the website of the Future of Life Institute, an organization founded in 2014 with the following mission statement:
The Future of Life Institute’s mission is to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.
Three of the four authors of the 2014 op-ed are affiliated with this organization and, one would assume, involved to some degree with this open letter (Stephen Hawking, the fourth author, passed away in 2018).
Like the 2014 op-ed, this new open letter takes a valid concern about current AI and a questionable concern about future AGI, fuses the two into a mishmash of wild speculation and paranoia, and offers a handful of directives ranging from reasonable to ridiculous to authoritarian.
The open letter has been in the news a lot lately as the THING YOU SHOULD BE VERY ALARMED ABOUT du jour. Any analysis beyond that has been pretty sketchy, so I think it’s worth taking a look at the letter and dissecting how it relates to the current state of AI affairs.
Mushy Terminology → Mushy Reasoning → Mushy Conclusions
AI typically refers to the types of technology we have now, such as machine learning systems, and AGI typically refers to hypothetical technology of the future that is capable of human-level or beyond intelligence. Readers may have noticed my occasional use of the term AI/AGI rather than one or the other. This is because much of the AI Dystopian messaging conflates the two, so it’s hard to talk about one without the other (which is why they conflate them). As in the original 2014 op-ed, the new open letter continues this practice. It opens with this:
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.
Right off the top, the letter makes use of the pretty mushy term human-competitive intelligence. This is not a well-defined or widely-used term and is not defined anywhere in the letter, so its specific meaning is left up to the reader. Since the letter was published shortly after the release of GPT-4 and GPT-4 is mentioned in the letter, it seems reasonable to assume that GPT-4 or something like it would qualify to the letter’s authors as human-competitive intelligence.
What’s not clear is where the intelligence line is drawn and what else qualifies as human-competitive. Spreadsheets, databases, and Wikipedia are all quite competitive with human intelligence in their particular domains, but none of them (nor GPT-4 or anything similar to it) really qualifies as possessing intelligence at all, if we’re talking about the kind of intelligence we attribute to humans. In any case, the important point is no doubt that whatever human-competitive intelligence is, it’s very dangerous!
Scattered throughout the letter are footnoted references meant to back up its statements, but following those links seems instead to highlight many of the problems inherent in the letter’s messaging.
The opening sentence above has two footnotes. The first contains a list of articles, papers, and books that present research (and “research”) on risks ranging from analyses of the financial and energy resources needed to create large language models, to the impact of such models on the workforce, to worries about superintelligent overlords deconstructing humans into their fundamental elements to create more computing resources (among other existential catastrophes). That’s quite a range, and that’s what happens when you conflate terms.
It’s the same obfuscation of terms that the previous op-ed engaged in, a fallacy of Equivocation. The authors should really pick a lane. Make a statement about the risks and misuse of current AI technology or a statement about the hypothetical risks and misuse of currently-non-existent-but-potentially-possible-in-the-future AGI. But don’t mush them together into one incoherent statement that refers to two very different things as if they were the same thing.
I suspect that the authors are more concerned about the latter risk than the former, as the letter quickly devolves into statements like the following:
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
GPT-4 is not AGI. It has many impressive properties and the bounds of its abilities are still to be determined, but it will not suddenly “wake up” and take over the world and kill everybody. I’m pretty sure that the same is true of the next iteration.
The letter references AI scientists who have made very cautious statements that there may be some spark of something in GPT-4 that they feel could be the first signs of something like AGI, or that could at least lead eventually to something closer to it. I’ll go into this in more detail in an upcoming post, but I suspect what GPT-4 actually shows is this: some of the things that we associate with human intelligence don’t really require intelligence, at least not our kind of intelligence. I suspect many people working in the field are thinking along those same lines.
This conflation of terms is really one of the worst aspects of this letter and of AI Dystopianism in general, as it’s what twists reasonable concern into alarmism. It not only causes confusion for journalists, politicians, and the general public, but also muddies what the signatories of this letter actually endorse and why any particular person chose to sign it. Many are concerned about the risks of current AI technology but don’t buy into the AI Dystopian view of existential AGI risk, or that GPT-4 is on the cusp of AGI, or that AGI is arriving anytime soon (as an example, see Gary Marcus's blog here, here, here, and here).
Soylent Green Is People
Another unfortunate aspect of the open letter is the perpetuation of the idea that no one is listening or paying attention.
The second footnote refers to the “acknowledged by top AI labs” part of the first sentence. And sure enough, the footnote contains links to an interview with Sam Altman, CEO of OpenAI, and an interview with Demis Hassabis, CEO of DeepMind. Both interviews show these CEOs to be not only very intelligent but also thoughtful and highly aware of the positive and negative potential of the technology they’re developing. They both offer very considered opinions of where the technology might lead and why they’re very cautious in their approaches.
In fact, both companies have very clear and detailed statements available as to how they approach AI safety. OpenAI not only has multiple papers on its safety efforts and goals dating back to its 2018 charter, but also information on planning for AGI technology of the future. DeepMind began putting together an ethics board in 2014 and in 2017 opened a unit of the company devoted to ethics, called DeepMind Ethics and Society. In fact, pretty much all of the large companies (and most of the smaller ones) that are developing AI have similar boards, departments, papers, and/or statements, as mentioned in my last post.
These companies already collaborate with each other on safety issues and are already in regular communication with governments about the positives and negatives of the technology. There are already organizations to coordinate discussion and planning of AI-related issues, such as the Partnership on AI to Benefit People and Society, of which all the companies mentioned above (and more) are members.
There are also intergovernmental organizations such as the Organisation for Economic Co-operation and Development (OECD), which has a policy branch specifically dedicated to AI, released guiding principles for AI development in 2019, and continues to develop and promote safe and beneficial AI development worldwide. Agencies and organizations concerned with AI safety and ethics seem to be proliferating like rabbits in springtime.
Yet, the first paragraph of the letter ends with a warning about Frankensteinian excess:
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
And it goes downhill from there…
Caution Based On Science Reality Rather Than Science Fiction
I’m not suggesting that all the players are perfect and on top of everything. They’re all still human, after all. What I am suggesting is that the reality is not what is being portrayed by this letter or by AI Dystopianism in general.
The letter mentions the Asilomar AI Principles, which were developed during a 2017 conference organized by the Future of Life Institute. That conference was modeled after the 1975 Asilomar Conference on Recombinant DNA. The earlier conference involved discussion of established science with both known current dangers and very credible potential dangers. The principles it produced mainly focused on creating specific safety procedures for performing lab work with the technology then available. That makes sense. As the technology progressed, more guidelines were developed to guide scientists and mitigate dangers.
The 2017 Asilomar Conference on Beneficial AI was not this. While “widely-endorsed” might be somewhat of an overstatement, people in and around the field of AI did support it with a resounding, “sure, why not.” That’s because the principles were so vague that it was fairly easy to get people to agree with them. The principles that involved current AI, which were most of them, were aspirational in nature rather than specific safety guidelines like those from the 1975 Asilomar Conference on Recombinant DNA. The principles involving AGI were really not much more than gossamer wisps of hope and warning. For example, here’s the 23rd and final principle:
Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Sure — great. Sounds good.
The overall point is that yes, there should be thoughtful discussion about the technology that’s being developed, but it should be based on what’s real rather than fictional. And, to a large degree, that discussion is happening. Maybe it should happen even more — that’s a reasonable opinion. All the major players have already indicated that they’re eager to continue engaging in that discussion.
Something frightening poses a perceived risk. Something dangerous poses a real risk. Paying too much attention to what is frightening rather than what is dangerous—that is, paying too much attention to fear—creates a tragic drainage of energy in the wrong directions.
— Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think by Hans Rosling, Anna Rosling Rönnlund, and Ola Rosling
We Have Met The Enemy And He Is Us
The following questions are asked near the beginning of the letter as justification for the proposals that come later:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?
In the follow-up FAQ, the authors state:
The use of AI systems – of any capacity – create (sic) harms such as discrimination and bias, misinformation, the concentration of economic power, adverse impact on labor, weaponization, and environmental degradation.
I hate to be the bearer of bad news, but that’s all already happening without the help of “nonhuman minds.” Sure, these are valid concerns, but the general intelligences we should worry about for the foreseeable future are biological, not artificial. Propaganda and untruths have always been widely spread, and people have created increasingly better tools to spread them over the centuries: language, writing, the printing press, radio, TV, the Internet. These are all valuable tools, but like any tool, how they’re used depends on the user. Similarly, jobs have just about always disappeared, even fulfilling ones, due to automation or foreign competition. Historically, society has dealt with it pretty effectively. Maybe it’ll be worse this time, maybe not, but there’s no reason to think that human society isn’t resilient enough to deal with it.
Again, the use of AI systems doesn’t create these harms; the people using them do. The key is to weigh the usefulness and upside potential of a tool against its risks. Hammers are very useful tools, but you can use a hammer to build a house or crack someone’s skull. Attempting to ban or limit a tool’s usefulness rather than regulate its use is usually a poor idea; more importantly, it pretty much never works.
When it comes down to it, the main problem with this letter is that it obscures the real issues we need to deal with now and in the foreseeable future. Some of the signatories no doubt see past the science fiction patina to these real issues, and that’s likely why many of them signed it. There is ample evidence of quite a lot of thought on these issues in the AI community as well as in academia in general and in government. Certainly the proliferation of institutes, boards, committees, and agencies that concern themselves with AI would seem to indicate this.
Who Watches The Watchers
There are some reasonable suggestions in the letter. For example:
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
While one might wonder why “loyal” is in there, the rest of the proposal seems fine. In fact, it seems to mirror exactly what OpenAI, DeepMind, and other AI companies have already said themselves. The authors even state in the follow-up FAQ:
Public comments by leaders of labs like OpenAI, Anthropic, and Deepmind have all acknowledged these risks and have called for regulation.
Great! There seems to be pretty widespread agreement from those developing this technology, related outside agencies, and government bodies that this technology is something that needs careful thought as we proceed. While the previous Asilomar AI conference didn’t really provide any solid guidelines (and, admittedly, at the time there was a lot less to guide), another conference to generate more specific, actionable guidelines (like the Asilomar biotech conference of 1975) might well be worthwhile.
Unfortunately, things start to go a bit off the rails after this. The other actions called for seem likely to be unworkable, and a few of them seem pretty ominous.
There’s the main proposition:
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
This seems not only unlikely, but it’s really hard to see how it will accomplish anything related to their concerns. The FAQ accompanying the letter attempts to explain why the pause really does make sense, but I don’t think it does so successfully. The authors state that the pause will give all the various entities time to talk amongst themselves and come up with all the safety guidelines, regulations, and procedures. And, I guess, create tools to figure out what’s going on in the black boxes at the heart of current Large Language Model (LLM) systems.
The FAQ then states that this isn’t possible in that time frame:
We are aware that these responses involve coordination across multiple stakeholders and that more needs to be done. This will undoubtedly take longer than 6 months.
Well, I guess I do agree with that.
In any case, it looks like these sorts of discussions are already taking place. Perhaps this will spur those discussions forward a little faster, but it might be at the cost of spreading undue alarm, which inevitably results in poorly thought-out decisions.
Then there are statements like:
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
This pause should be public and verifiable, and include all key actors.
These all have hints of reasonableness, but they’re overshadowed by traces of the creeping authoritarian undercurrent that flows just beneath the surface of more than a little AI Dystopian thought. The statements never say who the “independent outside experts,” the “we,” or the people doing the verifying might be, or who gets to decide. The letter then moves into more direct statements like:
If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
For those who’ve watched any recent government hearings on technology, such as those involving social media, this will not be a comforting thought. As a measure of “success,” the authors of the letter happily state in the follow-up FAQ that the Center for Artificial Intelligence and Digital Policy (yes, another AI watchdog group) has asked the FTC to stop new GPT releases. This certainly seems like overreach for the agency, although it’s something the FTC seems to have more and more interest in lately.
Getting Others To Play In Our Sandbox
Perhaps the biggest problem with this is that some governments may have plans other than those of the letter’s authors. It’s notable that the 38 member countries of the OECD, the intergovernmental organization mentioned above that has released guiding principles for AI development, do not include China or Russia. Both of these countries have shown quite a bit of interest in AI, as one might expect. Vladimir Putin said the following in 2017:
Putin, speaking Friday at a meeting with students, said the development of AI raises “colossal opportunities and threats that are difficult to predict now.”
He warned that “the one who becomes the leader in this sphere will be the ruler of the world.”
Putin warned that “it would be strongly undesirable if someone wins a monopolist position” and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.
Well, I hope he’s still good for that promise, but recent events would seem to make that less and less likely.
Indications from China haven’t been quite so warm and fuzzy, with big plans to integrate AI into its military and outspend the world in domestic artificial intelligence development. Given current tensions between China and the West, they’re not likely to get warmer or fuzzier soon.
Obviously, this isn’t good. Perhaps it would be worthwhile to put more energy into getting humans to cooperate with other humans rather than worrying excessively about nonhuman minds crashing the party.