Governments Take AI To Task
The U.S. Senate considers cracking down on social media by proxy as the E.U. develops a jobs program for lawyers
There’s a phenomenon that pops up frequently in discussions of AI that I call Unintended Obfuscation. It isn’t exclusive to AI, but it arises often there because the technology is complex and unfamiliar to most people not directly involved in building it.
Unintended Obfuscation begins when someone without much technical grounding asks a question or makes a statement about AI that doesn’t really make sense. The person in the field who attempts to respond tries to make sense of the question anyway, and the answer they provide ends up having little to do with the question, or little to do with AI at all. What arises is a situation in which people are talking to each other but no one is really communicating.
AI in the U.S. Senate
This phenomenon blossomed wildly in the recent U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing that took place on May 16th, 2023. This isn’t to say that the hearing was not worthwhile in some way, and some of the senators did have reasonable questions. However, it’s clear that there’s a long road ahead when it comes to government leadership in this area.
Most of the participants seemed happy with the hearing afterwards, but I’m not so sure it went that well. It reminded me of the Tower of Babel, except that while everyone was speaking different languages, none of them seemed to notice.
Giving testimony were Sam Altman (CEO of OpenAI), Gary Marcus (cognitive psychologist, Professor Emeritus at New York University, and Substack writer), and Christina Montgomery (Chief Privacy & Trust Officer at IBM).
The hearing kicked off with an audio clip of committee chairman Senator Richard Blumenthal delivering opening remarks about current AI concerns. The remarks, it turns out, were written by ChatGPT to sound like something Senator Blumenthal would say, and the voice was an audio deepfake created to mimic the senator.
Deepfakes, both visual and auditory, are definitely an issue to be concerned about. The technology has advanced considerably, and it’s now possible to push out a deepfake of somebody saying something they never said in order to promote, discredit, or mislead.
However, the opening remarks themselves were not about this. Here they are:
Too often we have seen what happens when technology outpaces regulation, the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.
There are a lot of issues specific to AI systems based on Large Language Models like GPT-4 that are worth discussing, both good and bad. However, most of those opening remarks and much of the hearing that followed had very little to do with those issues. Instead, it seemed as if the Senate was trying to take a second shot at regulating social media after missing the target the first time.
Most of what was discussed had very little to do with the kind of AI represented by OpenAI or IBM. Some senators simply threw in a lot of buzzwords in hopes one would connect (e.g. privacy, data security, virtual you, blockchain, Russian interference, Section 230, and, of course, quantum).
Here’s an example from Senator Marsha Blackburn:
You’ve got financial services that are saying, how does this work with Quantum? How does it work with blockchain?
Sometimes the reach of these systems was taken to a hyperbolic and somewhat nonsensical extreme, as in this example from Senator Mazie Hirono:
For example, you can ask AI to come up with a funny joke or something, but you can use the same, you can ask the same AI tool to generate something that is like an election fraud kind of a situation.
It’s hard to say what is meant by an “election fraud kind of a situation” or how ChatGPT would engage in it.
Senator Cory Booker expressed the following concern:
It is really terrifying to see how few companies now control and affect the lives of so many of us. And these companies are getting bigger and more powerful.
I don’t disagree with that statement, but this is certainly nothing new and has a history that includes the Dutch East and West India Companies of the 17th and 18th centuries, the robber barons of the 19th century, and companies like IBM, Ford, GM, and GE in the 20th century. It’s certainly nothing that depends on AI.
Senator Jon Ossoff worried that GPT-4 or its progeny will result in systems that replicate the Philip K. Dick story and Steven Spielberg movie Minority Report, in which people are arrested for crimes they are going to commit in the future. He stated:
So we may be confronted by situations where, for example, a law enforcement agency deploying such technology seeks some kind of judicial consent to execute a search or to take some other police action on the basis of a modeled prediction about some individual’s behavior.
Luckily, this is already illegal. The scenario he describes seems clearly unconstitutional and in violation of multiple federal laws, so I’m not sure how much it has to do with regulating AI development. Senator Ossoff followed up this statement with a question to Mr. Altman about whether we need a national privacy law, asking him:
And what would be the qualities or purposes of such a law that you think would make the most sense based on your experience?
A somewhat flummoxed Mr. Altman admitted that this is not his area of expertise, and he tried to clarify that OpenAI’s LLMs are trained on what is publicly available on the internet; they don’t have access to individuals’ private data.
Privacy is a big issue with social media and other tech companies in which end users are the product, but it’s not a particularly critical area of concern for something like ChatGPT, which has no advertising and uses a subscription model for its premium product. Users can easily opt out of having their queries used to further train the system. Privacy with LLMs, at least in their current and foreseeable iterations, doesn’t seem like one of the more important concerns.
Misinformation Everywhere
One of the biggest concerns raised was the spread of misinformation and disinformation, with both terms used vaguely and somewhat interchangeably throughout the hearing. I discussed this in a previous post, but it bears repeating that misinformation is in the eye of the beholder and can change into just being information over time. When it refers to anything specific, it’s typically something the person using the term doesn’t like.
Dr. Marcus opined:
We don’t really have the tools right now to detect and label misinformation with nutrition labels that we would like to, we have to build new technologies for that.
I suspect that no amount of technology is going to be able to effectively detect and label misinformation when half the people in that hearing chamber don’t agree with the other half as to what it is and isn’t. It’s unlikely that any two people in the world can agree 100% on what is and isn’t misinformation. Expecting some future version of ChatGPT to be an oracle of truth is a fantasy; this isn’t because it can’t be made objective, but because no matter how objective it is, some subset of the population could still consider any particular output to be misinformation.
Alignment of Values
The question of misinformation is closely tied to the question of alignment, which, as used in discussions of AI, means aligning a system’s functioning with the “values” of some group. Mr. Altman stated at one point:
One thing that I think is very important is what these systems get aligned to, whose values, what those bounds are, that that is somehow set by society as a whole, by governments as a whole.
Once again, whose values are we aligning to? Even the values of the senators in that room diverge in quite a large number of ways, let alone those of society in general.
These systems don’t think and they don’t have opinions. What they output is what they’re trained to output. Many statements and articles about LLMs place the blame for biases in their output on biases spread throughout the internet and all the other material used to train them. If you ask ChatGPT, it will also say this.
But it’s not really true. As mentioned in a previous post, once the LLM is trained on a vast corpus of data to form an initial framework, it undergoes another training phase to shape its interactions with users. That training phase is done by humans, and it seems likely that it’s these humans who introduce most of the bias seen in the system’s output.
This is why the researcher David Rozado was able to show that ChatGPT was strongly left-leaning in tests designed to measure political perspective in humans, and he was then able to train a system based on the same framework used in ChatGPT to instead test as very right-leaning.
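To make those two training stages concrete, here is a minimal sketch in Python of what the second, human-shaped stage looks like, using the Hugging Face transformers and datasets libraries with a toy two-example dataset. The model name, the examples, and the settings are my own illustrative assumptions, not anything OpenAI or Rozado has published; the point is simply that whoever writes and selects the second-stage examples decides how the model answers loaded questions.

# Minimal sketch of the two training stages described above. The model name,
# toy dataset, and hyperparameters are illustrative assumptions, not
# OpenAI's actual pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Stage 1 (pretraining on a huge web corpus) is assumed to be done already:
# we simply load a publicly available base model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stage 2: fine-tune on human-written prompt/response pairs. The people who
# write and select these examples ("policy X" here is a made-up placeholder)
# shape how the model responds far more directly than the raw web corpus does.
examples = [
    {"text": "Q: Is policy X a good idea?\nA: Reasonable people disagree; here are both sides..."},
    {"text": "Q: Summarize this article.\nA: The article argues that..."},
]

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True,
                       padding="max_length", max_length=64)
    tokens["labels"] = tokens["input_ids"].copy()  # standard causal-LM objective
    return tokens

dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-sketch",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           report_to="none"),
    train_dataset=dataset,
)
trainer.train()

Swap in a different set of human-curated examples and, as Rozado’s experiment suggests, the same base model comes out with a noticeably different political lean.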
Trying to align these systems to some mythical human value system is going to be a Sisyphean task. Hoping for governmental bodies to regulate doing this in a reasonable and timely fashion seems even more difficult.
What Kind of Government Control Are We Talking About?
Senator Blumenthal asked the following suspiciously authoritarian question:
I think there has been a reference to elections and banning outputs involving elections. Are there other areas where you think, what are the other high risk or highest risk areas where you would either ban or establish especially strict rules?
It seems like there is an underlying assumption here that it’s ok for the government to ban informational tools related to elections simply because some group, even a large group, might not like the information they provide. This seems a bit contrary to the First Amendment of the Constitution.
It’s hard to pin down exactly what sort of information the senators feel would be ban-worthy, but it’s worth noting that every state already has laws against tampering with elections. This includes interfering with the voting process in any way.
This question was initially directed toward Ms. Montgomery from IBM. Her somewhat confusing response:
The space around misinformation, I think is a hugely important one.
Not a very informative answer, but then what would be a good response to such an unsettling question?
When it comes to spreading misinformation, there are two factors to consider: the tools available to corporations, governments, and individuals to sway public and individual opinion, and the sophistication of individuals in identifying and resisting those tools. Both have evolved over the years and, I suspect, they’ve advanced in roughly equal proportion. That doesn’t mean this will continue to be the case in the future, but it does mean the situation isn’t new and that the historical record doesn’t support the level of angst reflected in these statements.
Nailing Things Down
Perhaps Senator John Kennedy summed up the entire hearing best when he posed three hypotheses:
Many members of Congress do not understand artificial intelligence
That absence of understanding may not prevent congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt this technology
There is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying
It’s certainly hard to deny the first two. The third one may be overstating the case a bit, but there is certainly a danger posed by unscrupulous actors.
Senator Kennedy then posed the question:
Please tell me in plain English, two or three reforms, regulations, if any, that you would implement if you were queen or king for a day.
Answers included internal bureaucratic oversight, an FDA-like regulatory agency, funding for AI safety research, and independent audits.
Senator Josh Hawley seemed to have the same lack of faith in the government as Senator Kennedy, but he also seemed to demonstrate the validity of Senator Kennedy’s first hypothesis. He listed what he felt were the potential downsides of the current technology:
So loss of jobs, invasion of privacy, personal privacy on a scale we’ve never before seen, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America. Did I miss anything?
Loss of jobs is certainly something worth considering, but I’m not so sure about the others. These are all current areas of concern with regard to social media, mainstream media, corporate and government data collection, and so on. What seems to be missing is any evidence or reasoned speculation that these are the main areas of concern for LLMs and similar systems. Unfortunately, Senator Hawley seems to have put this list together from some of the witness testimony during the hearing.
Senator Hawley stated what he felt was a potential solution to the problem:
Why don’t we just let people sue you? Why don’t we just make you liable in court? We can do that. We know how to do that.
Mr. Altman responded with understandable confusion:
I mean, please forgive my ignorance. Can’t, can’t people sue us?
Yes, yes they can. Section 230 of the Communications Decency Act of 1996, which gives online platforms like Facebook and Twitter immunity from being sued over the content their users post, does not apply to ChatGPT or GPT-4 or similar systems. LLM-based systems are fundamentally different from social media platforms and communication carriers. There are very few areas in which suing someone or some company is not an available option.
Muddling Terms
While most of the AI concern expressed by the senators was very similar to their concern about social media, the topic of AGI did briefly come up in a roundabout way.
After Senator Hawley discussed the list of bad things above, he did bring up the open letter from the Future of Life Institute. He asked the witnesses if the people calling for a moratorium on AI development were right.
Mr. Altman and Ms. Montgomery did not see the need for or purpose of doing this (Mr. Altman mentioned that more than six months passed between the completion of GPT-4 and its release, and that OpenAI hasn’t even started training a potential GPT-5).
Dr. Marcus, however, actually signed the letter. He said he did so because he took the letter “spiritually, not literally.” And I think that’s a problem.
The main concern of the authors of that letter was that superintelligent machines are going to destroy humanity if we’re not careful. That’s it. They like to cloak it with references to some other more contemporary issues, but this is their main concern and has been for some time, a topic I’ll discuss in upcoming posts. Yet, as I’ve mentioned previously, they frequently muddle together discussion of AI and AGI to purposefully obfuscate the debate.
By signing the letter, Dr. Marcus is signaling to other people not as knowledgeable as himself that this is a valid and urgent concern. Yet, even Dr. Marcus himself says that he’s skeptical AGI is particularly close to being developed and that he doesn’t think it should be an urgent concern. During one of his answers in the hearing, he throws out the possibility of it being developed in 50 years.
It's clear that at least some of the senators are vaguely aware of this idea of superintelligent machines being a danger to humanity. It's not clear that they understand what the authors of the open letter, and AI Dystopians in general, think the danger is.
For example, Senator Blumenthal said the following:
I think you have said in fact, and I’m gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
Senator Blumenthal seems to be referring to a blog post in which Mr. Altman stated:
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.
Mr. Altman goes on to state in the post that he doesn't think this will be an issue anytime soon. However, he's clearly talking about doomsday-type scenarios in which these SMIs take over the world to the extreme detriment of humans. Obviously, he's not talking about job loss. This is what happens when words and concepts get muddled together.
In his answer, Mr. Altman discussed job loss, since that seemed to be the gist of Senator Blumenthal’s question. Shortly after this exchange, though, Dr. Marcus stated at the end of one of his answers:
And last, I don’t know if I’m allowed to do this, but I will note that Sam’s worst fear I do not think is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out.
Mr. Altman then went on to discuss his fears about current and near term AI, but it’s clear that Dr. Marcus knows that this is not the danger of superintelligent machines. He also knows that Senator Blumenthal, and likely many or all of the other senators, are mixing up fears of existential danger from superintelligent machines at some point in the not very near future with current AI concerns.
Yet Dr. Marcus seems to be promoting that confusion rather than trying to clear it up, even though he’s already stated it’s not a near term concern. Is he doing this because it helps generate alarm about the AI he is concerned about, just as he signed the open letter that he agreed with spiritually but not literally?
Dr. Marcus stated that one of the inciting events that made him so concerned was the New York Times article, discussed in a previous post, in which the author harassed Bing Chat until it became nonsensical. I’m not sure that this particular New York Times article, or any article in the popular press, should really be the deciding factor in what you think of AI.
This brings us back to Senator Hawley’s list of AI concerns and where they came from. Chances are they came from Dr. Marcus’s opening remarks, in which he lists all those things as grave concerns about AI systems like GPT-4:
Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about data sets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules shaping society in subtle but powerful ways.
He also mentions some exaggerated anecdotal evidence that AI is an unprecedented threat to our society. The majority of his statement boils down to the possibility that people will do bad things over the internet, much like the things they can do now without AI and the things they could do before the internet (e.g., spreading misinformation and bias).
Setting aside the previously discussed issues with misinformation and bias, the question remains how ChatGPT and its ilk are going to make matters drastically worse than, say, Facebook or Twitter. The handwringing is there, but the evidence to back it up seems sparse.
It’s worth looking deeper into the rest of Dr. Marcus’s opening statement, and perhaps I will in a future post. One of the more interesting aspects is how an event involving some technology, like AI, gets spun up by the press into hyperbolic fantasy on a fairly regular basis.
Stay On Target
It’s certainly not a bad idea to have Senate hearings on this sort of technology. There are many valid concerns and, even without them, it’s worthwhile to educate government leaders on new technological developments, especially when they have the potential to greatly affect society.
Job disruption, deep fakes, intellectual property issues, legal liability — these are all quite valid topics of discussion when it comes to LLM systems and other types of machine learning. Discussion of some basic testing standards before releasing products to the public is also worthwhile, although I’m not convinced that having the government regulate this is the most effective way to approach the issue.
But if we just have more hearings like this where meaningful, evidence-based issues are drowned out by much less likely and/or unrelated concerns and anecdotal half-fictions, then we’re not going to make a lot of progress.
The EU AI Act
This brings us to the European Union AI Act, which was referred to several times during the hearing and was recently approved in draft form by an EU subcommittee. The EU excels at bureaucracy, and the proposed AI Act doesn’t disappoint. It’s a little over 100 pages of small type, and yet it still manages to be more than vague enough to ensure vast employment opportunities for armies of lawyers well into the future. Perhaps this proves the point that technology always creates more jobs than it destroys.
The act covers everything from building AI systems to deploying them to banning them, and calls for the creation of a European Artificial Intelligence Board to oversee the implementation and enforcement of its regulations across the EU.
There are many areas in which developers of AI tools or end users of those tools can be fined. Practices that are banned outright include manipulating behavior and exploiting vulnerable groups in ways that cause them physical or psychological harm. Given the current atmosphere in which microaggressions are policed and words are equated with violence, this could seemingly apply to almost anything, and the more egregious forms such manipulation and exploitation might take are likely already illegal (e.g., fraud).
Also banned are social scoring systems and biometric real-time remote identification by law enforcement. While these prohibitions seem reasonable, neither one of these necessarily requires AI (depending on your definition of AI), particularly social scoring systems.
And that’s really the problem with a lot of this stuff. If there are outcomes that seem negative, then the outcomes are what should be addressed regardless of the tools used to create those outcomes. Outlawing or strictly regulating the specific tools simply creates loopholes for other tools and stifles progress on the potential positive outcomes that might arise from using those tools.
In any case, it’ll be interesting to see how the EU AI Act, if implemented, affects the development and deployment of AI systems in Europe.