For most AI endeavors, the last 20% makes a big difference
The 80/20 rule (sometimes the 90/10 rule) is a rule of thumb that pops up in many areas of endeavor. The idea it conveys is that the first 80% of any complex task takes the same amount of effort as the last 20%. This frequently turns out to be a best-case scenario: often the last 20% (and the last 10% after that, and the last 1% after that) takes exponentially more effort and expense than whatever came before.
There are many factors that cause this completion impedance to grow the closer one gets to the finish line of an undertaking. As more pieces come together, the complexity of the system overall grows substantially, more factors become interdependent on each other causing more unexpected problems, and deficiencies in the underlying approach become more evident and difficult to remedy or overcome.
More and more effort and expense goes into less and less actual progress. Sometimes the utility and desirability of the end result fade as inevitable compromises become necessary. Sometimes the end result imagined just isn’t achievable, and what ultimately results isn’t useful or desirable enough to justify the effort put in.
The thorniness of the 80/20 rule really stands out when expectations of the final result are spread widely and publicly. This has been the case for a number of technologies over the years, and is a primary factor in the hype bubble and burst phenomenon that seems to repeat itself on a fairly regular basis in the tech industry.
The Paperless Office — Old School Hype
The early 80s saw the hype explosion of the “paperless office,” first predicted in the 70s and thought to be imminent with the arrival of the personal computer. Unfortunately, worldwide paper use actually doubled from 1980 to 2000 with all the printers, copiers, and fax machines that accompanied the office PCs. Only recently has the quantity of paper started to diminish as more and more people raised with computer monitors enter the workplace.
In the end, the available technology and the work patterns of offices kept the paperless office concept from ever coming close to full usability and desirability among the public.
Virtual Reality — Hype and Hype Reborn
Another well-known 80/20 hypegoblin is VR. The first 80% cycle ran through the late 1980s and the early 1990s only to die off as an interesting but not quite usable technology. Then it was revived in the 2010s as a hype phoenix of massive potential that, it was thought, could now succeed given the technological progress since the first round.
Facebook bought the pioneering VR headset company Oculus in 2014 for a couple of billion dollars and actually changed its own corporate name to Meta to hype the technology and its own concept for a global VR environment. And yet it’s managed to lose well over $30B just since 2019 trying to get past the 80% barrier that would make the technology usable, useful, and, most importantly, desirable by the masses.
Then, of course, there’s Magic Leap. Founded in 2010 and achieving maximum hype between 2015 and 2017, it’s managed to raise around $4B since inception and has yet to generate any significant revenue.
Autonomous Cars — A Cornucopia of Hype
The latest technology to run afoul of this rule is autonomous cars. The roots of the current autonomous vehicle hype cycle started with a program instituted by the Defense Advanced Research Projects Agency (DARPA), a government agency that has as its mission the exploration of advanced technology for possible military use. In 2004 they held the first of what became a series of grand challenges, in which a prize was offered to any individual or team who created the technology most successful in completing a specified set of tasks.
The first challenge was to create and run an autonomous vehicle that could maneuver through a 150 mile off-road course in California's Mojave Desert within a limited time. The course the vehicles were to follow would only be supplied shortly before the race itself and would consist of a list of GPS waypoint coordinates. The prize was $1,000,000.
The results seemed far from promising. Of the 21 vehicles that qualified for entry, only seven completed the preliminary qualifying course, although the judges decided that eight more vehicles had completed enough to enter the final race. Of these fifteen, two failed before the race started. None came close to finishing the course, with the most successful vehicle failing after a few hours and just over seven miles into the 150 mile course.
Although a failure, this challenge is illustrative of the technological acceleration that makes some giddy and others queasy. The very next year 43 vehicles made it to the qualifying course, and of those, 23 qualified for the race itself, which would take place on a new 132 mile course. Only one of those 23 failed to surpass the previous year's most successful vehicle, and five vehicles successfully completed the course, with the winner finishing in just under seven hours. This was a spectacular achievement, especially given the complete rout the year before.
The next Grand Challenge two years later took place in a low density urban setting, and six teams finished the course successfully. The overwhelming success of these competitions was not lost on those paying attention. The dream of self-driving cars suddenly seemed within reach, and many companies began developing their own autonomous car capabilities, with the largest and best funded effort started by Google in 2009 and eventually named Waymo.
Since then, many players have entered the fray, and many have predicted that fully autonomous vehicles for public use are just over the horizon.
Not surprisingly, one of the most vocal proponents of autonomous cars has been Elon Musk. Among the many statements he’s made over the years are:
We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years. — 10/21/2015, Fortune Interview
I feel pretty good about this goal is that we will be able to demonstrate a demonstration drive of our full autonomy all the way from LA to New York. So basically from home in LA to let’s say dropping you off in Times Square, NY and then having the car parking itself by the end of next year (2017) without the need for a single touch including the charger. — 10/19/2016, Tesla Press Conference
I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won’t have regulatory approval everywhere. From our standpoint, if you fast forward a year, maybe a year and three months, but next year for sure, we’ll have over a million robotaxis on the road. — 4/22/2019, Investor Call
We’re also working on a new vehicle that I alluded to at the Giga Texas opening, which is a dedicated robo-taxi that’s highly optimized for autonomy, meaning it would not have steering wheel or pedals…And so it’s, I think going to be a very powerful product. Where we aspire to reach volume production of that in 2024. So I think that really will be a massive driver of Tesla’s growth. And we remain on track to reach volume production of the Cybertruck next year. — 4/20/2022, Tesla Earnings Call
Musk is not alone in making optimistic predictions, and many more have been made over the years. Here are just a few from 2016 as the hype was building to nostril-flaring intensity:
General Motors’ head of foresight and trends Richard Holman said at a conference in Detroit that most industry participants now think that self-driving cars will be on the road by 2020 or sooner.
Johann Jungwirth, Volkswagen’s head of Digitalization Strategy, expects the first self-driving cars to appear on the market by 2019.
BMW confirmed that it will launch the new electric and autonomous iNext model in 2021.
Lyft cofounder John Zimmer predicted that autonomous vehicle fleets would quickly become widespread and would account for the majority of Lyft rides within five years.
Currently (as of late 2023) there are no fully autonomous cars on the roads. While some vehicles can operate with moderate autonomy in certain situations, none can routinely and reliably take the place of a human behind the wheel other than in very controlled and/or monitored conditions.
The latest autonomous car news involves Cruise, a leading autonomous vehicle company now owned by GM. Until recently Cruise had been running a very limited autonomous taxi service. On October 2, 2023, one of its cars severely injured a woman, and Cruise suspended all of its driverless operations. It was then revealed that Cruise’s autonomous taxis required remote human intervention every 2.5 to 5 miles.
As imperfect as human drivers are, it seems that bridging the gap between working better than a human in a small range of environments and working better than a human in any environment is going to be more difficult than many anticipated.
Large Language Models — Hype Darling of the Moment
Right now the focus of technohype is LLM AI systems that will either make everyone’s job easier and more productive or cause a massive wave of job loss as the systems take over the jobs of human workers.
As I’ve mentioned in previous posts (such as this one and this one), I think there are significant limitations in the capabilities of current LLM systems. While there have been a number of predictions in the last year that we are on the verge of creating AGI with LLM systems, I think this is a very dubious conjecture. It seems quite likely that LLMs will hit the 80% barrier well before they reach anything that could be called true AGI.
The Rise of Artificial Mediocrity
Of course, it has to be said that many human beings operate at 80% (or lower) of full capacity. This is an unfortunate but hard-to-ignore truism. In any large human endeavor, there will likely be those who lack motivation, talent, communication ability, intelligence, or some other quality, and this keeps them from achieving anything better than 80% competency.
Yet these individuals still manage to hold the jobs in which they fall short of true competence, and many are able to retain those jobs for long periods of time despite this shortcoming. The reality is that there are a number of workers who, no matter what they may be capable of in other areas of human endeavor, work at a level of only passing mediocrity in the positions they currently hold.
With autonomous cars or LLM ghostwriters or future AI service representatives, we may find that we hit the 80% wall with current approaches and can’t get much past it without some sort of dramatic technical breakthrough. Consequently, it’s quite possible that we end up unable to create the artificial expertise we’d hoped for and instead end up with artificial mediocrity.
While this means that replacing competent human workers and drivers may be a difficult or unreachable goal in the foreseeable future, replacing those whose abilities are less than competent may not be.
There are certainly a lot of bad drivers out there, and reaching their level of competence is probably achievable if not necessarily desirable. It’s possible that just steadily improving current technology will get us to the level of a very competent human driver and perhaps even to a level that surpasses human capability. Or it may be that we’ll never get to the level of a very competent human driver without achieving AGI, or at least some aspects of it.
There are an infinite number of edge cases in driving, and it simply may not be possible to address enough of them adequately unless a driving system has a real understanding of the surrounding world. No autonomous driving system today has anything remotely resembling comprehension of the world around it or the nature of the activity it’s engaged in. If extending current AI technology isn’t able to do the job, fully autonomous cars won’t be available anytime soon.
For desk jobs, it may mean that a large number of people who only reach a level of acceptable mediocrity at their jobs will be replaced by AI systems that can replicate that mediocrity at a lower cost. For US companies, this may hit offshore services first, where companies are already trying to save as much money as possible on service workers and where communication issues that adversely affect competency are most apparent.
The Lessons of AI History
It’s probably not surprising to most that many endeavors, especially those in the technological arena, end up taking longer than expected. The more important point, though, is that it’s not always possible to tell when something will just take a little longer than expected and when something will require a radical shift in technology to be possible at all.