No AI Revolution On The Horizon

By Engelbert Clangworthy on

AI man over robot
Image by Alpha India

Recently, Rob Mason wrote an article here called “Revolution”, which contained what I think is a rather optimistic perspective on artificial intelligence (AI) and what it is on the cusp of doing for us – or, some might say, to us.

I am hugely enthusiastic about AI. I must be; it’s in my job title. The breakthroughs of the last couple of years, in the form of novel services like ChatGPT, Whisper, DALL-E and Stable Diffusion, are immensely impressive, as are the advances made in computational drug discovery and protein folding. Tesla’s Autopilot technology never ceases to amaze me.

Nevertheless, I must tell you that, in my view, the gap between the hype and the reality is a yawning chasm of unknowable depths. My industry perspective is that very little of what Rob expects to see is even remotely likely to materialise on a 10-year timescale, still less that it is just around the corner.

For example, the promise of autonomous vehicles (AVs): Today, autonomous trains, trams and trolley buses are doable. Autonomous cars are not. Why is this? A few reasons, such as trains being on rails limiting the number of variables in play, but mostly it’s because trains in motion can pretty much be completely isolated from uncontrolled interactions with squishy humans. Without an unfathomable global step-change, that will never be true of roads.

A lot of trials of AVs have taken place and almost all of them have been scrapped or shelved, either by their sponsors or by the government. Some high-profile and vexed legal cases relating to AVs have arisen over the last few years. There are too many catastrophic failure modes for which we currently have no good answers. Solutions are going to take years. Legal liability is a massive unresolved issue. If the AV kills someone, who is to blame? Who gets sued? Who pays? Who goes to jail? Who decides how the AI weighs and resolves the trolley problem? Without significant legal and ethical changes - none of which should be made lightly or by ill-informed lawyers and politicians - there is no satisfactory way to address these questions.

The agency problem also applies to any kind of robotics in any situation where robots mix with humans, either directly or in terms of the products and by-products of their work. Some person somewhere must accept accountability for all of this. Good luck finding that guy.

Current AI technologies can certainly assist with a variety of jobs and even reduce the number of people needed to do some of those jobs, but they cannot take over jobs entirely because, again, they don't have technical or legal agency. For an AI to have legal agency and accountability, it needs to have some kind of legal personhood. Corporations have legal personhood, but only as an administrative convenience. Behind that legal personhood is a board and a CEO who are all – despite impressions to the contrary – actual people. What do you even do with a robot that has killed someone? Robot jail? Crush it? If they all run the same software, what does that fix? Can a robot have mens rea?

True innovation, such as artificial general intelligence (AGI) – where an AI is capable of replacing a rounded, competent human being – is probably beyond the capability of current AI tech. Part of the reason for this, say researchers Abeba Birhane and Marek McGann, is that the large language models (LLMs) that underpin things like ChatGPT are premised upon a crude and incomplete understanding of what language even is. Current thinking in cognitive science is that language is not something whose essence can be captured in a fixed statistical model trained on a corpus of text. It is an experience and a process of interaction between humans, with all their prejudices, emotions and irrational urges, and all of whom have skin in the game, be that legal, financial or reputational.

“The idea is that cognition doesn't end at the brain and the person doesn't end at the skin. Rather, cognition is extended. Personhood is messy, ambiguous, intertwined with the existence of others, and so on,” said Birhane, speaking to The Register.

Something like AlphaFold is a good example of where AI can work - using massive computing power and statistical techniques like stochastic gradient descent to essentially throw a trillion things at the wall and see what sticks. That isn't going to work for generating anything that is currently done by creative humans, who make breakthroughs by synthesising disparate and not obviously related pieces of information to make an intuitive leap.
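For the curious, stochastic gradient descent really is as unglamorous as I'm implying: nudge the parameters a little at a time, one sample at a time, towards whatever reduces the error. A toy sketch of my own (nothing to do with AlphaFold's actual code) that fits a single weight to the line y = 3x:

```python
import random

def sgd(grad, w, data, lr=0.01, epochs=100):
    """Stochastic gradient descent: update w using one sample at a time."""
    for _ in range(epochs):
        random.shuffle(data)          # visit samples in random order
        for x, y in data:
            w -= lr * grad(w, x, y)   # small step against the gradient
    return w

# Toy problem: fit y = 3x. Gradient of squared error (w*x - y)^2 w.r.t. w.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
grad = lambda w, x, y: 2 * (w * x - y) * x
w = sgd(grad, 0.0, data)              # converges close to 3.0
```

Scale that loop up by a dozen orders of magnitude and you have the "throw things at the wall" engine; what it lacks is any notion of an intuitive leap.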

Sean Thomas – an author who writes a column for The Spectator – seems absolutely convinced that ChatGPT is going to take his job. I’m pretty sure that if this happens, it will be because – bestselling or not – he isn’t a very good or original author. I’m not well known for my optimism, so I find it strange that I have a more optimistic view of Sean’s prospects than he does himself.

I'm not saying these things can never be achieved with AI, but they probably cannot be achieved with even the most awesome enhancements to the current technologies, because they are lacking in some of the fundamentals required, and there is currently no clear way to overcome this. As a chap from Gartner once said to me, “Nobel Prizes and Fields Medals will be handed out” before these technologies reach the state of maturity where they can fulfil the promises made in Rob’s article.

Another senior Gartner analyst says that AGI is unlikely to appear in any useful form in the next 10 years. Meanwhile, Goldman Sachs do not currently see how about half of the $1 trillion invested in AI technology to date will ever be recouped. There are good reasons to suspect that even at $24 for a monthly subscription, OpenAI are haemorrhaging money on ChatGPT.

There are other obstacles, too. Today’s AI is enormously power hungry, particularly during training, to the point where, without significant breakthroughs, it won't scale to the levels needed to fulfil any of the promises Rob writes of – there simply aren't the resources on the planet to overcome that. Nor are there the words: a current crisis is that, having already poured the internet’s entire supply of high-quality text into the training of the models, researchers are running out of ways to better train the LLMs we have today. They are seriously experimenting with the idea of models generating their own training data. Iterative systems generally do one of three things: tend towards zero, towards infinity, or towards chaos.
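Those three fates can be seen in miniature by repeatedly applying a function to its own output – a toy sketch of my own, not a claim about any particular model:

```python
def iterate(f, x0, n):
    """Apply f to its own output n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

shrink = iterate(lambda x: 0.5 * x, 1.0, 50)             # collapses towards zero
grow   = iterate(lambda x: 2.0 * x, 1.0, 50)             # blows up towards infinity
chaos  = iterate(lambda x: 3.9 * x * (1 - x), 0.5, 50)   # logistic map: bounded but chaotic
```

A model trained on its own output is exactly such an iterated system, and nobody yet knows which of the three basins it falls into.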

We also face various possible logistical challenges. Global supply chains are fragile and likely to break down as the USA repudiates its Bretton Woods role as security provider for global shipping lanes. The only country in the world with the capability (on a 10-year timeline) to produce the silicon for high-end AI systems is Taiwan, which has China breathing down its neck, waiting for the right time to fulfil Xi's promise to reunify. Peter Zeihan thinks we’re likely to reach a point of scarcity where we must choose very carefully where to deploy AI. See the books “Goodbye Globalization” by Elisabeth Braw and “The End of the World is Just the Beginning” by Peter Zeihan for more on these things.

So, while AI has enormous promise indeed, there is a mountain to climb, and we don’t yet have sight of the peak or the tools to survive when we get there.

-------

Engelbert Clangworthy is an AI Specialist at a multinational manufacturing company and a 30-year veteran of the corporate IT world.