Elon Musk and Apple co-founder Steve Wozniak recently signed an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. The goal is to give society time to adapt to what the signatories describe as an “AI summer”, which they believe will ultimately benefit humanity, as long as the right guardrails – including rigorously audited safety protocols – are put in place.
It is a laudable goal, but there is an even better way to spend these six months: retiring the hackneyed label of “artificial intelligence” from public debate. The term belongs to the same scrapheap of history that includes “iron curtain”, “domino theory” and “Sputnik moment”. It survived the end of the cold war because of its allure for science fiction enthusiasts and investors. We can afford to hurt their feelings.
In reality, what we call “artificial intelligence” today is neither artificial nor intelligent. Early AI systems were dominated by hand-coded rules and programs, so some talk of “artificiality” was at least justified. But today’s systems, including everyone’s favourite, ChatGPT, draw their strength from the work of real humans: artists, musicians, programmers and writers whose creative and professional output is now appropriated in the name of saving civilisation. At best, this is “non-artificial intelligence”.
As for the “intelligence” part, the cold war imperatives that funded much of the early work in AI left a heavy imprint on how we understand it. We are talking about the kind of intelligence that would come in handy in a battle. For example, modern AI’s strength lies in pattern-matching. It’s hardly surprising given that one of the first military uses of neural networks – the technology behind ChatGPT – was to spot ships in aerial photographs.
However, many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations. Marcel Duchamp’s 1917 work of art Fountain is a prime example of this. Before Duchamp’s piece, a urinal was just a urinal. But, with a change of perspective, Duchamp turned it into a work of art. At that moment, he was generalising about art.
When we generalise, emotion overrides the entrenched and seemingly “rational” classifications of ideas and everyday objects. Emotion suspends the usual, nearly machinic operations of pattern-matching – not the kind of thing you want in the middle of a war.
Human intelligence is not one-dimensional. It rests on what the 20th-century Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion. The former searches for differences; the latter is quick to erase them. Marcel Duchamp’s mind knew that the urinal belonged in a bathroom; his heart didn’t. Bi-logic explains how we regroup mundane things in novel and insightful ways. We all do this – not just Duchamp.
AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia. Without that, there’s no emotion, depriving bi-logic of one of its components. Machines thus remain trapped in formal logic alone. So there goes the “intelligence” part.
ChatGPT has its uses. It is a prediction engine that can also moonlight as an encyclopedia. When asked what the bottle rack, the snow shovel and the urinal have in common, it correctly answered that they are all everyday objects that Duchamp turned into art.
But when asked which of today’s objects Duchamp would turn into art, it suggested smartphones, electric scooters and face masks. There is no hint of any genuine “intelligence” here. It’s a well-run but predictable statistical machine.
The danger of continuing to use the term “artificial intelligence” is that it risks convincing us that the world runs on a singular logic: that of highly cognitive, cold-blooded rationalism. Many in Silicon Valley already believe that – and they are busy rebuilding the world informed by that belief.
But the only reason tools like ChatGPT can do anything even remotely creative is that their training sets were produced by actually existing humans, with their complex emotions, anxieties and all. If we want such creativity to persist, we should also be funding the production of art, fiction and history – not just data centres and machine learning.
That is not where things are heading now. The ultimate risk of not retiring terms such as “artificial intelligence” is that they will render the creative work of intelligence invisible, while making the world more predictable and dumb.
So, instead of spending six months auditing the algorithms while we wait for the “AI summer”, we might as well go and reread Shakespeare’s A Midsummer Night’s Dream. That will do so much more to increase the intelligence in our world.
Evgeny Morozov