It has been interesting to note what has been written about Artificial Intelligence (AI) in recent weeks, particularly the hype around ChatGPT. As far as I can judge, most of it amounts to the perception, or fear, that AI has outstripped humans in intelligence. Dr Harvey Risch's eye-opening account of his 'conversation' with an AI has amply demonstrated that this perception is erroneous, and yet it persists anyway.
A recent experience I had with a ChatGPT enthusiast conveys the same impression: the widespread belief that this AI is the equal, if not the superior, of humans in the smartness department. The occasion was a talk I gave to the members of a cultural organisation on the extent to which the work of Freud and Hannah Arendt can provide insight into the current growth of insidious totalitarian measures of control globally.
One such telltale development is the World Health Organisation's attempt to rob countries of their sovereignty by amending its International Health Regulations. The attempt failed two years ago, when African countries opposed the proposed amendments, but the WHO will try again in 2024, having lobbied African leaders vigorously in the meantime.
Following my talk, someone related its theme to AI. Specifically, this pertained to my claim that Freud's concepts of Eros (the life-drive) and Thanatos (the death-drive), on the one hand, and Arendt's notions of natality (every human brings something unique into the world by being born) and plurality (all humans are different), on the other, cast light on the nature of totalitarianism. It also related to the question of whether totalitarianism can be sustained by those who promote it. It turned out that, after the topic of my talk had been circulated, he had asked ChatGPT to comment on it and had brought the AI's 'answer' to the meeting in printed form to show me.
Predictably, for a predictive, pattern-recognising language machine trained on a vast body of text (which is what ChatGPT really is), it was not difficult to unpack accurately what the relevant Freudian and Arendtian concepts mean – any student could find as much on the internet or in a library. But where the AI faltered was the link I established between these thinkers' ideas and current events unfolding in global space.
Recall that I had employed Freud's and Arendt's concepts heuristically in relation to what are arguably signs of totalitarian 'moves' being made in various institutional areas today. ChatGPT – again predictably – did not (and arguably could not) elaborate on the connection I had implied in the circulated title of my talk; it simply 'stated' that there was 'some' relationship between these two thinkers' ideas and totalitarianism.
The reason for this should be immediately apparent. Nowhere in ChatGPT's training data is there any information – in the form of a legible interpretation – about what events such as the WHO's sustained attempt to become the world's governing body (referred to above) are symptomatic of, namely an incipient global totalitarian regime. For ChatGPT (or any other AI) to come up with such an 'interpretation,' either it would have to be entered into its training data by its developers – unlikely, if not unthinkable, given its implied criticism of the very constellation of powers that gave rise to ChatGPT's construction – or the AI would have to possess the capacity that all 'normal' human beings have: the ability to interpret the experiential world around them. Clearly, no AI has that capacity, because of its dependence on being programmed.
My interlocutor disputed this explanation, arguing that ChatGPT shows its ability to 'reason' in every 'answer' it produces to the questions one asks it. This, I pointed out, is not an accurate description of what the AI does. Remember: ChatGPT produces anthropomorphic responses in everyday language to questions put to it. It does so by drawing on patterns detected in the colossal datasets on which it was trained, patterns that enable it to predict successive words in sentences. Succinctly put: it is capable of statistical pattern-finding across these huge bodies of text, using 'machine learning.'
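To make concrete what 'statistical pattern-finding' and next-word prediction amount to, here is a minimal sketch in Python – a toy bigram counter of my own devising, with an invented corpus, bearing no resemblance to ChatGPT's actual architecture:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast body of text an AI is trained on.
corpus = (
    "the lights went out because the power failed "
    "the lights went out because someone switched them off "
    "the power failed during the storm"
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("lights"))  # -> 'went': the statistically likeliest successor
```

The point of the sketch is that nothing in it understands, interprets, or reasons; the output is driven entirely by frequencies in the text the system has already seen.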
This is not what reasoning is, as every student who has studied logic and the history of philosophy should know. As René Descartes argued in the 17th century, reasoning is a combination of intuitive insight and inference or deduction. One starts with an intuitive insight – say, that the lights have gone out – and infers from there that either someone has switched them off or the electricity supply has been disrupted. Or one reasons (by deduction) from one set of givens (the intuitive insight) that another state of affairs is either likely or unlikely. At no point does one have recourse to massive amounts of data to be scanned for similar patterns, venturing anticipatory predictions on that basis.
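The contrast with the toy predictor above can be made explicit. The following sketch – again my own invented illustration, with hypothetical premises – encodes the lights-out example as a disjunctive syllogism, in which the conclusion follows from the premises alone, without scanning any corpus of prior examples:

```python
def infer_cause(lights_out: bool, switch_still_on: bool) -> str:
    """Disjunctive syllogism: if the lights are out, the cause is either
    the switch or the supply; ruling out one disjunct entails the other.
    No dataset is consulted - the conclusion follows from the premises."""
    if not lights_out:
        return "nothing to explain"
    if switch_still_on:
        # The 'someone switched them off' branch is eliminated...
        return "the electricity supply has been disrupted"
    # ...otherwise the other disjunct stands.
    return "someone switched the lights off"

print(infer_cause(lights_out=True, switch_still_on=True))
```

Whether or not one accepts Descartes' account in full, the structural difference is plain: deduction requires premises and rules, not a statistical archive.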
Nevertheless, as one can learn from computer scientists such as Dr Arvind Narayanan, a professor of computer science at Princeton University, people (like my interlocutor) are easily fooled by an AI such as ChatGPT because it seems so sophisticated – and the more sophisticated such systems become, the harder it is for users to spot their pseudo-reasoning and their mistakes.
As Dr Narayanan indicates, ChatGPT's responses to some computer science examination questions he posed to it were wrong, but they were phrased so plausibly that their falsity was not immediately apparent; he had to check them three times before being certain this was the case. So much for ChatGPT's vaunted capacity to 'replace' humans.
One should remember, however, that the comparison so far has concerned a single question: whether an AI like ChatGPT operates the way humans do at the level of intelligence – differences such as reasoning as opposed to pattern recognition, and so on. One could phrase the question in terms of inferiority and superiority too, of course, and some argue that humans still outsmart AI, even if an AI can perform mathematical calculations faster than any human.
But it is only when one shifts the terrain that the fundamental differences between a human being, viewed holistically, and an AI, no matter how smart, come into perspective. This is mostly overlooked by those who engage in the debate between human and 'artificial' intelligence, who forget the simple fact that intelligence is not all that matters.
To illustrate what I mean, think back to what happened between world chess champion Garry Kasparov and Deep Blue, the IBM 'supercomputer,' in 1997. Having lost its 1996 match against Kasparov, Deep Blue won the 1997 rematch – the first match victory by a machine over a reigning world chess champion – and then, too, as with ChatGPT today, there was universal lamentation about the supposed 'demise' of the human race, represented by Kasparov's defeat at the hands of a computer (an AI).
Like the reaction to ChatGPT today, this was emblematic of the error committed by the vast majority of people when they judge the relationship between AI and humans. Usually such an evaluation is carried out in terms of cognition, by assessing which is more 'intelligent' – humans or machines. But one should ask whether intelligence was the appropriate measure – let alone the only or most suitable one – for comparing humans and computers (as representatives of AI) then, and indeed now.
Understandably, Kasparov's humiliation by the machine was reported everywhere at the time, and I recall coming across one account whose writer showed a keen understanding of what I have in mind when I refer to the right, or appropriate, yardstick for comparing humans and AI. Having reconstructed the depressing details of Kasparov's historic rout by Deep Blue, this writer resorted to a humorous but telling little fantasy.
After the symbolic defeat of the human, she or he fabulated, the team of engineers and computer scientists that had designed and built Deep Blue went out on the town to celebrate their epochal triumph. It would be wrong to write ‘their machine’s victory,’ because strictly speaking it was the human team that scored a victory by means of ‘their’ computer.
The punchline was set up when the writer asked, rhetorically, whether Deep Blue, too, went out to paint the town red with Light Pink to relish its conquest. Needless to say, the answer to this rhetorical question is negative. Then came the punchline, which stated the obvious: 'humans celebrate; computers (or machines) do not.'
Looking back, it strikes one that this writer was a visionary of sorts, employing a fiction to highlight the fact that, although humans and AI share 'intelligence' (albeit of different kinds), intelligence does not mark the most fundamental, irreducible difference between AI and people. There are other, far more decisive, differences between humans and AI, some of which have been explored here and here.