Illustration showing an open laptop displaying an online question and answer box.

Opinion: Imitating human intelligence

To be programmed with intelligence is one thing. To possess emotional intelligence is another. Professor Julie Weeds draws a distinction between artificial intelligence and what it is to be human.

Headshot of Professor Julie Weeds.

In 1950, Alan Turing proposed the imitation game, now known as the Turing Test, in which a machine is deemed intelligent if a human judge cannot reliably distinguish between human- and machine-generated responses to a set of questions. Today, many would say that generative artificial intelligence (AI) tools such as ChatGPT have passed the Turing Test, and that human-level intelligence has been achieved.

However, Turing did not equate intelligence with consciousness or thinking. He said that ‘thinking’ was too difficult to define, and that it was irrelevant whether the machine was thinking or not if humans could not distinguish between the responses. We therefore need to re-examine the difference between human intelligence and the capabilities of a machine that has been trained to be very good at manipulating language.

Large language models such as ChatGPT are first trained on vast amounts of text data, though they do not read it or make sense of it in the way that you or I do. Given the sentence ‘the sweet scent of _____ filled the air’, the model learns to predict, probabilistically, which words could fill the gap. Based on its training data, the model might make a plausible choice such as ‘lavender’ or ‘napalm’. But it has no concept of what is true, rather than merely plausible.
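To make that gap-filling idea concrete, here is a minimal sketch using the Hugging Face transformers library. It queries a small masked-language model (BERT) rather than ChatGPT itself, so the model choice and the exact scores are illustrative assumptions, but the principle is the same: candidate words are ranked purely by probabilities learned from text.

```python
# A minimal sketch of probabilistic gap-filling, using the Hugging Face
# transformers library with a small masked-language model (BERT), not ChatGPT.
from transformers import pipeline

fill_gap = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words for the blank purely by the probabilities
# it learned from text; it has no notion of which completions are true,
# only of which are plausible.
for candidate in fill_gap("The sweet scent of [MASK] filled the air."):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```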

It certainly doesn’t have any experience or sense of smell. Reinforcement learning is then used to make AI-generated responses more likely to be factually correct, inoffensive and acceptable to humans. In short, if the model produces responses that humans like, it is rewarded. If it produces responses they dislike, it is penalised.
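The reward-and-penalty idea can be sketched in a similarly simple way. The hand-written scoring rules below are invented purely for illustration; in real reinforcement learning from human feedback, a reward model is learned from large numbers of human preference judgements rather than written by hand.

```python
# A toy stand-in for a reward model (illustrative only; real reinforcement
# learning from human feedback learns rewards from human preference data).
def toy_reward(response: str) -> float:
    score = 0.0
    if "lavender" in response.lower():   # hypothetical "humans like this" signal
        score += 1.0
    if "napalm" in response.lower():     # hypothetical "humans dislike this" signal
        score -= 1.0
    return score

candidates = [
    "The sweet scent of lavender filled the air.",
    "The sweet scent of napalm filled the air.",
]

# Responses with higher reward would be reinforced in training; those with
# lower reward would be penalised, nudging the model towards outputs humans prefer.
for text in candidates:
    print(f"reward={toy_reward(text):+.1f}  {text}")
```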

So, if we ask ChatGPT to imagine it’s a middle-aged white woman and to recount a childhood memory, it might tell us about eating fish and chips on Brighton Pier. However many plausible details are included, the account is not grounded in reality. You and I can imagine eating fish and chips on Brighton Pier, even if we have never done it, but how can a machine with no experience in the world beyond language imagine anything?

Does this even matter? AI provides incredibly useful tools for finding patterns in data, assisting humans in decision-making, summarising and translating documents, and creating works of art and fiction. But we must be careful when outsourcing human decision-making and reasoning processes to AI more generally. However large the training set, however carefully the data is curated to include only ‘correct’ information, and however much context is given, the current generation of AI tools cannot understand what words mean to humans, because they have no real-world experience.

AI can define emotions such as pain, fear and joy. It can identify these emotions in others and even tell us that it feels them. But it doesn’t actually feel and it cannot empathise. Recently, AI has made great strides in imitating human intelligence, demonstrating capacity for learning, abstraction, reasoning, problem solving, planning and even creativity. Emotional intelligence, however, will require a lot more work.

Professor Julie Weeds

Professor in Artificial Intelligence Julie Weeds (Natural Language Processing 2000) is Co-director of both the Data Intensive Science Centre and Sussex AI, an interdisciplinary research group and Centre of Excellence.

