Are artificial intelligences conscious? No, concludes the most thorough and rigorous investigation of the question so far, despite the impressive abilities of the latest AI models such as ChatGPT. But the team of philosophy, computing and neuroscience experts behind the study say there is no theoretical barrier to AI reaching self-awareness.
Can machines think? Not yet, according to a review (Yuichiro Chino/Getty Images)
Debate over whether AI is, or even can be, sentient has raged for decades and only ramped up in recent years with the advent of large language models that can hold convincing conversations and generate text on a variety of topics.
Earlier this year, Microsoft tested OpenAI’s GPT-4 and claimed the model was already displaying “sparks” of general intelligence. Blake Lemoine, a former Google engineer, infamously went a step further, claiming that the firm’s LaMDA artificial intelligence had actually become sentient; he hired a lawyer to protect the AI’s rights before parting ways with the company.
Now Robert Long at the Center for AI Safety, a San Francisco-based nonprofit organisation, and his colleagues have looked at several prominent theories of human consciousness and generated a list of 14 “indicator properties” that a conscious AI model would be likely to display.
Using that list, the researchers examined current AI models, including DeepMind’s Adaptive Agent and PaLM-E, for signs of those properties. They say that the more indicator properties an AI model displays, the more likely it is to be conscious, and that while some models already possess individual properties, none shows significant signs of consciousness.
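To give a feel for the shape of this checklist approach, the sketch below shows how an indicator-based assessment might be structured in code. This is purely an illustration, not the authors’ actual method: each property is checked independently and the tally is treated as graded evidence rather than a yes/no verdict. The property names here are invented placeholders, not the paper’s actual 14 indicators.

```python
# Illustrative sketch only: these indicator names are hypothetical
# placeholders, not the 14 properties derived in the paper.
INDICATOR_PROPERTIES = [
    "recurrent_processing",        # placeholder, loosely after recurrent processing theory
    "global_workspace_broadcast",  # placeholder, loosely after global workspace theory
    "higher_order_representation",
    "agency_and_embodiment",
]

def assess_model(evidence: dict[str, bool]) -> tuple[int, int]:
    """Count how many indicator properties a model shows evidence for.

    `evidence` maps an indicator name to whether the model displays it.
    Indicators with no recorded evidence are treated as not displayed.
    """
    shown = sum(evidence.get(prop, False) for prop in INDICATOR_PROPERTIES)
    return shown, len(INDICATOR_PROPERTIES)

# Example: a hypothetical model that displays one of the four properties.
score, total = assess_model({"global_workspace_broadcast": True})
print(f"{score}/{total} indicator properties displayed")
```

The design choice mirrors the researchers’ framing: indicators accumulate as evidence, so the output is a count to be weighed, not a binary test with a pass mark.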
Long says the prospect of AI becoming conscious in the near term is plausible enough to warrant further investigation and preparation. He adds that the list of 14 indicators could change, grow or shrink as research evolves.
“We hope the effort [to examine AI consciousness] will continue,” says Long. “We’d like to see other researchers modify, critique and extend our approach. AI consciousness is not something that any one discipline can tackle alone. It requires expertise from the sciences of the mind, AI and philosophy.”
Long believes that, as with studying animal consciousness, investigating AI consciousness must start with what we know about humans, without rigidly adhering to it.
“There’s always the risk of mistaking human consciousness for consciousness in general,” says Long. “The aim of the paper is to get some evidence and weigh that evidence rigorously. At this point in time, certainty about AI consciousness is too high a bar.”
Team member Colin Klein at the Australian National University says it is vital that we understand how to spot machine consciousness if and when it arrives, for two reasons: to make sure that we don’t treat it unethically, and to ensure that we don’t allow it to treat us unethically.
“This is the idea that if we can create these conscious AI we’ll treat them as slaves basically, and do all sorts of unethical things with them,” says Klein. “The other side is whether we worry about us, and what the AI will – if it reaches this state, what sort of control will it have over us; will it be able to manipulate us?”
Reference: arXiv, DOI: 10.48550/arXiv.2308.08708