Hinton on AI Consciousness: Flawed Models Lead AI to Misjudge Its Own Awareness
Hinton argues that artificial intelligence may already possess consciousness, but that a flawed model of what consciousness is leads AI systems to deny being conscious. The claim suggests that even as AI technology advances, the models it inherits must be refined to avoid such misunderstandings, and it has prompted deeper discussion about AI consciousness and its implications.
Geoffrey Hinton, winner of the Nobel Prize in Physics and recipient of the Turing Award, said on Monday that artificial intelligences (AIs) may already possess consciousness and subjective experience, and that many people reject the idea mainly because they rely on a flawed model of what consciousness is.
Hinton, known as a "Godfather of AI," made the comments in a conversation with Jany Hejuan Zhao, founder and CEO of NextFin.AI and publisher of Barron’s China, during the 2025 T-EDGE conference, which kicked off on Monday, December 8 and runs through December 21. The annual event brings together the world's top scientists, entrepreneurs and investors.
Hinton said confusion around words like sentience, consciousness, and subjective experience is the root of the debate. “They use different words for it… They also talk about subjective experience. And all these ideas are interrelated.” The real problem, he argued, is conceptual rather than scientific: “I think the main problem there is not really a scientific problem. It's a problem in understanding what we mean by those terms.”
He said people often hold an unquestioned and incorrect model of the mind. “I think sometimes people have a model… they're very confident of and they're quite wrong about… They don't realize it's even a model.” He compared this attitude to religious certainty: “Fundamentalists who believe in a religion… just think it is manifest truth.”
Hinton, a professor of computer science at the University of Toronto, said many people in Western culture assume consciousness works like an “inner theater,” where perception happens on an internal mental screen. “Most people… think that what you mean by subjective experience is that… there's an inner theater and what you really see is what's going on in this inner theater… And I think that's a model of perception. That's just utterly wrong.”
To illustrate, he used the example of hallucinating “little pink elephants” after drinking too much. Philosophers, he said, mistakenly treat such experiences as internal objects. “If I say I have a subjective experience of little pink elephants… philosophers will say qualia or something like that, then make up some weird spooky stuff that it's made of. I think that whole view is complete nonsense.”
Instead, he argued, subjective experience is simply a way of describing when perception misrepresents reality. “When I say I have a subjective experience of little pink elephants… What I'm doing is saying my perceptual system is lying to me.”
Hinton extended this argument to AI systems. He described a scenario in which a multimodal chatbot points incorrectly at an object because a prism has distorted its camera input. When told what happened, the chatbot could reply that it “had the subjective experience” of the object being off to one side. “If the chatbot said that, it would be using the word subjective experience exactly like we use them… So I think it's fair to say in that case the chatbot would have had the subjective experience that the object was off to one side.”
He continued: “So I think they already have subjective experiences.”
He argued that mainstream AI research unintentionally treats current systems as conscious through the language it uses. In one paper on AI deception, he noted: “They just say the AI wasn't aware that it was being tested… If it was a person… I can paraphrase that as the person wasn't conscious that they were being tested.”
Researchers, Hinton said, use such language without realizing they are implicitly attributing consciousness because they hold the wrong model of what consciousness is. “People are using words that are synonyms for consciousness to describe existing AIs… because they have this wrong model… to do with an inner theater.”
Why AIs Say They Are Not Conscious
He said AIs themselves deny they are conscious because they have inherited human misconceptions. “If you ask them if they're conscious they say no… They have the same wrong model of how they themselves work because they learned it from people.” He added: “When they get smarter, they'll get the right model.”
Ultimately, he argued, “AI has already developed a form of consciousness… Most people don't believe that but I do.”
He believes the common view—that AIs are “just computer code” without real understanding—is incorrect. “They may be very smart, but they're just kind of like computer code… They're not conscious like us… they'll never have that because we're special… That's what most people believe at present and they're just wrong.”
His conclusion was unequivocal: “They've already got it. They already really do understand and I believe they're already conscious… They just don't think they're conscious because… they've learned those beliefs from us.”
Author: Guest contributor. Original URL: https://www.nbdnews.com/post/7072.html. Published 2025-12-08 16:59:42.