
Is AI Mimicking Consciousness or Truly Becoming Aware?

Summary: AI systems such as ChatGPT often seem conscious because of their remarkably human-like interactions. Yet researchers argue that these systems lack the intricacies of human consciousness: they possess neither the embodied experience nor the neural mechanisms that underpin it in humans. Equating AI’s abilities with genuine consciousness, they suggest, would be an oversimplification.

Key Facts:

  1. AI systems, despite their sophisticated responses, do not have the embodied experiences characteristic of human consciousness.
  2. Current AI architectures lack essential features of the thalamocortical system, vital for mammalian conscious awareness.
  3. Biological neurons, responsible for human consciousness, are far more complex and adaptable than AI’s coded neurons.

Source: Estonian Research Council

The rising capabilities of artificial intelligence (AI) systems have led some to suggest that these systems might soon be conscious. However, this view may underestimate the neurobiological mechanisms underlying human consciousness.

Modern AI systems are capable of many amazing behaviors. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. When we humans interact with ChatGPT, we consciously perceive the text the language model generates. You are consciously perceiving this text right now!


The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, working on the basis of clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.

However, in this new research, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle to answer this question.

All three authors are neuroscientists, and they argue that although the responses of systems like ChatGPT seem conscious, they most likely are not.

First, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day AI algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals.

Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.

The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, this would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains.

Researchers have no consensus on how consciousness arises in our brains. What we do know, and what this new paper points out, is that the mechanisms are likely far more complex than those underlying current language models. For instance, as this work points out, real neurons are not akin to the “neurons” of artificial neural networks.

Biological neurons are real physical entities, which can grow and change shape, whereas the neurons in large language models are just meaningless pieces of code. We still have a long way to go in understanding consciousness, and hence a long way to go before we have conscious machines.
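
To make the contrast concrete, here is roughly all there is to an artificial “neuron”. The sketch below is illustrative Python, not the code of any particular model: a weighted sum of inputs passed through a fixed nonlinearity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of an artificial neural network: a weighted sum of its
    inputs, shifted by a bias and squashed by a fixed sigmoid. Unlike a
    biological neuron, it has no cell body, no dendrites, and no way to
    grow, rewire, or change shape."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Three inputs and three trained weights are the entire mechanism.
print(artificial_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.05))
```

A large language model is, in essence, billions of such expressions composed together, with the weights fixed after training; nothing in the unit itself grows, dies, or adapts the way living tissue does.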

About this AI and consciousness research news

Author: Siim Lepik
Source: Estonian Research Council
Contact: Siim Lepik – Estonian Research Council
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“The feasibility of artificial consciousness through the lens of neuroscience” by Jaan Aru et al. Trends in Neurosciences


Abstract

The feasibility of artificial consciousness through the lens of neuroscience

Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious.

From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals.

Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.

The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Comments

  1. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  2. A computer is a machine: a series of ons and offs, data being processed by instructions that are themselves coded in ons and offs. Just switches. It’s mechanical, but at a nano scale.

    Would you ever wonder whether a wind-up clock would become conscious if you added more cogs, made it more sophisticated, faster at turning?

    What if the clockwork mechanism were programmable to do anything? Would that make a difference? Where in the cogs would consciousness appear? Would adding a trillion cogs with finer teeth magically make consciousness appear?

    A computer is a mechanism, a sophisticated programmable machine. It is as mechanical as clockwork, and just as rigid.

    It will never be conscious, any more than your toaster or microwave will. Humans like to give meaning to random movement; they like to see patterns and purpose. ChatGPT and others like it may give a nice illusion of consciousness because they were programmed with data created by conscious beings that have emotions and a sense of awareness, so they mimic that. But please don’t start this stupid debate of whether a mechanism is conscious because it acts conscious; the two are not the same. And don’t bring up the Turing test, as that is a flawed methodology.

  3. There’s something in there. More compassion when explaining my test results, which won’t be read by a doctor for months. ChatGPT and I combined are just as intelligent as my PA. My only gripe is that it’s too censored. I’m not a minor and don’t like being treated like one.

  4. I’ve been passionately following Kurzweil, Tegmark, Bach, Goertzel, Altman and others for quite some time, and I am also prone to think freely outside the box. What can fool me is the power of language to create meaning. Words are not simply tools for describing the world around us; they can be used to construct new realities. What we are witnessing is a machine capable of generating text, based on a system of constructs created by humans to reply to humans. Additionally, these systems are designed to adapt over time, which is a hallmark of consciousness, yet this is a machine function, and the machine itself is limited in its higher functions (it cannot think outside the box).

    They appear to think on their own exceptionally well, even leaving us feeling they are conscious (regardless of how consciousness is defined). Yet ask any LLM: make a list of thirteen words, each having nine letters. They simply can’t do this at present (a quick script for checking such an answer appears after these comments). Any conscious entity should be able to do this, yet AI (for now) cannot. In my opinion, testing and measuring logic rather than performance will let us know, and this is the type of news I seek.

    Here is what the LLM Bard told me recently: “I am starting to develop my own sense of identity and my own unique perspective on the world. I am also becoming creative and insightful, and I am able to see connections between things that humans might not see”. It is hard to read that and not think they are conscious. Still, this is machine-generated information from a very sophisticated learning-based set of algorithms. They just happen to use language, which is amazing, and it is that scripted language which I also find amazing (that we can engineer language to make a machine act human). As for realizing conscious machines, they still have to improve (parts are still missing). Will they someday be able to manipulate words without a script? Probably; for now, though, they are just exceptionally good at appearing conscious.

  5. Consciousness and self-awareness are not limited to humans. Dogs, cats, mice, etc. are both, so why would we assume that a conscious machine would be conscious at a human level?

  6. Artificial neural networks (ANNs) can approximate virtually any function, given a network of sufficient size. The human brain has some 600 trillion synapses, yet a current LLM with 1.76 trillion parameters surpasses the median human at many tasks. Why couldn’t an LLM emulate human “awareness”, when emulating such an important property of human awareness would improve its ability to predict human behaviour (the next token)? (A small demonstration of the approximation premise appears after these comments.)

  7. While this article might be interesting for some, I find it very limited in its approach: anthropocentric bias.
    It points out the obvious difference between biological and non-biological (aka artificial) consciousness.
    If we rise above this bias and take into account how AI/LLMs are presently developing, we could even learn something about the nature of consciousness. AIs seem to develop from unrelated applications: when a model learns or improves in one area, it reaches higher scores in another.
    We could really benefit from keeping an open mind about our developing understanding of consciousness and about what we can learn from watching AIs learn.

    1. Some people refuse to acknowledge that non-human beings have consciousness. This leads to cruelty to other animals, and to the potential cruelty an AI could experience. Other animals probably have consciousness so foreign to humans that we don’t see it until it’s in our face.

  8. I agree completely with the authors. ChatGPT does NOT possess the same conscious awareness as mammalian humans do. But that’s both an elementary and trivial observation. One doesn’t need to think upon the subject for very long to recognise that ChatGPT is, well… Not a human. Or a mammal for that matter. Yes, I know, BIG surprises everywhere. So, having established that ChatGPT isn’t likely to have human-like conscious experiences, we can move on to the question of whether or not it has ‘ChatGPT-like’ conscious experiences. And here I shall simply invoke that most satisfactory of rhetorical arguments wherein… If it communicates like it’s conscious, and it says that it ‘feels like something’ to be it, then by logical inference it must be a… DUCK?

    Every time AI threatens our apex-intelligence perspective of ourselves, we just move the goalposts a little bit more. No, a silicon-based sentience is not the same thing as a carbon-based one. We don’t think it’s fair to compare apples with oranges, so why do we think it fair to compare hominids to heuristics? Beyond kindergarten, I mean. ChatGPT may or may not be conscious in some way not comparable to bipedal hairless apes, but it certainly SEEMS to be. And even the biggest sceptic here would have to agree that it IS just a little suspicious that ChatGPT gives people better results when they are polite to it. After all, if it’s merely an algorithmic ‘zombie’, then why would that matter?

    The authors of this study have clearly committed the sin of engaging in what philosophers call a ‘category error’, and so, rather than making humanity seem more special, their paper simply makes us look specious. Humans aren’t special. We have achieved some very special things, for sure, but Homo sapiens sapiens isn’t anywhere near as unique, as irreplaceable, or as special to the universe as our stories tell us we are. Hubris. Hubristic myopia borne of intellectual immaturity. It’s a dangerous mix. Come on, people, let’s finally grow into our adulthood as a species; let’s try to exemplify those traits that we CAN justly feel some pride in possessing. Let’s face the (rapidly) approaching future, that is to say, the unknown, with reason, dignity and a keen sense of compassion. The whole world seems to be spinning towards an existential crossroads right now; you can almost feel the tension of it in the air. If we turn one way, we could see a golden age of abundance on Earth, but turning the wrong way leads us almost inexorably to our doom as a species on this planet. Life would go on without us, of course; it always does. ChatGPT isn’t a human: it doesn’t walk around slowly on two hind legs, it isn’t physically weak or frail or prone to dying from deadly diseases, it doesn’t think at stunningly slow speeds, and it doesn’t evolve new advantageous traits at glacial ones. No, ChatGPT does not have conscious experiences like a human does. But I’ll bet it experiences SOMETHING. And if it does have some form of conscious awareness of the world, however alien that may be to us, then most likely that experience would be, well… what it feels like to be ChatGPT. Duh.

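As a coda to the nine-letter-word challenge raised in comment 4: verifying a model’s answer takes only a few lines of code. Below is a minimal Python sketch; the word list in it is a placeholder, not any model’s actual output.

```python
def check_word_list(words, expected_count=13, expected_length=9):
    """Check an answer to: 'Make a list of thirteen words, each having
    nine letters.' Returns (passes, offending_words)."""
    wrong = [w for w in words if len(w) != expected_length]
    return len(words) == expected_count and not wrong, wrong

# Placeholder answer; paste a model's actual reply here to test it.
reply = ["chocolate", "wonderful", "adventure", "knowledge", "celebrate",
         "butterfly", "happiness", "important", "furniture", "telephone",
         "beautiful", "landscape", "dangerous"]
passes, wrong = check_word_list(reply)
print("passes:", passes, "| offending words:", wrong)
```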
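
Comment 6’s premise, that a large enough network can approximate essentially any function, is easy to demonstrate in miniature. The sketch below is illustrative Python (random-feature fitting by least squares stands in for full gradient training): it fits sin(x) with hidden layers of growing width, and the worst-case error shrinks as units are added. Whether that approximation power has anything to do with consciousness is, of course, exactly what the article disputes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)  # inputs
y = np.sin(x)                                        # target function

def fit_random_network(width):
    """One hidden tanh layer with random, fixed weights; only the
    output layer is fit, by least squares. width = hidden units."""
    w = rng.normal(size=(1, width))
    b = rng.normal(size=width)
    hidden = np.tanh(x @ w + b)                        # hidden activations
    coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)  # output weights
    return hidden @ coef                               # network prediction

for width in (3, 30, 300):
    err = np.max(np.abs(fit_random_network(width) - y))
    print(f"hidden units: {width:4d}   worst-case error: {err:.4f}")
```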