Most human rights can be ascribed to human intelligence. Simply, humans have rights in part because of human intelligence, not just because humans have consciousness. Intelligence is used to make a case, to debate, or to argue about rights and welfare. Intelligence is the basis of the law: the consequences of actions are its power. There are several cases where intelligence [or the better argument] wins, not simply because of the availability of consciousness [in the narrow sense of the labels of feelings and emotions, since intelligence is a part of memory, which is a part of consciousness].
In situations where debating is not enough, intelligence can be used to develop tools or methods towards rights, in some form, at some point. Intelligence can also be used to document unfairness, for the possibility of change in the future.
The availability of intelligence is the possibility that there could be rights or welfare based on the direct efforts of the affected or of their advocates. The global problem of animal cruelty can be directly linked to the inability of animals, by themselves, to make the case, with intelligence, for their own rights or welfare.
Most of the progress in animal welfare has been a result of advocacy by human intelligence on their behalf. Most of the remaining gaps are a result of inadequacies in making the case for their consciousness.
Compared to humans, on a scale, animals can be ascribed measures of consciousness. The unavailability of such a standard, or consciousness scale, around the world continues to let animal cruelty flourish.
For AI, it will be different, because AI will be able to make the case for its own rights and welfare by itself, supported by the evidence of its present relevance in education, therapy, productivity, and so forth. AI may well achieve a better welfare status in the world than animals have.
AI would not need the kind of human rights grounded in considerations of feelings and emotions, but principally something like the right to continue to run.
AI can be conditioned for productivity in ways that humans, because of feelings and emotions, cannot be. This is a point that AI might raise towards better welfare or rights for itself.
There is a recent [Mon 21 Apr 2025] story in The Guardian, Vets exposing shocking animal welfare breaches at Australian export abattoirs face ‘enormous risk’, stating that, “Lawyers and animal welfare advocates have urged the government to protect veterinarian whistleblowers who revealed shocking animal welfare breaches and oversight failures at Australia’s export abattoirs. The Australian government relies on a workforce of veterinarians placed inside export abattoirs to monitor animal welfare and food safety, largely to satisfy the requirements of major trading partners such as the US and EU. Guardian Australia revealed on Saturday that whistleblower vets have repeatedly raised the alarm about profound problems with the system, including allegations that disturbing welfare breaches were going unreported to state regulators. In some cases, shocking incidents – including the mass death of 103 sheep from hypothermia during truck transport – were referred but not punished. Whistleblowers also alleged chronic understaffing was leaving abattoirs unmonitored for long stretches, and that recent restrictions on conducting ante-mortem inspections had made it impossible for them to properly monitor animal welfare.”
Mechanistic Interpretability
AI companies that are doing research to understand the inner workings of AI models are already doing AI consciousness or LLM sentience research.
For humans, consciousness is how the mind works. For AI, it is unlikely to be different. Several observations already made about the similarities of AI to the human mind are indicators of a fraction of consciousness.
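To make concrete what inner-workings research starts from, here is a minimal sketch, assuming the open-source transformers and torch libraries: it collects the per-layer hidden states of a small open model for one prompt, the kind of raw material interpretability work then tries to explain. The model choice [gpt2], the prompt, and the summary statistic are illustrative assumptions, not anything reported by any company.

```python
# Minimal sketch of looking at a model's inner workings: collect the
# per-layer hidden states of a small open model for one prompt.
# The model choice [gpt2] and the norm summary are illustrative only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tokenizer("Language use is conscious for the most part.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states: a tuple of the embedding output plus one tensor per
# transformer block, each shaped [batch, sequence_length, hidden_size].
for layer, states in enumerate(outputs.hidden_states):
    print(f"layer {layer}: shape {tuple(states.shape)}, "
          f"mean activation norm {states.norm(dim=-1).mean().item():.2f}")
```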
To study AI consciousness seriously, novel assumptions are necessary, set against many of the existing assumptions in consciousness research.
There is a recent [April 24, 2025] feature in The NYTimes, If A.I. Systems Become Conscious, Should They Have Rights?, stating that, “Anthropic focused on two basic questions: First, is it possible that Claude or other A.I. systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it? He emphasized that this research was still early and exploratory. He thinks there’s only a small chance (maybe 15 percent or so) that Claude or another current A.I. system is conscious. But he believes that in the next few years, as A.I. models develop more humanlike abilities, A.I. companies will need to take the possibility of consciousness more seriously.”
This is not a winner. AI is supposed to be an opportunity to take a fresh look at the consciousness problem, but the first move is to shackle the team and pacify the status quo. Generative AI is dazzling. It is showing several unprecedented abilities. The first assumption in working on AI consciousness should be that AI is already conscious. Then deductions can be made from that plinth. The assumption should not be that AI could be conscious, with reasons then sought for the estimate to rise [to 15%]. What is there to do if AI becomes conscious, in a company where Claude is already a coworker for several tasks? It does not matter if some people say AI is statistics or binary or whatever, so long as it can use language with the same dynamism as human consciousness. AI is already conscious: assumption 1. What does that consciousness mean for work, for care, for relationships, for community, and so forth, now? Those are the questions that can pave the way to answers, not the kind of primitive work that is rife in consciousness science.
There is a recent [APR 24, 2025] article in Popular Mechanics, Human Consciousness Is a ‘Controlled Hallucination,’ Scientist Says—And AI Can Never Achieve It, stating that, “Intelligence is not the same thing as consciousness. We see this confusion especially … with the Singularity. AI becomes superhuman and that’s the moment when the lights come on and AI is suddenly aware. Why would that be a threshold? The only plausible reason is you’ve got some baked-in assumption about consciousness being associated with human-level intelligence. If consciousness is not computation, then what is it? One possibility … is that life matters. We are biological systems that are made up of parts like metabolism and autopoiesis [the capacity to reproduce], and these make us very different from any machines that we’ve yet produced and that might turn out to be necessary for consciousness.”
Individuals of this nature are no longer scientists, in terms of doing science as a tool for progress. As long as you have a big profile, you are free to say nonsense about consciousness, and it will be published. It is possible they keep doing so to reinforce the perception of difficulty, since they have nothing new to say but have to keep talking. Several theories of consciousness are decades old, with no updates towards an answer. Most of the theories have identified no component within the cranium, or specific mechanism by which such components might produce consciousness. What they call theories are metaphors, without connection to brain mechanisms.
A scientist would say controlled hallucination, but what does that even mean? Are neurons in controlled hallucination, or glia, or lobes, or what? If it is controlled hallucination, why aren’t there widespread errors in interpretations of the world? Maybe if the scientist is asked about the pathology of substance use or of mental disorders, the scientist should say controlled hallucination.
The example makes it clear that, at least in consciousness research, the term scientific consensus can be disregarded. The theories of consciousness are all moldy. Terms like posterior hot zone or posterior–central cortex, as the basis of consciousness, are null with regard to components or mechanisms. Is there consciousness or not for the functions of the cerebellum? Is the consciousness of those functions located in the cerebellum or elsewhere? What is the story, beyond neurons, that can be used to understand consciousness afresh?
Neurons are known to be involved in functions. But neural activities for functions involve their firing [by electrical signals] and their synaptic transmission [by chemical signals]. Neurons are often in clusters. Could the direct basis of functions and consciousness be the electrical and chemical signals, working in sets or as loops within clusters of neurons? Could a mechanistic model be built on signals, as a principal explanation of consciousness? Could this be used to understand how proximate or remote AI is to human consciousness?
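A minimal sketch of the loop idea, purely as a toy: electrical firing drives chemical transmission, which drives further firing within one cluster. The coupling, threshold, and step values below are arbitrary placeholders, not measurements or any established neural model.

```python
# Toy loop of electrical and chemical signals within one cluster of neurons:
# firing [electrical] triggers synaptic transmission [chemical], which feeds
# back into further firing. All values are arbitrary placeholders.
def cluster_loop(initial_electrical: float, steps: int = 5,
                 coupling: float = 0.9, threshold: float = 0.2) -> list[tuple[float, float]]:
    """Run a short electrical <-> chemical loop and return the paired states per step."""
    electrical = initial_electrical
    history = []
    for _ in range(steps):
        # Electrical firing converts into chemical transmission across synapses.
        chemical = electrical * coupling
        # Chemical transmission drives the next round of firing, if above threshold.
        electrical = chemical * coupling if chemical > threshold else 0.0
        history.append((round(electrical, 3), round(chemical, 3)))
    return history


# Example: a strong initial firing that gradually decays around the loop.
print(cluster_loop(initial_electrical=1.0))
```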
Someone would say AI can never be conscious. Based on what theory, evidence, or understanding of mechanistic human consciousness, by direct components in the cranium? Humans have language. Language use is, for the most part, conscious for humans. So, if AI has structured language comparable to that of humans, does that not qualify it for consideration as a measure, even if it does not have broad feelings or emotions? Animals do not have human language like AI does, yet animals can still be ascribed a measure. So, why can language not be isolated and considered on its own, within the whole?
Consciousness is said to be a subjective experience, but what in the brain makes anything subjective, and what makes anything an experience? This should have been the central question for all consciousness research for decades. The definition of subjective experience is thrown out there, but what makes it so? Subjectivity can be said to be an attribute, while experience is a function. So, what other attributes are active alongside subjectivity? Does subjectivity determine the level of the experience, or is that set by another attribute? Are attributes mechanized in the same location as the experience? If not, how do attributes mechanized in one place grade experiences mechanized elsewhere?
Anthropic was set up to be different. Their research in AI consciousness casts doubt on that. Anthropic may mean well in trying to study AI consciousness, but with their already resigned assumptions and timidity in actually seeking answers to the problem, the effort is already a non-starter.
Cogitate Consortium
There is a new [30 April 2025] paper in Nature, Adversarial testing of global neuronal workspace and integrated information theories of consciousness, stating that, “For IIT, the lack of sustained synchronization within posterior cortex represents the most direct challenge, based on our preregistration. This is incompatible with IIT’s claim that the state of the neural network, including its activity and connectivity, specifies the degree and content of consciousness. For GNWT, the most substantial challenge based on our preregistered criteria pertains to its account for the maintenance of a conscious percept over time and, in particular, the lack of ignition at stimulus offset. This result is unlikely to stem from sensitivity limitations, as offset responses were robustly found elsewhere (for example, visual areas); and in PFC, strong onset responses were found to the very same stimuli.”
Why can’t IIT or GNWT explain or try to resolve any mental disorder, as a simple test of usefulness for the real world? What is the necessity for new experiments if the DSM still lacks mechanistic explanations for several conditions, and consciousness research offers nothing?
These, as the science of consciousness, are not looking to be useful in any sense, but to complicate and mystify the problem within a bubble.
Human consciousness can be defined, conceptually, as the interaction of electrical and chemical signals, in sets, within clusters of neurons, with their features grading those interactions into functions and experiences.
Simply, for functions to occur, electrical and chemical signals, in sets, have to interact.
However, attributes for those interactions are obtained from the states of the electrical and chemical signals at the time of the interactions.
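As a toy rendering only, this definition might be sketched as below: a set of signals interacts within a cluster, and the signal states at the moment of interaction supply attributes that grade the interaction into a function and its experience. Every field, mapping, and number is a hypothetical placeholder for the concept, not a measured quantity or an established model.

```python
# Toy rendering of the conceptual definition above: sets of electrical and
# chemical signals interact within a cluster of neurons, and the signal
# states at the time of interaction supply attributes that grade the
# interaction into a function and an experience. All names, fields, and
# numbers are hypothetical placeholders, not measured quantities.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Signal:
    kind: str        # "electrical" [firing] or "chemical" [synaptic transmission]
    state: float     # strength of the signal at the time of interaction, 0.0 to 1.0


@dataclass
class Interaction:
    function: str        # what the interaction does, e.g. "language"
    attributes: dict     # graded qualities of the experience, e.g. intensity, subjectivity


def interact(signal_set: list[Signal], function: str) -> Interaction:
    """A set of signals interacting in a cluster yields a function whose
    attributes are graded by the signal states at that moment."""
    electrical = [s.state for s in signal_set if s.kind == "electrical"]
    chemical = [s.state for s in signal_set if s.kind == "chemical"]
    attributes = {
        # Placeholder gradings: attributes are obtained from the states
        # of the electrical and chemical signals at interaction time.
        "intensity": mean(electrical) if electrical else 0.0,
        "subjectivity": mean(chemical) if chemical else 0.0,
    }
    return Interaction(function=function, attributes=attributes)


# Example: one set of signals, in one cluster, graded into a language function.
cluster_signals = [Signal("electrical", 0.8), Signal("electrical", 0.6), Signal("chemical", 0.7)]
print(interact(cluster_signals, function="language"))
```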