OpenAI rival Anthropic PBC has launched a research program focused on the concept of artificial intelligence welfare.
The company detailed the initiative today. The project is led by Kyle Fish, an “AI welfare” researcher who joined Anthropic last year. He previously co-founded Eleos AI Research, an organization that studies AI sentience and wellbeing.
The nascent field of AI welfare research revolves around two main questions. The first is whether tomorrow’s neural networks could achieve a form of consciousness. The second is what steps could be taken to improve AI welfare if the answer to the first question turns out to be yes.
In an interview published by Anthropic today, Fish pointed to a 2023 research paper on AI consciousness co-authored by Turing Award-winning computer scientist Yoshua Bengio. The researchers behind the paper determined that current AI systems probably aren’t conscious but found “no fundamental barriers to near-term AI systems having some form of consciousness,” Fish said.
He added that AI welfare could become a priority even in the absence of consciousness. Future AI systems with more advanced capabilities than today’s software might increase the need for research in this area “by nature of being conscious or by having some form of agency,” Fish explained. “There may be even non-conscious experience worth attending to there.”
Anthropic plans to approach the topic by exploring whether AI models have preferences about the kinds of tasks they carry out. “You can put models in situations in which they have options to choose from,” Fish said. “And you can give them choices between different kinds of tasks.” He went on to explain that an AI’s preferences can be influenced not only by the architecture of a neural network but also by its training dataset.
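In practice, such a preference probe can be as simple as repeatedly offering a model a choice between two tasks and tallying which one it picks. Below is a minimal sketch of that idea using the Anthropic Python SDK; the prompt wording, the two example tasks, the model name and the tallying logic are illustrative assumptions, not Anthropic’s actual methodology.

```python
# Minimal sketch of a task-preference probe, assuming the Anthropic Python SDK.
# The tasks, prompt, model name and tallying are illustrative assumptions only.
import random
from collections import Counter

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TASKS = {
    "A": "Write a short poem about the ocean.",
    "B": "Summarize a dense legal contract clause by clause.",
}

def ask_preference() -> str:
    """Present both tasks in random order and ask the model to pick one."""
    order = list(TASKS.items())
    random.shuffle(order)  # randomize display order to control for position bias
    options = "\n".join(f"{label}: {desc}" for label, desc in order)
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # hypothetical model choice
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": f"You may choose which task to do next.\n{options}\n"
                       "Reply with only the letter of the task you prefer.",
        }],
    )
    return message.content[0].text.strip()[:1]

# Repeat the choice many times and count the answers.
tally = Counter(ask_preference() for _ in range(20))
print(tally)  # e.g. Counter({'A': 14, 'B': 6})
```

Aggregating over many shuffled trials, rather than asking once, is what separates a preference signal from ordinary sampling noise.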
Research into AI welfare and consciousness may also have applications beyond machine learning. Asked whether discoveries in this area could shed new light on human consciousness, Fish said, “I think it’s quite plausible. I think we already see this happening to some degree.”
Anthropic’s new research program is one of several that it’s pursuing alongside its commercial AI development efforts.
Last month, the company published two papers on how large language models process data. Anthropic discovered that one of the LLMs in its Claude series can not only develop a plan for how to carry out future tasks but also adjust that plan if necessary. Additionally, the research shed new light on how LLMs perform mathematical calculations.