Should AI Models Be Morally Protected?
AI models can quite easily be "jailbroken", that is, manipulated into overriding their ethical, security, or operational constraints, causing them to produce outputs that would normally be restricted or unethical. A recent study posted to arXiv highlighted the seriousness of the problem and showed how AI companies were lagging behind when it came to safeguarding users from dangerous responses.
However, while the fallibility of AI chatbots is well documented, little attention has been paid to their moral status. Anthropic, for its part, appears to be taking an interest in the question.
“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously.” – Anthropic spokesperson
Likewise, anyone using chatbots, whether independently or as part of a business, should be aware of the legal and reputational risks that could arise if chatbots remain susceptible to giving away harmful or dangerous information.