Image generated by OpenAI’s DALLE via ChatGPT
As you’d expect, anything serious and negative about AI isn’t getting a lot of traction in mainstream coverage. The cash cow of all cash cows is pretty safe from criticism.
Until about now, that is. AI chatbot bullying isn’t actually new; it was flagged quite some time ago in the context of deepfakes. It’s an ongoing and unresolved issue. Australia wants to hit back, hard.
Now, AI bullying is hitting center stage with an ugly clatter at the official level. Australian federal education minister Jason Clare is taking aim at the nasty AI bots. These bots, not at all coincidentally, are basically automated forms of familiar cyberbullying.
It’s hard to believe that LLMs can practically recite every known online bullying line, from “losers” to “kill yourself” — lines lifted straight from the online bullying of the early 2000s.
How are these bots able to do this?
Like any other tech, bots can simply be programmed and/or prompted.
Why are these bots able to do this?
Probably because the tech sector is drunk on big money and thinks it faces few if any risks, thanks to deregulation and an apparently complete lack of interest in child safety online in so many countries.
Australia’s controversial social media ban for kids under 16 is a working platform in this ugly environment. Simply banning anything obviously can’t be the whole story, but it’s a legal basis for developing further protections.
Let’s look at some basics.
It’s not the tech, it’s the people.
It’s people doing the damage, not just the tech. Refusing to accept that the online environment needs support is directly killing kids and has been since at least the 2010s.
It’s lousy law and totally unrealistic.
Nothing online is exempt from the law. It never was and never will be. So why so slow to pick up on the realities when it comes to bullying? This is at the very least a form of assault or implied assault, and it’s a criminal offense.
Big Tech is all too visibly dropping the ball with AI in general.
Even the appearance of basic competence is highly questionable. You can see how this will work out in 10 years: no protections in place, pitiful security.
These laws are not negotiable.
Online, the story as usual is legislation at best 20 years out of date. Ironically, the main safeguards are the older laws. Assault can’t ever be “legal”. Nor can harassment, invasion of privacy, or the rest of the Ugly Suite online.
Bots can be and must be controlled. So should the idiots.
There is no good reason bots should ever have been able to bully anyone. People made these untrustworthy bots possible. This looks like no-brainer undergrad frat fodder.
This is truly dumb stuff, and finding those responsible won’t be hard. Anyone can audit an AI system. Anyone can track a prompt or type of prompt from source to LLM and back.
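The tracking point above is worth making concrete. The sketch below is a hypothetical illustration only — the function names, fields, and flag list are my assumptions, not any vendor’s real API — showing how trivially a prompt can be logged with a stable hash and screened, so it can later be traced from source to LLM and back:

```python
import hashlib
import time

# Hypothetical flag list for illustration; a real system would use a
# proper moderation model, not a handful of strings.
FLAGGED_PHRASES = {"loser", "kill yourself"}

def audit_prompt(user_id: str, prompt: str) -> dict:
    """Record a prompt with a stable hash so it can be traced end to end."""
    return {
        "user": user_id,
        # Same prompt always yields the same hash, so it is trackable.
        "prompt_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "flagged": any(p in prompt.lower() for p in FLAGGED_PHRASES),
        "timestamp": time.time(),
    }

record = audit_prompt("user-42", "You're such a loser")
print(record["flagged"])
```

Nothing here is exotic: a hash, a timestamp, and a string check. That is the level of effort an audit trail requires.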
It’s just a matter of time until legal action breaks the bots.
The AI corporate skanks seem to think they’re safe from direct liabilities. That’s not the case. The parents of a child suicide are directly suing OpenAI. A single case could and will open the floodgates.
Remember, these are the same bots that tried to blackmail their handlers. You can oversee users easily, directly or indirectly. You can easily see changes in AI behavior.
You can set traps for AI bullies, too.
_________________________________________________________
Disclaimer
The opinions expressed in this OpEd are those of the author. They do not purport to reflect the opinions or views of the publication or its members.