CHATGPT will be banned from appearing on the world’s most popular messaging app as the AI tech wars heat up.
Meta has decided that no third-party AI chatbots will be allowed on WhatsApp, leaving only its own, Meta AI.
ChatGPT was made available within WhatsApp less than a year ago, giving users quick and easy access to the much-loved service.
But rule changes will block it, and any others like it, from appearing in the messaging app from January 15 next year.
The move gives a sense of the brutal rivalry in the AI space.
And it looks set to get only more fiery, coming just days after ChatGPT creator OpenAI announced a Google Chrome competitor with its AI chatbot heavily infused into the web browser.
The move was outlined in rule changes for businesses.
Meta claims that these new chatbots put its systems under increased strain, due to the volume of messages and the different kind of support they require, reports News.
OpenAI confirmed as much, saying there had been a “policy and terms change”.
“We’ve loved seeing more than 50 million of you chat, create, and learn with ChatGPT on WhatsApp,” the firm said.
“The simplicity and familiarity of messaging made it a natural home for everyday creativity and curiosity.
“While we would have much preferred to continue serving you on WhatsApp, we are focused on making the transition as easy for all of our users as possible.”
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market: as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, legislation was created to protect personal data in the EU, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI prescribing the wrong health information.