Some users of a new Meta Platforms Inc. artificial intelligence app have inadvertently made their chatbot conversations public.
News and other outlets reported the phenomenon on Thursday. According to the reports, a cybersecurity researcher determined that some of the publicly shared chatbot conversations included users’ addresses, information about court cases and other sensitive data. That suggests the users had not meant to make the chat logs broadly accessible.
The logs originate from Meta AI, a mobile app that Meta released in late April. It’s the standalone version of an eponymous chatbot that the company had previously integrated into Facebook, Instagram and WhatsApp. The AI also ships with the company’s Ray-Ban Meta line of smart glasses.
The Meta AI app’s chatbot can not only answer user questions but also search the web and generate images. According to Meta, it personalizes prompt responses based on account data such as the Facebook posts liked by the user. Consumers can further fine-tune the chatbot’s output by providing customization instructions.
After the Meta AI app answers a prompt, a Share button in the top right corner of the chat window can be used to post the exchange to a public Discover feed, where other users can view the AI-generated content. It appears that some consumers are unaware the Share button makes their chatbot logs public.
PCMag reported that the button “doesn’t indicate where the post will be published.” According to the publication, it becomes apparent only after the fact that the post has been shared to the public feed. Meta does, however, provide several ways of deleting such posts following their publication.
The scope of the privacy issue is limited by the fact that the Meta AI app doesn’t yet have a large installed base. According to research cited by News, the app has been downloaded only about 6.5 million times since its late April release. By comparison, the versions of Meta AI embedded in the company’s social media services topped 1 billion monthly users in May.
Over the past few years, regulators in the European Union have fined the Facebook parent several times over its privacy practices. At least one of those fines was issued because of Meta’s interface design choices. It’s unclear whether the inadvertent data leaks tied to the Meta AI app’s Share button might expose the company to similar scrutiny.
The app is not the only element of Meta’s consumer AI strategy that has drawn criticism. Earlier this year, the company deleted a collection of AI-powered Facebook and Instagram profiles after they drew a cool reception from users.
Image: Meta