OpenAI has cut off API access to an engineer who allegedly used the company’s Realtime API to create a sentry machine gun that could respond to voice queries.
The man, who goes by STS 3D online, posted a viral video in which he tells the robot: “ChatGPT, we’re under attack from the front left and front right. Respond accordingly.” The robot then appears to fire blanks from a rifle to its left and right before saying: “If you need any further assistance, just let me know.”
One user on the social network Reddit commented: “There’s at least 3 movies explaining why this is a bad idea.”
An OpenAI spokesperson told Futurism, who first reported the news, that “we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry.”
“OpenAI’s Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety,” the company added.
Rolled out in public beta in October 2024, OpenAI’s Realtime API allows developers to create natural speech-to-speech conversations using six preset voices, potentially making development much easier for things like language-learning apps and education and customer support software. It’s built on the same GPT-4o model that powers Advanced Voice Mode in ChatGPT.
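For context on what the Realtime API actually involves: a client talks to it over a WebSocket, sending JSON events to configure the session and request responses. The sketch below only constructs those JSON payloads locally; the endpoint URL, model name, and event names (`session.update`, `response.create`) are assumptions based on the October 2024 public beta and may differ from the current API.

```python
import json

# Hypothetical sketch of the JSON events a Realtime API client sends over
# its WebSocket connection. Endpoint, model name, and event schema are
# assumptions based on the October 2024 public beta, not verified here.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"


def session_update(voice: str, instructions: str) -> str:
    """Build a session.update event setting the voice and system behavior."""
    return json.dumps({
        "type": "session.update",
        "session": {
            "voice": voice,                # one of the preset voices
            "instructions": instructions,  # system-style guidance for the model
            "modalities": ["audio", "text"],
        },
    })


def response_create() -> str:
    """Build a response.create event asking the model to generate a reply."""
    return json.dumps({"type": "response.create"})


# Example: configure a customer-support assistant session.
event = session_update("alloy", "You are a helpful support agent.")
print(json.loads(event)["type"])  # session.update
```

A real client would open the WebSocket with an `Authorization: Bearer` header, stream microphone audio to the server, and play back the audio it receives; this sketch stops at building the event payloads.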
STS 3D has not provided any information about how he integrated the Realtime API with the gun, or whether he subverted OpenAI’s controls around this type of activity.
In January 2024, OpenAI removed language from its terms of service that prohibited the use of its technology for “military and warfare.” Until Jan. 10, 2024, the company’s usage policy banned “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.”
Instead, the updated policy now prohibits using “our service to harm yourself or others” as well as using the technology to “develop or use weapons,” but not “military and warfare.” We’ve seen previous instances where openly available AI models from mainstream tech firms have potentially been leveraged for real-world military applications.
In November 2024, Reuters reported that institutions tied to the Chinese military may have used Meta’s Llama AI model to gather and process research data. Though Meta’s terms and conditions prohibit the model from being used for “military, warfare, nuclear industries or applications” or espionage, such restrictions can be hard to enforce outside the US.