A security flaw in OpenAI’s ChatGPT application programming interface could be used to initiate a distributed denial-of-service attack on websites, according to a researcher.
The discovery was made by Benjamin Flesch, a security researcher in Germany, who detailed the vulnerability and how it could be exploited on GitHub. According to Flesch, the flaw lies in the API’s handling of HTTP POST requests to the /backend-api/attributions endpoint. The endpoint allows a list of hyperlinks to be provided through the “urls” parameter.
The problem arises from an absence of limits on the number of hyperlinks that can be included in a single request, so an attacker can easily pack one request with thousands of URLs via the API. Additionally, OpenAI's API does not verify whether the hyperlinks lead to the same resource or whether they are duplicates.
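As a rough illustration of the payload shape Flesch describes (not a working exploit), a single POST body can name the same host thousands of times; the endpoint path and "urls" parameter come from his write-up, while the target host, query-string trick and count below are placeholders:

```python
import json

# Placeholder victim host -- purely illustrative.
target = "https://victim.example/"

# Appending a unique query string makes every entry a syntactically
# distinct URL, so even a naive duplicate check would not catch them,
# yet they all resolve to the same resource.
urls = [f"{target}?cb={i}" for i in range(5000)]

# One request body, thousands of hyperlinks.
payload = json.dumps({"urls": urls})

print(len(urls))                     # 5000 hyperlinks in a single request
print(len(set(urls)) == len(urls))   # True: all entries are distinct strings
```

Because OpenAI's servers fetch each entry, one small request from the attacker fans out into thousands of outbound connections to the victim, which is the amplification Flesch highlights.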
The vulnerability can be exploited to overwhelm any website a malicious user wants to target. By including thousands of hyperlinks in a single request, an attacker can cause OpenAI's servers to generate a massive volume of HTTP requests to the victim's website. The simultaneous connections can strain or even take down the targeted site's infrastructure, effectively mounting a DDoS attack.
The severity is compounded by the absence of rate limiting and duplicate-request filtering in OpenAI's API. Without such safeguards in place, Flesch argues, OpenAI inadvertently provides an amplification vector that can be abused for malicious purposes.
Flesch also notes that the vulnerability showcases poor programming practices and a lack of attention to security considerations. He recommends that OpenAI address the issue by implementing strict limits on the number of URLs that can be submitted, filtering duplicate requests, and adding rate-limiting measures to prevent abuse.
Security experts agree with Flesch’s assessment. Elad Schulman, founder and chief executive of generative AI security company Lasso Security Inc., told News via email that “ChatGPT crawlers initiated via chatbots pose significant risks to businesses, including damage to reputation, data exploitation and resource depletion through attacks such as DDoS and denial of wallet.”
“Hackers targeting generative AI chatbots can exploit chatbots to drain a victim’s financial resources, especially in the absence of necessary guardrails,” Schulman added. “By leveraging these techniques, hackers can easily spend a monthly budget of a large language model-based chatbot in just a day.”