Eurostar International Ltd., the operator of the Eurostar trains that cross the English Channel, has been accused of mishandling the responsible disclosure of security flaws in its customer-facing artificial intelligence chatbot after security researchers were allegedly told their actions could be viewed as blackmail.
The allegation comes from U.K. security firm Pen Test Partners LLP, which said it identified multiple vulnerabilities in Eurostar’s AI-powered chatbot earlier this year. The vulnerabilities were discovered during routine testing rather than as part of a commissioned engagement.
The vulnerabilities detected included weaknesses in how the chatbot handled conversation history and message validation that could allow attackers to manipulate earlier messages in a chat session. The Pen Test Partners researchers were able to bypass safety guardrails, extract internal system information and inject arbitrary HTML into chatbot responses.
Though the chatbot was not connected to sensitive customer data, the firm warned that such flaws could become more serious if the system were later expanded to handle bookings, personal information, or account access.
As a legitimate company that practices ethical disclosure, Pen Test Partners attempted to report the issues through Eurostar’s vulnerability disclosure process beginning in mid-June. After receiving no response, it followed up multiple times via email and later through LinkedIn, but then it gets weird.
According to Pen Test Partners, a Eurostar security executive eventually responded but suggested that continued attempts to draw attention to the issue could be interpreted as blackmail.
“To say we were surprised and confused by this has to be a huge understatement – we had disclosed a vulnerability in good faith, were ignored, so escalated via LinkedIn private message,” Ross Donald, head of core pent test at Pen Test Partners, wrote in a blog post. “I think the definition of blackmail requires a threat to be made and there was of course no threat. We don’t work like that!”
Eurostar later acknowledged that the original disclosure email had been overlooked and said some of the reported issues were subsequently addressed. Exactly what it fixed is unclear, however.
“We still don’t know if it was being investigated for a while before that, if it was tracked, how they fixed it, or if they even fully fixed every issue!” added Donald.
As AI-powered customer interfaces become more widespread across industries, the Eurostar episode serves as a reminder that chatbot security is not just about AI behavior but also about the underlying software infrastructure that supports it.
The case also highlights the need to have trained staff who are willing to work with security professionals instead of erroneously accusing them of wrongdoing.
Image: News/Ideogram
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
- 15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more
- 11.4k+ theCUBE alumni — Connect with more than 11,400 tech and business leaders shaping the future through a unique trusted-based network.
About News Media
Founded by tech visionaries John Furrier and Dave Vellante, News Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
