Welcome to The Circuit series. Meet our next interviewee:
Artificial intelligence in financial services has progressed far beyond a curiosity and has today become an everyday reality.
But while firms in the sector appear completely ready to embrace the technology, in much of the world regulation has not caught up with the capabilities of modern AI.
The onus is therefore on firms themselves to ensure that responsible practices are adopted and rigorously followed; the companies with real expertise in safe AI will be the ones that thrive.
In this exclusive interview with UKTN, Amanda Stent, head of AI strategy and research in the office of the CTO at Bloomberg, discusses how AI is being used in financial services, why responsible AI should be a top priority for firms and how it can be achieved.
What are the biggest areas of concern for financial firms increasingly relying on AI, and how can they be dealt with?
Financial institutions clearly see AI as both a strategic necessity and a competitive differentiator. Many firms see agentic AI driving incremental automation across the industry in the next three years, while some envision more far-reaching transformation of workflows and decision-making.
However, innovation is only meaningful when it translates into new, measurable capabilities directly within client workflows. While speed provides a competitive edge, trust remains the industry’s bedrock; the biggest challenge is not just building or deploying cutting-edge technology, but making it work reliably in a high-stakes, heavily regulated environment.
There is still a lot of uncertainty among financial professionals about the risks of AI. That is why transparent attribution, explainability, and guardrails are so important in the adoption cycle of this technology. When clients understand how insights are generated and can validate the accuracy and timeliness of data and analytics used, the system gains legitimacy.
To succeed, financial firms must treat AI as an operational and cultural shift, not just a technical one. This means rethinking existing processes – moving on from more than a decade in which AI made existing workflows more efficient towards a future in which AI may change workflows altogether.
Firms must also invest in training and upskilling personnel. By underscoring human judgment and ensuring employees are trained to oversee these tools, AI becomes a driver of augmented productivity and elevated impact.
What are the most important elements of ‘Responsible AI’?
Bloomberg is committed to several key principles related to the responsible use of AI and we utilise multiple processes to safeguard our AI-backed solutions.
Think of this like the “Swiss cheese” metaphor borrowed from Compliance, Data Security, and Cybersecurity practitioners. It equally applies here – no single mitigation effort can provide total safety; however, a combination of layers used together can help increase confidence in the outcomes.
For example, our AI solutions provide transparency and attribution so that users can always access the underlying data or sources that support a signal or generated output. Keep in mind, our AI solutions do not make decisions; they help human users – in this case, investment professionals – more efficiently process and work with massive amounts of information so that they can make more informed decisions.
By providing transparent attribution to source documents for AI-generated responses, we enable our users to validate the system’s output.
We also build for robustness, so we rigorously test and regularly monitor our AI solutions. This includes conducting regular red-teaming exercises to identify and mitigate risks of using generative AI models.
Where we do use generative AI in our products, we have implemented solutions to mitigate the risk of hallucinations and have also deployed AI guardrails that assess model inputs and outputs to mitigate various categories of risk.
Keep in mind that guardrails must also include an assessment that the use cases are appropriate, as well as provide a framework for handling and managing the data being fed into a model. They must also ensure the model ignores irrelevant information rather than surfacing it.
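As a rough illustration of what such layered checks can look like in practice – a generic sketch only, not Bloomberg's implementation, with all names and the example topic list invented for this illustration:

```python
from dataclasses import dataclass


@dataclass
class GuardrailResult:
    """Outcome of a single guardrail layer."""
    passed: bool
    reason: str = ""


def check_use_case(prompt: str, approved_topics: set[str]) -> GuardrailResult:
    """Input-side check: is the request within an approved use case?"""
    if not any(topic in prompt.lower() for topic in approved_topics):
        return GuardrailResult(False, "request falls outside approved use cases")
    return GuardrailResult(True)


def check_grounding(answer: str, sources: list[str]) -> GuardrailResult:
    """Output-side check: is there retrieved source material backing the answer?"""
    if not sources:
        return GuardrailResult(False, "no supporting sources were retrieved")
    return GuardrailResult(True)


def run_guardrails(prompt: str, answer: str, sources: list[str]) -> list[GuardrailResult]:
    """Apply each layer in turn; as in the Swiss cheese metaphor, no single
    check is sufficient on its own, but together they raise confidence."""
    approved = {"earnings", "filings", "market data"}  # hypothetical approved topics
    return [check_use_case(prompt, approved), check_grounding(answer, sources)]
```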
Interestingly, our Responsible AI team developed and published the first-ever AI content risk taxonomy tailored to meet the needs of real-world generative AI systems in capital markets financial services.
This goes beyond what general-purpose safety taxonomies and guardrail systems offer by addressing risks specific to the financial sector such as confidential disclosure, counterfactual narrative, financial services impartiality, and financial services misconduct.
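The sector-specific categories named above could, for example, be the labels an output guardrail assigns to generated text. In the sketch below only the category names come from the taxonomy as described; the enum and the stub classifier are purely illustrative:

```python
from enum import Enum


class FinancialContentRisk(Enum):
    """Sector-specific risk categories from the taxonomy described above."""
    CONFIDENTIAL_DISCLOSURE = "confidential disclosure"
    COUNTERFACTUAL_NARRATIVE = "counterfactual narrative"
    FINANCIAL_SERVICES_IMPARTIALITY = "financial services impartiality"
    FINANCIAL_SERVICES_MISCONDUCT = "financial services misconduct"


def flag_risks(generated_text: str) -> set[FinancialContentRisk]:
    """Stub: a real guardrail would use a trained classifier to map
    generated text onto these categories; here nothing is flagged."""
    return set()
```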
How can finance firms ensure transparency in their AI systems?
With the complexity of today’s AI and the incredible breadth of what it can do, we have moved beyond the days when you could see the feature weights for each model decision. However, certain types of transparency are still possible.
In today’s market, being able to prove your data is right is not just a safety rule; it is a massive competitive edge.
AI systems should clearly cite their sources for every assertion so that the user can independently verify the facts. They should also show their ‘reasoning’ – identifying the steps they take to surface data and insights – so the user can learn to understand the systems’ capabilities. And they should be transparent about what they cannot answer: when there is no data, document, or analytic to support a response, the system should say so instead of hallucinating one.
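A minimal sketch of that cite-or-decline behaviour, assuming hypothetical `retrieve` and `generate` callables rather than any specific product's API:

```python
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    citations: list[str]  # identifiers of the source documents relied on


def answer_with_attribution(question: str, retrieve, generate) -> Answer:
    """Answer only when supporting documents exist, and always cite them.

    `retrieve(question)` is assumed to return (doc_id, passage) pairs;
    `generate(question, passages)` is assumed to return answer text.
    """
    documents = retrieve(question)
    if not documents:
        # Be explicit about the gap instead of hallucinating a response.
        return Answer(text="No supporting data or documents were found for this question.",
                      citations=[])
    passages = [passage for _, passage in documents]
    return Answer(text=generate(question, passages),
                  citations=[doc_id for doc_id, _ in documents])
```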
Ultimately, it’s about creating a safe, verifiable environment where technology empowers experts to make informed, high-stakes decisions with confidence.
How advanced is the integration of modern AI in the activities of a company like Bloomberg?
Bloomberg has been building and deploying AI for more than 15 years, starting in 2009 with sentiment analysis of news stories. Our leadership position in applying AI in the finance domain enables our clients around the world to move faster, work smarter and more strategically, and achieve better results.
Throughout our time building and deploying AI solutions, our focus has always been on addressing current and foreseeable client needs – and our goal is to help our clients discover, analyse, and distil information based on our data and analytics.
There are two ways we use AI at Bloomberg. First, we use AI behind the scenes to extract and normalise data from unstructured documents. We also enrich documents with metadata about topics, summaries, entities, and sentiment.
Second, we incorporate AI directly into products like the Bloomberg Terminal and BQuant Enterprise, enabling our users to analyse and visualise data, gather and synthesise insights, create and publish content, and manage and analyse communications.
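To make the first, behind-the-scenes use concrete, a document-enrichment step might look roughly like this – a generic sketch in which the field names and the `nlp_pipeline` helper are assumptions, not Bloomberg's internal APIs:

```python
from dataclasses import dataclass


@dataclass
class EnrichedDocument:
    """A raw document plus the metadata layered on top of it."""
    text: str
    topics: list[str]
    entities: list[str]
    summary: str
    sentiment: float  # e.g. -1.0 (negative) to 1.0 (positive)


def enrich(text: str, nlp_pipeline) -> EnrichedDocument:
    """Run a (hypothetical) NLP pipeline over unstructured text so downstream
    products can search, filter, and rank on structured metadata."""
    return EnrichedDocument(
        text=text,
        topics=nlp_pipeline.topics(text),
        entities=nlp_pipeline.entities(text),
        summary=nlp_pipeline.summarise(text),
        sentiment=nlp_pipeline.sentiment(text),
    )
```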
All of this is facilitated by our investments in AI technology, open source and thought leadership. We have built our own robust and scalable AI infrastructure tech stack that operates at the speed of the market.
It is designed to support the continuous evolution of our products and services, ensuring we’re able to innovate quickly, train ML models, build robust solutions, and operate efficiently at scale. This technology stack enables us to use continuous discovery, development, and delivery methods to develop AI functionality. It also supports governance, including observability, traceability, and reproducibility.
Given the incredibly rapid pace of change in the tech sector, we must be able to adapt quickly from researching a new technology to deploying it in our products. But that also means we need to do our best to research and understand the new technology before putting it in client-facing solutions.
Our engineers have long had access to training programs and workshops that encourage them to learn and develop their competency in machine learning and AI.
We also have a system of Guilds across the firm – wide-reaching groups of engineers from across the organisation who are interested in, and dedicated to, organically sharing knowledge, tools, code, and practices related to a given technical area.
These Guilds serve as communities focused on specific technical topics of interest and on advancing that technology within the firm. Guild Leaders are charged with influencing the use of the technology internally and engaging with the respective technology community externally.
Is there a risk that competitiveness among companies could encourage rapid AI development without appropriate guardrails?
There is always a risk that the “fear of missing out” will lead to cutting corners on safety. Many firms feel they must move fast to survive. However, moving fast is useless if outputs can’t be trusted or if outcomes aren’t delivered.
This is why we are diligent in maintaining our focus to address only those things that will truly impact our business. With so many potential problems out there, we have to stay focused on the ones that matter to our customers.
It is also why we developed an enterprise-wide approach to AI risk management. Having a solid governance framework in place supports our responsible, trustworthy AI development. We are committed to continuously evolving this framework in response to the dynamic technological and regulatory landscape related to AI technologies.
