F5 has released its 2024 State of AI Application Strategy report, which finds that although 75% of companies are already implementing AI, 72% report major problems with data quality and an inability to scale their data practices. The report, presented at AppWorld in Madrid, also reveals that companies face difficulties with security, along with challenges across the infrastructure, data, model, application services and application layers, as they pursue scalable adoption while building a new stack to support more AI-powered services.
In the survey used to prepare the report, professionals chose AI as the most important technological trend for 2024. Even so, only 24% of organizations claim to have implemented generative AI at scale, although its use is increasing.
The most common use cases for generative AI currently tend to fulfil functions considered non-strategic. These include copilots and employee productivity tools, used by 40% of respondents, and customer service tools such as chatbots, used by 36%. However, 36% of respondents indicate that workflow automation tools would be a higher priority for them.
When it comes to challenges in deploying AI-based applications, respondents have three main concerns related to the infrastructure layer: the cost of compute to scale AI (62%), model security (57%), and model performance across the board (55%). On security, business leaders expect to increase their spending by 44% as they scale deployments.
When it comes to data, data maturity is a significant and pressing challenge for widespread AI deployment. 72% of respondents cite data quality issues and an inability to scale data practices as major obstacles to scaling AI, while another 53% cite a lack of AI and data skills as a major barrier. Furthermore, although 53% of companies say they have a defined data strategy, 77% of those surveyed do not have a single source of truth for their data.
Cybersecurity is another major concern for those who have to deliver AI services. Among the main threats respondents identify in this area are AI-driven attacks, data privacy risks and data leakage.
Respondents plan to act against these threats to protect their AI deployments: 42% say they are using, or plan to use, API security solutions to protect data as it moves through AI model training; 41% use or plan to use monitoring tools to gain visibility into AI application usage; 39% use or plan to use DDoS protection for AI models; and 38% use or plan to use bot protection for AI models.