Storage is key to AI projects that succeed | Computer Weekly

News Room | Published 14 September 2025

The hyperscaler cloud providers plan to spend $1tn on hardware optimised for artificial intelligence (AI) by 2028, according to market researcher Dell’Oro.

Meanwhile, enterprises are spending big on AI, with plans for AI projects fuelling record spending on datacentre hardware in 2024. In Asia, IDC found the region’s top 100 companies plan to spend 50% of their IT budget on AI. 

Despite all that spending, however, it is not just a case of throwing money at AI – many AI projects fail.

Gartner, for example, has reported that nearly a third of AI projects get dropped after failing to achieve any business value – and has even gloomier predictions for agentic AI.

So, how do organisations ensure the best possible chance of success for AI projects, and how do they evaluate the storage needed to support AI?

What does AI processing demand from storage?

Let’s first look at AI and the demands it places on compute and storage.

Broadly speaking, AI processing falls into two categories: training, in which a model learns to recognise patterns in a training dataset with varying degrees of human supervision; and inference, in which the trained model is put to work on real-world data.

The components of a successful AI project start well before training, however.

Here, we’re talking about data collection and preparation, with datasets that can vary hugely in nature. They can include backups, unstructured data, structured data and data curated into a data warehouse. Data might be held for long periods and prepared for AI training in a lengthy and considered process, or it might be needed at short notice for requirements that were not anticipated.

In other words, data for AI can take many forms and produce unpredictable requirements in terms of access.

On top of that, AI is extremely hungry for resources.

The voraciousness of graphics processing units (GPUs) is well known, but it’s worth recapping. For example, when Meta trained its open source Llama 3.1 large language model (LLM), it reportedly took around 40 million GPU hours on 16,000 GPUs. We’ll come back to what that means for storage below.
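To put that figure in perspective, here is a quick back-of-envelope calculation using only the numbers above; the assumption of fully parallel, continuous training is ours, purely for illustration:

  # Rough arithmetic from the reported figures above
  gpu_hours = 40_000_000
  gpus = 16_000
  hours_per_gpu = gpu_hours / gpus          # 2,500 hours of work per GPU
  days = hours_per_gpu / 24                 # roughly 104 days if run continuously
  print(f"~{hours_per_gpu:,.0f} hours per GPU, ~{days:.0f} days wall-clock")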

A large part of the storage demand arises because AI uses vectorised data. Put simply, when training a model, the attributes of the dataset being trained on are translated into vectorised – that is, high-dimensional – data.

That means data – say the numerous characteristics of an image dataset – is converted to an ordered set of datapoints on multiple axes so they can be compared, their proximity to each other calculated, and their similarity or otherwise determined.

The result is that vector databases often see significant growth in dataset size compared to source data, with as much as 10 times possible. That all has to be stored somewhere.
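As an illustration of why vectorised data inflates storage, here is a minimal Python/NumPy sketch; the embedding dimension, item count and 32-bit precision are assumptions for illustration, not figures from the article:

  import numpy as np

  # Hypothetical embedding setup: each source item becomes one dense vector
  dims = 1536                     # assumed embedding dimension
  num_items = 10_000_000          # assumed number of images or text chunks

  # Storage for the raw vectors alone, at 4 bytes per float32 value
  vector_bytes = num_items * dims * 4
  print(f"Raw vector storage: ~{vector_bytes / 1e9:.0f} GB")   # ~61 GB, before index overhead

  # Similarity between two items is computed on their vectors, e.g. cosine similarity
  a, b = np.random.rand(dims), np.random.rand(dims)
  cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
  print(f"Cosine similarity: {cosine:.3f}")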

Then there is frequent checkpointing to allow for recovery from failures, to be able to roll back to previous versions of a model should results need tuning, and to be able to demonstrate transparency in training for compliance purposes. Checkpoint size can vary according to model size and the number of checkpoints required, but it is likely to add significant data volume to storage capacity requirements.
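For a sense of the volumes involved, here is a hedged back-of-envelope estimate of checkpoint size; the parameter count, precision and retention policy are assumptions for illustration only:

  # Back-of-envelope checkpoint sizing (assumed figures)
  params = 70e9                   # hypothetical 70-billion-parameter model
  bytes_per_param = 2 + 12        # fp16 weights plus ~12 bytes of optimiser state
                                  # (fp32 master weights, momentum and variance)
  checkpoint_tb = params * bytes_per_param / 1e12
  print(f"One full checkpoint: ~{checkpoint_tb:.1f} TB")       # ~1 TB

  checkpoints_retained = 50       # assumed retention for rollback and compliance
  print(f"Retained checkpoints: ~{checkpoint_tb * checkpoints_retained:.0f} TB")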

Then there is retrieval augmented generation (RAG), which augments the model with internal data from the organisation, relevant to a specific industry vertical or academic specialisation, for example. Here again, RAG data depends on vectorising the dataset to allow it to be integrated into the overall architecture. 
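To make the RAG flow concrete, here is a minimal retrieval sketch in plain Python/NumPy rather than any particular vector database; the embed() function is a hypothetical stand-in for whichever embedding model an organisation actually uses:

  import numpy as np

  def embed(text: str) -> np.ndarray:
      """Hypothetical stand-in for a real embedding model call."""
      rng = np.random.default_rng(abs(hash(text)) % (2**32))
      v = rng.random(384)
      return v / np.linalg.norm(v)

  # 1. Vectorise the organisation's internal documents and store the vectors
  documents = ["Q3 sales report", "maintenance manual", "HR policy"]
  doc_vectors = np.stack([embed(d) for d in documents])

  # 2. At query time, embed the question and retrieve the closest document
  query = "What were sales like last quarter?"
  scores = doc_vectors @ embed(query)       # cosine similarity, since vectors are unit length
  best = documents[int(np.argmax(scores))]

  # 3. The retrieved text is added to the prompt that goes to the LLM
  prompt = f"Context: {best}\n\nQuestion: {query}"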

All this comes before AI models are used in production.

Next comes inference, which is the production end of AI when the model uses data it hasn’t seen before to draw conclusions or provide insights.

Inference is much less resource-hungry than training, especially in terms of processing, but its results must still be stored.

Alongside the data that must be retained for training and inference, we also have to consider the power usage profile of AI use cases.

And that profile is significant. Some sources have it that AI processing takes north of 30 times more energy to run than traditional task-oriented software, and that datacentre energy requirements are set to more than double by 2030.

Down at rack level, reports indicate that per-rack power draw has leapt from single figures or the low tens of kilowatts (kW) to as much as 100kW. That’s a massive leap, and it is down to the power-hungry nature of GPUs during training.

The implication here is that every watt allocated to storage reduces the number of GPUs that can be powered in the AI cluster. 
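As a hedged illustration of that trade-off, with all figures assumed purely for the sake of the arithmetic:

  # Illustrative rack power budget (assumed figures, not from the article)
  rack_budget_kw = 100            # total power available to the rack
  gpu_server_kw = 10              # hypothetical draw per GPU server
  storage_kw = 10                 # power drawn by storage in the same rack

  gpu_servers = (rack_budget_kw - storage_kw) // gpu_server_kw
  print(f"GPU servers within budget: {gpu_servers}")           # 9 rather than 10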

What kind of storage does AI require?

The task of data storage in AI is to maintain the supply of data to GPUs so they are used optimally. Storage must also have the capacity to retain large volumes of data that can be accessed rapidly – both to feed the GPUs and to allow the organisation to interrogate new datasets quickly.
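A rough sketch of what “keeping the GPUs fed” implies for aggregate storage bandwidth; the per-GPU read rate and cluster size are assumptions for illustration:

  # Back-of-envelope aggregate read bandwidth (assumed figures)
  gpus = 1_024
  gb_per_sec_per_gpu = 2          # hypothetical sustained read rate per GPU

  aggregate = gpus * gb_per_sec_per_gpu
  print(f"Aggregate read bandwidth: ~{aggregate / 1_000:.1f} TB/s")   # ~2 TB/s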

That more than likely means flash storage for rapid access and low latency. Capacity will obviously vary according to the scale of the workload, but hundreds of terabytes, or even petabytes, are possible.

High-density quad-level cell (QLC) flash has emerged as a strong contender for general-purpose storage, including, in some cases, for datasets that might be considered “secondary”, such as backup data. Use of QLC means customers can store data on flash at a lower cost – not quite as low as spinning disk, but with much more rapid access for AI workloads.

In some cases, storage suppliers offer AI infrastructure bundles certified to work with Nvidia compute, and these come with storage optimised for AI workloads as well as RAG pipelines that use Nvidia microservices.

The cloud is often used for AI workloads too, so a storage supplier’s integration with cloud storage should also be evaluated. Holding data in the cloud brings an element of portability, with data able to be moved closer to where it is processed.

AI projects often start in the cloud because of the ability to make use of processing resources on tap. Later, a project started on-site may need to burst to the cloud, so look for providers that can offer seamless connections and homogeneity of environment between datacentre and cloud storage.

AI success needs the right infrastructure

We can conclude that succeeding in AI at the enterprise level takes more than just having the right skills and datacentre resources.

AI is extremely hungry for data storage and energy. So, to maximise the chances of success, organisations need to ensure they have the capacity to store the data needed for AI training and the outputs that result from it, and that storage is optimised so that energy is conserved for data processing rather than for simply retaining data in storage arrays.

As we’ve seen, it will often be flash storage – and QLC flash in particular – that offers the rapid access, density and energy efficiency needed to provide the best chance of success.
