Big data centers were once the ugly duckling of the tech infrastructure world: profitable but boring. Today they are the belle of the ball, serving as some of the biggest profit drivers for Google, Microsoft, and Amazon. Flexera surveyed 759 cloud decision-makers and found that 30% of companies spend as much as $12 million a year on cloud services, and 11% dish out as much as $60 million.
Demand for large-scale computing power has grown exponentially since the advent of effective AI. But as that demand rises across the world, the centralized infrastructure that big tech companies have built over the years, typically sited in rural, low-density areas, means that both retail users and businesses will eventually face costlier and slower performance.
To help understand the scale of the challenge ahead, I sat down with Butian Li, a top tech specialist in the world of computing power. She has worked as COO of Wabi and was an investor at Lightspeed and NGC Ventures. Li is heavily involved in democratizing the tech economy, trying to outsmart the big guns one node at a time by turning the devices in our own homes into mini data centers.
Let’s dive in.
Ade: What’s the new obsession with data centers? Is it new? Is it needed?
Butian: Big tech has been funneling billions into computing infrastructure to ensure they control distribution and access to the internet’s backbone. Their edge is centralization: it gives them absolute control over efficiency, which is the public story, while pricing and data ownership are the private ones you don’t hear about. The challenge lies in the location of these data centers. As more and more people use more and more computing for increasingly complicated tasks, having data centers hundreds of miles away is no longer viable; it’s like having a taxi service that operates in a different city. Instead of building massive centralized data centers, we need to optimize peer-to-peer connectivity, minimize latency, and make compute availability as seamless and automatic as cloud services. It’s how we deal with most digital problems today; computing is the final frontier.
Ade: The advent of AI has sharply increased computing and data costs for startups. How can one possibly maintain high-speed access for millions of users without turning over the data to big tech?
Butian: The key is to decentralize both computing and data. Edge computing means processing AI tasks closer to the end user, reducing latency and cutting costs. Feasibly, these computations could be deployed on user-owned devices rather than in corporate data centers. Instead of running everything through AWS, computations can happen on a global network of pre-existing infrastructure, like consumer devices, bypassing the cloud rent-seeking model. Developers don’t need to pay exorbitant fees to run workloads; they can instead tap into this network of ‘unintentional compute,’ pay those users a fee for the workloads their devices carry out, and save the majority of their compute budget for the variety of other things startups desperately need.
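As a rough illustration of the dispatch model Li describes, the sketch below picks the cheapest volunteer device that still meets a latency budget. Every name and number here is hypothetical; this is not Bless’s actual API, just a minimal picture of routing paid workloads to consumer hardware instead of a cloud region.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    latency_ms: float     # round-trip time from the end user (illustrative)
    rate_per_sec: float   # fee paid to the device owner, USD (illustrative)

# Hypothetical pool of volunteer consumer devices
pool = [
    Node("laptop-eu-1",    latency_ms=12, rate_per_sec=0.00002),
    Node("desktop-us-4",   latency_ms=48, rate_per_sec=0.00001),
    Node("mini-pc-asia-2", latency_ms=95, rate_per_sec=0.00001),
]

def dispatch(pool, max_latency_ms=50):
    """Pick the cheapest node that meets the latency budget, or None."""
    eligible = [n for n in pool if n.latency_ms <= max_latency_ms]
    return min(eligible, key=lambda n: n.rate_per_sec) if eligible else None

chosen = dispatch(pool)  # the distant mini-pc is filtered out by latency
```

In practice a real network would fold in far more signals than two numbers, but the economic point survives even in this toy form: the developer pays the device owner’s rate rather than a cloud provider’s markup.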
Ade: If idle laptops and personal computing devices can help create the next generation of decentralized data provision services, is it Bitcoin mining all over again? Won’t big players hijack this initiative as well?
Butian: Decentralized computing today is vastly different from Bitcoin mining. Mining was a race for hash rate dominance, leading to industrial-scale farms. Bless, on the other hand, is about maximizing distributed computing from everyday consumer devices on the edge. It’s about enabling participation at scale so that the processes of the internet run faster and cheaper.
Ade: Utilizing random strangers’ devices for decentralized data processing is a cool concept, but it presents even more privacy challenges for the end user. How do you bulletproof privacy and security so no one’s data gets jacked, especially for sensitive stuff like AI tasks?
Butian: We’ve run tests with the five million bog-standard devices that have opened up access to us, and collectively they benchmark among the top three supercomputers in the world. The other supercomputers on that list can only be accessed by a few government-sanctioned scientists and researchers. Bear in mind that we ran a dynamic consensus mechanism: clients could choose on the fly whether they wanted data processed with zero-knowledge proofs, data aggregation, or any other method, and no user device knows what it’s actually processing, thanks to the secure runtime.
What we’ve found through these tests is that you can have the performance, you can have greater control over your privacy, and you can do these things in a way that is cheaper, faster, and fully distributed.
Ade: Decentralized networking can result in considerable bottlenecks that are hard to overcome. We’ve heard about this in the context of projects like Ethereum, Solana, and various other blockchains that have faced major choke points over the years. Have you faced similar bottlenecks, and how have you solved them?
Butian: Our biggest challenge at Bless has been optimizing load balancing and peer-to-peer routing; in other words, deciding which devices process which workloads. We’ve built a mechanism pretty similar to the dating app in the Black Mirror episode ‘Hang the DJ’: it simulates, in milliseconds, the performance and compatibility of a computing task with each consumer device in contention for it, taking into account geolocation, internet speed, thread count, processing power, and whatever else the devices are currently working on. Thankfully, we’re not a blockchain; we don’t deal with blocks! We deal with computing, the stuff that powers the internet.
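The matching Li describes can be sketched as a weighted compatibility score over the signals she lists (latency, bandwidth, compute, free threads, current load). The weights, field names, and numbers below are invented for illustration and are not Bless’s actual tuning.

```python
def score(device, task):
    """Weighted compatibility score; higher is better. Weights illustrative."""
    if device["free_threads"] < task["threads"]:
        return 0.0  # device can't run the task at all
    latency = 1.0 / (1.0 + device["latency_ms"])            # closer is better
    bandwidth = min(device["mbps"] / task["mbps_needed"], 1.0)
    compute = min(device["gflops"] / task["gflops_needed"], 1.0)
    load_penalty = 1.0 - device["current_load"]              # 0.0 = fully busy
    return (0.4 * compute + 0.3 * latency + 0.2 * bandwidth) * load_penalty

task = {"threads": 4, "mbps_needed": 50, "gflops_needed": 100}
devices = [
    {"id": "a", "latency_ms": 10, "mbps": 100, "gflops": 200,
     "free_threads": 8, "current_load": 0.2},
    {"id": "b", "latency_ms": 5, "mbps": 200, "gflops": 150,
     "free_threads": 2, "current_load": 0.0},  # too few free threads
]
best = max(devices, key=lambda d: score(d, task))
```

Because the score is a pure function of cheap-to-read device stats, it can be evaluated across many candidates in well under a millisecond, which is what makes simulating every match before dispatch practical.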
Ade: Five years out, if users owned the internet’s compute backbone, what’s the biggest industry flip you’d bet on? AI, metaverse, web hosting?
Butian: AI processing. The computing takes place locally, meaning faster speeds, cheaper access, and, most importantly, it can be open-source, privacy-preserving, and available to all. The biggest challenge here isn’t actually a technical one. There are already excellent open-source models that could run on a local device (if you don’t mind waiting the length of a Netflix episode for a query). It’s a financial one. With companies like Microsoft subsidizing ChatGPT with a mammoth supply of computing, users will continue to free-ride on those subsidies right up until the tap runs dry and prices head to sustainable levels.
Ade: With Bless, you are working on the world’s first decentralized data center and computing resource, which currently has 4.3 million nodes. How are you able to run such a big distributed entity so well and lay the groundwork for future AI access and development?
Butian: By automating everything. Bless runs on an autonomous network, where nodes dynamically connect and allocate resources without manual intervention. We’re not a marketplace that sorts through which machines can power which requests. Everything is decided automatically by what we call ‘The Orchestrator,’ a mechanism that analyzes every device’s hardware, processing capacity, and geolocation and simulates how it would perform, all in milliseconds. The result is a compute network that is truly decentralized, scalable, and self-sustaining.