When Mai-Lan Tomsen Bukovec joined Amazon Web Services more than a decade ago, the company’s Simple Storage Service, better known as S3, was a humble online bucket for photos, logs and the occasional startup backup.
Today it houses exabytes of corporate treasure, and Bukovec (pictured), now vice president for data and analytics, is busy teaching that bucket to think.
“Every AI application is a data application,” she tells me inside a glass conference room at AWS’s Seattle headquarters. “Your bottom turtle is always going to be the data — and the world’s data is in S3.”
I loved the “bottom turtle” reference: in information technology, it’s the analogy for the most fundamental layer on which everything else is built. Her point is that data is that foundation, and S3 is the primary, highly reliable platform for storing it in the cloud.
That mantra is guiding one of the most consequential rewrites of Amazon’s cloud playbook since the company invented rent-by-the-hour computing in 2006. For years S3’s selling points were durability and cost: store anything, pay pennies, never lose a byte. The new goal is more ambitious: Turn S3 into the default launchpad for artificial-intelligence agents that can sift billions of files, enforce corporate security rules and, eventually, run parts of a business without human help.
The video below is part of our editorial series, the “AWS and Ecosystem Leaders: Halftime to Re:Invent” Special Report digital event. Look for other articles from the event on News.
Plumbing the query to the bucket
Bukovec’s role changed quietly last fall, when AWS combined its storage unit with analytics groups Redshift and Athena, plus file, streaming and messaging services. “Our whole objective here is to make that path from the query … all the way down to storage, the most effortless, the most price-performant as possible,” she said.
The hinge of the plan is S3 Tables, unveiled at Amazon’s re:Invent conference in Las Vegas. Built on the open-source Iceberg format, S3 Tables let customers expose raw Parquet files — the lingua franca of data science — as if they were rows in a database. The result: Analysts can run standard SQL against petabytes of information without copying it into a warehouse.
“If you have Parquet data in S3,” Bukovec said, “you should store it in an S3 Table.”
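As a rough sketch of what that looks like in practice, an analyst can point any Iceberg-aware engine — Athena, Spark or Redshift, for instance — at an S3 Table and write ordinary SQL. The table and column names here are hypothetical, not a real schema:

```sql
-- Illustrative only: standard SQL against Parquet data exposed
-- through an S3 Table, with no copy into a separate warehouse.
SELECT customer_id,
       SUM(order_total) AS lifetime_value
FROM   sales.orders                     -- an S3 Table backed by Parquet files
WHERE  order_date >= DATE '2025-01-01'
GROUP  BY customer_id
ORDER  BY lifetime_value DESC
LIMIT  10;
```

The point is that the query engine reads the Parquet files in place; Iceberg supplies the table metadata that makes them behave like database rows.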
Iceberg’s appeal is interoperability — and, as I wrote earlier this year, a fast path to more capable AI. Nearly every modern analytics engine speaks the format, which means migrations — long the bane of chief data officers — can often be skipped. That matters as enterprises rush to feed proprietary information into large language models such as Anthropic’s Claude, now embedded in AWS’s Bedrock service.
From dashboards to data agents
Yet easier queries are only half the story. Bukovec believes the next productivity leap will come from autonomous programs she calls data-AI agents. Powered by increasingly “agentic” foundation models — she name-checks Anthropic’s new Sonnet 4 and Opus 4 — these bots can locate the right dataset, transform it and trigger downstream actions without human intervention.
“Everything that you can imagine a human developer [doing] in an application today can be done by an AI agent,” she said.
She points to StarHub, a Singapore-based company that straddles insurance and telecom services. StarHub has a production data agent that taps both structured databases and unstructured S3 stores to process insurance workflows around the clock. The pattern, said Bukovec, will spread as customers realize agents can be trained once and reused across departments.
The metadata lake
Before that future can materialize, companies need a reliable way to find the right files among billions. Bukovec’s answer sounds paradoxical: Build a data lake composed of metadata.
“I think the next generation of data lakes are actually going to be metadata,” she said. Earlier this year AWS made generally available S3 Metadata, which stores descriptive tags — governance labels, access logs, even AI-generated summaries — inside an S3 Table. Because the metadata itself lives in Iceberg format, users (or agents) can run SQL to discover, for example, every file accessed by finance last week or every image containing personally identifiable information.
“It is really going to be part of the bottom turtle of S3,” Bukovec said, borrowing the ancient fable of a world stacked on tortoises. “All the context about your data” that today resides in employees’ heads can instead be written to the metadata layer, letting software reason about storage without pulling the underlying bytes.
The payoff is speed and thrift. “Data at rest is always going to be your most cost-effective solution for storage,” she reminded me. Querying metadata means agents don’t load full objects until they know the data is relevant.
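The idea can be sketched in SQL. S3 Metadata exposes per-object records — key, size, tags and so on — as an Iceberg table, so an agent can filter on descriptions before touching any object. The table name, column names and tag values below are illustrative, not the exact S3 Metadata schema:

```sql
-- Illustrative only: find every object tagged as containing PII that
-- changed in the last week, without reading a single underlying byte.
SELECT key, size, last_modified_date
FROM   aws_s3_metadata.my_bucket_metadata
WHERE  object_tags['classification'] = 'pii'
  AND  last_modified_date >= CURRENT_DATE - INTERVAL '7' DAY;
```

Only the objects that survive this filter ever need to be fetched, which is where the speed and thrift come from.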
Guardrails for nonhuman users
More autonomy brings new risks. Bukovec insists the same security regime that reassured chief information security officers a decade ago will protect AI agents. “Whether it is a human or an AI, you have to make sure that you’ve established a data perimeter … the AI agent has to follow the rules of the road,” she said.
AWS’s defenses include identity and access management policies and Access Analyzer, a tool that uses formal (machine-verifiable) logic to prove no forbidden path exists to sensitive buckets — a technique Bukovec calls “automated reasoning.” The company will apply those checks to agent calls the same way it does to human ones.
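One common building block for such a data perimeter is a bucket policy that denies any request originating outside the company’s AWS organization — a rule that binds AI agents exactly as it binds people. A minimal sketch, with a placeholder bucket name and organization ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyAccessOutsideOrg",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::sensitive-bucket",
      "arn:aws:s3:::sensitive-bucket/*"
    ],
    "Condition": {
      "StringNotEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
    }
  }]
}
```

Access Analyzer can then prove, rather than merely test, that no identity outside the organization can reach the bucket.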
One copy, three workloads
S3’s transformation comes as two previously distinct camps — business-intelligence dashboards and Kubernetes-era platform engineers — converge on a single data pipeline. “Storage is where your application data, analytics data and AI workloads meet,” Bukovec says. Combining them promises lower latency and eliminates redundant copies, but requires formats such as Iceberg and security primitives robust enough for machines to execute without supervision.
Customers are taking notice. Bukovec says adoption of S3 Tables “has been tremendous,” largely because firms already own “exabytes of Parquet.” Early fans include Netflix, where the Iceberg format originated and which maintains vast catalogs of media files and usage logs. Engineers there are weighing how the new metadata layer could simplify governance and recommendation engines.
A native place for AI data
Bukovec won’t tip her hand on what AWS will unveil at the next re:Invent in December, but drops hints: “We have some really interesting things coming up in the next six months that make S3 a native place for AI data,” she said. Observers expect deeper integrations between Bedrock agents and S3 Metadata, and perhaps tools to let agents annotate files automatically with provenance or confidence scores.
In the meantime her team keeps tending to the “foundational things,” she said: making sure thousands of engineers protect “the integrity of your bytes, the integrity of your query.” S3’s original promises of 11-nines durability and pennies-per-gigabyte storage still matter, even as AI reshapes who — or what — does the querying.
For Bukovec, it all comes back to that bottom turtle — and S3 has come a long way. Walking away from the camera, she flashes a quick smile and says: “It’s not just storage, it’s a data platform. Stay tuned for more this summer and re:Invent.”
Here’s the full interview with Bukovec:
Photo: News