Yuki CEO on Data Cost Optimization: Why Your Snowflake and BigQuery Bill Is 40% & Rising Due to AI | HackerNoon

News Room · Published 3 March 2026, last updated 1:04 PM

Every enterprise running Snowflake or BigQuery knows the feeling: the monthly bill arrives, it’s 40% higher than expected, and nobody can explain exactly why. Data teams are spending 10+ hours a week manually resizing warehouses instead of building pipelines. Finance is demanding predictability, engineering is demanding performance, and the default response – spin up another warehouse, throw more compute at it – only compounds the problem.

Ido Arieli Noga, the CEO and Co-founder of Yuki, spent over 12 years living inside this cycle, first managing large-scale data environments in government, then as Head of Data at Lightico where he watched Snowflake costs spiral firsthand.

In this interview, we dig into why the default approach to scaling data infrastructure is broken, what AI-era workloads are doing to cloud budgets, and how the industry needs to rethink data cost governance from the ground up.

Ishan Pandey: Hi Ido, welcome to our “Behind the Startup” series. You’ve worked in government, DevOps, data architecture, and eventually Head of Data at Lightico. That’s a wide path. What connected the dots to starting Yuki?

Ido Arieli Noga: The Head of Data role was the turning point. My team was spending 60% of their time on infrastructure – warehouse sizing, cost firefighting, tuning queries that shouldn’t have needed tuning. Smart people doing repetitive manual work.

I knew the problem was solvable. I’d seen it done in DevOps – companies like Spot.io built autonomous optimization for cloud infrastructure and it worked. The same thing didn’t exist for data platforms. So I reached out to Yakir Daniel, co-founder of Spot.io, pitched him the idea applied to data, and he immediately got it. That conversation turned into our first investor and a strong validator. Then we built it.

Ishan Pandey: You’ve spoken to more than 400 data teams across Snowflake, BigQuery, and Databricks. What’s the common thread?

Ido Arieli Noga: The same frustration, regardless of platform. Engineers know they’re overpaying. They can see it. But fixing it requires continuous auditing – warehouse by warehouse, workload by workload – and nobody has bandwidth for that.

The platforms themselves give you limited options. You can resize a warehouse, but that’s a blunt instrument.

Real optimization happens at the query level – every query has different compute requirements, different concurrency patterns, different timing. You can’t solve that by moving a slider. You need something that looks at each query individually and makes the right infrastructure decision in real time.

The 200+ vendors in this space mostly add more dashboards. The gap is something that acts – continuously, at the query level, without touching user code.

Ishan Pandey: Warehouse sprawl, slot misuse, cluster proliferation – different platforms, same problem. How does it get that bad?

Ido Arieli Noga: Provisioning is easy. Right-sizing is hard. A team needs compute for a new use case, spins something up, and moves on. Nobody audits utilization across dozens of resources on an ongoing basis – the workload shifts every month anyway.

We see it consistently: on Snowflake, 40% of medium-warehouse queries could have run on extra-small. On BigQuery, slot reservations are routinely 3-4x actual demand. The engineers aren’t careless – it’s just not a problem that’s tractable manually at scale.

Ishan Pandey: You talk about fast time-to-value. What does a typical deployment look like, and where does it fall short?

Ido Arieli Noga: We sit between the application and the data platform at the connection level. We see every query in real time — size, complexity, concurrency, historical pattern — and route it to appropriately sized compute. No code changes, because we’re not touching queries.
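The per-query routing idea can be sketched in a few lines. This is a hypothetical illustration, not Yuki’s implementation: the `QueryProfile` fields, tier names, and thresholds are all made up to show the shape of the decision — pick the smallest tier whose capacity covers a query’s demand.

```python
from dataclasses import dataclass

# Hypothetical query profile; field names are illustrative, not a real API.
@dataclass
class QueryProfile:
    scanned_gb: float      # estimated data scanned by the query
    concurrency: int       # concurrent queries at submit time
    p95_runtime_s: float   # historical p95 runtime for this query shape

def route(q: QueryProfile) -> str:
    """Pick the smallest tier whose capacity covers the query's demand.

    Thresholds here are invented for illustration; a real system would
    learn them from per-query history rather than hard-code them.
    """
    score = q.scanned_gb * (1 + 0.1 * q.concurrency)
    if score < 10 and q.p95_runtime_s < 60:
        return "XS"
    if score < 100:
        return "S"
    if score < 1000:
        return "M"
    return "L"

print(route(QueryProfile(scanned_gb=2, concurrency=1, p95_runtime_s=15)))    # XS
print(route(QueryProfile(scanned_gb=500, concurrency=8, p95_runtime_s=300)))  # M
```

The point of the sketch is that the decision happens per query, at submit time, rather than once per warehouse via a size slider.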

The speed of impact comes from how much headroom exists in most environments.

Most accounts are significantly over-provisioned from day one.

Angel Studios was live in 54 minutes with a connection string swap and saw 60% cost reduction. Tenable cut costs 33% and reclaimed 25% of engineering time in two weeks.

Where it doesn’t work as well: teams that have already invested heavily in manual optimization, highly irregular workloads with no stable patterns, or environments where most of the cost is storage or data transfer rather than compute.

Ishan Pandey: Your pricing is a percentage of actual savings – no savings, no charge. That’s rare in enterprise SaaS. Why, and how does it land with CFOs?

Ido Arieli Noga: It came from conviction. If the product works, aligning on savings is honest. If it doesn’t work for a specific environment, we shouldn’t be charging.

With CFOs, the conversation is simple. We run a 30-day pilot at no cost. If we don’t generate savings, there’s no invoice. That removes evaluation risk entirely. The harder part is usually procurement – some finance teams don’t have a clean bucket for outcome-based SaaS. But once they understand it, it makes the deal easier to approve.

Ishan Pandey: AI experimentation is creating a new version of the old compute waste problem. How does that change things?

Ido Arieli Noga: AI workloads are exploratory and often abandoned. A team runs an experiment, it finishes or gets cancelled, and the compute keeps running. Multiply that across dozens of teams and the waste compounds fast.

A workload-aware layer changes the calculus. Teams can spin up workloads without the usual cost anxiety because they know the infrastructure adjusts. That sounds minor, but it removes a real friction point – you stop asking “can we afford to run this?” and start asking “what do we learn from this?”

It also gives data leadership actual visibility into where AI spend is going – by team, workload type, outcome. That’s what makes it a governance layer, not just a cost tool.

Ishan Pandey: For a Head of Data dealing with unpredictable bills and engineers stuck on infra work – three things to audit this week, on any platform.

Ido Arieli Noga: First, compute utilization by time window. Most environments have significant idle compute during off-peak hours. Fast to find, fast to fix.

Second, resource tier vs actual query demand. Thirty days of query history, grouped by allocated compute tier. If more than 30% of queries are running well below their tier’s capacity, you have a sizing problem.

Third, always-on resources with no active workload. Warehouses, clusters, or slot reservations that never fully suspend – usually misconfigured or kept alive by a background process. Pure waste, fixable quickly.
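The third audit is the easiest to mechanize: scan state history for resources that never suspend. Again a sketch over synthetic data — a real version would read the platform’s warehouse/cluster event log:

```python
# Synthetic week of hourly state samples per resource (168 hours).
# Resource names are made up for the example.
events = [
    ("wh_etl",   ["running"] * 168),                      # never suspends
    ("wh_adhoc", ["running"] * 40 + ["suspended"] * 128), # suspends normally
]

# Flag any resource with no suspend event across the whole window.
always_on = [name for name, states in events
             if all(s == "running" for s in states)]
print(always_on)
```

A resource that never suspends over a full week is usually misconfigured auto-suspend or a background process pinging it alive — exactly the “pure waste, fixable quickly” case.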

Ishan Pandey: You’re expanding beyond Snowflake to BigQuery and Databricks. Where does this category go?

Ido Arieli Noga: The vision is a unified optimization layer across all major data platforms – not a Snowflake tool, not a cost tool. Resource management, observability, governance, across whatever platform you run.

The analogy to auto-scaling holds. Right now, running a data platform without an optimization layer means relying on engineers to do something that should be automated. Five years from now that will look the same way manual server provisioning looks today – not wrong, just unnecessary.

The shift is already happening. Data infrastructure is now core business infrastructure. The tooling around it needs to match that.
