Recently, I caught myself saying: “OK, Google, turn on the shower vent.”
Within seconds, my voice left my home in Haifa, traveled through submarine fiber networks to Europe, was processed in a Google data center, possibly routed through additional vendor clouds across continents, and then made its way back, only to activate a switch sitting 10 inches from my face. The techie nerd in me gets excited every time this happens. But … I could have just raised my hand and pressed it.
We live in both incredible and absurd times. Our growing tendency to deploy global systems, across multiple vendors, and continuous compute to solve problems that were already solved locally is something I feel we need to discuss.
To be clear, I am very much in favor of automation and agentic AI. I am educating myself with agentic AI courses to keep up with the times and use the latest capabilities. In many cases, they are transformative to businesses and consumers. Especially at scale, in repetitive processes, in data-heavy environments, or in cases where accessibility matters, AI agents do unlock real value.
But not every problem belongs in that category. And I feel an increasing number of AI-based applications and workflow automations fall into the "shower vent" category.
You may think this isn’t an issue: What does it matter if we bring the tech revolution to solve ridiculous tasks, just because we can?
But there are drawbacks and risks to the automate-everything ethos.
Three risks of automating without discipline
Operational risk: more points of failure, less control

That simple command depends on multiple systems working in sync: your device, your network, Google's infrastructure and potentially a third-party vendor cloud.
If any layer fails, the system fails. The same pattern is emerging in agentic AI workflows: multistep pipelines across LLMs, orchestration tools and external APIs. Each layer adds dependencies and complexity.
To give another example from my personal life: When my parents moved into their current home, they built it out as a "smart home." It worked great, until a smart light switch malfunctioned and the smart home company asked for $1,500 to send a specialized "smart home engineer" to fix what would have been a $5 DIY repair. This is the equivalent of hiring AI engineers and automation experts to support a workflow that a junior, nontechnical person could have handled in 10 minutes.
And that brings me to the next point.
Economic risk: hidden and compounding costs

Voice commands and AI workflows feel inexpensive at small scale, but they rely on paid infrastructure: compute, API calls, tokens, orchestration layers and vendor integrations.
In many cases, especially at scale, when implemented for those "ridiculous" tasks, the cost of automation can approach, or even exceed, the value of the task being automated. We must ensure we invest in AI and automation where it makes economic sense.
Environmental and strategic risk: scaling inefficiency

Data centers already account for hundreds of millions of tons of CO₂ emissions annually, a figure estimated to grow to 2.5 billion tons by 2030. AI is responsible for a growing share of that total.
While each small agentic AI workflow may account for only a few grams of CO₂ emissions, at scale these inefficiencies compound into real environmental impact. More importantly, this reflects a strategic issue: automating for its own sake. That mindset pulls focus away from solving meaningful problems.
Itay Sagie is a strategic adviser to tech companies and investors, specializing in strategy, growth and M&A, a guest contributor to Crunchbase News, and a seasoned lecturer. Learn more about his advisory services, lectures and courses at SagieCapital.com. Connect with him on LinkedIn for further insights and discussions.
Illustration: Dom Guzman
