AI’s biggest problem isn’t intelligence. It’s implementation

News Room
Published 19 February 2026, last updated 12:32 PM

Welcome to AI Decoded, Fast Company‘s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

The AI ‘arms race’ may be more of an ‘arm-twist’

The big AI companies tell us that AI will soon remake every aspect of business in every industry. Many of us are left wondering when that will actually happen in the real world, when the so-called “AI takeoff” will arrive. But because there are so many variables, so many different kinds of organizations, jobs, and workers, there’s no satisfying answer. In the absence of hard evidence, we rely on anecdotes: success stories from founders, influencers, and early adopters posting on X or TikTok.

Economists and investors are just as eager to answer the “when” question. They want to know how quickly AI’s effects will materialize, and how much cost savings and productivity growth it will generate. Policymakers are focused on the risks: How many jobs will be lost, and which ones? What will the downstream effects be on the social safety net?

Business schools and consulting firms have turned to research for answers. One of the most consequential recent efforts was a 2025 MIT study, which found that despite spending between $30 billion and $40 billion on generative AI, 95% of large companies had seen “no measurable P&L (profit and loss) impact.”

More recent research paints a somewhat rosier picture. A recent study from the Wharton School found that three out of four enterprise leaders “reported positive returns on AI investments, and 88% plan to increase spending in the next year.”

My sense is that the timing of AI takeoff is hard to grasp because adoption is so uneven and depends a lot on the application of the AI. Software developers, for example, are seeing clear efficiency gains from AI coding agents, and retailers are benefiting from smarter customer-service chatbots that can resolve more issues automatically.

It also depends on the culture of the organization. Companies with clear strategies, good data, some PhDs, and internal AI enthusiasts are making real progress. I suspect that many older, less tech-oriented companies remain stuck in pilot mode, struggling to prove ROI.

Other studies show that in the initial phases of deployment, human workers must invest significant time correcting or training AI tools, which severely limits net productivity gains. Still others find that workers in AI-forward organizations do see substantial productivity improvements, but become more ambitious as a result and end up working more, not less.

The MIT researchers included an interesting disclaimer on their research results. Their sobering findings, they noted, did not reflect the limitations of the AI tools themselves, but rather the fact that organizations often need years to adapt their people and processes to the new technology.

So while AI companies constantly hype the ever-growing intelligence of their models, what ultimately matters is how quickly large organizations can integrate those tools into everyday work. The AI revolution is, in this sense, more of an arm-twist than an arms race. The road to ROI runs through people and culture. And that human bottleneck may ultimately determine when the AI industry, and its backers, begin to see returns on their enormous investments.

New benchmark finds that AI fails to do most digital gig work

AI companies keep releasing smarter models at a rapid pace. But the industry’s primary way of proving that progress—benchmarks—doesn’t fully capture how well AI agents perform on real-world projects. A relatively new benchmark called the Remote Labor Index (RLI) tries to close that gap by testing AI agents on projects similar to those given to remote contractors. These include tasks in game development, product design, and video animation. Some of the assignments, based on actual contract jobs, would take human workers more than 100 hours to complete and cost over $10,000 in labor.

Right now, some of the industry’s best models don’t perform very well on the RLI. In tests conducted late last year, AI agents powered by models from the top AI developers including OpenAI, Anthropic, Google, and others could complete barely any of the projects. The top-performing agent, powered by Anthropic’s Opus 4.5 model, completed just 3.5% of the jobs. (Anthropic has since released Opus 4.6, but it hasn’t yet been evaluated on the RLI.)

The test puts the question of the current applicability of agents in a different light, and may temper some of the most bullish claims about agent effectiveness coming from the AI industry.

Silicon Valley’s pesky ‘principles’ re-emerge, irking the White House and Pentagon

The Pentagon and the White House are big mad at the safety-conscious AI company Anthropic. Why? Because Anthropic doesn’t want its AI used to target humans with autonomous drones, or for mass surveillance of US citizens.

Anthropic now has a $200 million contract allowing the use of its Claude chatbot and models by federal agency workers. It was among the first companies to get approval to work with sensitive government data, and the first AI company to build a specialized model for intelligence work. But the company has long had clear rules in its user guidelines that its models aren’t to be used for harm.

The Pentagon believes that after paying for the technology it should be able to use it for any legal application. But acceptable use for AI is different from that for traditional software. AI’s potential for autonomy makes it more dangerous by nature, and its risks grow the closer it is used to the battlefield.

The disagreement, if not resolved, could potentially jeopardize Anthropic’s contract with the government. But it could get worse. Over the weekend, the Pentagon said it was considering classifying Anthropic as a “supply chain risk,” which would mean the government views Anthropic as roughly as trustworthy as Huawei. Government contractors of all kinds would be pushed to stop using Anthropic.

Anthropic’s limits on certain defense-related uses are laid out in its Constitution, a document that describes the values and behaviors it intends its models to follow. Claude, it says, should be a “genuinely good, wise, and virtuous agent.” “We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position.” To critics in the Trump administration, that language translates to a mandate for wokeness.

The whole dust-up harkens back to 2018, when Google dropped its Project Maven contract with the government after employees revolted against Google technology being used for targeting humans in battle. Google still works with the government, and has softened its ethical guidelines over the years.

The truth is, tech companies don’t stand on principle like they used to. Many have settled into a kind of patronage relationship with the current regime, a relatively inexpensive way to avoid MAGA backlash while keeping shareholders satisfied. Anthropic, in its way, seems to be taking a different course, and it may suffer financially for it. But, in the longer term, the company could earn some respect, trust, and goodwill from many consumers and regulators. For a company whose product is as powerful and potentially dangerous as consumer AI, that could count for a lot.

More AI coverage from Fast Company: 

  • OpenAI, Google, and Perplexity near approval to host AI directly for the US government
  • New AI models are losing their edge almost immediately
  • Meta patents AI that lets dead people post from the great beyond
  • These 6 quotes from OpenClaw creator Peter Steinberger hint at the future of personal computing

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
