Anthropic says it is on pace to bring in $5 billion over the course of a year as its revenue has surged along with the value of the San Francisco-based company behind the Claude artificial intelligence model. Copyright AFP/File Chris Delmas
Coding is critical. It’s an art form, a great and very efficient way of driving yourself nuts, and essential to the trustworthy operation of any system. If you’ve ever done any coding (I have), you’ll know how incredibly nitpicky it can be.
Imagine a million-character block of code. None of it works. Some lucky soul has to find out why. That sort of thing.
AI coding is supposed to be the ultimate help with all of this. That’s the sales pitch. The reality, inevitably, is quite different.
AI coding is simply code written by AI. The questions are whether it works properly, whether it works at all, and whether it can be trusted.
Anthropic is the artificial intelligence company behind Claude AI. They are deeply involved in the real-world applications of AI. Anthropic ran a baseline study of the efficiency of AI-assisted coding, using a control group that performed the same tasks without AI.
This very apt, canary-like study method may well become a standard critical test for both AI coding and software engineering.
I’ll go further. It should be standard practice. Reliance on AI has already produced enough black holes to make standards for AI coding mandatory. For future software engineering qualifications, this or something like it will be the benchmark.
To their great credit, Anthropic found significant flaws in blindly trusting AI coding, and major issues with engineer performance standards. This study definitely wasn’t a cheerleading exercise. Not many words are minced in their study paper.
For example, using AI didn’t produce a “significant” time benefit. That’s one of the great market myths of the clunk-festival that is this generation of AI. It looks more like Parkinson’s Law for AI coding.
They also identified “cognitive offloading” as a skills metric. That’s pretty gutsy, given their own stake in AI. In plain terms, it measures whether engineers still understand the code and how it works. Can’t get much more basic than that.
It’s also very realistic. The assumption that AI can do it all is potentially lethal. There was a famous case in Australia where a business owner sacked her staff and replaced them with AI. My comment at the time was that she’d just given herself 12 new unpaid jobs, keeping an eye on all that work and overseeing it.
Anthropic emphasizes the need for oversight and strong quality controls in its commentary on its study. It’s beyond absurd to simply assume that any system, AI or not, gets everything right. Are you going to simply ignore your own accounts? Of course not. It’s a recipe for catastrophic failure.
There’s an instant payoff to that Parkinson’s Law analogy: more oversight, aka more people, will be needed for AI coding and for oversight in general. If something goes wrong in an AI system, someone has to identify the problems and fix them. That’s one of the fundamentals of Parkinson’s Law.
Anthropic seems to be much less than starry-eyed about AI coding. The market should be paying close attention. This is how AI will really work in the future, and the message is clear and uncompromising.
My advice:
Stop bleating about software and start focusing on the survival factor in coding.
__________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the publication or its members.
