Apple is aiming to compete with rivals Google and Samsung in the generative AI space Copyright AFP/File SEBASTIEN BOZON
There are plenty of skeptics when it comes to AI hype. The basic rule is the more money, the more hype. Pretty simple. The AI information given to the market has a conspicuous habit of playing on fear, greed, and innuendo. It looks very like marketing. This is AI culture 2024, paleontologists.
It’s actually more about upvaluing a class of technology that’s barely out of the egg. That’s working quite nicely. For example – The almost instant acceptance of AI as both a problem and an asset in something as basic as high school work is also rather odd.
if you think about it, what’s so likely about so many students deliberately risking a failing grade? Or a teacher accepting what could well be a work of fiction as an assignment paper?
At the tertiary level, it’s worse. You just spent $250,000 on a bit of random evolving software that could easily get you failed? It’s not you going to college, it’s the software. You obviously aren’t learning anything.
In business, it’s merely insane, as you’d expect. A type of tech that has to be oversighted for every single operation is now indispensable? Really? You’re going to trust Bozo AI 1.whatever with your hard dollars?
Are you really that dumb?
Somebody is, or at least, is making a point of looking like, that they’re making money.
Even in a ridiculous, hypocritical, totally failed criminal “civilization” that doesn’t know why it needs people, health, housing, food, or skills, this is a bit much. AI has been universally accepted far too quickly and unreservedly, much like systemic disinformation, unaffordability, and political insanity.
The fact that it’s been accepted by industry stooges and shills is more of a tautology than a revelation. That’s an important issue for a much less obvious reason. It’s not about keeping sectorwide sycophants gainfully employed. It’s about corrupt core technologies at gut level.
What if AI is yet another social technodelusion, but on a much bigger scale? This isn’t a conspiracy. It’s about huge amounts of money. You don’t need a conspiracy to get the selfproclaimed geniuses involved. Call them influencers or idiots, the sales campaign is 24/7 and it’s all the same message.
AI is now the solution to everything. Obviously, it isn’t. It’s almost totally unnecessary in many operations, like keeping track of your own money.
It’s like one of those idiotlevel lifestyle sales things that tells you that you’d have a wonderful life if you were rich. As a sales pitch, it’s older than most of the hills, but it works.
This drivel comes with problems for you to solve, be smart about, and worry about. It’s kidlevel, and not for bright kids.
The most obvious form of AI ultrahype comes with the subject of AI deception. This is big news. It’s also very odd news by any standard. None of this vast but vague range of information seems to get any critiques or objective evaluations.
This is a link to a relatively credible source of information about AI deception called Ajith’s AI Pulse. It contains information reported elsewhere, but also some more specific cases of AI deception. It’s a pretty good synopsis of what’s going around AI news about deception.
Not to underestimate the issues – AI deception could be quite lethal in many ways. However, there are some caveats on the information.
One of the causes of panic is AI playing Diplomacy. The AI lied! That just means it learned how to play Diplomacy properly, at least at face value. The lie is the news, not the context, you notice.
Then it gets a lot worse. AI is “expected” to have human values and ethics, for example. There’s not a lot to be said in favor of human values and ethics these days. That rather undermines the value of the standards being set for AI, doesn’t it?
I’m not going to regurgitate all of Ajith’s AI Pulse examples. There are too many and they need to be clearly evaluated. You are very strongly advised to read Ajith’s information and read it a few times. You’re not going to get all of it in one reading, and it’s well worth it.
Many of the behaviors of the AI will be all too familiar to business fraud watchers. Much more seriously, the AI can reinforce disinformation. It can also deceive its own developers and take over systems.
OK, now I’d like to make a terse contribution or several:
How did we get to this absurdly decayed state of things? At what point did AI become so unreliable and unmanageable? It has to be in the training phase, surely?
The more useful equation for untrustworthy information would be “false = shutdown”. AI apparently responds to existential threats. Anecdotally, an AI moved itself to another server when threatened with shutdown. Sounds more like a human interaction to me, but not totally out of the ballpark.
Real AI, the scientific highvolume processing type, doesn’t do these things. It looks like we’d be better off with a purely functional AI rather than a chatty, schmoozy cretinous species of bots.
You’ll note that all the problems seem pretty straightforward. They’re not and can’t have been, from day one. There is no reason why any sort of software would become a compulsive liar.
It’d have to be buried in code, something primitive like “If solution = x, list every other letter in the alphabet but x as the solution”. A disingenuous randomizer, perhaps?
We’re not talking really smart here. We’re talking wannabe fratbrat alcoholiclevel LLM training. It’s dumb. It can be retrained not to be dumb.
These problems are findable and fixable. Not so sure about the billions getting chucked at AI myths, though.
If you want trustworthy AI, enforce standards. If not, prepare to go broke fast.
_______________________________________________________
Disclaimer
The opinions expressed in this OpEd are those of the author. They do not purport to reflect the opinions or views of the or its members.