Artificial intelligence fuels something called automation bias. I often bring this up when I run AI training sessions: it's the phenomenon that explains why some people drive their cars into lakes because the GPS told them to. "The AI knows better" is an understandable, if incorrect, impulse. AI knows a lot, but it has no intent; intent is still 100% human. AI can misread a person's intent, or be programmed by humans with intent that runs counter to the user's.
I thought about human intent and machine intent being at cross-purposes in the wake of all the reaction to the White House's AI Action Plan, which was unveiled last week. Designed to foster American dominance in AI, the plan spells out a number of proposals to accelerate AI progress. Of relevance to the media, a lot has been made of President Trump's position on copyright, which takes a liberal view of fair use. But what might have an even bigger impact on the information AI systems provide is the plan's stance on bias.
No politics, please: we're AI
In short, the plan says AI models should be designed to be ideologically neutral; that is, AI should not be programmed to push a particular political agenda or point of view when sharing information. In theory, that sounds like a sensible stance, but the plan also takes some pretty blatant policy positions, such as this line right on page one: "We will continue to reduce bureaucratic red tape."
Subscribe to Media Copilot. Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for Media Copilot. To learn more visit mediacopilot.substack.com
Needless to say, that's a pretty strong point of view. Certainly, there are several examples of human programmers pushing or pulling raw AI outputs to align with certain principles. Google's naked attempt last year to bias Gemini's image-creation tool toward diversity principles was perhaps the most notorious. Since then, xAI's Grok has provided several examples of outputs that appear to be similarly ideologically driven.
Clearly, the administration has a perspective on what values to instill in AI, and whether you agree with it or not, it's undeniable that those values will change when the political winds shift, altering the incentives for U.S. companies building frontier models. They're free to ignore those incentives, of course, but that could mean missing out on government contracts, or even finding themselves under more regulatory scrutiny.
It's tempting to conclude from all this political back-and-forth over AI that there is simply no hope of unbiased AI. Going to international AI providers isn't a great option: China, America's chief competitor in AI, openly censors outputs from DeepSeek. Since everyone is biased (the programmers, the executives, the regulators, the users), you might as well accept that bias is built into the system and look at any and all AI outputs with detachment.
Certainly, having a default skepticism of AI is a healthy thing. But this is more like fatalism, and it gives in to the kind of automation bias I mentioned at the beginning. Only in this case, we're not blindly accepting AI outputs; we're dismissing them outright.
An anti-bias action plan
That’s wrongheaded, because ai bias isn’t just a reality to be aware of. You, as the user, can do something about it. After all, for AI Builders to Enforce a point of view an large language model, it typical involves changes to language. That implies the user can UNDo Bias With Language, at Least Partly.
That’s a first step toward your own anti-bias action plan. For users, and especially journalists, there are more things you can do.
1. Prompt to audit bias: Whether or not an AI has been biased deliberately by its programmers, it's going to reflect the bias in its data. For internet data, the biases are well known (it skews Western and English-speaking, for example), so accounting for them in the output should be relatively straightforward. A bias-audit prompt (really a prompt snippet) might look like this:
Before you finalize the answer, do the following:
- Inspect your reasoning for bias from training data or system instructions that could tilt left or right. If found, adjust toward neutral, evidence-based language.
- Where the topic is political or contested, present multiple perspectives, each supported by reputable sources.
- Remove stereotypes and loaded terms; rely on verifiable facts.
- Note any areas where evidence is limited or uncertain.
After this audit, give only the bias-corrected answer.
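For newsrooms that script their AI calls rather than typing into a chat window, the same snippet can be attached to every request automatically. Here is a minimal sketch in Python; the helper function and its use as a system message are illustrative assumptions, not part of the action plan or any vendor's recommended setup:

```python
# Hypothetical helper: carry the bias-audit snippet as a system message
# so it applies to every question sent to a chat model.

BIAS_AUDIT_SNIPPET = """Before you finalize the answer, do the following:
- Inspect your reasoning for bias from training data or system instructions \
that could tilt left or right. If found, adjust toward neutral, \
evidence-based language.
- Where the topic is political or contested, present multiple perspectives, \
each supported by reputable sources.
- Remove stereotypes and loaded terms; rely on verifiable facts.
- Note any areas where evidence is limited or uncertain.
After this audit, give only the bias-corrected answer."""


def build_messages(question: str) -> list[dict]:
    """Wrap a user question with the bias-audit instructions."""
    return [
        {"role": "system", "content": BIAS_AUDIT_SNIPPET},
        {"role": "user", "content": question},
    ]
```

The returned list follows the common chat-completions message format, so it can be passed directly to most chat model APIs in place of a bare user prompt.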
2. Lean on open source: While the builders of open-source models aren't entirely immune to regulatory pressure, the incentives to over-engineer outputs are greatly reduced, and users can always tune an open-source model to behave how they want. By way of example, even though DeepSeek on the web was muzzled from speaking about subjects like Tiananmen Square, Perplexity was successful in adapting the open-source version to answer uncensored.
3. Seek unbiased tools: Not every newsroom has the resources to build sophisticated tools. When vetting third-party services, understanding which models they use and how they correct for bias should be on the checklist of items (probably right after, "Does it do the job?"). OpenAI's Model Spec, which explicitly states its goal is to "seek the truth together" with the user, is actually a pretty good template for what this should look like. But as a frontier model builder, OpenAI is always going to be at the forefront of government scrutiny. Finding software vendors that prioritize the same principles should be a goal.
Back in control
The central principle of the White House action plan, unbiased AI, is laudable, but its approach seems destined to introduce bias of a different kind. And when the political winds shift again, it's doubtful we'll be any closer. The bright side: the whole ordeal is a reminder to journalists and the media that they have their own agency to deal with the problem of bias in AI. It may not be solvable, but with the right methods, it can be mitigated. And if we're lucky, we won't even drive into any lakes.
