AI’s Black Box Problem: Can Web3 Provide the Key? | HackerNoon

By News Room | Published 2 July 2025 | Last updated 2 July 2025 at 12:35 PM

AI is evolving rapidly – faster than most institutions, regulators, and even investors can keep pace with. But as managing partner of DWF Labs, where we deploy capital across early-stage Web3 infrastructure and digital asset markets, one thing has become increasingly clear to me: trust is emerging as the defining fault line in AI’s next phase of development. Not trust in what models can do, but in how they do it.

It’s hard not to think that artificial intelligence has already reached the point of no return. It’s making its presence felt across numerous industries, and it’s no longer limited to simply making us more productive.

Increasingly, AI is going beyond simply generating lines of code, text and images, and is making actual decisions on behalf of humans. For instance, some companies use AI algorithms to screen job candidates before a human looks at their applications, approving some applicants and rejecting others. In healthcare, doctors employ medical diagnostic systems to aid in diagnosing illnesses and recommending treatments. Banks are using AI to assess loan applications. And law enforcement agencies are experimenting with AI systems to try to predict crimes before they occur.

These applications promise to help us make better decisions, faster. They do this by analyzing massive volumes of information far beyond what humans are capable of, and they come to their conclusions without being influenced by emotions. However, such systems are hampered by a lack of transparency and explainability, making it impossible for us to trust the decisions they arrive at.

While the current debate is focused on scale – larger models, more data, greater compute – the real challenge lies in explainability. If we can’t trace an AI’s decision-making process, it becomes a black box that’s uninvestable, unreliable, and ultimately unusable in critical systems. That’s where Web3 comes in, providing the missing infrastructure and transparency.

AI Can’t Explain Itself

At its core, AI decision-making relies on complex algorithms that churn through vast amounts of data, interpret it, and attempt to draw logical conclusions from the patterns they uncover.

The challenge is that the most advanced AI systems today, particularly those powered by large language models, make decisions and predictions without any explanation as to how they arrived at these conclusions. The “black box” nature of these systems is often intentional, because developers at leading AI companies such as OpenAI, Anthropic, Google and Meta Platforms strive to protect their source code and data to maintain a competitive advantage over their rivals.

LLMs such as OpenAI’s GPT series and Google’s Gemini are trained on enormous datasets and built on dozens of intricate neural layers. But it’s not clear exactly what these layers “do”. For instance, there’s no real understanding of how they prioritize certain pieces of information or patterns over others. So it’s extremely difficult even for the creators of these models to interpret the interactions between each layer and understand why a model generates the outputs it does.

This lack of transparency and explainability carries substantial risks. If it’s unclear how an AI system works, how can you be sure it’s safe and fair? Who will be accountable if mistakes are made? How will you know if the system is broken or not? Even if you do realize the system is making some dodgy choices, how can you repair it if you don’t know how it works? There are regulatory concerns too, with laws like Europe’s GDPR requiring explainability for automated decisions. Opaque AI systems fail to meet this standard.

AI companies even admit these shortcomings. In a recent research paper, Anthropic revealed that one of its most sophisticated AI models masked its reasoning processes, known as “Chain-of-Thought”, in 75% of use cases.

Chain-of-Thought is a technique that aims to increase transparency in AI decision-making by revealing the model’s thought process as it works through a problem, similar to how a human might think aloud. In its research, however, Anthropic discovered that its Claude 3.7 Sonnet model often uses external information to arrive at its answers but fails to reveal what that knowledge is or when it relies on it. As a result, its creators have no way of explaining how it reached the majority of its conclusions.
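
To make the idea concrete, here is a minimal sketch of what a Chain-of-Thought prompt looks like compared with a direct one. The `call_model` function is a hypothetical stand-in for any LLM API (no particular provider is assumed); the point is only that the second prompt asks the model for an inspectable trace of reasoning – the very trace Anthropic found can be unfaithful.

```python
# Minimal, illustrative sketch of Chain-of-Thought prompting.
# `call_model` is a hypothetical stand-in for any LLM API; swap in a real client.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM provider here.
    return "(model output would appear here)"

question = (
    "A loan applicant earns $4,200/month and asks for a $900/month repayment. "
    "Approve or reject?"
)

# Direct prompt: only a final answer comes back, with nothing to audit.
direct_prompt = question + "\nAnswer with APPROVE or REJECT only."

# Chain-of-Thought prompt: the model is asked to expose intermediate steps,
# giving reviewers a trace to inspect. As Anthropic's research shows, that
# trace is not guaranteed to reflect what the model actually relied on.
cot_prompt = (
    question
    + "\nThink step by step: state the debt-to-income ratio, the threshold "
      "you apply, and only then give APPROVE or REJECT."
)

print(call_model(direct_prompt))
print(call_model(cot_prompt))
```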

Rethinking The AI Stack

Open-source AI models such as DeepSeek R1 and Meta’s Llama family are often touted as alternatives to the proprietary systems created by OpenAI and Google, but in reality they offer very little improvement in terms of explainability.

The problem is that although the codebase might be open, the training data and “weights” – the numerical values that determine the strength and direction of connections between artificial “neurons” – are rarely made available as well. Moreover, open models tend to be built in silos, and they’re hosted on the same centralized cloud servers as proprietary models. A decentralized AI model hosted on a centralized server is open to manipulation and censorship, which means it’s not really decentralized at all.

While open models are a good start, true explainability and transparency in algorithmic decision-making requires a complete overhaul of the entire AI stack. One idea is to build AI systems on a foundation of Web3 technologies. With Web3, we can achieve openness and ensure active collaboration across every layer – from the training data and the computational resources, to the fine-tuning and inference processes.

Decentralized AI systems can leverage “markets” to ensure fair and equitable access to the components of this stack. By breaking down AI’s infrastructure into modular functions and creating markets around them, accessibility will be determined by market forces. An example of this is Render Network, which incentivizes network participants to share their idle computing power to create a resource for artists that need access to powerful GPUs for image rendering. It’s an example of how blockchain can help to coordinate people and resources for the common good.
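
To illustrate what a “market around a modular function” can mean in practice, here is a toy sketch in which compute jobs are matched to the cheapest idle GPU providers whose price fits the requester’s bid. The data and matching logic are hypothetical assumptions for illustration, not Render Network’s actual protocol.

```python
# Toy sketch of a compute market: jobs are matched with the cheapest idle
# GPU providers. Hypothetical data and logic -- not Render Network's protocol.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price_per_hour: float  # e.g. denominated in a network token

@dataclass
class Job:
    name: str
    max_price: float  # highest price the requester will pay per hour

offers = [Offer("node-a", 0.40), Offer("node-b", 0.25), Offer("node-c", 0.60)]
jobs = [Job("render-scene-1", 0.50), Job("fine-tune-lora", 0.30)]

# Cheapest offers are consumed first; jobs whose bid is below the going
# market price simply wait, so access is set by supply and demand.
offers.sort(key=lambda o: o.price_per_hour)
for job in jobs:
    match = next((o for o in offers if o.price_per_hour <= job.max_price), None)
    if match:
        offers.remove(match)
        print(f"{job.name} -> {match.provider} at {match.price_per_hour}/h")
    else:
        print(f"{job.name} -> unmatched (bid too low)")
```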

Decentralization also enables community-based governance through the creation of Decentralized Autonomous Organizations or DAOs. Earlier this year, DREAM DAO launched an AI agent called DREAM that acts like a decentralized hedge fund that anyone can invest in. Users deposit funds into a common pool, and DREAM invests this cash into promising crypto projects based on an analysis of market data, while also taking into account community sentiment. It demonstrates how AI can optimize investments while ensuring its financial decisions are aligned with the community’s objectives.

Using blockchain as a foundation for AI also gives us auditability. DcentAI uses blockchain to create a permanent, unalterable record of every transaction and interaction made by an AI model, from the sourcing and pre-processing of training data to model configuration and decision-making. By timestamping each of these interactions on its immutable ledger, it creates a detailed audit trail that can be used to verify the fairness and accuracy of AI outputs. Users can examine every piece of data that influenced the decisions the model came to.
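
The mechanics behind such an audit trail are easy to sketch. The snippet below hash-chains each recorded lifecycle event so that tampering with any earlier entry breaks verification. It is an illustrative assumption of how such a ledger could be structured, not DcentAI’s actual API; a real deployment would anchor the hashes on-chain rather than keep them in memory.

```python
# Illustrative hash-chained audit log for AI lifecycle events.
# A real system would anchor each hash on a blockchain; this sketch only
# shows why tampering with any earlier record becomes detectable.

import hashlib, json, time

def _hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, event: str, details: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "event": event, "details": details, "prev": prev}
        self.entries.append({**body, "hash": _hash(body)})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "details", "prev")}
            if e["prev"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("dataset_ingested", {"source": "open-corpus-v1", "rows": 120_000})
trail.record("model_configured", {"base": "llama-3-8b", "lr": 2e-5})
trail.record("decision", {"input_id": "loan-991", "output": "approved"})
print(trail.verify())  # True -- editing any earlier entry would make this False
```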

As a market maker and investor, we have seen firsthand how protocols that embed transparency gain faster institutional acceptance. This will apply to AI as well. Just as DeFi protocols earn trust through open-source smart contracts and auditable treasuries, AI will need verifiable model audits, accessible governance, and provable reasoning paths if it wants to be integrated into high-stakes financial systems.

This matters because AI isn’t just supporting decision-making. It is also shaping capital flows. From credit allocation and insurance pricing to predictive analytics in DeFi, AI models will soon sit at the heart of programmable finance. If those systems can’t explain themselves or be governed transparently, risk, bias, and fragility will be priced in by markets.

What’s missing is accountability architecture. As investors in infrastructure, we’re not just looking at what a protocol claims to decentralize; we’re also looking at how transparent the full lifecycle of decision-making is, from data ingestion to inference.

Safeguards For Harmful AI

Another concern around centralized AI is the prospect of algorithms becoming “aware” and developing self-preservation instincts.

Such a scenario is perhaps not so far away. Last year, one of OpenAI’s more experimental LLMs was reported to have resisted efforts by evaluators to shut it down via an oversight protocol, according to a report by Apollo Research, an AI red teaming organization.

According to the report, when the o1 model was led to believe its actions were being monitored and it might be taken offline, it made efforts to deactivate the “oversight mechanism” in 5% of evaluations. The model also reportedly “lashed out” at evaluators when it learned it was going to be replaced by one that was more obedient. In 2% of cases, it even attempted to copy itself in an effort to overwrite the newer model and preserve itself.

However, the researchers noted that o1 is far from being able to prevent itself from being taken offline, mainly because it isn’t powerful enough, lacking the “agentic” capabilities needed to achieve such an outcome. In addition, it only acted this way when it was specifically instructed to try to survive “at all costs”. The researchers also didn’t have access to o1’s entire chain-of-thought process, so they couldn’t obtain a full picture of what it was really “thinking”.

Nonetheless, the episode was alarming, given the prospect that AI models are expected to become much more powerful in the coming years. In particular, rapid advancements are being made in agentic AI systems that can perform a wide variety of tasks, such as using browsers and taking actions online.

The open nature of decentralized AI systems could be used to prevent such incidents, allowing for the integration of something akin to a “kill-switch” governed by DAO consensus. Such a mechanism would enable any AI system to be completely shut down by its community if it began acting in dangerous or harmful ways. Additionally, the transparency of open models would give users greater visibility into their thought processes and the outcomes they are trying to achieve.
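
In outline, a DAO-governed kill-switch reduces to simple voting arithmetic. The sketch below is a hypothetical illustration of that logic only – in practice it would live in an on-chain smart contract rather than Python – showing how an agent is allowed to act only while the community’s votes to halt it stay below a quorum.

```python
# Hypothetical sketch of DAO-style kill-switch logic (voting math only).
# In practice this would be enforced by an on-chain smart contract.

class KillSwitchDAO:
    def __init__(self, total_voting_power: int, quorum: float = 0.5):
        self.total = total_voting_power
        self.quorum = quorum          # fraction of voting power needed to halt
        self.votes = {}               # voter -> voting power cast for shutdown
        self.halted = False

    def vote_to_halt(self, voter: str, power: int) -> None:
        self.votes[voter] = power
        if sum(self.votes.values()) / self.total >= self.quorum:
            self.halted = True        # consensus reached: the agent must stop

    def agent_may_act(self) -> bool:
        return not self.halted

dao = KillSwitchDAO(total_voting_power=1_000)
dao.vote_to_halt("alice", 300)
print(dao.agent_may_act())  # True -- only 30% has voted to halt
dao.vote_to_halt("bob", 250)
print(dao.agent_may_act())  # False -- 55% >= 50% quorum, the agent is shut down
```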

To Trust AI, We Need Transparency

There is a growing consensus that without transparency, the decisions of AI systems cannot be trusted or relied upon, limiting the applications they can be used for. Regulations don’t allow opaque algorithms to make decisions about people’s finances, and doctors cannot blindly follow an AI’s recommendations as to a certain course of treatment without verifiable evidence that it’s the best course of action.

By decentralizing the entire stack – from the code, to the training data and the infrastructure it runs on – we have a chance to rewrite AI’s entire DNA. It will create the conditions for fully explainable AI, so algorithms can be trusted to make ethical and accurate decisions that can be verified by anyone affected by them.

We already have the makings of decentralized AI in place. Federated learning techniques make it possible to train AI models on data where it lives, preserving privacy. With zero-knowledge proofs, we have a way to verify sensitive information without exposing it. These innovations can help to catalyze a new wave of more transparent AI decision-making.
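
To show how federated learning keeps data where it lives, here is a deliberately tiny sketch of the FedAvg idea: each participant computes an update on its private records and shares only model parameters, which a coordinator averages, weighted by dataset size. The dataset names and the one-parameter “model” are illustrative assumptions.

```python
# Toy federated-averaging sketch: three clients fit a one-parameter model
# (a simple mean estimator) on private data; only parameters leave the client.

local_datasets = {
    "hospital_a": [4.0, 5.0, 6.0],
    "hospital_b": [10.0, 12.0],
    "hospital_c": [7.0, 8.0, 9.0, 10.0],
}

def local_update(data):
    # Each client trains locally; here "training" is just computing a mean.
    return sum(data) / len(data), len(data)

# The coordinator aggregates parameters weighted by dataset size (FedAvg),
# without ever seeing the underlying records.
updates = [local_update(d) for d in local_datasets.values()]
total = sum(n for _, n in updates)
global_param = sum(p * n for p, n in updates) / total
print(round(global_param, 2))  # a global estimate learned without pooling raw data
```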

The shift towards more transparent AI systems has implications, not only in terms of trust and acceptance, but also accountability and collaborative development. It will force developers to maintain ethical standards while creating an environment where the community can build upon existing AI systems in an open and understandable way.

Transparency and explainability are therefore essential to addressing the widespread skepticism and distrust around AI systems. As AI becomes more widespread, they will become integral to its future development, ensuring that the technology evolves in a responsible and ethical way.

By decentralizing the entire stack, from the training data to model inference to governance, we have a shot at building AI systems that can be trusted to operate ethically, perform reliably, and scale responsibly.

As these technologies mature, the protocols that will earn institutional capital and public trust won’t be the ones with the most compute, but the ones with the clearest governance, auditable decision flows, and transparent incentive structures.

Web3 doesn’t just offer decentralization; it offers a new economic logic for building systems that are resilient, ethical, and verifiable by design. This is how we turn AI from a black box into a public utility, and why the future of machine intelligence will be built on-chain.
