The Physics of AI

News Room · Published 26 October 2025

Recently, I took part in a roundtable debate on AI topics at the University of Oxford, joined by academics and AI and compliance professionals. One of the participants began talking about the scary “AI singularity”. I’m quite passionate about the world of physics, so the concept of an AI singularity got me thinking about the intersection between the two domains, and I started playing with the idea of finding correlations between AI and physics.

Technological Black Holes: The AI Singularity

In physics, a singularity is a region where spacetime curvature becomes infinite. The concept is famously tied to black holes, where matter is compressed to infinite density and zero volume, and time and space bend without limit. Bottom line, we still don’t really know what goes on inside a black hole. Some theories suggest that whatever falls in there might eventually bounce back and form a white hole – pardon the simplification, physicists (see White Holes, by Carlo Rovelli). Unlike black holes, from which nothing can escape once it crosses their boundary, white holes expel matter and cannot be entered. Some even say the Big Bang itself could be a white hole.

The application of singularity to technology as a point where technological capabilities become infinite and uncontrollable is certainly not new. The rise of AI has shaped the idea into the more specific AI singularity: that hypothetical moment in the future when AI surpasses human cognitive capabilities and becomes autonomous in self-improving and scaling itself. This further reinforces the idea that humans may one day no longer be able to understand, and therefore control, AI systems. These concerns are probably blown out of proportion, inflated by cinematographic imagination, and seem to be constrained by the reality of our limited resources. In fact, for AI to become a singularity, we would need an immense volume of resources – something that, whilst in theory possible, appears rather improbable to achieve at present.

The Horizon of AI Models

As mentioned, nothing crossing the boundary of a black hole can then escape. Not even light, which is trapped forever. This point of no return is called the event horizon. It is not a visible edge, but rather a mathematical region – one that we need to thank Karl Schwarzschild for.
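
For a non-rotating black hole, that mathematical region even has a precise size: the Schwarzschild radius, r_s = 2GM/c². A quick sketch, using the Sun’s mass for scale:

```python
# Schwarzschild radius: r_s = 2GM / c^2
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius of the event horizon for a non-rotating mass."""
    return 2 * G * mass_kg / c**2

sun_mass = 1.989e30  # kg
print(f"{schwarzschild_radius(sun_mass) / 1000:.1f} km")  # ~3.0 km
```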

There is a somewhat similar concept in AI systems – that is, an event horizon beyond which an AI model’s reliability begins crumbling away. This boundary is determined by the distribution of the data an AI system was trained on. When the model interacts with data differing from the training set, it encounters what experts call an “out-of-distribution” (OOD) generalization problem. Generalization itself is an AI model’s ability to “handle inputs that were not encountered during training”. Inputs that were never seen before. When generalization is poor, “it may perform perfectly on the data it was trained on but fail miserably when faced with new data”.
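
A toy illustration of that horizon, assuming scikit-learn is available: a regressor trained only on a narrow input range answers well inside it, then fails as soon as a query drifts out of distribution.

```python
# A minimal sketch of out-of-distribution failure, assuming scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Train only on inputs in [0, 5]: the model's "in-distribution" region.
X_train = rng.uniform(0, 5, size=(500, 1))
y_train = np.sin(X_train).ravel()

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# In-distribution query: close to the truth.
print(model.predict([[2.0]]), np.sin(2.0))    # both ~0.91

# Out-of-distribution query (x = 12): the forest can only repeat
# values it saw near the training edge, so the prediction is way off.
print(model.predict([[12.0]]), np.sin(12.0))  # ~-0.96 vs ~-0.54
```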

This issue is tied to the concept of feature contamination, where a model learns both relevant and irrelevant features at once to make good predictions. When the data changes, the influence of the irrelevant features can significantly degrade prediction accuracy. For instance, a natural language processing (NLP) model trained on data with highly specific phrasings can struggle to accurately process and interpret sentences that don’t contain those phrasings.
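
A hypothetical sketch of how contamination plays out, again assuming scikit-learn: a toy sentiment classifier whose training data happens to pair an irrelevant token (“imax”) with positive labels learns the spurious cue, then misfires once the correlation breaks.

```python
# Hypothetical sketch: a spurious token contaminates the learned features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In the training data, "imax" happens to appear only in positive reviews.
texts = ["great film imax", "loved it imax", "brilliant imax",
         "terrible film", "hated it", "awful plot"]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, labels)

# The correlation breaks at test time: "imax" now sits in a negative review.
print(clf.predict(["awful film imax"]))  # likely [1]: the spurious cue wins
```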

The event horizon of AI models is more likely an uncharted territory than a region of infinite curvature. But it’s interesting how, in both the AI and physics worlds, once you cross it you venture into the unknown – and the unthinkable could just happen.

Quantum Mechanics: Probabilistic Reasoning

AI isn’t just a matter of Einstein’s Theory of Relativity, but also a quantum thing. At the heart of quantum mechanics lies the most outlandish, lunatic idea: uncertainty is the fundamental feature of the universe. Odd, right? A particle could, in principle, be found anywhere until we measure it. It doesn’t exist in a specific defined state, but in a superposition of possible ones, described by a wavefunction that encodes probabilities. In fact, you could think of this wavefunction as a “map of probabilities”, showing where that particle might be or how fast it might move (its momentum).

Clearly, AI doesn’t obey quantum mechanics laws (or at least not yet). But much like quantum physics, AI operates under a probabilistic rather than a deterministic logic. The most obvious example is ChatGPT. When you give it a prompt, ChatGPT will go through a massive cloud (or wavefunction) of potential meanings, to predict the sequence of words that is most likely to articulate a coherent response. The model will basically infer what word is the most probable to be the right one after the previous one.

In other words, much like quantum physics, AI is about statistical realities, not set-in-stone absolutes.

The Expanding Universe of Knowledge

Like many scientists of his time, Einstein was not convinced that the universe was expanding. His equations did show that the universe should either expand or contract, but he was reluctant to accept the idea that the universe could, in fact, be in motion at all. To preserve a static universe, he introduced the cosmological constant, a measure of a certain “anti-gravity” effect (Λ) that offset the universe’s expansion as described by his very own predictions. In hindsight, he should have trusted his own math. Thanks to Edwin Hubble, who in 1929 found that the galaxies are moving away from us, we now know that the universe is indeed expanding. Einstein later realised that he had been wrong, calling his cosmological constant the “biggest blunder of my life”. Einstein was right a lot. Hubble demonstrated he was right even when he was wrong.
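
Hubble’s finding boils down to a one-line law: recession velocity grows in proportion to distance, v = H₀ · d. A quick sketch with the commonly quoted H₀ ≈ 70 km/s per megaparsec:

```python
# Hubble's law: v = H0 * d. The farther the galaxy, the faster it recedes.
H0 = 70.0  # Hubble constant, km/s per megaparsec (commonly quoted value)

def recession_velocity(distance_mpc: float) -> float:
    return H0 * distance_mpc  # km/s

for d in (1, 100, 1000):
    print(f"{d:>5} Mpc -> {recession_velocity(d):>8.0f} km/s")
```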

There’s a certain resonance between the dynamics of the universe expansion as described by the Hubble constant and the impact of AI. On a basic level, one could argue that AI expands knowledge by providing information to users. Whether it’s ChatGPT or a binary model, whoever inputs a query will receive a response that can widen personal knowledge, drive business decisions, and so on. In addition, not only is AI scaling the amount of data we collect and process, but also leveraging it to increase its own operational knowledge. See, for instance, how ChatGPT is periodically refined or fine-tuned using anonymized data from its users’ conversations.

But there’s more to it. Cosmic expansion is the cornerstone of our understanding of the world because it’s inherently, profoundly tied to our idea of order. The Big Bang, from which the cosmic expansion originates, gave order to time and space (see next section). It marked the watershed moment between an unknown before and everything that followed. Whatever the before was (because we don’t know) was unimaginably hot, the after has been progressively cooling down ever since. With no external influence, a body that is hot, like a cup of coffee, slowly becomes lukewarm, then cold.

Order, however, is not just a matter of direction (past to present); it’s also a process of organization. Cosmic expansion describes how the universe turns chaos and irregularities into structure and form (after all, cosmos is a Greek word meaning ‘order’). In a similar way, AI helps us turn noise and ambiguity into meaning and knowledge. And what is most astounding is that both processes are spontaneous and self-organizing – one driven by physics, the other by algorithms. Unlike cosmic expansion, however, AI’s expansion can still be stopped if humans are willing to do so. Question is, though: will they ever be?

Model Bias Background Radiation

Einstein’s correct prediction about the universe’s expansion was picked up in 1927 by George Lemaître, a Belgian physicist and priest who theorized what we today call the Big Bang Theory (nope, not the overrated sitcom…). The theory remains today the most accredited answer to the question of the questions: where do we come from?

Around 13.8 billion years ago, an extremely hot and dense state, probably a singularity, rapidly expanded with an immense bang, projecting matter, energy, space, and time outward, and then slowly began to cool. This event was the catalyst for the formation of all matter, including the shaping of galaxies and the ignition of stars. The bang was so violent that even today, a little less than 14bn years later, we can still detect its afterglow, in the form of a faint signal permeating the universe and gently humming through its fabric. This is known as the cosmic microwave background radiation.

So, a relic radiation from an initial cataclysmic event survives today, though at a minimal scale and going unnoticed without sophisticated instruments. Something quite similar takes place in AI models relative to training data, and in particular to the concept of data representativeness. If the data is representative of all groups (gender, race, age, ethnicity, sexual orientation, and so on), the model is likely to stay fair. On the other hand, if the data is not representative (incomplete or biased), the model will very likely produce outputs that are unintentionally discriminatory toward certain groups.

In fact, data representativeness is one of the key dimensions against which an AI model is tested by AI ethicists and compliance experts. Ideally, you would identify bias before the data trains your AI model. If you don’t, you will need to mitigate bias post-deployment, to ensure no discrimination takes place (or at least that it is minimised). And by the way, the ISO/IEC 24027:2021 guidelines outline best-practice standards for testing the fairness and robustness of your AI model (certainly worth a read, if you’re nerdy enough!).
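
A minimal sketch of one such pre-training check, on a hypothetical hiring dataset (pandas assumed): compare group representation and per-group outcome rates, a rough demographic-parity test, before the data ever reaches a model.

```python
# A minimal sketch of a pre-training representativeness check,
# using a hypothetical hiring dataset (pandas assumed available).
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1,   0,   1],
})

# 1. Representation: is any group under-sampled in the training data?
print(df["gender"].value_counts(normalize=True))  # F: 0.3, M: 0.7

# 2. Outcome rates per group (a rough demographic-parity check).
rates = df.groupby("gender")["hired"].mean()
print(rates)  # F: ~0.33, M: ~0.71

# A ratio below the common "four-fifths" rule of thumb (0.8) flags a risk
# that a model trained on this data will reproduce the disparity.
print(rates.min() / rates.max())  # ~0.47 -> flag for review
```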

Anyways, the point of it all being: when not taken care of, bias persists and propagates itself, just like the cosmic background radiation has survived through billions of years. And when it does, the societal impact can be non-trivial: a brilliant female candidate might be ruled out by a predictive model in favour of less qualified male candidates, because the training data carried gender bias.

The good news is that again, unlike the Big Bang’s radiation, we can definitely do something about it. That’s what we AI governance and compliance guys are here for. We cannot stop the music of the Big Bang from playing through galaxies (if anything, let’s keep it coming!). But AI model bias? That’s manageable. Practically a cakewalk.

The Stellar Decay of AI Models

I will not live long enough to watch a star collapse into a neutron star, but hey, never say never. Who’d have said that we’d be able to “photograph” black holes and detect gravitational waves, and yet, here we are. In fact, the gravitational waves we intercepted in 2017 appear to have been generated by the collision of two neutron stars.

If neutron stars were just half the spectacle that astrophysicists describe, they’d be worth a one-way ticket to the universe just to admire them. These stars are born when gravity becomes so strong that it crushes the core of a massive star in a supernova. The immense force to which the core is subject leads to a compact, dense remnant where protons and electrons are combined into neutrons. Meanwhile, elsewhere, an AI model is formed out of the compression of data, collapsing vast information into compact nuclei of representations.

Seen from afar, we’d see a star of incomprehensible beauty, pretty small in size (just 10–12 km in radius) but incomprehensibly mighty in density (half a sushi stick could weigh a billion tons) and gravity (around 200 billion times stronger than Earth’s). A star with a deceiving surface, smooth and gentle to the eye, yet able to crush you to pieces as soon as you touch it. An AI model generating the most realistic pictures, yet able to grossly misalign a tree’s trunk from its canopy or a finger from its hand. A star offering a mesmerizing dance of ultra-fast spins, intertwined with a frenzy of light beams and X-rays. An AI chatbot writing an essay in your stead, within the exact word count you asked for, and with a wealth of accurate references in support of the arguments made. A star of radiant beauty, burning through power and desire to leave you in awe. An AI model of immaculate precision, burning through computational energy to minimize error.

Fight for Survival

What appears to be the most stunning creation of the universe, or the best-performing AI system, is actually fighting for its existence. The neutron star is doing its best to resist the astounding gravitational force that would like to crush it, and for that it deploys neutron degeneracy pressure, which arm-wrestles with gravity, pushing outward and resisting collapse. Similarly, in another corner of the universe, an AI model is struggling against the relentless pull of noisy and biased data, which would like to destabilize it and push it to produce irrelevant or even harmful outputs. For this, model designers deploy techniques such as regularization or sparsity, which mitigate the risk that the model overfits or collapses into extreme values.
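
A minimal sketch of that outward pressure, assuming scikit-learn: an L2 penalty (ridge) shrinks weights that merely fit noise, while an L1 penalty (lasso) pushes them to exactly zero, i.e. sparsity.

```python
# A minimal sketch of regularization as "degeneracy pressure",
# assuming scikit-learn: penalties push back against overfitting noise.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))          # 20 features...
y = 3 * X[:, 0] + rng.normal(size=50)  # ...but only one actually matters

for model in (LinearRegression(), Ridge(alpha=10.0), Lasso(alpha=0.1)):
    coefs = model.fit(X, y).coef_
    print(type(model).__name__,
          f"max |irrelevant weight| = {np.abs(coefs[1:]).max():.3f},",
          f"zeroed = {(coefs == 0).sum()}/20")

# LinearRegression fits noise with the 19 irrelevant features;
# Ridge (L2) shrinks them, Lasso (L1) zeroes most of them out.
```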

But make no mistake: it’s a delicate balance. The smallest disruptive event could let gravity, or bias, win out. Once the neutron degeneracy pressure yields to gravity (maybe because that star has incorporated matter pulled from other stars, increasing gravity’s leverage), the collapse is inevitable. Like a greedy king, gravity imposes its supremacy and becomes unstoppable. Simultaneously, the model’s safeguards might not resist the pull of endless data, perhaps because it has absorbed too much or too varied information, and its stability starts to crack. At that point, the optimization process dominates, driving the system toward overfitting and the loss of meaningful generalization.

Spacetime curves more and more. Pre-announced by the formation of an event horizon, the neutron star collapses into a black hole. Escape velocity has now exceeded the speed of light, meaning that light can no longer escape. In a dramatic turn of events, that beautiful, ethereal white angel transforms into a black hole. And just like supernovae seed new stars and feed the cosmos, the remnants of our AI model – those fine-tuned weights and lessons learned from the model’s previous failure – may be the starting point for building the next generation’s architecture.

In both cases, we see a cycle of transformation, where even a minimal imbalance can cause stability to plummet from one moment to the next.

Entangled and Everywhere: AI and Qubits

When I think about the scariest and most pressing long-term issues humanity is facing, my mind goes right to three emergencies: climate impacts, demographic crises, and quantum computing. Quantum computing is slowly but steadily gaining relevance worldwide, with occasional news reports of this or that quantum chip being produced by Microsoft or other tech players. It was not by chance that John Martinis, one of the founders of Google’s Quantum AI lab, was awarded the Nobel Prize in Physics this year.

But the truth is, once we’re able to effectively build large-scale quantum computers, we won’t simply read a random and obscure article about it. Our lives will be severely impacted by this technology. Suffice it to say that, in principle, a large-scale quantum computer could be powerful enough to (inter alia) break high-level encryption. In theory, it could make it far easier for one country to steal another’s classified secrets, perhaps relating to defence. You can imagine the cascading effects of such technology if not properly regulated and controlled. Luckily, this dystopian scenario is still theoretical at present, and quantum-safe encryption standards are being developed by NIST to pre-empt it.

Quantum computing takes its name from the principles of quantum mechanics. First and foremost, quantum mechanics introduced the above-mentioned concept of superposition (a particle can be anywhere until it is observed and measured). This logic is applied to quantum computers’ qubits (= quantum bits). Unlike traditional bits, which can be either 0 or 1, qubits exist in a combination of 0 and 1 simultaneously. This property is already expanding, and will continue to expand, a computer’s capabilities beyond our imagination (e.g., performing multiple calculations simultaneously). And let’s not forget about entanglement, the property of two particles to influence each other at a distance. The same happens in quantum computing: qubits can similarly become linked and affect each other no matter how distant. This will again scale the capabilities of computers, particularly in terms of correlating data and performing incredibly complex operations.
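
Both ideas can be sketched with plain state-vector math, no quantum hardware involved: a Hadamard gate puts a qubit into an equal superposition, and a CNOT then entangles it with a second qubit into a Bell state.

```python
# State-vector sketch of superposition and entanglement (plain numpy,
# no quantum hardware involved).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])

zero = np.array([1, 0])                   # |0>

# Superposition: H|0> = (|0> + |1>) / sqrt(2)
q0 = H @ zero
print(q0)                                 # [0.707, 0.707]

# Entanglement: CNOT on (H|0>) ⊗ |0> gives the Bell state
# (|00> + |11>) / sqrt(2): measuring one qubit fixes the other.
state = CNOT @ np.kron(q0, zero)
print(state)                              # [0.707, 0, 0, 0.707]
```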

So, where does AI fit in all this? Whilst distinct concepts, AI and quantum computing are strongly interlinked. On the one hand, AI can help optimise and calibrate quantum computing, for instance by designing better and error-free quantum circuits, or fine-tuning quantum machines for stability and performance.[23] On the other, in the future, quantum computers could help process massive datasets for AI, exponentially faster than today’s computers. In turn, this could dramatically accelerate and refine neural network training. In other words, AI currently supports quantum computing development; quantum computing might return the favour in the future.

Some parallels are more precise, others perhaps a little forced. I hope you didn’t take the above too seriously — see it for what it is: nothing more than an intellectual divertissement.
