A pair of new JEPs arriving in Java 24 (expected March 2025) tackle the subject of Post-Quantum Cryptography (PQC) in Java. They are:

- JEP 496: Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanism (ML-KEM)
- JEP 497: Quantum-Resistant Module-Lattice-Based Digital Signature Algorithm (ML-DSA)
The ideal cryptographic system uses a key that was exchanged in advance between two parties over a private channel, but that approach doesn’t work at scale, over large distances, or between parties who don’t already know each other.
Public key cryptography instead takes advantage of “one-way functions” with a trapdoor (so-called “trapdoor functions”): the receiver shares something (the public key) that the sender can use to encrypt a key or message, but which only the receiver, holding the matching private key, can decrypt.
Current cryptographic protections use problems in discrete maths (such as factorization of large numbers) that are believed to be computationally expensive to perform. The fact that multiplying two large primes is cheap, while factorizing the result is expensive, is what makes this a one-way function and a useful basis for cryptography.
As it stands, even the best known classical algorithms for factorization (such as the general number field sieve) are still far too slow to be practical at the key sizes in real-world use.
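To make this asymmetry concrete, here is a small, self-contained Java sketch (class name and parameter choices are purely illustrative): multiplying two large primes with BigInteger completes in microseconds, while even naive trial division on a far smaller product takes measurable work, and no known classical algorithm can factorize a 2048-bit modulus in a realistic timeframe.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy illustration of the multiply-vs-factorize asymmetry behind RSA.
public class OneWayFunctionDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Forward direction: multiplying two 1024-bit primes is effectively instant.
        BigInteger p = BigInteger.probablePrime(1024, rnd);
        BigInteger q = BigInteger.probablePrime(1024, rnd);
        BigInteger n = p.multiply(q);
        System.out.println("Built a " + n.bitLength() + "-bit modulus almost instantly");

        // Reverse direction: even naive trial division on a tiny ~40-bit modulus
        // takes visible effort; scaling this approach to 2048 bits is hopeless.
        BigInteger tiny = BigInteger.probablePrime(20, rnd)
                                    .multiply(BigInteger.probablePrime(20, rnd));
        long start = System.nanoTime();
        for (BigInteger f = BigInteger.TWO;
             f.multiply(f).compareTo(tiny) <= 0;
             f = f.add(BigInteger.ONE)) {
            if (tiny.mod(f).signum() == 0) {
                System.out.println(tiny + " = " + f + " * " + tiny.divide(f));
                break;
            }
        }
        System.out.println("Trial division took " +
                (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```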
This assumption allows cryptographers to reliably estimate how much computing power is required to crack a Key Encapsulation Mechanism (KEM) and recover the symmetric key. In turn, this allows users of cryptography to have confidence in the security of their communications, even when computing trends like Moore’s Law are taken into account.
However, in recent years, computers that take advantage of quantum mechanical behaviour have started to become available. The key concept is that of a qubit, which is not simply either 1 or 0, but represents a probabilistic combination of both at the same time. Quantum computing operates on qubits without forcing them to assume either state (a process known as “collapsing”).
For further background on quantum computing, specialist resources should be consulted.
A future large-scale quantum computer could use new techniques such as Shor’s algorithm, which can efficiently solve both integer factorization and the discrete logarithm problem, to compromise the security of widely-deployed public-key algorithms such as Rivest-Shamir-Adleman (RSA) and Diffie-Hellman – including their elliptic curve variants.
ML-KEM, on the other hand, is designed to be secure against future quantum computing attacks. It achieves this by basing its security on a completely different kind of “hard problem”. ML-KEM has been standardized by the United States National Institute of Standards and Technology (NIST) in Federal Information Processing Standard (FIPS) 203.
It uses a form of “lattice cryptography”, based on a lattice of points at regular intervals in n-dimensional space. The points can be seen as vectors, which can be added together, forming a translation group.
One powerful way of using these mathematical techniques to create secure encryption algorithms is called “learning with errors” (LWE). It is based on the idea of hiding secret information in a set of linear equations that have been deliberately perturbed by small errors, and it has only emerged from the mathematics research community relatively recently (the underlying problem was first described in the mid-2000s).
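To give a flavour of the idea, the following deliberately tiny (and completely insecure) Java sketch builds one LWE instance: a random matrix A is published together with b = A·s + e (mod q), where s is the secret vector and e is a vector of small errors. The dimension and sampling here are illustrative only; the modulus 3329 is borrowed from ML-KEM for flavour, but nothing else reflects a real parameter set.

```java
import java.security.SecureRandom;

// Toy "learning with errors" (LWE) instance: publish A and b = A*s + e (mod q).
public class LweToy {
    static final int N = 8;      // dimension (real schemes use hundreds)
    static final int Q = 3329;   // modulus (ML-KEM's modulus, for flavour)

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        int[] s = new int[N];      // the secret vector
        int[] e = new int[N];      // small error terms
        int[][] a = new int[N][N]; // public random matrix
        int[] b = new int[N];      // published alongside A

        for (int i = 0; i < N; i++) {
            s[i] = rnd.nextInt(Q);
            e[i] = rnd.nextInt(5) - 2;   // "small" noise in {-2, ..., 2}
            for (int j = 0; j < N; j++) {
                a[i][j] = rnd.nextInt(Q);
            }
        }
        for (int i = 0; i < N; i++) {
            long acc = 0;
            for (int j = 0; j < N; j++) {
                acc += (long) a[i][j] * s[j];
            }
            b[i] = (int) Math.floorMod(acc + e[i], (long) Q);
        }

        // (A, b) is public. Without the errors e, s could be recovered by
        // Gaussian elimination; with them, recovering s is believed to be hard
        // even for a quantum computer.
        System.out.println("Published an LWE sample of dimension " + N);
    }
}
```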
Right now, large-scale quantum computers do not yet exist. The best available laboratory systems have only recently been applied to RSA cryptography at all, and then only to factorize a 50-bit integer using quantum methods, which is far shorter than the key lengths used by production systems. For example, a typical Java application uses a 2048-bit RSA key, which is a reasonable choice because NIST deems this key length sufficient until 2030.
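Generating such a classical key in Java uses the familiar JCA API; this minimal sketch produces the kind of 2048-bit RSA key pair described above.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

// Generate a standard 2048-bit RSA key pair via the JCA.
public class RsaKeyDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048); // the key length NIST deems sufficient until 2030
        KeyPair kp = kpg.generateKeyPair();

        RSAPublicKey pub = (RSAPublicKey) kp.getPublic();
        System.out.println("RSA modulus bits: " + pub.getModulus().bitLength());
    }
}
```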
Despite this, the US government has mandated that computer systems handling sensitive information be upgraded over the next decade to use ML-KEM and other forthcoming standards to protect against quantum attacks. For example, the NSA intends to be fully post-quantum by 2033 at the latest.
As of 2024, nation-state level attackers could, in theory, already be capturing and storing large volumes of encrypted traffic in the expectation of decrypting it once sufficiently powerful quantum computers become available – an approach sometimes called “harvest now, decrypt later”.
To put it another way, there is no credible immediate threat – these standards are about preparing for the future and minimizing vulnerable traffic that is flowing today.
The path ahead is far from clear – on one hand, there are severe engineering challenges in building quantum computers large enough to threaten today’s keys, which could further delay the arrival of viable quantum codebreakers; on the other, new techniques could accelerate their arrival.
A final point to consider is that the history of Internet security standards shows that interoperability problems and the lack of a clear migration path – a phenomenon sometimes referred to as “protocol ossification” – are a real risk. This is yet another reason why it makes sense to start on this work now.
A recent post by Cloudflare discusses both the current state of PQC and these interoperation issues (drawing on their experience rolling out TLS 1.3).
In any event, given Java’s noted longevity and ubiquity, it is necessary for the platform to begin supporting post-quantum capabilities, even in advance of full standardization. These two JEPs represent Java’s first steps into a world beyond classical public key cryptography.
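As a rough sketch of what application code might look like, the following assumes a JDK 24 runtime in which JEP 496 registers an “ML-KEM” algorithm behind the existing javax.crypto.KEM API (introduced by JEP 452 in Java 21); exact algorithm names, defaults, and provider behaviour may vary.

```java
import javax.crypto.KEM;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

// Sketch of ML-KEM key encapsulation via the JCA KEM API.
public class MlKemSketch {
    public static void main(String[] args) throws Exception {
        // Receiver: generate an ML-KEM key pair and publish the public key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-KEM");
        KeyPair kp = kpg.generateKeyPair();

        // Sender: encapsulate a fresh shared secret against the public key.
        KEM kem = KEM.getInstance("ML-KEM");
        KEM.Encapsulator enc = kem.newEncapsulator(kp.getPublic());
        KEM.Encapsulated result = enc.encapsulate();
        SecretKey senderSecret = result.key();        // shared secret, sender side
        byte[] ciphertext = result.encapsulation();   // sent over the wire

        // Receiver: decapsulate the ciphertext to recover the same secret.
        KEM.Decapsulator dec = kem.newDecapsulator(kp.getPrivate());
        SecretKey receiverSecret = dec.decapsulate(ciphertext);

        System.out.println("shared secrets match: " + Arrays.equals(
                senderSecret.getEncoded(), receiverSecret.getEncoded()));
    }
}
```

The shared secret recovered on both sides can then be used to key a symmetric cipher such as AES-GCM, just as with a classical key agreement.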