IBM already has a defined plan to build what will be the first large-scale quantum computer. It is called IBM Quantum Starling, and those responsible say it will be ready in 2029. It will be developed in a new IBM quantum data center located in Poughkeepsie, New York, and will be able to execute up to 20,000 times more operations than current quantum computers.
Representing the full computational state of IBM Starling would require the memory of more than a quindecillion (10^48) of the world's most powerful supercomputers. With Starling, users will be able to fully explore the complexity of its quantum states, which lie beyond the limited properties that can be accessed with current quantum computers.
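To get a feel for why that memory requirement explodes, consider the standard state-vector picture: a direct classical description of an n-qubit state needs 2^n complex amplitudes. The sketch below is a back-of-the-envelope illustration under that assumption; the 16 bytes per amplitude and the qubit counts are example values, not figures from IBM.

```python
# Rough sketch of why storing a large quantum state classically is infeasible:
# an n-qubit state vector holds 2**n complex amplitudes. The byte size per
# amplitude (complex128) and the qubit counts below are illustrative
# assumptions, not Starling specifications.

BYTES_PER_AMPLITUDE = 16

for n_qubits in (30, 50, 100, 200):
    amplitudes = 2 ** n_qubits
    memory_bytes = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n_qubits:3d} qubits -> {float(amplitudes):.2e} amplitudes, "
          f"~{float(memory_bytes):.2e} bytes")
```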
A large-scale, fault-tolerant quantum computer could execute hundreds of millions or even billions of operations, which would accelerate time and cost savings in fields such as drug development, materials discovery, chemistry and optimization.
Starling will provide the computing power needed for these problems by executing 100 million quantum operations using 200 logical qubits. It will be the basis for another IBM quantum system, Blue Jay, which will be able to execute 1 billion quantum operations with more than 2,000 logical qubits.
A logical qubit is a unit of an error-corrected quantum computer, responsible for storing an amount of quantum information equivalent to that of a single qubit. It can be composed of multiple physical qubits that work together to store that information and monitor one another to detect errors.
Quantum computers must correct errors to execute large workloads without failures. To do this, groups of physical qubits are used to create a smaller number of logical qubits with error rates lower than those of the individual physical qubits.
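As a toy illustration of that idea (not IBM's actual error correction scheme), the classical repetition code below spreads one logical bit over several physical bits and recovers it by majority vote; the function names and the five-bit group size are illustrative choices.

```python
import random

def encode(logical_bit, n_physical=5):
    """Store one logical bit redundantly across several physical bits."""
    return [logical_bit] * n_physical

def apply_noise(physical_bits, flip_probability=0.05):
    """Flip each physical bit independently with some probability."""
    return [bit ^ 1 if random.random() < flip_probability else bit
            for bit in physical_bits]

def decode(physical_bits):
    """Recover the logical bit by majority vote over the group."""
    return int(sum(physical_bits) > len(physical_bits) / 2)

# The logical value survives as long as fewer than half of the physical bits flip.
logical_bit = 1
noisy_bits = apply_noise(encode(logical_bit))
print("recovered correctly:", decode(noisy_bits) == logical_bit)
```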
The error rate of the logical qubits decreases exponentially with the size of the group, which allows more operations to be executed. Generating a growing number of logical qubits capable of running quantum circuits with as few physical qubits as possible is essential for scalable quantum computing.
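A minimal numerical sketch of that exponential suppression uses the standard textbook heuristic p_logical ≈ A · (p / p_threshold)^((d+1)/2), where d is the code distance (roughly, the size of the physical-qubit group). The constants below are assumed example values, not parameters published for Starling.

```python
# Standard below-threshold heuristic for logical error suppression:
# p_logical ~ A * (p / p_threshold) ** ((d + 1) // 2), with code distance d.
# A, p and p_threshold are assumed example values.

A = 0.1
p = 1e-3            # assumed physical error rate per operation
p_threshold = 1e-2  # assumed threshold of the error-correcting code

for distance in (3, 5, 7, 9, 11):
    p_logical = A * (p / p_threshold) ** ((distance + 1) // 2)
    print(f"code distance {distance:2d}: logical error rate ~ {p_logical:.1e}")
```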
A fault-tolerant architecture for quantum computers
The success of an efficient, fault-tolerant architecture depends on the choice of the error correction code, and on how the system is designed and built to allow that code to scale. Alternative error correction codes, or those considered standard until now, face fundamental engineering challenges.
To scale, they would need an unfeasible number of physical qubits to generate enough logical qubits capable of performing complex operations. This would require amounts of infrastructure and control electronics that would not be practical, making their application beyond experiments or small-scale devices unlikely.
A large-scale, fault-tolerant quantum computer needs an architecture that is itself fault tolerant and capable of suppressing enough errors for useful algorithms to run correctly. It must also be able to prepare and measure logical qubits during computations, apply universal instructions to those logical qubits, and decode the measurements of logical qubits in real time so that subsequent instructions can be modified.
Beyond this, it must be modular, so it can scale to hundreds or thousands of logical qubits and execute more complex algorithms, and efficient, so it can run meaningful algorithms with realistic physical resources. IBM already knows how it will meet the criteria that are still missing to build a large-scale fault-tolerant architecture. First, they have devised a system that can process instructions and execute operations with QLDPC codes effectively.
To do this, they will use a pioneering approach to error correction based on quantum low-density parity-check (QLDPC) codes, which drastically reduce the number of physical qubits needed to correct errors and cut the required overhead by roughly 90% compared with other codes.
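As a back-of-the-envelope reading of that roughly 90% figure: the overhead assumed below for a surface-code-style approach is a commonly quoted ballpark, not a number from IBM's announcement; only the 90% reduction and the 200 logical qubits come from the article.

```python
# Illustrative comparison only: ~1,000 physical qubits per logical qubit for a
# surface-code-style approach is an assumed ballpark, not an IBM figure.

logical_qubits = 200                                       # Starling's target per the article
surface_code_qubits_per_logical = 1_000                    # assumed ballpark
qldpc_qubits_per_logical = surface_code_qubits_per_logical * (1 - 0.90)  # ~90% reduction claimed

print("surface-code-style estimate:",
      logical_qubits * surface_code_qubits_per_logical, "physical qubits")
print("QLDPC-style estimate:       ",
      int(logical_qubits * qldpc_qubits_per_logical), "physical qubits")
```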
In addition, it establishes the resources needed to execute large-scale quantum programs. They also know how to efficiently decode the information from physical qubits, and have charted a way to identify and correct errors in real time using conventional computing resources.
IBM quantum roadmap milestones
The new IBM quantum roadmap sets out the main technological milestones needed to demonstrate and meet the criteria for fault tolerance. Each new processor on this roadmap addresses specific challenges in the development of quantum systems that are modular, scalable and error-corrected.
IBM Quantum Loon, scheduled for this year, is designed to test architectural components of the QLDPC code, including the “c-couplers” that connect qubits over longer distances within the same chip. The next one, IBM Quantum Kookaburra, expected in 2026, will be the first IBM modular processor designed to store and process encoded information, combining quantum memory with logical operations.
As for IBM Quantum Cockatoo, planned for 2027, it will entangle two Kookaburra modules using “l-couplers”. This architecture will connect quantum chips like nodes of a larger system, avoiding the need to build chips of an impractical size.