Physicists at Silicon Quantum Computing (SQC) have developed what they say is the most accurate quantum computing chip ever engineered, built on a new kind of architecture.
Representatives from the Sydney-based startup say their silicon-based, atomic quantum computing chips give them an advantage over other kinds of quantum processing units (QPUs). This is because the chips are based on a new architecture, called “14/15,” that places phosphorus atoms in silicon (so named because silicon and phosphorus are the 14th and 15th elements in the periodic table, respectively). They outlined their findings in a new study published Dec. 17 in the journal Nature.
SQC achieved fidelity rates between 99.5% and 99.99% in a quantum computer with nine nuclear qubits and two atomic qubits, resulting in the world’s first demonstration of atomic, silicon-based quantum computing across separate clusters.
Fidelity rates measure how closely a quantum operation matches its ideal, error-free counterpart; in effect, they gauge how accurately the hardware computes. Company representatives say they have achieved a state-of-the-art error rate on their bespoke architecture.
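As a rough illustration of the metric (a simplified single-state fidelity in Python, not SQC's benchmarking protocol), fidelity can be computed as the squared overlap between the state a device was supposed to produce and the state it actually produced:

```python
import numpy as np

# Simplified illustration: state fidelity as the squared overlap
# |<ideal|actual>|^2 between the intended and the produced state.
# A value of 1.0 means the hardware did exactly what was asked.
ideal = np.array([1, 0], dtype=complex)            # the intended |0> state
actual = np.array([0.999, 0.045], dtype=complex)   # slightly imperfect output
actual /= np.linalg.norm(actual)                   # renormalize

fidelity = np.abs(np.vdot(ideal, actual)) ** 2
print(f"{fidelity:.4f}")  # ~0.998, i.e. roughly 99.8% fidelity
```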
This might not sound as exciting as quantum computers with thousands of qubits, but the 14/15 architecture is massively scalable, the scientists said in the study. They added that demonstrating peak fidelity across multiple clusters serves as a proof-of-concept for what, theoretically, could lead to fault-tolerant QPUs with millions of functional qubits.
The secret sauce is silicon (with a side of phosphorus)
Quantum computing rests on the same basic principle as binary computing: energy is used to perform computations. But instead of using electricity to flip switches, as traditional binary computers do, quantum computing involves creating and manipulating qubits, the quantum equivalent of a classical computer’s bits.
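As a rough sketch (a textbook statevector toy in Python, not a model of SQC's atomic hardware), a qubit can be written as a two-component complex vector, and a gate such as the Hadamard puts it into a superposition of 0 and 1:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalized complex vector
# over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

# The Born rule: measurement probabilities are the squared amplitudes.
print(np.abs(state) ** 2)  # [0.5 0.5] -- a 50/50 chance of reading 0 or 1
```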
Qubits come in numerous forms. Google and IBM scientists are building systems with superconducting qubits that use gated circuits, while some companies, such as PsiQuantum, have developed photonic qubits, which encode information in particles of light. Others, including IonQ, are working with trapped ions: single charged atoms captured and held in place by electromagnetic fields.
The general idea is to use quantum mechanics to manipulate something very small in such a way that useful computations can be carried out on its possible states. SQC representatives say their process for doing this is unique, in that their QPUs are built using the 14/15 architecture.
They create each chip by placing phosphorus atoms within pure silicon wafers.
“It’s the smallest kind of feature size in a silicon chip,” Michelle Simmons, CEO of SQC, told Live Science in an interview. “It is 0.13 nanometers, and it’s essentially the kind of bond length that you have in the vertical direction. It’s two orders of magnitude below typically what TSMC does as its standard. It’s quite a dramatic increase in the precision.”
Increasing tomorrow’s qubit counts
To scale up their quantum computers, scientists on each platform have various obstacles to overcome or mitigate.
One universal obstacle for all quantum computing platforms is quantum error correction (QEC). Quantum computations happen in extremely brittle environments, with qubits sensitive to electromagnetic waves, temperature fluctuations and other stimuli. These disturbances can cause qubits’ delicate superpositions to “collapse,” destroying the quantum information mid-calculation.
To compensate, most quantum computing platforms dedicate a number of qubits to error mitigation. These function much like check or parity bits in a classical network. But as qubit counts increase, so too does the number of qubits required for QEC.
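As a loose classical analogy (not SQC's code), the three-bit repetition scheme shows the basic trade: redundant bits buy protection against a single flip, at the cost of extra hardware:

```python
def encode(bit: int) -> list[int]:
    """Protect one logical bit by copying it onto three physical bits."""
    return [bit, bit, bit]

def correct(codeword: list[int]) -> int:
    """Recover the logical bit by majority vote, surviving one flip."""
    return 1 if sum(codeword) >= 2 else 0

word = encode(1)      # [1, 1, 1]
word[0] ^= 1          # noise flips one physical bit -> [0, 1, 1]
print(correct(word))  # 1 -- the logical bit survives
```

Quantum codes cannot simply copy states (the no-cloning theorem forbids it), so they instead spread one logical qubit across several entangled physical qubits and measure parities; that redundancy is the overhead that grows as machines scale.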
“We have these long coherence times of the nuclear spins, and we have very little of what we call ‘bit-flip errors.’ So, our error correction codes themselves are much more efficient. We’re not having to correct for both bit-flip and phase-flip errors,” Simmons said.
In other silicon-based quantum systems, bit-flip errors are more prominent because qubits fabricated with coarser precision tend to be less stable. Because SQC’s chips are engineered with atomic precision, they can avoid certain classes of errors experienced on other platforms.
“We really only have to correct for those phase errors,” added Williams. “So, the error correction codes are much smaller, therefore the whole overhead that you do for error correction is much, much reduced.”
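For a rough sense of the distinction (a toy numerical sketch, not SQC's error model): a bit flip is the Pauli-X operation, which swaps |0> and |1>, while a phase flip is the Pauli-Z operation, which leaves the 0/1 populations alone but inverts the sign of the |1> amplitude:

```python
import numpy as np

# Pauli errors on a single qubit: X is a "bit flip," Z is a "phase flip."
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

print(X @ ket0)  # [0, 1] -- the bit flipped from |0> to |1>
print(Z @ ket0)  # [1, 0] -- a phase flip leaves |0> untouched...
print(Z @ plus)  # [0.707, -0.707] -- ...but flips a superposition's sign
```

A code that only has to catch Z-type errors needs fewer redundant qubits than one guarding against both types at once, which is the overhead saving Simmons and Williams describe.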
The race to beat Grover’s algorithm
A standard benchmark for testing fidelity in a quantum computing system is a routine called Grover’s algorithm. Computer scientist Lov Grover devised it in 1996 to show that a quantum computer can gain an “advantage” over a classical computer at a specific kind of search problem.
Today, it’s used as a diagnostic tool to determine how efficiently quantum systems are operating. Roughly speaking, if a lab can reach fidelity rates of 99.0% and above, its hardware is considered to have crossed the threshold at which error correction can, in principle, deliver fault-tolerant quantum computing.
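As a toy statevector sketch in Python (not SQC's implementation), Grover's search on two qubits finds a marked item among four possibilities in a single oracle-plus-diffusion step; on real hardware, every gate in the sequence is imperfect, and the measured success rate relative to this ideal outcome is what fidelity figures describe:

```python
import numpy as np

# Toy Grover's search on 2 qubits: find the marked item among 4 states.
n_states = 4
marked = 3  # index of the "winning" basis state |11>

# Start in the uniform superposition over all four basis states.
state = np.ones(n_states, dtype=complex) / np.sqrt(n_states)

# Oracle: flip the sign of the marked item's amplitude.
oracle = np.eye(n_states)
oracle[marked, marked] = -1

# Diffusion operator: reflect every amplitude about the mean amplitude.
s = np.ones(n_states) / np.sqrt(n_states)
diffusion = 2 * np.outer(s, s) - np.eye(n_states)

# One Grover iteration is enough at this size.
state = diffusion @ (oracle @ state)
print(np.abs(state) ** 2)  # [0. 0. 0. 1.] -- |11> found with certainty
```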
In February 2025, SQC published a study in the journal Nature in which the team demonstrated a 98.9% fidelity rate on Grover’s algorithm with its 14/15 architecture.

In this regard, SQC has surpassed firms such as IBM and Google, although those companies have shown competitive results with dozens or even hundreds of qubits, versus the four qubits SQC used in its Grover’s demonstration.
IBM, Google and other prominent projects are still testing and iterating on their respective roadmaps. As they scale up their qubit counts, however, they’re forced to adapt their error mitigation techniques. QEC has proven to be among the most difficult bottlenecks to overcome.
But SQC scientists say their platform is so “error deficient” that it was able to break the record on Grover’s without running any error correction on top of the qubits.
“If you look at the Grover’s result that we produced at the beginning of the year, we’ve got the highest-fidelity Grover’s algorithm at 98.87% of the theoretical maximum and, on that, we’re not doing any error correction at all,” Simmons said.
Williams says the qubit “clusters” featured in the new 11-qubit system can be scaled to represent millions of qubits, although infrastructure bottlenecks may yet slow down progress.
“Obviously as we scale towards larger systems, we are going to be doing error correction,” said Simmons. “Every company has to do that. But the number of qubits we will need will be much smaller. Therefore, the physical system will be smaller. The power requirements will be smaller.”