Calculation of time slices issued to the process. Basic concepts and principles of quantum computing. Applications to cryptography

Historical background

Quantum computing is unthinkable without control over the quantum states of individual elementary particles. Two physicists, the Frenchman Serge Haroche and the American David Wineland, succeeded in this. Haroche trapped single photons in a resonator and "decoupled" them from the outside world for a long time. Wineland trapped single ions in specific quantum states and isolated them from external influences. Haroche used atoms to observe the state of a photon; Wineland used photons to change the states of ions. Together they made real progress in studying the relationship between the quantum and classical worlds, and in 2012 they were awarded the Nobel Prize in Physics for "breakthrough experimental techniques that made it possible to measure and control individual quantum systems."

The operation of quantum computers is based on the properties of the quantum bit of information. If a computation uses n qubits, then the Hilbert state space of the quantum system has dimension 2^n. By a Hilbert space we mean an n-dimensional vector space in which a scalar product is defined, where n may tend to infinity.

In our case, this means that there are 2^n basis states, and the computer can operate on a superposition of these 2^n basis states.

Note that acting on any one qubit immediately leads to a simultaneous change of all 2^n basis states. This property is called "quantum parallelism".

Quantum computation is a unitary transformation: a linear transformation with complex coefficients that keeps the sum of the squares of the moduli of the transformed variables unchanged. A unitary transformation is the analogue of an orthogonal transformation, in which the coefficients form a unitary matrix.

By a unitary matrix we mean a square matrix ||a_jk|| whose product with its complex-conjugate transposed matrix ||a*_kj|| gives the identity matrix. The numbers a_jk are in general complex; if they are real, the unitary matrix is orthogonal. A certain number of qubits forms the quantum register of a computer. In such a chain of quantum bits, one- and two-bit logical operations can be carried out, just as the operations NOT, NAND, NOR, etc. are carried out in a classical register (Fig. 5.49).
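As a minimal numerical illustration (my own sketch, not from the text above), the unitarity condition U·U† = I can be checked directly; the matrices below (Pauli-X as the quantum NOT and the Hadamard gate) are standard examples, and numpy is assumed to be available.

```python
import numpy as np

# Standard single-qubit gates (assumed examples, not taken from the text above)
NOT = np.array([[0, 1],
                [1, 0]], dtype=complex)              # Pauli-X, the quantum NOT
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

def is_unitary(U: np.ndarray) -> bool:
    """A matrix is unitary if its product with its conjugate transpose is the identity."""
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]))

print(is_unitary(NOT), is_unitary(H))  # True True
# Real unitary matrices are orthogonal: NOT has only real entries.
print(np.allclose(NOT.imag, 0))        # True
```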

A certain number N of such registers essentially forms a quantum computer, which operates in accordance with developed computational algorithms.

Fig. 5.49.

NOT - boolean NOT; CNOT - controlled NOT

Qubits as information carriers have a number of interesting properties that sharply distinguish them from classical bits. One of the main theses of quantum information theory is the entanglement of states. Suppose there are two two-level qubits A and B, realized, for example, as an atom with an electron or nuclear spin, or as a molecule with two nuclear spins. Due to the interaction of the two subsystems A and B, a nonlocal correlation arises that is purely quantum in nature. This correlation can be described by the density matrix of a mixed state

where p_i is the population (probability) of the i-th state, so that p_1 + p_2 + p_3 + p_4 = 1.

The property of coherent quantum states that their probabilities sum to one is called entanglement, or coupling, of states. Entangled (linked) quantum objects remain connected to each other no matter how far apart they are. If the state of one of the linked objects is measured, information about the state of the other objects is obtained immediately.

If two qubits are entangled, they have no individual quantum states. They depend on each other in such a way that a measurement of one gives "0" and of the other "1", and vice versa (Fig. 5.50). In this case, the maximally entangled pair is said to carry one e-bit of entanglement.

Fig. 5.50. Scheme of a maximally entangled pair of qubits

Entangled states are a resource in quantum computing devices, and to replenish the supply of entangled states, methods must be developed for reliably generating entangled qubits. One such method is an algorithmic way of obtaining entangled qubits on ions in traps, on nuclear spins, or on a pair of photons. The decay of a particle in a singlet state into two particles can also be very effective: it generates pairs of particles entangled in coordinate, momentum, or spin.

Developing a comprehensive theory of entanglement is a key goal of quantum information theory. With its help it will be possible to come closer to solving the problems of teleportation, superdense coding, cryptography, and data compression. For this purpose, quantum algorithms are being developed, including quantum Fourier transforms.

Computation on a quantum computer proceeds according to the following scheme: a system of qubits is formed and an initial state is written into it. Unitary transformations then change the state of the system and its subsystems, performing logical operations; the process ends with a measurement of the new qubit values. The role of the connecting wires of a classical computer is played by the qubits, and the role of the logic blocks of a classical computer by the unitary transformations. This concept of a quantum processor and quantum logic gates was formulated in 1989 by David Deutsch, who later proposed a universal logic block with which any quantum computation could be performed.
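As a rough sketch of this prepare-transform-measure scheme (my own illustration, not part of the source), the single-qubit example below assumes numpy and uses the Hadamard gate as the unitary transformation.

```python
import numpy as np

rng = np.random.default_rng()

# 1. Form a register and write the initial state |0> into it.
state = np.array([1, 0], dtype=complex)

# 2. Apply a unitary transformation (here: the Hadamard gate).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# 3. Measure: outcome probabilities are the squared moduli of the amplitudes.
probabilities = np.abs(state) ** 2
outcome = rng.choice([0, 1], p=probabilities)
print(probabilities, outcome)   # [0.5 0.5] and a random 0 or 1
```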

The Deutsch-Jozsa algorithm allows one to determine "in a single evaluation" whether a function of a binary variable f(n) is constant (f_1(n) = 0 or f_2(n) = 1 regardless of n) or "balanced" (f_3(0) = 0, f_3(1) = 1; f_4(0) = 1, f_4(1) = 0).

It turned out that two basic operations are sufficient to construct any computation. The quantum system gives a result that is correct only with some probability, but by slightly increasing the number of operations in the algorithm, the probability of obtaining the correct result can be brought arbitrarily close to one. Using basic quantum operations it is possible to simulate the operation of the ordinary logic gates from which ordinary computers are built.

Grover's algorithm makes it possible to find a solution of the equation f(x) = 1 for 0 ≤ x < N in O(√N) time and is intended for searching a database. Grover's quantum algorithm is demonstrably more efficient than any algorithm for unordered search on a classical computer.

Shor's factorization algorithm makes it possible to determine the prime factors a and b of a given integer M = a × b using an appropriate quantum circuit. This algorithm allows one to find the factors of an N-digit integer and can be used to estimate the time of the computational process. At the same time, Shor's algorithm can be interpreted as an example of a procedure for determining the energy levels of a quantum computing system.

The Zalka-Wiesner algorithm makes it possible to simulate the unitary evolution of a quantum system of n particles in almost linear time using O(n) qubits.

Simon's algorithm solves the black box problem exponentially faster than any classical algorithm, including probabilistic algorithms.

The error-correction algorithm makes it possible to increase the noise immunity of a quantum computing system, which is susceptible to the destruction of its fragile quantum states. The essence of the algorithm is that it requires neither cloning qubits nor determining their state: a quantum logic circuit is formed that can detect an error in any qubit without actually reading its individual state. For example, when the triplet 010 passes through such a device, the incorrect middle bit is detected, and the device flips it without determining the specific values of any of the three bits. Thus, on the basis of information theory and quantum mechanics, one of the fundamental algorithms arose: quantum error correction.
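A hedged classical sketch of the idea (my illustration, not from the text): for the three-bit repetition code, two parity checks reveal which bit is wrong without ever revealing the encoded logical value itself.

```python
# Minimal sketch of syndrome-based correction for the 3-bit repetition code.
# The parities (syndrome) identify the flipped bit without revealing
# whether the encoded logical value was 0 or 1.

def correct(triplet):
    b0, b1, b2 = triplet
    s1 = b0 ^ b1          # parity of bits 0 and 1
    s2 = b1 ^ b2          # parity of bits 1 and 2
    if s1 and s2:         # middle bit disagrees with both neighbours
        b1 ^= 1
    elif s1:              # only the first pair disagrees
        b0 ^= 1
    elif s2:              # only the second pair disagrees
        b2 ^= 1
    return [b0, b1, b2]

print(correct([0, 1, 0]))  # [0, 0, 0]: the faulty middle bit is flipped back
print(correct([1, 0, 1]))  # [1, 1, 1]
```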

The listed problems are important for creating a quantum computer, but they fall within the competence of quantum programmers.

A quantum computer surpasses a classical one in a number of respects. Most modern computers operate according to the von Neumann or Harvard scheme: n memory bits store a state and are changed by the processor at every clock tick. In a quantum computer, a system of n qubits is in a state that is a superposition of all basis states, so a change of the system affects all 2^n basis states simultaneously. Theoretically, the new scheme can therefore work exponentially faster than the classical one. For example, the quantum database-search algorithm shows a quadratic gain over classical algorithms.

Today I would like to begin publishing a series of notes on this hot topic, on which my new book was recently published, namely an introduction to the quantum computational model. I thank my good friend and colleague Alexander for the opportunity to post guest articles on this topic on his blog.

I have tried to make this short note as simple as possible for an untrained reader who nevertheless would like to understand what quantum computing is. The reader is, however, expected to have a basic understanding of computer science. Well, a general mathematical background wouldn't hurt either :). There are no formulas in the article; everything is explained in words. Still, you can all ask me questions in the comments, and I will try to explain as best I can.

What is quantum computing?

Let's start with the fact that quantum computing is a new, very fashionable topic, which is developing by leaps and bounds in several directions (while in our country, like any fundamental science, it remains in disrepair and is left to a few scientists sitting in their ivory towers). There is already talk of the first quantum computers (D-Wave, though this is not a universal quantum computer), new quantum algorithms are published every year, quantum programming languages are created, and the shadowy geniuses of International Business Machines carry out quantum calculations on dozens of qubits in secret underground laboratories.

What is it? Quantum computing is a computing model that differs from the Turing and von Neumann models and is expected to be more efficient for some tasks. At least, problems have been found for which the quantum computing model gives polynomial complexity, while for the classical computing model no algorithms are known with complexity lower than exponential (though, on the other hand, it has not yet been proven that such algorithms do not exist).

How can this be? It's simple. The quantum computing model is based on a few fairly simple rules for transforming input information that provide massive parallelization of computational processes. In other words, you can evaluate the value of a function for all of its arguments at the same time (and this will be a single function call). This is achieved by a special preparation of the input parameters and a special form of the function.

The luminary Prots teaches that all this is syntactic manipulation of mathematical symbols behind which, in fact, there is no meaning. There is a formal system with rules for transforming input into output, and this system allows, through the consistent application of these rules, to obtain output from input data. All this ultimately comes down to multiplying a matrix by a vector. Yes, yes, yes: the entire quantum computing model is based on one simple operation, multiplying a matrix by a vector, which yields another vector as the output.

The luminary Halikaarn, in contrast, teaches that there is an objective physical process that performs the specified operation, and it is only the existence of this process that makes the massive parallelization of function evaluations possible. The fact that we perceive it as multiplying a matrix by a vector is just our way of imperfectly reflecting objective reality in our minds.

In our scientific laboratory named after the luminaries Prots and Halikaarn, we combine these two approaches and say that the quantum computing model is a mathematical abstraction that reflects an objective process. In particular, the numbers in the vectors and matrices are complex, although this does not increase the computational power of the model at all (it would be just as powerful with real numbers); complex numbers were chosen because an objective physical process was found that carries out the transformations the model describes and in which complex numbers are used. This process is called the unitary evolution of a quantum system.

The quantum computing model is based on the concept of the qubit. It is essentially the same as a bit in classical information theory, except that a qubit can take several values at the same time. One says that a qubit is in a superposition of its states: the value of a qubit is a linear combination of its basis states, and the coefficients of the basis states are precisely complex numbers. The basis states are the values 0 and 1 known from classical information theory (in quantum computing they are usually denoted |0> and |1>).

It's not yet clear where the trick is, so here it is. The superposition of one qubit is written as A|0> + B|1>, where A and B are complex numbers whose only constraint is that the sum of the squares of their moduli must always equal 1. What if we consider two qubits? Two bits can have 4 possible values: 00, 01, 10 and 11. It is reasonable to assume that two qubits represent a superposition of four basis values: A|00> + B|01> + C|10> + D|11>. And so it is. Three qubits are a superposition of eight basis values. In other words, a quantum register of N qubits simultaneously stores 2^N complex numbers. From a mathematical point of view this is a 2^N-dimensional vector in a complex-valued space. This is how the exponential power of the quantum computing model is achieved.
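A small numerical sketch of this bookkeeping (my own example, assuming numpy): the joint state of several qubits is the Kronecker (tensor) product of the single-qubit states, so its length grows as 2^N.

```python
import numpy as np

# Single-qubit states: |0> and the equal superposition (|0> + |1>)/sqrt(2).
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Joint state of two qubits = tensor (Kronecker) product: 4 amplitudes A, B, C, D.
two_qubits = np.kron(plus, ket0)
print(two_qubits)          # amplitudes for |00>, |01>, |10>, |11>

# A register of N qubits stores 2**N complex amplitudes.
N = 10
register = ket0
for _ in range(N - 1):
    register = np.kron(register, ket0)
print(register.size)       # 1024 == 2**10
```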

Next comes the function that is applied to the input data. Since the input is now a superposition of all possible values of the input argument, the function must be converted so that it accepts such a superposition and processes it. Here, too, everything is more or less simple. Within the quantum computing model, each function is a matrix subject to one constraint: it must be unitary, that is, multiplying this matrix by its Hermitian conjugate must give the identity matrix. The Hermitian conjugate matrix is obtained by transposing the original matrix and replacing all its elements with their complex conjugates. This constraint follows from the constraint on the quantum register mentioned earlier: if such a matrix is multiplied by the vector of a quantum register, the result is a new quantum register in which the sum of the squares of the moduli of the complex coefficients of the quantum states is again equal to 1.

It can be shown that any function can be transformed in a special way into such a matrix. It can also be shown that any such matrix can be expressed via the tensor product of a small set of basic matrices representing elementary logical operations. Here everything is roughly the same as in the classical computational model. This is a more complex topic that is beyond the scope of this overview article; the main thing to understand for now is that any function can be expressed as a matrix suitable for use within the quantum computing model.
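As a hedged illustration of "any function can become such a matrix" (this standard construction is my addition, not the author's): a Boolean function f can be wrapped into a reversible, unitary permutation matrix U_f acting on basis states |x, y> as |x, y XOR f(x)>.

```python
import numpy as np

def oracle_matrix(f, n_inputs):
    """Build the permutation matrix U_f: |x, y> -> |x, y XOR f(x)>.

    f takes an integer x in [0, 2**n_inputs) and returns 0 or 1.
    The matrix acts on n_inputs + 1 qubits and is unitary by construction.
    """
    dim = 2 ** (n_inputs + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_inputs):
        for y in (0, 1):
            col = (x << 1) | y               # input basis state |x, y>
            row = (x << 1) | (y ^ f(x))      # output basis state |x, y XOR f(x)>
            U[row, col] = 1
    return U

U = oracle_matrix(lambda x: x & 1, n_inputs=1)   # f(x) = x for one input bit
print(np.allclose(U @ U.conj().T, np.eye(4)))    # True: the matrix is unitary
```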

What happens next? We have an input vector, which is a superposition of the various possible values of the function's input parameter. We have the function in the form of a unitary matrix. The quantum algorithm is the multiplication of the matrix by the vector, and the result is a new vector. What kind of nonsense is this?

The point is that in the quantum computing model there is one more operation, called measurement. We can measure a vector and obtain from it a specific value of the qubits; that is, the superposition collapses to a specific value. The probability of obtaining one value or another equals the square of the modulus of the corresponding complex coefficient. Now it is clear why the sum of the squares must equal 1: a measurement always produces some specific value, so the sum of the probabilities of obtaining them equals one.
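A minimal sketch of measurement as sampling (my example, assuming numpy): outcomes are drawn with probabilities equal to the squared moduli of the amplitudes, and the state then collapses to the drawn basis state.

```python
import numpy as np

rng = np.random.default_rng()

def measure(state):
    """Sample a basis-state index with probability |amplitude|**2 and collapse."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

# Two-qubit register: 60% chance of |00>, 40% chance of |11>.
state = np.array([np.sqrt(0.6), 0, 0, np.sqrt(0.4)], dtype=complex)
counts = [measure(state)[0] for _ in range(10000)]
print(counts.count(0) / 10000, counts.count(3) / 10000)  # roughly 0.6 and 0.4
```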

So what do we get? Having N qubits, we can simultaneously process 2^N complex numbers, and the output vector contains the results of processing all these numbers at once. That is the power of the quantum computing model. But only one value can be obtained, and it may differ from run to run according to the probability distribution. That is the limitation of the quantum computing model.

The essence of a quantum algorithm is as follows. An equal-probability superposition of all possible values of the input parameter is created and fed to the input of the function. Then, from the results of its execution, a conclusion is drawn about the properties of that function. We cannot obtain all the results, but we can perfectly well draw conclusions about the function's properties. The next section shows some examples.

In the vast majority of sources on quantum computing, the reader will find descriptions of several algorithms that are usually used to demonstrate the power of the computational model. Here we will also briefly and superficially look at two such algorithms, which demonstrate different basic principles of quantum computing. For a detailed acquaintance with them I again refer you to my new book.

Deutsch's algorithm

This is the first algorithm developed to demonstrate the essence and effectiveness of quantum computing. The problem it solves is completely divorced from reality, but it can be used to show the basic principle that underlies the model.

So, let there be some function that receives one bit as input and returns one bit as output. Honestly speaking, there can be only 4 such functions. Two of them are constant: one always returns 0, the other always returns 1. The other two are balanced: they return 0 and 1 in an equal number of cases. Question: how can one determine, with a single call of this function, whether it is constant or balanced?

Obviously, this cannot be done in the classical computational model. You need to call the function twice and compare the results. But in the quantum computing model this can be done, since the function will be called only once. Let's see…

As already written, we first prepare an equal-probability superposition of all possible values of the function's input parameter. Since we have one qubit at the input, its equal-probability superposition is prepared by a single application of the Hadamard gate (a special function that prepares equal-probability superpositions). Then the function under study is applied (this is its single call), after which the Hadamard gate is applied again; it works in such a way that if an equal-probability superposition is fed to its input, it converts it back into the state |0> or |1> depending on the phase of that superposition. After this the qubit is measured: if it equals |0>, the function in question is constant, and if |1>, it is balanced.

What happens here? As already mentioned, we cannot obtain all the values of a function by measuring. But we can draw certain conclusions about its properties. Deutsch's problem asks about a property of a function, and this property is very simple. After all, how does it work out? If the function is constant, then adding all its output values modulo 2 always gives 0. If the function is balanced, adding all its output values modulo 2 always gives 1. This is exactly the result we obtain by executing Deutsch's algorithm. We do not know exactly what value the function returned on the equal-probability superposition of all input values; we only know that it is also a superposition of results, and if we now transform this superposition in a special way, we can draw an unambiguous conclusion about the property of the function.
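A compact simulation sketch (my own, assuming numpy and the standard phase-oracle formulation of Deutsch's algorithm): one application of the oracle suffices to tell a constant function from a balanced one.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with ONE oracle application.

    Uses the phase-oracle form: the oracle multiplies the amplitude of |x>
    by (-1)**f(x), which acts on the whole superposition at once.
    """
    state = H @ np.array([1, 0], dtype=complex)          # equal superposition
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)])       # the single oracle
    state = H @ (oracle @ state)
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: 1))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```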

Something like that.

Grover's algorithm

Another algorithm, which shows a quadratic gain over the classical computational model, solves a problem that is closer to reality. This is Grover's algorithm or, as Lov Grover himself calls it, the algorithm for finding a needle in a haystack. This algorithm is based on another principle underlying quantum computing, namely amplification.

We have already mentioned the phase that a quantum state within a qubit may have. There is no such thing as phase in the classical model; it is something new within the framework of quantum computing. The phase can be understood as the sign of a quantum state's coefficient in the superposition. Grover's algorithm is based on the fact that a specially prepared function changes the phase of the states for which it returns 1.

Grover's algorithm solves the problem of inverting a function. If you have an unordered set of data in which you need to find the one element satisfying a search criterion, Grover's algorithm will do it more efficiently than plain brute force. If simple enumeration solves the problem in O(N) function calls, Grover's algorithm finds the desired element in O(√N) function calls.

Grover's algorithm consists of the following steps (a small simulation sketch is given after the list):

1. Initialization of the initial state. Again, an equal-probability superposition of all input qubits is prepared.

2. Application of the Grover iteration. The iteration consists of sequentially applying the search function (it encodes the search criterion for the element) and a special diffusion gate. The diffusion gate changes the coefficients of the quantum states, reflecting them about their average. This produces amplification, that is, an increase in the amplitude of the desired value. The trick is that the iteration must be applied a specific number of times (on the order of √(2^n)); otherwise the algorithm will return wrong results.

3. Measurement. After the input quantum register is measured, the desired result is obtained with high probability. If the reliability of the answer needs to be increased, the algorithm is run several times and the cumulative probability of the correct answer is computed.
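Below is a hedged numerical sketch of these three steps (my own illustration, not from the original post; numpy is assumed): the oracle flips the phase of the marked element and the diffusion step reflects all amplitudes about their mean.

```python
import numpy as np

n = 4                      # qubits
N = 2 ** n                 # database size
marked = 11                # index of the "needle" (arbitrary choice)

# 1. Initialization: equal-probability superposition of all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# 2. Grover iteration, repeated on the order of sqrt(N) times.
iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                    # oracle: phase flip of the marked state
    state = 2 * state.mean() - state       # diffusion: reflection about the mean

# 3. Measurement: the marked item now carries almost all the probability.
probabilities = np.abs(state) ** 2
print(probabilities[marked])               # close to 1 (about 0.96 for n = 4)
print(int(np.argmax(probabilities)) == marked)
```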

What is interesting about this algorithm is that it allows one to tackle an arbitrary search problem (for example, any problem from the NP-complete class), providing, if not an exponential, then still a substantial gain in efficiency over the classical computational model. A future article will show how this can be done.

However, it can no longer be said that scientists just keep sitting in their ivory tower. Although many quantum algorithms are developed for rather strange and obscure math-flavoured problems (for example, determining the order of an ideal of a finite ring), a number of quantum algorithms have already been developed that solve very applied problems. First of all, these are problems in the field of cryptography (compromising various cryptographic systems and protocols). Next come typical mathematical problems on graphs and matrices, which have a very wide range of applications. Finally, there are a number of approximation and emulation algorithms that use the analogue component of the quantum computing model.

MINISTRY OF EDUCATION OF THE RUSSIAN FEDERATION

STATE EDUCATIONAL INSTITUTION

Essay

Quantum computing

Introduction

Chapter I. Basic concepts of quantum mechanics

Chapter II. Basic concepts and principles of quantum computing

Chapter III. Grover's algorithm

Conclusion

Bibliography

Introduction

Imagine a computer whose memory is exponentially larger than its apparent physical size would suggest; a computer that can process an exponentially large set of input data simultaneously; a computer that performs calculations in a Hilbert space that is hazy for most of us.

Then you think about a quantum computer.

The idea of a computing device based on quantum mechanics was first considered in the 1970s and early 1980s by physicists and computer scientists such as Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of Oxford University, and later Richard P. Feynman of the California Institute of Technology (Caltech). The idea arose when scientists became interested in the fundamental limits of computation. They realized that if technology continued to shrink the computer circuits packed into silicon chips, individual elements would eventually be no more than a few atoms across. A problem would then arise, since at the atomic level the laws of quantum physics, not classical ones, operate. This raised the question of whether a computer could be built on the principles of quantum physics.

Feynman was among the first to try to answer this question. In 1982 he proposed a model of an abstract quantum system suitable for computation. He also explained how such a system could serve as a simulator of quantum physics; in other words, physicists could carry out computational experiments on such a quantum computer.

Later, in 1985, Deutsch realized that Feynman's claim might eventually lead to a general-purpose quantum computer, and he published landmark theoretical work showing that any physical process could in principle be simulated on a quantum computer.

Unfortunately, all they could come up with at the time were a few rather contrived mathematical problems, until Shor published his paper in 1994, in which he presented an algorithm for solving an important number-theoretic problem on a quantum computer, namely factorization into primes. He showed how a set of mathematical operations designed specifically for a quantum computer could factorize huge numbers fantastically quickly, much faster than conventional computers. This was a breakthrough that moved quantum computing from an academic curiosity to a problem of interest to the whole world.


Chapter I. Basic concepts of quantum mechanics

At the end of the 19th century, there was a widespread opinion among scientists that physics was a "virtually complete" science and that very little was left for its full "completion": explaining the structure of the optical spectra of atoms and the spectral distribution of thermal radiation. Optical spectra of an atom are produced by the emission or absorption of light (electromagnetic waves) by free or weakly bound atoms; monatomic gases and vapours, in particular, have such spectra.

Thermal radiation is a mechanism for transferring heat between spatially separated parts of the body due to electromagnetic radiation.

However, the beginning of the 20th century led to the understanding that there could be no talk of any “completeness”. It became clear that to explain these and many other phenomena, it was necessary to radically revise the concepts underlying physical science.

For example, based on the wave theory of light, it turned out to be impossible to give an exhaustive explanation of the entire set of optical phenomena.

When solving the problem of the spectral composition of radiation, the German physicist Max Planck suggested in 1900 that the emission and absorption of light by matter occur in finite portions, or quanta. The energy of a photon, the quantum of electromagnetic radiation (in the narrow sense, of light), is determined by the expression

E = hν,

where ν is the frequency of the emitted (or absorbed) light and h is a universal constant, now called Planck's constant.

The Dirac constant (reduced Planck constant) ħ = h/2π is also often used; the quantum energy is then expressed as E = ħω, where ω = 2πν is the angular (circular) frequency of the radiation.

The contradictions between viewing light as a stream of particles and as waves led to the concept of wave-particle duality.

On the one hand, the photon demonstrates the properties of an electromagnetic wave in the phenomena of diffraction (waves bending around obstacles comparable to the wavelength) and interference (superposition of waves with the same frequency and the same initial phase) on scales comparable to the photon wavelength. For example, single photons passing through a double slit create an interference pattern on the screen that can be described by Maxwell's equations. However, experiment shows that photons are emitted and absorbed as wholes by objects whose dimensions are much smaller than the photon wavelength (for example, atoms) or which, to some approximation, can be considered point-like (for example, an electron); that is, they behave like particles, corpuscles. In the macroworld around us there are two fundamental ways of transferring energy and momentum between two points in space: the direct movement of matter from one point to another, and a wave process transferring energy without transferring matter. Here all energy carriers are strictly divided into corpuscular and wave ones. In the microworld, by contrast, no such division exists: all particles, and photons in particular, are ascribed both corpuscular and wave properties. The situation cannot be visualized intuitively; this is an objective property of quantum models.

The nearly monochromatic radiation of frequency ν emitted by a light source can be thought of as consisting of "packets of radiation", which we call photons. Monochromatic radiation is radiation with a very small frequency spread, ideally of a single wavelength.

The propagation of photons in space is correctly described by the classical Maxwell equations. In this case each photon is considered as a classical wave train, defined by two vector fields: the electric field strength and the magnetic field induction. A wave train is a series of disturbances with gaps between them. The radiation of an individual atom cannot be monochromatic, because the radiation lasts for a finite time, with periods of rise and decay.

It is incorrect to interpret the sum of the squares of the field amplitudes as the energy density of the space in which the photon moves; instead, every quantity that depends quadratically on the wave amplitude should be interpreted as a quantity proportional to the probability of some process. For instance, it is not equal to the energy delivered by the photon to a given region, but is proportional to the probability of detecting the photon in that region.

The energy transferred to any location in space by a photon is always equal to hν. Accordingly, the mean energy delivered to a region is W = p·N·hν, where p is the probability of finding a photon in that region and N is the number of photons.

In 1921, the Stern-Gerlach experiment confirmed that atoms possess spin and that the directions of their magnetic moments are spatially quantized (from the English spin: to rotate, to spin). Spin is the intrinsic angular momentum of elementary particles; it has a quantum nature and is not associated with the motion of the particle as a whole. When the concept of spin was introduced, it was assumed that the electron could be regarded as a "spinning top" and its spin as a characteristic of that rotation. Spin is also the name given to the intrinsic angular momentum of an atomic nucleus or atom; in that case the spin is defined as the vector sum (calculated according to the rules for adding angular momenta in quantum mechanics) of the spins of the elementary particles forming the system and the orbital momenta of these particles due to their motion within the system.

Spin is measured in units of ħ (the reduced Planck constant, or Dirac constant) and is equal to ħ√(J(J+1)), where J is an integer (including zero) or half-integer positive number characteristic of each kind of particle, the spin quantum number, which is usually called simply the spin (one of the quantum numbers). Accordingly, one speaks of integer or half-integer particle spin. However, the concepts of spin and spin quantum number should not be confused. The spin quantum number is a quantum number that determines the spin value of a quantum system (atom, ion, atomic nucleus, molecule), i.e. its intrinsic (internal) angular momentum. The projection of the spin onto any fixed direction z in space can take the values J, J−1, ..., −J. Thus a particle with spin J can be in 2J+1 spin states (for J = 1/2, in two states), which is equivalent to the presence of an additional internal degree of freedom.

A key element of quantum mechanics is the Heisenberg uncertainty principle, which states that it is impossible to determine simultaneously and exactly the position of a particle in space and its momentum. This principle explains the quantization of light, as well as the proportionality of the photon energy to its frequency.

The motion of a photon can be described by Maxwell's system of equations, while the equation of motion of any other elementary particle, such as an electron, is the Schrödinger equation, which is more general.

Maxwell's system of equations is invariant under the Lorentz transformation. Lorentz transformations in the special theory of relativity are the transformations undergone by the space-time coordinates (x, y, z, t) of each event in the transition from one inertial frame of reference to another. In essence, these are transformations not only of space, like Galileo's transformations, but also of time.

Chapter II. Basic concepts and principles of quantum computing

Although computers have become smaller and much faster at their task than before, the task itself remains the same: manipulate a sequence of bits and interpret that sequence as a useful computational result. A bit is a fundamental unit of information, usually represented as 0 or 1 in a digital computer. Each classical bit is physically realized by a macroscopic physical system, such as the magnetization on a hard drive or the charge on a capacitor. For example, a text composed of n characters and stored on a typical computer's hard drive is described by a string of 8n zeros and ones. This is where the fundamental difference between a classical computer and a quantum computer lies. While a classical computer obeys the well-understood laws of classical physics, a quantum computer is a device that exploits quantum mechanical phenomena (especially quantum interference) to implement a completely new way of processing information.

In a quantum computer, the fundamental unit of information (called a quantum bit, or qubit) is not binary but rather quaternary in nature. This property of the qubit is a direct consequence of its subjection to the laws of quantum mechanics, which are radically different from the laws of classical physics. A qubit can exist not only in a state corresponding to logical 0 or 1, like a classical bit, but also in states corresponding to blends, or superpositions, of these classical states. In other words, a qubit can exist as a zero, as a one, and as both 0 and 1 simultaneously, and one can specify a numerical coefficient representing the probability of each state.

The idea that a quantum computer could be built goes back to the work of R. Feynman in 1982-1986. Considering the calculation of the evolution of quantum systems on a digital computer, Feynman discovered the "intractability" of this problem: the memory resources and speed of classical machines are insufficient for solving quantum problems. For example, a system of n quantum particles with two states each (spins 1/2) has 2^n basis states; to describe it, one must specify (and store in computer memory) 2^n amplitudes of these states. Starting from this negative result, Feynman suggested that a "quantum computer" would likely have properties allowing it to solve quantum problems.

"Classical" computers are built on transistor circuits that have nonlinear relations between input and output voltages. They are essentially bistable elements: for example, when the input voltage is low (logical "0"), the output voltage is high (logical "1"), and vice versa. In the quantum world, such a bistable transistor circuit can be compared with a two-level quantum particle: to the state |0> we assign the logical value 0, and to the state |1> the logical value 1. Transitions in the bistable transistor circuit correspond here to transitions between the levels |0> and |1>. However, a quantum bistable element, called a qubit, has a new property compared to the classical one, the superposition of states: it can be in any superposition state a|0> + b|1>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. The states of a quantum system of n two-level particles in general have the form of a superposition of 2^n basis states. Ultimately, the quantum principle of superposition of states makes it possible to give a quantum computer fundamentally new "abilities".

It has been proven that a quantum computer can be built from only two types of elements (gates): a one-qubit element and a two-qubit controlled-NOT (CNOT) element. The 2×2 matrix of the one-qubit element has the form:

(1)

The gate (1) describes a rotation of the qubit state vector from the z axis to the polar axis specified by the angles θ and φ. If θ and φ are irrational multiples of π, then by repeated application of the gate the state vector can be given any predetermined orientation; this is precisely the "universality" of the single-qubit gate of form (1). In a particular case one obtains the single-qubit logical element NOT: NOT|0> = |1>, NOT|1> = |0>. In a physical implementation, the NOT element is realized by acting on the quantum particle (qubit) with an external pulse that transfers the qubit from one state to the other. The controlled-NOT gate is executed by acting on two interacting qubits: through the interaction, one qubit controls the evolution of the other. Such transitions under external pulses are well known in pulsed magnetic-resonance spectroscopy. The NOT gate corresponds to a spin flip under the action of a pulse (rotation of the magnetization about an axis by the angle π). The CNOT gate is executed on two interacting spins 1/2 (one spin controls the other) and is performed in three steps: a pulse, then free precession for a certain time, then another pulse. If the controlling qubit is in the state |0>, the controlled qubit is left unchanged by these actions; if the controlling qubit is in the state |1>, the controlled qubit makes the transitions |0> → |1> and |1> → |0>. Thus the controlled spin evolves differently depending on the state of the controlling qubit.

When considering the question of implementing a quantum computer on certain quantum systems, the feasibility and properties of elementary NOT and controlled NOT gates are first examined.

For what follows it is also useful to introduce the one-qubit Hadamard transform

H = (1/√2) ( 1   1
             1  −1 )

In magnetic-resonance technology these gates are realized by appropriate resonant pulses.

The diagram of a quantum computer is shown in the figure. Before the computer starts operating, all qubits (quantum particles) must be brought into the state |0>, i.e. into the ground state. This condition is itself non-trivial.


It requires either deep cooling (to temperatures of the order of millikelvins) or the use of polarization methods. A system of n qubits in the state |00...0> can be regarded as a memory register prepared for recording the input data and performing the computation. In addition to this register, it is usually assumed that there are additional (auxiliary) registers needed for recording intermediate results of the computation. Data are recorded by acting on each qubit of the computer in one way or another. Let us assume, for example, that the Hadamard transformation is performed on each qubit of the register:

As a result, the system goes into a superposition of 2^n basis states with amplitudes 2^(−n/2). Each basis state is a binary number from 0 to 2^n − 1. The horizontal lines in the figure indicate the time axes.
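A small numerical check of this statement (my own sketch, assuming numpy): applying the Hadamard matrix to every qubit of the register |00...0> yields 2^n equal amplitudes 2^(−n/2).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
register = np.array([1, 0])               # single qubit in |0>
for _ in range(n - 1):
    register = np.kron(register, [1, 0])  # n-qubit register |00...0>

# Apply H to every qubit: the full operator is the n-fold Kronecker product.
H_n = H
for _ in range(n - 1):
    H_n = np.kron(H_n, H)

state = H_n @ register
print(state)                              # 8 equal amplitudes
print(np.allclose(state, 2 ** (-n / 2)))  # True: each amplitude is 2**(-n/2)
```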

The execution of the algorithm is accomplished by a unitary transformation U of the superposition, where U is a unitary matrix of dimension 2^n. When it is realized physically, through pulsed actions on the qubits from outside, the matrix U must be represented as a product of matrices of dimensions 2 and 4. The latter can be applied by sequentially acting on single qubits or on pairs of qubits:

The number of factors in this expansion determines the duration (and complexity) of the calculations. Everything in (3) is performed using the operations NOT, CNOT, H (or their variations).

It is remarkable that the linear unitary operator acts simultaneously on all terms of the superposition

The results of the calculation are written into the spare register, which before this was in the state |0...0>. In a single run of the computational process we obtain the values of the desired function f(x) for all values of the argument x = 0, ..., 2^n − 1. This phenomenon is called quantum parallelism.
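A hedged simulation of this "one run for all arguments" effect (my own example, using the standard oracle construction U_f: |x, y> -> |x, y XOR f(x)> and assuming numpy):

```python
import numpy as np

def f(x):                 # an arbitrary example function of a 2-bit argument
    return (x == 2) & 1   # returns 1 only for x = 2

n = 2                                      # input qubits
dim = 2 ** (n + 1)                         # plus one result qubit
U_f = np.zeros((dim, dim))                 # oracle |x, y> -> |x, y XOR f(x)>
for x in range(2 ** n):
    for y in (0, 1):
        U_f[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1

# Input register: equal superposition over all x, result qubit in |0>.
superposition = np.full(2 ** n, 2 ** (-n / 2))
state = np.kron(superposition, [1, 0])

state = U_f @ state                        # ONE application of the oracle
for index in np.flatnonzero(state):        # every nonzero amplitude encodes (x, f(x))
    print(f"x = {index >> 1}, f(x) = {index & 1}")
```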

Measuring the result of calculations is reduced to projecting the superposition vector in (4) onto the vector of one of the basic states :

(5)

Here one of the weak points of a quantum computer appears: in the measurement process the value of x "falls out" at random. To find f(x) for a given x, the computation and the measurement have to be repeated many times until the desired x happens to fall out.

The analysis of the unitary evolution of a quantum system performing a computational process shows the importance of physical processes such as interference. Unitary transformations take place in the space of complex numbers, and the addition of the phases of these numbers has the character of interference. The productivity of Fourier transforms in interference phenomena and in spectroscopy is well known, and it turns out that quantum algorithms invariably contain Fourier transforms. The Hadamard transform is the simplest discrete Fourier transform. Gates of the NOT and CNOT type can be implemented directly in a Mach-Zehnder interferometer using the interference of a photon and the rotation of its polarization vector.

Various ways of physically implementing quantum computers are being explored. Model experiments on quantum computing have been performed on a pulsed nuclear magnetic resonance spectrometer. In these models two or three spins (qubits) were used, for example two spins of 13C nuclei and one proton spin in a trichloroethylene molecule.

However, in these experiments the quantum computer was an "ensemble" one: the computer's output signals are composed of a large number of molecules in a liquid solution (~10^20).

To date, proposals have been made to implement quantum computers on ions and molecules in traps in a vacuum, on nuclear spins in liquids (see above), on the nuclear spins of 31P atoms in crystalline silicon, on electron spins in quantum dots created in a two-dimensional electron gas in GaAs heterostructures, and on Josephson junctions. As we see, a quantum computer can in principle be built on atomic particles in a vacuum, in a liquid, or in a crystal. In each case certain obstacles must be overcome, but several of them are common and stem from the principles of operation of qubits in a quantum computer. Let us set the task of creating a full-scale quantum computer containing, say, 10^3 qubits (although even at n = 100 a quantum computer could be a useful tool).

1. Ways must be found to "initialize" the computer's qubits in the state |0>. For spin systems in crystals, the use of ultra-low temperatures and ultra-strong magnetic fields is the obvious route. The use of spin polarization by pumping can be helpful when cooling and high magnetic fields are applied simultaneously.

For ions in vacuum traps, ultra-low cooling of ions (atoms) is achieved by laser methods. The need for cold and ultra-high vacuum is also obvious.

2. A technique is needed for selectively acting with pulses on any chosen qubit. In the field of radio frequencies and spin resonance this means that each spin must have its own resonant frequency (in terms of spectroscopic resolution). The differences in resonant frequencies for spins in molecules are due to chemical shifts for spins of the same isotope and the same element; the necessary frequency differences also exist for the nuclear spins of different elements. However, common sense suggests that these naturally occurring differences in resonant frequencies are hardly sufficient for working with 10^3 spins.

Approaches in which the resonant frequency of each qubit can be controlled from outside seem more promising. In the proposal for a silicon quantum computer, the qubit is the nuclear spin of an impurity 31P atom. The resonant frequency is determined by the constant A of the hyperfine interaction of the nuclear and electron spins of the 31P atom. An electric field on a nanoelectrode located above the 31P atom polarizes the atom and changes the constant A (and, accordingly, the resonant frequency of the nuclear spin). Thus the presence of the electrode embeds the qubit in an electronic circuit and tunes its resonant frequency.

3. To perform the CNOT (controlled NOT) operation, an interaction between qubits i and k of a suitable form is required. Such an interaction exists between the spins of nuclei in a molecule if the nuclei are separated by one chemical bond. In principle, one must be able to perform the operation on any pair of qubits i and k. It is hardly possible to have a physical interaction of qubits of the same magnitude and on an "all with all" basis in a natural environment. There is an obvious need for a way to tune the medium between qubits from outside, by introducing electrodes with a controlled potential. In this way one can create, for example, an overlap of the wave functions of electrons in neighbouring quantum dots and the appearance of an interaction between electron spins; the overlap of the wave functions of the electrons of neighbouring 31P atoms causes the appearance of an interaction between their nuclear spins.

To carry out the operation between distant qubits i and k, between which there is no direct interaction, the computer must use the operation of exchanging states along a chain of intermediate qubits: the state of one qubit is brought by successive exchanges to a position neighbouring the other, where the required two-qubit operation can be performed.

4. During the execution of the unitary transformation corresponding to the chosen algorithm, the computer's qubits are exposed to the environment; as a result, the amplitude and phase of the qubit state vector undergo random changes: decoherence. Essentially, decoherence is the relaxation of those degrees of freedom of the particle that are used in the qubit. The decoherence time is equal to the relaxation time. In nuclear magnetic resonance in liquids, relaxation times are 1-10 s. For ions in traps with optical transitions between levels E0 and E1, the decoherence time is determined by the spontaneous-emission time and the time between collisions with residual atoms. Decoherence is obviously a serious obstacle to quantum computing: the computational process that has been started acquires features of randomness after the decoherence time has elapsed. However, a stable quantum computing process can be maintained for an arbitrarily long time, longer than the decoherence time, if quantum coding and error-correction methods (for phase and amplitude) are applied systematically. It has been proven that with relatively modest requirements on the error-free execution of elementary operations such as NOT and CNOT (error probability no more than 10^-5), quantum error correction (QEC) methods ensure stable operation of a quantum computer.

It is also possible to actively suppress the decoherence process if periodic measurements are carried out on the system of qubits. The measurement will most likely find the particle in the “correct” state, and small random changes in the state vector will collapse during the measurement (quantum Zeno effect). However, it is difficult to say yet how useful such a technique can be, since such measurements themselves can affect and disrupt the computational process.

5. The states of the qubits after the computational process is completed must be measured to determine the result of the computation. There is as yet no mastered technology for such measurements, but the path to such a technology is clear: amplification methods must be used in the quantum measurement. For example, the state of a nuclear spin is transferred to the electron spin; the orbital wave function depends on the latter; knowing the orbital wave function, one can organize a charge transfer (ionization); and the presence or absence of the charge of a single electron can be detected by classical electrometric methods. Probe force microscopy methods will probably play a major role in these measurements.

To date, quantum algorithms have been discovered that lead to exponential acceleration of calculations compared to calculations on a classical computer. These include Shor's algorithm for determining prime factors of large (multi-digit) numbers. This purely mathematical problem is closely connected with the life of society, since modern encryption codes are built on the “non-computability” of such factors. It was this circumstance that caused a sensation when Shor's algorithm was discovered. It is important for physicists that the solution of quantum problems (solving the Schrödinger equation for many-particle systems) is exponentially accelerated if a quantum computer is used.

Finally, it is very important that in the course of research into quantum computing problems, the main problems of quantum physics are subjected to new analysis and experimental verification: problems of locality, reality, complementarity, hidden parameters, wave function collapse.

Ideas of quantum computing and quantum communication arose a hundred years after the birth of the initial ideas of quantum physics. The possibility of building quantum computers and communication systems has been demonstrated by theoretical and experimental studies completed to date. Quantum physics is “sufficient” for the design of quantum computers based on various “element bases”. Quantum computers, if they can be built, will be the technology of the 21st century. Their manufacture will require the creation and development of new technologies at the nanometer and atomic level. This work could likely take several decades. The construction of quantum computers would be another confirmation of the principle of the inexhaustibility of nature: nature has the means to carry out any task correctly formulated by man.

In a conventional computer, information is encoded as a sequence of bits, and these bits are processed sequentially by Boolean logic gates to produce the desired result. Similarly, a quantum computer processes qubits by performing a sequence of operations with quantum logic gates, each of which is a unitary transformation acting on a single qubit or a pair of qubits. By performing these transformations in sequence, a quantum computer can carry out a complex unitary transformation on the entire set of qubits prepared in some initial state. After this, measurements can be made on the qubits, giving the final result of the computation. This similarity of computation between a quantum and a classical computer suggests that, at least in theory, a classical computer can exactly replicate the operation of a quantum computer; in other words, a classical computer can do everything a quantum computer can. Then why all the fuss about a quantum computer? The point is that, although in theory a classical computer can simulate a quantum computer, it does so very inefficiently, so inefficiently that in practice a classical computer is unable to solve many problems a quantum computer can. Simulating a quantum computer on a classical one is a computationally hard problem, because the correlations between quantum bits are qualitatively different from the correlations between classical bits, as was first shown by John Bell. Take, for example, a system of just a few hundred qubits. It exists in a Hilbert space of dimension ~10^90, so modelling it with a classical computer would require exponentially large matrices (performing a calculation for each individual state, which is itself described by a matrix). This means a classical computer would take exponentially more time than even a primitive quantum computer.

Richard Feynman was among the first to recognize the potential of quantum superposition for solving such problems much faster. For example, a system of 500 qubits, which is practically impossible to model classically, is a quantum superposition of 2^500 states. Each value of such a superposition is classically equivalent to a list of 500 ones and zeros. Any quantum operation on such a system, for example a tuned pulse of radio waves that performs a controlled-NOT operation on, say, the 100th and 101st qubits, simultaneously affects 2^500 states. Thus, in one tick of the computer clock a quantum operation computes not one machine state, as conventional computers do, but 2^500 states at once! Eventually, however, a measurement is made on the system of qubits, and the system collapses into a single quantum state corresponding to a single solution of the problem, a single set of 500 ones and zeros, as dictated by the measurement axiom of quantum mechanics. This is a truly exciting result, since this solution, found by a collective process of quantum parallel computation originating in superposition, is equivalent to performing the same operation on a classical supercomputer with ~10^150 separate processors (which, of course, is impossible)! The first researchers in this field were naturally inspired by such gigantic possibilities, and a hunt soon began for problems suited to such computing power. Peter Shor, a researcher and computer scientist at AT&T's Bell Laboratories in New Jersey, proposed such a problem, which can be solved on a quantum computer with a quantum algorithm. Shor's algorithm uses the power of quantum superposition to factor large numbers (on the order of 10^200 binary digits or more) in a matter of seconds. This problem has important practical applications in encryption, where the generally accepted (and best) encryption algorithm, known as RSA, is based precisely on the difficulty of factoring large composite numbers. A quantum computer that easily solves this problem is, of course, of great interest to the many government organizations using RSA, which until now was considered "unbreakable", and to anyone interested in the security of their data.

Encryption, however, is only one possible application of a quantum computer. Shor has developed a whole set of mathematical operations that can be performed exclusively on a quantum computer. Some of these operations are used in his factorization algorithm. Further, Feynman argued that a quantum computer could act as a simulation device for quantum physics, potentially opening the door to many discoveries in the field. Currently, the power and capabilities of a quantum computer are mainly a matter of theoretical speculation; the advent of the first truly functional quantum computer will undoubtedly bring many new and exciting practical applications.

Chapter III. Grover's algorithm

The search problem is as follows: there is an unordered database consisting of N elements, of which only one satisfies given conditions, and it is this element that must be found. If an element can be inspected, then determining whether it satisfies the required conditions takes one step. However, the database is such that there is no ordering that could help in selecting an element. The most efficient classical algorithm for this task is to check the elements of the database one after another. If an element satisfies the required conditions, the search is over; if not, the element is set aside so that it is not checked again. Obviously, this algorithm requires on average N/2 elements to be checked before the desired one is found.

When this algorithm is implemented quantum mechanically, one can use the same hardware as in the classical case, but by specifying the input and output in the form of superpositions of states, one can find the object in O(√N) quantum mechanical steps instead of O(N) classical steps. Each quantum mechanical step consists of an elementary unitary operation, which we consider below.

To implement this algorithm we need the following three elementary operations: the first is the preparation of a state in which the system is in any of its N basis states with equal probability; the second is the Hadamard transform; and the third is a selective rotation of the phases of states.

As is known, the basic operation for quantum computing is the operation M, acting on a single bit, which is represented by the following matrix:

    M = (1/√2) | 1   1 |
               | 1  −1 |

that is, a bit in state 0 turns into a superposition of the two states: (1/√2, 1/√2). Similarly, a bit in state 1 is transformed into (1/√2, −1/√2); the amplitude of each state has magnitude 1/√2, but the phase in state 1 is reversed. The phase has no analogue in classical probabilistic algorithms. It arises in quantum mechanics, where the probability amplitude is complex. In a system whose state is described by n bits (i.e., there are N = 2^n possible states), we can carry out the transformation M on each bit independently, sequentially changing the state of the system. If the initial configuration is the one with all n bits in the state 0, the resulting configuration will have equal amplitudes for every one of the 2^n states. This is the way to create a superposition with the same amplitude in all states.
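To make the action of M concrete, here is a short NumPy sketch (an illustration added here, not part of the original text): it applies M to each of n bits of the all-zeros register and checks that all 2^n amplitudes become equal to 1/√N.

    import numpy as np

    # Single-bit transform M (the Hadamard matrix):
    # |0> -> (1/sqrt(2), 1/sqrt(2)),  |1> -> (1/sqrt(2), -1/sqrt(2)).
    M = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    n = 3                      # number of bits, N = 2**n states
    N = 2 ** n

    # Start from the basis state in which all n bits are 0.
    state = np.zeros(N)
    state[0] = 1.0

    # Applying M to every bit independently amounts to the n-fold
    # Kronecker (tensor) product of M.
    W = M
    for _ in range(n - 1):
        W = np.kron(W, M)

    state = W @ state
    print(state)               # every amplitude equals 1/sqrt(N)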

The third transformation we will need is a selective rotation of the phase of the amplitude in certain states. For a two-state system this transformation has the form:

    | e^(iφ1)     0    |
    |    0     e^(iφ2) |

where i = √−1 and φ1, φ2 are arbitrary real numbers. Note that, unlike the Hadamard transform and other state-transformation matrices, the probability of each state remains the same, since the squared absolute magnitude of the amplitude in each state is unchanged.

Let's consider the problem in abstract form.

Let the system have N = 2^n states, which are denoted S_1, ..., S_N. These 2^n states are represented as n-bit strings. Let there be a single state, say S_ν, that satisfies the condition C(S_ν) = 1, while for all other states S, C(S) = 0 (it is assumed that for any state S the condition is evaluated in unit time). The task is to recognize the state S_ν.

Let us now move on to the algorithm itself.

Steps (1) and (2) are a sequence of elementary unitary operations described earlier. Step (3) is the final measurement carried out by the external system.

(1) We bring the system into the superposition state:

    (1/√N, 1/√N, ..., 1/√N),

with identical amplitudes for each of the N states. This superposition can be obtained in O(log N) steps.

(2) Repeat the following unitary operation O(√N) times:

a. Let the system be in some state S:

when C(S) = 1, rotate the phase by π radians;

when C(S) = 0, leave the system unchanged.

b. Apply the diffusion transform D, which is defined by the matrix D as follows: D_ij = 2/N if i ≠ j, and D_ii = −1 + 2/N. D can be implemented as a sequence of unitary transformations, D = W R W, where W is the Hadamard transform matrix and R is a phase rotation matrix.

(3) Measure the resulting state. It will be the state S_ν (i.e., the desired state satisfying C(S_ν) = 1) with probability of at least 0.5. Note that step (2a) is a phase rotation. Its implementation must include a procedure that recognizes the state and then decides whether or not to carry out the phase rotation. It must be carried out in such a way as not to leave a trace on the state of the system, so that there is confidence that paths leading to the same final state are indistinguishable and can interfere. Note that this procedure does not involve classical measurements.

This quantum search algorithm is likely to be simpler to implement than many other known quantum mechanical algorithms, since the only operations required are the Walsh-Hadamard transform and the conditional phase shift, each of which is relatively simple compared with the operations used by other quantum mechanical algorithms.
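As a numerical check of the steps described above, the following NumPy sketch (illustrative only; the marked index target and the register size are arbitrary choices) simulates step (1), the repeated steps (2a) and (2b), and the measurement probabilities of step (3).

    import numpy as np

    n = 4                                  # number of bits
    N = 2 ** n
    target = 5                             # the single state S_nu with C(S_nu) = 1

    # Step (1): uniform superposition with amplitudes 1/sqrt(N).
    state = np.full(N, 1 / np.sqrt(N))

    # Diffusion matrix D: D_ij = 2/N for i != j, D_ii = -1 + 2/N.
    D = np.full((N, N), 2 / N) - np.eye(N)

    # Step (2): repeat about (pi/4) * sqrt(N) times.
    for _ in range(int(np.pi / 4 * np.sqrt(N))):
        state[target] *= -1                # (2a) rotate the phase of S_nu by pi
        state = D @ state                  # (2b) apply the diffusion transform

    # Step (3): measurement probabilities are the squared amplitudes.
    probs = np.abs(state) ** 2
    print(probs[target])                   # close to 1 for the marked state

For N = 16, roughly (π/4)√N ≈ 3 iterations are enough to push the probability of measuring the marked state above 0.9, in line with the O(√N) estimate.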


Conclusion

Currently, quantum computers and quantum information technologies remain at an early stage of development. Overcoming the difficulties these technologies now face will allow quantum computers to take their rightful place as the fastest computing devices that are physically possible. By now, error correction has advanced significantly, bringing closer the point where computers can be built that are robust enough to withstand the effects of decoherence. On the other hand, the creation of quantum hardware is still only an emerging industry; but the work done to date convinces us that it is only a matter of time before machines are built that are large enough to run serious algorithms such as Shor's algorithm. Thus, quantum computers will certainly appear. At the very least, they will be the most advanced computing devices, and the computers we have today will become obsolete. Quantum computing has its origins in very specific areas of theoretical physics, but its future will undoubtedly have a huge impact on the lives of all humanity.



Just five years ago, only specialists in quantum physics knew about quantum computers. In recent years, however, the number of publications on the Internet and in specialized periodicals devoted to quantum computing has grown exponentially. The topic has become popular and has generated many different opinions, which do not always correspond to reality.
In this article we will try to explain as clearly as possible what a quantum computer is and what stage modern developments in this field have reached.

Limited capabilities of modern computers

Quantum computers and quantum computing are often talked about as an alternative to silicon technologies for creating microprocessors, which, in general, is not entirely true. Actually, why do we even have to look for an alternative to modern computer technologies? As the entire history of the computer industry shows, the computing power of processors is increasing exponentially. No other industry is developing at such a rapid pace. As a rule, when talking about the rate of growth in the computing power of processors, they recall the so-called Gordon Moore's law, derived in April 1965, that is, just six years after the invention of the first integrated circuit (IC).

At the request of Electronics magazine, Gordon Moore wrote an article dedicated to the 35th anniversary of the publication, in which he predicted how semiconductor devices would develop over the next ten years. Having analyzed the pace of development of semiconductor devices and the economic factors over the previous six years, that is, since 1959, Gordon Moore suggested that by 1975 the number of transistors in a single integrated circuit would reach 65 thousand.

In fact, according to Moore's forecast, the number of transistors in a single chip was expected to increase by more than a thousand times in ten years. At the same time, this meant that every year the number of transistors in one chip had to double.

Subsequently, adjustments were made to Moore's law (to bring it into line with reality), but the meaning did not change: the number of transistors in microcircuits grows exponentially. Naturally, increasing the density of transistors on a chip is possible only by reducing the size of the transistors themselves. A relevant question in this regard is: to what extent can the size of transistors be reduced? The sizes of individual transistor elements in today's processors are already comparable to atomic dimensions; for example, the silicon dioxide layer separating the gate from the charge-transfer channel is only a few tens of atomic layers thick. It is clear that there is a purely physical limit that makes further reduction of transistor sizes impossible. Even if we assume that transistors will in the future have a somewhat different geometry and architecture, it is theoretically impossible to create a transistor or a similar element with a size of less than 10^-8 cm (the diameter of a hydrogen atom) and an operating frequency of more than 10^15 Hz (the frequency of atomic transitions). Therefore, whether we like it or not, the day is inevitable when Moore's law will have to be consigned to the archives (unless, of course, it is corrected once again).

The limited ability to increase the computing power of processors by shrinking transistors is only one of the bottlenecks of classical silicon processors.

As we will see later, quantum computers in no way represent an attempt to solve the problem of miniaturization of the basic elements of processors.

Solving the problem of transistor miniaturization, searching for new materials for the element base of microelectronics, and searching for new physical principles for devices with characteristic dimensions comparable to the de Broglie wavelength, which is about 20 nm, have been on the agenda for almost two decades. Nanotechnology grew out of the solution of these problems. A serious problem encountered in the transition to nanoelectronic devices is reducing energy dissipation during computational operations. The idea that "logically reversible" operations, which are not accompanied by energy dissipation, are possible was first expressed by R. Landauer back in 1961. A significant step in solving this problem was made in 1982 by Charles Bennett, who proved theoretically that a universal digital computer can be built on logically and thermodynamically reversible gates in such a way that energy is dissipated only in the irreversible peripheral processes of entering information into the machine (preparation of the initial state) and, correspondingly, reading the result out of it. Typical reversible universal gates are the Fredkin and Toffoli gates.

Another problem with classical computers lies in the von Neumann architecture itself and the binary logic of all modern processors. All computers, from Charles Babbage's Analytical Engine to modern supercomputers, are based on the same principles (von Neumann architecture) that were developed back in the 40s of the last century.

At the software level, any computer operates with bits (variables that take the value 0 or 1). Logic gates perform logical operations on bits, which makes it possible to obtain a certain final state at the output. The states of the variables are changed by a program that defines a sequence of operations, each of which uses a small number of bits.

Traditional processors execute programs sequentially. Despite the existence of multiprocessor systems, multi-core processors and various technologies aimed at increasing the level of parallelism, all computers built on the basis of von Neumann architecture are devices with a sequential mode of instruction execution. All modern processors implement the following algorithm for processing commands and data: fetching commands and data from memory and executing instructions on the selected data. This cycle is repeated many times and at tremendous speed.

However, the von Neumann architecture limits the possibility of increasing the computing power of modern PCs. A typical example of a task beyond the capabilities of modern PCs is the decomposition of an integer into prime factors (a prime factor is a factor that is a prime number, i.e., divisible without remainder only by itself and by 1).

If you want to factor a number X that has n digits in binary notation into prime factors, the obvious way to solve this problem is to try dividing it sequentially by the numbers from 2 to √X. To do this, you will have to go through about 2^(n/2) candidates. For example, for a number with 100,000 binary digits, you would need to try about 3×10^15051 candidates. If we assume that one processor cycle is required per trial, then at a clock speed of 3 GHz it would take longer than the age of our planet to try all the numbers. There is, however, a clever algorithm that solves the same problem in roughly exp(n^(1/3)) steps, but even so, not a single modern supercomputer can cope with factoring a number with a million digits.
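For illustration, a naive trial-division sketch in Python (not from the original text) makes the exponential growth of the search space tangible; the printed counts are only rough estimates.

    def trial_division(x: int):
        """Naive factorization: try dividing by 2, 3, ... up to sqrt(x)."""
        factors = []
        d = 2
        while d * d <= x:
            while x % d == 0:
                factors.append(d)
                x //= d
            d += 1
        if x > 1:
            factors.append(x)
        return factors

    print(trial_division(15))                   # [3, 5]

    # The number of candidate divisors grows roughly as 2**(n/2),
    # where n is the bit length of the number being factored.
    for n in (32, 64, 128, 256):
        print(n, "bits -> about", 2 ** (n // 2), "candidates")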

The problem of factoring a number into prime factors belongs to the class of problems that are said not to be solvable in polynomial time (NP problems, from "nondeterministic polynomial time"). Such problems are non-computable in the sense that they cannot be solved on classical computers in a time that is polynomial in the number of bits n representing the problem. In the case of decomposing a number into prime factors, the time required to solve the problem grows exponentially, not polynomially, as the number of bits increases.

Looking ahead, we note that quantum computing holds out the prospect of solving some problems of this class in polynomial time.

Quantum physics

Of course, quantum physics has only a loose relation to what is called the element base of modern computers. However, when talking about a quantum computer, it is simply impossible to avoid some specific terms of quantum physics. We understand that not everyone has studied the legendary third volume of the "Theoretical Physics" course by L.D. Landau and E.M. Lifshitz, and for many, concepts such as the wave function and the Schrödinger equation are something from another world. As for the specific mathematical apparatus of quantum mechanics, it is all dense formulas and obscure terms. We will therefore try to keep the presentation at a generally accessible level, avoiding tensor analysis and other specialized machinery of quantum mechanics wherever possible.

For the vast majority of people, quantum mechanics is beyond comprehension. The point is not so much in the complex mathematical apparatus, but in the fact that the laws of quantum mechanics are illogical and do not have a subconscious association - they are impossible to imagine. However, the analysis of the illogicality of quantum mechanics and the paradoxical birth of harmonious logic from this illogicality is the lot of philosophers; we will touch on aspects of quantum mechanics only to the extent necessary to understand the essence of quantum computing.

The history of quantum physics began on December 14, 1900. It was on this day that the German physicist and future Nobel laureate Max Planck reported at a meeting of the Berlin Physical Society on the fundamental discovery of the quantum properties of thermal radiation. This is how the concept of energy quantum appeared in physics, and among other fundamental constants, Planck’s constant.

Planck's discovery and Albert Einstein's theory of the photoelectric effect, which then appeared in 1905, as well as the creation in 1913 of the first quantum theory of atomic spectra by Niels Bohr stimulated the creation and further rapid development of quantum theory and experimental studies of quantum phenomena.

Already in 1926, Erwin Schrödinger formulated his famous wave equation, and Enrico Fermi and Paul Dirac obtained a quantum statistical distribution for the electron gas, taking into account the filling of individual quantum states.

In 1928, Felix Bloch analyzed the quantum mechanical problem of the motion of an electron in an external periodic field of a crystal lattice and showed that the electronic energy spectrum in a crystalline solid has a band structure. In fact, this was the beginning of a new direction in physics - solid state theory.

The entire 20th century is a period of intensive development of quantum physics and all those branches of physics for which quantum theory became the progenitor.

The emergence of quantum computing

The idea of using quantum computing was first expressed by the Soviet mathematician Yu.I. Manin in 1980 in his well-known monograph "Computable and Incomputable". True, interest in this work arose only two years later, in 1982, after an article on the same topic was published by the American theoretical physicist and Nobel laureate Richard Feynman. He noted that certain quantum mechanical operations cannot be transferred exactly to a classical computer. This observation led him to believe that such calculations could be more efficient if carried out using quantum operations.

Consider, for example, the quantum mechanical problem of how the state of a quantum system consisting of n spins changes over a certain period of time. Without going into the details of the mathematical apparatus of quantum theory, we note that the general state of a system of n spins is described by a vector in a 2^n-dimensional complex space, and the change of its state is described by a unitary matrix of size 2^n × 2^n. If the time period under consideration is very short, then the matrix has a very simple structure and each of its elements is easy to calculate from the interaction between the spins. If you need to know the change in the state of the system over a long period of time, then such matrices have to be multiplied, and this requires an exponentially large number of operations. Again we are faced with a problem that cannot be solved in polynomial time on classical computers. There is currently no way to simplify this calculation, and it is likely that simulating quantum mechanics is an exponentially difficult mathematical problem. But if classical computers are not capable of solving quantum problems, then perhaps it would be advisable to use the quantum system itself for this purpose? And if this is indeed possible, are quantum systems suitable for solving other computational problems? Questions of this kind were considered by Feynman and Manin.

Already in 1985, David Deutsch proposed a specific mathematical model of a quantum machine.

However, until the mid-1990s the field of quantum computing developed rather sluggishly. The practical implementation of quantum computers proved to be very difficult. In addition, the scientific community was pessimistic about whether quantum operations could speed up the solution of certain computational problems. This continued until 1994, when the American mathematician Peter Shor proposed, for a quantum computer, an algorithm that decomposes an n-digit number into prime factors in a time that depends polynomially on n (the quantum factorization algorithm). Shor's quantum factorization algorithm became one of the main factors that led to the intensive development of quantum computing methods and to the emergence of algorithms that make it possible to solve some NP problems.

Naturally, the question arises: why did the quantum factorization algorithm proposed by Shor lead to such consequences? The point is that the problem of factoring a number into prime factors is directly related to cryptography, in particular to the popular RSA encryption systems. Being able to factor a number into prime factors in polynomial time, a quantum computer could theoretically decrypt messages encoded with many popular cryptographic algorithms, such as RSA. Until now, this algorithm has been considered relatively reliable, since an efficient way of factoring numbers into prime factors on a classical computer is currently unknown. Shor came up with a quantum algorithm that can factor an n-digit number in n^3 (log n)^k steps (k = const). Naturally, the practical implementation of such an algorithm could have more negative than positive consequences, since it would make it possible to find keys to ciphers, forge electronic signatures, and so on. However, the practical implementation of a real quantum computer is still a long way off, and therefore over the next ten years there is no fear that codes could be broken with quantum computers.

The idea of ​​quantum computing

So, after a brief description of the history of quantum computing, we can move on to consider its very essence. The idea (but not its implementation) of quantum computing is quite simple and interesting. But even for a superficial understanding of it, it is necessary to become familiar with some specific concepts of quantum physics.

Before considering the generalized quantum concepts of the state vector and the superposition principle, let us consider a simple example of a polarized photon. A polarized photon is an example of a two-level quantum system. The polarization of a photon can be directed, for example, vertically or horizontally, so one speaks of two main, or basis, states, which are denoted |1⟩ and |0⟩.

These notations (bra/ket notation) were introduced by Dirac and have a strict mathematical definition (basis state vectors) that determines the rules for working with them; however, in order not to plunge into the mathematical jungle, we will not consider these subtleties in detail.

Returning to the polarized photon, we note that as the basis states we could choose not only the horizontal and vertical directions but any pair of mutually orthogonal polarization directions. The meaning of basis states is that any arbitrary polarization can be expressed as a linear combination of the basis states, that is, a|1⟩ + b|0⟩. Since we are interested only in the direction of polarization (the magnitude of the polarization is not important), the state vector can be taken to be a unit vector, that is, |a|^2 + |b|^2 = 1.

Now let us generalize the example with photon polarization to any two-level quantum system.

Suppose we have an arbitrary two-level quantum system characterized by the orthogonal basis states |1⟩ and |0⟩. According to the laws (postulates) of quantum mechanics (the superposition principle), the possible states of the quantum system also include the superpositions ψ = a|1⟩ + b|0⟩, where a and b are complex numbers called amplitudes. Note that the superposition state has no analogue in classical physics.
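As a simple illustration (added here; the particular amplitudes are arbitrary), a qubit state ψ = a|1⟩ + b|0⟩ can be stored as a pair of complex numbers whose squared moduli must sum to one:

    import numpy as np

    # Qubit state psi = a|1> + b|0> stored as two complex amplitudes
    # (the values are purely illustrative).
    a = (1 + 1j) / 2
    b = 1 / np.sqrt(2)

    norm = abs(a) ** 2 + abs(b) ** 2
    print(norm)        # must equal 1 for a valid (normalized) qubit state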

One of the fundamental postulates of quantum mechanics states that in order to measure the state of a quantum system, it must be destroyed. That is, any measurement process in quantum physics violates the initial state of the system and transfers it to a new state. It is not so easy to understand this statement, and therefore let us dwell on it in more detail.

In general, the concept of measurement in quantum physics plays a special role, and it should not be considered as a measurement in the classical sense. A measurement of a quantum system occurs whenever it comes into interaction with a “classical” object, that is, an object that obeys the laws of classical physics. As a result of such interaction, the state of the quantum system changes, and the nature and magnitude of this change depend on the state of the quantum system and therefore can serve as its quantitative characteristic.

In this regard, a classical object is usually called a device, and its process of interaction with a quantum system is spoken of as a measurement. It must be emphasized that this does not at all mean the measurement process in which the observer participates. By measurement in quantum physics we mean any process of interaction between classical and quantum objects that occurs in addition to and independently of any observer. Clarification of the role of measurement in quantum physics belongs to Niels Bohr.

So, in order to measure a quantum system, it is necessary to act on it in some way with a classical object, after which its original state will be disturbed. In addition, it can be argued that as a result of the measurement the quantum system will be transferred to one of its basis states. For example, to measure a two-level quantum system, at least a two-level classical object is required, that is, a classical object that can take two possible values: 0 and 1. During the measurement, the state of the quantum system is transformed into one of the basis vectors: if the classical object takes the value 0, the quantum object goes to the state |0⟩, and if the classical object takes the value 1, the quantum object goes to the state |1⟩.

Thus, although a quantum two-level system can be in an infinite number of superposition states, a measurement yields only one of the two possible basis states. The squared modulus of the amplitude, |a|^2, determines the probability of finding (measuring) the system in the basis state |1⟩, and |b|^2 the probability of finding it in the basis state |0⟩.

However, let us return to our example with the polarized photon. To measure the state of a photon (its polarization), we need some classical device with the classical basis (1, 0). Then the polarization state of the photon a|1⟩ + b|0⟩ will be measured as 1 (horizontal polarization) with probability |a|^2 and as 0 (vertical polarization) with probability |b|^2.

Since measuring a quantum system brings it into one of the basis states and therefore destroys the superposition (suppose, for example, that the measurement yields the value |1⟩), this means that as a result of the measurement the quantum system goes into a new quantum state, and at the next measurement we obtain the value |1⟩ with 100% probability.
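The following toy sketch (illustrative only; the amplitudes are arbitrary) mimics this behaviour: the first measurement returns 1 or 0 with probabilities |a|^2 and |b|^2, after which the state collapses and every subsequent measurement returns the same value.

    import numpy as np

    rng = np.random.default_rng(0)

    a, b = np.sqrt(0.3), np.sqrt(0.7)     # amplitudes of |1> and |0> (illustrative)

    # First measurement: 1 with probability |a|^2, 0 with probability |b|^2.
    outcome = rng.choice([1, 0], p=[abs(a) ** 2, abs(b) ** 2])

    # The superposition is destroyed: the state collapses to the measured basis state.
    a, b = (1.0, 0.0) if outcome == 1 else (0.0, 1.0)

    # Every subsequent measurement of this qubit now gives the same value.
    repeat = rng.choice([1, 0], p=[abs(a) ** 2, abs(b) ** 2])
    print(outcome, repeat)                # the two values are always equal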

The state vector of a two-level quantum system is also called the wave function of the quantum state ψ of the two-level system or, in the language of quantum computing, a qubit (quantum bit). Unlike a classical bit, which can take only two logical values, a qubit is a quantum object, and the number of its states determined by superposition is unlimited. However, we emphasize once again that the result of measuring a qubit is always one of only two possible values.

Now consider a system of two qubits. Measuring each of them can give the classical object the value 0 or 1. Therefore, a system of two qubits has four classical states: 00, 01, 10 and 11. The basis quantum states analogous to them are |00⟩, |01⟩, |10⟩ and |11⟩. The corresponding quantum state vector is written as a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, where |a|^2 is the probability of obtaining the value 00 upon measurement, |b|^2 the probability of obtaining 01, and so on.

In general, if a quantum system consists of L qubits, then it has 2^L possible classical states, each of which can be obtained in a measurement with a certain probability. The state function of such a quantum system is written as:

    ψ = Σ_n c_n |n⟩,

where |n⟩ are the basis quantum states (for example, the state |001101⟩), and |c_n|^2 is the probability of being found in the basis state |n⟩.

In order to change the superposition state of a quantum system, it is necessary to apply a selective external influence to each qubit. From a mathematical point of view, such a transformation is represented by a unitary matrix of size 2^L × 2^L. As a result, a new quantum superposition state is obtained.

Structure of a quantum computer

The transformation of the superposition state of a quantum system of L qubits considered above is, in essence, a model of a quantum computer. Consider a simple example of implementing quantum computation. Suppose we have a system of L qubits, each ideally isolated from the outside world. At each moment of time we can choose any two qubits and act on them with a 4×4 unitary matrix. The sequence of such actions is a kind of program for the quantum computer.
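A minimal NumPy sketch of this "program" model (an added illustration; the gate, the register size and the qubit indices are arbitrary) embeds a 4×4 unitary acting on a pair of adjacent qubits into the full 2^L × 2^L matrix and applies it to a basis state of the register.

    import numpy as np

    L = 4                                    # number of qubits in the register
    CNOT = np.array([[1, 0, 0, 0],           # an example 4x4 unitary on two qubits
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    def two_qubit_op(gate, k, L):
        """Embed a 4x4 gate acting on adjacent qubits k and k+1 into the full
        2**L x 2**L unitary using Kronecker products with identity matrices."""
        return np.kron(np.kron(np.eye(2 ** k), gate), np.eye(2 ** (L - k - 2)))

    U = two_qubit_op(CNOT, 1, L)             # act on qubits 1 and 2 of the register
    state = np.zeros(2 ** L)
    state[0b0110] = 1.0                      # basis state |0110>
    state = U @ state
    print(np.argmax(np.abs(state)))          # prints 4, i.e. |0100>: the target qubit flipped

Acting on a non-adjacent pair of qubits works the same way in principle but requires an additional permutation of the basis, which is omitted here for brevity.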

To use a quantum circuit for computation, you need to be able to input input data, perform the calculation, and read the result. Therefore, the circuit diagram of any quantum computer (see figure) must include the following functional blocks: a quantum register for data input, a quantum processor for data conversion, and a device for reading data.

A quantum register is a collection of a certain number L of qubits. Before information is entered into the computer, all qubits of the quantum register must be brought to the basis state |0⟩. This operation is called preparation, or initialization. Then certain qubits (not all of them) are subjected to a selective external influence (for example, by pulses of an external electromagnetic field controlled by a classical computer), which changes their values, that is, they go from the state |0⟩ to the state |1⟩. In this case, the state of the entire quantum register goes into a superposition of the basis states |n⟩, that is, the state of the quantum register at the initial moment of time is determined by a function of the form:

    ψ(0) = Σ_n c_n |n⟩.
It is clear that this superposition state can be used for the binary representation of a number n.

In the quantum processor, the input data are subjected to a sequence of quantum logic operations which, from the mathematical point of view, are described by a unitary transformation acting on the state of the whole register. As a result, after a certain number of clock cycles of the quantum processor, the initial quantum state of the system becomes a new superposition of the form:

    ψ_out = U ψ(0) = Σ_n c′_n |n⟩.
Speaking about the quantum processor, we need to make one important note. It turns out that to construct any calculation, only two basic logical Boolean operations are enough. Using basic quantum operations, it is possible to imitate the operation of ordinary logic gates that computers are made of. Since the laws of quantum physics at the microscopic level are linear and reversible, the corresponding quantum logic devices that perform operations with the quantum states of individual qubits (quantum gates) turn out to be logically and thermodynamically reversible. Quantum gates are similar to the corresponding reversible classical gates, but, unlike them, they are capable of performing unitary operations on superpositions of states. The implementation of unitary logical operations on qubits is supposed to be carried out using appropriate external influences controlled by classical computers.

Schematic structure of a quantum computer

After the transformations are carried out in the quantum computer, the new superposition function is the result of the computation in the quantum processor. It remains only to read out the values obtained, for which the state of the quantum system is measured. The result is a sequence of zeros and ones, and, because of the probabilistic nature of measurement, it can be anything. Thus, a quantum computer can give any answer with some probability. A quantum computation scheme is considered correct if the correct answer is obtained with a probability sufficiently close to unity. By repeating the computation several times and choosing the answer that occurs most often, the probability of error can be reduced to an arbitrarily small value.
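A toy sketch of this "repeat and take the most frequent answer" strategy (purely illustrative: the function noisy_run, the correct answer 42 and the success probability 0.7 are invented for the example) shows how repetition suppresses the probability of error.

    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_run(correct=42, p=0.7):
        """One run of a hypothetical probabilistic computation: the correct answer
        appears with probability p, otherwise a random wrong value is returned."""
        return correct if rng.random() < p else int(rng.integers(0, 100))

    # Repeat the computation several times and take the most frequent answer.
    answers = [noisy_run() for _ in range(25)]
    majority = Counter(answers).most_common(1)[0][0]
    print(majority)                          # 42 with probability very close to 1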

In order to understand how classical and quantum computers differ in operation, let us recall that a classical computer stores L bits in memory, which are changed during each processor cycle. A quantum computer stores in its memory (state register) the values of L qubits, but the quantum system is in a state that is a superposition of all 2^L basis states, and a change of the quantum state produced by the quantum processor affects all 2^L basis states simultaneously. Accordingly, in a quantum computer the computing power is achieved through parallel computation, and in theory a quantum computer can work exponentially faster than a classical circuit.

It is believed that in order to implement a full-scale quantum computer, superior in performance to any classical computer, no matter what physical principles it operates on, the following basic requirements must be met:

  • a physical system that constitutes a full-scale quantum computer must contain a sufficiently large number (L > 10^3) of clearly distinguishable qubits for performing the relevant quantum operations;
  • it is necessary to ensure maximum suppression of the effects that destroy the superposition of quantum states, caused by the interaction of the qubit system with the environment, as a result of which the execution of quantum algorithms may become impossible. The time of destruction of the superposition of quantum states (the decoherence time) must be at least 10^4 times greater than the time needed to perform the basic quantum operations (the cycle time). To achieve this, the qubit system must be coupled to its environment fairly weakly;
  • it is necessary to ensure measurement with sufficiently high reliability of the state of the quantum system at the output. Measuring the final quantum state is one of the main challenges of quantum computing.

Practical applications of quantum computers

For practical use, not a single quantum computer has yet been created that would satisfy all of the above conditions. However, in many developed countries, close attention is paid to the development of quantum computers and tens of millions of dollars are invested annually in such programs.

At the moment, the largest quantum computer is made up of just seven qubits. This is enough to implement Shor's algorithm and factor the number 15 into the prime factors 3 and 5.

If we talk about possible models of quantum computers, then, in principle, there are quite a lot of them. The first quantum computer that was created in practice was a high-resolution pulsed nuclear magnetic resonance (NMR) spectrometer, although it, of course, was not considered a quantum computer. It was only when the concept of a quantum computer emerged that scientists realized that an NMR spectrometer was a variant of a quantum computer.

In an NMR spectrometer, the spins of the nuclei of the molecule under study form qubits. Each nucleus has its own resonance frequency in a given magnetic field. When a nucleus is exposed to a pulse at its resonant frequency, it begins to evolve, while the remaining nuclei do not experience any impact. In order to force another nucleus to evolve, you need to take a different resonant frequency and give an impulse at it. Thus, pulsed action on nuclei at a resonant frequency represents a selective effect on qubits. Moreover, the molecule has a direct connection between spins, so it is an ideal preparation for a quantum computer, and the spectrometer itself is a quantum processor.

The first experiments on the nuclear spins of two hydrogen atoms in molecules of 2,3-dibromothiophene SCH:(CBr)2:CH, and on three nuclear spins - one in the hydrogen atom H and two in carbon-13 isotopes (13C) in trichloroethylene molecules CCl2:CHCl - were carried out in 1997 in Oxford (UK).

When an NMR spectrometer is used, it is important that, in order to influence the nuclear spins of the molecule selectively, they must differ markedly in resonance frequency. Later, quantum operations were carried out in an NMR spectrometer with 3, 5, 6 and 7 qubits.

The main advantage of an NMR spectrometer is that it can use a huge number of identical molecules. Each molecule (more precisely, the nuclei of the atoms of which it consists) is a quantum system. Sequences of radio-frequency pulses, acting as particular quantum logic gates, carry out unitary transformations of the states of the corresponding nuclear spins simultaneously in all the molecules. That is, selective action on an individual qubit is replaced by simultaneous access to the corresponding qubits in all molecules of a large ensemble. A computer of this kind is called a bulk-ensemble NMR quantum computer. Such computers can operate at room temperature, and the decoherence time of the quantum states of nuclear spins amounts to several seconds.

The greatest progress to date has been achieved with NMR quantum computers based on organic liquids. It is mainly due to the well-developed technique of pulsed NMR spectroscopy, which makes it possible to perform various operations on coherent superpositions of nuclear spin states, and to the possibility of using standard NMR spectrometers operating at room temperature for this purpose.

The main limitation of NMR quantum computers is the difficulty of initializing the initial state in the quantum register. The fact is that in a large ensemble of molecules the initial state of the qubits is different, which complicates bringing the system to the initial state.

Another limitation of NMR quantum computers is due to the fact that the signal measured at the output of the system decreases exponentially with increasing number of qubits L. In addition, the number of nuclear qubits in a single molecule with widely varying resonant frequencies is limited. This leads to the fact that NMR quantum computers cannot have more than ten qubits. They should be considered only as prototypes of future quantum computers, useful for testing the principles of quantum computing and testing quantum algorithms.

Another version of a quantum computer is based on ion traps: the role of qubits is played by the energy levels of ions captured in ion traps, which are created in vacuum by an electric field of a particular configuration under conditions of laser cooling to ultralow temperatures. The first prototype of a quantum computer based on this principle was proposed in 1995. The advantage of this approach is the comparatively simple individual control of separate qubits. The main disadvantages of quantum computers of this type are the need for ultra-low temperatures, the need to ensure the stability of the state of the ions in the chain, and the limited possible number of qubits - no more than 40.

Other schemes for quantum computers are also possible, the development of which is currently underway. However, it will be at least another ten years before true quantum computers are finally created.

The creation of a universal quantum computer is one of the most complex tasks of modern physics; its solution will radically change humanity's ideas about the Internet and methods of information transfer, cybersecurity and cryptography, electronic currencies, artificial intelligence and machine learning systems, methods for synthesizing new materials and medicines, and approaches to modeling complex physical, quantum and ultra-large (Big Data) systems.

The exponential growth of dimensionality when attempting to calculate real systems or the simplest quantum systems is an insurmountable obstacle for classical computers. However, in 1980, Yuri Manin and Richard Feynman (in 1982, but in more detail) independently put forward the idea of ​​using quantum systems for computing. Unlike classical modern computers, quantum circuits use qubits (quantum bits) for calculations, which by their nature are quantum two-level systems and make it possible to directly use the phenomenon of quantum superposition. In other words, this means that a qubit can simultaneously be in states |0> and |1>, and two interconnected qubits can simultaneously be in states |00>, |10>, |01> and |11>. It is this property of quantum systems that should provide an exponential increase in the performance of parallel computing, making quantum computers millions of times faster than the most powerful modern supercomputers.

In 1994, Peter Shor proposed a quantum algorithm for factoring numbers into prime factors. The question of whether an efficient classical solution to this problem exists is extremely important and is still open, while Shor's quantum algorithm provides exponential acceleration over the best classical analogue. For example, a modern supercomputer in the petaflop range (10^15 operations/sec) would factor a number with 500 decimal digits in 5 billion years, whereas a quantum computer operating at megahertz rates (10^6 operations/sec) would solve the same problem in 18 seconds. It is important to note that the complexity of this problem underlies the popular RSA cryptographic algorithm, which will simply lose its relevance once a quantum computer is created.

In 1996, Lov Grover proposed a quantum algorithm that solves the search (enumeration) problem with quadratic acceleration. Although the speed-up of Grover's algorithm is noticeably smaller than that of Shor's algorithm, what matters is its wide range of applications and the evident impossibility of speeding up the classical version of exhaustive search. Today, more than 40 effective quantum algorithms are known, most of which are based on the ideas of the Shor and Grover algorithms, and their implementation is an important step towards the creation of a universal quantum computer.

The implementation of quantum algorithms is one of the priority tasks of the Research Center for Physics and Mathematics. Our research in this area is aimed at developing multi-qubit superconducting quantum integrated circuits for universal quantum information processing systems and quantum simulators. The basic elements of such circuits are Josephson tunnel junctions, consisting of two superconductors separated by a thin barrier - a dielectric about 1 nm thick. Superconducting qubits based on Josephson junctions, when cooled in dilution cryostats to temperatures close to absolute zero (~20 mK), exhibit quantum mechanical properties, demonstrating quantization of electric charge (charge qubits) or of the phase or flux of a magnetic field (flux qubits), depending on their design. Capacitive or inductive coupling elements, as well as superconducting coplanar resonators, are used to combine qubits into circuits, and control is carried out by microwave pulses with controlled amplitude and phase. Superconducting circuits are particularly attractive because they can be fabricated using the planar mass-production technologies of the semiconductor industry. At the Research Center for Physics and Mathematics we use equipment (R&D class) from the world's leading manufacturers, specially designed and built for us with allowance for the peculiarities of the technological processes of manufacturing superconducting quantum integrated circuits.

Although the quality of superconducting qubits has improved by several orders of magnitude over the past 15 years, superconducting quantum integrated circuits are still very unstable compared with classical processors. Building a reliable universal multi-qubit quantum computer requires solving a large number of physical, technological, architectural and algorithmic problems. The REC FMS has formed a comprehensive research and development program aimed at creating multi-qubit superconducting quantum circuits, including:

  • methods of formation and research of new materials and interfaces;
  • design and manufacturing technology of quantum circuit elements;
  • scalable fabrication of highly coherent qubits and high-quality resonators;
  • tomography (characteristic measurements) of superconducting qubits;
  • control of superconducting qubits, quantum switching (entanglement);
  • error detection methods and error correction algorithms;
  • development of multi-qubit quantum circuit architecture;
  • superconducting parametric amplifiers with quantum noise level.

Due to their nonlinear properties, inherently ultra-low losses and scalability (they are manufactured by lithographic methods), Josephson junctions are extremely attractive for creating quantum superconducting circuits. Often, to manufacture a quantum circuit, hundreds or thousands of Josephson junctions with characteristic sizes of the order of 100 nm must be formed on a single chip. Reliable operation of the circuits is achieved only if the junction parameters are accurately reproduced; in other words, all the junctions of a quantum circuit must be absolutely identical. For this, the most modern methods of electron-beam lithography and subsequent high-precision shadow deposition through resist or hard masks are used.

Josephson junctions are formed by standard ultra-high-resolution lithography methods using two-layer resist or hard masks. When such a two-layer mask is developed, windows are formed for the deposition of superconductor layers at angles chosen so that the deposited layers overlap. Before the second superconductor layer is deposited, a very high-quality dielectric tunnel layer of the Josephson junction is formed. After the Josephson junctions have been formed, the two-layer mask is removed. At each stage of junction formation, a critical factor is the creation of "ideal" interfaces: even atomic-scale contamination radically degrades the parameters of the manufactured circuits as a whole.

The FMN has developed an aluminum technology for forming Al–AlOx–Al Josephson junctions with minimum sizes in the range 100-500 nm and with reproducibility of the junction critical current no worse than 5%. Ongoing technological research is aimed at finding new materials, improving the technological operations of junction formation, developing approaches to integration with new routing process flows, and increasing the reproducibility of junction fabrication as their number grows to tens of thousands per chip.

Josephson qubits (quantum two-level systems, or "artificial atoms") are characterized by a typical splitting between the ground and excited state energy levels and are controlled by standard microwave pulses (external adjustment of the level spacing and of the eigenstates) at the splitting frequency, which lies in the gigahertz range. All superconducting qubits can be divided into charge qubits (quantization of electric charge) and flux qubits (quantization of magnetic flux or phase), and the main quality criteria of qubits from the point of view of quantum computing are the relaxation time (T1), the coherence time (T2, dephasing) and the time needed to perform one operation. The first charge qubit was realized at the NEC laboratory (Japan) by a research group led by Y. Nakamura and Yu. Pashkin (Nature 398, 786-788, 1999). Over the past 15 years, the coherence times of superconducting qubits have been improved by leading research groups by nearly six orders of magnitude, from nanoseconds to hundreds of microseconds, enabling hundreds of two-qubit operations and error correction algorithms.


At the Research Center for Physics and Mathematics we develop, manufacture and test charge and flux qubits of various designs (flux qubits, fluxoniums, 2D/3D transmons, X-mons, etc.) with aluminum Josephson junctions, and we conduct research on new materials and methods for creating highly coherent qubits aimed at improving the basic parameters of superconducting qubits.

The center's specialists are developing thin-film transmission lines and high-quality superconducting resonators with resonant frequencies in the range of 3-10 GHz. They are used in quantum circuits and memories for quantum computing, enabling control of individual qubits, communication between them, and readout of their states in real time. The main task here is to increase the quality factor of the structures created in the single-photon regime at low temperatures.

In order to improve the parameters of superconducting resonators, we study various types of resonator designs, thin-film materials (aluminum, niobium, niobium nitride), film deposition methods (electron-beam, magnetron, atomic-layer), topology formation methods (lift-off lithography, various etching processes) on various substrates (silicon, sapphire), and the integration of different materials in a single circuit.

Scientific groups from various areas of physics have long studied the possibility of coherent interaction (coupling) of quantum two-level systems with quantum harmonic oscillators. Until 2004 such interaction could be achieved only in experiments in atomic physics and quantum optics, where a single atom coherently exchanges a single photon with single-mode radiation. These experiments made a major contribution to the understanding of the mechanisms of interaction of light with matter, quantum physics, the physics of coherence and decoherence, and also confirmed the theoretical foundations of the concept of quantum computing. However, in 2004 a research team led by A. Wallraff (Nature 431, 162-167 (2004)) was the first to demonstrate coherent coupling of a solid-state quantum circuit to a single microwave photon. Thanks to these experiments, and after a number of technological problems had been solved, principles were developed for creating controllable solid-state two-level quantum systems, which formed the basis of the new paradigm of circuit quantum electrodynamics (circuit QED) that has been actively studied in recent years.


QED circuits are extremely attractive both from the point of view of studying the features of the interaction of various elements of quantum systems and creating quantum devices for practical use. We are exploring various types of interaction schemes for elements of QED circuits: effective coupling of qubits and control elements, circuit solutions for entangling qubits, quantum nonlinearity of interaction of elements with a small number of photons, etc. These studies are aimed at developing a base of practical experimental methods for creating multi-qubit quantum integrated circuits.

The main goal of research in this direction at the FMS is to develop the technological, metrological, methodological and algorithmic base for implementing the Shor and Grover algorithms on multi-qubit quantum circuits and to demonstrate quantum speed-up over classical supercomputers. This extremely ambitious scientific and technical task requires the solution of a colossal number of theoretical, physical, technological, circuit-design, metrological and algorithmic problems, on which leading scientific groups and IT companies are currently actively working.


Research and development in the field of quantum computing is carried out in close cooperation with leading Russian scientific teams from the Institute of Physics and Technology of the Russian Academy of Sciences, MISIS, MIPT, NSTU and RKTs, under the leadership of world-renowned Russian scientists.