Chemical computer
This article is about computation using reaction–diffusion systems. For computation using single molecules as active elements, see Molecular scale electronics and Molecular logic gate.
A chemical computer, also called a reaction–diffusion computer, BZ computer (short for Belousov–Zhabotinsky computer) or gooware computer, is an unconventional computer based on a semi-solid chemical "soup" in which data are represented by varying concentrations of chemicals.[1] The computations are performed by naturally occurring chemical reactions.
Background
Originally, chemical reactions were seen as a simple move toward a stable equilibrium, which was not very promising for computation. This changed with a discovery made by Boris Belousov, a Soviet scientist, in the 1950s. He created a chemical reaction between different salts and acids that swings back and forth between being yellow and clear, because the concentrations of the different components change up and down in a cyclic way. At the time this was considered impossible because it seemed to violate the second law of thermodynamics, which says that in a closed system entropy can only increase over time, causing the components of the mixture to distribute themselves until equilibrium is reached and making any further change in concentration impossible. Modern theoretical analyses show, however, that sufficiently complicated reactions can indeed exhibit wave phenomena without breaking the laws of nature.[1][2] (A convincing, directly visible demonstration was achieved by Anatol Zhabotinsky with the Belousov–Zhabotinsky reaction, which shows spiraling colored waves.)
Because the BZ reaction supports waves, it can carry information in the same way as any other wave. This still leaves the need for computation, which conventional microchips perform in binary by transmitting and transforming ones and zeros through a complicated system of logic gates. To perform any conceivable computation it is sufficient to have NAND gates. (A NAND gate has two input bits; its output is 0 if both inputs are 1, and 1 otherwise.) In the chemical computer, logic gates are implemented by concentration waves blocking or amplifying each other in different ways.
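As an illustration of why NAND alone suffices, the following sketch (plain Python, purely illustrative and unrelated to any chemical implementation) composes NOT, AND and OR from NAND; a wave-based NAND gate could in principle be wired together in the same way.

  # Illustration only: composing other Boolean gates from NAND, the single
  # gate type that suffices to build any logic circuit.
  def nand(a, b):
      return 0 if (a == 1 and b == 1) else 1

  def not_(a):
      return nand(a, a)                 # NOT x = x NAND x

  def and_(a, b):
      return not_(nand(a, b))           # x AND y = NOT (x NAND y)

  def or_(a, b):
      return nand(not_(a), not_(b))     # x OR y = (NOT x) NAND (NOT y)

  # Truth-table check.
  for a in (0, 1):
      for b in (0, 1):
          print(a, b, "->", nand(a, b), and_(a, b), or_(a, b))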
Current research
In 1989 it was demonstrated how light-sensitive chemical reactions could perform image processing.[3] This led to an upsurge in the field of chemical computing. Andrew Adamatzky at the University of the West of England has demonstrated simple logic gates using reaction–diffusion processes.[4] Furthermore, he has theoretically shown how a hypothetical "2+ medium" modelled as a cellular automaton can perform computation.[5] Adamatzky was inspired by a theoretical article on computation using balls on a billiard table, and transferred this principle to the BZ chemicals by replacing the billiard balls with waves: if two waves meet in the solution, they create a third wave, which is registered as a 1. He has tested the theory in practice and is working to produce some thousand chemical versions of logic gates to create a chemical pocket calculator.[citation needed] One of the problems with the present version of this technology is the speed of the waves, which spread at a rate of only a few millimetres per minute. According to Adamatzky, this problem can be mitigated by placing the gates very close to each other, so that signals are transferred quickly. Another possibility would be new chemical reactions in which waves propagate much faster.
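The cellular-automaton view of an excitable medium can be made concrete with a minimal toy model (a hypothetical sketch, not Adamatzky's actual construction): a one-dimensional Greenberg–Hastings automaton in which cells are resting (0), excited (1) or refractory (2). Excitation spreads like a chemical wave, and two counter-propagating waves annihilate where they meet, which is the kind of interaction used to build collision-based gates.

  # Toy 1-D excitable-medium cellular automaton (Greenberg-Hastings style).
  # States: 0 = resting, 1 = excited, 2 = refractory.
  def step(cells):
      nxt = []
      for i, s in enumerate(cells):
          if s == 1:                     # excited cells become refractory
              nxt.append(2)
          elif s == 2:                   # refractory cells recover
              nxt.append(0)
          else:                          # resting cells fire if a neighbour is excited
              left = cells[i - 1] if i > 0 else 0
              right = cells[i + 1] if i < len(cells) - 1 else 0
              nxt.append(1 if 1 in (left, right) else 0)
      return nxt

  # Two waves started at opposite ends propagate inwards and annihilate on collision.
  cells = [0] * 21
  cells[0] = cells[-1] = 1
  for t in range(12):
      print("".join(".*#"[s] for s in cells))   # . resting, * excited, # refractory
      cells = step(cells)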
In 2014, a chemical computing system was developed by an international team headed by the Swiss Federal Laboratories for Materials Science and Technology (Empa). The chemical computer used surface-tension effects derived from the Marangoni effect in an acidic gel to find the most efficient route between points A and B, outpacing a conventional satellite navigation system attempting to calculate the same route.[6][7]
In 2015, Stanford University graduate students created a computer using magnetic fields and water droplets infused with magnetic nanoparticles, illustrating some of the basic principles behind a chemical computer.[8][9]
In 2015, University of Washington students created a programming language for chemical reactions (originally developed for DNA analysis).[10][11]
Potential
Integer factorization, which underpins the security of public-key cryptographic systems, is believed to be computationally infeasible on an ordinary computer for large integers that are the product of few prime numbers (e.g., products of two 300-digit primes).[15] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial-time (in the number of digits of the integer) algorithm for solving the problem. Most of the popular public-key ciphers are based on the difficulty of factoring integers or on the discrete logarithm problem, both of which can be solved by Shor's algorithm; in particular, the RSA, Diffie-Hellman, and elliptic curve Diffie-Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
However, other cryptographic algorithms do not appear to be broken by those algorithms.[16][17] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem, which is based on a problem in coding theory.[16][18] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial-time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.[19] It has been proven that applying Grover's algorithm to break a symmetric (secret-key) algorithm by brute force requires time equal to roughly $2^{n/2}$ invocations of the underlying cryptographic algorithm, compared with roughly $2^n$ in the classical case,[20] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public-key cryptography.
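The halving of effective key length can be made explicit as a short worked relation (a sketch, assuming exhaustive key search is the only available attack):

$T_{\text{classical}} \approx 2^{n}, \qquad T_{\text{Grover}} \approx \sqrt{2^{n}} = 2^{n/2},$

so for AES-256 ($n = 256$) a Grover-based key search costs on the order of $2^{128}$ invocations of the cipher, the same order as a classical brute-force attack on a 128-bit key.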
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[21] including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely.[22] For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.
Consider a problem that has these four properties:
  1. The only way to solve it is to guess answers repeatedly and check them,
  2. The number of possible answers to check is the same as the number of inputs,
  3. Every possible answer takes the same amount of time to check, and
  4. There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).
For problems with all four properties, the time for a quantum computer to solve the problem will be proportional to the square root of the number of inputs. This quadratic speed-up, provided by Grover's algorithm, can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.[23]
Grover's algorithm can also be used to obtain a quadratic speed-up over a brute-force search for a class of problems known as NP-complete.
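A minimal statevector sketch of Grover's search (illustration only, simulated with NumPy rather than run on any quantum hardware) shows the quadratic behaviour: for $N = 2^n$ candidates, roughly $\frac{\pi}{4}\sqrt{N}$ iterations of "oracle plus diffusion" drive almost all of the amplitude onto the marked item.

  # Classical simulation of Grover's search on n qubits (illustration only).
  import math
  import numpy as np

  n = 4                      # number of qubits
  N = 2 ** n                 # size of the search space
  marked = 11                # index of the item the oracle recognises (arbitrary choice)

  state = np.full(N, 1 / math.sqrt(N))            # uniform superposition
  iterations = int(round(math.pi / 4 * math.sqrt(N)))

  for _ in range(iterations):
      state[marked] *= -1                          # oracle: flip the sign of the marked amplitude
      state = 2 * state.mean() - state             # diffusion: inversion about the mean

  probabilities = state ** 2
  print(iterations, "iterations; P(marked) =", round(probabilities[marked], 3))  # ~0.96 for n = 4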
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[24] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[25]
Quantum supremacy
Main article: Quantum supremacy
John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field.[26] Google has announced that it expects to achieve quantum supremacy by the end of 2017, and IBM says that the best classical computers will be beaten on some task within about five years.[27] Quantum supremacy has not been achieved yet, and skeptics like Gil Kalai doubt that it will ever be.[28][29] Bill Unruh doubted the practicality of quantum computers in a paper published in 1994.[30] Paul Davies has pointed out that a 400-qubit computer would come into conflict with the cosmological information bound implied by the holographic principle.[31] Others, such as Roger Schlafly, have pointed out that the claimed theoretical benefits of quantum computing go beyond the proven theory of quantum mechanics and imply non-standard interpretations, such as multiple worlds and negative probabilities. Schlafly maintains that the Born rule is just "metaphysical fluff", that quantum mechanics does not rely on probability any more than other branches of science but simply calculates the expected values of observables, and that arguments about Turing complexity cannot be run backwards.[32][33][34] Those who prefer Bayesian interpretations of quantum mechanics have questioned the physical nature of the mathematical abstractions employed.[35]
Obstacles
There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:[36]
  • scalable physically to increase the number of qubits;
  • qubits that can be initialized to arbitrary values;
  • quantum gates that are faster than the decoherence time;
  • universal gate set;
  • qubits that can be read easily.
Quantum decoherence
Main article: Quantum decoherence
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates themselves, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time $T_2$ (in NMR and MRI technology also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[14] Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[37]
As a result, time-consuming tasks may render some quantum algorithms inoperable, since maintaining the state of the qubits for a long enough duration will eventually corrupt the superpositions.[38]
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often-cited figure for the required error rate in each gate is $10^{-4}$. This implies that each gate must be able to perform its task in one 10,000th of the coherence time of the system.
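Read as a back-of-the-envelope relation (a sketch under the error-rate figure quoted above, with illustrative numbers), the requirement is

$\epsilon_{\text{gate}} \sim \frac{t_{\text{gate}}}{T_{\text{coherence}}} \lesssim 10^{-4} \quad\Longrightarrow\quad t_{\text{gate}} \lesssim 10^{-4}\,T_{\text{coherence}},$

so, for example, a qubit with a coherence time of $100~\mu\text{s}$ would need gates that complete in roughly $10~\text{ns}$ or less.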
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between $L$ and $L^2$, where $L$ is the number of qubits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of $L$. For a 1000-bit number, this implies a need for about $10^4$ bits without error correction.[39] With error correction, the figure would rise to about $10^7$ bits. Computation time is about $L^2$, or about $10^7$ steps, and at 1 MHz, about 10 seconds.
A very different approach to the stability–decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates.[40][41]
Developments
There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
  • the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates);
  • the one-way quantum computer (computation decomposed into a sequence of one-qubit measurements applied to a highly entangled initial state, or cluster state);
  • the adiabatic quantum computer, based on quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state contains the solution);
  • the topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice).
The quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the other with no more than polynomial overhead.
For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits; examples include superconducting circuits, trapped ions, photonic systems, quantum dots, and nuclear spins addressed by NMR.
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. There is also a vast amount of flexibility.
Operation
Unsolved problem in physics:
Is a universal quantum computer sufficient to efficiently simulate an arbitrary physical system?
While a classical 3-bit state and a quantum 3-qubit state are each eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, $|000\rangle$, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, the allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares adds up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, one measures the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. This destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer's results, the probability of getting the correct answer can be increased. In contrast, counterfactual quantum computation allows the correct answer to be inferred when the quantum computer is not actually running in a technical sense, though earlier initialization and frequent measurements are part of the counterfactual computation protocol.
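The contrast between the two kinds of evolution can be sketched numerically (illustration only; the matrices chosen here are hypothetical examples, not taken from any particular device): a stochastic matrix preserves the L1 norm of a probability vector, a unitary preserves the L2 norm of an amplitude vector, and measurement turns amplitudes into probabilities by squaring their magnitudes.

  # Illustration: stochastic vs. unitary evolution on a 3-bit / 3-qubit register.
  import numpy as np

  # Classical randomized computation: a stochastic matrix acting on a
  # probability vector (each column sums to 1, so the L1 norm is preserved).
  p = np.zeros(8); p[0] = 1.0                       # start in state 000
  flip_bit0 = np.kron(np.eye(4), [[0.5, 0.5],       # hypothetical noisy bit-flip on the last bit
                                  [0.5, 0.5]])
  p = flip_bit0 @ p
  print(p.sum())                                     # 1.0   (L1 norm preserved)

  # Quantum computation: a unitary matrix acting on an amplitude vector
  # (the L2 norm is preserved).
  psi = np.zeros(8, dtype=complex); psi[0] = 1.0     # start in |000>
  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate on the last qubit
  U = np.kron(np.eye(4), H)
  psi = U @ psi
  print(np.linalg.norm(psi))                         # ~1.0  (L2 norm preserved)

  # Measurement: sample a bit string with probability |amplitude|^2.
  probs = np.abs(psi) ** 2
  outcome = np.random.choice(8, p=probs)
  print(format(int(outcome), "03b"))                 # 000 or 001, each with probability 1/2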

Basis
A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states;[13]:13–16 a pair of qubits can be in any quantum superposition of 4 states,[13]:16 and three qubits in any superposition of 8 states. In general, a quantum computer with $n$ qubits can be in an arbitrary superposition of up to $2^n$ different states simultaneously[13]:17 (this compares to a normal computer that can only be in one of these $2^n$ states at any one time). A quantum computer operates on its qubits using quantum gates and measurement (which also alters the observed state). An algorithm is composed of a fixed sequence of quantum logic gates, and a problem is encoded by setting the initial values of the qubits, similar to how a classical computer works. The calculation usually ends with a measurement, collapsing the system of qubits into one of the $2^n$ pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most $n$ classical bits of information (or, if the algorithm did not end with a measurement, the result is an unobserved quantum state). Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability.[14] Note that the term non-deterministic computing must not be used in that case to mean probabilistic (computing), because the term non-deterministic has a different meaning in computer science.
An example of an implementation of qubits of a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written $|{\downarrow}\rangle$ and $|{\uparrow}\rangle$, or $|0\rangle$ and $|1\rangle$). This is true because any such system can be mapped onto an effective spin-1/2 system.
Principles of operation
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, representing the state of an n-qubit system on a classical computer requires the storage of $2^n$ complex coefficients, while to characterize the state of a classical n-bit system it is sufficient to provide the values of the n bits, that is, only n numbers. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before the measurement. It is generally incorrect to think of a system of qubits as being in one particular state before the measurement, since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
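To get a feel for the $2^n$ scaling, here is a rough estimate (a sketch assuming 16 bytes per complex amplitude) of the classical memory needed to store a full n-qubit state vector:

  # Rough memory needed to store a full n-qubit state vector classically,
  # assuming one 128-bit (16-byte) complex number per amplitude.
  for n in (10, 20, 30, 40, 50):
      amplitudes = 2 ** n
      gib = amplitudes * 16 / 2 ** 30
      print(f"{n:2d} qubits: {amplitudes:>16d} amplitudes, ~{gib:,.3f} GiB")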
Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).[15]
To better understand this point, consider a classical computer that operates on a three-bit register. If the exact state of the register at a given time is not known, it can be described as a probability distribution over the $2^3 = 8$ different three-bit strings 000, 001, 010, 011, 100, 101, 110, and 111. If there is no uncertainty over its state, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector $(a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7)$ (or, equivalently, a one-dimensional array in which each entry holds the amplitude for the corresponding bit string of qubits). Here, however, the coefficients $a_k$ are complex numbers, and it is the sum of the squares of the coefficients' absolute values, $\sum_i |a_i|^2$, that must equal 1. For each $k$, the absolute value squared $\left|a_k\right|^2$ gives the probability of the system being found, after a measurement, in the $k$-th state. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.[16]
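A minimal numerical sketch (illustration only, with an arbitrarily chosen gate) of why the phase matters: two single-qubit states with identical measurement probabilities but different relative phases produce different statistics once a further unitary is applied, whereas classical probability vectors have no counterpart of the phase.

  # Illustration: relative phase changes the outcome of later operations.
  import numpy as np

  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # Hadamard gate (example unitary)

  plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)    # |+> = (|0> + |1>)/sqrt(2)
  minus = np.array([1, -1], dtype=complex) / np.sqrt(2)    # |-> = (|0> - |1>)/sqrt(2)

  # Measured directly, both states give 0 or 1 with probability 1/2 each.
  print(np.abs(plus) ** 2, np.abs(minus) ** 2)             # [0.5 0.5] [0.5 0.5]

  # After applying H, the phases interfere: |+> -> |0>, |-> -> |1>.
  print(np.abs(H @ plus) ** 2)                             # ~[1. 0.]
  print(np.abs(H @ minus) ** 2)                            # ~[0. 1.]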
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 is $|a_0|^2$, the probability of measuring 001 is $|a_1|^2$, etc.). Thus, measuring a quantum state described by complex coefficients $(a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7)$ gives the classical probability distribution $(|a_0|^2, |a_1|^2, |a_2|^2, |a_3|^2, |a_4|^2, |a_5|^2, |a_6|^2, |a_7|^2)$, and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
An eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, …, 111) is known as the computational basis. Other possible bases are unit-length, orthogonal vectors, such as the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state $(a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7)$ in the computational basis can be written as
$a_0\,|000\rangle + a_1\,|001\rangle + a_2\,|010\rangle + a_3\,|011\rangle + a_4\,|100\rangle + a_5\,|101\rangle + a_6\,|110\rangle + a_7\,|111\rangle,$
where, e.g., $|010\rangle = \left(0,0,1,0,0,0,0,0\right)$.
The computational basis for a single qubit (two dimensions) is $|0\rangle = \left(1,0\right)$ and $|1\rangle = \left(0,1\right)$.
Using the eigenvectors of the Pauli-x operator, a single qubit is $|+\rangle = \tfrac{1}{\sqrt{2}}\left(1,1\right)$ and $|-\rangle = \tfrac{1}{\sqrt{2}}\left(1,-1\right)$.
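Written out in ket notation, the relation between the two bases is the standard identity

$|+\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), \qquad |-\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right),$

and conversely $|0\rangle = \tfrac{1}{\sqrt{2}}\left(|+\rangle + |-\rangle\right)$ and $|1\rangle = \tfrac{1}{\sqrt{2}}\left(|+\rangle - |-\rangle\right)$.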