UNIT - 4 (QC)
In-Sequence Processing
This grouping represents applications that demonstrate a need for low-latency
communication and control decisions for successful execution of a
quantum program. The selection includes applications of quantum computing
that require individual outcomes calculated by the QPU to be transmitted and
processed during the remaining run time of the quantum computation. Two
prominent examples are the operation of quantum error correction (QEC) and,
more generally, the conditional preparation of quantum states based on
intermediate measurements. For some existing applications of QEC, it is
necessary to process intermediate information, i.e., syndrome measurements,
to identify the error syndrome that arises during an encoded computation. The
resulting syndrome analysis then leads to new control signals that modify, i.e.,
correct, the operations applied to the encoded state during the remaining
computation.
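As a rough illustration of such in-sequence feedback, the following minimal Qiskit sketch conditions a gate on an intermediate measurement; the c_if construct here merely stands in for the low-latency classical control described above and is not itself a QEC procedure.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2, "q")
c = ClassicalRegister(1, "c")
qc = QuantumCircuit(q, c)

qc.h(q[0])               # prepare a superposition
qc.measure(q[0], c[0])   # intermediate measurement during the program
qc.x(q[1]).c_if(c, 1)    # classically conditioned correction on qubit 1
```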
Single-Circuit Applications
This grouping represents the many current use-cases for quantum computing
that are based on single-circuit evaluations. This group includes domain
applications, e.g., analog quantum simulations, variational methods for
chemistry, materials science, and high-energy physics simulations, optimization
and combinatorial problems, as well as many others. The notable feature of
these single-circuit applications is that the underlying quantum program is static
during execution, does not change its state based on intermediate
measurement outcomes, and completes circuit execution before returning
results. In general, such programs may be executed repeatedly to generate a
distribution of measurements and a resulting statistical characterization, but
such choices do not change the requirements placed on the control of the QPU.
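As a rough sketch of this repeated-sampling pattern (assuming a pre-1.0 Qiskit installation where the Aer provider and execute are available), a fixed circuit is simply run for many shots and the resulting measurement distribution is collected:

```python
from qiskit import QuantumCircuit, Aer, execute

# A static single-circuit program: nothing in it depends on intermediate outcomes.
qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

# Repeated execution only changes the number of shots, not the program itself.
backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=4096).result().get_counts()
print(counts)   # e.g. roughly {'0': 2048, '1': 2048}
```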
Several hardware (HW) devices control the QPU environment, which has a direct
effect on qubit properties and thus the quality of execution of instructions.
The resulting latency of the server, which includes queuing, context switching
between job executions, and the necessary circuit compilation process, may be
on the order of seconds and sometimes longer depending on overall demand.
However, this latency does not influence the quantum computation itself.
Instead, the requirement for the server to transmit and receive several kilobytes
of information over the course of several seconds is easily met by existing
Ethernet technologies.
Ensemble-Circuit Applications
The final grouping of applications that we consider represents programs that
require averaging of results over an ensemble of different circuit instances.
Unlike single-circuit applications, this grouping represents those methods that
require results collected from several different circuits in order to perform a
joint evaluation. These include methods for characterization, verification,
validation, and debugging of QPUs, which may be used to certify computational
performance in the process of meeting a known standard. For example, several
recent experimental demonstrations of quantum computational advantage
have relied on ensemble circuit evaluations for comparison with results from
conventional HPC simulations of those same circuits.
Ensemble-circuit applications lend themselves to highly parallel execution as the
circuit instances can be issued independently. However, aggregation of the
results from all circuit instances is necessary for postprocessing to complete the
calculation, e.g., as in benchmark or characterization results. Managing these
parallel tasks with existing multithreading and interprocess methods offers a
convenient means of implementing traditional fork-join programming models.
These parallelized approaches offer opportunities to manage the near-
concurrent, high-latency sampling of many different circuits.
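A minimal fork-join sketch of this pattern, using Python's concurrent.futures with a local Qiskit Aer simulator standing in for the QPU (pre-1.0 Qiskit API assumed; the circuit instances below are arbitrary placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
from qiskit import QuantumCircuit, Aer, execute

backend = Aer.get_backend("qasm_simulator")

def run_instance(circuit):
    # "fork": each circuit instance is sampled as an independent task
    return execute(circuit, backend, shots=1024).result().get_counts()

# placeholder ensemble of distinct circuit instances
ensemble = []
for width in (1, 2, 3):
    qc = QuantumCircuit(width, width)
    qc.h(range(width))
    qc.measure(range(width), range(width))
    ensemble.append(qc)

with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_instance, ensemble))   # "join": aggregate all results

print(results)   # postprocessing (e.g., benchmarking) would operate on this aggregate
```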
The data requirements for ensemble-circuit applications may range from a linear
scaling of the requirements for single-circuit applications to more complex
scaling based on result-driven feedback. An example of the latter is reflected in
iterative characterization methods, such as adaptive process tomography, or
hierarchical testing methods for complex quantum programs. In addition,
postprocessing of these resulting datasets may require extensive computational
resources.
Macroarchitecture Design
We now examine the possible macroarchitectures integrating QCs with
conventional computing systems and, especially, HPC systems. In Figure 2(a), we
present the current example of hybrid computing in which a remotely accessible
QPU is dedicated to executing standalone quantum programs. The local machine
system coordinates interactions with the QPU through a public network but
details of the conventional computing system are indistinct for the purposes of
these interactions. This macroarchitecture is representative of current
interactions with commercial QPUs, in which the connection is mediated by
wide-area networking such as the Internet. This type of integration is due to the
ease with which the devices are physically connected as well as the simplicity
with which programming and execution are implemented. While this
macroarchitecture can meet the high latency requirements of single-circuit and
ensemble-circuit applications, it may not be optimal due to limitations on the
number of processes that can be executed. In addition, Figure 2(a) offers poor
support for in-sequence applications due to the high-latency connection.
A leading example of a more performant architecture is shown in Figure 2(b),
where a network of conventional computing nodes individually interface with a
QPU. This design permits coordination of processing and data across low-latency
connections. This architecture also supports the paradigm of parallel processing
in which the decomposition of both single-circuit and ensemble-circuit
workloads into concurrent tasks may accelerate time to solution. In addition,
each task may be allocated to a resource that supports acceleration based on
quantum computing capabilities.
The coordination of these multiple QPU resources is orchestrated through the
communication of instructions and data across the system. Message passing
protocols, such as message passing interface (MPI) and parallel virtual machine
(PVM), have been used previously to program distributed nodes and similar
approaches seem highly feasible for these quantum-accelerated variants.
However, defining interfaces for the protocols including data types and
operations will be important for this approach. Anticipating future
heterogeneous HPC nodes, these interfaces should not be developed in isolation
but must accommodate a diversity of computational accelerators, including GPUs,
FPGAs, tensor processing units, and neuromorphic processors [1].
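A sketch of how such message passing might look with mpi4py, scattering circuit parameters to nodes and gathering results for joint postprocessing; run_on_local_qpu is a hypothetical stand-in for whatever routine dispatches work to each node's QPU or simulator.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def run_on_local_qpu(theta):
    # hypothetical stub: dispatch a parameterized circuit to this node's QPU or simulator
    return {"param": theta, "counts": {"0": 512, "1": 512}}   # placeholder result

# root decomposes the workload into one parameter per node
params = [0.1 * i for i in range(size)] if rank == 0 else None
my_param = comm.scatter(params, root=0)

local_result = run_on_local_qpu(my_param)

# gather all partial results back to the root for joint postprocessing
results = comm.gather(local_result, root=0)
if rank == 0:
    print(results)
```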
We mention a third macroarchitecture for hybrid computing that extends the
distributed design toward a distributed quantum computing system. As shown
in Figure 2(c), this approach uses a quantum interconnect, a significant advance
in the computational capability of these distributed QPUs, as entangling
operations become possible between qubits hosted on separate nodes.
Microarchitecture Design
The node design presented above must account for the conventional CPU and
memory systems (for data processing and resource management) as well as the
real-time control electronics (typically state-of-the-art FPGA circuits) and digital-
to-analog/analog-to-digital converters that comprise the QPU. As shown in
Figure 3, the microarchitecture of this hardware stack may be viewed as an
encapsulated execution unit. The role of this subsystem is to implement
QPU instructions and to facilitate in-band signaling. Outputs from the execution
unit to the quantum register are control channels for modulating independent
carrier signals of each register element via analog upsampling/downsampling
stages (mixers or modulators). The carrier frequencies depend on the specific
quantum computing platform used and range from microwave (few gigahertz)
for superconducting qubits to optical frequencies (hundreds of terahertz) for
trapped-ion qubits. Thus, the execution unit translates digital instructions into
analog pulses and this translation may be tuned with respect to the instruction
set architecture as well as the physical condition of the quantum register.
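As an illustration of this digital-to-analog translation, Qiskit's pulse module lets one describe such a carrier-modulating control pulse; the duration, amplitude, and sigma values below are purely illustrative placeholders.

```python
from qiskit import pulse
from qiskit.pulse.library import Gaussian

# A single Gaussian drive pulse on the control channel of qubit 0.
with pulse.build(name="drive_q0") as sched:
    pulse.play(Gaussian(duration=160, amp=0.1, sigma=40), pulse.DriveChannel(0))

print(sched)
```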
For the macroarchitectures in Figure 2, quantum execution units are internal to
the QPU [7]. Presently, FPGAs have proven to be highly effective implementations
of these execution units.
Programming Models
The software stack for enabling QC integration with HPC will require extensibility
and modularity [10]. Existing variability in quantum technologies (superconducting,
trapped-ion, etc.) from competing hardware vendors (Google, IBM, Rigetti,
Honeywell, etc.) requires customization at all levels of abstraction. Here we
distill those demands into a number of service interfaces for programming,
compilation, execution, and control of QPUs. Service interfaces provide a means
for deploying complex software systems that grow organically as new hardware,
programming, control, and compilation techniques are developed by the
community. Ultimately, we advocate for a hardware-agnostic approach that
decomposes software into extensible interfaces for host-to-control integration,
compiler, language, and algorithmic libraries [11], as illustrated in Figure 4.
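As a purely hypothetical sketch of such a hardware-agnostic service interface (the class and method names below are illustrative, not an existing library API), the decomposition might look like:

```python
from abc import ABC, abstractmethod

class QPUService(ABC):
    """Hypothetical service interface hiding vendor-specific details."""

    @abstractmethod
    def compile(self, program: str) -> bytes:
        """Lower a hardware-agnostic program to native control instructions."""

    @abstractmethod
    def execute(self, binary: bytes, shots: int) -> dict:
        """Run the compiled program and return measurement counts."""

class LocalSimulatorService(QPUService):
    def compile(self, program: str) -> bytes:
        return program.encode()      # trivial pass-through "compilation"

    def execute(self, binary: bytes, shots: int) -> dict:
        return {"0": shots}          # placeholder result for illustration only
```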
With open source systems, users can ensure they are choosing equipment that
stays at a price point they are comfortable with, while also remaining on the
cutting edge of new
technologies. This is necessary as users grow their businesses and in turn need
additional physical equipment. An open source access control system can also
easily integrate with video management systems (VMS), human resources and
HVAC to provide a full operational view of a facility.
With proprietary systems, users are instead forced to make up-front
commitments that may not allow for variance as new technologies are
developed. Open systems ensure users are at the forefront of innovation, on
whatever scale works best for them, while also providing the freedom of
choice to make security decisions based on functionality and security
preferences, rather than manufacturer agreements.
Open source systems are as secure as the people operating them. Work with
an integrator to customise a solution based on your security needs, and
consider combining an open source system with the cloud for a high level of
system oversight, and increased scalability potential.
What about credentials?
The definition of a credential as we know it is changing, and as such, the way
we go about credentialing is shifting from traditional cards to smarter options.
Credentials today are created around the identity of a user, rather than a
simple fob. There are a few credential options to keep in mind to ensure both
ongoing security, and the potential to scale up or down to adjust to an evolving
business trajectory.
Proximity cards are convenient, but are no longer the most secure option. It is
a known risk that unauthorised individuals can gain access to an area by
following behind someone with a working credential. Because there is no true
identity attached to this type of credential, there is no way to know that the
correct person is actually gaining access. End users are recognising this, and
are making the move to more secure options.
Mobile credentials are gaining traction because, in theory, users can use their
own devices. This eliminates security issues with previous access control
credentials, such as card duplication or sharing. Because mobile devices are
already so ingrained in day-to-day life, users are more likely to remember and
keep track of them than a key fob, which is easy to forget when running late or
distracted.
In addition, mobile credentials reduce costs because companies do not have
to spend money on plastic cards or fobs, as most users have their own mobile
devices.
The higher the level of noise, the shorter the algorithm that can be run before it suffers an error and
outputs an incorrect or even useless result. Right now, instead of the trillions of
operations that might be needed to run a full-fledged quantum algorithm, we
can typically only perform dozens before noise causes a fatal error.
Companies building quantum computers like IBM and Google have highlighted
that their roadmaps include the use of “Quantum Error Correction” as they scale
to machines with 1000 or more qubits.
Depending on the nature of the hardware and the type of algorithm you choose
to run, the number of physical qubits needed to support a single logical qubit
varies, but current estimates put it at about 1,000 to one. That's huge: an
algorithm on just 100 logical qubits would then need on the order of 100,000
physical qubits. Today's machines are nowhere near capable of getting benefits
from this kind of Quantum Error Correction.
QEC has seen many partial demonstrations in laboratories around the world -
first steps making clear it’s a viable approach. But in general the enormous
resource overhead leads to things getting worse when we try to implement QEC.
Right now there is a global research effort underway trying to cross the “break
even” point where it’s actually advantageous to use QEC relative to the many
resources it consumes.
These error-suppression techniques, known as quantum firmware, are easy to
implement and the benefits can be huge - our own experiments have demonstrated
more than 10X improvements in cloud quantum computers!
In the context of QEC, quantum firmware actually reduces the number of qubits
required to perform error correction. Exactly how is a complex story, but in
short, quantum firmware reduces the likelihood of error during each operation
on a quantum computer. Better yet, quantum firmware easily eliminates certain
kinds of errors that are really difficult for QEC, and actually transforms the
characteristics of the remaining errors to make them more compatible with QEC.
Win-win!
Looking to the future we see that a holistic approach to dealing with noise and
errors in quantum computers is essential. Quantum Error Correction is a core
part of the story, and combined with performance-boosting quantum firmware
we see a clear pathway to the future of large-scale quantum computers.
FAULT TOLERANCE
Fault Tolerance simply means a system’s ability to continue operating
uninterrupted despite the failure of one or more of its components. This is true
whether it is a computer system, a cloud cluster, a network, or something else.
In other words, fault tolerance refers to how an operating system (OS) responds
to and allows for software or hardware malfunctions and failures.
An OS’s ability to recover and tolerate faults without failing can be handled by
hardware, software, or a combined solution leveraging load balancers (see more
below). Some computer systems use multiple duplicate fault tolerant systems
to handle faults gracefully. This is called a fault tolerant network.
FAQs
The goal of fault tolerant computer systems is to ensure business continuity and
high availability by preventing disruptions arising from a single point of failure.
Fault tolerance solutions therefore tend to focus most on mission-critical
applications or systems.
At the lowest level, the ability to respond to a power failure, for example.
A step up: during a system failure, the ability to use a backup system
immediately.
Enhanced fault tolerance: a disk fails, and mirrored disks take over for it
immediately. This provides functionality despite partial system failure, or
graceful degradation, rather than an immediate breakdown and loss of
function.
Although both high availability and fault tolerance reference a system’s total
uptime and functionality over time, there are important differences and both
strategies are often necessary. For example, a totally mirrored system is fault-
tolerant; if one mirror fails, the other kicks in and the system keeps working with
no downtime at all. However, that’s an expensive and sometimes unwieldy
solution.
On the other hand, a highly available system such as one served by a load
balancer allows minimal downtime and related interruption in service without
total redundancy when a failure occurs. A system with some critical parts
mirrored and other, smaller components duplicated has a hybrid strategy.
Cost. Fault tolerant strategies can be expensive, because they demand the
continuous maintenance and operation of redundant components. High
availability is usually part of a larger system, one of the benefits of a load
balancing solution, for example.
Certain systems may require a fault-tolerant design, which is why fault tolerance
is important as a basic matter. On the other hand, high availability is enough for
others. The right business continuity strategy may include both fault tolerance
and high availability, intended to maintain critical functions throughout both
minor failures and major disasters.
Depending on the fault tolerance issues that your organization copes with, there
may be different fault tolerance requirements for your system. That is because
fault-tolerant software and fault-tolerant hardware solutions both offer very
high levels of availability, but in different ways.
There is more than one way to create a fault-tolerant server platform and thus
prevent data loss and eliminate unplanned downtime. Fault tolerance in
computer architecture simply reflects the decisions administrators and
engineers make to ensure a system persists even after a failure. This is why there
are various types of fault tolerance tools to consider.
Fault-tolerant computing also deals with outages and disasters. For this reason,
a fault tolerance strategy may include some uninterruptible power supply (UPS)
such as a generator—some way to run independently from the grid should it fail.
Byzantine fault tolerance (BFT) is another issue for modern fault tolerant
architecture. BFT systems are important to the aviation, blockchain, nuclear
power, and space industries because these systems prevent downtime even if
certain nodes in a system fail or are driven by malicious actors.
Fault tolerant design prevents security breaches by keeping your systems online
and by ensuring they are well-designed. A naively-designed system can be taken
offline easily by an attack, causing your organization to lose data, business, and
trust. Each firewall, for example, that is not fault tolerant is a security risk for
your site and organization.
To be called a fault tolerant data center, a facility must avoid any single point of
failure. Therefore, it should have two parallel systems for power and cooling.
However, total duplication is costly, gains are not always worth that cost, and
infrastructure is not the only answer. Therefore, many data centers practice
fault avoidance strategies as a mid-level measure.
Load balancing and failover solutions can work together in the application
delivery context. These strategies provide quicker recovery from disasters
through redundancy, ensuring availability, which is why load balancing is part of
many fault tolerant systems.
QUANTUM CRYPTOGRAPHY
Quantum cryptography is a method of encryption that uses the naturally
occurring properties of quantum mechanics to secure and transmit data in a way
that any attempt at interception disturbs the data and can be detected.
Cryptography is the process of encrypting and protecting data so that only the
person who has the right secret key can decrypt it. Quantum cryptography is
different from traditional cryptographic systems in that it relies on physics,
rather than mathematics, as the key aspect of its security model.
Photons are used for quantum cryptography because they offer all the qualities
needed: their behavior is well understood, and they are information
carriers in optical fiber cables. One of the best-known examples of quantum
cryptography currently is quantum key distribution (QKD), which provides a
secure method for key exchange.
The model assumes there are two people named Alice and Bob who wish to
exchange a message securely. Alice initiates the message by sending Bob a key.
The key is a stream of photons that travel in one direction. Each photon
represents a single bit of data -- either a 0 or 1. However, in addition to their
linear travel, these photons are oscillating, or vibrating, in a certain manner.
So, before Alice, the sender, initiates the message, the photons travel through a
polarizer. The polarizer is a filter that enables certain photons to pass through it
with the same vibrations and lets others pass through in a changed state of
vibration. The polarized states could be vertical (1 bit), horizontal (0 bit), 45
degrees right (1 bit) or 45 degrees left (0 bit). The transmission has one of two
polarizations representing a single bit, either 0 or 1, in either scheme she uses.
The photons now travel across optical fiber from the polarizer toward the
receiver, Bob. This process uses a beam splitter that reads the polarization of
each photon. When receiving the photon key, Bob does not know the correct
basis to measure each photon in, so he chooses one at random; afterward, Alice
and Bob compare which bases they used and discard the bits where they disagree.
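A toy simulation of the basis choice and sifting step just described (illustrative plain Python only; real QKD uses polarized photons, not classical random numbers):

```python
import random

n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # '+' rectilinear, 'x' diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# Bob's measurement: a matching basis reproduces Alice's bit, otherwise the result is random.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only the positions where Alice's and Bob's bases agreed.
key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(key)
```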
Alice and Bob would also know if Eve was eavesdropping on them: Eve observing
the flow of photons would disturb the photon states that Alice and Bob
expect to see.
This method of cryptography has yet to be fully developed; however, there have
been successful implementations of it:
In addition to QKD, some of the more notable protocols and quantum algorithms
used in quantum cryptography are the following:
This image shows the differences between classical cryptography and quantum
cryptography.
Despite quantum computing often being touted as the next big revolution in
computing there are many problems where a classical computer can actually
outperform a quantum computer. One good example is basic arithmetic. A
classical computer can do arithmetic on the order of nanoseconds because it
has dedicated logic in the form of ALUs. Quantum computers, however, have no
such dedicated hardware and are quite slow in comparison. Their power does
not lie in speed but in the fact that they can make use of quantum
parallelisation.
Quantum computers tend to be able to solve problems that have a very large
search space. For example, consider a database. Let's say you need to search the
database for an entry. The worst-case scenario for a classical computer is that it
will have to go through the entire database to find the entry you want, which
corresponds to a computational complexity of O(N). A quantum computer running
Grover's search algorithm, by contrast, can find the entry in roughly O(√N) queries.
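For concreteness, here is a minimal two-qubit Grover search (the N = 4 case, where a single iteration finds the marked entry |11> with certainty), sketched in Qiskit assuming the pre-1.0 Aer/execute API:

```python
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h([0, 1])               # uniform superposition over the four "database" entries
qc.cz(0, 1)                # oracle: phase-flip the marked entry |11>
qc.h([0, 1])               # diffusion operator (inversion about the mean)
qc.z([0, 1])
qc.cz(0, 1)
qc.h([0, 1])
qc.measure([0, 1], [0, 1])

counts = execute(qc, Aer.get_backend("qasm_simulator"), shots=1024).result().get_counts()
print(counts)              # essentially all shots return '11'
```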
Now that we have worked out that our application can benefit from quantum
computing, we need to find the right device. The field of quantum computing is
a burgeoning one, and many different quantum devices have been developed,
each with its own pros and cons. As such, when developing an application it is
important to know which device is best suited to the problem you wish to solve.
The most popular type of quantum device is the gate based quantum computer.
This is a quantum computer where operations are done on qubits using
quantum logic gates. These gates are analogous to the logic gates found in
classical computers, but they act on quantum states and, unlike most classical
gates, are reversible.
Gate based quantum computers are general purpose and as such can be used for
any problem that requires quantum parallelisation. For example factoring
numbers using Shor’s algorithm or efficient search using Grover’s search
algorithm. Machine learning can also be done using variational quantum
circuits.
Quantum Annealers
Another issue to look at when picking a quantum device is the issue relating to
qubit and gate errors. For example superconducting quantum devices are very
prone to noise which can lead to errors. These errors come in 3 types (bit flips,
phase flips, and readout errors). These can be corrected using quantum error
correction/mitigation methods; however, these may require additional qubits
and gates in the quantum circuits.
As such, it is best to pick a device based upon its qubit error rates. For
example, one device may have low-fidelity qubits that are prone to error, while
another may have very high-fidelity qubits and thus a lower probability of
errors occurring.
In Qiskit, qubit error rates for quantum devices can be found using the backend
monitor, as sketched below.
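A minimal sketch using the legacy IBMQ provider (pre-Qiskit-1.0 interface); the hub and backend names here are placeholders:

```python
from qiskit import IBMQ
from qiskit.tools.monitor import backend_monitor

IBMQ.load_account()
provider = IBMQ.get_provider(hub="ibm-q")
backend = provider.get_backend("ibmq_lima")   # placeholder device name
backend_monitor(backend)   # prints per-qubit T1/T2, readout error, and gate error rates
```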
After you have picked the right device to solve the problem you will have to
create a quantum algorithm. In gate based quantum computers this could be a
small circuit. For example if you wanted to create a 32 qubit random number
generator you would simply create a quantum circuit that initialises 32 qubits
into superposition using Hadamard gates. Then you would measure the qubits
and the results would be brought back into your application.
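A sketch of that 32-qubit random number generator in Qiskit (assuming the pre-1.0 Aer/execute API; Hadamard-only circuits are Clifford circuits, so the simulator can typically handle 32 qubits without a full statevector):

```python
from qiskit import QuantumCircuit, Aer, execute

n = 32
qc = QuantumCircuit(n, n)
qc.h(range(n))                    # put all 32 qubits into equal superposition
qc.measure(range(n), range(n))    # each measurement yields an independent random bit

backend = Aer.get_backend("qasm_simulator")
bitstring = next(iter(execute(qc, backend, shots=1).result().get_counts()))
print(int(bitstring, 2))          # the 32-bit random integer returned to the application
```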
Obviously depending on your problem your quantum circuit may be way more
complex. Your algorithm may even be hybrid and contain a classical part of the
algorithm and a quantum part consisting of a circuit. Some may be variational
quantum circuits, where the qubits are measured and the circuit parameters are
then updated by a classical routine each iteration, based upon the problem of
interest.
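A minimal sketch of the quantum half of such a variational loop in Qiskit (pre-1.0 API assumed); the parameter value and one-qubit circuit are placeholders, and a real application would wrap this in a classical optimizer:

```python
from qiskit import QuantumCircuit, Aer, execute
from qiskit.circuit import Parameter

theta = Parameter("theta")
ansatz = QuantumCircuit(1, 1)
ansatz.ry(theta, 0)               # parameterized rotation
ansatz.measure(0, 0)

backend = Aer.get_backend("qasm_simulator")

# One iteration of the loop: bind a trial value, run, and inspect the outcome statistics.
bound = ansatz.assign_parameters({theta: 0.5})
counts = execute(bound, backend, shots=1024).result().get_counts()
print(counts)   # a classical optimizer would use this to propose the next value of theta
```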
In gate-based quantum computing it is easy to check whether your algorithm
actually uses quantum effects. If your circuit uses Hadamard gates then it is using
superposition. If your circuit creates Bell states with Hadamard gates and
multi-qubit gates then it is using entanglement.
Circuit diagram of a Bell pair consisting of a Hadamard gate and a CNOT gate.
This is one of the simplest forms of entanglement.
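A sketch of that Bell-pair circuit in Qiskit:

```python
from qiskit import QuantumCircuit

# Hadamard then CNOT entangles qubits 0 and 1 into (|00> + |11>)/sqrt(2).
bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])
print(bell.draw())
```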
As such it is very important that you test your quantum algorithm against a
classical one. If your algorithm has lower performance then use the classical
algorithm instead. An application that has a speedup from QC is excellent but if
there is no speedup then it is simply a gimmick.
But let's take a step back. It's not about time or energy. It's about noise.
But before I even discuss that... do you know what a quantum computer really
is?
Why, it's an analog computer. You know, the kind of computer that existed
before, well, before computers. Including the classic of classics of analog
computers, the slide rule.
What makes quantum computers special, then? Well, it's the threshold
theorem. What the threshold theorem states is this: If you have a quantum
computer with "noisy" qubits, but the noise can be kept below a specific
threshold, then there exist error correction algorithms that allow your quantum
computer to emulate a "perfect" quantum computer. In short, scalable to an
arbitrary number of qubits.
But a minority, including the mathematician Gil Kalai, believe that far from enabling quantum
computing, the threshold theorem kills it. That is because it will never be
possible (on grounds of principle, not engineering limitations) to reduce noise
below the threshold.
I believe Kalai is right. Time will tell. For now, skeptics like Kalai (or me) are a
minority. But physics is not decided by majority rule but through experiment
and/or rigorous deduction.