
QUANTUM COMPUTING
UNIT - 4: PERFORMANCE, SECURITY AND SCALABILITY

PERFORMANCE
There is a wide range of promising application use cases for QC, and many of
these overlap with the existing uses of HPC. These uses include the
simulation of high-dimensional physical models, the design, verification, and
validation of complex logical systems, and inference, pattern matching, and
search over large datasets. Underlying these use cases are algorithms that take
advantage of the logic from both conventional computing and quantum
computing, and we defer those details to other reviews.6 Instead, we highlight
several issues that arise when considering the micro- and macro-architectures
of these hybrid computing models, including characteristics of the quantum
devices that support these calculations and how those devices may interface
with conventional HPC nodes.
We will coarsely classify applications of quantum computing in terms of the
expected clock-speed of the QPU, the acceptable communication and control
latency within the accelerated node, and the amount of data typically
transmitted. We will term these use cases as 1) in-sequence processing, 2)
single-circuit applications, and 3) ensemble circuit applications. A graphical
representation of this classification is shown in Figure 1, and we include a
detailed description below.


Figure 1. Applications of quantum computing categorized by typical execution times. Low-latency, in-sequence processing applications (blue) include quantum error correction and probabilistic state initialization methods that control decisions on nanosecond timescales. Single-circuit applications (green) only return results at the end of computation, on second timescales. Ensemble-circuit applications (orange), such as application benchmarking or verification and validation of quantum devices, may take hours or more of computational time.

In-Sequence Processing
This grouping represents applications that demonstrate a need for low-latency communication and control decisions for successful execution of a quantum program. The selection includes applications of quantum computing that require individual outcomes calculated by the QPU to be transmitted and processed during the remaining run time of the quantum computation. Two prominent examples are the operation of quantum error correction (QEC) and, more generally, the conditional preparation of quantum states based on intermediate measurements. For some existing applications of QEC, it is necessary to process intermediate information, i.e., syndrome measurements, to identify the errors that arise during an encoded computation. The resulting syndrome analysis then leads to new control signals that modify, i.e., "correct," the remaining quantum operations. In a similar way, the quantum register within the QPU may be reinitialized during run time as a result of a previous measurement outcome, e.g., for efficient quantum state preparation. Conditional state preparation processes issue new instructions and pulses to modify the execution sequence. A similar paradigm also applies to measurement-based quantum computing, which is of particular interest in blind quantum computing for future high-security applications.
For example, consider a hypothetical QPU with a few thousand encoded qubits that support error correction. Each encoded qubit may consist of 100 physical qubits, with 25 ancilla measurements characterizing the errors in a single encoded qubit. The control and preparation of these encoded qubits must occur faster than the typical decoherence timescale that characterizes the QPU technology. The target feedback time may be 10 ns for superconducting technologies and 10 μs for trapped-ion technologies. The QPU must then evaluate and transfer data at peak rates of roughly 10 Tbps and 10 Gbps, respectively, with latencies of a few nanoseconds or microseconds.
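
As a rough check on these figures, the sketch below reproduces the back-of-the-envelope estimate in Python; the qubit and ancilla counts are the illustrative values from this example, not parameters of any real device.

# Back-of-the-envelope syndrome data-rate estimate (illustrative numbers only).
encoded_qubits = 4000          # "a few thousand" encoded qubits
ancillas_per_encoded = 25      # syndrome measurements per encoded qubit
bits_per_cycle = encoded_qubits * ancillas_per_encoded  # one bit per ancilla readout

for tech, cycle_time_s in [("superconducting", 10e-9), ("trapped-ion", 10e-6)]:
    rate_bps = bits_per_cycle / cycle_time_s
    print(f"{tech}: ~{rate_bps / 1e12:.2f} Tbps of syndrome data")
# superconducting: ~10 Tbps; trapped-ion: ~0.01 Tbps (= 10 Gbps)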
These requirements may be met, in principle, using existing technologies. For
example, the communication buses that connect existing GPUs with each other
and with CPU main memory can support up to about 50 Gbps per lane. Similarly,
state-of-the-art Ethernet transport may meet the requirements cited for
trapped ions. While these examples may meet the data transfer and latency
requirements, existing technologies may not be directly suitable for quantum
computing environments. In particular, the QPUs may be installed in cryogenic environments with strict heat-load restrictions to prevent additional noise. Indeed, these examples suggest that the in-sequence processing requirements will only be realistic for an on-premises installation of a QC within an HPC node.

Single-Circuit Applications
This grouping represents the many current use cases for quantum computing that are based on single-circuit evaluations. This group includes domain applications, e.g., analog quantum simulations, variational methods for chemistry, materials science, and high-energy physics simulations, optimization and combinatorial problems, as well as many others. The notable feature of these single-circuit applications is that the underlying quantum program is static
during execution, does not change its state based on intermediate
measurement outcomes, and completes circuit execution before returning
results. In general, such programs may be executed repeatedly to generate a
distribution of measurements and a resulting statistical characterization, but
such choices do not change the requirements placed on the control of the QPU.
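
A minimal sketch of such a static single-circuit workload, assuming Qiskit with the Aer simulator installed; a production run would target a vendor backend instead of the local simulator.

# A fixed circuit sampled repeatedly to build a measurement distribution.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)        # prepare a Bell state; the program is static during execution
qc.measure_all()

counts = AerSimulator().run(qc, shots=2000).result().get_counts()
print(counts)      # e.g., roughly half '00' and half '11'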


In addition, many single-circuit applications may iteratively modify the circuit or its parameters. Key examples include variational and adaptive methods. Individual results depend only on a single execution instance, and independent execution of these multiple circuit instances is possible, i.e., an example of hybrid parallelism. Consequently, the single-circuit application use case is well aligned with higher latency communication between the host node managing task delegation and the QPU performing that task.
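
The iterative pattern is sketched below, assuming Qiskit's parameterized circuits and a local simulator; the cost function and update rule are placeholders rather than a real optimizer.

# Hypothetical variational loop: the host updates a parameter between executions.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit_aer import AerSimulator

theta = Parameter("theta")
template = QuantumCircuit(1)
template.ry(theta, 0)
template.measure_all()

sim = AerSimulator()
value = 0.1
for step in range(5):
    bound = template.assign_parameters({theta: value})
    counts = sim.run(bound, shots=1000).result().get_counts()
    cost = counts.get("1", 0) / 1000     # toy cost: probability of measuring |1>
    value -= 0.5 * cost                  # placeholder update rule
    print(step, round(cost, 3))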
As a specific example, current cloud-based models of quantum computing rely
on a host computer to distribute self-contained quantum programs to dedicated
but remote QC hardware. These programs are relatively small in size as
measured in bits and correspondingly generate a modest amount of information, which is due almost entirely to the limited size and coherence
lifetimes of existing QC hardware. Additional metadata may expand the data
collected. Execution of the distributed programs is often moderated by a server
task running on conventional computing hardware, in which a queue supports
multiprogram and multiuser operations. We expect these runtime models to
become more sophisticated through tighter integration of the conventional
control system with the QC hardware components.


Figure 2. Three macroarchitectures for integrating quantum computing with conventional computing. (a) A local machine remotely accesses a QPU through a public cloud network. (b) A network of quantum-accelerated nodes communicates through a common interconnect. (c) A network of quantum-accelerated nodes communicates through both conventional and quantum networks.

Figure 3. A component diagram representing the microarchitecture of an HPC-QC node with a common interconnect as depicted in Figure 2(b). The diagram shows the major components needed for the operation of a QPU within the HPC node infrastructure. Individual components are grouped into so-called out-of-band and in-band scopes and are placed on the left-hand and right-hand sides of the figure, respectively. The QPU, which contains the qubits and is capable of processing quantum information, is depicted in the lower part, whereas classical information processing components are shown in the upper part of the figure.


Several hardware (HW) devices control the QPU environment, which has a direct
effect on qubit properties and thus the quality of execution of instructions.
The resulting latency of the server, which includes queuing, context switching
between job executions, and the necessary circuit compilation process, may be
on the order of seconds and sometimes longer depending on overall demand.
However, this latency does not influence the quantum computation itself.
Instead, the requirement for the server to transmit and receive several kilobytes
of information over the course of several seconds is easily met by existing
Ethernet technologies.

Ensemble-Circuit Applications
The final grouping of applications that we consider represents programs that require averaging of results over an ensemble of different circuit instances.
Unlike single-circuit applications, this grouping represents those methods that
require results collected from several different circuits in order to perform a
joint evaluation. These include methods for characterization, verification,
validation, and debugging of QPUs, which may be used to certify computational
performance in the process of meeting a known standard. For example, several
recent experimental demonstrations of quantum computational advantage
have relied on ensemble circuit evaluations for comparison with results from
conventional HPC simulations of those same circuits.
Ensemble-circuit applications lend themselves to highly parallel execution as the
circuit instances can be issued independently. However, aggregation of the
results from all circuit instances is necessary for postprocessing to complete the
calculation, e.g., as in benchmark or characterization results. Managing these parallel tasks with existing multithreading and interprocess methods offers a convenient means of implementing traditional fork-join programming models.
These parallelized approaches offer opportunities to manage the near-
concurrent, high-latency sampling of many different circuits.
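
A minimal sketch of this fork-join pattern using Python's standard concurrent.futures module; run_circuit is a hypothetical stand-in for whatever submission call a given QPU or simulator exposes.

# Fork-join execution of an ensemble of circuit instances.
from concurrent.futures import ThreadPoolExecutor

def run_circuit(circuit_id):
    # Placeholder: submit one circuit instance and return its measurement counts.
    return {"circuit": circuit_id, "counts": {"00": 510, "11": 490}}

ensemble = list(range(32))                       # 32 independent circuit instances
with ThreadPoolExecutor(max_workers=8) as pool:  # fork: issue instances concurrently
    results = list(pool.map(run_circuit, ensemble))

# join: aggregate everything for post-processing, e.g., a benchmark score
total_shots = sum(sum(r["counts"].values()) for r in results)
print(f"aggregated {total_shots} shots from {len(results)} circuits")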
The data requirements for ensemble-circuit applications may range from a linear
scaling of the requirements for single-circuit applications to more complex
scaling based on result-driven feedback. An example of the latter is reflected in
iterative characterization methods, such as adaptive process tomography, or
hierarchical testing methods for complex quantum programs. In addition,
postprocessing of these resulting datasets may require extensive computational
resources.


Macroarchitecture Design
We now examine possible macroarchitectures for integrating QCs with conventional computing systems and, especially, HPC systems. In Figure 2(a), we present the current example of hybrid computing in which a remotely accessible QPU is dedicated to executing standalone quantum programs. The local machine coordinates interactions with the QPU through a public network, but details of the conventional computing system are indistinct for the purposes of these interactions. This macroarchitecture is representative of current interactions with commercial QPUs, in which the connection is mediated by wide-area networking such as the Internet. This type of integration is attractive due to the ease with which the devices are physically connected as well as the simplicity with which programming and execution are implemented. While this macroarchitecture can meet the high-latency requirements of single-circuit and ensemble-circuit applications, it may not be optimal due to limitations on the number of processes that can be executed. In addition, Figure 2(a) offers poor support for in-sequence applications due to the high-latency connection.
A leading example of a more performant architecture is shown in Figure 2(b),
where a network of conventional computing nodes individually interface with a
QPU. This design permits coordination of processing and data across low-latency
connections. This architecture also supports the paradigm of parallel processing
in which the decomposition of both single-circuit and ensemble-circuit
workloads into concurrent tasks may accelerate time to solution. In addition,
each task may be allocated to a resource that supports acceleration based on
quantum computing capabilities.
The coordination of these multiple QPU resources is orchestrated through the
communication of instructions and data across the system. Message passing
protocols, such as message passing interface (MPI) and parallel virtual machine
(PVM), have been used previously to program distributed nodes and similar
approaches seem highly feasible for these quantum-accelerated variants.
However, defining interfaces for these protocols, including data types and operations, will be important for this approach. Anticipating future heterogeneous HPC nodes, these interfaces cannot be developed in isolation but must accommodate a diversity of computational accelerators including GPUs, FPGAs, tensor processing units, and neuromorphic processors.1
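
As a sketch of how such message passing might distribute quantum work, the following assumes mpi4py and a hypothetical run_on_local_qpu helper on each quantum-accelerated node; the data types carried in the messages are exactly the open interface questions noted above.

# Hypothetical MPI-based distribution of circuit tasks to quantum-accelerated nodes.
# Launch with, e.g.: mpirun -n 4 python distribute_circuits.py
from mpi4py import MPI

def run_on_local_qpu(circuit_source):
    # Placeholder for the node-local QPU (or simulator) execution call.
    return {"circuit": circuit_source, "counts": {"0": 50, "1": 50}}

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

circuits = [f"circuit_{i}" for i in range(size)] if rank == 0 else None
task = comm.scatter(circuits, root=0)        # distribute one circuit per rank
local_result = run_on_local_qpu(task)        # each node drives its own QPU
results = comm.gather(local_result, root=0)  # collect results on the host node

if rank == 0:
    print(f"gathered {len(results)} results")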
We mention a third macroarchitecture for hybrid computing that extends the distributed design toward a distributed quantum computing system. As shown in Figure 2(c), this approach adds a quantum interconnect, a significant advance in the computational capability of these distributed QPUs, since entangling operations across computational workloads would become possible. This design opens up many new application possibilities by permitting entangling operations between nodes. However, such advances are expected to introduce additional requirements for synchronous quantum communication, and we defer that analysis to elsewhere.
Designing and forecasting the performance of these hybrid HPC systems will
require significant system-level modeling and simulation efforts. In the absence
of widely available production-ready QPUs, subsystem-level virtualization is
needed to simulate execution of quantum algorithms using classical processors
(embedding virtual QPUs into a classical HPC system by making some of the
nodes run QPU simulators). We anticipate upcoming exascale HPC platforms will
prove critical for simulating system models of hybrid HPC systems including the
conventional CPU as well as QPU operations. In addition, the flexibility provided
by hybrid virtualization will permit dynamic reconfiguration of the HPC platform
in which the granularity, composition, and connectivity of the classical and
quantum computing units can be fine tuned to a given quantum/classical
workload. Importantly, the ultimate realization of these systems is likely to
evolve in stages and the necessary specialized classical hardware components,
e.g., FPGA controllers, may be gradually integrated as the systems progress
toward full functionality.

Microarchitecture Design
The node design presented above must account for the conventional CPU and
memory systems (for data processing and resource management) as well as the
real-time control electronics (typically state-of-the-art FPGA circuits) and digital-
to-analog/analog-to-digital converters that comprise the QPU. As shown in
Figure 3, the microarchitecture of this hardware stack may be viewed as an
encapsulated execution unit. The role of this subsystem is to implement QPU instructions and facilitate in-band signaling. Outputs from the execution
unit to the quantum register are control channels for modulating independent
carrier signals of each register element via analog upsampling/downsampling
stages (mixers or modulators). The carrier frequencies depend on the specific
quantum computing platform used and range from microwave (few gigahertz)
for superconducting qubits to optical frequencies (hundreds of terahertz) for
trapped-ion qubits. Thus, the execution unit translates digital instructions into
analog pulses and this translation may be tuned with respect to the instruction
set architecture as well as the physical condition of the quantum register.
For the macroarchitectures in Figure 2, quantum execution units are internal to the QPU.7 Presently, FPGAs have proven to be highly effective implementations of the execution units needed to implement instructions on prototype QPUs. In addition to being reconfigurable, FPGAs offer relatively low-latency responses.
Figure 3 shows a quantum-enabled node that contains multiple components
next to the execution unit to manage and control the operation of the QPU. Due
to the large number of environmental effects (e.g., electromagnetic fields,
vibrations, temperature, etc.) on the qubits and the complexity of controlling
the environment to a suitable precision for reliable operation, a management
server and several additional hardware (HW) devices are needed within the QC
node. These components can be ascribed to the out-of-band management,
which is well known from system management of classical computing systems.
Currently, FPGAs are the most common hardware used to interface between the HPC and QC node. They sit physically close to the qubits, providing commands locally, and they control the analog systems that directly control the qubits. The most important feature of FPGAs, as mentioned above, is their programmability. However, access to these devices is currently unavailable to algorithm developers. One way to increase access and improve performance would be to allow computational access directly at the FPGA level. Operations such as parameter updates and logical while-loops could be programmed into the system at the FPGA or similar device level and, as a result, ease the latency bottleneck caused by delays in the movement of data through the entire stack of the HPC and QC system.9
Modulated carrier signals represent the most fundamental qubit operations, which are commonly defined as pulses in the software stack. Pulses are the building blocks for quantum logic gates, which are themselves building blocks for quantum circuits. Quantum gates (operations) can be classified into single-qubit and multiqubit gates as well as the measurement operation. A measurement on a qubit retrieves one bit of classical information in digital form. Typical single-qubit gate durations range from several nanoseconds for superconducting qubits to microseconds for trapped-ion qubits, whereas multiqubit gates and measurements are orders of magnitude longer due to their complexity in both technologies.
Measurement-based feedback control of qubits during the execution of a quantum circuit is enabled by conditional gates, whose condition depends on a logic input derived from the result of a measurement (on the same or a different qubit) and/or additional classical bits. This quantum feedback sets the requirement on the latencies of digital communication channels throughout the real-time control electronics stack and might be difficult to achieve, e.g., in systems with a large number of superconducting qubits.
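
A minimal sketch of such a conditional gate, assuming a recent Qiskit version with support for classically controlled operations via if_test; on real hardware the same pattern is implemented inside the real-time control electronics rather than in Python.

# Mid-circuit measurement followed by a classically conditioned gate.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2)
c = ClassicalRegister(1)
qc = QuantumCircuit(q, c)

qc.h(q[0])
qc.measure(q[0], c[0])       # intermediate measurement
with qc.if_test((c, 1)):     # conditional gate: applied only if the result was 1
    qc.x(q[1])
qc.measure_all()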


For quantum circuits without conditional gates, the latency requirement is relaxed (high-latency) because a single circuit is executed as an immutable sequence of operations. Due to its wide availability and simplicity, communication channels between the compute node and the real-time electronics are currently implemented via an Ethernet link. Lower latency communication channels might be realized with other cable-based standards, e.g., InfiniBand, which are common in the networking of HPC environments. This allows remote direct memory access into the memory of the FPGA, which significantly reduces the communication latency overhead for executing quantum circuits. Furthermore, these channels still allow resources to be shared by creating networks of HPC nodes and QC hardware systems. Even further integration of the QC hardware into HPC infrastructure, by placing the real-time control electronics on the same device, might enable the use of wire-based short-range communication protocols, e.g., PCI Express or NVLink, in case even lower latencies and higher data rates are required.

Programming Models
The software stack for enabling QC integration with HPC will require extensibility
and modularity.10 Existing variability in quantum technologies (superconducting,
trapped-ion, etc.) from competing hardware vendors (Google, IBM, Rigetti,
Honeywell, etc.) requires customization at all levels of abstraction. Here we
distill those demands into a number of service interfaces for programming,
compilation, execution, and control of QPUs. Service interfaces provide a means
for deploying complex software systems that grow organically as new hardware,
programming, control, and compilation techniques are developed by the
community. Ultimately, we advocate for a hardware-agnostic approach that
decomposes software into extensible interfaces for host-to-control integration,
compiler, language, and algorithmic libraries,11 as illustrated in Figure 4.

Figure 4. Decomposition of the software architecture required for quantum-HPC integration into a series of workflow steps, each exposing a unique set of service interfaces. This architecture maximizes the flexibility, modularity, and extensibility of the suggested integration strategy. Here we show the workflow decomposed into programming, compilation, IR lowering, and execution components. Each component exposes a series of service interfaces intended for the implementation of concrete use cases.
Language-level constructs for QPUs are enabled by pulse-level instructions and a robust quantum intermediate representation (QIR) that translates domain-specific or stand-alone languages into hardware actions. For N languages and M backends, the QIR enables N+M implementations and avoids writing a separate parser-to-device implementation for every combination of language and backend (N×M in total). Custom digital or analog instructions may then be mapped directly through the IR to a control system instantiation. The QIR uniquely represents these quantum expressions at a level that is higher than the native QPU instructions and can present itself in a number of forms: 1) an in-memory object model defined as an n-ary tree that is well suited for analysis and transformation; 2) a graph that captures quantum instruction control flow; 3) a human-readable assembly string; or 4) a persisted bitcode or binary file.
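
To make the in-memory form concrete, here is a toy sketch of an IR defined as an n-ary tree; the node names and fields are invented for illustration and do not correspond to any particular QIR specification.

# Toy in-memory IR: each node may hold child nodes, forming an n-ary tree
# that is easy to walk for analysis and transformation passes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IRNode:
    name: str                                            # e.g., "h", "cx", "measure"
    operands: List[int] = field(default_factory=list)    # qubit indices
    children: List["IRNode"] = field(default_factory=list)

    def walk(self):
        yield self
        for child in self.children:
            yield from child.walk()

program = IRNode("circuit", children=[
    IRNode("h", [0]),
    IRNode("cx", [0, 1]),
    IRNode("measure", [0, 1]),
])

# A trivial analysis pass: count the entangling (cx) instructions.
print(sum(1 for node in program.walk() if node.name == "cx"))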
The QIR provides an extension point for host-to-control interaction. Each QPU
vendor provides a unique control system for the execution of quantum
instructions. The interaction may be managed through a service interface that
exposes the quantum instruction API for specific QPUs and maps QIR
expressions to native instruction sets. These interfaces may be further configured for application use cases, hardware support, and quantum
compilation that address quantum resource optimization, placement and
routing, and error mitigation and QEC. The transformations take as input an
instance of the QIR and output a modified instance.
High-level programmability of the CPU-QPU hybrid system needs performant
language approaches that permit creation of new quantum algorithmic
primitives.12 Software protocols for quantum acceleration in HPC environments
will also need to be compatible with existing HPC applications, tools, compilers,
and parallel runtimes. This implies the need for hybrid software approaches to
provide system-level languages, like C++, with appropriate bindings to higher
level application languages like Python or Julia. Providing an infrastructure
based on C++ will provide the performance necessary to integrate future QPUs
and enable in-sequence instruction execution for utilities and methods that
require fast-feedback. A C++-like language is multiparadigm (object-oriented,
functional, etc.) and enables integration with existing HPC workflows. Of course,
these language extensions will benefit from a service-oriented approach, whereby agnostic compiler technologies, like Clang, parse the quantum expressions from domain-specific languages. Algorithmic interfaces that support this uniform programming model for hybrid execution can both encapsulate common programming concerns and remove the need for programmers to rewrite code from scratch for every compilation target.
Leveraging existing classical HPC programming models should enable the
programmability of the distributed macroarchitectures described above while
providing flexibility to accommodate the diversity of CPU-QPU interfaces. For
distributed systems, a hierarchical programming model across system, node,
and device interfaces can leverage conventional approaches in tandem with the
quantum programming model described here. For example, in a distributed
memory system with multiple nodes each having a dedicated QPU, the well-
known MPI+X model may distribute work across nodes via the MPI. Extensions
to the runtime infrastructure can map programs onto remote or integrated
execution models. Single-circuit and ensemble-circuit applications represented
in the QIR can be submitted for execution in a remotely hosted QPU paradigm,
while the execution of individual quantum instructions may be designed to
support the fast-feedback, in-sequence applications.

SECURITY AND SCALABILITY


For many, running their own business and in turn being their own boss is a dream worth working toward. However, there are a lot of considerations to make when it comes to hiring competent employees, managing finances and establishing best practices. Another often forgotten piece of the puzzle is choosing a security solution that fits a business's needs both now and in
the future. Thanks to technological advancements, users no longer have to
sacrifice security for scalability, or vice versa. Users are demanding solutions
that can change as their business does, and in turn, the industry is responding.
There are pros and cons to the many technologies involved in access control
systems today, so how can you be sure you are selecting a solution designed
with both security and scalability in mind? Several considerations should be
made before making a technology investment to bring you to the future.
Should I Use the Cloud?
There are a lot of benefits to choosing a cloud-based access control system,
most notably the lower cost of ownership and increased scalability
potential. With no software to install, paired with automatic updates, the cloud ensures delivery of the most up-to-date, secure version of the system possible, with room to add or remove services as the business trajectory fluctuates.
For example, if a business only requires support for four doors when they
launch, but experiences growth and later needs support for 10 doors, this can
be done easily within a cloud-based system. Scalability is the biggest advantage of the cloud, but what a lot of people don't realise is that the cloud is also just as secure as fully local counterparts, if not more so.
According to the Hiscox Cyber Readiness Report, 70% of organisations are not
prepared for a cyber attack. This tells us two things: First, businesses do not
have the resources to dedicate a full team to cyber security; second,
businesses are, understandably, putting operations ahead of cyber security
best practices.
If your business falls into the 70%, the good news is that cloud-based solutions
are typically part of a subscription based model, allowing users to pay a set
monthly price for the services they receive. This ensures users are only paying
for what they need at present, with the ability to change the services they
subscribe to as their business grows. Along with this, users typically receive
24/7 monitoring by the cloud provider, guaranteeing their system is under the
watchful eye of an off-premise professional. Vulnerabilities are often detected
and dealt with before the company even realizes they were at risk. If the same
hacker attacked a local solution at a company without the resources for a
dedicated IT team, it is unlikely that the risk would be identified and
eliminated as quickly.
Cloud providers employ best-in-class IT professionals so that you don't have to. No capital expenditures and a dedicated team monitoring the system position the cloud as an ideal option for affordability, scalability and security.
Are open source systems secure?
Open source systems provide a level of flexibility that is unmatched by other
options in the market. Instead of choosing a single manufacturer and getting
locked into purchasing products and services from only that vendor, users can
instead mix and match to receive a true “best-of-breed” solution. An open
system means being able to use equipment from a variety of companies to
customise the solution to a user’s unique needs. There is no cookie cutter
approach to access control, and open systems provide the means by which
integrators can create solutions tailored on a case-by-case basis.
As a result of this flexibility, users can start small and add more system
components as their company grows. With no vendor lock-in taking place, users can ensure they are choosing equipment to stay at a price point they are
comfortable with, while also remaining on the cutting edge of new
technologies. This is necessary as users grow their businesses and in turn need
additional physical equipment. An open source access control system can also
easily integrate with video management systems (VMS), human resources and
HVAC to provide a full operational view of a facility.
With proprietary systems, users are instead forced to make up-front
commitments that may not allow for variance as new technologies are
developed. Open systems ensure users are at the forefront of innovation, on
whatever scale works best for them, while also providing the freedom of
choice to make security decisions based on functionality and security
preferences, rather than manufacturer agreements.
Open source systems are as secure as the people operating them. Work with
an integrator to customise a solution based on your security needs, and
consider combining an open source system with the cloud for a high level of
system oversight, and increased scalability potential.
What about credentials?
The definition of a credential as we know it is changing, and as such, the way we go about credentialing is shifting from traditional cards to smarter options.
Credentials today are created around the identity of a user, rather than a
simple fob. There are a few credential options to keep in mind to ensure both
ongoing security, and the potential to scale up or down to adjust to an evolving
business trajectory.
Proximity cards are convenient, but are no longer the most secure option. It is
a known risk that unauthorised individuals can gain access to an area by
following behind someone with a working credential. Because there is no true
identity attached to this type of credential, there is no way to know that the
correct person is actually gaining access. End users are recognising this, and
are making the move to more secure options.
Mobile credentials are gaining traction because, in theory, users can use their
own devices. This eliminates security issues with previous access control
credentials, such as card duplication or sharing. Users are more likely to
remember and keep tabs on their own mobile devices as opposed to a key fob
that is easy to forget when running late or distracted because mobile devices
are already so engrained in day-to-day life.
In addition, mobile credentials reduce costs because companies do not have
to spend money on plastic cards or fobs, as most users have their own mobile devices that can be programmed with firewalls and multi-factor authentication to ensure the identity attached to the credential is authentic.
However, it is important to note that full-scale mobile credential deployments
require a special infrastructure to be in place, which can often be costly. As
traditional credentials become outdated it is important to look to the future
to be sure you are making wise technology investments that are secure, but
are also reflections of a changing technological landscape. If you are not ready
to implement mobile credentials today but think you may be in the future, it
is important to have this conversation with your integrator to be sure you are
ready when the time comes.

QUANTUM ERROR CORRECTION


The journey to realizing functional quantum computers will be long and it's a
path that Q-CTRL is committed to making as easy as possible for you. And by
easy, we mean less difficult. Building a universal quantum computer with
millions of entangled, coherent quantum bits running complex algorithms is not
going to be simple or straightforward.

From noise to error in quantum computing


Here we’ll get to the heart of why quantum computing is really hard: noise and
error.

“Noise” describes all of the things that cause interference in a quantum computer. Just like a mobile phone call can suffer interference leading it to break
up, a quantum computer is susceptible to interference from all sorts of sources,
like electromagnetic signals coming from WiFi or disturbances in the Earth’s
magnetic field. When qubits in a quantum computer are exposed to this kind of
noise, the information in them gets degraded just the way sound quality is
degraded by interference on a call. This is known as decoherence.

Compared with standard computers, quantum computers are extremely susceptible to noise. A typical transistor in a microprocessor can run for about a
billion years at a billion operations per second, without ever suffering a
hardware fault due to any form of interference. By contrast, typical quantum
bits become randomized in about one one-thousandth of a second. That’s a
huge difference.

Now consider a quantum algorithm, executing many operations across a large number of qubits. Noise causes the information in the qubits to become randomized - and this leads to errors in our algorithm. The greater the influence of noise, the shorter the algorithm that can be run before it suffers an error and
outputs an incorrect or even useless result. Right now, instead of the trillions of
operations that might be needed to run a full-fledged quantum algorithm, we
can typically only perform dozens before noise causes a fatal error.

Quantum Error Correction


So what do we do about this?

Companies building quantum computers like IBM and Google have highlighted
that their roadmaps include the use of “Quantum Error Correction” as they scale
to machines with 1000 or more qubits.

Quantum Error Correction - or QEC for short - is an algorithm known to identify and fix errors in quantum computers. It's able to draw from validated
mathematical approaches used to engineer special “radiation hardened”
classical microprocessors deployed in space or other extreme environments
where errors are much more likely to occur. QEC is the source of much of the
great promise supporting our community's aspirations for quantum computing
at-scale.

In QEC, quantum information stored in a single qubit is distributed across other supporting qubits; we say that this information is "encoded" in a logical
quantum bit. This procedure protects the integrity of the original quantum
information even while the quantum processor runs - but at a cost in terms of
how many qubits are required. Overall, the worse your noise is, the more qubits
you need.
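
As a toy illustration of this encoding idea, the sketch below (assuming Qiskit) spreads one qubit's bit-flip information across three physical qubits and extracts error syndromes with two ancillas; real machines target far more capable codes such as surface codes, so this only shows the distribute-and-measure-syndromes pattern.

# Three-qubit bit-flip repetition code: encode, inject an error, read syndromes.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

data = QuantumRegister(3, "data")
anc = QuantumRegister(2, "anc")
syn = ClassicalRegister(2, "syndrome")
qc = QuantumCircuit(data, anc, syn)

qc.cx(data[0], data[1])   # encode: spread the logical bit over three physical qubits
qc.cx(data[0], data[2])

qc.x(data[1])             # deliberately inject a bit-flip error on the middle qubit

qc.cx(data[0], anc[0])    # ancilla 0 compares data qubits 0 and 1
qc.cx(data[1], anc[0])
qc.cx(data[1], anc[1])    # ancilla 1 compares data qubits 1 and 2
qc.cx(data[2], anc[1])
qc.measure(anc, syn)      # syndrome '11' pinpoints the error on the middle qubit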

Depending on the nature of the hardware and the type of algorithm you choose
to run, the ratio between the number of physical qubits you need to support a
single logical qubit varies - but current estimates put it at about 1000 to one.
That's huge. Today’s machines are nowhere near capable of getting benefits
from this kind of Quantum Error Correction.

QEC has seen many partial demonstrations in laboratories around the world -
first steps making clear it’s a viable approach. But in general the enormous
resource overhead leads to things getting worse when we try to implement QEC.
Right now there is a global research effort underway trying to cross the “break
even” point where it’s actually advantageous to use QEC relative to the many
resources it consumes.

How do we get there?



Quantum firmware and Quantum Error Correction


This is where Q-CTRL comes in. We add something extra - quantum firmware -
which can stabilize the qubits against noise and decoherence without the need
for extra qubits. Quantum firmware serves as a complement to QEC, such that
in combination we can accelerate the pathway to useful quantum computers.

One kind of quantum firmware works by something called dynamic stabilization - if you constantly rotate your qubits in just the right way you can
make them effectively immune to the noise which would normally randomize
them. It sounds a bit like magic, but believe it or not, similar techniques are
already used to stabilize the memory in your computer.
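
One widely known form of this kind of dynamic stabilization is a spin-echo or dynamical-decoupling sequence, sketched below with Qiskit; the pulse spacing is arbitrary here, and a real control stack would calibrate the sequence to the device's actual noise.

# A simple echo-style sequence: X pulses interleaved with idle periods so that
# slow phase drift accumulated in one interval is refocused in the next.
from qiskit import QuantumCircuit

qc = QuantumCircuit(1)
qc.h(0)                           # put the qubit into a superposition worth protecting
for _ in range(4):                # even number of X pulses returns the original frame
    qc.delay(100, 0, unit="ns")   # idle period (illustrative duration)
    qc.x(0)
qc.delay(100, 0, unit="ns")
qc.h(0)
qc.measure_all()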

The techniques are easy to implement and the benefits can be huge - our own
experiments have demonstrated more than 10X improvements in cloud
quantum computers!

In the context of QEC, quantum firmware actually reduces the number of qubits
required to perform error correction. Exactly how is a complex story, but in
short, quantum firmware reduces the likelihood of error during each operation
on a quantum computer. Better yet, quantum firmware easily eliminates certain
kinds of errors that are really difficult for QEC, and actually transforms the
characteristics of the remaining errors to make them more compatible with QEC.
Win-win!

Looking to the future we see that a holistic approach to dealing with noise and
errors in quantum computers is essential. Quantum Error Correction is a core
part of the story, and combined with performance-boosting quantum firmware
we see a clear pathway to the future of large-scale quantum computers.

FAULT TOLERANCE
Fault Tolerance simply means a system’s ability to continue operating
uninterrupted despite the failure of one or more of its components. This is true
whether it is a computer system, a cloud cluster, a network, or something else.
In other words, fault tolerance refers to how an operating system (OS) responds
to and allows for software or hardware malfunctions and failures.

An OS’s ability to recover and tolerate faults without failing can be handled by
hardware, software, or a combined solution leveraging load balancers (see more
below). Some computer systems use multiple duplicate fault tolerant systems
to handle faults gracefully. This is called a fault tolerant network.

FAQs

What is Fault Tolerance?

The goal of fault tolerant computer systems is to ensure business continuity and
high availability by preventing disruptions arising from a single point of failure.
Fault tolerance solutions therefore tend to focus most on mission-critical
applications or systems.

Fault tolerant computing may include several levels of tolerance:

• At the lowest level, the ability to respond to a power failure, for example.
• A step up: during a system failure, the ability to use a backup system immediately.
• Enhanced fault tolerance: a disk fails, and mirrored disks take over for it immediately. This provides functionality despite partial system failure, or graceful degradation, rather than an immediate breakdown and loss of function.
• High-level fault tolerant computing: multiple processors collaborate to scan data and output to detect errors, and then immediately correct them.

Fault tolerance software may be part of the OS interface, allowing the programmer to check critical data at specific points during a transaction.

Fault-tolerant systems ensure no break in service by using backup components that take the place of failed components automatically. These may include:

• Hardware systems with identical or equivalent backup operating systems. For example, a server with an identical fault tolerant server mirroring all operations in backup, running in parallel, is fault tolerant. By eliminating single points of failure, hardware fault tolerance in the form of redundancy can make any component or system far safer and more reliable.
• Software systems backed up by other instances of software. For example, if you replicate your customer database continuously, operations in the primary database can be automatically redirected to the second database if the first goes down (see the sketch after this list).
• Redundant power sources can help avoid a system fault if alternative sources can take over automatically during power failures, ensuring no loss of service.
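
A minimal sketch of that software-level failover idea in Python; the query functions are hypothetical placeholders, and a production system would rely on the replication and failover machinery of the database itself.

# Hypothetical automatic redirection from a failed primary database to a replica.
def query_primary(sql):
    raise ConnectionError("primary database is down")   # simulate a failure

def query_replica(sql):
    return [("ok",)]                                     # the standby copy answers

def run_query(sql):
    """Try the primary first; redirect to the replica if the primary fails."""
    try:
        return query_primary(sql)
    except ConnectionError:
        return query_replica(sql)

print(run_query("SELECT * FROM customers"))              # served by the replica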

High Availability vs Fault Tolerance

Highly available systems are designed to minimize downtime to avoid loss of service. Expressed as a percentage of total running time in terms of a system's uptime, 99.999 percent uptime is the ultimate goal of high availability.

Although both high availability and fault tolerance reference a system’s total
uptime and functionality over time, there are important differences and both
strategies are often necessary. For example, a totally mirrored system is fault-
tolerant; if one mirror fails, the other kicks in and the system keeps working with
no downtime at all. However, that’s an expensive and sometimes unwieldy
solution.

On the other hand, a highly available system such as one served by a load
balancer allows minimal downtime and related interruption in service without
total redundancy when a failure occurs. A system with some critical parts
mirrored and other, smaller components duplicated has a hybrid strategy.


In an organizational setting, there are several important concerns when creating high availability and fault tolerant systems:

Cost. Fault tolerant strategies can be expensive, because they demand the
continuous maintenance and operation of redundant components. High
availability is usually part of a larger system, one of the benefits of a load
balancing solution, for example.

Downtime. The greatest difference between a fault-tolerant system and a highly available system is downtime, in that a highly available system has some minimal
permitted level of service interruption. In contrast, a fault-tolerant system
should work continuously with no downtime even when a component fails. Even
a system with the five nines standard for high availability will experience
approximately 5 minutes of downtime annually.
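
The five-nines figure follows directly from the arithmetic below; the roughly five-minute result is a simple calculation, not a quoted benchmark.

# Downtime permitted per year at "five nines" (99.999%) availability.
minutes_per_year = 365.25 * 24 * 60
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(f"{allowed_downtime:.2f} minutes per year")   # about 5.26 minutes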

Scope. High availability systems tend to share resources designed to minimize downtime and co-manage failures. Fault tolerant systems require more,
including software or hardware that can detect failures and change to
redundant components instantly, and reliable power supply backups.

Certain systems may require a fault-tolerant design, which is why fault tolerance
is important as a basic matter. On the other hand, high availability is enough for
others. The right business continuity strategy may include both fault tolerance
and high availability, intended to maintain critical functions throughout both
minor failures and major disasters.

What are Fault Tolerance Requirements?

Depending on the fault tolerance issues that your organization copes with, there
may be different fault tolerance requirements for your system. That is because
fault-tolerant software and fault-tolerant hardware solutions both offer very
high levels of availability, but in different ways.

Fault-tolerant servers use a minimal amount of system overhead to achieve high availability with an optimal level of performance. Fault-tolerant software may
be able to run on servers you already have in place that meet industry standards.

What is Fault Tolerance Architecture?

There is more than one way to create a fault-tolerant server platform and thus
prevent data loss and eliminate unplanned downtime. Fault tolerance in
computer architecture simply reflects the decisions administrators and engineers use to ensure a system persists even after a failure. This is why there
are various types of fault tolerance tools to consider.

At the drive controller level, a redundant array of inexpensive disks (RAID) is a common fault tolerance strategy that can be implemented. Other facility level
forms of fault tolerance exist, including cold, hot, warm, and mirror sites.

Fault tolerance computing also deals with outages and disasters. For this reason, a fault tolerance strategy may include an uninterruptible power supply (UPS) or a generator—some way to run independently from the grid should it fail.

Byzantine fault tolerance (BFT) is another issue for modern fault tolerant
architecture. BFT systems are important to the aviation, blockchain, nuclear
power, and space industries because these systems prevent downtime even if
certain nodes in a system fail or are driven by malicious actors.

What is the Relationship Between Security and Fault Tolerance?

Fault tolerant design prevents security breaches by keeping your systems online
and by ensuring they are well-designed. A naively-designed system can be taken
offline easily by an attack, causing your organization to lose data, business, and
trust. Each firewall, for example, that is not fault tolerant is a security risk for
your site and organization.

What is Fault Tolerance in Cloud Computing?

Conceptually, fault tolerance in cloud computing is mostly the same as it is in hosted environments. Cloud fault tolerance simply means your infrastructure is
capable of supporting uninterrupted functionality of your applications despite
failures of components.

In a cloud computing setting that may be due to autoscaling across geographic zones or in the same data centers. There is likely more than one way to achieve
fault tolerant applications in the cloud in most cases. The overall system will still
demand monitoring of available resources and potential failures, as with any
fault tolerance in distributed systems.

What Are the Characteristics of a Fault Tolerant Data Center?

To be called a fault tolerant data center, a facility must avoid any single point of
failure. Therefore, it should have two parallel systems for power and cooling.
However, total duplication is costly, gains are not always worth that cost, and infrastructure is not the only answer. Therefore, many data centers practice
fault avoidance strategies as a mid-level measure.

Load Balancing Fault Tolerance Issues

Load balancing and failover solutions can work together in the application
delivery context. These strategies provide quicker recovery from disasters
through redundancy, ensuring availability, which is why load balancing is part of
many fault tolerant systems.

Load balancing solutions remove single points of failure, enabling applications to run on multiple network nodes. Most load balancers also make various
computing resources more resilient to slowdowns and other disruptions by
optimizing distribution of workloads across the system components. Load
balancing also helps deal with partial network failures, shifting workloads when
individual components experience problems.

QUANTUM CRYPTOGRAPHY
Quantum cryptography is a method of encryption that uses the naturally
occurring properties of quantum mechanics to secure and transmit data in a way
that cannot be hacked.

Cryptography is the process of encrypting and protecting data so that only the
person who has the right secret key can decrypt it. Quantum cryptography is
different from traditional cryptographic systems in that it relies on physics,
rather than mathematics, as the key aspect of its security model.

Quantum cryptography is a system that is completely secure against being compromised without the knowledge of the message sender or the receiver.
That is, it is impossible to copy or view data encoded in a quantum state without
alerting the sender or receiver. Quantum cryptography should also remain safe
against those using quantum computing as well.

Quantum cryptography uses individual particles of light, or photons, to transmit data over fiber optic wire. The photons represent binary bits. The security of the
system relies on quantum mechanics. These secure properties include the
following:


• particles can exist in more than one place or state at a time;
• a quantum property cannot be observed without changing or disturbing it; and
• whole particles cannot be copied.

These properties make it impossible to measure the quantum state of any system without disturbing that system.

Photons are used for quantum cryptography because they offer all the necessary
qualities needed: Their behavior is well understood, and they are information
carriers in optical fiber cables. One of the best-known examples of quantum
cryptography currently is quantum key distribution (QKD), which provides a
secure method for key exchange.

How does quantum cryptography work?


In theory, quantum cryptography works by following a model that was developed in 1984, known as the BB84 protocol.

The model assumes there are two people named Alice and Bob who wish to
exchange a message securely. Alice initiates the message by sending Bob a key.
The key is a stream of photons that travel in one direction. Each photon
represents a single bit of data -- either a 0 or 1. However, in addition to their
linear travel, these photons are oscillating, or vibrating, in a certain manner.

So, before Alice, the sender, initiates the message, the photons travel through a
polarizer. The polarizer is a filter that enables certain photons to pass through it
with the same vibrations and lets others pass through in a changed state of
vibration. The polarized states could be vertical (1 bit), horizontal (0 bit), 45
degrees right (1 bit) or 45 degrees left (0 bit). The transmission has one of two
polarizations representing a single bit, either 0 or 1, in either scheme she uses.

The photons now travel across optical fiber from the polarizer toward the
receiver, Bob. This process uses a beam splitter that reads the polarization of
each photon. When receiving the photon key, Bob does not know the correct polarization of the photons, so one polarization is chosen at random. Alice now compares what Bob used to polarize the key and then lets Bob know which
polarizer she used to send each photon. Bob then confirms if he used the correct
polarizer. The photons read with the wrong splitter are then discarded, and the
remaining sequence is considered the key.
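
The basis-comparison ("sifting") step described above can be simulated in a few lines of plain Python; this toy version ignores physical details such as photon loss and simply shows how Alice and Bob keep only the bits where their basis choices happened to match.

# Toy BB84 sifting: keep only the bits where sender and receiver bases agree.
import random

n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# If Bob guesses the wrong basis, his measurement outcome is random.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Public discussion: compare bases (never the bits) and discard mismatches.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(alice_key == bob_key)   # True in this idealized, eavesdropper-free run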

Let's suppose there is an eavesdropper present, named Eve. Eve attempts to listen in and has the same tools as Bob. But Bob has the advantage of speaking
to Alice to confirm which polarizer type was used for each photon; Eve doesn't.
Eve ends up rendering the final key incorrectly.

Alice and Bob would also know if Eve was eavesdropping on them, because Eve observing the flow of photons would change the photon polarizations that Alice and Bob expect to see.

What quantum cryptography is used for and examples


Quantum cryptography enables users to communicate more securely compared
to traditional cryptography. After keys are exchanged between the involved
parties, there is little concern that a malicious actor could decode the data
without the key. If the key is observed when it is being constructed, the expected
outcome changes, alerting both the sender and the receiver.


This method of cryptography has yet to be fully developed; however, there have
been successful implementations of it:

• The University of Cambridge and Toshiba Corp. created a high-bit-rate QKD system using the BB84 quantum cryptography protocol.
• The Defense Advanced Research Projects Agency Quantum Network, which ran from 2002 to 2007, was a 10-node QKD network developed by Boston University, Harvard University and IBM Research.
• Quantum Xchange launched the first quantum network in the U.S., featuring 1,000 kilometers (km) of fiber optic cable.
• Commercial companies, such as ID Quantique, Toshiba, Quintessence Labs and MagiQ Technologies Inc., also developed commercial QKD systems.

In addition to QKD, some of the more notable protocols and quantum algorithms
used in quantum cryptography are the following:

• quantum coin flipping;
• position-based quantum cryptography; and
• device-independent quantum cryptography.
Benefits of quantum cryptography
Benefits that come with quantum cryptography include the following:

• Provides secure communication. Instead of difficult-to-crack numbers, quantum cryptography is based on the laws of physics, which is a more sophisticated and secure method of encryption.
• Detects eavesdropping. If a third party attempts to read the encoded data, then the quantum state changes, modifying the expected outcome for the users.


• Offers multiple methods for security. There are numerous quantum cryptography protocols used. Some, like QKD, for example, can combine with classical encryption methods to increase security.
Limitations of quantum cryptography
Potential downsides and limitations that come with quantum cryptography
include the following:

• Changes in polarization and error rates. Photons may change polarization in transit, which potentially increases error rates.
• Range. The maximum range of quantum cryptography has typically been around 400 to 500 km, with the exception of Terra Quantum, as noted below.
• Expense. Quantum cryptography typically requires its own infrastructure, using fiber optic lines and repeaters.
• Number of destinations. It is not possible to send keys to two or more locations in a quantum channel.
Differences between traditional cryptography and quantum cryptography
The classic form of cryptography is the process of mathematically scrambling
data, where only one person who has access to the right key can read it.
Traditional cryptography has two different types of key distribution: symmetric
key and asymmetric key. Symmetric key algorithms work by using a single key
to encrypt and decrypt information, whereas asymmetric cryptography uses two
keys -- a public key to encrypt messages and a private key to decode them.
Traditional cryptography methods have been trusted since it would take an
impractical time frame for classical computers to factor the needed large
numbers that make up public and private keys.
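
To make the symmetric/asymmetric distinction concrete, here is a brief sketch using the third-party Python cryptography package (an assumption; any comparable library would do): Fernet illustrates a single shared key, while RSA illustrates a public/private key pair.

# Symmetric vs. asymmetric encryption in miniature (requires the 'cryptography' package).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric: one secret key both encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
assert f.decrypt(f.encrypt(b"meet at noon")) == b"meet at noon"

# Asymmetric: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"meet at noon", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"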


This image shows the differences between classical cryptography and quantum
cryptography.

Unlike traditional cryptography, which is based on mathematics, quantum cryptography is based on the laws of quantum mechanics. And, whereas
traditional cryptography is based on mathematical computation, quantum
cryptography is much harder to decrypt since the act of observing the involved
photons changes the expected outcome, making both the sender and receiver
aware of the presence of an eavesdropper. Quantum cryptography also typically
has a distance or range associated with it since the process requires fiber optic
cables and repeaters spaced out to boost the signal.

Future of quantum cryptography implementation

Quantum computing is still in its early phases and needs further development before a broad audience can start using quantum communications. Even though quantum cryptography has limitations, such as not being able to send keys to two locations at once, the field is still growing steadily.

Recent advances, for example, include improvements in range. Swiss quantum technology company Terra Quantum announced a breakthrough for quantum cryptography in terms of range. Previously, the distance for quantum cryptography was limited to a maximum of 400 to 500 km. Terra Quantum's development enables quantum cryptography keys to be transmitted over a distance of more than 40,000 km. Instead of building a new optical line filled with numerous repeaters, Terra Quantum's approach allows quantum keys to be distributed inside standard optical fiber lines that are already used in telecom networks.

IMPLEMENTING QUANTUM COMPUTING

This guide provides the basic steps needed to successfully implement quantum computing into a software application that you are developing.

Step 1: Work out if your application requires quantum computing

Despite quantum computing often being touted as the next big revolution in computing, there are many problems where a classical computer can actually outperform a quantum computer. One good example is basic arithmetic. A classical computer can perform arithmetic in nanoseconds because it has dedicated logic in the form of ALUs. Quantum computers have no such dedicated hardware and are quite slow in comparison. Their power does not lie in raw speed but in the fact that they can make use of quantum parallelisation.

In fact, on specific quantum devices called annealing processors it is often better for the runtime to be slower: if the anneal runs too fast, the solution returned is likely to be non-optimal.

What types of problems can Quantum Computers solve?

Quantum computers tend to be well suited to problems that have a very large search space. For example, consider a database. Say you need to search the database for an entry. The worst-case scenario for a classical computer is that it has to go through the entire database to find the entry, which corresponds to a computational complexity of O(N).

However, with quantum computing it is possible to solve such problems more efficiently and with lower computational complexity. For example, a quantum algorithm called Grover's search algorithm can solve this unstructured search problem in O(√N) steps.
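As a hedged illustration (not part of the original text), the sketch below runs a minimal two-qubit Grover search in Qiskit that marks and amplifies the state |11⟩. With N = 4 entries a single Grover iteration already suffices. It assumes the qiskit and qiskit-aer packages are installed.

# Sketch: 2-qubit Grover search for the marked state |11>.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

grover = QuantumCircuit(2, 2)
grover.h([0, 1])        # uniform superposition over all 4 entries

grover.cz(0, 1)         # oracle: flip the phase of |11>

grover.h([0, 1])        # diffusion operator (inversion about the mean)
grover.z([0, 1])
grover.cz(0, 1)
grover.h([0, 1])

grover.measure([0, 1], [0, 1])

backend = AerSimulator()
counts = backend.run(transpile(grover, backend), shots=1000).result().get_counts()
print(counts)           # '11' should dominate the counts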


Step 2: Find the right quantum device

Now that we have established that our application can benefit from quantum computing, we need to find the right device. The field of quantum computing is a burgeoning one, and many different quantum devices have been developed, each with its own pros and cons. When developing an application, it is therefore important to know which device best suits the problem you wish to solve.

Gate-based Quantum Computing

The most popular type of quantum device is the gate-based quantum computer. This is a quantum computer where operations are performed on qubits using quantum logic gates. These logic gates are much like the logic gates found in classical computers but are more complex.

Here are some examples (a short Qiskit sketch using these gates follows the list):

• Hadamard gate: This gate puts a qubit into a superposition of states and is arguably the most important quantum logic gate.
• Pauli gates: These are a set of gates that rotate a qubit's state by 180 degrees around different axes. An X-gate rotates the state around the X axis, the Z-gate rotates around the Z axis, and the Y-gate rotates around the Y axis.
• Controlled-NOT (CNOT): This is a multi-qubit gate that operates on one qubit based upon the state of another. If the control qubit is 1, then the target qubit's state is flipped from 0 to 1 or vice versa.
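A minimal sketch of these gates in Qiskit, assuming the qiskit and qiskit-aer packages are installed (the circuit itself is an arbitrary illustration):

# Sketch: applying the Hadamard, Pauli and CNOT gates in Qiskit.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)        # Hadamard: puts qubit 0 into superposition
qc.x(1)        # Pauli-X: flips qubit 1 from |0> to |1>
qc.z(1)        # Pauli-Z: phase flip on qubit 1
qc.y(1)        # Pauli-Y: rotation around the Y axis
qc.cx(0, 1)    # CNOT: flips qubit 1 only when qubit 0 is |1>
qc.measure([0, 1], [0, 1])

backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1000).result().get_counts()
print(counts)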

Gate-based quantum computers are general purpose and as such can be used for any problem that benefits from quantum parallelisation, for example factoring numbers using Shor's algorithm or efficient search using Grover's search algorithm. Machine learning can also be done using variational quantum circuits.

There are many different hardware implementations of gate-based quantum computing:


1. Superconducting quantum devices: The most popular hardware implementation. Current devices use noisy qubits, with qubit counts typically ranging between 5 and 53, and must be cooled using cryogenic refrigeration.
2. Trapped-ion devices: These use trapped ions as qubits. They tend to have very high-fidelity qubits with much longer coherence times, but a lower qubit count than superconducting devices.
3. Photon-based devices: These use photons as qubits. They tend to offer high fidelity and fast gate operations and do not require refrigeration.

Quantum Annealers

This type of quantum computer is designed specifically for solving optimisation problems, for example the Travelling Salesman Problem, which amounts to finding the shortest route that visits a set of locations. On a quantum annealer this is done by first initialising the qubits into superposition. The qubits and the connections between them are then slowly tuned so that, at the end of the runtime, their configuration corresponds to the optimal solution of interest (a small example of the annealer's input format is sketched below).
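The sketch below shows the kind of input an annealer consumes: a tiny binary optimisation problem written as a quadratic model. It uses D-Wave's open-source dimod package with a classical brute-force solver standing in for real annealing hardware, and the toy problem itself is our own invention.

# Sketch: a toy QUBO solved classically with dimod's ExactSolver.
import dimod

# Minimise  -x0 - x1 + 2*x0*x1  (i.e. pick exactly one of the two variables)
bqm = dimod.BinaryQuadraticModel({"x0": -1, "x1": -1},
                                 {("x0", "x1"): 2},
                                 0.0,
                                 dimod.BINARY)

best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)   # e.g. {'x0': 1, 'x1': 0} with energy -1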

Picking a device based upon qubit/gate error

Another issue to consider when picking a quantum device is qubit and gate error. For example, superconducting quantum devices are very prone to noise, which can lead to errors. These errors come in three main types: bit flips, phase flips and readout errors. They can be handled using quantum error correction or mitigation methods, but these may require additional qubits and gates in the quantum circuits.

It is therefore best to pick a device based on its qubit error rates. For example, one device may have low-fidelity qubits that are prone to error, while another may have very high-fidelity qubits and hence a lower probability of errors occurring.

In Qiskit, qubit error rates for quantum devices can be found using the backend monitor:
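The original screenshot of this step is not reproduced here. A rough equivalent is sketched below; it assumes an IBM Quantum account and an older Qiskit release that still ships qiskit.tools.monitor (newer releases expose the same data through backend.properties()), and the backend name is only an example.

# Sketch: inspecting qubit/gate error rates with Qiskit's backend monitor.
from qiskit import IBMQ
from qiskit.tools.monitor import backend_monitor

provider = IBMQ.load_account()                  # previously saved IBM Quantum credentials
backend = provider.get_backend("ibmq_quito")    # example backend name (an assumption)

backend_monitor(backend)   # prints qubit T1/T2, readout errors and per-gate error rates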


Step 3: Create a quantum algorithm to solve the problem

After you have picked the right device for the problem, you will have to create a quantum algorithm. On gate-based quantum computers this could be a small circuit. For example, if you wanted to create a 32-qubit random number generator, you would simply create a quantum circuit that initialises 32 qubits into superposition using Hadamard gates, measure the qubits, and bring the results back into your application (a scaled-down sketch follows).
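A minimal version of this random-number circuit, scaled down to 8 qubits so it runs quickly on the Aer simulator, might look like the following; the 32-qubit version differs only in the qubit count. Assumes qiskit and qiskit-aer are installed.

# Sketch: an n-qubit quantum random number generator on a simulator.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

n = 8
qc = QuantumCircuit(n, n)
qc.h(range(n))                 # put every qubit into superposition
qc.measure(range(n), range(n)) # collapse each qubit to a random 0 or 1

backend = AerSimulator()
result = backend.run(transpile(qc, backend), shots=1).result()
bitstring = next(iter(result.get_counts()))
print(int(bitstring, 2))       # one uniformly random n-bit integer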

Depending on your problem, your quantum circuit may of course be far more complex. Your algorithm may even be hybrid, combining a classical part with a quantum part consisting of a circuit. Some algorithms use variational quantum circuits, where the circuit is measured and its parameters are then updated each iteration based on the problem of interest (a minimal variational loop is sketched below).
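As an illustration of that measure-then-update loop (our own toy example, not from the original text), the sketch below tunes a single parameterised rotation to minimise the measured expectation value of Z, using a crude grid search in place of a proper classical optimiser. Assumes qiskit and qiskit-aer are installed.

# Sketch: a minimal variational loop on one qubit.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit import Parameter
from qiskit_aer import AerSimulator

theta = Parameter("theta")
ansatz = QuantumCircuit(1, 1)
ansatz.ry(theta, 0)          # one tunable rotation
ansatz.measure(0, 0)

backend = AerSimulator()

def expectation_z(value, shots=1000):
    # Bind the parameter, run the circuit, and estimate <Z> from the counts
    bound = ansatz.assign_parameters({theta: value})
    counts = backend.run(transpile(bound, backend), shots=shots).result().get_counts()
    return (counts.get("0", 0) - counts.get("1", 0)) / shots

# Crude classical "optimiser": scan theta and keep the best value
best = min(np.linspace(0, 2 * np.pi, 25), key=expectation_z)
print("theta minimising <Z>:", best)   # should land near pi (the |1> state)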

In your algorithm you should be making use of superposition or entanglement (or both). These are the quantum mechanical effects that allow quantum computers to outperform classical computers. If your algorithm does not make use of either of them, there is a good chance it will not outperform its classical equivalent.

In gate-based quantum computing this is easy to check. If your circuit uses Hadamard gates, then it is using superposition. If your circuit creates Bell states with Hadamard gates and multi-qubit gates, then it is using entanglement.


Figure: circuit diagram of a Bell pair, consisting of a Hadamard gate followed by a CNOT gate. This is one of the simplest forms of entanglement.
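A Qiskit version of the circuit described in the caption above might look like this sketch (simulator-based; the shot count is arbitrary):

# Sketch: a Bell pair built from a Hadamard followed by a CNOT.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

bell = QuantumCircuit(2, 2)
bell.h(0)          # superposition on the control qubit
bell.cx(0, 1)      # entangle: CNOT with qubit 0 as control
bell.measure([0, 1], [0, 1])

backend = AerSimulator()
counts = backend.run(transpile(bell, backend), shots=1000).result().get_counts()
print(counts)      # ideally only '00' and '11', roughly 500 each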

Step 4: Test the performance of your quantum algorithm

A common mistake among developers is to create a quantum algorithm that seems to work but offers no quantum advantage. A quantum advantage can be defined as the point at which a quantum computer solves a problem much faster than a classical computer. If your quantum algorithm can be beaten by a classical equivalent, there is little point in using it.

It is therefore very important to test your quantum algorithm against a classical one. If your algorithm has lower performance, use the classical algorithm instead. An application that gains a speedup from QC is excellent, but if there is no speedup then it is simply a gimmick.
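Wall-clock benchmarking against a simulator can be misleading, because simulating qubits on classical hardware is itself slow. One hedged sanity check is to compare oracle-query counts instead, as in the sketch below (the formulas are the standard asymptotic estimates for unstructured search; the bit-widths are arbitrary examples).

# Sketch: average classical queries for unstructured search vs. Grover iterations.
import math

for n_bits in (10, 20, 30):
    N = 2 ** n_bits
    classical = N / 2                                  # expected linear-scan queries
    grover = math.floor(math.pi / 4 * math.sqrt(N))    # ~optimal Grover iterations
    print(f"{n_bits:2d}-bit search space: classical ~{classical:.0f}, Grover ~{grover}")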

SCALABILITY IN QUANTUM COMPUTING

I happen to agree with Gil Kalai, a prominent skeptic of quantum computing, although his knowledge of the topic certainly exceeds mine and he is able to express the doubts that I also have much more eloquently.

But let's take a step back. It's not about time or energy. It's about noise.

But before I even discuss that... do you know what a quantum computer really
is?

Why, it's an analog computer. You know, the kind of computer that existed
before, well, before computers. Including the classic of classics of analog
computers, the slide rule.

Analog computers operate on continuous quantities. The length of a measuring stick. Voltage or current in a circuit. Etc. Or, in the case of quantum computers, the phase of the wavefunction. This distinguishes them from digital computers, which, well, operate on discrete digits, not on a continuum of possible values.

As such, analog computers have, in principle, much richer capabilities.

Unfortunately, analog computers are notoriously inaccurate. Your slide rule? Maybe three significant digits of accuracy if it is a big one and you're really good at using it. That's not nearly enough to, say, factorize a 1000-digit number.


What makes quantum computers special, then? Well, it's the threshold theorem. What the threshold theorem states is this: if you have a quantum computer with "noisy" qubits, but the noise can be kept below a specific threshold, then there exist error correction schemes that allow your quantum computer to emulate a "perfect" quantum computer -- in short, one that is scalable to an arbitrary number of qubits.

This is the promise of quantum computing.

But a minority, including Kalai, believes that far from enabling quantum computing, the threshold theorem kills it, because it will never be possible (on grounds of principle, not engineering limitations) to reduce noise below the threshold.

I believe Kalai is right. Time will tell. For now, skeptics like Kalai (or me) are a
minority. But physics is not decided by majority rule but through experiment
and/or rigorous deduction.
