
Using Quantum Neural Networks for Handwritten Digit Recognition
For BITS F386: Quantum Information and Computing
By Chaitanya Agarwal, 2021B5AA3217H

Introduction
Handwritten digit recognition is a foundational problem in image classification, frequently
solved using classical neural networks. However, as computational limits of classical systems
become apparent, quantum computing presents a different approach to data processing,
with Quantum Neural Networks (QNNs) as a promising development. QNNs take advantage
of quantum superposition, entanglement, and interference to process complex data
structures, potentially offering exponential speedup and reduced computational
requirements. This report explores using a QNN for handwritten digit recognition on the
MNIST dataset, implemented using IBM's Qiskit library. We analyse QNN architecture,
advantages, and current limitations compared to classical neural networks.

Literature Review
Recent studies have delved into practical implementations of quantum machine learning,
highlighting QNNs as a promising new approach. Schuld et al. (2019) analysed how analytic
gradients of variational quantum circuits can be evaluated directly on quantum hardware, a
prerequisite for training such circuits as machine learning models. Research by Mitarai et al. (2018)
and Benedetti et al. (2019) focused on hybrid quantum-classical models, showing that QNNs
could perform on par with, or even surpass, classical models for certain datasets with fewer
parameters. Henderson et al. (2020) demonstrated the potential of quantum circuits for
image processing, especially in encoding visual data within quantum systems. Together,
these studies suggest that QNNs could serve as a complementary or even superior tool to
traditional neural networks, especially as advancements in quantum hardware continue.
While classical neural networks remain powerful in areas such as image and speech
recognition, QNNs bring unique benefits for handling data with intricate relationships or
high dimensionality, potentially enhancing computational efficiency and reducing resource
demands.

Methodology
Dataset and Preprocessing
The MNIST dataset, containing 28x28 grayscale images of handwritten digits (0-9), serves as
the basis for this project. Due to the limitations of current quantum hardware, each
784-dimensional image vector is reduced to 4-8 principal components using Principal
Component Analysis (PCA), so that the data fits within the qubit budget available for
quantum encoding.
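The dimensionality reduction described above can be sketched as follows. This is a minimal illustration: the synthetic random matrix stands in for the real MNIST images (loading the dataset is omitted), and in practice the resulting components would also be rescaled to a rotation-angle range such as [0, π] before encoding:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for MNIST: 100 synthetic 784-dimensional "images".
rng = np.random.default_rng(0)
X = rng.random((100, 784))

# Reduce each 784-dimensional vector to 4 principal components,
# matching the qubit budget of a small quantum circuit.
pca = PCA(n_components=4)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (100, 4)
```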
Quantum Data Encoding
To encode the classical data into quantum states, we use angle encoding, where each
feature is represented as a rotation angle (Ry gate) on a qubit. For example, each reduced
feature in the PCA-transformed dataset maps to a rotation on a designated qubit. This
approach ensures that the data is effectively represented within the quantum circuit.
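As an illustration, the sketch below constructs the angle-encoded statevector directly in NumPy rather than with Qiskit (in Qiskit the same encoding would apply `qc.ry(theta, i)` on each qubit); the four feature values are hypothetical PCA components assumed to be rescaled to [0, π]:

```python
import numpy as np

def angle_encode(features):
    """Build the statevector for angle encoding: each feature theta_i
    is applied as Ry(theta_i) on qubit i, starting from |0...0>.
    Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    state = np.array([1.0])
    for theta in features:
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        state = np.kron(state, qubit)  # product state over all qubits
    return state

features = [0.3, 1.1, 2.0, 0.7]  # hypothetical rescaled PCA components
psi = angle_encode(features)
print(psi.shape)                               # (16,) for 4 qubits
print(np.isclose(np.linalg.norm(psi), 1.0))    # state is normalized
```

Because each feature controls one single-qubit rotation, an n-feature vector needs only n qubits, which is what makes the 4-8 dimensional PCA reduction sufficient.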

Quantum Neural Network Architecture


The QNN is constructed as a variational quantum circuit (VQC) with several layers of
parameterized rotation gates (e.g., Ry, Rz). These gates form the basis of the QNN's learning
capability. Entanglement between qubits, using CNOT gates, captures complex dependencies
between input features. For training, we employ hybrid quantum-classical optimization,
where classical algorithms (e.g., gradient descent) iteratively adjust the parameters to
minimize the error in classification.
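A single variational layer of the kind described above can be sketched in NumPy with explicit gate matrices (a Qiskit implementation would instead append `ry`/`cx` gates to a `QuantumCircuit`); the two-qubit width and three-layer depth are arbitrary choices for illustration:

```python
import numpy as np

def ry(theta):
    # Single-qubit Ry rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT gate: entangles the two qubits.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def variational_layer(params):
    """One ansatz layer on 2 qubits: parameterized Ry rotations
    followed by a CNOT that captures feature dependencies."""
    rotations = np.kron(ry(params[0]), ry(params[1]))
    return CNOT @ rotations

def qnn_circuit(params, state):
    """Apply several variational layers; params has shape (layers, 2)."""
    for layer_params in params:
        state = variational_layer(layer_params) @ state
    return state

state0 = np.array([1.0, 0.0, 0.0, 0.0])  # |00>
params = np.random.default_rng(1).uniform(0, np.pi, size=(3, 2))
out = qnn_circuit(params, state0)
print(np.isclose(np.linalg.norm(out), 1.0))  # unitary layers preserve norm
```

The classical optimizer's job is then to adjust the entries of `params` so that measurement statistics of the output state match the training labels.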

Output Measurement and Post-Processing


Quantum measurement collapses the quantum state, producing probabilistic outcomes. The
measurements are repeated multiple times to ensure reliability, resulting in a probability
distribution across the digit classes. Post-processing involves interpreting the distribution to
identify the predicted class, analogous to the softmax output layer in classical neural networks.
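The repeated-measurement step can be simulated by sampling basis states according to the Born rule. The toy state and the mapping of the four basis states to four classes below are hypothetical (a real 10-class setup would need more qubits or a grouping of measurement outcomes):

```python
import numpy as np

def measure(state, shots=1024, rng=None):
    """Simulate repeated measurement: sample basis states with
    Born-rule probabilities |amplitude|^2 and return the empirical
    probability distribution over outcomes."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.abs(state) ** 2
    counts = rng.multinomial(shots, probs)
    return counts / shots

# Toy 2-qubit output state; basis state at index 2 dominates.
state = np.array([0.1, 0.2, 0.9, 0.2])
state = state / np.linalg.norm(state)

dist = measure(state)
predicted_class = int(np.argmax(dist))  # argmax plays the softmax role
print(predicted_class)  # -> 2
```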

Training and Evaluation


A cross-entropy loss function guides training, with gradients calculated using the parameter
shift rule. The QNN is trained on a subset of the MNIST data, with evaluation metrics such as
accuracy and F1-score computed to assess performance.
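The parameter-shift rule mentioned above can be verified on the simplest possible case: for a single Ry(θ) rotation followed by a Z measurement, the expectation value is cos θ, and shifting the parameter by ±π/2 recovers its exact derivative:

```python
import numpy as np

def expectation_z(theta):
    """<Z> after Ry(theta)|0>; analytically equals cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2  # <Z> = |a0|^2 - |a1|^2

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: df/dtheta = (f(theta+s) - f(theta-s)) / 2
    for gates generated by Pauli operators (shift s = pi/2)."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.8
grad = parameter_shift_grad(expectation_z, theta)
print(np.isclose(grad, -np.sin(theta)))  # matches analytic derivative
```

Unlike finite differences, this gradient is exact (up to measurement noise on real hardware), which is why it is the standard choice for training variational circuits.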
Advantages over Classical Neural Networks
QNNs provide potential advantages over classical networks, including:
- Efficient high-dimensional data processing: quantum parallelism allows QNNs to process complex data relationships with fewer parameters.
- Reduced parameters for some tasks: quantum properties such as superposition and entanglement may reduce the need for extensive parameterization.
- Alignment with quantum hardware: QNNs map naturally onto emerging quantum hardware, which can improve efficiency and scalability.

References
1. Schuld, M., Sweke, R., & Meyer, J. J. (2019). "Evaluating analytic gradients on
quantum hardware." Physical Review A, 99(3), 032331.
2. Mitarai, K., Negoro, M., Kitagawa, M., & Fujii, K. (2018). "Quantum circuit learning."
Physical Review A, 98(3), 032309.
3. Benedetti, M., Lloyd, E., Sack, S., & Fiorentini, M. (2019). "Parameterized quantum
circuits as machine learning models." Quantum Science and Technology, 4(4),
043001.
4. Henderson, M., Shakya, S., Pradhan, S., & Cook, T. (2020). "Quanvolutional neural
networks: powering image recognition with quantum circuits." Quantum Machine
Intelligence, 2(1), 1-9.