Design and Implementation of Hardware Computation for Convolutional Neural Networks


1st Given Name Surname
dept. name of organization (of Aff.)
name of organization (of Aff.)
City, Country
email address or ORCID

Abstract—Convolutional Neural Networks (CNNs) are vital in artificial intelligence and machine learning, especially for image processing and recognition. They are widely used in facial recognition, object detection, and image classification, significantly improving system performance and accuracy. However, deploying CNNs on hardware poses challenges due to their high computational and memory requirements and the complex computations arising from the weight-sharing mechanism used in CNNs. Designing efficient hardware accelerators involves balancing speed, power consumption, and resource usage. In this research, a computation unit for CNNs is designed and implemented, comprising a convolutional accelerator, a max-pooling layer, fully connected layers, and a softmax activation function. This study utilizes a dataflow called weight stationary (WS) to minimize data movement and reuse partial sums on a spatial architecture with an array of processing elements. In particular, the softmax activation function is implemented with a Look-Up Table (LUT) technique to build a complete AlexNet (batch size N = 1) for the handwritten digit recognition task on the MNIST dataset, using fixed-point representation for all data. The design utilizes 125,021 Flip-Flops, 624 Distributed RAM (LUTRAM), 93,727 Look-Up Tables (LUTs), 269 Input/Output (I/O) pins, 2 Global Buffers (BUFG), 162.5 Block RAMs (BRAM), and 183 Digital Signal Processors (DSPs) at a frequency of 100 MHz on the ZCU104 board. The system achieves an accuracy of 98% in software and 95% after hardware simulation. It executes the convolutional layers at 33.5 frames per second, with a total power consumption of 4.87 W for the entire network.

Index Terms—Convolutional neural networks (CNNs), FPGA, weight stationary, spatial architecture.

I. INTRODUCTION

Convolutional Neural Networks (CNNs) [1], a specialized form of Deep Neural Networks (DNNs) [2], have revolutionized the field of artificial intelligence by significantly enhancing the capability of computers to interpret and analyze visual data. Unlike traditional neural networks, CNNs are designed to automatically and adaptively learn spatial hierarchies [3] of features from images through convolutional layers. This ability to capture local patterns and spatial relationships makes CNNs particularly effective for image processing tasks such as facial recognition, object detection, and image classification.

However, state-of-the-art CNNs [4] require tens to hundreds of megabytes of parameters and involve billions of operations per inference pass. This demands substantial data movement between on-chip and off-chip memory to support computation. Since data movement can consume more energy than the computation itself [5], optimizing CNN processing involves not only achieving high parallelism for increased throughput but also enhancing the efficiency of data movement across the system. To address these challenges, it is crucial to design a compute scheme, called a dataflow, that can support a highly parallel compute paradigm while optimizing the energy cost of both on-chip and off-chip data movement. The cost of data movement is reduced by exploiting data reuse in a multilevel memory hierarchy.

Almost all existing FPGA-based CNN implementations have focused on exploring the limitations of memory bandwidth and computing parallelism. Works such as [5] and [6] alleviate pressure on off-chip memory by reducing the precision of the neural network parameters, as lower numerical precision has been shown to be sufficient for CNNs. Other studies, such as [6] and [7], have exploited fixed-point quantization, loop and task pipelining, loop unrolling, parallelization, and loop tiling to enhance throughput and memory bandwidth while keeping FPGA resource requirements low. Regarding energy efficiency, reference [8] emphasizes this aspect by employing a binary weight method, converting CNN computations into multiplication-free processing. In [9] and [10], all layers are processed in a computing unit called a matrix multiplication engine, and the utilization of a hierarchical memory structure and on-chip buffers reduces the bandwidth limitations of off-chip memory. However, these studies have not yet established a comprehensive dataflow that effectively addresses the challenges of data movement and energy efficiency in CNN processing.

In this study, a hardware computation unit for CNNs is implemented, including convolutional computation (CONV), max-pooling (POOL), fully connected layers (FC), and a softmax activation function. Most of the computation in CNNs comes from the convolutional layers. To optimize performance, the key contributions of this work are:

(1) A spatial architecture using an array of PEs whose size depends on the size of the kernel matrix.
(2) A dataflow called weight stationary, in which weights are kept fixed within the array of Processing Elements (PEs).
(3) The utilization of a hierarchical memory structure and asynchronous FIFO on-chip buffers, which reduces off-chip memory accesses and enables data reuse.
(4) Fixed-point representation to reduce computational complexity and improve hardware efficiency.
(5) Activation function approximation using a lookup-table method: by precomputing and storing the values of the exponential function used in the softmax in a LUT, the need for complex calculations during inference is significantly reduced.

This paper is organized as follows. Section II provides fundamental knowledge of the 3-D convolution operation, Section III covers the workflow of the software-hardware co-design, and Section IV describes the architecture of the accelerator and the characteristics of this study, including the dataflow, the processing element array, and the ping-pong buffer.

TABLE I
SHAPE PARAMETERS OF A CNN LAYER

Shape parameter   Description
N                 batch size
M                 number of filters / ofmap channels
C                 number of ifmap channels
H/W               ifmap height/width
R/S               filter height/width
E/F               ofmap height/width

II. BACKGROUND OF CNNS


CNNs are constructed from multiple computational layers
organized as a directed acyclic graph (DAG)[5]. Each layer
extracts an abstraction of data provided by the previous layer,
which is referred to as a feature map (fmap). The most
common layers in CNNs are convolution (CONV), pooling
(POOL), and fully connected (FC) layers. In CONV layers,
as illustrated in Fig. 3, two-dimensional (2-D) filters slide
over the input images or feature maps (Ifmaps), performing
convolution operations to extract feature characteristics from
local regions and generating output images or feature maps
(Ofmaps). In the case of three-dimensional (3D) convolution,
a batch of 3-D ifmaps is processed by a group of 3-D filters
in a layer. In addition, there is a 1-D bias that is added to the
filtering results. Given the shape parameters in Table I, the
computation of a layer is defined as

O[z][u][x][y] = \mathrm{ReLU}\left( B[u] + \sum_{k=0}^{C-1} \sum_{i=0}^{R-1} \sum_{j=0}^{S-1} I[z][k][Ux+i][Uy+j] \, W[u][k][i][j] \right)    (1)

0 \le z < N, \quad 0 \le u < M, \quad 0 \le x < F, \quad 0 \le y < E

E = \frac{H - R + U}{U}, \qquad F = \frac{W - S + U}{U}    (2)

where O, I, W, and B are the matrices of the ofmaps, ifmaps, filters, and biases, respectively, and U is a given stride size. Fig. 3 shows a visualization of this computation (ignoring biases). After the convolutions, activation functions, such as the rectified linear unit (ReLU) [6], are applied to introduce nonlinearity.

Fig. 3. 3-D convolutional operator.
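As a sanity reference for (1) and (2), the following Python sketch evaluates one CONV layer exactly as the equation is written. It is only a software reference model, not the RTL described later; the function name, the NumPy array layout, and the assumption of square ifmaps and filters are ours.

import numpy as np

def conv_layer(ifmap, weights, bias, U=1):
    # Direct evaluation of Eq. (1).
    #   ifmap:   I, shape (N, C, H, W)  -- square ifmaps (H == W) assumed here
    #   weights: W, shape (M, C, R, S)  -- square filters (R == S) assumed here
    #   bias:    B, shape (M,)
    #   U:       stride
    N, C, H, W_in = ifmap.shape
    M, _, R, S = weights.shape
    E = (H - R + U) // U        # ofmap height, Eq. (2)
    F = (W_in - S + U) // U     # ofmap width,  Eq. (2)
    ofmap = np.zeros((N, M, F, E), dtype=np.float32)
    for z in range(N):                  # batch
        for u in range(M):              # filters / ofmap channels
            for x in range(F):
                for y in range(E):
                    acc = float(bias[u])
                    for k in range(C):          # ifmap channels
                        for i in range(R):
                            for j in range(S):
                                acc += ifmap[z, k, U * x + i, U * y + j] * weights[u, k, i, j]
                    ofmap[z, u, x, y] = max(acc, 0.0)   # ReLU
    return ofmap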
III. HARDWARE AND SOFTWARE CO-DESIGN

The block diagram in Fig. 1 gives an overview of the workflow of this proposal. The co-design approach in this project combines both software and hardware to implement a fine-tuned AlexNet model for handwritten digit recognition. On the software side, the model is trained using the MNIST dataset, where a modified AlexNet architecture is used to improve recognition accuracy for the task. The model's weights, obtained from training, are converted into a fixed-point representation to be compatible with the hardware requirements.

On the hardware side, the architecture is designed and implemented at the Register Transfer Level (RTL), with IP verification performed to ensure the accuracy and reliability of the hardware model. The fixed-point weights from the software are transferred to the hardware environment, where they are integrated into the AlexNet network architecture. This complete hardware network is then used for real-time recognition of handwritten digit images.

The software and hardware components are connected through a feedback loop for image recognition, accuracy calculation, and performance comparison, as illustrated in Fig. 1. This collaborative framework enables efficient processing and verification, leveraging both the flexibility of software and the performance of hardware to achieve optimal results.

Fig. 1. Software and hardware co-design.
IV. SYSTEM DESIGN

A. Architecture Overview

Fig. 2. System architecture.

Figure 2 illustrates the block diagram of the architecture and memory hierarchy of the convolutional accelerator, which includes a PE array, a ping-pong buffer, a controller block, and a ReLU activation function. This block is responsible for the convolution operations, max pooling, ReLU, and the fully connected layers. The weights, biases, and input feature maps are stored in off-chip DRAM and are read into the accelerator via buffers to reduce the latency of accessing off-chip memory. The memory hierarchy consists of three levels: off-chip DRAM, a global buffer (ping-pong buffer), and the registers within each PE. Each PE in the PE array is responsible for computing a convolution operation or max-pooling and accumulating the result through its internal register and a ping-pong buffer. The ping-pong buffer is closely associated with the rows of the PE array. The accelerator is controlled by a finite state machine (FSM) in the controller block.

This architecture reduces the energy required for weight reads, maximizes convolutional operations, and enables efficient reuse of the filter:

• Filter reuse: each filter weight is reused E x F times within one input feature map (ifm) channel.
• IFM reuse: each input feature map value is reused R x S times.

Fig. 4. 1-D convolution.

The PE array performs a 2-D convolution between the kernel and the IFM window, with each row of the PE array executing a 1-D convolution, as described in Fig. 4. Initially, the first pixel, ifm1, from row 1 is pushed into the PE array, at which point psum_in in all PEs is initialized to 0. The result of each multiplication is stored in the register within a PE and passed to the adjacent PE via its psum_in signal. After W cycles, the first row has been fully read and the FIFOs are filled, ready to push one value per cycle to the psum_in input of the first PE in the row below. The sliding window shifts downward until all H rows of the IFM have been read, at which point the values in FIFO_END represent the result of the 2-D convolution of the kernel (R x S) with the IFM (R x W). However, this is not the final result: in 3-D convolution, accumulation also occurs along the depth dimension, using a buffer of size E x F to temporarily store the 2-D convolution results of each channel. The mux selects input from the FIFO for the first channel's computation and alternates for the other channels.
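The mechanism just described can be summarized by a purely behavioral Python sketch (not the RTL, and ignoring cycle-level timing, the ping-pong buffering, and padding): each PE row holds one kernel row, the streamed ifm row produces a 1-D convolution, and the per-row results are accumulated across the R rows, which is what the FIFO chain realizes in hardware. Function names and the stride-1 assumption are ours.

import numpy as np

def pe_row_1d_conv(kernel_row, ifm_row):
    # One PE row: its S weights stay resident (weight stationary); ifm pixels stream
    # through while partial sums are forwarded PE to PE, so once the row has streamed
    # past, the row has produced the 1-D convolution outputs.
    S, W = len(kernel_row), len(ifm_row)
    out = np.zeros(W - S + 1)
    for x in range(W - S + 1):
        psum = 0.0
        for j in range(S):              # psum forwarded through the S PEs of the row
            psum += kernel_row[j] * ifm_row[x + j]
        out[x] = psum
    return out

def pe_array_2d_conv(kernel, ifm):
    # R x S PE array: each row handles one kernel row; the per-row 1-D results are
    # delayed through FIFOs and summed, yielding the 2-D convolution (stride 1).
    R, S = kernel.shape
    H, W = ifm.shape
    out = np.zeros((H - R + 1, W - S + 1))
    for r in range(H - R + 1):
        acc = np.zeros(W - S + 1)       # accumulation across the R PE rows via the FIFO chain
        for i in range(R):
            acc += pe_row_1d_conv(kernel[i], ifm[r + i])
        out[r] = acc
    return out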

B. Dataflow

Dataflow is a major challenge when designing computing units for convolutional layers, as computations in these layers are highly complex and involve a large amount of memory access. To optimize data movement, we use a dataflow called weight stationary. In this dataflow, the weight filters are stored statically in small local memories, such as the registers inside the PEs, forming a PE array of size R x S, corresponding to the size of the kernel matrix. The input feature map (activation) is streamed row by row with a bandwidth of 1 pixel per cycle, broadcasting activations and accumulating partial sums spatially across the PE array. Each activation is multiplied and accumulated with the weight stored statically in its PE. Each primitive multiplication result needs to be stored and accumulated with others to form partial sums. By using a ping-pong buffer, we can store and reuse these primitive results for subsequent references. The number of buffers needed is equal to the number of rows of the weight matrix, and the depth of each FIFO depends on the row size of the input feature map (IFM). The FIFO size is calculated as follows:

\mathrm{Fifo\_size} = \frac{W + 2p - k}{s} + 1    (3)

where W is the IFM row size, p the padding, k the kernel size, and s the stride.

Fig. 6. Convolutional architecture for a kernel size of 3x3.
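As a quick check of (3), a hypothetical helper with the symbols read as above (W the IFM row size, p the padding, k the kernel size, s the stride):

def fifo_depth(W, p, k, s):
    # Eq. (3): one FIFO entry per output pixel of an ifm row, plus one.
    return (W + 2 * p - k) // s + 1

print(fifo_depth(W=28, p=0, k=3, s=1))   # e.g. a 28-pixel row, 3x3 kernel, stride 1 -> 26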
C. Fixed-Point Representation

To deploy a neural network model onto hardware, all weights must be converted into fixed-point representations. To determine the exact number of bits needed, we need to identify the output range of each layer in the AlexNet network. The weights in the network layers are trained based on the AlexNet model, which has been fine-tuned for the handwritten digit recognition task on the MNIST dataset. This set of weights achieves a recognition accuracy of 98.62% on the MNIST test set. Fig. 5 visualizes the value range of the weights across the layers of the pre-trained network.

Fig. 5. Range of weight values across hidden layers.

From Fig. 5, it can be observed that most of the weights in all layers of the network fall within the range [-1, 1]. Therefore, only 1 bit is needed to represent the sign, and no bits are required for the integer part.
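A minimal sketch of how float values could be mapped to the formats of Table II below; the helper names, round-to-nearest behaviour, and saturation are our assumptions rather than the exact RTL conversion.

import numpy as np

def to_fixed(x, int_bits, frac_bits):
    # Quantize a float to a signed fixed-point code with 1 sign bit, `int_bits` integer
    # bits and `frac_bits` fraction bits (the formats of Table II). Round to nearest,
    # saturate at the representable range.
    scale = 1 << frac_bits
    lo, hi = -(1 << (int_bits + frac_bits)), (1 << (int_bits + frac_bits)) - 1
    return max(lo, min(hi, int(np.round(x * scale))))

def from_fixed(code, frac_bits):
    # Back to float, to inspect the quantization error.
    return code / (1 << frac_bits)

# Weights ("Parameter" row of Table II): 1 sign bit, 0 integer bits, 12 fraction bits.
q = to_fixed(0.3712, int_bits=0, frac_bits=12)
print(q, from_fixed(q, 12))   # 1520  0.37109375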
TABLE II
FIXED-POINT REPRESENTATION FOR DATA

Data type                  Sign bit   Integer part   Fraction part
Parameter                  1          0              12
Activation / Image input   1          9              12
Input of softmax           1          3              13
Output of softmax          1          15             16
D. Softmax Function

The softmax function is commonly used in the final layer of a CNN and plays a crucial role in the hardware implementation of the CNN. The function is given by (4), which shows that the highest computational cost in the hardware implementation of the softmax function lies in the computation of the exponential function.

\mathrm{softmax}(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}    (4)

A simple way to implement the exponential function is with a Look-Up Table (LUT), which also helps avoid the need for division in hardware. Instead of calculating the softmax function directly, one can compute the inverse of the softmax function:

\frac{1}{\mathrm{softmax}(x)_i} = \sum_{j=1}^{N} e^{x_j - x_i}    (5)

Because of normalization, the input data for the softmax layer in a DNN is generally not too large. In this study's model, the input data range is [-5, 5], and the total number of input data points is 81,920. As described in Table II, the hardware input data is fixed to 17 bits, with 1 sign bit, 3 integer bits, and 13 fraction bits, and the output is 32 bits. The absolute error between the hardware output and the floating-point result from software calculations does not exceed 4.5 x 10^-6, and the relative error does not exceed 0.88%. The default address, when the data is not indexed, is 0. With this method, the fixed-point value of each input x can be used directly as its lookup-table address to index its exponential value e^x. The detailed mapping between the input fixed-point numbers and the lookup-table addresses is shown in Table III.

TABLE III
LOOK-UP TABLE FOR THE EXPONENTIAL FUNCTION

Address (17 bit)        Fixed-point number (32 bit)          True value
1 011 0000000000000     0000000000000000000000000000001      Exp(-5.00)
---                     ---                                  ---
1 110 1010100011110     00000000000000000001000110001100     Exp(-2.32)
---                     ---                                  ---
0 001 0001100111110     00000000000010010000110010000000     Exp(1.10)
---                     ---                                  ---
0 001 001111111110      00000000011001111111011010111101     Exp(2.32)
---                     ---                                  ---
0 101 0000000000000     01010110000010100111011100000000     Exp(5.00)

Fig. 7. Overall structure of the softmax function.
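A small Python model of this LUT scheme is sketched below: the Q3.13 input code, offset by the code of -5, addresses a precomputed table of exponentials stored in the Q15.16 output format, and (5) is then evaluated with look-ups only, leaving a single reciprocal per class. The constants come from Table II and the stated [-5, 5] range; the function names, the clamping of out-of-range addresses, and the final floating-point reciprocal are simplifications of ours.

import numpy as np

FRAC_IN, FRAC_OUT = 13, 16        # Q3.13 softmax input, Q15.16 output (Table II)

def build_exp_lut(x_min=-5.0, x_max=5.0):
    # Precompute e^x as 32-bit fixed-point for every Q3.13 code in [x_min, x_max):
    # 81,920 entries for [-5, 5). The input code, offset by the code of x_min, is the address.
    codes = np.arange(int(x_min * (1 << FRAC_IN)), int(x_max * (1 << FRAC_IN)))
    lut = np.round(np.exp(codes / (1 << FRAC_IN)) * (1 << FRAC_OUT)).astype(np.int64)
    return lut, codes[0]

LUT, BASE = build_exp_lut()

def exp_lut(x):
    # Look up e^x: quantize x to Q3.13 and index the table (clamped to the table range).
    addr = int(np.round(x * (1 << FRAC_IN))) - BASE
    return LUT[max(0, min(len(LUT) - 1, addr))] / (1 << FRAC_OUT)

def softmax_via_lut(x):
    # Eq. (5): accumulate e^(x_j - x_i) with LUT look-ups, then one reciprocal per class.
    return np.array([1.0 / sum(exp_lut(xj - xi) for xj in x) for xi in x])

print(softmax_via_lut(np.array([1.0, 2.0, 3.0])))   # ~[0.090, 0.245, 0.665]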

V. EXPERIMENTAL SETUP AND RESULTS

The design, training, and extraction of the post-training parameters of the network were carried out on Google Colab with GPU support (Tesla T4), using the PyTorch library, and all network weights are of the float data type. The model used for this experimental task is based on the AlexNet architecture, fine-tuned to meet the requirements of the task; its details are shown in Fig. 8. The model was trained using the Stochastic Gradient Descent (SGD) method with the following configuration: image dataset: MNIST; number of training samples: 60,000 images; learning rate: 0.005; momentum: 0.8; batch size: 32; epochs: 20.

Fig. 8. Fine-tuned AlexNet network architecture.
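For reference, the training configuration above maps onto a few lines of PyTorch. Only the optimizer settings, batch size, and epoch count come from the text; the MNIST preprocessing, the loss function, and the use of torchvision's stock AlexNet as a stand-in for the fine-tuned model of Fig. 8 are assumptions of ours.

import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(224),
    transforms.Grayscale(num_output_channels=3),   # AlexNet stem expects 3 input channels
    transforms.ToTensor(),
])
train_set = torchvision.datasets.MNIST(root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = torchvision.models.alexnet(num_classes=10)   # stand-in for the Fig. 8 variant
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.8)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(20):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()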

An independent test dataset, separate from the training set and consisting of 10,000 images containing digits from 0 to 9, is used for evaluation. The software testing is performed by a Python-based program. According to Table IV, the pre-trained neural network with float-type weights achieves an accuracy of 98.6%.

TABLE IV
ACCURACY OF CNN-BASED RECOGNITION IN SOFTWARE AFTER TRAINING

Class      0        1        2        3        4        5        6        7        8        9        Average
Accuracy   99.18%   99.30%   98.55%   98.02%   97.96%   98.88%   98.23%   99.03%   98.56%   98.41%   98.6%

TABLE V
ACCURACY OF RTL-BASED RECOGNITION ON FPGA

Class      0        1        2        3        4        5        6        7        8        9        Average
Accuracy   96.67%   96.64%   96.75%   94.44%   94.31%   89.05%   96.34%   98.40%   95.65%   94.51%   95.3%

The deployed architecture operates at a frequency of 100 MHz on the Zynq UltraScale+ ZCU104 FPGA board, achieving a maximum processing speed of 33.5 FPS with a power consumption of 4.8 W.

TABLE VI
SUMMARY OF TOTAL RESOURCE USAGE FOR THE ENTIRE NETWORK ON FPGA

Resource   Used     Available   Utilization
LUT        93727    230400      40.68%
LUTRAM     624      101760      0.61%
FF         125021   460800      27.13%
BRAM       162.5    312         52.08%
I/O        269      360         74.72%
BUFG       2        544         0.37%
DSP        183      1728        10.59%

Table VII provides a detailed summary of the resource utilization of each convolutional and fully connected layer of the CNN. It is evident that as the sliding-window size of a convolutional layer increases, so does the number of LUTs, FFs, and DSPs. Moreover, although the CONV3, CONV4, and CONV5 layers have different numbers of input and output channels, they consume a similar amount of resources. This is because each channel computation is stored in a fixed buffer, allowing this data to be reused for subsequent channel passes, highlighting the data reuse capability of partial sums across channels.
TABLE VII
SUMMARY OF METRICS FOR THE FINE-TUNED ALEXNET NETWORK

Layer       Power (mW)   Num. of MAC   Num. of PE   LUT     FF       DSP
CONV1       333          11.71M        121          34303   73793    121
CONV2       84           37.32M        25           8166    18143    25
CONV3       26           12.46M        9            2301    4551     9
CONV4       26           24.92M        9            2301    4551     9
CONV5       26           12.46M        9            2301    4549     9
Sum (all)   4.8 W        104.65M       183          93727   125021   183

VI. CONCLUSION
In this study, we presented a hardware architecture tailored
for Convolutional Neural Networks (CNNs) with a focus on
efficient computation, data movement reduction, and resource
management. By implementing a weight-stationary data flow
on a spatially arranged array of processing elements, we
minimized data transfers and maximized filter reuse, leading
to a significant reduction in energy requirements. Our design
incorporated fixed-point representation and a Look-Up Table-
based softmax function to streamline computation and opti-
mize hardware efficiency. The architecture was successfully
validated using the fine-tuned AlexNet model on the MNIST
dataset, achieving a high recognition accuracy of 95% on
hardware compared to 98% in software simulation, with a real-
time performance of 33.5 frames per second. The proposed
CNN accelerator efficiently utilized available FPGA resources,
as demonstrated by the experimental results, which confirm its
potential in deploying CNNs on low-power embedded systems,
paving the way for further advancements in energy-efficient AI
hardware solutions.

REFERENCES

[1] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127-138, 2017.
[2] Q. Xiao, Y. Liang, L. Lu, S. Yan, and Y.-W. Tai, "Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on FPGAs," in Proceedings of the 54th ACM/EDAC/IEEE Design Automation Conference (DAC), 2017, pp. 1-6.
[3] S. I. Venieris and C.-S. Bouganis, "fpgaConvNet: Automated mapping of convolutional neural networks on FPGAs," in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2017, pp. 291-292.
[4] Y. Ma, Y. Cao, S. Vrudhula, and J.-s. Seo, "Optimizing loop operation and dataflow in FPGA acceleration of deep convolutional neural networks," in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2017, pp. 45-54.
[5] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song et al., "Going deeper with embedded FPGA platform for convolutional neural network," in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2016, pp. 26-35.
[6] A. Aimar, H. Mostafa, E. Calabrese, A. Rios-Navarro, R. Tapiador-Morales, I.-A. Lungu, M. B. Milde, F. Corradi, A. Linares-Barranco, S.-C. Liu, and T. Delbruck, "NullHop: A flexible convolutional neural network accelerator based on sparse representations of feature maps," IEEE Transactions on Neural Networks and Learning Systems, 2019.
[7] R. Zhao, X. Niu, and W. Luk, "Automatic optimising CNN with depthwise separable convolution on FPGA (abstract only)," in Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2018, p. 285.
[8] X. Dong, X. Zhu, and D. Ma, "Hardware implementation of softmax function based on piecewise LUT," in 2019 IEEE International Workshop on Future Computing (IWOFC), 2019.
