Design and Implementation of Hardware Computation For Convolutional Neural Networks
Abstract—Convolutional Neural Networks (CNNs) are vital in artificial intelligence and machine learning, especially for image processing and recognition. They are widely used in facial recognition, object detection, and image classification, significantly improving system performance and accuracy. However, deploying CNNs on hardware poses challenges due to their high computational and memory requirements and the complex computations arising from the weight-sharing mechanism used in CNNs. Designing efficient hardware accelerators involves balancing speed, power consumption, and resource usage. In this research, a computation unit for CNNs is designed and implemented, comprising a convolutional accelerator, a max-pooling layer, fully connected layers, and a softmax activation function. This study utilizes a dataflow called weight stationary (WS) to minimize data movement and reuse partial sums, based on a spatial architecture with an array of processing elements. Specifically, the softmax activation function is implemented using a Look-Up Table (LUT) technique, and a complete AlexNet (batch size N = 1) is constructed for the handwritten digit recognition task on the MNIST dataset using a fixed-point data representation. The design utilizes 125,021 Flip-Flops, 624 LUTs as distributed RAM (LUTRAM), 92,727 Look-Up Tables (LUTs), 269 Input/Output (I/O) pins, 2 Global Buffers (BUFG), 162.5 Block RAMs (BRAM), and 183 Digital Signal Processing (DSP) slices at a frequency of 100 MHz on the ZCU104 board. The system achieves an accuracy of 98% in software and 95% after hardware simulation. The system executes the convolutional layers at 33.5 frames per second, and the total power consumption of the entire network is 4.87 W.

Index Terms—Convolutional neural networks (CNNs), FPGA, weight stationary, spatial architecture.

I. INTRODUCTION

Convolutional Neural Networks (CNNs) [1], a specialized form of Deep Neural Networks (DNNs) [2], have revolutionized the field of artificial intelligence by significantly enhancing the capability of computers to interpret and analyze visual data. Unlike traditional neural networks, CNNs are designed to automatically and adaptively learn spatial hierarchies [3] of features from images through convolutional layers. This ability to capture local patterns and spatial relationships makes CNNs particularly effective for image processing tasks such as facial recognition, object detection, and image classification.

However, state-of-the-art CNNs [4] require tens to hundreds of megabytes of parameters and involve billions of operations per inference pass. This demands substantial data movement between on-chip and off-chip memory to support computation. Since data movement can consume more energy than the computation itself [5], optimizing CNN processing involves not only achieving high parallelism for increased throughput but also enhancing the efficiency of data movement across the system. To address these challenges, it is crucial to design a compute scheme, called a dataflow, that can support a highly parallel compute paradigm while optimizing the energy cost of both on-chip and off-chip data movement. The cost of data movement is reduced by exploiting data reuse in a multilevel memory hierarchy.

Almost all existing FPGA-based CNN implementations have focused on exploring the limitations of memory bandwidth and computing parallelism. Works such as [5] and [6] alleviate pressure on off-chip memory by reducing the precision of the neural network parameters, as lower numerical precision has been shown to be sufficient for CNNs. Other studies, such as [6] and [7], have exploited fixed-point quantization, loop and task pipelining, loop unrolling, parallelization, and loop tiling to enhance throughput and memory bandwidth while keeping FPGA resource requirements low. Regarding energy efficiency, reference [8] emphasizes this aspect by employing a binary weight method, converting CNN computations into multiplication-free processing. In [9] and [10], all layers are processed in a computing unit called a matrix multiplication engine, and the utilization of a hierarchical memory structure and on-chip buffers reduces the bandwidth limitations of off-chip memory. However, these studies have not yet established a comprehensive dataflow that effectively addresses the challenges of data movement and energy efficiency in CNN processing.

In this study, a hardware computation unit for CNNs is implemented, including convolutional computation (CONV), max-pooling (POOL), fully connected layers (FC), and a softmax activation function. Most of the computations in CNNs come from the convolutional layers. To optimize performance, the key contributions of this work are:

(1) A spatial architecture using an array of processing elements (PEs), where the array size depends on the size of the kernel matrix.
(2) A dataflow called weight stationary, in which the weights are kept fixed within the PE array.
(3) A hierarchical memory structure and asynchronous FIFO on-chip buffers that reduce off-chip memory accesses and enable data reuse.
(4) A fixed-point representation to reduce computational complexity and improve hardware efficiency.
(5) Activation function approximation using a lookup table method: by precomputing and storing the values of the softmax function in a LUT, the need for complex calculations during inference is significantly reduced (a software sketch is given after Table I).

Table I
SHAPE PARAMETERS OF A CNN LAYER

Shape parameter   Description
N                 batch size
M                 number of filters / ofmap channels
C                 number of ifmap channels
H/W               ifmap height/width
R/S               filter height/width
E/F               ofmap height/width
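To illustrate contributions (4) and (5), the listing below is a minimal behavioral C model of a LUT-based softmax operating on fixed-point logits. The Q8.8 input format, the 256-entry exponential table, the clamping range of the exponent differences, and all names are assumptions made for illustration only; they are not the exact formats used in the RTL implementation described in this paper.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Behavioral model only: Q8.8 logits, a 256-entry exp(-d) table in Q0.16,
 * differences clamped to [0, 8.0). These choices are illustrative and are
 * not the exact formats of the hardware design. */
#define FRAC_BITS 8
#define LUT_SIZE  256
#define MAX_DIFF  (8 << FRAC_BITS)      /* clamp exponent differences   */

static uint32_t exp_lut[LUT_SIZE];      /* exp(-d), scaled by 2^16      */

static void build_exp_lut(void)         /* filled once, offline         */
{
    for (int i = 0; i < LUT_SIZE; i++) {
        double d = (i + 0.5) * 8.0 / LUT_SIZE;            /* bin midpoint */
        exp_lut[i] = (uint32_t)(exp(-d) * 65536.0 + 0.5);
    }
}

/* Look up exp(x - max) for Q8.8 inputs; max - x is always >= 0 */
static uint32_t exp_fixed(int32_t x, int32_t max)
{
    int32_t d = max - x;
    if (d >= MAX_DIFF) return 0;                          /* exp(-8) ~ 0  */
    return exp_lut[(d * LUT_SIZE) / MAX_DIFF];
}

/* Softmax over the 10 MNIST classes; probs[] are Q0.16 probabilities */
static void softmax_lut(const int32_t logits[10], uint32_t probs[10])
{
    int32_t max = logits[0];
    for (int i = 1; i < 10; i++)
        if (logits[i] > max) max = logits[i];

    uint64_t sum = 0;
    uint32_t e[10];
    for (int i = 0; i < 10; i++) {
        e[i] = exp_fixed(logits[i], max);                 /* table lookup */
        sum += e[i];
    }
    for (int i = 0; i < 10; i++)                          /* one divide per class */
        probs[i] = (uint32_t)(((uint64_t)e[i] << 16) / sum);
}

int main(void)
{
    build_exp_lut();
    int32_t logits[10] = { 256, 0, 512, 128, 0, 0, 64, 0, 0, 0 };  /* Q8.8 */
    uint32_t probs[10];
    softmax_lut(logits, probs);
    for (int i = 0; i < 10; i++)
        printf("class %d: %.4f\n", i, probs[i] / 65536.0);
    return 0;
}

Note that when only the predicted digit is needed, the final division can be skipped entirely, since softmax preserves the ordering of the logits.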
This paper is organized as follows. Section II provides fundamental knowledge of the 3-D convolution operation, Section III covers the workflow of the software-hardware co-design, and Section IV describes the architecture of the accelerator and the characteristics of this study, including the dataflow, the processing element array, and the asynchronous FIFO buffers.
B. Dataflow
Data flow is a major challenge when designing computing
units for convolutional layers, as computations in these layers
are highly complex and involve a large amount of memory
access. To optimize data movement, we use a dataflow called
weight stationary. In this dataflow, the filter weights are
stored statically in small local memories, such as registers inside
each PE, forming a PE array of size R×S, corresponding to the
size of the kernel matrix. The input feature map (activation)
is streamed row by row with a bandwidth of 1 pixel per
cycle, broadcasting activations and accumulating partial sums
spatially across the PE array. Each activation is multiplied
and accumulated with the weight stored statically in the PE.

Fig. 5. Convolutional architecture for a kernel of size m×n.
Each primitive multiplication result needs to be stored and
accumulated with others to form partial sums. By using ping-pong
buffers, we can store and reuse these primitive results for
subsequent references. The number of buffers needed is equal
to the number of rows of the weight matrix and the size of
the FIFO depends on the row size of the input feature map
(IFM). The size of the buffer is calculated as follows:

\[ \text{FIFO size} = \frac{W + 2p - k}{s} + 1 \tag{3} \]

where W is the ifmap width, p is the padding, k is the kernel width, and s is the stride.
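As a quick sanity check of Eq. (3), the helper below computes the FIFO depth; the parameter values in main are hypothetical and are not layer dimensions from this design.

#include <stdio.h>

/* FIFO depth per filter row, from Eq. (3). */
static int fifo_size(int W, int p, int k, int s)
{
    return (W + 2 * p - k) / s + 1;
}

int main(void)
{
    /* Hypothetical layer: 28-wide ifmap row, 3-wide kernel, stride 1, no padding. */
    printf("FIFO depth = %d\n", fifo_size(28, 0, 3, 1));   /* prints 26 */
    return 0;
}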
This architecture reduces the energy required for weight reads, maximizes convolutional operations, and enables efficient reuse of the filter.
• Filter reuse: each filter weight is reused E × F times within one input feature map (ifm) channel.
• IFM reuse: each input feature map (ifm) value is reused R × S times.

Fig. 6. 1-D convolution.

1-D convolution primitive PE array: the weight-stationary dataflow first divides the computation in (1) into 1-D convolution primitives that can all run in parallel. Each primitive operates on one row of filter weights and one row of ifmap values, generating one row of multiply-and-accumulate results. The results from different primitives are then further accumulated to generate the partial sums and ofmap values.
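The sketch below is a behavioral C model of how these 1-D convolution primitives compose into a single-channel 2-D convolution under the weight-stationary dataflow. It is a sketch only, assuming an 8×8 ifmap, a 3×3 kernel, stride 1, and no padding; the row buffers that stand in for the ping-pong/asynchronous FIFOs are modeled as plain arrays, and all names are illustrative rather than taken from the RTL.

#include <stdio.h>

/* Behavioral model of the weight-stationary 1-D convolution primitives.
 * The real design streams 1 pixel/cycle through an R x S PE array and keeps
 * partial sums in FIFOs; here the FIFOs are modeled as plain row buffers.
 * Sizes (W=8, H=8, K=3, stride 1, no padding) are hypothetical. */
#define W  8                 /* ifmap width                          */
#define H  8                 /* ifmap height                         */
#define K  3                 /* kernel size R = S = 3                */
#define F  (W - K + 1)       /* ofmap width,  Eq. (3) with p=0, s=1  */
#define E  (H - K + 1)       /* ofmap height                         */

/* One 1-D primitive: one row of stationary weights convolved with one
 * streamed ifmap row, producing one row of partial sums. */
static void conv1d_primitive(const int w_row[K], const int ifm_row[W], int psum_row[F])
{
    for (int x = 0; x < F; x++) {
        int acc = 0;
        for (int s = 0; s < K; s++)          /* S PEs, each holds one weight */
            acc += w_row[s] * ifm_row[x + s];
        psum_row[x] = acc;
    }
}

/* Full single-channel layer: each ifmap row is broadcast to the R rows of
 * the PE array; each row result is accumulated into the ofmap row it feeds,
 * which is the job of the ping-pong / FIFO buffers in hardware. */
static void conv2d_ws(const int w[K][K], const int ifm[H][W], int ofm[E][F])
{
    int psum[F];
    for (int e = 0; e < E; e++)
        for (int x = 0; x < F; x++)
            ofm[e][x] = 0;

    for (int y = 0; y < H; y++) {            /* stream ifmap row by row     */
        for (int r = 0; r < K; r++) {        /* broadcast to R filter rows  */
            int e = y - r;                   /* ofmap row this result feeds */
            if (e < 0 || e >= E) continue;
            conv1d_primitive(w[r], ifm[y], psum);
            for (int x = 0; x < F; x++)
                ofm[e][x] += psum[x];        /* partial-sum accumulation    */
        }
    }
}

int main(void)
{
    int w[K][K] = { {1, 0, -1}, {1, 0, -1}, {1, 0, -1} };  /* simple edge filter */
    int ifm[H][W], ofm[E][F];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            ifm[y][x] = x;                                 /* horizontal ramp    */

    conv2d_ws(w, ifm, ofm);
    printf("ofm[0][0] = %d\n", ofm[0][0]);                 /* expect -6          */
    return 0;
}

Each call to conv1d_primitive corresponds to one row of the PE array holding its stationary weights, and adding its output into ofmap row y − r models the partial-sum accumulation that the ping-pong buffers perform in hardware.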
Fig. 3. 3-D convolutional operator.
Table IV
ACCURACY OF CNN-BASED RECOGNITION ON SOFTWARE AFTER TRAINING

Class     0       1       2       3       4       5       6       7       8       9       Average
Accuracy  99.18%  99.30%  98.55%  98.02%  97.96%  98.88%  98.23%  99.03%  98.56%  98.41%  98.6%
Table V
ACCURACY OF RTL-BASED RECOGNITION ON FPGA

Class     0       1       2       3       4       5       6       7       8       9       Average
Accuracy  96.67%  96.64%  96.75%  94.44%  94.31%  89.05%  96.34%  98.40%  95.65%  94.51%  95.3%
Table VI
SUMMARY OF TOTAL RESOURCE USAGE FOR THE ENTIRE NETWORK ON FPGA

Resource  Used     Available  Utilization
LUT       93727    230400     40.68%
LUTRAM    624      101760     0.61%
FF        125021   460800     27.13%
BRAM      162.5    312        52.08%
I/O       269      360        74.72%
BUFG      2        544        0.37%
DSP       183      1728       10.59%
Table VII
SUMMARY OF METRICS FOR THE FINE-TUNED ALEXNET NETWORK

Layer      Power (mW)  Num. of MAC  Num. of PE  LUT     FF       DSP
CONV1      333         11.71M       121         34303   73793    121
CONV2      84          37.32M       25          8166    18143    25
CONV3      26          12.46M       9           2301    4551     9
CONV4      26          24.92M       9           2301    4551     9
CONV5      26          12.46M       9           2301    4549     9
Sum (all)  4.8 W       104.65M      183         93727   125021   183

Fig. 9. Fine-tuned AlexNet network architecture.

[7] R. Zhao, X. Niu, and W. Luk, "Automatic optimising CNN with depthwise separable convolution on FPGA (abstract only)," in Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2018, pp. 285-285.
[8] X. Dong, X. Zhu, and D. Ma, "Hardware implementation of softmax function based on piecewise LUT," in 2019 IEEE International Workshop on Future Computing (IWOFC), 2019.