Logarithmic Multiplier in Hardware Implementation of Neural Networks
Abstract. Neural networks on chip have found niche areas of application, ranging from high-volume consumer products requiring low cost to real-time systems requiring fast response. For the latter, iterative logarithmic multipliers show great potential for increasing the performance of hardware neural networks. By reducing the size of the multiplication circuit, the concurrency, and consequently the speed, of the model can be greatly improved. The proposed hardware implementation of a multilayer perceptron with on-chip learning ability confirms the potential of the concept. Experiments on the Proben1 benchmark datasets show that the adaptive nature of the proposed neural network model compensates for the errors caused by inexact calculations, while at the same time increasing performance and reducing power consumption.
1 Introduction
Artificial neural networks are commonly implemented as software models running on general-purpose processors. Although widely used, such systems are usually built on the von Neumann architecture, which is sequential in nature and therefore cannot exploit the inherent concurrency of artificial neural networks. Hardware solutions, on the other hand, tailored specifically to the architecture of neural network models, can better exploit this massive parallelism and thus achieve much higher performance and lower power consumption than ordinary systems of comparable size and cost. Hardware implementations of artificial neural network models have therefore found their place in niche applications such as image processing, pattern recognition, speech synthesis and analysis, adaptive sensors with teach-in ability, and so on.
Neural chips are available in analogue and digital designs [1,2]. Analogue designs can take advantage of many interesting analogue electronic elements that directly perform the neural networks' functionality, resulting in very compact solutions. Unfortunately, these solutions are susceptible to noise, which limits their precision, and offer only very limited support for on-chip learning. Digital solutions, on the other hand, are noise tolerant and pose no technological obstacles to on-chip learning, but result in larger circuits. Since the design of
The iterative logarithmic multiplier (ILM) was proposed by Babic et al. in [5]. It simplifies the logarithm approximation introduced in [3] and adds an iterative correction algorithm that can make the error as small as required and can even produce an exact result.
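To illustrate the principle, the following C sketch (a software model only, not the hardware design from [5]) computes the basic logarithmic approximation of a product and then repeatedly approximates the neglected error term; with enough correction iterations the result becomes exact. The function and variable names are illustrative.

#include <stdint.h>
#include <stdio.h>

/* Position of the leading one bit, i.e. floor(log2(x)); x must be non-zero. */
static int leading_one(uint32_t x)
{
    int k = 0;
    while (x >>= 1)
        k++;
    return k;
}

/* One step of the basic logarithmic approximation:
 * n1*n2 ~ 2^(k1+k2) + r1*2^k2 + r2*2^k1, where ri = ni - 2^ki.
 * The neglected term r1*r2 is handled in the next iteration via the residues. */
static uint64_t approx_step(uint32_t n1, uint32_t n2, uint32_t *r1, uint32_t *r2)
{
    int k1 = leading_one(n1);
    int k2 = leading_one(n2);
    *r1 = n1 - (1u << k1);
    *r2 = n2 - (1u << k2);
    return ((uint64_t)1 << (k1 + k2))
         + ((uint64_t)*r1 << k2)
         + ((uint64_t)*r2 << k1);
}

/* Iterative logarithmic multiplication with a fixed number of correction
 * terms; the loop stops early (exact result) once a residue becomes zero. */
static uint64_t ilm_multiply(uint32_t n1, uint32_t n2, int corrections)
{
    uint64_t product = 0;
    for (int i = 0; i <= corrections && n1 != 0 && n2 != 0; i++) {
        uint32_t r1, r2;
        product += approx_step(n1, n2, &r1, &r2);
        n1 = r1;   /* the error term r1*r2 is approximated in the next pass */
        n2 = r2;
    }
    return product;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)ilm_multiply(219, 87, 0)); /* 16960: basic approximation */
    printf("%llu\n", (unsigned long long)ilm_multiply(219, 87, 2)); /* 19020: two correction terms (exact value is 19053) */
    return 0;
}

Each additional correction term in this model corresponds to one more pass of the approximation over the residues, which is the mechanism that lets the error be traded off against circuit size and delay.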