
TSINGHUA SCIENCE AND TECHNOLOGY
ISSN 1007-0214 07/15 pp664–673
DOI: 10.26599/TST.2021.9010010
Volume 26, Number 5, October 2021

Towards “General Purpose” Brain-Inspired Computing System

Youhui Zhang, Peng Qu, and Weimin Zheng

Abstract: Brain-inspired computing refers to computational models, methods, and systems that are mainly inspired by the processing mode or structure of the brain. A recent study proposed the concept of "neuromorphic completeness" and a corresponding system hierarchy, which help determine the capability boundary of brain-inspired computing systems and judge whether the hardware and software of brain-inspired computing are compatible with each other. As a position paper, this article analyzes the design characteristics of existing brain-inspired chips and the current so-called "general purpose" application development frameworks for brain-inspired computing, and introduces the background and potential of this proposal. Further, some key features of the concept are presented through comparison with Turing completeness and approximate computation, and through an analysis of its relationship with "general-purpose" brain-inspired computing systems (i.e., computing systems that can support all computable applications). Finally, a promising technical approach to realizing such computing systems is introduced, together with our ongoing research and its foundation. We believe that this work is conducive to the design of extensible, neuromorphic complete hardware primitives and the corresponding chips. On this basis, "general purpose" brain-inspired computing systems can be realized gradually, taking both functional completeness and application efficiency into account.

Key words: brain-inspired computing; neuromorphic computing; computational completeness; hardware/software decoupling; system hierarchy

Youhui Zhang, Peng Qu, and Weimin Zheng are with the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China. E-mail: [email protected]; [email protected]; [email protected]. To whom correspondence should be addressed. Manuscript received: 2021-01-18; accepted: 2021-02-04.

© The author(s) 2021. The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

1 Introduction

Brain-inspired computing is a general term for computing theory, chip architecture, system design, and algorithms inspired by the processing mode or structure of biological nervous systems or brains. It is considered one of the most promising technological paths towards the next generation of artificial intelligence[1–5], and brain-inspired computing architecture is also a major development direction of computer architecture in the post-Moore's Law era[6–8]. Thus, Ref. [9] has stated that brain-inspired computing would be the next wave of intelligent computing.

Brain-inspired computing is also called neuromorphic computing. The term "neuromorphic computing" first appeared in Ref. [10], written in 1990 by Professor Carver Mead, a computer scientist at the California Institute of Technology. At that time, a neuromorphic computing system referred to an adaptive, large-scale parallel computing system built from very large scale integrated circuits that simulate biological nervous systems with electronic devices. Such systems usually used elementary physical phenomena as computational primitives and represented information by the relative values of analog signals.

With the passage of time, the styles and technologies of brain-inspired computing chips (also known as neuromorphic chips; in this paper the two terms are treated as synonyms) have changed a lot[4], but their technical routes can be traced back to Prof. Mead's work[10] to some extent, and the following three main
characteristics are still roughly maintained:
• Inspired by the structure and/or processing mode of the biological brain (or nervous system);
• Simulate the information processing function of biological neurons and neural synapses through micro-/nano-devices;
• Realize a new intelligent computer system with low energy consumption and high efficiency.

Brain-inspired computing systems employ neuromorphic chips as the core to support various applications[11, 12], and at present basically use Spiking Neural Networks (SNNs)[13] as the main computing paradigm. For such chips, whether they are digital, digital-analog hybrid, or based on novel nonvolatile memory devices[14, 15], a common feature is that almost all of them follow the end-to-end co-design principle, so that each type of chip has its own specific software and hardware interfaces, toolchains, and target applications. These characteristics bring two issues:
• It binds applications to a specific system, leading to a lack of portability and low productivity; that is, it couples software with hardware.
• The application range of brain-inspired computing is developing rapidly, while hardware features are summarized from existing applications. Thus it is difficult to judge whether today's chips can cope with emerging applications; that is, the completeness of the computing system is uncertain.

Meanwhile, there have been some efforts to build "general purpose" development frameworks for brain-inspired computing, such as STICK[16], Fugu[17], and others, with the goal of providing a high-level programming abstraction and realizing a unified development tool independent of specific chips, i.e., they intend to make hardware specifications/constraints "transparent" to application development. But the problem of not being able to determine whether the hardware is functional enough, i.e., complete, persists. In other words, a brain-inspired computing system can be said to be "general-purpose" if it can support all brain-inspired computing applications; currently, how to judge whether a brain-inspired computing system is "general-purpose" is an open question.

This article analyzes the design characteristics and disadvantages of existing brain-inspired chips, as well as the "general purpose" application development frameworks for brain-inspired computing, and introduces the recent work on the proposed "neuromorphic completeness" (also known as "brain-inspired computational completeness") and some of its key features. In the end, a promising technical approach to realizing neuromorphic complete computing systems is introduced, as well as the work we are doing and the research foundation.

2 Research Status

2.1 Chips

The next generation of high-performance and low-power computer systems might be inspired by the brain. The focus of neuromorphic computing is typically on spiking neural networks, as such systems are more similar to biological neural networks than the artificial neural networks commonly used in modern deep-learning applications. Moreover, most brain-inspired systems share common design principles, such as co-location of memory and processing units. Some representative studies are given as follows.

TrueNorth[18, 19] is a digital neuromorphic chip that includes 4096 neurosynaptic cores connected via a 2D-mesh NoC, equipped with a toolchain that includes the programming paradigm Corelet[20]. Through Corelet, a complex algorithm is decomposed into simpler ones, a process repeated until the tasks can be completed by one or more cores. In addition, TrueNorth provides an optimized strategy to map logical NNs to physical cores[21]. The Corelet description just matches the organization of the hardware substrate (although the description is agnostic to the physical location of each core in the actual hardware[21]); thus it is bound to the hardware platform.

Neurogrid[22] is an analog/digital hybrid system with a customized network[23]. It uses the Neural Engineering Framework (NEF)[24] to configure neuromorphic chips to implement target functions. This allows the designer to work at a higher level of abstraction and yet still produce a detailed model using spiking neurons. Namely, Neurogrid supports the NEF on the hardware and system levels. The latest successor of Neurogrid, Braindrop[25], also adopted this design philosophy.

Intel's Loihi[26] chip consists of a many-core mesh of 128 neuromorphic cores, as well as three x86 cores for task scheduling and IO. It supports SNN-based inference and learning functions. The neuromorphic core contains a hardware pipeline customized for synaptic computing (including support for synaptic plasticity, which is the basis of the SNN learning mechanism),
neural computing, and others. Loihi also provides a development toolchain[27], including compilers and runtime, and exposes hardware-supported brain-inspired computing primitives to developers through interfaces such as Python. Due to the limitation of hardware resources, Loihi currently only supports fixed-point computation and fixed-point weight data.

The FACETS[28] project and its successor BrainScaleS have produced wafer-scale IC systems. The project developed a graph model to map a complex biological network to the hardware network. This strategy requires that both have the same underlying structure; thus it is also hardware-specific.

SpiNNaker[29] has a toolchain that is based on chip multiprocessors (CMPs) of ARM cores[30]. Thus, its neural computing is completed by software with some hardware customization for efficiency. The drawback is that the performance will be lower than that of dedicated hardware.

So, most existing chips are based on the end-to-end design principle. We can summarize this phenomenon with a passage from Ref. [25] (Braindrop): "… achieving this goal required co-designing all layers of the software-hardware stack, keeping the theoretical framework in mind even at the lowest levels of the hardware design". This design principle leads to higher execution efficiency for adapted applications, while the downside is that hardware constraints are often exposed to developers, who therefore usually need substantial knowledge of neural computing or neuromorphic hardware; in other words, it impairs portability and productivity. What's worse, because hardware functions have been summarized based on the requirements of existing applications, it is difficult to determine where the functional boundary of the hardware lies and whether it is complete in the face of the rapid development of this field. Following this trend, various brain-inspired computing chips and systems would become research and development islands.

2.2 Software

On the other hand, in the field of basic software for brain-inspired computing, there are also researchers who have recognized this problem and tried to study a "general purpose" development framework.

For example, Fugu[17] sought to provide a programming platform that enables the development of neuromorphic applications without substantial knowledge of the substrate. Rather than requiring a developer to attain intricate knowledge of how to program and exploit spiking neural dynamics to utilize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction as a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from a variety of sources.

Another typical work is STICK[16]. Its goal is to develop a new framework for brain-inspired general-purpose computation architectures. This work tries to show that the use of neuron-like units with precise timing representation, synaptic diversity, and others can constitute a complete (Turing complete) computation framework.

These studies are still carried out at the software level, without support for any specific brain-inspired hardware. To a large extent, one sentence from Ref. [17] (Fugu) can illustrate the dilemma faced by such software frameworks: "We envision that as these hardware-specific interfaces begin to stabilize …".

But the hardware interfaces are difficult to stabilize until they have been determined to be complete. That is, in the face of the rapid development of the AI computing field, it is not enough to summarize the hardware interfaces (functions) only according to specific application requirements or hardware specifications. Therefore, it is necessary to study the completeness of brain-inspired computing to find out the functional boundary.

3 Inspiration from General-Purpose Computers

Turing completeness and the von Neumann architecture are the key factors in the rapid development of general-purpose computer technologies. Almost all high-level programming languages are Turing complete, and general-purpose processors based on the von Neumann architecture can realize Turing completeness through some Turing complete instruction set, which means that any Turing computable function written in a programming language can be converted into an equivalent instruction sequence on any Turing complete processor (i.e., program compilation). This is why general-purpose computers are called "general-purpose".

Further, the computer hierarchy composed of the software layer, the compiler layer, and the hardware layer can ensure that the application software and the hardware design are compatible with each other while moving forward independently (i.e., software and hardware decoupling), laying a systematic foundation for the
prosperity and development of the entire field.

Inspired by this principle, the concept of "neuromorphic completeness"[31] has been proposed: For any given error gap ε > 0 and any Turing-computable function f(x), a computational system is called neuromorphic complete if it can achieve a function F(x) such that ||F(x) − f(x)|| ≤ ε for any valid input x.

Compared with Turing completeness, this definition does not require the system to achieve a function through a series of precise computational steps (that is, algorithms). Further, a corresponding computer hierarchy and some hardware primitives have been proposed to ensure the completeness of brain-inspired computing and to make full use of the advantages brought by this new concept. The hierarchy has three levels: a Turing complete software model, a neuromorphic complete hardware architecture, and a compilation layer between the two. A constructive transformation algorithm is designed to convert any Turing computable function into an equivalent on any neuromorphic complete hardware, which brings the following advantages:

First, general-purpose computing applications can be supported. The proposed application-oriented software model is Turing complete and provides a basis for programming to support various applications (not limited to neural networks).

Second, compilation feasibility. Through the proposed hardware primitives and the constructive conversion algorithm, the equivalent conversion between the "Turing complete" software and the "neuromorphic complete" hardware primitive sequence is ensured, which realizes the decoupling of software and hardware.

Third, a new dimension of system design and optimization, approximation granularity, is introduced. A reasonable inference from this definition is that a Turing computable function can be implemented by a process of traditional accurate computation (an algorithm), by approximations (without limiting the specific technical means), or by a hybrid of the two, which enlarges the system design space and is conducive to improving efficiency.
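To make the definition concrete, consider the following minimal sketch (our illustration only; the target function, tolerance, and lookup-table realization are hypothetical and are not taken from Ref. [31]). It checks, on sampled valid inputs, whether a candidate realization F stays within the error gap ε of a Turing-computable function f:

```python
import numpy as np

def f(x):
    # Target Turing-computable function (hypothetical example).
    return np.sin(x) + 0.5 * x

def make_lut_approximation(lo, hi, n_entries=4096):
    # Build a lookup-table realization F of f over the valid input range,
    # one simple way to "achieve" f in the sense of the definition.
    grid = np.linspace(lo, hi, n_entries)
    table = f(grid)
    def F(x):
        idx = np.clip(np.searchsorted(grid, x), 0, n_entries - 1)
        return table[idx]
    return F

def is_epsilon_close(F, f, samples, eps):
    # ||F(x) - f(x)|| <= eps must hold for every valid input x;
    # in practice we can only test it on sampled inputs.
    return all(np.linalg.norm(F(x) - f(x)) <= eps for x in samples)

eps = 1e-2
F = make_lut_approximation(-3.0, 3.0)
samples = np.random.default_rng(0).uniform(-3.0, 3.0, size=1000)
print(is_epsilon_close(F, f, samples, eps))
```

Setting F = f (exact computation) trivially satisfies the bound, which is consistent with the fact that any Turing complete system is neuromorphic complete[31].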
4 Some Highlights of Neuromorphic Completeness

4.1 A neuromorphic complete system is conducive to the realization of a "general-purpose" brain-inspired computing system

The proposed concept of completeness could enable the decoupling of hardware and software while ensuring compatibility. As stated in Ref. [32], it is a useful step to unite the work carried out by the many industrial and academic research groups in the field of neuromorphic computing, as it helps researchers to focus on specific aspects of research problems, rather than trying to find entire end-to-end solutions. In other words, any neuromorphic complete system can support any brain-inspired computing application, i.e., it is "general-purpose" (this concept is orthogonal to Artificial General Intelligence (AGI)).

Further, for a computing paradigm (including chips, basic software, applications, etc.) with great and long-term development potential, it is necessary to build an application ecosystem: more and more research and development personnel entering the field, rather than only interdisciplinary experts, is a prerequisite for establishing such an ecosystem; otherwise, there can only be a limited scope of applications, impairing the potential for progress.

4.2 Neuromorphic completeness and Turing completeness

It is necessary to note that the concept of completeness itself defines the functional boundary of a computing system and does not involve the specific implementation. Take Turing completeness as an example: the universal Turing machine is Turing complete, but there are many other forms of Turing complete systems (including recurrent neural networks, lambda calculus, some forms of cellular automata, recursively enumerable languages, etc.), and the common point is that all of them can simulate a universal Turing machine and have equivalent computational capability. Thus, from an implementation point of view, a neuromorphic complete system need not be built on a neural basis. Moreover, although this definition relaxes the requirement on the computing process (compared with Turing completeness), the two are not opposites; any Turing complete system is neuromorphic complete[31]. This is also in line with the actual situation of brain-inspired research: in fact, some well-known computing systems (such as SpiNNaker from the University of Manchester) directly use Turing-complete general-purpose processors (ARM cores with custom extensions) for main operations.

4.3 Computational process of a neuromorphic complete system could be a combination of accurate Turing computation and approximation
Traditional computer algorithms are derived from the Turing machine, which is an accurate description of the specific computing process. In this sense, Turing computation has no errors.

On the basis of Turing completeness, the new completeness is compatible with the ability to approximate computing results. It should be noted that here the concept of approximation refers to approximation capability, and is not limited to numerical approximation.

Specifically, for an objective function, the realization may take three forms (but is not limited to them, because completeness itself does not constrain the concrete method):

First, the entire function is directly approximated by a neural network, a lookup table, etc.

Second, some specific steps in the computing process of the objective function (i.e., the algorithm) are reorganized and approximated, so that the corresponding intermediate results are approximated and combined with the remaining accurate computation.

Finally, the function as a whole can be realized by some numerical approximation algorithm.

The above methods are also shown in the experiments in Ref. [31]. A direct inference is that if a certain objective function (or part of the objective function's algorithm) cannot be approximated (or is not suitable to be approximated), then it can be achieved through precise calculation.

There are many ways to achieve approximation, and "approximate" computation may not even introduce errors. For example, certain logic functions can be looked up through truth tables, and a neural network can realize Boolean functions exactly[33]. Further, Tianjic chips[5] use lookup tables to realize certain non-linear functions without affecting the results of applications.
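As a minimal sketch of the second realization form (our own illustration with assumed functions; it does not model Tianjic's actual lookup-table hardware), a single intermediate step of an otherwise accurate computation is replaced by a table lookup, and the end-to-end deviation is compared against a given error gap:

```python
import numpy as np

def objective(x, w):
    # Objective function computed exactly: a dot product, a nonlinear
    # step, and a final exact accumulation (all hypothetical).
    z = np.dot(w, x)            # exact step
    a = np.tanh(z)              # step we will approximate below
    return a + 0.1 * np.sum(x)  # remaining exact computation

# Approximate only the nonlinear step with a lookup table,
# keeping every other step exact (a hybrid realization).
GRID = np.linspace(-4.0, 4.0, 1024)
TANH_LUT = np.tanh(GRID)

def tanh_lut(z):
    idx = np.clip(np.searchsorted(GRID, z), 0, GRID.size - 1)
    return TANH_LUT[idx]

def objective_hybrid(x, w):
    z = np.dot(w, x)
    a = tanh_lut(z)             # approximated intermediate result
    return a + 0.1 * np.sum(x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
eps = 1e-2
errs = [abs(objective(x, w) - objective_hybrid(x, w))
        for x in rng.uniform(-1, 1, size=(1000, 8))]
print(max(errs) <= eps)
```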
4.4 Neuromorphic completeness and approximate computation

As mentioned above, the computational process of a neuromorphic complete system could be a combination of accurate computation and approximation; thus, approximate computation is one way to realize a neuromorphic complete system.

On the other hand, the concrete means of realization can be diverse (not only approximate computation), such as the above-mentioned case in which "certain logic functions can be looked up through truth tables" (there is no error in the process), fitting a surface in a high-dimensional vector space through a neural network (where the target function is unknown), training neural networks for symbolic computation[34] (scientific computation can be divided into numerical computation and symbolic computation, while approximate computation applies to the former), and so on.

4.5 Software/hardware decoupling and software/hardware co-design

The principle of software/hardware co-design is widely applied in the field of domain-specific architecture. Although this paper has emphasized decoupling, the two are not contradictory. Software/hardware co-design is an important means to overcome the challenges faced by brain-inspired computing.

In our opinion, the decoupling method based on completeness is the design foundation, which provides the feasibility of judging the compatibility between hardware and software (i.e., it is useful to determine whether brain-inspired computing systems are general purpose). On this basis, from a performance/energy-efficiency perspective, it is necessary to design dedicated hardware elements/primitives for typical algorithms or operations (i.e., to expose more hardware functionality to the programming level for utilization); that is, this gives consideration to both functional completeness and application efficiency.

Specifically, the proposed hierarchy[31] decouples programming languages and hardware, and it also benefits the co-design procedure. Owing to the composability of its application-oriented software model, we can abstract a complex operation supported by the hardware into a hardware primitive and further wrap it as an operator at the software level to achieve the equivalent function that would originally require multiple operators to implement, without affecting other parts of the software. Thus, it is possible to make full use of new types of hardware without affecting the upper-level models, which simplifies programming.
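The operator-wrapping idea can be pictured with the following sketch (hypothetical operator names and a mock "hardware" call; the actual software model and primitives of Ref. [31] are not reproduced here): a fused operation that would otherwise require several fine-grained operators is exposed as a single software-level operator when the hardware supports it, leaving upper-level models unchanged:

```python
import numpy as np

# Fine-grained software operators (always available).
def op_matmul(w, x):
    return w @ x

def op_bias(y, b):
    return y + b

def op_relu(y):
    return np.maximum(y, 0.0)

def fused_linear_relu_hw(w, x, b):
    # Stand-in for a complex operation supported directly by hardware;
    # on a real system this would dispatch to a hardware primitive.
    return np.maximum(w @ x + b, 0.0)

def linear_relu(w, x, b, hw_available=False):
    # Software-level operator: the result is the same either way, so the
    # upper-level model is unaffected by whether the primitive is used.
    if hw_available:
        return fused_linear_relu_hw(w, x, b)
    return op_relu(op_bias(op_matmul(w, x), b))

rng = np.random.default_rng(1)
w, x, b = rng.normal(size=(4, 8)), rng.normal(size=8), rng.normal(size=4)
assert np.allclose(linear_relu(w, x, b, hw_available=False),
                   linear_relu(w, x, b, hw_available=True))
```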
5 How to Achieve a "Neuromorphic Complete" System Efficiently

As mentioned earlier, the definition of completeness itself is abstract and does not consider how the system is implemented or whether the system itself is neuromorphic; the implementation example[31] is just a reference model. Thus, we should further study the potential technical route towards neuromorphic complete systems.
5.1 Methodology

The premise is to quantify and compare neuromorphic complete systems early in the design, so as to give the target design the right direction and specific optimization guidance. We believe that such early comparison is very necessary, especially when dealing with this type of system with a larger design space (as neuromorphic completeness introduces a new dimension of system design and optimization).

The feasibility of this idea lies in the fact that, between the abstract concept of completeness and concrete custom design methods, there is sufficient room for a theory and method of comparing computing systems that takes specific implementation methods into consideration.

Thus, it is proposed to establish a theory for the quantitative analysis of neuromorphic complete systems. This theory is to analyze the collection of core operators and patterns from widely used neuromorphic computation paradigms and representative applications, and then construct a flexible abstract reference model. By analyzing the "differences" between various brain-inspired systems (in the early stage of design) and this abstract model, quantitative analysis and comparison can be made to achieve the research goal. Specifically, this work can be divided into two parts (the workflow is presented in Fig. 1):

The software part's input is a dataflow-like graph of the target application (or function); the output is a number of optimal (or approximately optimal) combinations of accurate calculation and approximation (in various ways) that achieve the target function, which are then input to the hardware part for evaluation and further tuning.

Accordingly, the initialization is to construct a design space that can reflect the multi-dimensional evaluation metrics (such as performance, cost, and precision) of a brain-inspired computing system, and to map the description of the above dataflow-like graph into this space. FSN[31] is such a primary method.

Afterwards, the first step is to analyze the parts that can be achieved in an approximate manner (or are manually specified) and find the optimal (or approximately optimal) combinations, which is followed by replacing the approximate part(s) with implementations of a suitable method (in the following, we take approximation by neural networks as the example).

Step 1: Through static analyses, find the parts that can be approximated, and then carry out neural network training with different hyper-parameters to obtain the approximation scheme. The difficulty lies in that the design space is huge, so it is not feasible to traverse all possibilities. It is therefore necessary to accelerate this process by rapid performance prediction and optimal solution search, including (1) how to determine the approximation granularity of a part, as different parts can be disassembled further or reassembled together, (2) how to handle control flow, and (3) how to judge the merits of many different schemes.

Fig. 1 Workflow to quantify and compare neuromorphic complete systems in the early design.
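Step 1 above is essentially a search over approximation schemes scored on several metrics at once. The following sketch is purely illustrative: the candidate granularities, hyper-parameters, and cost weights are invented, and a real workflow would rely on actual neural network training and rapid performance prediction rather than these stand-in scores.

```python
import itertools

# Candidate design points: which part to approximate, at what granularity,
# and with which (hypothetical) approximator hyper-parameters.
parts = ["whole_kernel", "inner_loop", "nonlinearity_only"]
hidden_sizes = [16, 64, 256]
bit_widths = [4, 8, 16]

def evaluate(part, hidden, bits):
    # Stand-in for NN training plus rapid performance prediction; returns
    # (precision_loss, latency, cost). Real numbers would come from
    # measurements or a learned predictor.
    precision_loss = 1.0 / (hidden * bits)
    latency = 0.01 * hidden * (bits / 8)
    cost = hidden * bits
    return precision_loss, latency, cost

def score(metrics, weights=(1000.0, 1.0, 0.01)):
    # Scalarize the multi-dimensional metrics; the weights encode the
    # designer's optimization direction.
    return sum(w * m for w, m in zip(weights, metrics))

best = min(itertools.product(parts, hidden_sizes, bit_widths),
           key=lambda p: score(evaluate(*p)))
print("selected scheme:", best)
```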

Step 2: Replace the approximate parts. We should link the cost function to the specific hardware for overall evaluation, rather than simply replacing the part with Neural Networks (NNs).

In terms of hardware, what needs to be studied is a prototype of a reconfigurable brain-inspired computing system for verification and co-design. It contains the following two jobs:

(1) Based on the reconfigurable properties of FPGAs, a scalable and flexible simulation prototype is helpful. On one hand, it can fully realize all primary elements of brain-inspired computational core operators, and all kinds of elements should be extensible and composable. On the other hand, it is necessary to make full use of the on-chip resources of the FPGA to realize the features of in-situ and event-driven computation, so as to reflect the features of the brain-inspired computing paradigm more accurately and make the behavior of the system more representative.

(2) An automatic design tool for brain-inspired systems is needed, too. Based on the reconfigurable properties, automation (or semi-automation with manual intervention) is implemented according to the given optimization direction and guidance scheme for verification of the theoretical results.

5.2 Research foundation

Our research team has been designing custom chips, basic software, and computing systems since 2015 to efficiently support brain-inspired computing paradigms and algorithms. We also participated in the research and development of the series of Tianjic chips[5] led by the Center for Brain-Inspired Computing Research of Tsinghua University. The main work is centered on the establishment of the brain-inspired computer hierarchy (Fig. 2), including the following aspects:

(1) Spiking neural network programming language

We proposed the general SNN description language E-PyNN, as well as its compiler and simulator[35] for parallel heterogeneous systems; the latter are optimized for the inherent characteristics of SNNs. Compared with some state-of-the-art SNN simulators on General-Purpose Graphics Processing Units (GPGPUs), the proposed simulator is 1.41 to 9.33 times faster. Moreover, its performance is 1 to 2 orders of magnitude higher than that of other commonly used simulators on CPUs. This work laid the foundation for the current application-oriented software model[31].

(2) Neural network training and transformation

We proposed the "training before constraint" strategy and the corresponding neural network workflow for neuromorphic chips[36]. The input is a Deep Neural Network (DNN)/SNN trained by an existing third-party NN development framework (no hardware constraints are involved), and the output is an equivalent neural network that conforms to the hardware constraints of the target chip. Afterwards, we enhanced the above work to support a broader range of NN functions[37]. These works are the foundation for compilation based on neuromorphic complete primitives[31].
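As one deliberately simplified illustration of imposing a hardware constraint after training (our sketch, not the actual NEUTRAMS workflow[36], which also transforms the network structure and fine-tunes it to preserve equivalence), the weights of an already-trained layer are quantized to fixed-point values of the kind required by chips such as Loihi:

```python
import numpy as np

def to_fixed_point(w, frac_bits=8, total_bits=16):
    # Quantize floating-point weights to signed fixed-point values.
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(w * scale), qmin, qmax).astype(np.int32)
    return q, scale

def fixed_point_forward(q, scale, x):
    # Inference with the constrained (fixed-point) weights.
    return (q @ x) / scale

rng = np.random.default_rng(2)
w = rng.normal(scale=0.5, size=(10, 32))   # "trained" float weights (mock)
x = rng.normal(size=32)

q, scale = to_fixed_point(w)
drift = np.max(np.abs(w @ x - fixed_point_forward(q, scale, x)))
print(f"max output drift after constraining weights: {drift:.4f}")
```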
(3) Neuromorphic chip design

We proposed and designed a brain-inspired chip with a "reduced instruction set", FPSA[38, 39], which can give full play to the advantages of Resistive Random Access Memory (ReRAM). Similar to a Reduced Instruction Set Computer (RISC) general-purpose computer, FPSA only provides limited and compact hardware primitives, while the software (compiler) supports the full functionality of different neural networks through conversion or approximation methods. This principle is in contrast to most existing chips: existing brain-inspired chips tend to support a wide variety of complex primitives, because the hardware interface is defined by specific software requirements or hardware specifications. One of the hardware platforms (simulation) used in the experiments in Ref. [31] is FPSA. FPSA also investigated mechanisms for efficiently mapping the results of neural network training and transformation onto the chip.

We have also developed SNN simulators based on FPGAs[40] and software simulation tools[41] for novel non-volatile memory devices.

Fig. 2 Hierarchy of neuromorphic computing system.
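The reduced-primitive philosophy can be illustrated as follows (a toy sketch: the two primitives and the compiled operator are invented for illustration and do not correspond to FPSA's actual primitive set). The "hardware" offers only a vector-matrix product and an interpolated lookup-table nonlinearity, and the "compiler" expresses a richer operation, here log-sum-exp, through conversion and approximation on top of them:

```python
import numpy as np

# Assumed compact primitive set (illustrative, not FPSA's real primitives).
def prim_vmm(matrix, x):
    # Vector-matrix multiply, the kind of operation a ReRAM crossbar excels at.
    return matrix @ x

def make_prim_lut(fn, lo, hi, n=4096):
    # Elementwise nonlinearity realized as an interpolated lookup table.
    grid = np.linspace(lo, hi, n)
    table = fn(grid)
    return lambda v: np.interp(v, grid, table)

lut_exp = make_prim_lut(np.exp, -2.5, 2.5)
lut_log = make_prim_lut(np.log, 0.5, 64.0)

def logsumexp_compiled(x):
    # "Compiler" view: a richer operator expressed only with the two primitives.
    e = lut_exp(x)                          # elementwise LUT
    s = prim_vmm(np.ones((1, x.size)), e)   # reduction as a crossbar product
    return lut_log(s)[0]                    # elementwise LUT

x = np.array([0.3, -1.2, 2.0, 0.0, 1.1, -0.7, 1.9, -2.0])
print(logsumexp_compiled(x), np.log(np.sum(np.exp(x))))
```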

6 Conclusion and Outlook

The next decade will be the golden age of computer architecture development[42], and brain-inspired computing is one of the most promising solutions. The design philosophy that this article focuses on is to construct a flexible, adaptive, and software/hardware-decoupled system hierarchy for brain-inspired computing based on the proposed completeness, which is conducive to promoting the collaborative development of this interdisciplinary field.

From the perspective of the development history of general-purpose computers, computational completeness and the software/hardware decoupling hierarchy laid the theoretical and architectural foundation for vigorous development. Our work has been inspired by this historical process and is conducive to enabling all kinds of personnel participating in this interdisciplinary research to focus on their professional fields and improve the efficiency of research and development, i.e., it promotes the progress of this field from isolated research to collaborative and iterative development. This would be one of the keys to the rapid development of brain-inspired computing systems and the formation of industries at scale, which will also facilitate the development of high-efficiency computing systems, including supercomputing systems[43]. We are going to open-source the implementation to do our best to promote this process.

Acknowledgment

This work was partly supported by the National Natural Science Foundation of China (Nos. 62072266 and 62050340) and the Beijing Academy of Artificial Intelligence (No. BAAI2019ZD0403).

References

[1] D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, Neuroscience-inspired artificial intelligence, Neuron, vol. 95, no. 2, pp. 245–258, 2017.
[2] A. H. Marblestone, G. Wayne, and K. P. Kording, Toward an integration of deep learning and neuroscience, Front. Comput. Neurosci., DOI: 10.3389/fncom.2016.00094.
[3] B. A. Richards, T. P. Lillicrap, P. Beaudoin, Y. Bengio, R. Bogacz, A. Christensen, C. Clopath, R. P. Costa, A. de Berker, S. Ganguli, et al., A deep learning framework for neuroscience, Nat. Neurosci., vol. 22, no. 11, pp. 1761–1770, 2019.
[4] K. Roy, A. Jaiswal, and P. Panda, Towards spike-based machine intelligence with neuromorphic computing, Nature, vol. 575, no. 7784, pp. 607–617, 2019.
[5] J. Pei, L. Deng, S. Song, M. G. Zhao, Y. H. Zhang, S. Wu, G. R. Wang, Z. Zou, Z. Z. Wu, W. He, et al., Towards artificial general intelligence with hybrid Tianjic chip architecture, Nature, vol. 572, no. 7767, pp. 106–111, 2019.
[6] M. M. Waldrop, The chips are down for Moore's law, Nature, vol. 530, no. 7589, pp. 144–147, 2016.
[7] G. S. Wu, Ten frontiers for big data technologies (Part B), (in Chinese), Big Data Res., vol. 1, no. 3, pp. 113–123, 2015.
[8] G. S. Wu, Ten frontiers for big data technologies (Part A), (in Chinese), Big Data Res., vol. 1, no. 2, pp. 109–117, 2015.
[9] J. D. Kendall and S. Kumar, The building blocks of a brain-inspired computer, Appl. Phys. Rev., vol. 7, no. 1, p. 011305, 2020.
[10] C. Mead, Neuromorphic electronic systems, Proc. IEEE, vol. 78, no. 10, pp. 1629–1636, 1990.
[11] C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, A survey of neuromorphic computing and neural networks in hardware, arXiv preprint arXiv:1705.06963, 2017.
[12] N. Wang, G. G. Guo, B. N. Wang, and C. Wang, Traffic clustering algorithm of urban data brain based on a hybrid-augmented architecture of quantum annealing and brain-inspired cognitive computing, Tsinghua Sci. Technol., vol. 25, no. 6, pp. 813–825, 2020.
[13] W. Maass, Networks of spiking neurons: The third generation of neural network models, Neural Netw., vol. 10, no. 9, pp. 1659–1671, 1997.
[14] M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature, vol. 521, no. 7550, pp. 61–64, 2015.
[15] C. Q. Yin, Y. X. Li, J. B. Wang, X. F. Wang, Y. Yang, and T. L. Ren, Carbon nanotube transistor with short-term memory, Tsinghua Sci. Technol., vol. 21, no. 4, pp. 442–448, 2016.
[16] X. Lagorce and R. Benosman, STICK: Spike time interval computational kernel, a framework for general purpose computation using neurons, precise timing, delays, and synchrony, Neural Comput., vol. 27, no. 11, pp. 2261–2317, 2015.
[17] J. B. Aimone, W. Severa, and C. M. Vineyard, Composing neural algorithms with Fugu, in Proc. Int. Conf. Neuromorphic Systems, Knoxville, TN, USA, 2019, pp. 1–8.
[18] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, et al., A million spiking-neuron integrated circuit with a scalable communication network and interface, Science, vol. 345, no. 6197, pp. 668–673, 2014.
[19] S. K. Esser, A. Andreopoulos, R. Appuswamy, P. Datta, D. Barch, A. Amir, J. Arthur, A. Cassidy, M. Flickner, P. Merolla, et al., Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores, in Proc. 2013 Int. Joint Conf. on Neural Networks, Dallas, TX, USA, 2013, pp. 1–10.
[20] A. Amir, P. Datta, W. P. Risk, A. S. Cassidy, J. A. Kusnitz, S. K. Esser, A. Andreopoulos, T. M. Wong, M. Flickner, R. Alvarez-Icaza, et al., Cognitive computing programming paradigm: A corelet language for composing networks of neurosynaptic cores, in Proc. 2013 Int. Joint Conf. on Neural Networks, Dallas, TX, USA, 2013, pp. 1–10.
[21] F. Akopyan, J. Sawada, A. Cassidy, R. Alvarez-Icaza, J. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, G. J. Nam, et al., TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., vol. 34, no. 10, pp. 1537–1557, 2015.
[22] B. V. Benjamin, P. R. Gao, E. McQuinn, S. Choudhary, A. R. Chandrasekaran, J. M. Bussat, R. Alvarez-Icaza, J. V. Arthur, P. A. Merolla, and K. Boahen, Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations, Proc. IEEE, vol. 102, no. 5, pp. 699–716, 2014.
[23] P. Merolla, J. Arthur, R. Alvarez, J. M. Bussat, and K. Boahen, A multicast tree router for multichip neuromorphic systems, IEEE Trans. Circuits Syst. I Reg. Pap., vol. 61, no. 3, pp. 820–833, 2014.
[24] C. Eliasmith and C. H. Anderson, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, MA, USA: The MIT Press, 2004.
[25] A. Neckar, S. Fok, B. V. Benjamin, T. C. Stewart, N. N. Oza, A. R. Voelker, C. Eliasmith, R. Manohar, and K. Boahen, Braindrop: A mixed-signal neuromorphic architecture with a dynamical systems-based programming model, Proc. IEEE, vol. 107, no. 1, pp. 144–164, 2019.
[26] M. Davies, N. Srinivasa, T. H. Lin, G. Chinya, Y. Q. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al., Loihi: A neuromorphic manycore processor with on-chip learning, IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.
[27] C. K. Lin, A. Wild, G. N. Chinya, Y. Q. Cao, M. Davies, D. M. Lavery, and H. Wang, Programming spiking neural networks on Intel's Loihi, Computer, vol. 51, no. 3, pp. 52–61, 2018.
[28] K. Wendt, M. Ehrlich, and R. Schüffny, A graph theoretical approach for a multistep mapping software for the FACETS project, in Proc. 2nd WSEAS Int. Conf. on Computer Engineering and Applications, Acapulco, Mexico, 2008, pp. 189–194.
[29] S. B. Furber, D. R. Lester, L. A. Plana, J. D. Garside, E. Painkras, S. Temple, and A. D. Brown, Overview of the SpiNNaker system architecture, IEEE Trans. Comput., vol. 62, no. 12, pp. 2454–2467, 2013.
[30] O. Rhodes, P. A. Bogdan, C. Brenninkmeijer, S. Davidson, D. Fellows, A. Gait, D. R. Lester, M. Mikaitis, L. A. Plana, A. G. D. Rowley, et al., sPyNNaker: A software package for running PyNN simulations on SpiNNaker, Front. Neurosci., vol. 12, p. 816, 2018.
[31] Y. H. Zhang, P. Qu, Y. Ji, W. H. Zhang, G. R. Gao, G. R. Wang, S. Song, G. Q. Li, W. G. Chen, W. M. Zheng, et al., A system hierarchy for brain-inspired computing, Nature, vol. 586, no. 7829, pp. 378–384, 2020.
[32] O. Rhodes, Brain-inspired computing boosted by new concept of completeness, Nature, vol. 586, no. 7829, pp. 364–366, 2020.
[33] B. Steinbach and R. Kohut, Neural networks – A model of Boolean functions, in Proc. 5th Int. Workshop on Boolean Problems, Freiberg, Germany, 2002.
[34] G. Lample and F. Charton, Deep learning for symbolic mathematics, in Proc. 8th Int. Conf. on Learning Representations, Addis Ababa, Ethiopia, 2020.
[35] P. Qu, Y. H. Zhang, X. Fei, and W. M. Zheng, High performance simulation of spiking neural network on GPGPUs, IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 11, pp. 2510–2523, 2020.
[36] Y. Ji, Y. H. Zhang, S. C. Li, P. Chi, C. H. Jiang, P. Qu, Y. Xie, and W. G. Chen, NEUTRAMS: Neural network transformation and co-design under neuromorphic hardware constraints, in Proc. 49th Ann. IEEE/ACM Int. Symp. on Microarchitecture, Taipei, China, 2016, pp. 1–13.
[37] Y. Ji, Y. H. Zhang, W. G. Chen, and Y. Xie, Bridge the gap between neural networks and neuromorphic hardware with a neural network compiler, in Proc. 23rd Int. Conf. on Architectural Support for Programming Languages and Operating Systems, Williamsburg, VA, USA, 2018, pp. 448–460.
[38] Y. Ji, Y. Y. Zhang, X. F. Xie, S. C. Li, P. Q. Wang, X. Hu, Y. H. Zhang, and Y. Xie, FPSA: A full system stack solution for reconfigurable ReRAM-based NN accelerator architecture, in Proc. 24th Int. Conf. on Architectural Support for Programming Languages and Operating Systems, New York, NY, USA, 2019, pp. 733–747.
[39] Y. Ji, Z. X. Liu, and Y. H. Zhang, A reduced architecture for ReRAM-based neural network accelerator and its software stack, IEEE Trans. Comput., vol. 70, no. 3, pp. 316–331, 2021.
[40] J. H. Han, Z. L. Li, W. M. Zheng, and Y. H. Zhang, Hardware implementation of spiking neural networks on FPGA, Tsinghua Sci. Technol., vol. 25, no. 4, pp. 479–486, 2020.
[41] X. Fei, Y. H. Zhang, and W. M. Zheng, XB-SIM: A simulation framework for modeling and exploration of ReRAM-based CNN acceleration design, Tsinghua Sci. Technol., vol. 26, no. 3, pp. 322–334, 2021.
[42] J. L. Hennessy and D. A. Patterson, A new golden age for computer architecture: Domain-specific hardware/software co-design, enhanced security, open instruction sets, and agile chip development, in Proc. 2018 ACM/IEEE 45th Ann. Int. Symp. on Computer Architecture, Los Angeles, CA, USA, 2018, pp. 27–29.
[43] W. M. Zheng, Research trend of large-scale supercomputers and applications from the TOP500 and Gordon Bell Prize, Sci. China Inf. Sci., vol. 63, no. 7, p. 171001, 2020.
Youhui Zhang received the BEng and PhD degrees in computer science from Tsinghua University, Beijing, China in 1998 and 2002, respectively. He is currently a professor in the Department of Computer Science and Technology, Tsinghua University, Beijing, China. His research interests include computer architecture and neuromorphic computing. He is a member of CCF, ACM, and IEEE.

Peng Qu received the BEng and PhD degrees in computer science from Tsinghua University, Beijing, China in 2013 and 2018, respectively. He is currently a postdoctoral fellow in the Department of Computer Science and Technology, Tsinghua University, Beijing. His research interests include computer architecture and neuromorphic computing.

Weimin Zheng received the MEng degree in computer science from Tsinghua University, Beijing, China in 1982. Currently he is an academician of the Chinese Academy of Engineering and a professor in the Department of Computer Science and Technology, Tsinghua University, Beijing, China. His research interests include high performance computing, network storage, and parallel compilers.
