
Nanoscale Adv. 2023 Mar 14; 5(6): 1559–1573. PMCID: PMC10012847
Published online 2023 Feb 27. doi: 10.1039/d3na00025g. PMID: 36926563

Research progress in architecture and application of RRAM with computing-in-memory

Chenyu Wang,a Ge Shi,a Fei Qiao,b Rubin Lin,a Shien Wu,a and Zenan Hua

Abstract

The development of new technologies has led to an explosion of data, while the computation
ability of traditional computers is approaching its upper limit. The dominant system architec‐
ture is the von Neumann architecture, with the processing and storage units working indepen‐
dently. The data migrate between them via buses, reducing computing speed and increasing en‐
ergy loss. Research is underway to increase computing power, such as developing new chips
and adopting new system architectures. Computing-in-memory (CIM) technology allows data to
be computed directly on the memory, changing the current computation-centric architecture
and designing a new storage-centric architecture. Resistive random access memory (RRAM) is
one of the advanced memories that have emerged in recent years. RRAM changes its resistance
in response to electrical signals applied across its two terminals, and the state is preserved after power-down. It
has potential in logic computing, neural networks, brain-like computing, and fused technology
of sense-storage-computing. These advanced technologies promise to break the performance
bottleneck of traditional architectures and dramatically increase computing power. This paper
introduces the basic concepts of computing-in-memory technology and the principle and appli‐
cations of RRAM and finally gives a conclusion about these new technologies.


1. Introduction

Scientific research and social life generate vast amounts of data, and researchers are constantly
upgrading computing power to cope with ever-larger information processing tasks. Within
the constraints of the von Neumann architecture, the dominant approach to increase the over‐
all system speed is to increase the performance of chips. As semiconductor processing is ap‐
proaching physical limits, Moore's Law is about to expire.1,2 The number of transistors per unit
area of a chip will no longer increase significantly. The storage wall and power wall created by
data migration have also limited the performance of computers. There has not been an order-
of-magnitude increase in computing power.

Computing-in-memory technology integrates computing and storage capabilities into a single
unit. Data need not migrate between the processor and memory. The technology breaks the
bottleneck of traditional computer architecture, which is considered a significant development
trend for future breakthroughs in computing power.3 W. H. Kautz4 of the Stanford Research
Institute first proposed the concept of computing-in-memory in 1969 but was unable to
implement it due to technical constraints. Early computing-in-memory solutions were usually
based on mature CMOS processes. The research is focused on the existing memory cell circuits,
including static random access memory (SRAM), dynamic random access memory (DRAM), and
Flash.5 This technology can perform logic operations, and the overhead and cost of the device
are reduced compared with those of the conventional component. However, with the advent of
new non-volatile memories and the progress of semiconductor processing, computing-in-mem‐
ory technology has been further developed.

Advanced storage technologies include resistive storage, ferroelectric storage,6 phase change
storage,7,8 and magnetic storage.9 Resistive storage is a current research hotspot. The simple
structure, high integration, low power consumption, and high speed make it one of the advan‐
tageous candidates for next-generation non-volatile memory. In 1971, the concept of memris‐
tors was first proposed by Professor Chua10 at the University of California, Berkeley. It was not
until 2008 that a prototype of the memristor was successfully prepared by Hewlett-Packard
Laboratories.11 In the years that followed, research focused on mathematical models,12 resistive
switching mechanisms,13,14 across-array structures, fabrication processes,15 and
materials.16,17 The memristor used in memory applications is called resistive random-access

memory (RRAM). It is considered key to breaking through the computing bottlenecks of AI and
big data, meeting high-speed computing demands as well as the low-precision, high-speed
computing needs raised by IoT and neural network applications. The computing-in-memory structure is expected
to replace von Neumann architecture chips in many fields in the future.18

2. Computing-in-memory technology

The von Neumann architecture is characterized by the fact that the processing and storage
units are independent, and the two are connected via a bus. However, traditional architecture
has encountered some problems. On the one hand, data migration across the bus consumes
much energy. The cost of multiplying the two numbers is several orders of magnitude lower
than that of accessing them from memory, creating a power wall.19 On the other hand, the dif‐
ference in performance development between the two creates a storage wall. Historically,
processor performance increased by about 50% per year, compared with roughly 7% per year
for memory, so processor speed has grown to approximately 250 times memory speed, as
shown in Fig. 1.20 The mismatch between the processor and memory results
in slow processing and wasted resources. In addition, the performance of a chip depends on
the number of transistors. However, due to the physical limits of semiconductor processes, the
size of the transistor reaches its limit. As the size of the device shrinks, the quantum and short-
channel effects of the devices become increasingly severe. Overly dense transistors make heat
dissipation increasingly difficult, and the power consumption of chips cannot be effectively reduced.
These problems not only limit the performance of the chip but also have an impact on the accu‐
racy and endurance of the calculations. People are desperate for new solutions.
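As a back-of-the-envelope check (assuming the 50% and 7% annual growth rates quoted above hold steady), the processor-memory gap compounds geometrically:

$$\left(\frac{1.50}{1.07}\right)^{n} \approx 250 \quad\Longrightarrow\quad n = \frac{\ln 250}{\ln(1.50/1.07)} \approx 16,$$

so roughly sixteen years of divergence at those rates is enough to produce the ~250-fold mismatch shown in Fig. 1.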

Fig. 1

Processor-memory performance gap.

The concept of computing-in-memory was developed in the last century, but for decades the
technology received little attention. The integration of computing and storage units is costly
and technically challenging, integrated circuits were evolving rapidly under Moore's Law, and
the needs of data processing were still far from their limits. However, with the increased data volume
and new demands for computing speed and accuracy, computing-in-memory technology is
back in the spotlight. Computing-in-memory technology breaks through the traditional comput‐
ing architecture, eliminating the need for a separate computing unit to process the data. The
technology can be divided into charge-based and resistor-based storage technologies. Most of
the charge-based storage technology is based on adding a computational circuit to a conventional
memory cell, as shown in Fig. 2(b). For example, the SRAM memory cell is the central cell
in a CMOS-based memory circuit. Simple logic circuits can be implemented by designing simple
peripheral circuits, and efficient neural network training can also be achieved by SRAM across-
array design. Resistor-based storage technologies use memory as the core of integrated cir‐
cuits. The initial data for the system are the resistance values of the resistor cells, while the in‐
put data come from external electrical signals. The resistance value of each cell represents the
final calculation result. Resistor-based circuits mainly use new non-volatile memories, such as
RRAM, PCM, and FRAM. Advanced memory combined with an across-array structure enables
complex computations such as vector-matrix multiplication, greatly accelerating computation
speed and reducing operational power consumption. Resistor-based storage technologies will
be described in detail below.
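To make the crossbar principle concrete, the sketch below (an idealized model, not any specific chip) treats each cell's conductance as a matrix weight: driving the rows with voltages and summing the column currents performs a whole vector-matrix multiplication in a single step via Ohm's and Kirchhoff's laws.

```python
import numpy as np

def crossbar_vmm(G, v):
    """Ideal resistive crossbar: I_j = sum_i G[i, j] * v[i].

    G : (rows, cols) array of cell conductances (siemens), programmed
        so that each conductance encodes one matrix weight.
    v : (rows,) array of input voltages on the word lines (volts).
    Returns the bit-line currents, i.e. the vector-matrix product.
    """
    return v @ G  # Kirchhoff's current law sums the Ohm's-law currents per column

# Example: a 3x2 weight matrix stored as conductances (illustrative values).
G = np.array([[10e-6, 2e-6],
              [ 5e-6, 8e-6],
              [ 1e-6, 4e-6]])     # siemens
v = np.array([0.2, 0.1, 0.3])    # read voltages (V)
print(crossbar_vmm(G, v))        # column currents (A), computed "in one step"
```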

Fig. 2

The development of the computing structure.21 (a) von Neumann architecture. (b) Near-memory
computing structure. (c) Computing-in-memory structure.

3. RRAM

3.1. Memristor

In 1971, Professor Chua10,22 of the University of California, Berkeley introduced the memristor
concept and later extended its definition. He considered the memristor to be the fourth
fundamental circuit element alongside resistance, capacitance, and inductance: it fills the gap in
the fundamental circuit relationships by linking charge and flux. Its mathematical model is the
ratio of magnetic flux to charge, and the memristance is numerically equal to the ratio of the
voltage applied across its terminals at a given moment to the current flowing through it. In
practice, however, the value of a memristor is determined by the history of charge or flux that
has flowed through it over time, making it a non-linear resistive element with memory
characteristics.23
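In symbols, Chua's constitutive relations are

$$\varphi = f(q), \qquad M(q) \equiv \frac{\mathrm{d}\varphi}{\mathrm{d}q}, \qquad v(t) = M\big(q(t)\big)\,i(t), \qquad q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau,$$

where $q$ is the charge, $\varphi$ is the flux linkage, and the memristance $M(q)$ depends on the entire history of the current through the device.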

The prevailing explanation for the resistive principle of the memristor is the wire filamentary
resistive switching mechanism. The wire filament mechanism assumes that the resistance
change of the device corresponds to the formation and rupture of wire filaments within the
material.24 In most cases, a freshly prepared memristor exhibits few defects and
shows an initial resistance state (IRS), as shown in Fig. 3(a). When a positive voltage is applied
to the device, the conductive defects in the resistive layer are connected to form conductive fila‐
ments. The device's conductivity is enhanced, showing a low resistance state (LRS), as shown in
Fig. 3(b). When a negative voltage is applied to the device, the conductive filaments break, and
the device's conductivity is reduced, resulting in a high resistance state (HRS), as shown in
Fig. 3(c). Some of the memristors even have intermediate resistance levels or more resistance
levels.
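A minimal behavioural model of this SET/RESET switching (a sketch with illustrative thresholds and resistance values, not fitted to any real device) can be written as:

```python
class Memristor:
    """Toy bistable RRAM cell: the filament forms above +v_set (-> LRS)
    and ruptures below -v_reset (-> HRS); the state persists at zero bias."""

    def __init__(self, r_hrs=1e6, r_lrs=1e3, v_set=1.0, v_reset=-1.0):
        self.r_hrs, self.r_lrs = r_hrs, r_lrs      # resistances in ohms (assumed)
        self.v_set, self.v_reset = v_set, v_reset  # switching thresholds in volts
        self.resistance = r_hrs                    # fresh device: high resistance

    def apply(self, voltage):
        """Apply a voltage pulse; update the state and return the current."""
        if voltage >= self.v_set:
            self.resistance = self.r_lrs           # SET: filament connects
        elif voltage <= self.v_reset:
            self.resistance = self.r_hrs           # RESET: filament ruptures
        return voltage / self.resistance           # low voltages only read

cell = Memristor()
for v in [0.1, 1.2, 0.1, -1.2, 0.1]:               # read, SET, read, RESET, read
    i = cell.apply(v)
    print(f"V = {v:+.1f} V -> R = {cell.resistance:.0e} ohm, I = {i:.2e} A")
```

The non-volatility is captured by the fact that the resistance only changes when a threshold is crossed; low-voltage reads leave the state untouched.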

Fig. 3

The wire filament mechanism. (a) Initial resistance state (IRS). (b) Low resistance state (LRS). (c)
High resistance state (HRS).

Researchers exploit the resistive switching properties of memristors to implement computation.
The memristor used in memory applications is called resistive random-access memory
(RRAM). RRAM has high speed, high durability and multi-bit storage capability. It meets the de‐
velopment requirements of high capacity, high read/write endurance, faster read and write
speeds, and lower power consumption. RRAM is expected to replace traditional storage devices
to achieve a big boost in computing power.

3.2. Structure of RRAM

The basic structure of a memristor is a sandwich structure consisting of upper and lower metal
electrodes and an intermediate insulator resistive layer. RRAM is constructed as an across-ar‐
ray structure to improve integration. Realization of high-density and reliable RRAM is crucial
to the development of next-generation information storage and computing. The structure can
be divided into active and passive arrays. In the active array, the RRAM forms an array with
field-effect transistors. The transistors control the reading, writing and erasing of resistive elements
and can effectively isolate adjacent cells from interference. However, the active array design is
complex and not conducive to integration. In passive arrays, each memory cell is defined by up‐
per and lower electrodes formed by intersecting word and bit lines, as shown in Fig. 4(a).
Passive arrays are used in most high-density RRAM arrays, and passive arrays are more con‐
ducive to the three-dimensional (3D) integration of the chip.


Fig. 4

RRAM across-array architecture and scaling.25 (a) RRAM across-array structure with a single
memory layer. (b) Horizontal stacked 3D across-array structure. (c) Vertical 3D across-array
structure.

The across-array structure of RRAM is one of the bases of its fast and reliable computation.
Employing a 3D structure can reduce resource-expensive tasks to a manageable size and
provides substantial improvements in speed and energy efficiency while running
complex neural network models. A broad range of applications can be envisioned with further
optimizations in the device performance and more carefully designed on-chip peripheral ana‐
logue circuitry, even though the current 3D design might not be the most scalable architecture
compared to its two-dimensional (2D) counterpart. According to the stacking form, 3D RRAM
can be divided into a horizontal stacked 3D across-array structure (HRRAM) and vertical 3D
across-array structure (VRRAM). HRRAM is composed of multilayer planar two-dimensional
RRAM superimposed by the fabrication process, as shown in Fig. 4(b). VRRAM is made by ro‐
tating the conventional horizontal across-array structure by 90° and extending it repeatedly in
the horizontal direction, as shown in Fig. 4(c).

Three-dimensional fabrication technology not only increases the integration of the chip and
greatly reduces the chip area, but also increases the computing speed and brings newer appli‐
cations. However, making 3D chips is still a very difficult process, and only a few labs can make
them. In 2022, Tang et al.26 fabricated a RRAM across-array using MoS2 and then implemented
a single three-dimensional RRAM cube by overlaying two-dimensional MoS2 layers, as shown in
Fig. 5(a). This work paves the way for implementing 3D memristor arrays in high-density
neuromorphic computing systems and shows that prepared 2D RRAM can be successfully
integrated into 3D. However, 3D-HRRAM requires critical lithography and
other processes for each stacked layer, and this overhead manufacturing cost increases linearly
with the number of stacks. The three-dimensional vertical structure occupies a smaller area in
the integrated array of the same number of devices and is suitable for large-scale operations.
Huang et al.27 proposed a vertical architecture of RRAM design, as shown in Fig. 5(b). The
RRAM is composed of a graphene planar electrode, multilayer hexagonal boron nitride (h-BN)
insulating dielectric stacked layers, an AlOx/TiOx resistive switching layer and an ITO pillar
electrode, and it exhibits reliable device performance. The vertical three-dimensional structure
combining the graphene planar electrode with a multilayer h-BN insulating dielectric can pave the
way toward a new area of ultra-high-density memory integration in the future. Q. Huo et al.28
reported a two-kilobit non-volatile CIM macro based on an eight-layer 3D VRRAM, as shown in
Fig. 5(c). The chip is fabricated using a 55 nm complementary metal-oxide-semiconductor
process. Scaling such systems to three-dimensional arrays could provide higher parallelism, ca‐
pacity and density for necessary vector–matrix multiplication operations. Lin et al.29 success‐
fully constructed an eight-layer array, as shown in Fig. 5(d). The 3D array was fabricated on top
of a silicon wafer with a 100 nm thick thermally grown oxide layer. Researchers programmed
parallel-operating kernels into the prepared RRAM, implemented a convolutional neural net‐
work and achieved software-comparable accuracy in recognizing handwritten digits.

Fig. 5

Different forms of RRAM. (a) 3D-HRRAM: a 3D memristor array with buried metal interconnects and
logic circuits. (b–d): 3D-VRRAM. (b) The fabrication flows of a single 2D material based vertical
RRAM. (c) High-precision CIM scheme based on MLSS 3D VRRAM. (d) 3D monolithic integrated
memristor circuits. (e and f): Flexible RRAM. (e) Flexible RRAM crossbar arrays with a Ti/PPX-C/Cu
structure. (f) The flexible memory devices based on poly(chloro-para-xylylene). (g) 3D flexible RRAM
artificial synaptic network using a low-temperature atomic layer.


Fig. 6

Computing-in-memory realized by RRAM. (a) IMPLY logic and (b) NAND logic proposed by HP LABS.
(c) The multi-functional boolean logic circuit schematic. (d and e): MAGIC logic computation. (d)
MAGIC two-input OR gates and (e) NAND/NOR gates using the convert VTM method. (f–i): Complex
computation. (f) Across-array circuits for cases beyond the constraints. (g) Memristor-based quinary
half adder. (h) Two-bit adder circuit with parasitic elements based on RRAM. (i) Digital full adder
based on RRAM.

In addition, RRAM, made of flexible materials for particular application scenarios, is also a hot
topic. Kim et al.30 used PPXC as a resistive switching layer and substrate to fabricate RRAM flex‐
ible across-arrays, as shown in Fig. 5(e). Researchers applied them to neuromorphic acceler‐
ated computing to test their performance with high electrical conductivity and durability.
Huang et al.31 proposed a kind of single-component polymer RRAM based on poly(chloro-para-
xylylene), as shown in Fig. 5(f). The device has excellent chemical stability and high CMOS
process compatibility as well as further reduced operation current. Wang et al.32 successfully
prepared a 3-layer flexible RRAM array by a low-temperature atomic layer deposition tech‐
nique to achieve high-density binary storage function and multi-bit storage in a single device,
as shown in Fig. 5(g). Preparing flexible devices opens up new possibilities for future high-den‐
sity, high-performance wearable applications.

A RRAM across-array can greatly reduce chip volume while improving computing efficiency and
capacity. But the problems are also obvious: chip design and fabrication processes are difficult,
every application needs to be customized, and although there is much room for development,
several issues remain unresolved:


(a) When writing and programming, current inevitably passes through other resistive cells. This
sneak-path leakage current can corrupt the accuracy of the information read while adding
unnecessary power consumption (see the sketch after this list).

(b) Due to the fabrication process, the ion movement inside the device is somewhat random
and not perfectly uniform from one resistive cell to another.33

(c) When the device is operating, thermal crosstalk alters the resistance state of surrounding
resistive cells, leading to inaccurate calculations. The effect of thermal crosstalk increases
further with increasing density.
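To illustrate problem (a), the toy calculation below (an idealized resistor network with illustrative values) compares reading a high-resistance cell in a 2 × 2 passive array with and without the sneak path formed by the three unselected cells; when those cells are in the low-resistance state, the selected HRS cell is misread as conductive:

```python
V_READ = 0.2                  # read voltage (V), illustrative
R_HRS, R_LRS = 1e6, 1e3       # high / low resistance states (ohms), illustrative

# Ideal read of the selected cell alone: I = V / R.
i_ideal = V_READ / R_HRS

# In a 2x2 passive array the three unselected cells form a series path
# in parallel with the selected cell (the "sneak path"); worst case: all LRS.
r_sneak = 3 * R_LRS
r_apparent = 1 / (1 / R_HRS + 1 / r_sneak)    # parallel combination
i_measured = V_READ / r_apparent

print(f"ideal read current: {i_ideal:.2e} A")       # ~2e-07 A
print(f"with sneak path:    {i_measured:.2e} A")    # ~7e-05 A, HRS misread as LRS
```

This is why practical arrays add selector transistors (the active array above) or biasing schemes to suppress sneak currents.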

4. Applications

RRAM is considered one of the outstanding candidates for emerging storage technologies. It
has low power consumption, high speed computing, high density and non-volatile data storage
capability. RRAM has been applied in logic computing, neural networks, brain-like computing,
and fused technology of sense-storage-computing.

4.1. Computing-in-memory

The simplest application is logic operations: current logic circuits implement logic-gate
operations using resistive cells instead of traditional transistors. HP LABS34
first prepared a physical model of RRAM in 2008 and implemented IMPLY logic operations. The
primary logic cell and truth table are shown in Fig. 6(a). IMPLY and NOT form a complete logic
set from which all 16 two-input logic functions can be built. However, the operation is
destructive: the final output of the IMPLY operation overwrites one of the original input values.
Implementing other logic functions with IMPLY also requires more steps, which is less efficient.
Then, HP LABS proposed the 3M1R structure, as shown in Fig. 6(b). It is an improve‐
ment to IMPLY, whose basic logic unit consists of three resistive cells and a voltage divider re‐
sistor. The high resistance state represents logic “1” and the low resistance state represents
logic “0”, where P and Q represent the input variables and S represents the output variables.
Based on the research from HP LABS, more complex logic operations can be obtained using
RRAM. Dong et al.35 combined multiple memristors in a single circuit to implement logic op‐
erations, as shown in Fig. 6(c), which offers the possibility of implementing advanced comput‐
ing architectures. The study implements two-input or multi-input AND, OR, NOR, and NAND op‐
erations and single-input copy and NOT operations. In each logic gate, the circuit uses non-
volatile resistors of the RRAM as input and output states, thus enabling stateful logic opera‐
tions. In contrast to several existing methods, this approach yields a versatile stateful logic
circuit that can perform multiple stateful logic operations simultaneously.
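As a concrete illustration of stateful IMPLY logic (a behavioural sketch of the truth table, not a circuit-level model), each memristor state encodes one bit, and the operation q <- p IMPLY q overwrites q, which is exactly the destructive property noted above:

```python
def imply(p, q):
    """Material implication: 0 only when p = 1 and q = 0."""
    return 1 if (p == 0 or q == 1) else 0

def nand_via_imply(p, q):
    """NAND(p, q) from two IMPLY steps and a work cell initialized to 0:
    step 1: work <- q IMPLY work   (= NOT q, since work starts at 0)
    step 2: work <- p IMPLY work   (= p IMPLY NOT q = NAND(p, q))"""
    work = 0
    work = imply(q, work)
    work = imply(p, work)
    return work

for p in (0, 1):
    for q in (0, 1):
        print(f"p={p} q={q}: p IMPLY q = {imply(p, q)}, NAND = {nand_via_imply(p, q)}")
```

Because NAND is universal, this two-step sequence is in principle enough to build any Boolean function inside the array, at the cost of the extra steps and overwritten inputs described above.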

In addition to the structure proposed by HP LABS, researchers connected RRAM in series and
parallel. S. Kvatinsky et al.36 proposed a memristor-aided logic (MAGIC) circuit that uses resis‐
tive values as logical state variables, as shown in Fig. 6(d). Under applied voltage control, the
five essential logic functions can be achieved by connecting the RRAM in series and parallel.
This design approach allows the results of logic calculations to be stored in a separate resistive
cell, avoiding the problem of data overwriting arising in IMPLY logic operation. F. Mozafari et
al.37 proposed a new memristor-based NAND/NOR logic gate with a similar structure for gen‐
eral-purpose logic gates requiring two different input voltages. The structure consists of two
input memristors and one output memristor, as shown in Fig. 6(e). RRAM-based
logic calculations can reduce the computational complexity of the problem, as well as reduce
the amount of data being accessed by performing the calculations within the across-array.

In recent years, the improvement of the technological level and the progress of materials re‐
search have enabled the successful preparation of larger RRAM matrices and the realization of
more and more complex functions. Cui et al.38 proposed a family of NMOS-like RRAM gates. All
gates are array implementable, and the gate family is logically complete. AND and NOR gates
consume only one cycle during the computation phase. The gate circuit saves half the number
of RRAM devices compared to its CMOS-based counterpart. By using RRAM gates and across-
array structures, complex logic operations can be realized, as shown in Fig. 6(f). A. H. Fouad et
al.39 proposed a multi-valued logic adder, as shown in Fig. 6(g). The study discussed the possi‐
bility of extending a three-valued adder circuit to a multi-valued logic adder theoretically. By
exploiting the properties and dynamics of RRAM, the circuit achieved the advantages of han‐
dling different numbering systems, increasing density, and reducing processing time. Siemon et
al.40 proposed a multi-input memristive switch logic, as shown in Fig. 6(h). This work enabled
the function X OR (Y NOR Z) to be performed in a single step with three memristive switches.
This OR/NOR logic gate increases the capabilities of memristive switches, improving the overall
system efficiency of a memristive switch-based computing architecture. N. Taherinejad et al.41
presented a new architecture for a digital full adder, as shown in Fig. 6(i). The circuit is faster than
existing IMPLY-based serial designs while requiring less area compared to the existing parallel
design.

In the latest studies, researchers hope not only to implement logical operations, but also to im‐
plement complex mathematical analysis in RRAM across-arrays. Tian et al.42 realized a hard‐
ware Markov chain algorithm in a single device for machine learning. M. Teimoory et al.43 de‐
signed a 2 × 2 multiplier circuit, as shown in Fig. 7(a). The circuit requires only sixteen resistive
cells, eight transistors, and only one calculation time step for the multiplication operation,
which is of low cost. B. Chakrabarti et al.44 demonstrated vertical monolithic integration of 3D
RRAM crossbars on a foundry-processed 5 × 5 CMOS chip. The chip can realize dot-product
operation, which is the first application of functional 3D CMOL hybrid circuits. Xie et al.45 used
h-BN RRAM across-arrays to demonstrate the hardware implementation of dot product opera‐
tion and the linear regression algorithm, as shown in Fig. 7(b)–(d). Y. Halawani et al.46 pro‐
posed an XNOR-based RRAM content addressable memory (CAM) with an analog time-domain
adder function for efficient winning class computation, as shown in Fig. 7(e). The chip had 31
times less area and about three times less energy consumption than state-of-the-art RRAM de‐
signs. P. Kumar et al.47 used h-BN as an electrolyte to implement the fabrication of an across-
array RRAM, as shown in Fig. 7(f), and combined it with CMOS circuits to implement extreme
learning machine algorithms. With CMOS, circuits implement the encoder unit and RRAM ar‐
rays implement the decoder unit. The hybrid architecture performs well on complex audio, im‐
age and non-linear classification tasks with real-time data sets.


Fig. 7

Complex mathematical formulas realized by RRAM. (a) Logic design of the 2 × 2 multiplier. (b)
Schematic of Au/h-BN/Ti RRAM arrays and cross-sectional schematic of a single memristor. (c)
Schematic of the experimental setup for demonstration of basic dot-product operation on h-BN
RRAM arrays. (d) Flow diagram for stochastic multivariable linear regression. (e) Circuit designs of
the proposed RRAM-based CAM, analog time-domain adder and winning class logic. (f) System
architecture showing a CMOS encoder chip and memristor decoder chip along with its functional
sub-blocks.

These studies are of great significance for further implementation of neuromorphic computing
and machine learning hardware based on RRAM across-arrays. The compatibility of the above
design with CMOS technology provides a new way for the development of digital logic circuits.
The introduction of RRAM has been presented as a solution to the fundamental limitations of
VLSI systems.

4.2. Neural networks and brain-like computing

In recent years, RRAM has been widely used in neuromorphic networks. Artificial neu‐
ral networks and all their variants are the primary tools for machine learning tasks. The contin‐
uous production of large amounts of data makes it possible to train and operate artificial neu‐
ral networks successfully. However, in order to achieve high inference accuracy, neural net‐
works usually require a large number of parameters. The traditional structure of separate
memory and computing units makes the whole process consume a great deal of time and
energy, but using RRAM to build neural networks is expected to solve this problem. RRAM has resistive properties and is
similar to synapses in neural networks, so the neural network hardware and a neural network
model based on RRAM have intrinsic consistency. A single device can be used as a synapse and
can integrate storage and operation. A RRAM-based neural network integrates computing and
storage closely, eliminating data transmission between the processor and memory, thus im‐
proving the overall system performance and saving most system energy consumption. RRAM is
well compatible with CMOS processes and can be scaled up on a large scale through cross-array
structures. Through CMOS processes, functional circuits are added to the periphery of
RRAM, so that the system can complete the calculation of a matrix in one operation, thus play‐
ing an important role in high dimensional computation.

The perceptron model is an early neural network algorithm. In the calculation process of the
perceptron neural network, matrix vector multiplication between the input information vector
and weight matrix consumes a lot of computing resources. However, using RRAM across-arrays
to realize matrix vector multiplication in parallel in one step can greatly reduce the energy con‐
sumption of the hardware neural network. The RRAM across-array is used to store the synap‐
tic weight matrix, and the conductance at each crossing is used to represent the weight value of
a synaptic connection. MNIST image classification is modeled using a single-layer perceptron
that is trained ex situ and a large-scale multi-layer perceptron classifier based on a more
advanced conductance tuning algorithm. M. Prezioso et al.48 first produced an across-array of
RRAMs for neural network computing. Researchers used a 12 × 12 RRAM across-array to form
a single-layer perceptron network to recognize and classify black and white letters in 3 × 3 pix‐
els, as shown in Fig. 8(a). F. M. Bayat et al.49 produced two 20 × 20 RRAM across-arrays, as
shown in Fig. 8(d), and integrated them with discrete CMOS components on two printed circuit
boards to implement a multilayer perceptron (MLP) neural network. The MLP network has 16
inputs, 10 hidden layer neurons, and 4 outputs, enough to classify 4 × 4 pixel black and white
patterns into 4 categories. Then, more complex array structures were proposed based on pre‐
vious research to realize much more complex computing. H. Kim et al.50 developed a 64 × 64
passive crossbar circuit with almost 99% of its cross-point metal-oxide memristors working,
based on a foundry-compatible fabrication process. The circuit achieved a relative average error
of <4% when programming 4K grey-scale patterns, approaching 1% in the best case. It is an essential
step towards realizing a human brain-scale integrated neuromorphic system.
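A schematic software analogue of such a perceptron classifier (sizes follow the 16-input, 4-output MLP described above; the differential conductance pair used here to represent signed weights is a common mapping, not necessarily the exact scheme of the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Signed weight matrix W (4 outputs x 16 inputs) stored as two non-negative
# conductance matrices so that W = G_plus - G_minus.
W = rng.normal(scale=0.5, size=(4, 16))
G_plus, G_minus = np.maximum(W, 0), np.maximum(-W, 0)

def perceptron_layer(x):
    """One crossbar 'read': the positive and negative column currents are
    subtracted, then a neuron nonlinearity is applied in the periphery."""
    i_out = G_plus @ x - G_minus @ x     # two VMMs, one step each in hardware
    return np.tanh(i_out)

x = rng.random(16)                       # a 4x4 binary/grey pattern, flattened
print(perceptron_layer(x).argmax())      # index of the winning class
```

Training then amounts to nudging the stored conductances with write pulses until the classification error is minimized, either in situ or by transferring weights computed in software.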


Fig. 8

Perceptron network based on RRAM. (a–c): Single-layer perceptron network. (a) Top-level
description of the network. (b) The used input pattern set. (c) 12 × 12 RRAM across-array circuit.
(d–f): Multilayer perceptron network. (d) Double layer 20 × 20 RRAM across-array circuit combined
with the CMOS process. (e) 64 × 64 RRAM passive across-array circuit. (f) Programming Einstein
image to (e) with a 5% relative error.

The convolutional neural network (CNN) reduces the dimensions by convolutional operation,
reduces parameter complexity, simplifies complex problems before processing, and greatly re‐
duces the calculation cost. The convolutional neural network based on memristor cross arrays
mainly consists of two parts: the convolutional operation part and the fully connected layer
part. On the one hand, the memristor array can store convolution kernels and complete the
matrix vector multiplication of input information and convolution kernels in one step, which
greatly improves the computational efficiency. On the other hand, the fully connected layer part
of a convolutional neural network is a multi-layer perceptron, which can also be realized in
parallel by using the memristor cross array as mentioned above. Until recently, however, CNNs
had not been fully implemented in hardware via a RRAM across-array. P. Yao et al.51
implemented complete hardware for convolutional networks based on RRAM array chips. The
team used eight arrays of 2048 RRAM cells each to implement a five-layer convolutional neural
network. In processing convolutional neural networks, the chip was two orders of magnitude
more energy efficient than a graphics processing unit (GPU) running conventional neural network computation.
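The mapping of a convolution onto a crossbar can be sketched as follows (a minimal illustration: each kernel is flattened into one column of the conductance matrix, so one crossbar read computes every kernel's response to a patch simultaneously):

```python
import numpy as np

def conv_via_crossbar(image, kernels):
    """'Valid' 2D convolution computed as one VMM per image patch.
    kernels: (n_kernels, k, k), stored as a (k*k, n_kernels) matrix G."""
    n_k, k, _ = kernels.shape
    G = kernels.reshape(n_k, k * k).T           # one flattened kernel per column
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1, n_k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k].ravel()
            out[i, j] = patch @ G               # one crossbar read per patch
    return out

image = np.random.default_rng(1).random((8, 8))
kernels = np.stack([np.full((3, 3), 1 / 9),     # mean (blur) filter
                    np.array([[1, 0, -1]] * 3,  # horizontal edge detector
                             dtype=float)])
print(conv_via_crossbar(image, kernels).shape)  # (6, 6, 2): two feature maps
```

In a RRAM implementation the inner double loop disappears: patches are streamed as voltage vectors while the kernels stay resident in the array, which is where the energy savings over a GPU come from.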

Stochastic neural networks introduce randomness in one of two ways: by assigning stochastic
transfer functions to the neurons or by assigning random weights to them. This makes
stochastic neural networks very useful in optimization problems, because random
perturbations help avoid local optima. The key operation in stochastic
neural networks is the random dot product, which has become the latest method to solve the
problems of machine learning, information theory and statistics. Although there have been
many demonstrations of dot product circuits and random neurons, there is still a lack of effi‐
cient hardware implementation combining these two functions. M. R. Mahmoodi et al.52 pro‐
posed versatile stochastic dot product circuits based on nonvolatile memories for high perfor‐
mance neuro-computing and neuro-optimization, as shown in Fig. 9. F. Zahari et al.53 demonstrated
analogue pattern recognition with stochastic-switching binary CMOS-integrated RRAM.
Researchers prepared a stochastic binary RRAM device based on polycrystalline HfO2 and have
produced software neurons. Based on this, a random neural network is used for image recog‐
nition. The convergence rate of this new learning algorithm is very fast, which may significantly
reduce the power consumption of the system. This study has potential applications in stochas‐
tic neural networks to solve MNIST pattern recognition tasks.

Fig. 9

Stochastic neural networks.52 (a) Stochastic dot-product circuit and its applications in
neurocomputing. (b) RRAM-based restricted Boltzmann machine. (c) A bipartite graph of the
considered RBM network and its implementation (the red rectangle highlights the utilized area of a
20 × 20 across-array).

Reservoir computing (RC) is a concept developed from recurrent neural networks (RNNs) that
has recently been successfully used to implement a wide range of tasks, such as image pattern
recognition, time series prediction, and pattern generation. Researchers have successfully used
memristors for this purpose as well. Fig. 10 shows the concept of a memristor-based RC sys‐
tem. The spikes collected from firing neurons are used directly as inputs that excite the
memristor. The reservoir space is further extended with the concept of virtual nodes to help
handle complex temporal inputs. A simple neural network is used as the reservoir's readout layer to
produce the final output. In 2020, Lu et al.54 investigated reservoir computing. Researchers
demonstrated a RC system based on RRAM whose states reflect the temporal characteristics of
neural spike sequences; it was successfully used to identify neural firing patterns, monitor
transitions between firing patterns, and identify neural synchronization states between
different neurons. This work makes it possible to realize efficient neural-signal analysis with
high spatial and temporal accuracy. In addition, researchers made a RRAM across-array based
on WOx, used it to recognize digits under added noise, demonstrating the noise immunity and
stability of the system's recognition, and expanded it to a small reservoir composed of 88
memristors for handwritten digit recognition. X. Liang et al.55 also ap‐
plied RRAM to reservoir computing, an artificial neural network for efficient processing of tem‐
poral signals, with the potential to have a significant impact in the field of brain-like computing.
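A minimal software sketch of the RC idea (an echo-state-style reservoir with a trained linear readout; all sizes are illustrative, and in the memristor version the random recurrent state below is replaced by the devices' own fading-memory dynamics):

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, T = 100, 500

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))           # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]  # input time series
target = np.roll(u, -1, axis=0)                     # task: predict the next step

x, states = np.zeros(n_res), np.zeros((T, n_res))
for t in range(T):                                  # the reservoir is never trained
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the linear readout layer is fitted (least squares), as in the text.
W_out = np.linalg.lstsq(states, target, rcond=None)[0]
pred = states @ W_out
print("one-step prediction RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```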


Fig. 10

Schematic showing the concept of a memristor-based RC system for neural activity analysis.

On the basis of the development of neural networks, the concept of brain-like computing has
been put forward. From an information technology perspective, researchers have abstracted
the way decisions are made in the human brain. They connected it through multi-layered artifi‐
cial neural networks to create a non-linear, adaptive computing model for information process‐
ing. Wu et al.56 have been focusing on research on brain-like computing for a long time and
have described the RRAM-based brain-like circuit in detail, as shown in Fig. 11(a). By introduc‐
ing artificial dendritic computational units with rich dynamic properties, the team constructed a
novel artificial neural network,57 significantly reducing system power consumption while im‐
proving computational network accuracy. Compared to traditional processes, the new system
has a 30 times reduction in dynamic power consumption, an 8% increase in accuracy, and an
overall system power consumption reduced by three orders of magnitude. Z. Wu et al.58
built an artificial sensory neural system implementation scheme with habituation properties
based on RRAM, as shown in Fig. 11(b). The system used habituation as a biological learning
rule to build a habituated impulse neural network that can be applied to robot autonomous
cruising obstacle avoidance. The neural network effectively improves robot obstacle avoidance
efficiency.

Fig. 11

Brain-like computing. (a) Neural network operations using RRAM implementation. (b) Biological
afferent nerve vs. the artificial afferent nerve.


The application of RRAM-based neural networks is an indispensable research direction in
neuromorphic computing. As a new type of synaptic device, the memristor realizes the hardware
mapping of the synaptic weight well by means of multivalue modulation under pulse and can
store the synaptic weight matrix and realize in situ calculations. The parallel matrix multiplica‐
tion and addition capability of the across-array structure of RRAM realizes the acceleration of
neural network computing and provides a solution for constructing a new computing
architecture.

4.3. Fused technology of sense-storage-computing

The human nervous system senses the physical world in an analogue but efficient way.
Researchers have been working on using hardware to simulate human perceptual systems.
Sensors take signals such as light, pressure and sound and convert them into electrical signals.
The processor is used to analyze electrical signals and simulate human sensory systems.
However, traditional computing platforms analyze these signals slowly and with limited precision. As a
new type of memory, RRAM can be made of materials that are sensitive to peripheral signals
and can itself be used as a sensor. In addition, RRAM can combine microsensors with neural
network computing. The sense-storage-computing fusion technology based on RRAM brings a
new research direction for bionics technology.

Visual perception is implemented by exploiting the effect of light signals on RRAM or by using
the RRAM across-array as a processing circuit; current systems can identify digits and measure
light intensity. Emboras et al.59 first discovered the change in light-signal transmission in a
waveguide during resistive switching, which laid the foundation for visual perception based on
RRAM. Zhou et al.60 prepared a photo-resistive memory, as shown in Fig. 12(a) and (b). The
component can convert light signals into electrical signals and store the data briefly. In conjunc‐
tion with developments in neural networks, researchers used RRAM as a synaptic node to en‐
able more advanced visual processing. Lorenzi et al.61 used an RRAM array to recognize a 5 ×
5 pixel binary picture. Sarkar et al.62 used an RRAM-based neuromorphic circuit system and
trained and tested the system to successfully recognize basic Arabic numerals. Beyond digits,
visual recognition of infrared, low-light and full images is currently being investigated.


Fig. 12

Fused technology of sense-storage-computing. (a) Schematic structure of ORRAM. (b) Schematic
structure of an 8 × 8 ORRAM array. (c) Three-dimensional schematic of the investigated optically
readable plasmonic RRAM.59 (d) The common implementation scheme for sound localization
application with CMOS circuits. (e) A conceptual diagram of a memristor-based neuromorphic sound
localization system. The RRAM across-array acts as synapses to deal with binaural sound signals. (f)
The conceptual diagram of haptic memory. (g) Image for 7 × 7 SS-RRAM arrays and ECG electrode
mounted onto the wrist. (h) Structure of a memristor for gas sensing applications.

In terms of sound localization, researchers have localized sound sources using the theory of
interaural time difference, i.e. the time difference between a sound's arrival at the two ears, as
shown in Fig. 12(d). Gao et al.63 have a leading study in this
area. The team fabricated a 128 × 8 array of memristors and proposed a new sound localiza‐
tion algorithm. The devices were tested with pulses to simulate sound signals, showing that the
training accuracy and energy consumption of the system built using the memristor array were
significantly improved, representing a significant advance in auditory localization systems with
memristors. F. Moro et al.64 designed an event-driven object localization system inspired by the
owl auditory cortex using a combination of state-of-the-art piezoelectric micromechanical ul‐
trasound sensors and RRAM. The system was more efficient and power-efficient than micro‐
controllers performing the same task by several orders of magnitude.
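The underlying geometry is simple: for a far-field source at azimuth θ, the extra path to the far ear gives a delay Δt = d·sin θ / c, which can be inverted to estimate the angle (the spacing d and speed of sound c below are illustrative assumptions):

```python
import numpy as np

C = 343.0   # speed of sound in air (m/s)
D = 0.20    # "ear"/sensor spacing (m), illustrative

def itd(angle_deg):
    """Far-field interaural time difference for a source at this azimuth."""
    return D * np.sin(np.radians(angle_deg)) / C

def angle_from_itd(delta_t):
    """Invert the model: estimate the azimuth from a measured delay."""
    return np.degrees(np.arcsin(np.clip(delta_t * C / D, -1.0, 1.0)))

dt = itd(30.0)   # source 30 degrees off-centre -> ~292 microseconds
print(f"ITD = {dt * 1e6:.0f} us -> estimated angle = {angle_from_itd(dt):.1f} deg")
```

In the cited memristor systems, the array is trained to infer the angle from such timing differences rather than computing this formula explicitly.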

Haptic perception is mainly studied in terms of sensing external pressure, where pressure data
are converted into electrical signals and stored in memory. The location of pressure generation
can be identified through the array structure of the resistive memory, as shown in
Fig. 12(f). Park et al.65 used a flexible RRAM array for ECG measurements, as shown in
Fig. 12(g). Researchers successfully demonstrated stable data storage of cardiac signals, a dam‐
age-reliable memory triggering system using a triboelectric energy-harvesting device, and
touch sensing via pressure-induced resistive switching.

As for olfactory perception, the research started late, and current systems can identify only a
small number of gases. Researchers combine a gas sensor with RRAM made of special
materials and use the storage array to store and process the data quickly for recognition.
A. Adeyemo et al.66 used an across-array RRAM as a gas sensor, as shown in Fig. 12(h).
Researchers used the HP LABS fabricated TiO2 based memristor model in an attempt to im‐
prove sensing accuracy. The experiment provides the basis for the research of smelling percep‐
tion. At present, RRAM-based smell sensing systems suffer from long sensing periods, low
sensing accuracy, sensitivity to only a single gas, and narrow applicability. Most systems show
only a small response gap between similar gases, which requires additional amplifying circuits
for reliable classification, increasing the structural complexity.

4.4. Other applications

RRAM has good switching and isolation characteristics for high-frequency communications. In
5G communication, the RRAM has 50 times the switching efficiency of other non-volatile
switches. At 6G operating frequencies, RRAM also exhibits high isolation and has sub-nanosec‐
ond pulse switching, low BER and a high signal-to-noise ratio. M. Kim et al.67–69 worked on
RRAM for high-frequency applications at the University of Texas. RRAM made from MoS2 and
h-BN is suitable as an energy-efficient RF switch, overcoming the limitations of transistors and
pick-and-place switches. Researchers also predicted that RRAM could be used in mobile sys‐
tems, low-power IoT and terahertz beam steering. The unique switching randomness of RRAM
has important applications in information security and can also be used to design chaotic cir‐
cuits. H. M. Ibrahim et al.70 have implemented a robust and lightweight physically unclonable
function (PUF) architecture using RRAM, with implications for cryptographic key and security
application upgrades. In addition, P. S. Zarrin et al.71 proposed a neuromorphic on-chip recog‐
nition of saliva samples of COPD and healthy controls using RRAM. This is an application of
RRAM in the medical field.

5. Conclusion and outlook

With the constraints of the von Neumann structure and the physical limits of semiconductor
processing, computers are experiencing bottlenecks in their ability to process data. As a new
computing principle, computing-in-memory technology embeds the computation capability into
the storage unit. This increases computation speed, reduces system power consumption, and
opens up more application possibilities.

RRAM is a new memory type with resistive and non-volatile properties. The structure, mecha‐
nism and preparation of RRAM have been the focus of attention, with research continuing in
in-memory logic operations, brain-like computing and fused technology of sense-storage-com‐
puting. However, research into RRAM still faces some challenges:

(a) RRAM has inherent randomness. The characteristics presented by memristors are not identi‐
cal from cycle to cycle. There is also variability in devices made from the same batch. The
preparation process of RRAM is being improved to try to solve this problem. Moreover, the
addition of peripheral circuits to the memristor array, the use of check methods and redundant
design are also used to reduce errors.

(b) The best materials for the resistive layer in RRAM are still being screened. Metal oxides, 2D
materials, emerging materials and organics are currently used to prepare RRAM. The choice of
material largely determines memristor performance.
High resistance ratios, good homogeneity and matching proven manufacturing processes and
equipment are the basis for judging the suitability of the material. We need to develop new ma‐
terials with innovative approaches and explore the best material composition ratios based on
already well-defined material elements. Better materials will enhance the performance
achievable in each application.

(c) Mass production is not yet feasible, so RRAM cannot currently replace most conventional
memories. Across-array circuits are still at the small-scale laboratory stage. The preparation of
memristors is difficult, with only a few laboratories and fabrication lines meeting the demand.
The variety of materials and preparation processes involved each have their own
characteristics, and integration with existing mature processes takes time. Complex functional
applications require customization; different functions can be achieved by selecting different
memristor units.

Data availability

The data that support the findings of this study are available from the corresponding author
upon reasonable request.

Conflicts of interest

The authors declare that they have no known competing financial interests or personal rela‐
tionships that could have appeared to influence the work reported in this paper.


Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant
61971389, Grant 62131010, and Grant 61801253 and in part by the Zhejiang Provincial
Natural Science Foundation of China under Grant LR22F010001, Grant LZ20F010006, Grant
LY20F010003 and LY21F010006 and the Fundamental Research Funds for the Provincial
Universities of Zhejiang Grant 2020YW01.

References

1. Lundstrom M. Moore's Law Forever. Science. 2003;299(5604):210–211. doi: 10.1126/science.1079567.

2. Schaller R. R. Moore's law: past, present and future. IEEE Spectrum. 1997;34(6):52–59.


3. Sim J., Kim M., Kim Y., et al. MAPIM: Mat Parallelism for High Performance Processing in Non-volatile Memory Architecture. Proceedings of the 20th International Symposium on Quality Electronic Design (ISQED), 6-7 March 2019.

4. Kautz W. H. Cellular Logic-in-Memory Arrays. IEEE Trans. Comput. 1969;C-18(8):719–727.

5. Jaiswal A. Chakraborty I. Agrawal A. et al. 8T SRAM Cell as a Multibit Dot-Product Engine for Beyond von Neumann Computing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2019;27(11):2556–2567.

6. Guo R. You L. Zhou Y. et al. Non-volatile memory based on the ferroelectric photovoltaic effect. Nat. Commun. 2013;4(1):1990. doi: 10.1038/ncomms2990.

7. Wuttig M. Yamada N. Phase-change materials for rewriteable data storage. Nat. Mater. 2007;6(11):824–832. doi: 10.1038/nmat2009.

8. Li W. Qian X. Li J. Phase transitions in 2D materials. Nat. Rev. Mater. 2021;6(9):829–846. doi: 10.1038/s41578-021-00304-0.

9. Chappert C. Fert A. Van Dau F. N. The emergence of spin electronics in data storage. Nat. Mater. 2007;6(11):813–823. doi: 10.1038/nmat2024.

10. Chua L. Memristor-The missing circuit element. IEEE Trans. Circuit Theory. 1971;18(5):507–519.

11. Strukov D. Snider G. Stewart D. et al. The Missing Memristor Found. Nature. 2008;453:80–83. doi: 10.1038/nature06932.

12. Hossen I. Anders M. A. Wang L. et al. Data-driven RRAM device models using Kriging interpolation. Sci. Rep. 2022;12(1):5963. doi: 10.1038/s41598-022-09556-4.

13. Kang J. Kim T. Hu S. et al. Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing. Nat. Commun. 2022;13(1):4040. doi: 10.1038/s41467-022-31804-4.

14. Ielmini D. Resistive switching memories based on metal oxides: mechanisms, reliability and scaling. Semicond. Sci. Technol. 2016;31(6):063002. doi: 10.1088/0268-1242/31/6/063002.

15. Thomas S. IGZO and RRAM team up. Nat. Electron. 2020;3(7):353. doi: 10.1038/s41928-020-0454-4.

16. Ahn E. C. Wong H. S. P. Pop E. Carbon nanomaterials for non-volatile memories. Nat. Rev. Mater. 2018;3(3):18009. doi: 10.1038/natrevmats.2018.9.

17. Bulja S. Kopf R. Tate A. et al. High frequency resistive switching behavior of amorphous TiO2 and NiO. Sci. Rep. 2022;12(1):13804. doi: 10.1038/s41598-022-16907-8.

18. Wong H. S. P. Salahuddin S. Memory leads the way to better computing. Nat. Nanotechnol. 2015;10(3):191–194. doi: 10.1038/nnano.2015.29.

19. Horowitz M. 1.1 Computing's Energy Problem (And what We Can Do about it). 2014.

20. Patterson D. Anderson T. Cardwell N. et al. A case for intelligent RAM. IEEE Micro. 1997;17(2):34–44. doi: 10.1109/40.592312.

21. Sebastian A. Le Gallo M. Khaddam-Aljameh R. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 2020;15(7):529–544. doi: 10.1038/s41565-020-0655-z.


22. Chua L. If it's pinched it's a memristor. Semicond. Sci. Technol. 2014;29(10):104001. doi: 10.1088/0268-1242/29/10/104001.

23. Chua L. Everything You Wish to Know About Memristors but Are Afraid to Ask. In: Chua L., Sirakoulis G. C., Adamatzky A., eds. Handbook of Memristor Networks. Springer International Publishing, Cham, 2019, pp. 89–157.

24. Li Y. Long S. Liu Y. et al. Conductance Quantization in Resistive Random Access Memory. Nanoscale Res. Lett. 2015;10(1):420. doi: 10.1186/s11671-015-1118-6.

25. Ielmini D. Wong H. S. P. In-memory computing with resistive switching devices. Nat. Electron. 2018;1:333–343. doi: 10.1038/s41928-018-0092-2.

26. Tang B. Veluri H. Li Y. et al. Wafer-scale solution-processed 2D material analog resistive memory array for memory-based computing. Nat. Commun. 2022;13(1):3037. doi: 10.1038/s41467-022-30519-w.

27. Huang Y.-J. Lee S.-C. Graphene/h-BN Heterostructures for Vertical Architecture of RRAM Design. Sci. Rep. 2017;7(1):9679. doi: 10.1038/s41598-017-08939-2.

28. Huo Q. Yang Y. Wang Y. et al. A computing-in-memory macro based on three-dimensional resistive random-access memory. Nat. Electron. 2022;5(7):469–477. doi: 10.1038/s41928-022-00795-x.

29. Lin P. Li C. Wang Z. et al. Three-dimensional memristor circuits as complex neural networks. Nat. Electron. 2020;3(4):225–232. doi: 10.1038/s41928-020-0397-9.

30. Kim J. E. Kim B. Kwon H. T. et al. Flexible Parylene C-Based RRAM Array for Neuromorphic Applications. IEEE Access. 2022;10:109760–109767.

31. Huang R., Cai Y., Liu Y., et al. Resistive Switching in Organic Memory Devices for Flexible Applications. 2014.

32. Wang T. Meng J. Chen L. et al. Flexible 3D memristor array for binary storage and multi-states neuromorphic computing applications. InfoMat. 2020;3:212–221. doi: 10.1002/inf2.12158.

33. Guy J., Molas G., Vianello E., et al. Investigation of the physical mechanisms governing data-retention in down to 10 nm nano-trench Al2O3/CuTeGe conductive bridge RAM (CBRAM). Proceedings of the 2013 IEEE International Electron Devices Meeting, 9-11 Dec. 2013.

34. Borghetti J. Snider G. Kuekes P. et al. Memristive switches enable stateful logic operations via material implication. Nature. 2010;464:873–876. doi: 10.1038/nature08940.

35. Luo L. Dong Z. K. Duan S. K. et al. Memristor-based stateful logic gates for multi-functional logic circuit. IET Circuits Devices Syst. 2020;14(6):811–818. doi: 10.1049/iet-cds.2019.0422.

36. Kvatinsky S. Belousov D. Liman S. et al. MAGIC—Memristor-Aided Logic. IEEE Trans. Circuits Syst. II Express Briefs. 2014;61(11):895–899.

37. Mozafari F., Sharifi M. J., Ahmadi A., et al. A Novel Architecture for Memristor-Based Logic. Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), 9-11 Aug. 2021.

38. Cui X. Ma Y. Wei F. et al. The Synthesis Method of Logic Circuits Based on the NMOS-Like RRAM Gates. IEEE Access. 2021;9:54466–54477.

39. Fouad A. H. Radwan A. G. Memristor-based quinary half adder. AEU - Int. J. Electron. Commun. 2019;98:123–130. doi: 10.1016/j.aeue.2018.10.033.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10012847/ 21/24
9/5/24, 11:39 PM Research progress in architecture and application of RRAM with computing-in-memory - PMC

40. Siemon A. Drabinski R. Schultis M. J. et al., Stateful Three-Input Logic with Memristive Switches. Sci. Rep.
2019;9(1):14618. doi: 10.1038/s41598-019-51039-6. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

41. Taherinejad N., Delaroche T., Radakovits D., et al. A Semi-serial Topology for Compact and Fast IMPLY-Based Memristive
Full Adders [M]. 2019 [Google Scholar]

42. Tian H. Wang X.-F. Mohammad M. A. et al., A hardware Markov chain algorithm realized in a single device for
machine learning. Nat. Commun. 2018;9(1):4305. doi: 10.1038/s41467-018-06644-w. [PMC free article] [PubMed]
[CrossRef] [Google Scholar]

43. Teimoory M., Amirsoleimani A., Ahmadi A., et al., A hybrid memristor-CMOS multiplier design based on memristive
universal logic gates; Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems
(MWSCAS), F 6-9 Aug. 2017, 2017 [Google Scholar]

44. Chakrabarti B. Lastras-Montañ o M. A. Adam G. et al., A multiply-add engine with monolithically integrated 3D
memristor crossbar/CMOS hybrid circuit. Sci. Rep. 2017;7(1):42429. doi: 10.1038/srep42429. [PMC free article]
[PubMed] [CrossRef] [Google Scholar]

45. Xie J. Afshari S. Sanchez Esqueda I. Hexagonal boron nitride (h-BN) memristor arrays for analog-based machine
learning hardware. npj 2D Mater. Appl. 2022;6(1):50. doi: 10.1038/s41699-022-00328-2. [CrossRef] [Google Scholar]

46. Halawani Y. Kilani D. Hassan E. et al., RRAM-based CAM combined with time-domain circuits for hyperdimensional
computing. Sci. Rep. 2021;11(1):19848. doi: 10.1038/s41598-021-99000-w. [PMC free article] [PubMed] [CrossRef]
[Google Scholar]

47. Kumar P. Zhu K. Gao X. et al., Hybrid architecture based on two-dimensional memristor crossbar array and CMOS
integrated circuit for edge computing. npj 2D Mater. Appl. 2022;6(1):8. doi: 10.1038/s41699-021-00284-3. [CrossRef]
[Google Scholar]

48. Prezioso M. Merrikh-Bayat F. Hoskins B. D. et al., Training and operation of an integrated neuromorphic network
based on metal-oxide memristors. Nature. 2015;521(7550):61–64. doi: 10.1038/nature14441. [PubMed] [CrossRef]
[Google Scholar]

49. Bayat F. M. Prezioso M. Chakrabarti B. et al., Implementation of multilayer perceptron network with highly uniform
passive memristive crossbar circuits. Nat. Commun. 2018;9(1):2331. doi: 10.1038/s41467-018-04482-4. [PMC free
article] [PubMed] [CrossRef] [Google Scholar]

50. Kim H. Mahmoodi M. R. Nili H. et al., 4K-memristor analog-grade passive crossbar circuit. Nat. Commun.
2021;12(1):5198. doi: 10.1038/s41467-021-25455-0. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

51. Yao P. Wu H. Gao B. et al., Fully hardware-implemented memristor convolutional neural network. Nature.
2020;577(7792):641–646. doi: 10.1038/s41586-020-1942-4. [PubMed] [CrossRef] [Google Scholar]

52. Mahmoodi M. R. Prezioso M. Strukov D. B. Versatile stochastic dot product circuits based on nonvolatile memories
for high performance neurocomputing and neurooptimization. Nat. Commun. 2019;10(1):5113.
doi: 10.1038/s41467-019-13103-7. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

53. Zahari F. Pérez E. Mahadevaiah M. K. et al., Analogue pattern recognition with stochastic switching binary CMOS-
integrated memristive devices. Sci. Rep. 2020;10(1):14450. doi: 10.1038/s41598-020-71334-x. [PMC free article]
[PubMed] [CrossRef] [Google Scholar]

54. Zhu X. Wang Q. Lu W. D. Memristor networks for real-time neural activity analysis. Nat. Commun. 2020;11(1):2439.
doi: 10.1038/s41467-020-16261-1. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10012847/ 22/24
9/5/24, 11:39 PM Research progress in architecture and application of RRAM with computing-in-memory - PMC

55. Liang X. Zhong Y. Tang J. et al., Rotating neurons for all-analog implementation of cyclic reservoir computing. Nat.
Commun. 2022;13(1):1549. doi: 10.1038/s41467-022-29260-1. [PMC free article] [PubMed] [CrossRef] [Google
Scholar]

56. Tang J. Yuan F. Shen X. et al., Bridging Biological and Artificial Neural Networks with Emerging Neuromorphic
Devices: Fundamentals, Progress, and Challenges. Adv. Mater. 2019;31:1902761. doi: 10.1002/adma.201902761.
[PubMed] [CrossRef] [Google Scholar]

57. Li X. Tang J. Zhang Q. et al., Power-efficient neural network with artificial dendrites. Nat. Nanotechnol.
2020;15(9):776–782. doi: 10.1038/s41565-020-0722-5. [PubMed] [CrossRef] [Google Scholar]

58. Wu Z. Lu J. Shi T. et al., A Habituation Sensory Nervous System with Memristors. Adv. Mater. 2020;32:2004398.
doi: 10.1002/adma.202004398. [PubMed] [CrossRef] [Google Scholar]

59. Emboras A. Goykhman I. Desiatov B. et al., Nanoscale Plasmonic Memristor with Optical Readout Functionality.
Nano Lett. 2013;13:6151. doi: 10.1021/nl403486x. [PubMed] [CrossRef] [Google Scholar]

60. Zhou F. Zhou Z. Chen J. et al., Optoelectronic resistive random access memory for neuromorphic vision sensors. Nat.
Nanotechnol. 2019;14(8):776–782. doi: 10.1038/s41565-019-0501-3. [PubMed] [CrossRef] [Google Scholar]

61. Lorenzi P., Sucre V., Romano G., et al., Memristor based neuromorphic circuit for visual pattern recognition;
Proceedings of the 2015 International Conference on Memristive Systems (MEMRISYS), F 8-10 Nov. 2015, 2015 [Google
Scholar]

62. Sarkar M., Chowdhury A., Arka A. I., et al., A new supervised learning approach for visual pattern recognition using
discrete circuit elements and memristor array; Proceedings of the TENCON 2017 - 2017 IEEE Region 10 Conference, F
5-8 Nov. 2017, 2017 [Google Scholar]

63. Gao B. Zhou Y. Zhang Q. et al., Memristor-based analogue computing for brain-inspired sound localization with in
situ training. Nat. Commun. 2022;13(1):2026. doi: 10.1038/s41467-022-29712-8. [PMC free article] [PubMed]
[CrossRef] [Google Scholar]

64. Moro F. Hardy E. Fain B. et al., Neuromorphic object localization using resistive memories and ultrasonic
transducers. Nat. Commun. 2022;13(1):3506. doi: 10.1038/s41467-022-31157-y. [PMC free article] [PubMed]
[CrossRef] [Google Scholar]

65. Park J. Seong D. Park Y. J. et al., Reversible electrical percolation in a stretchable and self-healable silver-gradient
nanocomposite bilayer. Nat. Commun. 2022;13(1):5233. doi: 10.1038/s41467-022-32966-x. [PMC free article]
[PubMed] [CrossRef] [Google Scholar]

66. Adeyemo A., Jabir A., Mathew J., et al. Reliable Gas Sensing with Memristive Array [M]. 2017 [Google Scholar]

67. Kim M. Ge R. Wu X. et al., Zero-static power radio-frequency switches based on MoS2 atomristors. Nat. Commun.
2018;9(1):2524. doi: 10.1038/s41467-018-04934-x. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

68. Kim M. Pallecchi E. Ge R. et al., Analogue switches made from boron nitride monolayers for application in 5G and
terahertz communication systems. Nat. Electron. 2020;3(8):479–485. doi: 10.1038/s41928-020-0416-x. [CrossRef]
[Google Scholar]

69. Kim M. Ducournau G. Skrzypczak S. et al., Monolayer molybdenum disulfide switches for 6G communication
systems. Nat. Electron. 2022;5(6):367–373. doi: 10.1038/s41928-022-00766-2. [CrossRef] [Google Scholar]

70. Ibrahim H. M. Abunahla H. Mohammad B. et al., Memristor-based PUF for lightweight cryptographic randomness.
Sci. Rep. 2022;12(1):8633. doi: 10.1038/s41598-022-11240-6. [PMC free article] [PubMed] [CrossRef] [Google
Scholar]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10012847/ 23/24
9/5/24, 11:39 PM Research progress in architecture and application of RRAM with computing-in-memory - PMC

71. Zarrin P. S. Zahari F. Mahadevaiah M. K. et al., Neuromorphic on-chip recognition of saliva samples of COPD and
healthy controls using memristive devices. Sci. Rep. 2020;10(1):19742. doi: 10.1038/s41598-020-76823-7. [PMC
free article] [PubMed] [CrossRef] [Google Scholar]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10012847/ 24/24

You might also like