
Journal of Information Security and Applications 76 (2023) 103531

Contents lists available at ScienceDirect

Journal of Information Security and Applications


journal homepage: www.elsevier.com/locate/jisa

On the security of lightweight block ciphers against neural distinguishers: Observations on LBC-IoT and SLIM

Wei Jian Teng a, Je Sen Teh a,∗, Norziana Jamil b

a School of Computer Sciences, Universiti Sains Malaysia, 11800 Gelugor, Malaysia
b Department of Computing, College of Computing and Informatics, Universiti Tenaga Nasional, 43000 Kajang, Malaysia

ARTICLE INFO

Keywords: Deep learning, Block cipher, Lightweight cryptography, Differential cryptanalysis, Neural distinguisher, Neural network

ABSTRACT

Interest in the application of deep learning in cryptography has increased immensely in recent years. Several works have shown that such attacks are not only feasible but, in some cases, superior to classical cryptanalysis techniques. However, due to the black-box nature of deep learning models, more work is required to understand how they work in the context of cryptanalysis. In this paper, we contribute towards the latter by first constructing neural distinguishers for two different block ciphers, LBC-IoT and SLIM, that share similar properties. We then show that, unlike classical differential cryptanalysis (on which neural distinguishers are based), the position where the round keys are included in round functions can have a significant impact on distinguishing probability. We explore this further to investigate if different choices of where the round key is introduced can lead to better resistance against neural distinguishers. We compare several variants of the round function to showcase this phenomenon, which is useful for securing future block cipher designs against deep learning attacks. As an additional contribution, the neural distinguisher for LBC-IoT was also applied in a practical-time key recovery attack on up to 8 rounds. Results show that even with no optimizations, the attack can consistently recover the correct round key with an attack complexity of around 2^24 full encryptions. To the best of our knowledge, these are the first third-party cryptanalysis results for LBC-IoT to date.

1. Introduction

There has been growing interest in the application of machine learning (and more specifically deep learning) to solve cryptography-related problems in recent years. Earlier works that applied machine learning in cryptography tended to focus on using deep learning models to perform side-channel analysis [1,2], whereby the attacker attempts to observe the physical implementation of cryptographic devices for weaknesses and leakages in order to gain an upper hand. In these applications, deep learning methods seem to perform better than conventional machine learning methods.

Recently, Gohr successfully applied deep learning to conventional cryptanalysis by training a neural network to behave as a statistical distinguisher for Speck32/64 [3]. This neural distinguisher was then used in a key recovery attack on round-reduced Speck32/64 with practical attack complexities. This led to a number of different works that use various machine learning and deep learning models for different cryptanalysis objectives [4–6]. Gohr's work sparked interest in the cryptographic community, which has also delved into explaining how the neural distinguisher worked (in Gohr's work, it was treated as a black box) [7]. Findings showed that Gohr's neural distinguisher was in fact approximating the differential distribution table (DDT) during the learning phase, which is then used to classify ciphertext pairs.

Contributions: In this paper, we first develop neural distinguishers for two relatively new block ciphers, LBC-IoT [8] and SLIM [9], published in Computers, Materials & Continua and IEEE Access respectively. Both ciphers are based on the generalized Feistel structure (GFS) and are structurally similar. Our aim is to investigate whether slight variations in the design of the round function can enhance resistance to neural distinguishers by comparing and contrasting these two ciphers. To the best of our knowledge, there have been no third-party cryptanalysis attacks on either cipher to date; therefore, our cryptanalysis findings for these ciphers also serve as a secondary contribution.

We used Gohr's method to train neural distinguishers for both ciphers, enabling us to directly compare the resistance of LBC-IoT and SLIM to neural cryptanalysis with that of Speck32.1 Given a particular input difference, the model will distinguish the output of the ciphers from random data. If an accurate distinguisher exists, we then use it in a key recovery procedure.

∗ Corresponding author.
E-mail address: [email protected] (J.S. Teh).
1 Code and other relevant information are available at https://github.com/CryptoUSM/deep-lbc-slim.

https://doi.org/10.1016/j.jisa.2023.103531

Available online 15 June 2023


2214-2126/© 2023 Elsevier Ltd. All rights reserved.
We conducted a thorough comparison of distinguisher performance for the target ciphers to identify block cipher features that may enhance resistance to neural cryptanalysis. It was found that SLIM is, in general, more resistant to such attacks in comparison to LBC-IoT despite sharing many similarities. We attempted to isolate the elements of the round function that contributed to this resistance and discovered that the location where the round keys are added has a significant impact on the block cipher's resistance to neural cryptanalysis. We compared several potential round function designs, which serve as a reference for designers seeking to secure their ciphers against neural attacks, and discussed potential reasons for this phenomenon.

Additionally, we report a practical key recovery attack on 7 rounds of LBC-IoT that determines the correct round key 100% of the time under 2^24.29 full encryptions. For 8 rounds of LBC-IoT, the correct round key was ranked in the top 10 candidates 66.67% of the time, and within the top 20 83.33% of the time. The computational complexity of the 8-round attack is 2^24.04 full encryptions.

Paper Structure: The paper is structured as follows: Section 2 briefly covers previous work using machine or deep learning in cryptanalysis. Section 3 then provides an overview of the LBC-IoT and SLIM block ciphers. Section 4 outlines the neural network used to train the distinguisher, along with validation results. Section 5 documents the tests and trials that were conducted in an attempt to identify the features of the ciphers that might lend better resistance to such deep learning-based distinguishers. In Section 6, we show the results of the partial key recovery attack using the neural distinguisher trained in Section 4. Section 7 discusses the results of this paper and possible future work.

2. Related work

The application of machine learning methods for cryptanalysis was theorized to be possible as far back as 1991 [10]. Even so, for the next 20-odd years, cryptanalysis work has for the most part relied on conventional non-machine-learning methods. Some of the earliest instances of machine learning-based cryptanalysis work focused on side-channel analysis [1,2].

The first truly successful attempt at adapting deep learning models to conventional cryptanalysis was proposed by Gohr at CRYPTO 2019 [3]. His work focused on applying neural networks to perform a partial key recovery attack on round-reduced Speck32/64. The promising results by Gohr renewed interest in machine learning and deep learning-based cryptanalysis and have inspired various works. In [4], a deep neural network-based differential distinguisher was presented that can distinguish up to 3–6 rounds of PRESENT cipher data from random data. [5], on the other hand, managed to build a deep neural network model that is able to perform key recovery attacks successfully on simplified DES, Simon, and Speck, as long as the keyspace is limited to 64 ASCII characters. [6] used a multi-layer perceptron (MLP) to distinguish 8-round Gimli-Hash and Gimli-Cipher.

Later, [11] developed a neural distinguisher in a similar vein to Gohr's work but focused on SIMON32 instead. Some additional work was done to gain more insight into the workings of Gohr's model. A similar deep dive was done in [7], which found that the neural model is in fact, to some extent, learning the differential distribution of the cipher, especially that of the second and third last rounds. Extending their previous work further, [12] introduced a SAT-based algorithm that improved on the neural distinguishers presented thus far. In addition, they also proposed a new neural distinguisher that uses output differences, in contrast to the models thus far that are trained using ciphertext pairs. In [13], a new attack was introduced based on neural distinguishers. The Neural Aided Statistical Attack (NASA) combines statistical analysis and a neural distinguisher. It is interesting to note that NASA does not depend on special features like neutral bits. This was then improved upon in [14], where careful selection of the plaintext pairs using a concept named homogeneous sets helped to reduce the data complexity of the attack.

More recently, Kimura et al. [15] proposed a deep learning-based output prediction attack on three toy ciphers. In their work, they noted that swapping or replacing internal components of a block cipher affects the average success probability of their attack. They left the explanation of this behavior to future work. In our work, we show for the first time that this phenomenon also occurs for Gohr's neural distinguishers.

3. Preliminaries

3.1. LBC-IoT revisited

LBC-IoT [8] is a lightweight block cipher that was designed with the primary focus of keeping the cipher ultra-lightweight so as to be able to function well in constrained Internet of Things (IoT) environments. The cipher is based on the Feistel structure and has a 32-bit block size with an 80-bit key. The 32-bit plaintext goes through 32 rounds of the round function, with each round using a 16-bit round key derived from the 80-bit master key. There are also some additional design elements that contribute to the efficiency of the cipher, such as the use of smaller (4-bit) S-boxes and simple functions such as bit-wise shifts and XORs.

Each round, the 32-bit input is split into 16-bit halves denoted as L_i and R_i. The processing of both halves is described by Eqs. (1) and (2) below. Fig. 1 illustrates the round functions for encryption and decryption.

L_i = P_2(R_{i−1})  (1)

R_i = P_1[L_{i−1} ⊕ K_i ⊕ S(R_{i−1} ≪ 7)]  (2)

S is a 4 × 4 S-box that is applied 4 times per round, whereby the 16-bit value (R_{i−1} ≪ 7) is divided into 4 nibbles and each nibble is substituted separately. Two 16-bit permutation functions, P_1 and P_2, are used here in order to introduce better diffusion into the round function. The details of S as well as P_1 and P_2 are shown in Tables 1 and 2 respectively.

Table 1
S-box of LBC-IoT.
x: 0 1 2 3 4 5 6 7 8 9 A B C D E F
S: 0 8 6 D 5 F 7 C 4 E 2 3 9 1 B A

Table 2
Permutation tables of LBC-IoT.
x:  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16
P1: 13 10 7  12 9  14 3  2  5  16 15 4  1  6  11 8
P2: 5  8  16 12 3  11 2  13 4  1  14 6  9  15 7  10
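To make the round function concrete, Eqs. (1) and (2) together with Tables 1 and 2 can be sketched in Python. This is an illustrative sketch rather than the designers' reference code: the bit-numbering convention assumed here (permutation-table entry j, 1-indexed from the most significant bit, selects the source bit of output bit j) is our assumption and may differ from the designers' convention.

```python
# Illustrative sketch of one LBC-IoT encryption round (Eqs. (1) and (2)).
# Assumption: permutation tables are read MSB-first, i.e. output bit j
# (1-indexed from the most significant bit) takes input bit table[j].

S = [0x0, 0x8, 0x6, 0xD, 0x5, 0xF, 0x7, 0xC,
     0x4, 0xE, 0x2, 0x3, 0x9, 0x1, 0xB, 0xA]
P1 = [13, 10, 7, 12, 9, 14, 3, 2, 5, 16, 15, 4, 1, 6, 11, 8]
P2 = [5, 8, 16, 12, 3, 11, 2, 13, 4, 1, 14, 6, 9, 15, 7, 10]

def rol16(x, n):
    """Circular left shift of a 16-bit word by n bits."""
    return ((x << n) | (x >> (16 - n))) & 0xFFFF

def sbox16(x):
    """Apply the 4-bit S-box to each nibble of a 16-bit word."""
    return sum(S[(x >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def permute(x, table):
    """Bit permutation under the assumed MSB-first convention."""
    y = 0
    for j, src in enumerate(table):
        y |= ((x >> (16 - src)) & 1) << (15 - j)
    return y

def lbc_iot_round(L, R, K):
    """One encryption round: Eq. (1) forms the new left half,
    Eq. (2) the new right half."""
    new_L = permute(R, P2)                            # Eq. (1)
    new_R = permute(L ^ K ^ sbox16(rol16(R, 7)), P1)  # Eq. (2)
    return new_L, new_R
```

Because every operation in Eqs. (1) and (2) is invertible once K_i is known, the decryption round simply undoes the permutations, the key addition, and the S-box lookups in reverse order.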


Fig. 1. Round function of LBC-IoT.

Fig. 2. LBC-IoT key generator.

Since LBC-IoT's original design specification contained insufficient detail describing how the 16-bit round keys are generated from the 80-bit key, we provide a detailed explanation below (verified against test vectors that we requested from the designers). The process is also illustrated in Fig. 2.

1. The first 5 round keys are taken directly from the 80-bit key. The first round key K1 is taken from the 16 least significant bits of the 80-bit key. The second round key K2 is the next 16 least significant bits. The process is repeated for K3 to K5.
2. The 80-bit key is then divided into two 40-bit halves, Key_MSB and Key_LSB.
3. To obtain each of the round keys from K6 onwards, the 16 least significant bits of Key_MSB and Key_LSB are first extracted as Key_M16 and Key_L16. Key_L16 is nibble-wise circularly left shifted by 3 bits and then XOR-ed with Key_M16. The output of the XOR then goes through a round of substitution using the same S-box as in the round function, forming an updated Key_L16. Key_M16 goes through the P1 permutation function as in the round function. The output of the permutation is then XOR-ed with the updated Key_L16, forming an updated Key_M16. The round key is then simply the updated Key_M16. The updated Key_M16 and Key_L16 are then rotated back into Key_MSB and Key_LSB respectively from the left side.

Usually, block cipher specifications include some preliminary analysis against attacks such as differential or linear cryptanalysis. Unfortunately, no cryptanalysis results were provided by the designers apart from the DDT and linear approximation table. There have also been no third-party cryptanalysis results on LBC-IoT so far.

3.2. SLIM revisited

SLIM is an ultra-lightweight symmetric block cipher introduced in [9]. SLIM was designed with RFID applications in mind, and its main design considerations are to keep RFID data transmissions secure while keeping the implementation of the algorithm as simple as possible. Similar to LBC-IoT, SLIM uses a Feistel structure and also has a 32-bit block size with an 80-bit key. The cipher consists of 32 rounds with 16-bit round keys that are derived from the 80-bit key.

The 32-bit block is divided into 2 equal halves, L_i and R_i. The round function is summarized in Eqs. (3) and (4). Fig. 3 outlines the round function for SLIM.

L_i = R_{i−1}  (3)

R_i = L_{i−1} ⊕ P(S(K_i ⊕ R_{i−1}))  (4)

SLIM uses 4-bit S-boxes that are applied in parallel. The S-box S was chosen carefully to make the cipher more resistant to linear and differential attacks. At the same time, consideration was also given to selecting an S-box that has a small hardware area footprint. The permutation function P was selected according to cryptanalysis findings on the cipher. The details of S and P are shown in Tables 3 and 4.

Fig. 3. Round function of SLIM.

Table 3
S-box of SLIM.
x: 0 1 2 3 4 5 6 7 8 9 A B C D E F
S: C 5 6 B 9 0 A D 3 E F 8 4 7 1 2

Table 4
Permutation table of SLIM.
x: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
P: 8 14 2 9 12 15 3 6 5 11 16 1 4 7 10 13
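Analogously to LBC-IoT, Eqs. (3) and (4) together with Tables 3 and 4 can be sketched as follows. This is again an illustrative sketch under the same assumed MSB-first bit-numbering convention for the permutation table; because of the Feistel structure, a round can be inverted without inverting S or P, which the sketch demonstrates.

```python
# Illustrative sketch of one SLIM round (Eqs. (3) and (4)).
# Assumption: the permutation table is read MSB-first, i.e. output bit j
# (1-indexed from the most significant bit) takes input bit table[j].

S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
P = [8, 14, 2, 9, 12, 15, 3, 6, 5, 11, 16, 1, 4, 7, 10, 13]

def sbox16(x):
    """Apply the 4-bit S-box to each nibble of a 16-bit word."""
    return sum(S[(x >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def permute(x, table):
    """Bit permutation under the assumed MSB-first convention."""
    y = 0
    for j, src in enumerate(table):
        y |= ((x >> (16 - src)) & 1) << (15 - j)
    return y

def slim_round(L, R, K):
    """One SLIM round: Eq. (3) forms the new left half,
    Eq. (4) the new right half."""
    return R, L ^ permute(sbox16(K ^ R), P)

def slim_round_inv(L, R, K):
    """Invert one round; the Feistel structure means S and P
    themselves never need to be inverted."""
    return R ^ permute(sbox16(K ^ L), P), L
```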


Fig. 4. Key generation of SLIM.

Fig. 5. 2-Dimensional convolution operation.

Fig. 6. Neural network with residual learning.

Each of the 32 rounds uses a 16-bit round key which is derived from the 80-bit key. The round key generation process is described below and illustrated in Fig. 4. The process is quite similar to that of LBC-IoT.

1. The first 5 round keys are taken from the 80-bit key, where the 16 least significant bits of the key form the first round key K1. The next 16 least significant bits form the second round key K2. The process is then repeated for K3 to K5.
2. The 80-bit key is then divided into two 40-bit halves, Key_MSB and Key_LSB.
3. To obtain the round keys from K6 onwards, the 16 least significant bits of Key_MSB and Key_LSB are extracted as Key_M16 and Key_L16. Key_L16 is nibble-wise circularly left shifted by 2 bits and then XOR-ed with Key_M16. The output of the XOR is then forwarded to the substitution layer consisting of the same 4-by-4 S-box as the round function, forming an updated Key_L16. On the other hand, Key_M16 goes through a nibble-wise circular left shift of 3 bits before being XOR-ed with the updated Key_L16. The result of the XOR forms an updated Key_M16 and is also the round key. The updated Key_M16 and Key_L16 are then rotated back into Key_MSB and Key_LSB respectively from the left side.

Some cryptanalysis findings were provided by the designers themselves, mainly outlining the linear and differential trails of SLIM. The authors only managed to find a linear trail of up to 11 rounds and a differential trail of up to 7 rounds. It was then concluded that the cipher is secure against linear and differential cryptanalysis. However, it is unclear if these trails were optimal in terms of their differential or linear approximation probability.

3.3. Deep learning-based neural distinguisher

A core part of this work is the neural distinguisher that is used as part of the key recovery procedure. More specifically, it is a convolutional residual neural network trained to distinguish whether a given pair of ciphertexts is an output of the cipher or simply random bits. The convolutional residual neural network is an amalgamation of two different concepts: the convolutional neural network and the residual neural network. Convolutional neural networks consist of at least one convolution layer, where the input data is convolved with a filter (known as a kernel) to produce an output (see Fig. 5). The convolution process allows the network to better capture spatial relationships. As a result of the convolution layer, these neural networks are particularly suitable for image and pattern recognition tasks and have found application in many different fields [16–18].

On the other hand, we also have residual neural networks [19], which were introduced to overcome a problem that was discovered when the depth of neural networks was increased. It was found in [20] that once the number of layers exceeds a certain threshold, adding more layers increases the training and test error. Residual networks add feedforward shortcut connections that bypass certain layers of the model. This concept is illustrated in Fig. 6.

4. Deep neural distinguisher for LBC-IoT and SLIM

The network used here is the same one that was used in [3]. The same network is used for several reasons: Firstly, Gohr's work focused on using the neural distinguisher to distinguish round-reduced Speck32, and structurally, Speck32 shares some similarities with LBC-IoT and SLIM; all three are block ciphers with a Feistel structure and the same block size of 32 bits. Secondly, although Speck32 has a key size of 64 bits while LBC-IoT and SLIM both have key sizes of 80 bits, the deep learning model was designed to be independent of the key size. However, the ciphers differ in their nonlinear components: in Speck32 it is modular addition, while LBC-IoT and SLIM both rely on S-boxes.

Additionally, the promising results in Gohr's work have led to many follow-up works that explored different aspects of the proposed model in order to either extend the work further or to better understand it [5,21]. For both of the chosen ciphers, very little cryptanalysis work has been done. Therefore, we decided to adopt an approach that has proven to be effective and investigate if it can be equally effective on other ciphers.
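Before moving on, the SLIM key-schedule steps from Section 3.2 can also be sketched in code. This is an illustrative sketch only: the specification's "nibble-wise circular left shift" is interpreted here as a plain 16-bit rotation, which is our assumption, and the sketch has not been checked against official test vectors.

```python
# Illustrative sketch of the SLIM key schedule (Section 3.2, steps 1-3).
# Assumption: "nibble-wise circular left shift" is modeled as a plain
# 16-bit left rotation; sbox16 is the nibble-parallel SLIM S-box.

S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def rol16(x, n):
    return ((x << n) | (x >> (16 - n))) & 0xFFFF

def sbox16(x):
    return sum(S[(x >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def slim_round_keys(master_key, rounds=32):
    # Step 1: K1..K5 are taken directly from the 80-bit key,
    # starting from the 16 least significant bits.
    keys = [(master_key >> (16 * i)) & 0xFFFF for i in range(5)]
    # Step 2: split the 80-bit key into two 40-bit halves.
    key_msb = (master_key >> 40) & 0xFFFFFFFFFF
    key_lsb = master_key & 0xFFFFFFFFFF
    # Step 3: derive K6 onwards from the 16 LSBs of each half.
    for _ in range(5, rounds):
        m16 = key_msb & 0xFFFF
        l16 = key_lsb & 0xFFFF
        l16 = sbox16(rol16(l16, 2) ^ m16)   # updated Key_L16
        m16 = rol16(m16, 3) ^ l16           # updated Key_M16 = round key
        keys.append(m16)
        # Rotate the updated 16-bit words back in from the left side.
        key_msb = ((m16 << 24) | (key_msb >> 16)) & 0xFFFFFFFFFF
        key_lsb = ((l16 << 24) | (key_lsb >> 16)) & 0xFFFFFFFFFF
    return keys
```

The LBC-IoT key schedule follows the same skeleton, differing only in the rotation amounts and in using its own S-box and the P1 permutation in step 3.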


Table 5
Training settings.
Parameter: Setting
Dataset size: 10^6
Epochs: 40
Batch size: 5000
Optimizer: Adam algorithm
Loss function: Mean square error
Regularizer: L2 with regularization parameter = 10^−5
Learning rate: Cyclic learning rate

Table 6
LBC-IoT distinguisher training results.
Input difference | 5-Rounds | 6-Rounds | 7-Rounds | 8-Rounds
(0x0020, 0x0000) | 0.9860 | 0.8299 | 0.6072 | 0.5036
(0x0400, 0x0000) | 0.9742 | 0.7751 | 0.6408 | 0.5030
(0x0800, 0x0000) | 0.9702 | 0.8478 | 0.5095 | 0.5041
(0x1000, 0x0000) | 0.9784 | 0.7883 | 0.5049 | 0.5045
(0x2000, 0x0000) | 0.9884 | 0.7366 | 0.5029 | 0.5027
(0x8000, 0x0000) | 0.9806 | 0.8135 | 0.5038 | 0.5049
Average | 0.9796 | 0.7985 | 0.5449 | 0.5038

4.1. Experimental setup

The first step of the experiment is to replicate Gohr's experimental setup. The findings are then verified against Gohr's to ensure consistency. Following that, the SLIM cipher was implemented according to the specifications provided by the authors. SLIM's implementation is verified using the differential trails provided by the authors. LBC-IoT is then implemented by modifying SLIM, since the two ciphers share a number of similarities. Unfortunately, no test vectors for LBC-IoT were made available by the authors, so no further verification could be done. The two implemented ciphers are then substituted into the experimental setup. Correctness of the code can be verified with the code provided in the GitHub link1.

4.2. Structure of the network

The input of the network is a pair of ciphertexts written as 64 singular bits. The network first preprocesses the input into a 4 × 16 matrix before passing the output to an initial convolution with 32 output channels. From there, it moves into the residual convolutional blocks. Each convolutional block consists of 2 layers of 32-filter convolution layers. At the end of each convolutional block, the initial input of the block is added to the result of the 2 convolution layers. The number of these convolutional blocks is the depth of the network, and its default value is 10. A visualization of the network is shown in Fig. 7.

Fig. 7. Network model.

4.3. Model training

Training as well as validation data is randomly generated according to the set input difference. The process starts by randomly generating 3 items: the 80-bit keys, 32-bit plaintexts, and single-bit flags indicating whether a particular ciphertext pair will be a real pair or a randomly generated one. From there, the second set of plaintexts is formed by XOR-ing the first set of plaintexts with the input difference. The plaintext pairs are then encrypted using the keys generated earlier. A set of blinding values is randomly generated. For those entries where the flag is set to 0, the ciphertext pair is XOR-ed with these blinding values to randomize it. The resulting real and "random" ciphertext pairs, along with the flag values, are then the data used for training. 10^6 samples were generated in this manner for training and 10^5 were generated for validation.

The basic training settings are summarized in Table 5. Most of the settings remain the same as in [3], with a few exceptions. The dataset size is reduced to allow for faster training of more settings in terms of the number of rounds and the input difference. Additionally, the number of epochs is reduced to 40, because the models show minimal improvement beyond around epoch 30 and a large epoch count might contribute to overfitting.

4.4. Findings

This section details the findings from the training of the models. Since little cryptanalysis information is available for both ciphers, the optimal input difference for establishing a distinguisher is unknown. Therefore, as a start, the models are trained on all possible input differences with a Hamming weight of 1 for 5 rounds. For 6 to 8 rounds, only the 6 input differences with the highest validation accuracy are selected for training. A summary of the training results is shown in Tables 6 and 7.

For the LBC-IoT distinguisher, Table 6 shows that the distinguisher's ability to distinguish a round output from random decreases as the number of rounds increases. This behavior is as expected, since the application of more round functions better randomizes the ciphertext values. This is in line with the results presented in [3]. In addition, a few further observations can be made here:

• Input differences where the single-bit difference is in the left half of the block tend to result in distinguishers with noticeably better validation accuracy. This can be explained by the fact that all nonlinear components are "inactive" in the first round; the first nonzero difference enters an S-box only in the second round.
• At 8 rounds and above, the distinguisher is no longer able to differentiate the round output from random. A validation accuracy of approximately 0.5 means that the distinguisher is only able to "guess" correctly half the time.
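The data-generation procedure of Section 4.3 can be sketched as follows. This is an illustrative sketch of our reading of the text, not the repository's actual code: the `encrypt` argument stands in for the round-reduced cipher under test, and the 64-bit sample layout follows the input format described in Section 4.2.

```python
import random

def to_bits(ct0, ct1):
    """One training sample: the ciphertext pair as 64 individual bits,
    most significant bit of ct0 first (layout per Section 4.2)."""
    pair = (ct0 << 32) | ct1
    return [(pair >> (63 - i)) & 1 for i in range(64)]

def generate_dataset(n, in_diff, encrypt, rounds, seed=0):
    """Generate n labeled samples; encrypt(pt, key, rounds) stands in
    for the round-reduced cipher under test."""
    rng = random.Random(seed)
    X, Y = [], []
    for _ in range(n):
        flag = rng.getrandbits(1)      # 1 = real pair, 0 = randomized
        key = rng.getrandbits(80)
        pt0 = rng.getrandbits(32)
        pt1 = pt0 ^ in_diff            # pair with the chosen input difference
        ct0 = encrypt(pt0, key, rounds)
        ct1 = encrypt(pt1, key, rounds)
        if flag == 0:
            # Blinding values destroy the pair's relation to the cipher,
            # turning it into a "random" negative sample.
            ct0 ^= rng.getrandbits(32)
            ct1 ^= rng.getrandbits(32)
        X.append(to_bits(ct0, ct1))
        Y.append(flag)
    return X, Y
```

Training then fits the network on X (reshaped to 4 × 16 per sample) against the flags Y, and validation accuracy is measured on an independently generated set.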


Table 7
SLIM distinguisher training results.
Input difference | 3-Rounds | 4-Rounds | 5-Rounds | 6-Rounds | 7-Rounds | 8-Rounds
(0x0000, 0x0008) | 0.5028 | 0.5039 | 0.5045 | 0.5029 | 0.5016 | 0.5025
(0x0000, 0x0010) | 0.5033 | 0.5039 | 0.5039 | 0.5025 | 0.5028 | 0.5037
(0x0000, 0x0080) | 0.5023 | 0.5023 | 0.5048 | 0.5028 | 0.5024 | 0.5041
(0x0000, 0x2000) | 0.5027 | 0.5050 | 0.5038 | 0.5018 | 0.5024 | 0.5039
(0x0002, 0x0000) | 0.5021 | 0.5030 | 0.5053 | 0.5026 | 0.5033 | 0.5012
(0x0100, 0x0000) | 0.5036 | 0.5032 | 0.5040 | 0.5020 | 0.5043 | 0.5051
Average | 0.5028 | 0.5036 | 0.5044 | 0.5024 | 0.5028 | 0.5034

As for SLIM, based on Table 7 it is clear that SLIM has significant resistance towards a deep learning-based distinguisher:

• The cipher is already indistinguishable at relatively few rounds. At 3 rounds, the best validation accuracy is only around 0.5.2
• There also does not seem to be any bias in the performance of the distinguisher based on the input difference. Regardless of the input difference, the best validation accuracy is still around 0.5 across the board.

LBC-IoT shares a similar (albeit slightly lower) level of resistance to neural distinguishers as Speck32. The distinguisher accuracy for Speck32 degrades at a similar rate from 5 to 8 rounds (0.929 → 0.788 → 0.616 → 0.514). On the other hand, we can conclude that SLIM has better security than both LBC-IoT and Speck32 when it comes to neural distinguishers.

Although the results for the LBC-IoT distinguisher are as expected, the findings for the SLIM distinguisher are to the contrary. This observation implies that slight variations in a block cipher's design can have a significant impact on its security against deep learning attacks, something that was similarly reported in [15]. In the next section, we further investigate this phenomenon by analyzing various round functions in search of a highly indistinguishable design. In this paper, we limit our experiments to Feistel-like round functions with an S-box layer.

5. The security of round functions against neural distinguishers

As noted in Section 4, the SLIM cipher shows significant resistance towards a deep learning-based distinguisher when compared to LBC-IoT, even at a small number of rounds. At first glance, both ciphers seem to share a number of similarities:

• Both ciphers are based on the Feistel structure.
• Both round functions consist of very similar components, namely substitution, permutation, and key addition. One minor difference is that LBC-IoT has an additional left shift that SLIM does not.
• The key schedules for both ciphers are also very similar. The general structure of the key schedule is the same but with small differences in the operations used.

In this section, we performed experiments on a wide range of designs that vary in terms of the key schedule, the substitution and permutation components, and the internal structure of the round function itself. Our experimental results not only shed light on which design feature is responsible for SLIM's resistance to neural distinguishers but also uncover other similarly secure round function designs. All the experiments in this section are based on the 5-round distinguisher trained using an input difference of (0x0001, 0x0000).

5.1. Key schedule

Experiments were performed to determine if the key schedule has any influence on the cipher's ability to resist neural distinguishers. The tests that were conducted include:

• Replacing SLIM's key schedule with that of LBC-IoT.
• Simplifying SLIM's key schedule to just 1 round key (the first round key) for all rounds.
• Alternating only the first 2 round keys from SLIM's key schedule.

None of the above methods had any visible impact on the cipher's security. Additionally, if the round keys are all zeros, the distinguisher has very high validation accuracy (e.g., greater than 0.9), as expected. These results imply that the complex key schedules of SLIM and LBC-IoT can actually be simplified to an alternating key schedule [22] without compromising their security against neural attacks.

5.2. Substitution and permutation

The next components to be examined are the substitution and permutation operations. Both SLIM and LBC-IoT have one substitution layer but differ in terms of their permutation operation (SLIM has one permutation operation applied right after the S-box layer, while LBC-IoT applies permutations to each half). Although, intuitively, replacing the S-box and permutation patterns should not affect the distinguishing probability (assuming that good S-boxes and permutation patterns are used), we still performed a few simple experiments regardless. As expected, when the S-box and permutation pattern of SLIM were replaced with those of LBC-IoT, no noticeable impact was observed.

5.3. Internal structure of the round function

In terms of internal components, most Feistel-like structures are simple in that they consist of only substitution and permutation functions applied to one half of the entire block. Since both of these components as well as the key schedule have already been examined, the logical next step is to examine the internal structure of the round function itself, specifically the order and placement of these components. Fig. 8 illustrates all the structures, and their best validation accuracy results are shown in Table 8. The following list summarizes the different structures that were tested:

(a) The original unmodified SLIM round function, used as a baseline
(b) The positions of the substitution and permutation functions are swapped, while the round key addition remains in the same position
(c) The round key is added between the substitution and permutation operations
(d) The round key is added after the permutation operation

2 For completeness, results for all input differences with Hamming weight of 1 for 1 to 3 rounds can be found in the Appendix (see Tables A.11–A.15).

W.J. Teng et al. Journal of Information Security and Applications 76 (2023) 103531

Fig. 8. Modified SLIM structures that were tested.
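As background for the observations on these structures, the Markov-cipher notion of Lai, Massey and Murphy [23] can be stated in one line: an iterated cipher with round function Y = f(X, K) is a Markov cipher if, for a uniformly random round key K, the difference transition probability does not depend on the concrete input value,

```latex
\Pr_{K}\bigl(\Delta Y = \beta \,\big|\, \Delta X = \alpha,\ X = x\bigr)
  = \Pr_{K}\bigl(\Delta Y = \beta \,\big|\, \Delta X = \alpha\bigr)
  \qquad \text{for all } x \text{ and all } \alpha \neq 0,\ \beta .
```

Under this property (with independent round keys), the round-by-round differences form a Markov chain, which is what justifies multiplying per-round differential probabilities along a trail.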

(e) Round key is added before the right half branches (refer to Fig. 8(e))
(f) Round key is added after the right half branches (refer to Fig. 8(f))

Table 8
Results from testing the different SLIM structures.

Fig. 8  Structure description                        Best validation accuracy
(a)     Unmodified SLIM                              0.5036
(b)     Position of S and P are swapped              0.5052
(c)     Round key is added between S and P           0.9773
(d)     Round key is added after P                   0.9761
(e)     Round key is added before the right branch   0.9999
(f)     Round key is added after the right branch    0.5174

This round of tests revealed some interesting findings. From Table 8, it is evident that the position where the round key is added, relative to the other components, plays a significant role in determining whether the cipher is able to resist neural distinguishers. As previously mentioned, these observations are in line with the findings in [15], which found that changing the internal component positions of several toy SPN and Feistel block ciphers has an effect on the success of deep learning-based attacks. Our results additionally show that even the position of the round key can affect the learning capability of deep learning models, which is contrary to the behavior of differential cryptanalysis, whereby the effect of the round keys is supposedly negated. The following are our observations and possible explanations for their occurrences.

Observation 1. The original unmodified SLIM design (Fig. 8(a)) is resistant to neural distinguishers since, in each round, one half is masked by a random round key before being nonlinearly transformed by the round function. Also, each round of SLIM should be independent of the others since the original round function of SLIM makes it a Markov cipher. Proof of the latter stems from the fact that SLIM shares the same design as DES, a proven Markov cipher [23]. In contrast, the Markovian property of LBC-IoT has yet to be proven, while Speck32 is known to be non-Markov [24]. Swapping the positions of the S-box and permutation (Fig. 8(b)) is also secure since it is structurally equivalent.

Observation 2. When the round key is added after the nonlinear operation (Figs. 8(c) and 8(d)), the cipher is significantly weaker against neural attacks. Since one half passes through to the next round unmodified while the other half is linearly masked by the round key, we postulate that there is a statistical bias that aids the distinguisher. Essentially, for a particular round, the round key would have bypassed the nonlinear operation entirely.

Observation 3. The sixth round function design (Fig. 8(f)) seems structurally equivalent to the ones discussed in Observation 2 but is, in contrast, secure against neural distinguishers. To explain this observation, we can consider an alternative representation of Fig. 8(f) whereby the first round is essentially "free" of all round key bits, and the subsequent rounds are masked by 2 round key bits, one on the left branch and one on the right. Looking at each round independently, if both round keys are sufficiently random, both halves of a ciphertext would then have been "masked" by the random round key, making it difficult to distinguish ciphertext pairs from random pairs. It would be similar to the neural distinguisher trying to differentiate between pairs of texts that have been generated by a random number generator. In contrast, the designs mentioned in Observation 2 have only one half of their blocks masked by the round key.

Observation 4. When the same round key affects both halves of an input to a particular round (whether linearly or nonlinearly), the cipher becomes less resistant to a neural distinguisher. The round function in Fig. 8(e) has this property, whereby inputs to the 2nd round onward will be linearly (left half) and nonlinearly (right half) influenced by the same round key (from the earlier round). This may lead to statistical biases that can be learned by the neural network.

Our prior observations run contrary to the classical differential cryptanalysis setting, whereby the position of the round key should not affect the effectiveness of a distinguisher, i.e., the probability that a differential trail holds.
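The structures in Fig. 8 differ only in where the round key enters the round. A minimal sketch of variants (a)-(d) is given below; the 8-bit halves, the PRESENT 4-bit S-box, and a 1-bit rotation are stand-ins for illustration only, not SLIM's actual components.

```python
# Toy Feistel round in four variants, differing only in key-addition position.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box as a stand-in

def sub8(x):   # apply the 4-bit S-box to both nibbles of an 8-bit value
    return (SBOX[x >> 4] << 4) | SBOX[x & 0xF]

def perm8(x):  # stand-in bit permutation: rotate left by 1
    return ((x << 1) | (x >> 7)) & 0xFF

def round_a(l, r, k):  # (a) key added before the nonlinear layer (SLIM-like)
    return r, l ^ perm8(sub8(r ^ k))

def round_b(l, r, k):  # (b) S and P swapped, key addition unchanged
    return r, l ^ sub8(perm8(r ^ k))

def round_c(l, r, k):  # (c) key added between S and P
    return r, l ^ perm8(sub8(r) ^ k)

def round_d(l, r, k):  # (d) key added after P
    return r, l ^ perm8(sub8(r)) ^ k
```

In variants (c) and (d) the round key never passes through the S-box in its own round, matching the weakened structures of Table 8; in (a) and (b) every key bit is nonlinearly mixed before reaching the next round.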


Table 9
Results of the key recovery attack.

                      Rank of correct key          Run time (no. of full encryptions)
Input difference      Max.    Average    Min.      Average

7-round
0x0020 0x0000         1       1.0        1         2^24.25
0x0400 0x0000         1       1.0        1         2^24.25
0x0800 0x0000         1       1.0        1         2^24.23
0x1000 0x0000         1       1.0        1         2^24.29
0x2000 0x0000         1       1.0        1         2^24.37
0x8000 0x0000         1       1.0        1         2^24.36

8-round
0x0020 0x0000         3       1.8        1         2^24.02
0x0400 0x0000         19      8.6        1         2^24.02
0x0800 0x0000         13      8.0        4         2^24.09
0x1000 0x0000         8       5.4        1         2^24.03
0x2000 0x0000         236     140.8      38        2^24.05
0x8000 0x0000         10      5.4        1         2^24.04
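The key-ranking procedure behind these results (Section 6) can be sketched as follows. This is a self-contained toy with a one-round Feistel over 8-bit halves and a deterministic stand-in for the neural distinguisher's verdict; it is not the actual 16-bit-branch LBC-IoT attack.

```python
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sub8(x):
    return (SBOX[x >> 4] << 4) | SBOX[x & 0xF]

def enc_round(l, r, k):   # one toy Feistel round
    return r, l ^ sub8(r ^ k)

def dec_round(l, r, k):   # its inverse, used for the partial decryption
    return r ^ sub8(l ^ k), l

def rank_keys(ct_pairs, distinguisher, key_space=256):
    """Count 'non-random' verdicts per key guess (the key's rank score)."""
    counts = [0] * key_space
    for guess in range(key_space):
        for c0, c1 in ct_pairs:
            p0, p1 = dec_round(*c0, guess), dec_round(*c1, guess)
            if distinguisher(p0, p1):
                counts[guess] += 1
    return counts

random.seed(1)
true_key, delta = 0x3A, (0x00, 0x20)   # chosen input difference (L, R)
pairs = []
for _ in range(200):                   # 200 chosen-plaintext pairs, as in the paper
    l, r = random.randrange(256), random.randrange(256)
    pairs.append((enc_round(l, r, true_key),
                  enc_round(l ^ delta[0], r ^ delta[1], true_key)))

# Stand-in distinguisher: flags a decrypted pair as non-random when its
# difference equals the chosen input difference. The real attack would
# query the trained neural distinguisher here instead.
verdict = lambda p0, p1: (p0[0] ^ p1[0], p0[1] ^ p1[1]) == delta
scores = rank_keys(pairs, verdict)
```

With the correct key, every partially decrypted pair reproduces the chosen difference, so the correct key always attains the maximal score; ties with key guesses that this toy verdict cannot separate are possible, which is why the attack reports a rank rather than a single candidate.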

A possible reason for this phenomenon is that the neural distinguisher receives the actual ciphertext pairs as inputs. Therefore, the statistics of the individual ciphertexts could make a difference in the effectiveness of the deep learning model.

6. Key recovery attack on LBC-IoT

In this section, the neural distinguisher established in previous sections will be used in a key recovery attack on round-reduced LBC-IoT. The distinguisher will be used as part of a key ranking procedure [3] in an attempt to recover the correct (final) round key. The attack is outlined in the following paragraph.

With an n-round neural distinguisher, we perform a key recovery attack on n + 2 rounds. One round can be added at the start of the neural distinguisher because the first round key addition occurs after the first application of the S-boxes. Since the key addition occurs after the nonlinear operation, an adversary can trivially inject their chosen differences into the outputs of the first round of LBC-IoT. One key-recovery round is added to the end of the neural distinguisher. To recover the (n + 2)th round key, 200 plaintexts are first generated at random. Each plaintext is then expanded into a plaintext pair by applying the selected input difference. The plaintext pairs are then encrypted for (n + 2) rounds by the encryption oracle to obtain the corresponding ciphertext pairs. We guess the final round key and partially decrypt all ciphertext pairs. The n-round distinguisher is used to determine whether the partially decrypted ciphertext pairs are random or not. Of the 200 ciphertext pairs, the number of "non-random" ciphertext pairs is recorded for each of the 2^16 possible round keys; this count is the rank of the key. In an ideal scenario, it is expected that the correct key will have the highest rank (highest count) among all keys.

This key recovery attack is only attempted on LBC-IoT since the distinguisher for SLIM is not able to meaningfully distinguish at all. In the next subsection, the attack is demonstrated using the 6 input differences with the best validation accuracy (as outlined in Section 4). We report attacks on 7 and 8-round-reduced LBC-IoT using the 5-round and 6-round distinguishers respectively. Each attempt is repeated 5 times to obtain averaged results. Although it may be possible to attack 9 rounds of LBC-IoT using the 7-round distinguisher, the success rate would be considerably lower (e.g., the straightforward key recovery attack on 9 rounds of Speck32 by Gohr had a success rate of only 35.8% [3]).

6.1. Key ranking results

Here we present the results of the key recovery attack on 7 and 8-round LBC-IoT, as shown in Table 9. The table shows the max, min, and mean rank of the correct key as well as the time required for the attack expressed in the number of full encryptions. Note that the attack time recorded here is for the online phase, which assumes that the model is already trained; the training of the distinguisher along with other pre-computations are considered to be part of the offline phase.

As expected, the probability of the correct key having the top rank is higher for the 7-round attack compared to the 8-round attack, as illustrated in Table 9. While the 7-round attack results in the correct key having the highest rank 100% of the time, the 8-round attack is less consistent. It is observed that across all the input differences, the 8-round attack has a 66.67% probability of ranking the correct key within the top 10 candidates. In comparison, a classical differential attack using 2^p pairs of plaintexts with a differential of probability 2^-p would have a success probability of 1 − 1/e ≈ 63.2%. The correct key is found within the top 20 ranked keys around 83.33% of the time. The lower success rate for the 8-round attack is due to the higher accuracy of the 5-round distinguisher used in the 7-round attack. Recovering the remaining round keys would just involve repeating the key recovery process using distinguishers for fewer rounds of LBC-IoT. Since distinguisher accuracy increases as the number of rounds decreases, both data and attack time complexities should also decrease accordingly. We expect the cost of recovering the remaining round keys to be negligible compared to recovering the final round key.

One interesting observation is that the average rank of the correct key in the 8-round attack seems to vary (sometimes quite significantly) depending on the input difference. This could possibly be attributed to biases resulting from the differential characteristics, which are likely learned by the model. More work needs to be done to analyze the differential characteristics of the cipher in order to determine whether this hypothesis has merit. Another thing to note is that the attack complexities of both the 7 and 8-round attacks are similar (in terms of full encryptions, i.e., 7 and 8-round LBC-IoT encryptions respectively). Since the overall key recovery procedure is the same and we use the same amount of data, the attack complexity is relatively constant. The only difference is the length of the underlying distinguisher, which determines the number of pairwise encryption rounds required during the data preparation phase (which in this case is negligible compared to the key guessing phase).

7. Conclusion

In this paper, we analyze the security of two Feistel-like lightweight block ciphers, LBC-IoT and SLIM, against neural distinguishers. We found that LBC-IoT has a similar level of security as Speck32 against neural attacks, but SLIM was unusually resistant. A further investigation into SLIM's resistance determined that, unlike for classical differential distinguishers, the position where round keys are included in the round function has a significant impact on the accuracy of a neural distinguisher. Similar observations were made by previous researchers who noted that swapping encryption components (such as substitution and permutation operations) has an impact on the success probability of deep learning-based attacks.


Our findings show for the first time that the round key position matters as well, especially when ciphertext values themselves are used for training, rather than ciphertext differences. We then investigated various Feistel-like round function designs to find other similarly neural-resistant designs, which can be used as a reference for future block cipher designs. Finally, we propose a key recovery attack on 7 and 8 round-reduced LBC-IoT using the neural distinguishers developed in this paper, which has practical attack time complexities of ≈ 2^24 full encryptions. To the best of our knowledge, the key recovery attack on LBC-IoT is the first cryptanalysis result for the cipher.

As future work, we will look further into explaining how the various components of a block cipher affect the accuracy of neural distinguishers, and how this can be used to develop more secure block ciphers in the presence of deep learning adversaries. Conversely, we will also investigate how these findings can help build better neural distinguishers, either from the perspective of feature selection or data generation. Also, our cryptanalysis results on LBC-IoT could potentially be improved by constructing a wrong-key response profile to limit the key search space or by prepending a differential trail to the neural distinguisher.

CRediT authorship contribution statement

Wei Jian Teng: Conceptualization, Methodology, Validation, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Je Sen Teh: Conceptualization, Methodology, Validation, Investigation, Resources, Supervision, Writing – original draft, Writing – review & editing. Norziana Jamil: Resources, Writing – review & editing, Project administration, Funding acquisition.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

A link to a GitHub repository that contains all the code and information related to this paper has been provided.

Acknowledgments

This work was supported by the Universiti Sains Malaysia Research University Team (RUTeam) Grant Scheme (Grant Number: 1001/PKOMP/8580013), and the Uniten BOLD2025 Research Fund entitled 'A Deep Learning Approach to Block Cipher Security Evaluation', Project Code J510050002/2021052.

Appendix. Full training results for 1 to 3 rounds of LBC-IoT and SLIM

Table A.10
Full training result for 1-round LBC-IoT.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.8757 (0x0001, 0x0000) 0.5037
(0x0000, 0x0002) 0.8741 (0x0002, 0x0000) 0.5028
(0x0000, 0x0004) 0.9064 (0x0004, 0x0000) 0.5033
(0x0000, 0x0008) 0.9060 (0x0008, 0x0000) 0.5038
(0x0000, 0x0010) 0.8757 (0x0010, 0x0000) 0.5027
(0x0000, 0x0020) 0.8760 (0x0020, 0x0000) 0.5027
(0x0000, 0x0040) 0.9072 (0x0040, 0x0000) 0.5048
(0x0000, 0x0080) 0.9048 (0x0080, 0x0000) 0.5035
(0x0000, 0x0100) 0.8757 (0x0100, 0x0000) 0.5035
(0x0000, 0x0200) 0.8758 (0x0200, 0x0000) 0.5035
(0x0000, 0x0400) 0.9072 (0x0400, 0x0000) 0.5034
(0x0000, 0x0800) 0.9060 (0x0800, 0x0000) 0.5034
(0x0000, 0x1000) 0.8740 (0x1000, 0x0000) 0.5038
(0x0000, 0x2000) 0.8764 (0x2000, 0x0000) 0.5033
(0x0000, 0x4000) 0.9077 (0x4000, 0x0000) 0.5009
(0x0000, 0x8000) 0.9060 (0x8000, 0x0000) 0.5019

Table A.11
Full training result for 1-round SLIM.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.5026 (0x0001, 0x0000) 0.5028
(0x0000, 0x0002) 0.5018 (0x0002, 0x0000) 0.5033
(0x0000, 0x0004) 0.5024 (0x0004, 0x0000) 0.5034
(0x0000, 0x0008) 0.5025 (0x0008, 0x0000) 0.5034
(0x0000, 0x0010) 0.5032 (0x0010, 0x0000) 0.5021
(0x0000, 0x0020) 0.5024 (0x0020, 0x0000) 0.5041
(0x0000, 0x0040) 0.5019 (0x0040, 0x0000) 0.5023
(0x0000, 0x0080) 0.5045 (0x0080, 0x0000) 0.5031
(0x0000, 0x0100) 0.5018 (0x0100, 0x0000) 0.5035
(0x0000, 0x0200) 0.5027 (0x0200, 0x0000) 0.5016
(0x0000, 0x0400) 0.5042 (0x0400, 0x0000) 0.5039
(0x0000, 0x0800) 0.5055 (0x0800, 0x0000) 0.5023
(0x0000, 0x1000) 0.5035 (0x1000, 0x0000) 0.5028
(0x0000, 0x2000) 0.5031 (0x2000, 0x0000) 0.5029
(0x0000, 0x4000) 0.5029 (0x4000, 0x0000) 0.5031
(0x0000, 0x8000) 0.5028 (0x8000, 0x0000) 0.5031


Table A.12
Full training result for 2-round LBC-IoT.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.9653 (0x0001, 0x0000) 0.9080
(0x0000, 0x0002) 0.9376 (0x0002, 0x0000) 0.8746
(0x0000, 0x0004) 0.9503 (0x0004, 0x0000) 0.9062
(0x0000, 0x0008) 0.9579 (0x0008, 0x0000) 0.9067
(0x0000, 0x0010) 0.9650 (0x0010, 0x0000) 0.8750
(0x0000, 0x0020) 0.9331 (0x0020, 0x0000) 0.8733
(0x0000, 0x0040) 0.9461 (0x0040, 0x0000) 0.9061
(0x0000, 0x0080) 0.9363 (0x0080, 0x0000) 0.9064
(0x0000, 0x0100) 0.9535 (0x0100, 0x0000) 0.8747
(0x0000, 0x0200) 0.9445 (0x0200, 0x0000) 0.8751
(0x0000, 0x0400) 0.9522 (0x0400, 0x0000) 0.9061
(0x0000, 0x0800) 0.9690 (0x0800, 0x0000) 0.9061
(0x0000, 0x1000) 0.9644 (0x1000, 0x0000) 0.8752
(0x0000, 0x2000) 0.9335 (0x2000, 0x0000) 0.8735
(0x0000, 0x4000) 0.9503 (0x4000, 0x0000) 0.8749
(0x0000, 0x8000) 0.9386 (0x8000, 0x0000) 0.9060

Table A.13
Full training result for 2-round SLIM.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.5024 (0x0001, 0x0000) 0.5030
(0x0000, 0x0002) 0.5026 (0x0002, 0x0000) 0.5023
(0x0000, 0x0004) 0.5014 (0x0004, 0x0000) 0.5036
(0x0000, 0x0008) 0.5031 (0x0008, 0x0000) 0.5024
(0x0000, 0x0010) 0.5035 (0x0010, 0x0000) 0.5035
(0x0000, 0x0020) 0.5030 (0x0020, 0x0000) 0.5030
(0x0000, 0x0040) 0.5028 (0x0040, 0x0000) 0.5020
(0x0000, 0x0080) 0.5030 (0x0080, 0x0000) 0.5027
(0x0000, 0x0100) 0.5032 (0x0100, 0x0000) 0.5025
(0x0000, 0x0200) 0.5039 (0x0200, 0x0000) 0.5035
(0x0000, 0x0400) 0.5041 (0x0400, 0x0000) 0.5028
(0x0000, 0x0800) 0.5035 (0x0800, 0x0000) 0.5013
(0x0000, 0x1000) 0.5040 (0x1000, 0x0000) 0.5024
(0x0000, 0x2000) 0.5029 (0x2000, 0x0000) 0.5024
(0x0000, 0x4000) 0.5034 (0x4000, 0x0000) 0.5021
(0x0000, 0x8000) 0.5031 (0x8000, 0x0000) 0.5040

Table A.14
Full training result for 3-round LBC-IoT.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.9921 (0x0001, 0x0000) 0.9469
(0x0000, 0x0002) 0.9782 (0x0002, 0x0000) 0.9328
(0x0000, 0x0004) 0.9760 (0x0004, 0x0000) 0.9511
(0x0000, 0x0008) 0.9851 (0x0008, 0x0000) 0.9384
(0x0000, 0x0010) 0.9846 (0x0010, 0x0000) 0.9648
(0x0000, 0x0020) 0.9373 (0x0020, 0x0000) 0.9356
(0x0000, 0x0040) 0.9540 (0x0040, 0x0000) 0.9498
(0x0000, 0x0080) 0.9846 (0x0080, 0x0000) 0.9686
(0x0000, 0x0100) 0.9697 (0x0100, 0x0000) 0.9659
(0x0000, 0x0200) 0.9847 (0x0200, 0x0000) 0.9337
(0x0000, 0x0400) 0.9767 (0x0400, 0x0000) 0.9498
(0x0000, 0x0800) 0.9795 (0x0800, 0x0000) 0.9360
(0x0000, 0x1000) 0.9853 (0x1000, 0x0000) 0.9643
(0x0000, 0x2000) 0.9770 (0x2000, 0x0000) 0.9434
(0x0000, 0x4000) 0.9800 (0x4000, 0x0000) 0.9528
(0x0000, 0x8000) 0.9910 (0x8000, 0x0000) 0.9572


Table A.15
Full training result for 3-round SLIM.
Input difference    Best validation accuracy    Input difference    Best validation accuracy
(0x0000, 0x0001) 0.5020 (0x0001, 0x0000) 0.5035
(0x0000, 0x0002) 0.5027 (0x0002, 0x0000) 0.5029
(0x0000, 0x0004) 0.5029 (0x0004, 0x0000) 0.5033
(0x0000, 0x0008) 0.5042 (0x0008, 0x0000) 0.5040
(0x0000, 0x0010) 0.5025 (0x0010, 0x0000) 0.5031
(0x0000, 0x0020) 0.5042 (0x0020, 0x0000) 0.5025
(0x0000, 0x0040) 0.5039 (0x0040, 0x0000) 0.5027
(0x0000, 0x0080) 0.5031 (0x0080, 0x0000) 0.5029
(0x0000, 0x0100) 0.5021 (0x0100, 0x0000) 0.5036
(0x0000, 0x0200) 0.5033 (0x0200, 0x0000) 0.5053
(0x0000, 0x0400) 0.5028 (0x0400, 0x0000) 0.5023
(0x0000, 0x0800) 0.5031 (0x0800, 0x0000) 0.5037
(0x0000, 0x1000) 0.5054 (0x1000, 0x0000) 0.5037
(0x0000, 0x2000) 0.5036 (0x2000, 0x0000) 0.5035
(0x0000, 0x4000) 0.5040 (0x4000, 0x0000) 0.5024
(0x0000, 0x8000) 0.5021 (0x8000, 0x0000) 0.5023
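The remark that follows can be checked concretely on any Feistel round: when the nonzero difference sits in the half that bypasses the round function, the output difference is identical for every round key, so real and blinded samples are statistically indistinguishable. A toy 8-bit-half round (not LBC-IoT itself) suffices to show this:

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sub8(x):
    return (SBOX[x >> 4] << 4) | SBOX[x & 0xF]

def enc_round(l, r, k):  # toy Feistel round: only r enters the round function
    return r, l ^ sub8(r ^ k)

l, r, dl = 0x5A, 0xC3, 0x80   # difference dl only in the half that passes through
diffs = set()
for k in range(256):           # try every possible round key
    c0, c1 = enc_round(l, r, k), enc_round(l ^ dl, r, k)
    diffs.add((c0[0] ^ c1[0], c0[1] ^ c1[1]))

# the output difference (0, dl) is the same for all 256 keys
assert diffs == {(0, dl)}
```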

Remark. For 1 round of LBC-IoT, when the nonzero input difference is in the left half, the round function is inactive. Thus, the output differences for all samples are the same regardless of whether they have been blinded or not. As a result, the model cannot distinguish between real and random samples (see Table A.10).

References

[1] Maghrebi H, Portigliatti T, Prouff E. Breaking cryptographic implementations using deep learning techniques. In: Carlet C, Hasan MA, Saraswat V, editors. Security, Privacy, and Applied Cryptography Engineering - 6th International Conference, SPACE 2016, Hyderabad, India, December 14–18, 2016, Proceedings. Lecture notes in computer science, vol. 10076, Springer; 2016, p. 3–26. http://dx.doi.org/10.1007/978-3-319-49445-6_1.
[2] Picek S, Samiotis IP, Kim J, Heuser A, Bhasin S, Legay A. On the performance of convolutional neural networks for side-channel analysis. In: Chattopadhyay A, Rebeiro C, Yarom Y, editors. Security, Privacy, and Applied Cryptography Engineering - 8th International Conference, SPACE 2018, Kanpur, India, December 15–19, 2018, Proceedings. Lecture notes in computer science, vol. 11348, Springer; 2018, p. 157–76. http://dx.doi.org/10.1007/978-3-030-05072-6_10.
[3] Gohr A. Improving attacks on round-reduced Speck32/64 using deep learning. In: Boldyreva A, Micciancio D, editors. Advances in Cryptology - CRYPTO 2019 - 39th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 18–22, 2019, Proceedings, Part II. Lecture notes in computer science, vol. 11693, Springer; 2019, p. 150–79. http://dx.doi.org/10.1007/978-3-030-26951-7_6.
[4] Jain A, Kohli V, Mishra G. Deep learning based differential distinguisher for lightweight cipher PRESENT. 2020, p. 846, IACR Cryptol ePrint Arch.
[5] So J. Deep learning-based cryptanalysis of lightweight block ciphers. Secur Commun Netw 2020;2020:3701067:1–3701067:11. http://dx.doi.org/10.1155/2020/3701067.
[6] Baksi A, Breier J, Dong X, Yi C. Machine learning assisted differential distinguishers for lightweight ciphers. 2020, p. 571, IACR Cryptol ePrint Arch.
[7] Benamira A, Gérault D, Peyrin T, Tan QQ. A deeper look at machine learning-based cryptanalysis. In: Canteaut A, Standaert F, editors. Advances in Cryptology - EUROCRYPT 2021 - 40th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, October 17–21, 2021, Proceedings, Part I. Lecture notes in computer science, vol. 12696, Springer; 2021, p. 805–35. http://dx.doi.org/10.1007/978-3-030-77870-5_28.
[8] Ramadan RA, Aboshosha BW, Yadav K, Alseadoon IM, Kashout MJ, Elhoseny M. LBC-IoT: Lightweight block cipher for IoT constraint devices. CMC-Comput Mater Con 2021;67:3563–79.
[9] Aboushosha B, Ramadan RA, Dwivedi AD, El-Sayed A, Dessouky MM. SLIM: A lightweight block cipher for internet of health things. IEEE Access 2020;8:203747–57. http://dx.doi.org/10.1109/ACCESS.2020.3036589.
[10] Rivest RL. Cryptography and machine learning. In: Imai H, Rivest RL, Matsumoto T, editors. Advances in Cryptology - ASIACRYPT '91, International Conference on the Theory and Applications of Cryptology, Fujiyoshida, Japan, November 11–14, 1991, Proceedings. Lecture notes in computer science, vol. 739, Springer; 1991, p. 427–39. http://dx.doi.org/10.1007/3-540-57332-1_36.
[11] Hou Z, Ren J, Chen S. Cryptanalysis of round-reduced SIMON32 based on deep learning. 2021, p. 362, IACR Cryptol ePrint Arch.
[12] Hou Z, Ren J, Chen S. Improve neural distinguisher for cryptanalysis. 2021, p. 1017, IACR Cryptol ePrint Arch.
[13] Chen Y, Yu H. Neural aided statistical attack for cryptanalysis. 2020, p. 1620, IACR Cryptol ePrint Arch.
[14] Chen Y, Yu H. Improved neural aided statistical attack for cryptanalysis. 2021, p. 311, IACR Cryptol ePrint Arch.
[15] Kimura H, Emura K, Isobe T, Ito R, Ogawa K, Ohigashi T. Output prediction attacks on SPN block ciphers using deep learning. 2021, p. 401, IACR Cryptol ePrint Arch.
[16] Adnan MM, Rahim MSM, Khan AR, Saba T, Fati SM, Bahaj SA. An improved automatic image annotation approach using convolutional neural network-slantlet transform. IEEE Access 2022;10:7520–32. http://dx.doi.org/10.1109/ACCESS.2022.3140861.
[17] Chandio AA, Asikuzzaman M, Pickering MR, Leghari M. Cursive text recognition in natural scene images using deep convolutional recurrent neural network. IEEE Access 2022;10:10062–78. http://dx.doi.org/10.1109/ACCESS.2022.3144844.
[18] Adhane G, Dehshibi MM, Masip D. A deep convolutional neural network for classification of Aedes albopictus mosquitoes. IEEE Access 2021;9:72681–90. http://dx.doi.org/10.1109/ACCESS.2021.3079700.
[19] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27–30, 2016. IEEE Computer Society; 2016, p. 770–8. http://dx.doi.org/10.1109/CVPR.2016.90.
[20] He K, Sun J. Convolutional neural networks at constrained time cost. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7–12, 2015. IEEE Computer Society; 2015, p. 5353–60. http://dx.doi.org/10.1109/CVPR.2015.7299173.
[21] Wang G, Wang G, He Y. Improved machine learning assisted (related-key) differential distinguishers for lightweight ciphers. In: 20th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2021, Shenyang, China, October 20–22, 2021. IEEE; 2021, p. 164–71. http://dx.doi.org/10.1109/TrustCom53373.2021.00039.
[22] Bogdanov A, Knudsen LR, Leander G, Standaert F, Steinberger JP, Tischhauser E. Key-alternating ciphers in a provable setting: Encryption using a small number of public permutations. 2012, p. 35, IACR Cryptol ePrint Arch.
[23] Lai X, Massey JL, Murphy S. Markov ciphers and differential cryptanalysis. In: Davies DW, editor. Advances in Cryptology - EUROCRYPT '91, Workshop on the Theory and Application of Cryptographic Techniques, Brighton, UK, April 8–11, 1991, Proceedings. Lecture notes in computer science, vol. 547, Springer; 1991, p. 17–38. http://dx.doi.org/10.1007/3-540-46416-6_2.
[24] Biryukov A, Velichkov V, Corre YL. Automatic search for the best trails in ARX: Application to block cipher Speck. In: Peyrin T, editor. Fast Software Encryption - 23rd International Conference, FSE 2016, Bochum, Germany, March 20–23, 2016, Revised Selected Papers. Lecture notes in computer science, vol. 9783, Springer; 2016, p. 289–310. http://dx.doi.org/10.1007/978-3-662-52993-5_15.
