Efficient Scheme of Verifying Integrity of Application Binaries in Embedded Operating Systems
DOI 10.1007/s11227-010-0465-4
Abstract Embedded systems are now widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and USN (Ubiquitous Sensor Networks). As security must be built into all of these systems, its significance keeps growing. Up until now, many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems. Approaches to this problem fall into hardware-based and software-based methods; this research takes the software-based approach. From the software perspective, unlike the existing schemes (Seshadri et al., Proc. the IEEE symposium on security and privacy, 2004; Seshadri et al., Proc. the symposium on operating systems principles, 2005), and based on the standardized model (TTAS.KO-11.0054, http://www.tta.or.kr, 2006) published in Korea, no extra verifier is required and the verification function is performed in the target system itself. Contrary to the previous schemes (Jung et al., http://ettrends.etri.re.kr/PDFData/23-1_001_011.pdf, 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), verification results are stored in a single validation check bit in the i-node structure, instead of storing the signature value for each application binary file, in order to reduce run-time execution overhead. Consequently, the proposed scheme dramatically reduces storage overhead and, computationally, performs one hash computation on the first execution and thereafter requires only a 1-bit check.
S.S. Kim
Department of Computer Engineering, Halla University, San 66, Heungup-Li, Heungup-myon,
Wonju-shi, Kangwon-do, Republic of Korea
e-mail: [email protected]
J.H. Park
Department of Computer Science and Engineering, Seoul National University of Technology,
172 Gongreung 2-dong, Nowon-gu, Seoul, Republic of Korea
e-mail: [email protected]
1 Introduction
Embedded system is a solution for specified functions, which are built-in in other
products. For example, when a mobile phone mainly designed for calls has a TV fea-
ture, this TV feature (system) is an embedded system. Today, most of the electronic,
information, and communication devices with hi-tech features such as computers,
electronic appliances, factory automation systems, elevators, and mobile phones con-
tain embedded systems. In most cases, it runs on its own and is called an embedded
system, especially when it works as a secondary system, combined with other prod-
ucts. For computers, it refers to a certain type of computer system or computing
device designed for an exclusive function or to be used with certain embedded soft-
ware application programs. Besides computers, voice solutions in a portable personal
information terminal (PDA) and web features built-in in TVs, electronic cookers,
refrigerators, and automobiles are other examples of embedded systems.
Embedded systems are closely related to our daily lives and applied to a wide range of areas; like desktop PCs used for online shopping or Internet banking, they hold important personal information. Linux systems designed for embedded use, in particular, have limited performance and resources and are difficult to maintain quickly, which makes it hard to guarantee a sustainably secure user environment even against minor attacks. Accordingly, the importance of embedded operating system security technology that protects privacy from attacks on such embedded systems is expected to grow sharply as their use increases.
Embedded systems consist largely of embedded operating systems, embedded middleware and security systems, embedded basic/common application software, and embedded software development tools. From the perspective of embedded system security, there are two issues: kernel security in embedded operating systems, and security systems for secure communication between embedded-middleware server-side servers and target embedded systems. This paper deals with verifying, inside the target embedded system's kernel, the integrity of running programs when signed application binary programs downloaded to the target embedded system are executed, and, in relation to the latter issue, aims to propose a more efficient integrity verification scheme than those existing at present [19, 23]. Figure 1 outlines the classification of security technology for embedded operating systems; the proposed study belongs to application program signing/tagging among these security technologies.
Although secure operating systems for servers or desktop PCs have been studied in depth at home and abroad with respect to operating system security technology (e.g. NSA's general-purpose SELinux [42] and the SecureOS developed by SecuBrain and the Electronics and Telecommunications Research Institute), there has been little notable progress in security technology for embedded operating systems. As mentioned above, however, in 2006 the Electronics and Telecommunications Research Institute developed technology that meets the Korean TTA semi-standard [36] and provides an overall solution for embedded OS security systems.
In this paper, we propose a new and more efficient scheme for securing application binary integrity in a target-system-based security framework, in connection with the embedded OS security service middleware proposed in [19]. The proposed scheme also meets the Korean Standard (TTAS.KO-11.0054), as do the schemes proposed in [19] and [23].
Up until now, many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems. Approaches to this problem fall into hardware-based and software-based methods; this research takes the software-based approach. From the software perspective, unlike the existing work [37, 38], and based on the standardized model [36] published in Korea, no extra verifier is required and the verification function is performed in the target system itself. Moreover, the MAC policy suggested in [29] is not used; a hash function is used instead. Unlike the previously suggested method of [19], the SHA-1 hash function is upgraded to SHA-2 for security and, for efficiency, apart from the verification computation for the first execution of an application program, verification of the same file thereafter requires only a 1-bit check.
The remainder of this paper is organized as follows. Section 2 presents the research background: the embedded operating system security reference model, the TTA standard, and the integrity guarantee method developed by the Electronics and Telecommunications Research Institute. Section 3 presents the proposed, more efficient method, and Sect. 4 concludes this paper.
2 Related studies
2.1 Background
According to existing research [11, 17, 21, 32, 33] on embedded system security requirements, there are seven classifications: basic security functions (confidentiality, integrity, and authentication), tamper resistance, content security, secure storage, availability, secure network access, and user identification. Among them, the data integrity requirement, in particular, is one of the basic security functions guaranteeing that data cannot be illegally changed. In embedded systems, integrity is divided into the integrity of the program as it is downloaded from the server host system to the target embedded system, and the integrity of the program as it runs in the target embedded system's kernel. The distinction matters because, even if there is no problem with the former, malicious code such as a virus, worm, or Trojan horse may still have infected the program by the time it runs in the target embedded system's kernel.
Such security threats have already been named software attacks in [32], and they have been widely identified in the literature [7, 14, 22, 46, 47]. Regarding attacks against secure embedded systems, [32] classifies them by functional objective into three broad categories: privacy attacks, integrity attacks, and availability attacks. Classified by the agent or means used, attacks divide into software attacks, physical or invasive attacks [5, 13, 28], and side-channel attacks [27]. Among these, a software attack occurs through a virus or Trojan horse when an application program is downloaded or executed and, from a functional perspective, corresponds to an integrity attack.
Up until now, there have been many efforts [9, 32, 35, 39, 41, 52] at tamper resistance to protect against software attacks. Regarding software integrity problems in particular, solutions have been proposed from a hardware perspective, such as XOM [8, 26], off-chip memory security solutions [45], and secure single-chip processors [2, 40].
There have also been various efforts from a software perspective. In 2002, a method known as oblivious hashing [4] was introduced. This method hashes the execution trace of a piece of code and verifies its run-time behavior. Unlike previous techniques that mainly verify the static shape of the code, this primitive allows implicit computation of a hash value based on the actual execution of the code. The main idea is to hash the execution trace of a piece of code, thereby allowing probabilistic or deterministic verification of the run-time behavior of the software. This is accomplished by injecting additional computation into the software: the hashing code implicitly computes a hash value from the dynamic execution context of the host code. The main feature of the injection method is to blend the hashing code seamlessly with the host code, making the two locally indistinguishable and thus difficult to separate without non-trivial effort to run and observe the program's execution repeatedly. However, this method has the drawback that it cannot detect x86 debugger breakpoints implemented through INT3.
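To make the idea concrete, the following is a minimal, self-contained C sketch of the oblivious-hashing principle (our toy illustration, not the construction of [4]): a running hash is updated with values produced during execution, so the final digest reflects the actual execution trace rather than only the static code bytes.

#include <stdint.h>
#include <stdio.h>

/* Toy illustration of oblivious hashing: a running hash (FNV-1a style)
 * is updated with values observed during execution, so the final digest
 * depends on the actual execution trace, not only on the code bytes. */
static uint32_t trace_hash = 0x811C9DC5u;          /* FNV-1a offset basis */

static void oh_update(uint32_t v)
{
    trace_hash = (trace_hash ^ v) * 0x01000193u;   /* FNV-1a prime */
}

/* Host function with injected hashing code (the oh_update calls). */
static int abs_val(int x)
{
    oh_update((uint32_t)x);                        /* injected: hash the input  */
    int r = (x < 0) ? -x : x;
    oh_update((uint32_t)r);                        /* injected: hash the result */
    return r;
}

int main(void)
{
    const int inputs[] = { -3, 7, -12 };
    for (int i = 0; i < 3; i++)
        abs_val(inputs[i]);

    /* A verifier that knows the expected inputs can recompute this value
     * and compare it to detect tampering with the code's behavior. */
    printf("execution-trace hash: 0x%08x\n", trace_hash);
    return 0;
}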
In 2004, the SWATT (SoftWare-based ATTestation) technique [38] was introduced to verify the memory contents of embedded devices and to detect malicious changes to those contents. A verifier, physically separate from the embedded device, attests the code, static data, and configuration settings of the device. SWATT uses a challenge–response protocol between the verifier and the embedded device: the verifier sends a challenge to the embedded device, and the embedded device computes a response to this challenge using a verification procedure that is either preprogrammed into the embedded device's memory or downloaded from the verifier prior to verification. The verifier can locally compute the answer to its challenge and can thus check the answer returned by the embedded device. The design of SWATT ensures that the embedded device can return the correct answer only if its memory contents are correct. SWATT may appear to provide properties similar to secure boot, but it is distinct. Systems such as TCG [44] and NGSCB [30] use a secure coprocessor during system initialization to bootstrap trust. SWATT does not need a secure coprocessor and allows a trusted external verifier to verify the memory contents of an embedded device; once the code running on the embedded device is verified, that code forms the trusted computing base. However, this method fundamentally assumes that the embedded device is untrusted and has the drawback of additionally requiring an external entity to perform the attestation of code or data.
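The following user-space C sketch illustrates the challenge–response idea in simplified form. It is our illustration, not the actual SWATT checksum, which traverses memory in pseudorandom order and relies on verifier-side timing; here the verifier sends a nonce, the device computes a keyed checksum over its memory image, and the verifier recomputes the same checksum over its reference copy.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified SWATT-style attestation: a keyed checksum over the device's
 * memory image, driven by a verifier-chosen nonce.  The real SWATT
 * checksum uses a pseudorandom traversal and is timed by the verifier;
 * both aspects are omitted here. */
static uint32_t attest_checksum(const uint8_t *mem, size_t len, uint32_t nonce)
{
    uint32_t c = nonce;
    for (size_t i = 0; i < len; i++) {
        /* index derived from the running checksum: a toy stand-in for
         * SWATT's pseudorandom memory traversal */
        size_t idx = (c ^ (uint32_t)i) % len;
        c = (c << 5) + c + mem[idx];          /* djb2-style mixing */
    }
    return c;
}

int main(void)
{
    uint8_t device_mem[256], verifier_copy[256];
    memset(device_mem, 0xAB, sizeof device_mem);
    memcpy(verifier_copy, device_mem, sizeof device_mem);

    uint32_t nonce = 0x1234ABCDu;                     /* the challenge */
    uint32_t response = attest_checksum(device_mem, sizeof device_mem, nonce);
    uint32_t expected = attest_checksum(verifier_copy, sizeof verifier_copy, nonce);

    printf("attestation %s\n", response == expected ? "passed" : "failed");
    return 0;
}

Any modification of device_mem changes the response, so the verifier detects it as long as the reference copy is correct and the checksum cannot be forged faster than it can be computed honestly.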
In 2005, the so-called Pioneer [37] method, which improves on this point, was introduced. This method is also based on a challenge–response protocol, between an external trusted entity, called the dispatcher, and an untrusted platform. The dispatcher communicates with the untrusted platform over a communication link, such as a network connection. After a successful invocation of Pioneer, the dispatcher obtains assurance that: an arbitrary piece of code, called the executable, on the untrusted platform is unmodified; the unmodified executable is invoked for execution on the untrusted platform; and the executable is executed without being tampered with, despite the presence of malicious software on the untrusted platform. However, this method fundamentally assumes that the dispatcher knows the hardware configuration of the untrusted platform and that the untrusted platform cannot collude with other devices during verification. It also assumes that the communication channel between the dispatcher and the untrusted platform provides message-origin authentication, i.e., the channel is configured so that the dispatcher is guaranteed that the Pioneer packets it receives originated from the untrusted platform. Furthermore, to provide the guarantee of untampered code execution, the authors assume the executable is self-contained, does not need to invoke any other software on the untrusted platform, and can execute at the highest processor privilege level with interrupts turned off.
In 2007, a remote attestation method [34] was suggested for application environments that require remote verification of software running on an untrusted platform. It defined the problem of remote code integrity verification as the act of delivering to a verification entity attestations that guarantee code executes untampered on a remote untrusted computing platform. On such a platform, an adversary has administrative privileges and can tamper with all software, including the operating system. Remote code integrity verification can be seen as an extension of local integrity verification, in which software execution fails when tampering with its code is detected.
Related to such remote attestation solutions, a 2009 study applied this approach to wireless sensor networks. The method of [6] identified the main security vulnerability as the malicious host problem, where an adversary in control of the target's host environment tries to tamper with the target code; in addition, the ReDAS (Remote Dynamic Attestation System) method [20], which provides integrity evidence for dynamic system properties, was introduced. However, vulnerabilities of such SWATT-based methods were revealed in 2009 by [3, 10].
Meanwhile, from a policy perspective, to guarantee the integrity of third-party applications such as mobile banking or untrusted downloaded games, which are security-critical on mobile phone systems, the PRIMA (Policy Reduced Integrity Measurement Architecture) method [29] was introduced in 2008; it extends existing MAC (Mandatory Access Control) mechanisms and, for inherent simplicity, reduces the existing SELinux policy to one over 90% smaller. PRIMA addresses the problem of run-time integrity measurement by additionally measuring the information flows between processes implied by the system's security policy. This way, a verifier can prove that trusted components in the system are isolated from untrusted and potentially harmful inputs. Moreover, PRIMA's CW-Lite integrity enforcement model only requires the trusted portions of a system to be measured and thus reduces the number of measurements required to verify a system.
In this paper, by contrast, from the software perspective and unlike the existing literature [1, 6, 20, 34, 37, 38], no extra verifier is used; the verification function is performed in the target system itself. In addition, a hash function is used rather than the MAC-policy method suggested in [29]. Unlike the previously suggested method of [19], the SHA-1 hash function is upgraded to SHA-2 for security and, for efficiency, apart from the verification computation for the first execution of an application program, verification of the same file thereafter requires only a 1-bit check.
The model in Fig. 2 consists of a host system and a target system: the host system generates application programs and transmits them to the target system [4]. In this process, the host system's security program module generates the application programs to run on the target system and then attaches a signature and tag to each program. In turn, the target system's security program module downloads a signed application program and transmits the information necessary to control access to the signed application program to the kernel.
For the security kernel module, an access control module and a signature verification module are added to the embedded system's OS kernel. Here, the signature verification module verifies the signature of a signed application program, to secure its integrity, when the program runs.
The host system’s security program module provides a signature and tag attach-
ment function to application programs developed by the host and sends them to the
target system in the form of an application program signed by the security program,
as shown on the left (solid line part) of Fig. 3. As shown on the right (dotted line part)
Efficient scheme of verifying integrity of application binaries
of Fig. 3, the target system’s security program module, like the host system’s secu-
rity program, receives signed application program and public key downloaded from
the host, generates an access control table in a format available for the kernel and
distributes it to the kernel. In other words, it downloads signed application program
from the host system’s security program, verifies the network integrity, manages the
access-control information table stored on nonvolatile media and transmits all this
information to the kernel.
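As a concrete sketch of the signature check in this reference model, the following C function verifies a detached RSA signature over a downloaded binary against the host's public key. The use of OpenSSL (1.1.1 or later assumed), the function name, and the PEM key file are our illustrative assumptions, not part of the standard.

#include <openssl/evp.h>
#include <openssl/pem.h>
#include <stdio.h>

/* Verify a detached RSA signature (SHA-256 digest) over an application
 * binary with the host's public key -- roughly the check performed when
 * a signed program is downloaded to the target system. */
static int verify_binary(const unsigned char *bin, size_t bin_len,
                         const unsigned char *sig, size_t sig_len,
                         const char *pubkey_pem)
{
    FILE *fp = fopen(pubkey_pem, "r");
    if (!fp)
        return -1;
    EVP_PKEY *pkey = PEM_read_PUBKEY(fp, NULL, NULL, NULL);
    fclose(fp);
    if (!pkey)
        return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx
        && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
        && EVP_DigestVerify(ctx, sig, sig_len, bin, bin_len) == 1;

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok ? 0 : -1;   /* 0 == signature valid, binary unmodified */
}

A caller would load the binary and its signature from the download path and invoke verify_binary() before the access-control table entry for the program is handed to the kernel.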
In this paper, we propose a new method based on this reference model, with a special focus on the integrity verification function.
There are two approaches to realizing a secure OS today: altering the kernel itself, or loading and removing a security module inside the kernel as needed. We basically assume the latter, that is, dynamically loadable kernel modules (LKM, Loadable Kernel Modules) [15, 18, 49], which have drawn attention because they avoid recompiling the entire kernel whenever a simple modification of an integrated kernel such as embedded Linux is needed or part of the system architecture changes.
LKM-based approaches divide into two methods. One is system call hooking, in which a Linux system call is hooked so that a security check in the security kernel module is inserted before the original service routine is executed. The other is LSM (Linux Security Module) [50, 51], in which multiple interfaces are placed at specific points in the Linux kernel so that a security module can be inserted or removed. One advantage of LSM is that it allows many security modules to be used, which was not possible with the existing hooking approach.
As on a general Linux box, there are a variety of paths through which application programs can be executed in target embedded systems. Therefore, blocking program execution in advance in user space is not appropriate, because testing every execution path is impossible. We have therefore adopted hooking the path through which a program runs inside the kernel. As mentioned above, every program in a Linux environment is loaded into memory via a system call that passes from user level down to kernel level. Accordingly, placing a hook function in the kernel so that the desired check is executed just before the program is loaded into memory leaves no way to bypass verification, at any time and along any path.
This scheme covers the execution of all application binary files, including shared libraries loaded as dynamic links. Hooking normally requires modifying the internal structure of the kernel; this paper instead proposes using LSM, which can be applied directly without recompiling the kernel of existing embedded Linux.
In this paper, the integrity of a file is verified immediately before the file is loaded into memory; this is activated by replacing the file_mmap() function, one of the hook functions that LSM allows to be replaced, in the same manner as proposed in [24] for program execution. As a result, the system can be protected safely by blocking code injection attacks, which alter binary executable files or the data of dynamic library files by exploiting weaknesses of existing programs, as well as harmful programs such as viruses or worms in the form of binary executables, as shown in Fig. 5.
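The kernel-side C sketch below shows how such a hook could be registered, assuming a kernel around version 5.x with the built-in LSM interface (DEFINE_LSM/LSM_HOOK_INIT); in these kernels the hook the paper calls file_mmap() is named mmap_file, the "binverify" name is ours, the registration interface differs again both in 2.6-era kernels (struct security_operations) and in very recent ones, and the integrity check itself is left as a stub.

#include <linux/kernel.h>
#include <linux/lsm_hooks.h>
#include <linux/fs.h>
#include <linux/mman.h>

/* Stub for the integrity check described in this paper: return 0 if the
 * file's validation bit is set or its hash matches the stored value,
 * and -EPERM otherwise.  The hash step is sketched in Sect. 3. */
static int binverify_check(struct file *file)
{
    return 0;
}

/* Invoked just before a file is mapped into memory (program text and
 * shared libraries), so every executable mapping passes the check and
 * verification cannot be bypassed. */
static int binverify_mmap_file(struct file *file, unsigned long reqprot,
                               unsigned long prot, unsigned long flags)
{
    if (!file || !(prot & PROT_EXEC))
        return 0;                       /* only executable mappings */
    return binverify_check(file);
}

static struct security_hook_list binverify_hooks[] __lsm_ro_after_init = {
    LSM_HOOK_INIT(mmap_file, binverify_mmap_file),
};

static int __init binverify_init(void)
{
    security_add_hooks(binverify_hooks, ARRAY_SIZE(binverify_hooks),
                       "binverify");
    return 0;
}

DEFINE_LSM(binverify) = {
    .name = "binverify",
    .init = binverify_init,
};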
3 Proposed scheme
Integrity verification aims to detect forging of application binary files when they are run inside the kernel, as outlined above. To verify integrity, the existing schemes [19, 23] compare two result values, obtained by way of a hash algorithm and a digital signature algorithm, to see whether they are identical: the SHA-1 hash value of the binary is compared with the value obtained from digital signature verification.
1 Bit v is a 1-bit value whose initial value is 0. If the integrity check for the initial binary file finds no problem, the value is set to 1; it is reset to 0 when the same file is modified (e.g., by a write operation). This value is stored in a reserved field of the i-node structure in Linux OS or, if there is no space in the i-node structure, it can be stored as an encrypted file key in the form of an extended attribute as in existing methods [23, 42].
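As a user-space illustration of the extended-attribute option mentioned in the footnote above, the following C program reads and sets a 1-bit verification flag on a file. The attribute name and the user.* namespace are our assumptions for the sake of a runnable example; an in-kernel implementation would manage the bit itself, e.g. in an i-node reserved field or the security.* namespace.

#include <stdio.h>
#include <sys/xattr.h>

/* Store the 1-bit validation flag as an extended attribute.  The name
 * "user.bin_verified" is hypothetical. */
static int set_validation_bit(const char *path, int v)
{
    char val = v ? '1' : '0';
    return setxattr(path, "user.bin_verified", &val, 1, 0);
}

static int get_validation_bit(const char *path)
{
    char val = '0';
    if (getxattr(path, "user.bin_verified", &val, 1) < 0)
        return 0;              /* missing attribute == not yet verified */
    return val == '1';
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    printf("bit v before: %d\n", get_validation_bit(argv[1]));
    set_validation_bit(argv[1], 1);    /* set after a successful hash check */
    printf("bit v after:  %d\n", get_validation_bit(argv[1]));
    return 0;
}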
There are three general security requirements for embedded systems: authentication, integrity, and confidentiality [11, 17, 21, 32, 33]. Authentication and confidentiality, in particular, concern the course of transmitting a binary from the host system to the target system and downloading it; for these, the methods proposed in [19, 23] are applied in this paper, and we refer to [19, 23] for their security.
Regarding the integrity discussed in this paper, the proposed scheme calculates the SHA-1 hash value of the sections excluding the digital signature via the integrity verification module in the kernel and compares it with the SHA-1 value stored by the existing binary verification/setup module, in order to secure the integrity of the application binary. However, as mentioned in Sect. 2.2, a security problem with the SHA-1 hash algorithm was exposed in 2005. Currently SHA-2 is used instead of SHA-1, and the SHA-3 algorithm is expected to be used in the future. Accordingly, the suggested method improves the security of the integrity check by replacing the existing SHA-1 algorithm with SHA-2.
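Inside the kernel, the SHA-2 computation can be performed with the kernel crypto API. The following is a minimal sketch (our illustration, assuming a reasonably recent kernel) that computes a SHA-256 digest over an in-memory buffer; in the actual scheme the digest would cover the mapped file contents excluding the signature sections, which is omitted here.

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/slab.h>

/* Compute a SHA-256 digest of an in-memory buffer with the kernel
 * crypto API; `out` must have room for 32 bytes.  This corresponds to
 * the single SHA-2 computation the proposed scheme performs on the
 * first execution of a binary. */
static int binverify_sha256(const u8 *data, unsigned int len, u8 *out)
{
    struct crypto_shash *tfm;
    struct shash_desc *desc;
    int ret;

    tfm = crypto_alloc_shash("sha256", 0, 0);
    if (IS_ERR(tfm))
        return PTR_ERR(tfm);

    desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
    if (!desc) {
        crypto_free_shash(tfm);
        return -ENOMEM;
    }
    desc->tfm = tfm;

    ret = crypto_shash_digest(desc, data, len, out);

    kfree(desc);
    crypto_free_shash(tfm);
    return ret;
}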
Contrary to the above schemes [19, 23], however, verification results are stored in a single validation check bit in the i-node structure, instead of storing the signature value for each application binary file, in order to reduce run-time execution overhead. Consequently, the proposed scheme is more efficient: it dramatically reduces storage overhead and, computationally, performs one hash computation on the first execution and thereafter compares only the 1 validation check bit, instead of running signature and hash algorithms for every execution of an application binary. Furthermore, in cases where the i-node structure or file data changes frequently, depending on how the scheme is applied, it can provide far more effective verification performance than the previous schemes. Table 1 compares the previous methods [19, 23] and the proposed method in terms of the computations required.
Table 1 The comparison of previous methods [19, 23] and the proposed method

                                                          Previous methods [19, 23]   Proposed method
Executing     For initial          SHA-1 (a)                          O
binary in     execution            SHA-2 (b)                                                O
kernel                             RSA signing (c)                    O
                                   Comparison (d)                     O                     O
              For second and       SHA-1 (a)                          O
              later executions     RSA signing (c)                    O
                                   Comparison (e)                     O                     O
Information stored in i-node                              Signature value and          Validation check
                                                          verification result          bit v (1 bit)

(a) One SHA-1 hash over the entire file, excluding the sections containing the digital signature
(b) One SHA-2 hash over the entire file, excluding the sections containing the digital signature
(c) One RSA digital signature verification
(d) In the existing methods [19, 23], one comparison of the SHA-1 hash value with the digital signature verification value; in the suggested method, one comparison of the SHA-2 hash value with the hash value loaded from the binary verification/setup module
(e) In the existing methods [19, 23], the same computation as for the first execution (one comparison of the SHA-1 hash value with the digital signature verification value); in the suggested method, only one check of bit v
As summarized in the table, the proposed scheme guarantees security at the same level as the existing schemes [19, 23] and is also extremely efficient, because the RSA-based digital signature process can be skipped. Even though the hash value stored in the binary verification/setup module must be loaded into the binary integrity verification module once more, the time for this is insignificant compared to that of the existing RSA signature verification process, since each module in an embedded system is basically implemented in hardware.
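As a rough illustration of the saving (the notation is ours, not from the cited schemes), let $T_h$, $T_s$, $T_c$, and $T_b$ denote the costs of one hash computation, one RSA signature verification, one hash-value comparison, and one 1-bit check, respectively. For $n$ executions of the same unmodified binary, Table 1 gives approximately
$$C_{\text{prev}}(n) = n\,(T_h + T_s + T_c), \qquad C_{\text{prop}}(n) = (T_h + T_c) + (n-1)\,T_b .$$
Since $T_s \gg T_h \gg T_b$, the per-execution cost of the proposed scheme approaches a single bit check as $n$ grows.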
4 Conclusions
In the proposed method, verification results are stored in a single validation check bit in the i-node structure, instead of storing the signature value for each application binary file, in order to reduce run-time execution overhead. This dramatically reduces storage overhead and, computationally, requires one hash computation on the first execution and thereafter only a comparison of the 1 validation check bit, instead of signature and hash computations for every execution of an application binary. Furthermore, in cases where the i-node structure or file data changes frequently, depending on how the scheme is applied, the scheme can provide far more effective verification performance than previous schemes.
Currently, to improve on the heavy RSA signature verification used in the existing methods [19, 23], the design of optimal and efficient security protocols, such as Rabin [31], NTRUEncrypt [16], ECDSA [43], and XTR [25], that can be built into lightweight embedded systems is in progress.
Acknowledgement This research was supported by the MKE (The Ministry of Knowledge Economy),
Korea, under the ITRC (Information Technology Research Center) support program supervised by the
NIPA (National IT Industry Promotion Agency) (NIPA-2010-C1090-1031-0004).
References
1. Abuhmed T, Nyamaa N, Nyang D (2009) Software-based remote code attestation in wireless sensor
network. In: Proc IEEE GLOBECOM
2. Arbaugh WA, Farber DJ, Smith JM (1997) A secure and reliable bootstrap architecture. In: Proc IEEE symposium on security and privacy, pp 65–71
3. Castelluccia C, Francillon A, Perito D, Soriente C (2009) On the difficulty of software-based attes-
tation of embedded devices. In: Proc the 16th ACM conference on computer and communications
security (CCS)
4. Chen Y, Venkatesan R, Cary M, Sinha S, Jakubowski MH (2002) Oblivious hashing: a stealthy soft-
ware integrity verification primitive. In: Proc int workshop, information hiding, pp 400–414
5. Chhabra S, Rogers B, Solihin Y, Prvulovic M (2009) Making secure processors OS- and performance-
friendly. ACM Trans Archit Code Optim (TACO) 5(4)
6. Ceccato M, Preda MD, Majumdar A, Tonella P (2009) Remote software protection by orthogonal client replacement. In: Proc the 24th ACM symposium on applied computing, ACM
7. Common Vulnerabilities and Exposures (2010) http://cve.mitre.org/
8. Courtright K, Husain MI, Sridhar R (2009) LASE: latency aware simple encryption for embedded systems security. Int J Comput Sci Netw Secur (IJCSNS) 9(10)
9. CryptocellTM, Discretix Technologies Ltd. http://www.discretix.com
10. Giannetsos T, Dimitriou T, Krontiris I, Prasad NR (2010) Arbitrary code injection through self-propagating worms in Von Neumann architecture devices. Comput J, Advance Access, published online. http://comjnl.oxfordjournals.org/cgi/content/abstract/bxq009
11. Gilani S (2007) Embedded OS: a foundation for secure networking. In: Embedded computer design.
OpenSystems publishing. http://www.mentor.com
12. Gilbert H, Handschuh H (2005) Security analysis of SHA-256 and sisters. In: Selected areas in cryp-
tography 2003, NIST cryptographic hash workshop
13. Gogniat G, Wolf T, Burleson W (2005) Reconfigurable security primitive for embedded systems. In:
Proc international symposium on system-on-chip (SOC)
14. Ghosh AK, Swaminatha TM (2001) Software security and privacy risks in mobile e-commerce. Com-
mun ACM 44:51–57
15. Henderson B (2010) Linux Loadable Kernel Module HOWTO. http://www.linux.org/docs/ldp/
howto/module-howto/
16. Hoffstein J, Pipher J, Silverman J (1998) NTRU: a ring-based public key cryptosystem. In: Buhler J
(ed) Algorithmic number theory (ANTS III). LNCS, vol 1423. Springer, Berlin, pp 267–288
17. Hwang DD, Schaumont P, Tiri K, Verbauwhede I (2006) Securing embedded systems. IEEE Secur
Priv 4(2):40–49
48. Wang X, Yin Y, Yu H (2005) Finding collisions in the full SHA-1. In: Proc Crypto
49. Welsh M (1995) Implementing Loadable Kernel Modules for Linux. Dr Dobbs J 20(5)
50. WireX Communications (2001) Linux Security Module. http://lsm.immunix.org/
51. Wright C, Cowan C, Smalley S, Morris J, Kroah-Hartman G (2002) Linux Security Module framework.
In: 2002 Ottawa Linux symposium
52. Yee B (1994) Using secure co-processors. PhD thesis, Carnegie Mellon University