The Kernel Boot Process: Booting and The Kernel
This article looks at the details of the Kernel to see how an operating system
starts life after the computer boots up, right up to the point where the boot loader,
after stuffing the Kernel image into memory, is about to jump into the Kernel entry point.
The critical code of the Kernel is usually loaded into a protected area of memory, which
prevents it from being overwritten by other, less frequently used parts of the operating
system or by applications. The Kernel performs its tasks, such as executing processes
and handling interrupts, in Kernel space, whereas everything a user normally does,
such as writing text in a text editor or running programs in a GUI (graphical user
interface), is done in user space. This separation prevents user data and Kernel data
from interfering with each other and thereby diminishing performance or causing the
system to become unstable (and possibly crashing).
When a process makes a request of the Kernel, that request is called a system call.
Various Kernel designs differ in how they manage system calls and resources. For
example, a monolithic Kernel executes all the operating system instructions in the
same address space in order to improve the performance of the system. A microkernel
runs most of the operating system's background processes in user space, to make the
operating system more modular and, therefore, easier to maintain.
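As a concrete taste of a system call, here is a minimal user-space sketch in C. It asks the Kernel for the process ID twice: once through the usual libc wrapper and once through the raw syscall(2) interface; both routes end up trapping into Kernel space:

/* A minimal sketch of a system call: the process asks the Kernel for its
 * own process ID, via the libc wrapper and via the raw syscall interface. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    pid_t pid = getpid();            /* libc wrapper around the system call */
    long raw = syscall(SYS_getpid);  /* the same request, issued directly */
    printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)pid, raw);
    return 0;
}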
Functions of the Kernel
The Kernel's primary function is to mediate access to the computer's resources,
including the CPU, RAM, and I/O devices.
The central processing unit (CPU)
This central component of a computer system is responsible for running or executing
programs. The Kernel takes responsibility for deciding at any time which of the many
running programs should be allocated to the processor or processors (each of which
can usually run only one program at a time).
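A process can interact with that scheduling decision from user space. Here is a small sketch, assuming a Linux system: the process lowers its own priority with nice(2) and then yields the processor with sched_yield(2) so the Kernel can pick another runnable program:

/* A tiny sketch of cooperating with the Kernel's CPU scheduler. */
#include <stdio.h>
#include <unistd.h>
#include <sched.h>

int main(void)
{
    int prio = nice(10);  /* ask the scheduler to deprioritize this process */
    printf("new nice value: %d\n", prio);
    sched_yield();        /* give the CPU back to the scheduler */
    return 0;
}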
Random-access memory (RAM)
Random-access memory is used to store both program instructions and data. Typically,
both need to be present in memory in order for a program to execute. Often multiple
programs will want access to memory, frequently demanding more memory than the
computer has available. The Kernel is responsible for deciding which memory each
process can use, and determining what to do when not enough memory is available.
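A process can ask the Kernel for memory directly. The sketch below, assuming a Linux system, requests one anonymous page with mmap(2), the same mechanism malloc() relies on under the hood for large allocations:

/* A minimal sketch: asking the Kernel for a page of anonymous memory. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;  /* one typical page */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    ((char *)p)[0] = 42;  /* touching the page makes the Kernel back it with RAM */
    printf("Kernel mapped a page at %p\n", p);
    munmap(p, len);
    return 0;
}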
Input/output (I/O) devices
I/O devices include such peripherals as keyboards, mice, disk drives, printers, network
adapters, and display devices. The Kernel allocates requests from applications to
perform I/O to an appropriate device and provides convenient methods for using the
device (typically abstracted to the point where the application does not need to know
implementation details of the device).
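That abstraction is visible from user space: on Linux, many devices appear as ordinary files. A short sketch reading random bytes from the Kernel's /dev/urandom character device, using nothing but the standard file API:

/* Device abstraction in action: a hardware-backed random source read
 * like an ordinary file, with no device-specific knowledge needed. */
#include <stdio.h>

int main(void)
{
    unsigned char buf[8];
    FILE *f = fopen("/dev/urandom", "rb");  /* a character device node */
    if (!f) { perror("open"); return 1; }
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) { perror("read"); fclose(f); return 1; }
    for (int i = 0; i < 8; i++) printf("%02x", buf[i]);
    printf("\n");
    fclose(f);
    return 0;
}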
Key aspects of resource management are the definition of an execution
domain (address space) and the protection mechanism used to mediate access
to the resources within a domain.
Kernels also usually provide methods for synchronization and communication between
processes, known as inter-process communication (IPC).
A Kernel may implement these features itself, or rely on some of the processes it runs
to provide the facilities to other processes, although in this case it must provide some
means of IPC to allow processes to access the facilities provided by each other.
Finally, a Kernel must provide running programs with a method to make requests to
access these facilities.
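One classic IPC facility is the pipe. In the sketch below, a parent and child process communicate through a Kernel-managed pipe: the Kernel provides the channel, and the processes just read and write it:

/* A minimal IPC sketch: parent and child talk through a Kernel pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }
    if (fork() == 0) {              /* child: write one message */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                   /* parent: read the message */
    char buf[64] = {0};
    read(fd[0], buf, sizeof buf - 1);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}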
Device management
To perform useful functions, processes need access to the peripherals connected to the
computer, which are controlled by the kernel through device drivers. A device driver is a
computer program that enables the operating system to interact with a hardware device.
It provides the operating system with information on how to control and communicate with
a particular piece of hardware. The driver is a vital link between applications and the
devices they use. The design goal of a driver is abstraction; its function is to
translate the OS-mandated function calls (programming calls) into device-specific calls.
In theory, a device should work correctly with a suitable driver. Device drivers are
used for such things as video cards, sound cards, printers, scanners, modems, and LAN
cards. The common levels of abstraction of device drivers are:
On the hardware side:
• Interfacing directly.
• Using a high level interface (Video BIOS).
• Using a lower-level device driver (file drivers using disk drivers).
• Simulating work with hardware, while doing something entirely different.
On the software side:
• Allowing the operating system direct access to hardware resources.
• Implementing only primitives.
• Implementing an interface for non-driver software (example: TWAIN).
• Implementing a language, sometimes high-level (example: PostScript).
For example, to show the user something on the screen, an application would make a
request to the kernel, which would forward the request to its display driver, which is then
responsible for actually plotting the character/pixel.
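On Linux, most device drivers are packaged as Kernel modules. The skeleton below is a minimal sketch (the "demo" name is hypothetical): a real driver would register with a subsystem such as the char device, PCI, or USB layer in its init function, whereas this one only logs its load and unload:

/* A minimal sketch of a Linux Kernel module, the packaging used for most
 * Linux device drivers. A real driver registers with a subsystem here. */
#include <linux/init.h>
#include <linux/module.h>

static int __init demo_init(void)   /* called when the module is loaded */
{
    pr_info("demo driver: loaded\n");
    return 0;                       /* 0 = success */
}

static void __exit demo_exit(void)  /* called when the module is unloaded */
{
    pr_info("demo driver: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Skeleton driver for illustration");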
A kernel must maintain a list of available devices. This list may be known in advance (e.g.
on an embedded system where the kernel will be rewritten if the available hardware
changes), configured by the user (typical on older PCs and on systems that are not
designed for personal use) or detected by the operating system at run time (normally
called plug and play). In a plug and play system, a device manager first performs a scan
on different hardware buses, such as Peripheral Component Interconnect (PCI)
or Universal Serial Bus (USB), to detect installed devices, then searches for the
appropriate drivers.
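The result of that scan is visible from user space. On a Linux system the Kernel exposes its PCI bus enumeration under sysfs, one directory per detected device; a small sketch that lists them:

/* List the devices found by the Kernel's PCI scan, via sysfs. */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");  /* populated by the Kernel */
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')
            printf("PCI device: %s\n", e->d_name);  /* e.g. 0000:00:1f.3 */
    closedir(d);
    return 0;
}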
As device management is a very OS-specific topic, these drivers are handled differently
by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow
drivers to physically access their devices through some port or memory location. Very
important decisions have to be made when designing the device management system, as
in some designs accesses may involve context switches, making the operation very
CPU-intensive and easily causing a significant performance overhead.
Since I have an empirical bent I’ll link heavily to the sources for Linux Kernel 2.6.25.6 at
the Linux Cross Reference. The sources are very readable if you are familiar with C-like
syntax; even if you miss some details you can get the gist of what’s happening. The main
obstacle is the lack of context around some of the code, such as when or why it runs or
the underlying features of the machine. I hope to provide a bit of that context. For brevity,
a lot of fun stuff - like interrupts and memory - gets only a nod for now. The article ends
with the highlights for the Windows boot.
At this point in the Intel x86 boot story the processor is running in real mode, able to
address only 1MB of memory (real-mode addresses are formed as segment × 16 + offset,
which caps them at roughly 1MB), and RAM looks like this for a modern Linux system:
RAM contents after boot loader is done
The Kernel image has been loaded to memory by the boot loader using the BIOS disk I/O
services. This image is an exact copy of the file on your hard drive that contains the
Kernel, e.g. /boot/vmlinuz-2.6.22-14-server. The image is split into two pieces: a small
part containing the real-mode Kernel code is loaded below the 640K barrier; the bulk of
the Kernel, which runs in protected mode, is loaded after the first megabyte of memory.
The action starts in the real-mode Kernel header pictured above. This region of memory
is used to implement the Linux boot protocol between the boot loader and the Kernel.
Some of the values there are read by the boot loader while doing its work. These include
amenities such as a human-readable string containing the Kernel version, but also crucial
information like the size of the real-mode Kernel piece. The boot loader also writes values
to this region, such as the memory address for the command-line parameters given by
the user in the boot menu. Once the boot loader is finished it has filled in all of the
parameters required by the Kernel header. It’s then time to jump into the Kernel entry
point. The diagram below shows the code sequence for the Kernel initialization, along
with source directories, files, and line numbers:
Architecture-specific Linux Kernel Initialization
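As a concrete taste of that boot protocol, here is a sketch of a few real-mode header fields, with offsets as given in the Kernel's boot protocol documentation (Documentation/x86/boot.txt). The struct is illustrative, not the Kernel's actual declaration:

/* A sketch of a few real-mode Kernel header fields from the Linux/x86 boot
 * protocol. The boot loader reads some fields and fills in others. */
struct boot_header_excerpt {
    /* offset 0x1f1 */ unsigned char  setup_sects;  /* size of the real-mode piece, in 512-byte sectors */
    /* offset 0x1fe */ unsigned short boot_flag;    /* 0xAA55, the classic boot-sector signature */
    /* offset 0x200 */ unsigned short jump;         /* the 2-byte jump into the setup code */
    /* offset 0x202 */ unsigned int   header;       /* magic "HdrS" identifying the boot protocol */
    /* offset 0x206 */ unsigned short version;      /* boot protocol version */
    /* offset 0x228 */ unsigned int   cmd_line_ptr; /* written by the boot loader: address of the command line */
};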
The early Kernel start-up for the Intel architecture is in file arch/x86/boot/header.S. It’s in
assembly language, which is rare for the Kernel at large but common for boot code. The
start of this file actually contains boot sector code, a leftover from the days when Linux
could work without a boot loader. Nowadays this boot sector, if executed, only prints a
“bugger_off_msg” to the user and reboots. Modern boot loaders ignore this legacy code.
After the boot sector code we have the first 15 bytes of the real-mode Kernel header;
these two pieces together add up to 512 bytes, the size of a typical disk sector on Intel
hardware.
After these 512 bytes, at offset 0x200, we find the very first instruction that runs as part of
the Linux Kernel: the real-mode entry point. It’s in header.S:110 and it is a 2-byte jump
written directly in machine code as 0x3aeb. You can verify this by running hexdump on
your Kernel image and seeing the bytes at that offset – just a sanity check to make sure
it’s not all a dream. The boot loader jumps into this location when it is finished, which in
turn jumps to header.S:229 where we have a regular assembly routine called
start_of_setup. This short routine sets up a stack, zeroes the bss segment (the area that
contains static variables, so they start with zero values) for the real-mode Kernel and
then jumps to good old C code at arch/x86/boot/main.c:122.
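The hexdump sanity check mentioned above can also be done in a few lines of C. A sketch, using the example image path from earlier (substitute your own; note the second jump byte varies across Kernel versions):

/* Verify the boot signature at offset 0x1fe and the 2-byte entry jump
 * at offset 0x200 of a Kernel image. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/boot/vmlinuz-2.6.22-14-server", "rb");
    unsigned char b[4];
    if (!f) { perror("open"); return 1; }
    fseek(f, 0x1fe, SEEK_SET);
    fread(b, 1, 4, f);  /* b[0..1] = boot flag, b[2..3] = entry jump */
    printf("boot flag: %02x %02x (expect 55 aa)\n", b[0], b[1]);
    printf("entry:     %02x %02x (eb 3a is the word 0x3aeb)\n", b[2], b[3]);
    fclose(f);
    return 0;
}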
main() does some housekeeping like detecting the memory layout, setting a video mode,
etc. It then calls go_to_protected_mode(). Before the CPU can be set to protected mode,
however, a few tasks must be done. There are two main issues: interrupts and memory.
In real mode the interrupt vector table for the processor is always at memory address 0,
whereas in protected mode the location of the interrupt descriptor table is stored in a CPU
register called IDTR. Meanwhile, the translation of logical memory addresses (the ones
programs manipulate) to linear memory addresses (a raw number from 0 to the top of
memory) differs between real mode and protected mode. Protected mode requires a
register called GDTR to be loaded with the address of a Global Descriptor Table for
memory. So go_to_protected_mode() calls setup_idt() and setup_gdt() to install a
temporary interrupt descriptor table and global descriptor table.
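Loading the GDTR boils down to handing the CPU a 6-byte operand: a 16-bit limit and a 32-bit base. A sketch modeled on the real-mode setup code in arch/x86/boot/pm.c (the function name here is illustrative):

/* The packed struct matches the operand the lgdt instruction expects. */
struct gdt_ptr {
    unsigned short len;  /* size of the descriptor table minus 1 */
    unsigned int   ptr;  /* linear address of the table */
} __attribute__((packed));

static void load_gdt(struct gdt_ptr *gdt)
{
    asm volatile("lgdtl %0" : : "m" (*gdt));  /* tell the CPU where the table lives */
}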
We’re now ready for the plunge into protected mode, which is done by
protected_mode_jump, another assembly routine. This routine enables protected mode
by setting the PE bit in the CR0 CPU register. At this point we’re running with paging
disabled; paging is an optional feature of the processor, even in protected mode, and
there’s no need for it yet. What’s important is that we’re no longer confined to the 640K
barrier and can now address up to 4GB of RAM. The routine then calls the 32-bit Kernel
entry point, which is startup_32 for compressed Kernels. This routine does some basic
register initializations and calls decompress_kernel(), a C function to do the actual
decompression.
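The heart of the mode switch can be sketched in C with inline assembly (the real code is assembly in arch/x86/boot/pmjump.S, and this sketch assumes a 32-bit x86 build): set bit 0 of CR0, the PE (Protection Enable) flag, after which a far jump must immediately reload CS with a 32-bit code segment:

/* A sketch of enabling protected mode by setting CR0.PE. */
static void enable_protected_mode(void)
{
    unsigned long cr0;
    asm volatile("movl %%cr0, %0" : "=r" (cr0));  /* read the control register */
    cr0 |= 1;                                     /* set CR0.PE */
    asm volatile("movl %0, %%cr0" : : "r" (cr0)); /* protected mode is now on */
    /* a far jump must follow immediately to load a 32-bit code segment */
}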
decompress_kernel() prints the familiar “Decompressing Linux…” message.
Decompression happens in place, and once it’s finished the uncompressed Kernel image
has overwritten the compressed one pictured in the first diagram. Hence the
uncompressed contents also start at 1MB. decompress_kernel() then prints “done.” and
the comforting “Booting the Kernel.” By “Booting” it means a jump to the final entry point
in this whole story, given to Linus by God himself atop Mountain Halti, which is the
protected-mode Kernel entry point at the start of the second megabyte of RAM
(0x100000). That sacred location contains a routine called, uh, startup_32. But this one is
in a different directory, you see.
The second incarnation of startup_32 is also an assembly routine, but it contains 32-bit
mode initializations. It clears the bss segment for the protected-mode Kernel (which is the
true Kernel that will now run until the machine reboots or shuts down), sets up the final
global descriptor table for memory, builds page tables so that paging can be turned on,
enables paging, initializes a stack, creates the final interrupt descriptor table, and finally
jumps to the architecture-independent Kernel start-up, start_kernel(). The diagram below
shows the code flow for the last leg of the boot:
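Of those steps, turning paging on is the last hardware switch before start_kernel(). A minimal sketch (hypothetical function name, following the CR3/CR0 sequence described above, again assuming a 32-bit x86 build):

/* Point CR3 at the page tables built by startup_32, then set CR0.PG. */
static void enable_paging(unsigned long page_directory)
{
    unsigned long cr0;
    asm volatile("movl %0, %%cr3" : : "r" (page_directory)); /* base of the page tables */
    asm volatile("movl %%cr0, %0" : "=r" (cr0));
    cr0 |= 0x80000000;                                       /* set CR0.PG (bit 31) */
    asm volatile("movl %0, %%cr0" : : "r" (cr0));            /* addresses are now translated */
}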
Resources
[1] Silberschatz, Abraham; James L. Peterson; Peter B. Galvin (1991). Operating system
concepts. Boston, Massachusetts: Addison-Wesley. p. 696. ISBN 0-201-51379-X.
[2] Ball, Stuart R. (2002) [2002]. Embedded Microprocessor Systems: Real World
Designs (first ed.). Elsevier Science. ISBN 0-7506-7534-9.
[3] Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first
ed.). Addison-Wesley. p. 673. ISBN 0-201-14502-2.
[4] Denning, Peter J. (December 1976). "Fault tolerant operating systems". ACM
Computing Surveys 8 (4): 359–389. doi:10.1145/356678.356680. ISSN 0360-0300.
[5] Denning, Peter J. (April 1980). "Why not innovations in computer architecture?". ACM
SIGARCH Computer Architecture News 8 (2): 4–7. doi:10.1145/859504.859506. ISSN
0163-5964.
[6] Intel Corporation (2002) The IA-32 Architecture Software Developer’s Manual, Volume
1: Basic Architecture
[7] Linux Kernel Development by Robert Love, recommended in the comments for this
article. I’ve heard other positive reviews for that book, so it sounds worth checking out.
[8] For Windows, the best reference by far is Windows Internals by David Solomon and
Mark Russinovich, the latter of Sysinternals fame. This is a great book, well-written and
thorough. The main downside is the lack of source code.