Paper 1: Advanced Operating System
SEMESTER - II (CBCS)
PAPER I
ADVANCED
OPERATING SYSTEM
SUBJECT CODE: PSCS201
© UNIVERSITY OF MUMBAI
Programme Co-ordinator : Shri Mandar Bhanushe
Head, Faculty of Science and Technology IDOL,
University of Mumbai – 400098
Course Co-ordinator : Mr Sumedh Shejole
Assistant Professor,
IDOL, University of Mumbai- 400098
SYLLABUS
Principles of I/O Hardware, Principles of I/O Software, Deadlocks, RAM Disks, Disks,
Terminals. File Systems: Files, Directories, File System Implementation, Security,
Protection mechanisms in different Linux versions
Unit IV: Android Operating System
The Android Software Stack, The Linux Kernel – its functions, essential hardware
drivers. Libraries - Surface Manager, Media framework, SQLite, WebKit, OpenGL.
Android Runtime - Dalvik Virtual Machine, Core Java Libraries. Application Framework -
Activity Manager, Content Providers, Telephony Manager, Location Manager, Resource
Manager. Android Application – Activities and Activity Lifecycle, applications such as
SMS client app, Dialer, Web browser, Contact manager
Text book:
An Introduction to Operating Systems: Concepts and Practice (GNU/Linux), 4th edition, Pramod Chandra P. Bhatt, Prentice-Hall of India Pvt. Ltd, 2014.
Operating System Concepts with Java, Eighth Edition, Avi Silberschatz, Peter Baer Galvin, Greg Gagne, John Wiley & Sons, Inc., 2009, http://codex.cs.yale.edu/avi/os-book/OS8/os8j
UNIX and Linux System Administration Handbook, Fourth Edition, Evi Nemeth, Garth Snyder, Trent Hein, Ben Whaley, Pearson Education, Inc., 2011.
PROFESSIONAL Android™ 4 Application Development, Reto Meier, John Wiley
& Sons, Inc. 2012.
References:
Operating Systems: Design and Implementation, Third Edition, Andrew S.
Tanenbaum, Albert S. Woodhull, Prentice Hall, 2006.
Fedora Documentation, http://docs.fedoraproject.org/en-US/index.html
Official Ubuntu Documentation, https://help.ubuntu.com/
Android Developers, http://developer.android.com/index.html.
1
LINUX OPERATING SYSTEM
INTRODUCTION
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Linux Versus Other Unix-Like Kernels
1.3 Types of Kernels
1.4 GRUB in Linux
1.5 Inter Process Communication
1.6 Let us Sum Up
1.7 List of References
1.8 Bibliography
1.9 Unit End Exercise
1.0 Objectives
1.1 Introduction
1.2 Linux Versus Other Unix-Like Kernels
An operating system kernel can be structured as a microkernel or as a monolithic kernel. The kernel is the core of the OS, and its code is kept in a protected memory space of its own. It is a crucial component because it keeps the whole system functioning properly: it manages hardware, processes, file handling, and several other functions.
Microkernel
The microkernel is a type of kernel that permits the customization of the
OS. It is privileged and provides low-level address space management as
well as Inter-Process Communication (IPC). Furthermore, OS functions
like the virtual memory manager, file system, and CPU scheduler are built
on top of the microkernel. Every service has its address space to make
them secure. Moreover, every application has its address space. As a
result, there is protection between applications, OS Services, and the
kernel.
When an application requests a service from the OS services, the OS
services communicate with one another in order to provide the requested
service to the application. Inter-Process Communication (IPC) can assist
in establishing this communication. Overall, microkernel-based operating
systems offer a high level of extensibility. It is also possible to customize
the operating system's services to meet the needs of the application.
Monolithic Kernel
The monolithic kernel manages the system's resources between the system
application and the system hardware. Unlike the microkernel, user and
kernel services are run in the same address space. It increases the kernel
size and also increases the size of the OS.
The monolithic kernel offers CPU scheduling, device management, file
management, memory management, process management, and other OS
services via the system calls. All of these components, including file
management and memory management, are located within the kernel. The
user and kernel services use the same address space, resulting in a fast-
executing operating system. One drawback of this kernel is that if anyone
process or service of the system fails, the complete system crashes. The
entire operating system must be modified to add a new service to a
monolithic kernel.
Exokernel
An exokernel follows the end-to-end principle: it provides as few hardware abstractions as possible and allocates physical resources directly to applications.
Examples: Nemesis, ExOS.
However, Linux doesn't stick to any particular variant. Instead, it tries to adopt good features and design choices of several different Unix kernels. Here is an assessment of how Linux competes against some well-known commercial Unix kernels:
GRUB supports LBA (Logical Block Addressing) mode, which puts the addressing conversion used to find files into the firmware of the hard drive.
GRUB provides maximum flexibility in loading operating systems with the required options, using a command-based pre-operating-system environment.
The booting options such as kernel parameters can be modified
using the GRUB command line.
There is no need to specify the physical location of the Linux kernel for GRUB. It requires only the hard disk number, the partition number, and the file name of the kernel.
GRUB can boot almost any operating system using the direct and
chain loading boot methods.
GRUB Installation Process
GRUB automatically becomes the default loader after it is installed. The
following steps are followed to install GRUB:
The stage 1 boot loader is loaded into the memory by the BIOS. This
boot loader is also known as the primary boot loader. It exists on
512 bytes or less of disk space within the master boot record. The
primary boot loader can load the stage 1.5 or stage 2 boot loader if
required.
The stage 1.5 boot loader is loaded into the memory by the stage 1
boot loader if required. This may be necessary in some cases as
some hardware require a middle step before moving on to the stage
2 loader.
The secondary boot loader is also known as the stage 2 boot loader
and it can be loaded into the memory by the primary boot loader.
Display of the GRUB menu and command environment are
functions performed by the secondary boot loader. This allows the
user to look at system parameters and select the operating system to
boot.
The operating system or kernel is loaded into memory by the secondary boot loader. After that, control of the machine is transferred to the operating system.
GRUB Interfaces
There are three interfaces in GRUB which all provide different levels of
functionality. The Linux kernel can be booted by the users with the help of
these interfaces. Details about the interfaces are:
Menu Interface
The menu interface is the default interface; it is what the installation program configures when GRUB is installed. It presents a list of operating systems or kernels ordered by name. A specific operating system or kernel can be selected with the arrow keys and booted with the Enter key.
Menu Entry Editor Interface
The e key in the boot loader menu is used to access the menu entry editor.
All the GRUB commands for the particular menu entry are displayed there
and these commands may be altered before loading the operating system.
Command Line Interface
This interface is the most basic GRUB interface, but it grants the most
control to the user. Using the command line interface, any command can
be executed by typing it and then pressing enter. This interface also
features some advanced shell features.
GRUB vs GRUB2
The default menu in GRUB2 looks very similar to GRUB's, but several changes have been made:
Grub has two configuration files, menu.lst and grub.conf, whereas Grub2 has only one main configuration file, grub.cfg, which looks very close to a full scripting language. This configuration file is overwritten by certain Grub2 package updates, whenever a kernel is added or removed, or when the user runs update-grub. For any configuration change to take effect, update-grub must be run.
In Grub, it is hard for a normal user to modify the configuration. Grub2 is more user-friendly: grub-mkconfig regenerates the configuration automatically.
In Grub, partition numbering starts from 0, whereas in Grub2 it starts from 1. The first device is still identified as hd0. These defaults can be altered if needed by editing the device.map file (typically found under /boot/grub).
Grub addresses disks by physical and logical addresses and cannot even read from newer partition layouts, whereas Grub2 identifies disks by UUID and is therefore more reliable. It also supports LVM and RAID devices.
88
In today’s Linux Distros including (Ubuntu 16.04 and RHEL 7), Linux Operating
System Introduction
GRUB2 will now directly show a login prompt and no menu is
displayed now.
If you want to see the menu during boot, hold down the SHIFT key; on some systems pressing ESC also displays the menu.
Users now also have the choice of creating custom files in which they can place their own menu entries, such as the file called 40_custom in the /etc/grub.d folder.
Users can also change the menu display settings, through a file called grub located in the /etc/default folder.
The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated for the programmer; a variable size is easy for the programmer but complicated for the OS designer. A standard message has two parts: a header and a body.
The header stores the message type, destination id, source id, message length, and control information. The control information includes things such as what to do if buffer space runs out, sequence numbers, and priority. Messages are generally delivered in FIFO order.
Process Scheduling in Linux
Scheduling is the act of assigning resources to tasks. We will mainly focus on scheduling where the resource is one or more processors and the task is a thread or process that needs to be executed. Scheduling is carried out by a component called the scheduler.
The scheduler's goals are to:
Maximize throughput (number of tasks done per time unit)
Minimize wait time (amount of time passed since the process was
ready until it started to execute)
Minimize turnaround time (amount of time from when the process was ready until it finished executing)
Maximize fairness (distributing resources fairly for each task)
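These goal metrics can be computed for a finished schedule. The following Python sketch uses illustrative task records; the field names are assumptions for this example, not part of any real scheduler interface:

```python
# For each finished task we record: time it became ready, time it
# started executing, and time it finished. (Illustrative numbers.)
tasks = [
    {"ready": 0, "start": 0, "finish": 4},
    {"ready": 0, "start": 4, "finish": 6},
    {"ready": 2, "start": 6, "finish": 9},
]

total_time = max(t["finish"] for t in tasks) - min(t["ready"] for t in tasks)
throughput = len(tasks) / total_time                              # tasks per time unit
avg_wait = sum(t["start"] - t["ready"] for t in tasks) / len(tasks)
avg_turnaround = sum(t["finish"] - t["ready"] for t in tasks) / len(tasks)

print(throughput, avg_wait, avg_turnaround)
```

A fair scheduler trades these metrics off against each other: minimizing average wait favors short tasks, while fairness spreads processor time across all of them.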
Process Types in Linux
Linux has two types of processes
Real-time Processes
Conventional Processes
Real-time processes must obey response-time constraints regardless of the system's load. In other words, real-time processes are urgent and cannot be delayed, no matter the circumstances. An example of a real-time process in Linux is the migration process, which is responsible for distributing processes across CPU cores (a.k.a. load balancing).
Conventional processes don’t have strict response time constraints and
they can suffer from delays in case the system is ‘busy’. Each process type
has a different scheduling algorithm, and as long as there are ready-to-run
real-time processes they will run and make the conventional processes
wait.
Real-Time Scheduling
There are two scheduling policies for real-time scheduling: SCHED_RR and SCHED_FIFO. The policy affects how much runtime a process gets and how the runqueue operates. Ready-to-run processes are stored in a queue called the runqueue, and the scheduler picks processes from it according to the policy.
SCHED_FIFO
Under this policy the scheduler chooses a process based on arrival time (FIFO = First In, First Out). A process scheduled with SCHED_FIFO gives up the CPU only under a few circumstances:
CFS — Completely Fair Scheduler
Before talking about how the algorithm works, let's understand what data structure it uses. CFS uses a red-black tree, a self-balancing binary search tree, meaning that insertion, deletion, and look-up are performed in O(log N), where N is the number of processes.
The key in this tree is the virtual runtime of a process. New processes or
processes that got back to the ready state from waiting are inserted into the
tree with a key vruntime=min_vruntime. This is extremely important in
order to prevent starvation of older processes in the tree. Moving on to the algorithm itself: first, it sets a time window, sched_latency, within which it will try to run all N ready processes. Each process therefore gets a time slice equal to the window divided by the number of processes: Qᵢ = sched_latency/N.
When a process finishes its time-slice (Qᵢ), the algorithm picks the process
with the least virtual runtime in the tree to execute next.
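The picking rule and the Qᵢ = sched_latency/N time slice can be sketched in Python. Here a min-heap stands in for the kernel's red-black tree (both give cheap access to the smallest key), and the sched_latency value is an illustrative assumption, not the kernel's actual default:

```python
import heapq

SCHED_LATENCY = 48  # ms; an illustrative value, not the real kernel default

class CFS:
    """Toy CFS: a min-heap stands in for the kernel's red-black tree,
    keyed on virtual runtime (vruntime)."""
    def __init__(self):
        self.tree = []          # heap of (vruntime, name)
        self.min_vruntime = 0

    def enqueue(self, name):
        # New or newly woken tasks start at min_vruntime, so they
        # cannot starve tasks already in the tree.
        heapq.heappush(self.tree, (self.min_vruntime, name))

    def pick_next(self):
        # The leftmost node (smallest vruntime) runs next.
        vruntime, name = heapq.heappop(self.tree)
        timeslice = SCHED_LATENCY / (len(self.tree) + 1)  # Q_i = sched_latency / N
        self.min_vruntime = vruntime
        # After running, the task's vruntime grows by the time it used.
        heapq.heappush(self.tree, (vruntime + timeslice, name))
        return name, timeslice

cfs = CFS()
for p in ["A", "B", "C"]:
    cfs.enqueue(p)
name, q = cfs.pick_next()
print(name, q)  # "A" runs first, for sched_latency / 3 = 16.0 ms
```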
1.8 Bibliography
1.9 Unit End Exercise
2
MEMORY MANAGEMENT AND
VIRTUAL MEMORY IN LINUX
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Basic memory management
2.2.1. Monoprogramming without Swapping or Paging
2.2.2. Multiprogramming with Fixed Partitions
2.2.3. Relocation and Protection
2.3 Swapping
2.3.1. Memory Management with Bitmaps
2.3.2 Memory Management with Linked Lists
2.4 Virtual memory
2.4.1 Paging
2.4.2 Page Table
2.4.3 Translation look aside buffers
2.5 Page replacement algorithms
2.5.1 First In First Out (FIFO)
2.5.2 Least Recently Used (LRU)
2.5.3 Optimal Range
2.5.4 Last In First Out (LIFO)
2.5.5 Practice problems based on page replacement algorithm
2.6 Design issues for paging systems
2.6.1 The working set model
2.6.2 Local versus Global Allocation Policies
2.6.3 Page size
2.6.4 Virtual Memory Interface
2.7 Segmentation
2.7.1 Types of segmentation
2.7.2 Characteristics of segmentation
2.7.3 Need of segmentation
2.7.4 User’s view of a program
2.7.5 Segmentation Architecture
2.7.6 Segmentation Hardware
2.7.7 Advantages of Segmentation
2.7.8 Disadvantages of Segmentation
2.7.9 Example of Segmentation
2.8 Case Study: Linux memory management
2.9 Summary
2.10 List of References
2.11 Unit End Exercises
2.0 OBJECTIVES
2.1 INTRODUCTION
There are two types of memory management systems: those that swap
processes between main memory and disc during execution (swapping and
paging) and those that don't. Keep in mind that swapping and paging are
largely artifacts of a lack of main memory that can hold all apps and data
at the same time. If primary memory grows to the point that there is genuinely enough of it, arguments for one memory management technique or another may become obsolete.
On the other hand, as previously said, software appears to increase at the
same rate as memory, so effective memory management may be required
at all times. Many institutions in the 1980s used a 4 MB VAX to run a
timesharing system with dozens of (mostly satisfied) users. For a single-
user Windows XP machine, Microsoft now recommends at least 128 MB.
The trend toward multimedia places even greater demands on memory,
therefore good memory management will be required for at least the next
decade.
2.2.1. MONOPROGRAMMING WITHOUT SWAPPING OR
PAGING
The simplest memory management approach is to execute only one
application at a time, with that program and the operating system sharing
memory. Figure 2.1 depicts three variations on this topic. The operating
system may be in RAM (Random Access Memory) at the bottom of
memory, as shown in figure 2.1(a), or in ROM (Read-Only Memory) at
the top of memory, as shown in figure 2.1(b), or the device drivers may be
in ROM at the top of memory, with the rest of the system in RAM down
below, as shown in figure 2.1(c). The first model was once common on
mainframes and minicomputers, but it is now rarely seen. On some
palmtop computers and embedded systems, the second model is used.
Early on, the third model was employed. Personal computers (e.g., those
running MS-DOS), where the BIOS is the piece of the system stored in the
ROM (Basic Input Output System).
Only one process can execute at a time when the system is organized this way. As soon as the user types a command, the operating system copies the requested program from disc to memory and executes it. When the process finishes, the operating system displays a prompt character and waits for a new command. On receiving the command, it loads a new program into memory, overwriting the old one.
Figure 2.1: Three simple ways of organizing memory with one operating system and one user process. Other possibilities also exist.
2.2.2. MULTIPROGRAMMING WITH FIXED PARTITIONS
Monoprogramming is rarely implemented these days, with the exception
of very small embedded devices. Multiple processes can run at the same
time in most modern systems. When many processes are operating at the
same time, one can use the CPU while the other is waiting for I/O to
complete. As a result, multiprogramming improves CPU usage. Although
network servers can always execute several processes (for distinct clients)
at the same time, most client (i.e., desktop) systems now have this
capability as well.
The simplest method for achieving multiprogramming is to divide memory into n (possibly unequal) partitions. This partitioning can be done manually, for example, when the machine is booted.
When a job arrives, it can be placed in the input queue for the smallest partition large enough to hold it. Because the partitions in this scheme are fixed, any space in a partition not used by a job is wasted while that job runs. This system of fixed partitions and separate input queues is depicted in figure 2.2(a).
Figure 2.2: (a) Fixed memory partitions with a separate input queue for each partition.
(b) Fixed memory partitions with a single input queue for all partitions.
The disadvantage of sorting incoming jobs into separate queues becomes apparent when the queue for a large partition is empty but the queue for a small partition is full, as is the case for partitions 1 and 3 in figure 2.2(a): small jobs must wait to get into memory even though plenty of memory is available. An alternative arrangement is to maintain a single queue, as in figure 2.2(b). Whenever a partition becomes free, the job closest to the front of the queue that fits in it is loaded and executed.
Since wasting a large partition on a small job is undesirable, another strategy is to search the whole input queue whenever a partition becomes free and pick the largest job that fits. Note that this latter approach treats small jobs as unworthy of a whole partition, whereas usually it is desirable to give the best service, not the worst, to the smallest jobs (often interactive ones).
One way out is to have at least one small partition around, so small jobs can run without having to occupy a large partition. Another approach is to have a rule that a job eligible to run may not be skipped over more than k times: each time it is skipped it gets one point, and once it has accumulated k points it may not be skipped again.
2.2.3. RELOCATION AND PROTECTION
Multiprogramming introduces two key problems that must be solved: relocation and protection. Different jobs will run at different addresses, as shown in figure 2.2. When a program is linked, the linker needs to know at what address in memory the program will begin.
As an example, suppose the first instruction is a call to a procedure at absolute address 100 within the binary file produced by the linker. If this program is loaded in partition 1 (at address 100K), it will jump to absolute address 100, which is inside the operating system. What is actually needed is a call to 100K + 100. If the program is loaded into partition 2, the call must become 200K + 100, and so on. This is known as the relocation problem.
One option is to modify the instructions as the program is loaded into memory: 100K is added to each address in a program loaded into partition 1, 200K to each address in a program loaded into partition 2, and so on. To perform this kind of relocation during loading, the linker must include in the binary program a list or bitmap telling which program words are addresses to be relocated and which are opcodes, constants, or other items that must not be relocated.
Relocation during loading does not solve the protection problem. A malicious program can always construct a new instruction and jump to it. Because programs in this system use absolute memory addresses rather than addresses relative to a register, there is no way to stop a program from building an instruction that reads or writes any word in memory. In multiuser systems, it is highly undesirable to let processes read and write memory belonging to other users.
An alternative solution to both the relocation and protection problems is to equip the machine with two special hardware registers, the base and limit registers. When a process is scheduled, the base register is loaded with the start address of its partition, and the limit register with the length of the partition. Before being sent to memory, every memory address is automatically augmented with the contents of the
base register. Thus, if the base register holds 100K, a CALL 100 instruction becomes, in effect, a CALL 100K + 100 instruction, without the instruction itself being modified. Addresses are also checked against the limit register to make sure they do not attempt to address memory outside the current partition. The hardware protects the base and limit registers so that user programs cannot change them.
The necessity to do an addition and a comparison on each memory
reference is a disadvantage of this technique. Although comparisons are
quick, addition takes a long time due to carry propagation time unless
specific addition circuits are employed.
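The base/limit translation described above can be sketched as follows. The partition base (100K) matches the text's example; the partition length is an assumed illustrative value:

```python
BASE = 100 * 1024   # partition start: 100K, as in the text's example
LIMIT = 64 * 1024   # partition length; an illustrative value

def translate(virtual_addr):
    """Relocate a program-relative address and enforce protection,
    mimicking what the base and limit registers do in hardware."""
    if virtual_addr >= LIMIT:
        # The limit check: the address lies outside the partition.
        raise MemoryError("address outside the current partition")
    return BASE + virtual_addr  # hardware adds the base register

# CALL 100 in the program becomes a reference to 100K + 100
print(translate(100))  # 102500
```

Note that the instruction itself is never rewritten; the addition happens on every memory reference, which is exactly the overhead the text mentions.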
2.3 SWAPPING
Figure 2.3: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.
When swapping generates many memory holes, they can be merged into
one large one by shifting all the processes as far down as possible. This technique is known as memory compaction. It is frequently avoided because it consumes a significant amount of CPU time.
One thing worth mentioning is the amount of memory that should be
assigned to a process when it is created or swapped in. When processes are
established with a constant size, the allocation is straightforward: the
operating system allocates exactly what is required, no more and no less.
If, on the other hand, processes' data segments can grow by dynamically
allocating memory from a heap, as many programming languages allow, a
problem arises whenever a process attempts to grow. If there is a hole adjacent to the process, the hole can be allocated and the process allowed to grow into it. If, on the other hand, the growing process is adjacent to another process, it must either be moved to a hole large enough for it, or one or more processes must be swapped out to make room. If a process cannot grow in memory and the swap area on the disc is full, it has to wait or be killed.
If most processes are expected to grow as they run, allocating a little more
memory whenever a process is swapped in or moved is probably a good
idea to reduce the overhead involved with moving or swapping processes
that no longer fit in their allocated memory. When swapping processes to disc, however, only the memory actually in use should be swapped; swapping the extra memory as well would be wasteful. Figure 2.4(a) shows a memory configuration in which space for growth has been allocated to two processes.
Figure 2.5: (a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free.
(b) The corresponding bitmap.
(c) The same information as a list.
The size of the allocation unit is an important design issue. The smaller the allocation unit, the larger the bitmap. However, even with an allocation unit as small as 4 bytes, 32 bits of memory will require only 1 bit of the map. A memory of 32n bits uses n map bits, so the bitmap takes up only 1/33 of memory. If the allocation unit is chosen large, the bitmap will be smaller, but appreciable memory may be wasted in the last unit of the process if the process size is not an exact multiple of the allocation unit.
Because the size of a bitmap depends only on the size of memory and the
size of the allocation unit, it is a straightforward approach to keep track of
memory words in a fixed amount of memory. The main problem is that when a k-unit process is brought into memory, the memory manager must scan the bitmap for a run of k consecutive 0 bits. Searching a bitmap for a run of a given length is a slow operation, because the run may straddle word boundaries in the map; this is an argument against bitmaps.
2.3.2 MEMORY MANAGEMENT WITH LINKED LISTS
Maintaining a linked list of allocated and free memory segments, where a
segment is either a process or a gap between two processes, is another
technique to keep track of memory. Figure 2.5(c) depicts the memory of
figure 2.5(a) as a linked list of segments. Each list entry describes a hole
(H) or process (P), as well as the location at which it begins, the length,
and a pointer to the next entry.
In this example the segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward. A terminating process normally has two neighbors (except when it is at the very top or bottom of memory). These may be either processes or holes, leading to the four combinations shown in figure 2.6. In figure 2.6(a), updating the list requires replacing a P by an H. In figure 2.6(b) and figure 2.6(c), two entries are coalesced into one, and the list becomes one entry shorter.
Three entries are combined in figure 2.6(d), and two items are eliminated
from the list. Because the terminating process's process table slot will
usually point to the process's list entry, it may be more convenient to have
the list as a double-linked list rather than the single-linked list shown in
figure 2.5(c). This format makes it easy to locate the prior entry and
determine whether or not a merge is possible.
Figure 2.6: Four neighbor combinations for the terminating process, X.
Several strategies can be used to allocate memory for a newly generated
process (or an old process being swapped in from disc) when the processes
and holes are kept on a list sorted by address. We assume that the memory manager knows how much memory to allocate. The simplest algorithm is first fit: the memory manager scans along the list of segments until it finds a hole that is big enough. Except in the statistically unlikely case of an exact fit, the hole is then broken into two pieces, one for the process and one for the unused memory. First fit is a fast algorithm because it searches as little as possible.
Next fit is a minor variation of first fit. It works the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole. The next time it is called to find a hole, it starts searching the list from the place where it left off, instead of always from the beginning, as first fit does. Simulations by Bays (1977) show that next fit gives slightly worse performance than first fit. Another well-known algorithm is best fit. Best fit searches the entire list for the smallest hole that is adequate. Rather than breaking up a big hole that might be needed later, best fit tries to find a hole that is close to the actual size needed.
As an example of first fit and best fit, consider figure 2.5 again. If a block of size 2 is needed, first fit allocates the hole at 5, but best fit allocates the hole at 18. Best fit is slower than first fit because it must search the entire list every time it is called. It also results in more wasted memory than first fit or next fit, because it tends to fill up memory with tiny, useless holes; first fit generates larger holes on average.
To get around the problem of breaking up nearly exact matches into a process and a tiny hole, one could think of worst fit: always take the largest available hole, so that the hole broken off will be big enough to be useful. Simulation has shown that worst fit is not a very good idea either. All four algorithms can be sped up by maintaining separate lists for processes and holes, so that they devote their full attention to holes, not processes. Because a freed
segment must be deleted from the process list and entered into the hole
list, the additional complexity and slowdown when deallocating memory
is an unavoidable cost of this allocation speedup. If separate lists for
processes and holes are kept, the hole list can be sorted by size to find the
best fit faster. When best fit examines a list of holes from smallest to
largest, it recognizes that the hole that fits is the smallest one that will
perform the task, resulting in the best fit. As with the single list technique,
no additional searching is required. First fit and best fit are equally fast
with a hole list sorted by size, and next fit is pointless. A small optimization is possible when holes and processes are kept on separate lists: instead of having a separate data structure for the hole list, the holes themselves can be used, as in figure 2.5(c). The first word of each hole could hold the hole size, and the second word a pointer to the following entry. The three words and one bit (P/H) per entry required by the list of figure 2.5(c) are then no longer needed.
Many years ago people were first confronted with programs that were too big to fit in the available memory. The usual solution was to split the program into pieces, called overlays. Overlay 0 would start running first; when it was done, it would call another overlay. Some overlay systems were highly complex, allowing
many overlays to be in memory at the same time. The overlays were kept on disc and swapped in and out of memory dynamically by the operating system, as needed.
Although the system did the actual work of shifting overlays in and out,
the programmer was responsible for deciding how to partition the program
into sections. It took a long time and was tedious to break down enormous
programs into small, modular bits. It didn't take long for someone to come
up with a means to automate the entire process.
Virtual memory is the name given to the method that was invented
(Fotheringham, 1961). Virtual memory works on the premise that the total
size of the program, data, and stack may exceed the amount of physical
memory available. The operating system keeps the parts of the program
currently in use in main memory, and the rest on the disk.
2.4.1 PAGING
Paging is a memory management strategy that does away with the need for
contiguous physical memory allocation. This approach allows a process's
physical address space to be non-contiguous.
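Under paging, a logical address is split into a page number and an offset, and the page table maps the page number to a physical frame. The following minimal sketch illustrates this; the 4 KB page size and the page-table contents are assumed for illustration.

```python
# Illustrative sketch: translating a logical address under paging.

PAGE_SIZE = 4096  # assumed 4 KB pages

# page_table[p] = frame number holding logical page p (assumed mapping)
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE   # page number = high-order bits
    offset = logical_addr % PAGE_SIZE  # offset = low-order bits
    frame = page_table[page]           # page-table lookup
    return frame * PAGE_SIZE + offset  # physical address

# Logical address 4100 lies in page 1 at offset 4; page 1 maps to frame 2,
# so the physical address is 2*4096 + 4 = 8196.
print(translate(4100))
```

Because each page maps to a frame independently, the frames holding a process need not be contiguous in physical memory.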
To avoid creating such a big table, the outer page table can itself be
divided, which will result in a three-level page table:
• Locality of reference
The principle of locality of reference asserts that, rather than
loading the complete process into main memory, the OS can load only
those pages that are frequently accessed by the CPU, together with
the page-table entries corresponding to those pages.
In a translation lookaside buffer (TLB), tags and keys are used to
map data. When the requested entry is found in the TLB, it is called
a TLB hit; in that case the CPU directly accesses the actual location
in main memory. If the entry is not found in the TLB (a TLB miss),
the CPU must first read the page table in main memory and only then
access the actual frame in main memory. The effective access time in
the case of a TLB hit is therefore smaller than in the case of a TLB
miss, and it can be calculated as follows:
EAT = p (t + m) + (1 - p) (t + k.m + m)
Where,
‘p’ is the TLB hit rate,
‘t’ is the time taken to access the TLB,
‘m’ is the time taken to access main memory, and
‘k’ is the number of page-table levels (k = 1 if single-level
paging has been implemented).
We can deduce from the formula that
1] If the TLB hit rate is increased, the effective access time will
be reduced.
2] In the case of multilevel paging, the effective access time will
be increased.
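Both deductions can be checked numerically with the formula above. The sketch below uses assumed sample values for t and m.

```python
# Numeric check of the EAT formula: EAT = p(t + m) + (1 - p)(t + k*m + m).

def eat(p, t, m, k=1):
    hit = t + m              # TLB hit: TLB access + one memory access
    miss = t + k * m + m     # TLB miss: k page-table accesses, then the frame
    return p * hit + (1 - p) * miss

# Assumed sample values: t = 10 ns (TLB), m = 100 ns (memory).
print(eat(0.9, 10, 100))       # high hit rate
print(eat(0.5, 10, 100))       # lower hit rate -> larger EAT
print(eat(0.9, 10, 100, k=2))  # multilevel paging -> larger EAT
```

With these numbers, raising the hit rate from 0.5 to 0.9 cuts the EAT from 160 ns to 120 ns, while going from one to two page-table levels raises it from 120 ns to 130 ns.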
When a new page needs to be loaded into main memory, the page
replacement algorithm determines which page to remove (swap out). Page
replacement occurs when a requested page is not present in main memory
and the available space is insufficient to hold it.
When the page chosen for replacement has been paged out and is referenced
again, it must be read back in from disk, which requires waiting for I/O
completion. This determines the quality of the page replacement
algorithm: the less time spent waiting for page-ins, the better.
A page replacement algorithm tries to determine which pages should be
replaced in order to reduce the number of page misses. There are
numerous page replacement algorithms to choose from. These algorithms
are evaluated by running them on a particular memory reference string
and counting the number of page faults: the fewer the page faults, the
better the algorithm for that situation. When a process requests a page
and that page is found in main memory, it is called a page hit;
otherwise it is called a page miss or a page fault.
There are several page replacement algorithms in operating system as
indicated in the diagram 2.10 below
2.5.1 First In First Out (FIFO)
• EXAMPLE:
Consider the following page reference string of size 12: 1, 2, 3, 4, 5,
1, 3, 1, 6, 3, 2, 3 with frame size 4 (i.e. maximum 4 pages in a
frame).
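The fault count for such a string can be computed with a short FIFO simulation. This is a sketch written for this example; the frame count and reference string come from the text above.

```python
# Minimal FIFO page-replacement simulation: evict the oldest resident page.
from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()             # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:   # page fault
            faults += 1
            if len(frames) == n_frames:
                frames.popleft() # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
faults = fifo_faults(refs, 4)
print(faults, "faults,", len(refs) - faults, "hits")  # 9 faults, 3 hits
```

Tracing by hand gives the same result: the first four references fill the frames, and only the references to 3, 1, and 3 later in the string are hits.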
• Advantages
1] Simple and straightforward to implement.
2] Low overhead.
• Disadvantages
1] Poor performance.
2] It does not consider how often or how recently a page was used;
it simply replaces the oldest page.
3] Belady's Anomaly affects this algorithm (i.e. more page faults
when we increase the number of page frames).
2.5.2 Least Recently Used (LRU)
The Least Recently Used page replacement algorithm keeps track of how
recently each page has been used. It is based on the assumption that
pages that have been heavily used in the recent past are likely to be
heavily used again in the near future. When page replacement occurs in
LRU, the page that has not been used
for the longest period is replaced.
• EXAMPLE:
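Since the original example figure is not reproduced here, the sketch below runs LRU on the same reference string used for the FIFO example above (an assumption made for illustration).

```python
# Sketch of LRU: evict the page whose last use is longest ago.

def lru_faults(refs, n_frames):
    frames = []                  # least recently used page at index 0
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)  # hit: move page to the most-recent end
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(lru_faults(refs, 4))   # 8 faults on this string
```

On this string LRU incurs one fault fewer than FIFO, because it keeps the recently re-referenced pages 1 and 3 resident.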
• Advantages
1] Efficient
2] Not affected by Belady's Anomaly.
• Disadvantages
1] Implementation is difficult.
2] Expensive.
3] Hardware support is required.
2.5.3 Optimal Page Replacement
The Optimal Page Replacement algorithm is the best page replacement
algorithm, producing the fewest page faults. It is also known as OPT,
the clairvoyant replacement algorithm, or Belady's optimal page
replacement policy. This algorithm replaces the page that will not be
used for the longest period of time in the future, i.e., the page in
memory whose next reference lies farthest in the future.
This approach was proposed long ago but is difficult to implement,
since it requires future knowledge of program behavior. Using the page
reference information collected on a first run, however, it is possible
to implement optimal page replacement on a second run.
• EXAMPLE:
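As the original example figure is not reproduced, the sketch below runs OPT on the same reference string assumed in the previous examples, evicting the page whose next use lies farthest in the future.

```python
# Sketch of optimal (OPT) replacement using full knowledge of the string.

def opt_faults(refs, n_frames):
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue             # hit
        faults += 1
        if len(frames) == n_frames:
            # Next use of each resident page; never used again -> infinity.
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))  # evict farthest next use
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(opt_faults(refs, 4))   # 6 faults, the minimum possible for this string
```

This illustrates why OPT is a benchmark rather than a practical policy: the `next_use` lookup requires the entire future of the reference string.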
• Advantages
1] Implementation is simple.
2] The data structures are simple.
3] Extremely effective.
• Disadvantages
1] Future knowledge of the program is required.
2] Time-consuming.
2.5.4 Last In First Out (LIFO)
This method works on a principle comparable to FIFO, but the newest
page, i.e. the last to arrive in main memory, is the one replaced. The
algorithm uses a stack to keep track of all the pages.
In the LIFO technique of data processing, the last items entered are the
first to be removed; FIFO (First In, First Out) is the opposite, in
which items are removed in the order they were entered.
Imagine stacking a deck of cards by laying one card on top of the other,
starting at the bottom, to help understand LIFO. You begin removing
cards from the top of the deck after it has been entirely stacked. Because
the last cards to be placed on the deck are the first to be removed, this
procedure is an example of the LIFO approach.
When pulling data from an array or data buffer, computers sometimes
employ the LIFO approach. The LIFO method is used when a computer
needs to access the most recent data entered. The FIFO approach is
utilized when data must be retrieved in the order it was entered.
2.5.5 PRACTICE PROBLEMS BASED ON PAGE REPLACEMENT
ALGORITHMS
Problem-01: In main memory, a system uses three page frames to store
process pages. It employs a FIFO (First in, First Out) page replacement
policy. Assume that all of the page frames are blank at first. What is the
total number of page faults that will be generated while processing the
following page reference string-
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
Calculate the hit and miss ratios as well.
Solution:
Number of total references = 10
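The answer can be checked with a short FIFO simulation over the given string with 3 frames (a verification sketch, not part of the original solution).

```python
# Checking Problem-01: FIFO with 3 frames on 4,7,6,1,7,6,1,2,7,2.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:       # page fault
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
faults = fifo_faults(refs, 3)
hits = len(refs) - faults
print(faults, hits, hits / len(refs), faults / len(refs))
# 6 faults and 4 hits: hit ratio = 4/10 = 0.4, miss ratio = 6/10 = 0.6
```

The faults occur on the references to 4, 7, 6, 1 (filling and then evicting 4), and later 2 and 7; the remaining four references are hits.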
2.7 SEGMENTATION
A process is divided into segments: the sections into which a program is
split, which need not all be the same size. Segmentation is thus another
way of dividing the available memory, a memory management technique that
directly supports the user's view of memory. The logical address space
is a collection of segments, and each segment has a name and a length.
Segmentation divides memory much as paging does, but with a distinction:
paging divides memory into fixed-size pages, whereas segmentation
divides it into variable-size segments, which are then loaded into the
logical memory space. A program is essentially a collection of segments,
where a segment is a logical unit such as: main
program, procedure, function, method, object, local and global variables,
symbol table, common block, stack, arrays.
2.7.1 Types of segmentation
The following are the several forms of segmentation:
1] Virtual Memory Segmentation: In this sort of segmentation, each
process is divided into n divisions, but they are not segmented all at
once.
2] Simple Segmentation: With this type, each process is divided into n
divisions which are all loaded at run time, though not necessarily
contiguously (that is, they may be scattered in the memory).
2.7.2 Characteristics of segmentation
The following are some characteristics of the segmentation technique:
1] Variable-size partitioning is used in the segmentation scheme.
2] The partitions of secondary memory are conventionally called
segments.
3] The partition size depends on the length of the modules.
4] Secondary memory and main memory are thus partitioned into
unequal-sized sections using this technique.
2.7.3 Need of segmentation
One of the major drawbacks of memory management in the operating system
is the separation between the user's view of memory and the actual
physical memory. Paging is a technique that separates these two
memories: the user's view is mapped onto physical storage, and this
mapping allows logical memory to be kept distinct from physical memory.
The operating system may split a single function across multiple pages,
which may or may not be loaded into memory at the same time, and it is
unconcerned with the user's view of the process. The system's efficiency
suffers as a result of this strategy. Segmentation is a better technique
because it divides the process into logical chunks.
2.7.4 User’s view of a program
The user's perspective on segmentation is depicted in the figure 2.12
below
• Because two memory accesses are now necessary, the time taken to
fetch an instruction increases.
• As free space is broken down into smaller pieces when processes are
loaded into and removed from main memory, this strategy leads to
external fragmentation and hence considerable memory waste.
2.7.9 Example of Segmentation
The segmentation example is shown below, with five segments numbered
from 0 to 4. As illustrated, these portions will be stored in physical
memory. Each segment has its own entry in the segment table, which
provides the segment's beginning entry address in physical memory
(referred to as the base) as well as the segment's length (denoted as limit).
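The base/limit lookup described above can be sketched as follows. The segment-table values are assumed sample data (in the style of the classic textbook example); any offset at or beyond the limit is a segmentation violation.

```python
# Sketch of segment-table translation: physical address = base + offset,
# valid only while offset < limit.

segment_table = {
    0: {"base": 1400, "limit": 1000},
    1: {"base": 6300, "limit": 400},
    2: {"base": 4300, "limit": 400},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:       # trap: offset beyond segment length
        raise MemoryError("segmentation violation")
    return entry["base"] + offset      # physical address = base + offset

print(translate(2, 53))   # segment 2, offset 53 -> 4300 + 53 = 4353
```

The limit check is what makes the scheme safe: a process cannot address past the end of one of its own segments.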
• Virtual memory does more than merely extend the memory of your
machine. The memory management subsystem allows you to
manage your memory.
• The virtual and physical memory is separated into pages, which are
fixed length blocks of memory.
• The theoretical page table contains the following information for
each entry:
• Valid flag: this determines whether or not the page table entry is
valid.
• The physical page frame number.
• Access control information: this describes how the page may be used.
Is it possible to write to it? Does it contain executable code?
Linux Memory Management System Calls
2.9 SUMMARY
7] Write a note on the various techniques used for structuring the page
table.
8] Write a short note on translational look aside buffer.
9] Discuss various page replacement algorithms.
10] Explain First in First Out page replacement algorithm along with an
example.
11] Explain with an example the concept of Least Recently Used
Algorithm.
12] Discuss the optimal page replacement algorithm.
13] Write a note on Last in First Out page replacement algorithm.
14] Discuss the various design issues for paging system.
15] What is the need of Segmentation?
16] Explain the concept of segmentation and state its characteristics.
17] What are the types of segmentation? Explain in brief.
18] Write a note on segmentation hardware.
19] Explain segmentation architecture and state its advantages,
disadvantages along with an example.
20] Discuss the case study of Linux memory management.
3
INPUT/ OUTPUT IN LINUX
Unit Structure
3.0 Objective
3.1 History
3.2 Principles of I/O Hardware
3.3 File, Directories and Implementation
3.4 Security
3.5 Summary
3.6 Exercise
3.7 References
3.0 OBJECTIVE
3.1 HISTORY
A line discipline is an interpreter for the information from the terminal
device. The most common line discipline is the tty discipline, which glues
the terminal's data stream onto the standard input and output streams of a
user's running processes, allowing those processes to communicate
directly with the user's terminal. This job is complicated by the fact that
several such processes may be running simultaneously, and the tty line
discipline is responsible for attaching and detaching the terminal's input
and output from the various processes connected to it as those processes
are suspended or awakened by the user.
Other line disciplines also are implemented that have nothing to do with
I/O to a user process. The PPP and SLIP networking protocols are ways of
encoding a networking connection over a terminal device such as a serial
line. These protocols are implemented under Linux as drivers that at one
end appear to the terminal system as line disciplines and at the other end
appear to the networking system as network-device drivers. After one of
these line disciplines has been enabled on a terminal device, any data
appearing on that terminal will be routed directly to the appropriate
network-device driver.
I/O Hardware:-
One of the important jobs of an Operating System is to manage various
I/O devices including mouse, keyboards, touch pad, disk drives, display
adapters, USB devices, Bit-mapped screen, LED, Analog-to-digital
converter, On/off switch, network connections, audio I/O, printers etc.
An I/O system is required to take an application I/O request and send it to
the physical device, then take whatever response comes back from the
device and send it to the application. I/O devices can be divided into two
categories −
Block devices − A block device is one with which the driver
communicates by sending entire blocks of data. For example, Hard disks,
USB cameras, Disk-On-Key etc.
Character devices − A character device is one with which the driver
communicates by sending and receiving single characters (bytes, octets).
For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to
handle a particular device. Operating System takes help from device
drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a
device driver. I/O units (Keyboard, mouse, printer, etc.) typically consist
of a mechanical component and an electronic component where electronic
component is called the device controller.
There is always a device controller and a device driver for each device to
communicate with the Operating Systems. A device controller may be
able to handle multiple devices. As an interface its main task is to convert
serial bit stream to block of bytes, perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket,
and the socket is connected to a device controller. Following is a model
for connecting the CPU, memory, controllers, and I/O devices, where the
CPU and device controllers all use a common bus for communication.
While using memory-mapped I/O, the OS allocates a buffer in memory and
informs the I/O device to use that buffer to send data to the CPU. The
I/O device operates asynchronously with the CPU and interrupts the CPU
when finished.
The advantage to this method is that every instruction which can access
memory can be used to manipulate an I/O device. Memory mapped IO is
used for most high-speed I/O devices like disks, communication
interfaces.
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU
after each byte is transferred. If a fast device such as a disk generated an
interrupt for each byte, the operating system would spend most of its time
handling these interrupts. So a typical computer uses direct memory
access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the
authority to read from or write to memory without CPU involvement. The
DMA module itself controls the exchange of data between main memory and
the I/O device. The CPU is involved only at the beginning and end of the
transfer, and is interrupted only after the entire block has been
transferred.
Direct Memory Access needs a special hardware called DMA controller
(DMAC) that manages the data transfers and arbitrates access to the
system bus. The controllers are programmed with source and destination
pointers (where to read/write the data), counters to track the number of
transferred bytes, and settings, which includes I/O and memory types,
interrupts and states for the CPU cycles.
Polling vs Interrupts I/O
A computer must have a way of detecting the arrival of any type of input.
There are two ways that this can happen, known as polling and interrupts.
Both of these techniques allow the processor to deal with events that can
happen at any time and that are not related to the process it is currently
running.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the
processor. The process of periodically checking status of the device to see
if it is time for the next I/O operation, is called polling. The I/O device
simply puts the information in a Status register, and the processor must
come and get the information.
Most of the time devices will not require attention, and when one does
it will have to wait until it is next interrogated by the polling
program. This is an inefficient method, and much of the processor's
time is wasted on unnecessary polls.
Compare this method to a teacher continually asking every student in a
class, one after another, if they need help. Obviously the more efficient
method would be for a student to inform the teacher whenever they require
assistance.
Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method.
An interrupt is a signal to the microprocessor from a device that requires
attention.
A device controller puts an interrupt signal on the bus when it needs
the CPU's attention. When the CPU receives an interrupt, it saves its
current state and invokes the appropriate interrupt handler using the
interrupt vector (the addresses of OS routines that handle various
events). When the interrupting device has been dealt with, the CPU
continues with its original task as if it had never been interrupted.
I/O software is often organized in the following layers −
User Level Libraries − This provides simple interface to the user program
to perform input and output. For example, stdio is a library provided by C
and C++ programming languages.
Kernel Level Modules − This provides device driver to interact with the
device controller and device independent I/O modules used by the device
drivers.
Hardware − This layer includes the actual hardware and the hardware
controllers that interact with the device drivers and make the hardware
work.
A key concept in the design of I/O software is that it should be device
independent where it should be possible to write programs that can access
any I/O device without having to specify the device in advance. For
example, a program that reads a file as input should be able to read a file
on a floppy disk, on a hard disk, or on a CD-ROM, without having to
modify the program for each different device.
Device Drivers
Device drivers are software modules that can be plugged into an OS to
handle a particular device; the operating system takes help from device
drivers to handle all I/O devices. Device drivers encapsulate
device-dependent code behind a standard interface, so that the
device-specific register reads and writes are confined to the driver. A
device driver is generally written by the device's manufacturer and
delivered along with the device on a CD-ROM.
A device driver performs the following jobs −
To accept requests from the device-independent software above it.
To interact with the device controller to perform the required I/O and
handle any errors.
To make sure that the request is executed successfully.
How a device driver handles a request is as follows: Suppose a request
comes to read a block N. If the driver is idle at the time a request arrives, it
starts carrying out the request immediately. Otherwise, if the driver is
already busy with some other request, it places the new request in the
queue of pending requests.
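The idle-or-queue policy just described can be sketched with a hypothetical driver object (not a real kernel API; the method names and messages are invented for illustration):

```python
# Sketch of driver request handling: serve immediately if idle, else queue.
from collections import deque

class Driver:
    def __init__(self):
        self.busy = False
        self.pending = deque()      # queue of pending requests

    def submit(self, block):
        if not self.busy:
            self.busy = True        # idle: carry out the request immediately
            return f"reading block {block}"
        self.pending.append(block)  # busy: place request in the pending queue
        return "queued"

    def complete(self):
        """Called when the current transfer finishes."""
        if self.pending:
            return f"reading block {self.pending.popleft()}"
        self.busy = False
        return "idle"

d = Driver()
print(d.submit(7))    # idle -> served immediately
print(d.submit(9))    # busy -> queued
print(d.complete())   # finishes block 7, starts queued block 9
```

A real driver would issue controller commands instead of returning strings, but the control flow is the same.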
Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is
a piece of software or more specifically a callback function in an operating
system or more specifically in a device driver, whose execution is
triggered by the reception of an interrupt.
When the interrupt happens, the interrupt procedure does whatever it has
to in order to handle the interrupt, updates data structures, and wakes
up the process that was waiting for the interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a
specific interrupt handling routine/function from a small set. In most
architectures, this address is an offset stored in a table called the interrupt
vector table. This vector contains the memory addresses of specialized
interrupt handlers.
Device-Independent I/O Software
The basic function of the device-independent software is to perform the
I/O functions that are common to all devices and to provide a uniform
interface to the user-level software. Though it is difficult to write
completely device independent software but we can write some modules
which are common among all the devices. Following is a list of functions
of device-independent I/O Software −
Kernel I/O Subsystem
The Kernel I/O Subsystem is responsible for providing many services
related to I/O. Following are some of the services provided.
Scheduling − Kernel schedules a set of I/O requests to determine a good
order in which to execute them. When an application issues a blocking I/O
system call, the request is placed on the queue for that device. The Kernel
I/O scheduler rearranges the order of the queue to improve the overall
system efficiency and the average response time experienced by the
applications.
Buffering − Kernel I/O Subsystem maintains a memory area known as
buffer that stores data while they are transferred between two devices or
between a device with an application operation. Buffering is done to cope
with a speed mismatch between the producer and consumer of a data
stream or to adapt between devices that have different data transfer sizes.
Caching − Kernel maintains cache memory which is region of fast
memory that holds copies of data. Access to the cached copy is more
efficient than access to the original.
Spooling and Device Reservation − A spool is a buffer that holds output
for a device, such as a printer, that cannot accept interleaved data streams.
The spooling system copies the queued spool files to the printer one at a
time. In some operating systems, spooling is managed by a system
daemon process. In other operating systems, it is handled by an in-kernel
thread.
Error Handling − An operating system that uses protected memory can
guard against many kinds of hardware and application errors.
An object file is a sequence of bytes organized into blocks that are
understandable by the machine.
When an operating system defines different file structures, it must also
contain the code to support them. Unix and MS-DOS support a minimum
number of file structures.
File Type
File type refers to the ability of the operating system to distinguish
different types of files, such as text files, source files, binary
files, etc. Many operating systems support many types of files.
Operating systems like MS-DOS and UNIX have the following types of
files −
Ordinary files
These are the files that contain user information. These may have text,
databases or executable program. The user can apply various operations
on such files like add, modify, delete or even remove the entire file.
Directory files
These files contain list of file names and other information related to these
files.
Special files
These files are also known as device files. These files represent physical
device like disks, terminals, printers, networks, tape drive etc.
These files are of two types −
Character special files − data is handled character by character as in case
of terminals or printers.
Block special files − data is handled in blocks as in the case of disks and
tapes.
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file
may be accessed. There are several ways to access files −
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access
A sequential access is that in which the records are accessed in some
sequence, i.e., the information in the file is processed in order, one record
after the other. This access method is the most primitive one. Example:
Compilers usually access files in this fashion.
Direct/Random access
Random access file organization provides access to records directly.
Each record has its own address in the file, with the help of which it
can be directly accessed for reading or writing. The records need not
be in any sequence within the file, and they need not be in adjacent
locations on the storage medium.
Indexed sequential access
This mechanism is built on top of sequential access. An index is
created for each file, containing pointers to the various blocks. The
index is searched sequentially, and its pointer is then used to access
the file directly.
Space Allocation
Files are allocated disk space by the operating system. Operating
systems deploy the following three main ways to allocate disk space to
files:
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
Contiguous Allocation
• Each file occupies a contiguous address space on disk.
• The assigned disk addresses are in linear order.
• Easy to implement.
• External fragmentation is a major issue with this type of allocation
technique.
Linked Allocation
• Each file carries a list of links to disk blocks.
• The directory contains a link / pointer to the first block of a file.
• No external fragmentation.
• Effective for sequential-access files.
• Inefficient in the case of direct-access files.
Indexed Allocation
• Provides solutions to the problems of contiguous and linked
allocation.
• An index block is created holding all the pointers to a file's blocks.
• Each file has its own index block, which stores the addresses of the
disk space occupied by the file.
• The directory contains the addresses of the index blocks of files.
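Linked allocation's chain-following behaviour, which makes it effective for sequential access but inefficient for direct access, can be sketched as follows (block numbers are assumed; the structure is a simplified FAT-like table):

```python
# Sketch of linked allocation: each disk block stores a pointer to the
# next block of the file; the directory holds only the first block number.

END = -1                                               # end-of-file marker
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: END}    # block -> next block

def file_blocks(first_block):
    """Follow the chain from the directory's pointer to the first block."""
    blocks, b = [], first_block
    while b != END:
        blocks.append(b)
        b = next_block[b]
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Reaching the i-th block requires following i pointers, which is why direct access is inefficient under this scheme.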
3.4 SECURITY
Deadlock Detection
A deadlock occurrence can be detected by the resource scheduler. A
resource scheduler helps OS to keep track of all the resources which are
allocated to different processes. So, when a deadlock is detected, it can be
resolved using the below-given methods:
Deadlock Prevention:
It is important to prevent a deadlock before it can occur. The system
checks every transaction before it is executed to make sure it does not
lead to a deadlock situation: if there is even a small possibility that
an operation may lead to a deadlock in the future, the process is never
allowed to execute that operation.
Deadlock prevention is a set of methods for ensuring that at least one
of the necessary conditions cannot hold.
No preemptive action:
No Preemption – A resource can be released only voluntarily by the
process holding it, after that process has finished its task. If a
process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all the resources it is
holding are released. The preempted resources are added to the list of
resources for which the process is waiting, and the process is
restarted only when it can regain its old resources as well as the new
ones it is requesting. If a process requests a resource that is
available, it is allocated to the requesting process; if it is held by
another process that is itself waiting for additional resources, we
preempt it and give it to the requesting process.
Mutual Exclusion:
Mutex is short for Mutual Exclusion. A mutex is a special type of
binary semaphore used for controlling access to a shared resource. It
includes a priority inheritance mechanism to avoid extended priority
inversion problems, allowing higher priority tasks to be kept in the
blocked state for the shortest time possible. Sharable resources such
as read-only files never lead to deadlocks, but resources like printers
and tape drives need exclusive access by a single process.
Hold and Wait:
In this condition, processes must be stopped from holding single or
multiple resources while simultaneously waiting for one or more others.
Circular Wait:
It imposes a total ordering of all resource types. Circular wait also requires
that every process request resources in increasing order of enumeration.
Deadlock Avoidance
It is better to avoid a deadlock than to take action after the deadlock
has occurred. Avoidance needs additional information about how
resources will be used. In the simplest and most useful model, each
process declares the maximum number of resources of each type that it
may need.
Avoidance Algorithms
A deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a
circular-wait situation.
• Single instance of a resource type: use a resource-allocation graph;
a cycle is both necessary and sufficient for deadlock.
• Multiple instances of a resource type: use the banker's algorithm; a
cycle is necessary but not sufficient for deadlock.
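The safety test at the heart of the banker's algorithm can be sketched as follows. The resource counts below are the classic textbook sample values, assumed here for illustration.

```python
# Sketch of the banker's algorithm safety test: the state is safe if the
# processes can all finish in some order, each releasing its allocation.

def is_safe(available, allocation, need):
    work = available[:]                       # resources currently free
    finish = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)                # safe iff every process finished

# Classic example: 3 resource types, 5 processes (assumed sample values).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True: a safe sequence exists
```

A resource request is granted only if the state that would result from granting it still passes this safety test.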
Advantages of the Deadlock Prevention method
Here are the pros/benefits of using the deadlock prevention method:
• It works well for processes that perform a single burst of activity.
• No preemption is needed.
• It is convenient when applied to resources whose state can be saved
and restored easily.
• It is feasible to enforce via compile-time checks.
• It needs no run-time computation, since the problem is solved in
system design.
Disadvantages of the Deadlock Prevention method
Here are the cons/drawbacks of using the deadlock prevention method:
Disks
The ideal storage device is
1. Fast
2. Big (in capacity)
3. Cheap
4. Impossible
Disks are big and cheap, but slow.
Disk Hardware
Opening up a real disk reveals the following components:
• Platter
• Surface
• Head
• Track
• Sector
• Cylinder
• Seek time
• Rotational latency
• Transfer rate
Overlapping I/O operations is important. Many controllers can do
overlapped seeks, i.e. issue a seek to one disk while another is already
seeking.
Modern disks cheat and do not have the same number of sectors on outer
cylinders as on inner ones. However, the disks have electronics and
software (firmware) that hide the cheat and give the illusion of the
same number of sectors on all cylinders.
(Unofficial) Despite what Tanenbaum says later, it is not true that when
one head is reading from cylinder C, all the heads can read from
cylinder C with no penalty. It is, however, true that the penalty is
very small.
Choice of block size
• We discussed this before when studying page size.
• Current commodity disk characteristics (not for laptops) result in
about 15ms to transfer the first byte and 10K bytes per ms for
subsequent bytes (if contiguous).
• Rotation rate is 5400, 7200, or 10,000 RPM (15K just now
available).
• Recall that 6000 RPM is 100 rev/sec or one rev per 10ms. So
half a rev (the average time to rotate to a given point) is
5ms.
• Transfer rates around 10MB/sec = 10KB/ms.
• Seek time around 10ms.
• This favors large blocks, 100KB or more.
• But the internal fragmentation would be severe since many files are
small.
• Multiple block sizes have been tried as have techniques to try to
have consecutive blocks of a given file near each other.
• Typical block sizes are 4KB-8KB.
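The argument for large blocks in the bullets above can be made concrete with a small calculation. The sketch below uses the figures quoted in the text (about 15 ms to reach the first byte, then 10 KB per ms for contiguous bytes); the class and method names are invented for illustration:

```java
// Effective bandwidth as a function of block size, using the text's figures:
// 15 ms of seek + rotational overhead, then 10 KB/ms contiguous transfer.
public class BlockSizeModel {

    // Time in ms to read one block of the given size (in KB).
    static double readTimeMs(double blockKB) {
        return 15.0 + blockKB / 10.0;
    }

    // Effective bandwidth in KB/ms (block size divided by total time).
    static double effectiveRate(double blockKB) {
        return blockKB / readTimeMs(blockKB);
    }

    public static void main(String[] args) {
        for (double kb : new double[]{1, 4, 8, 100, 1000}) {
            System.out.printf("%6.0f KB block: %.2f KB/ms effective%n",
                    kb, effectiveRate(kb));
        }
    }
}
```

Running this shows that a 4 KB block achieves only a small fraction of the raw 10 KB/ms rate, while a 100 KB or larger block amortizes the 15 ms overhead well, which is exactly why large blocks are favored despite the internal fragmentation they cause for small files.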
RAID (Redundant Array of Inexpensive Disks) (Skipped)
• The name RAID is from Berkeley.
• IBM changed the name to Redundant Array of Independent Disks. I
wonder why?
• A simple form is mirroring, where two disks contain the same data.
• Another simple form is striping (interleaving) where consecutive
blocks are spread across multiple disks. This helps bandwidth, but is
not redundant. Thus it shouldn't be called RAID, but it sometimes is.
• One of the normal RAID methods is to have N (say 4) data disks and
one parity disk. Data is striped across the data disks and the bitwise
parity of these sectors is written in the corresponding sector of the
parity disk.
• On a read if the block is bad (e.g., if the entire disk is bad or even
missing), the system automatically reads the other blocks in the
stripe and the parity block in the stripe. Then the missing block is
just the bitwise exclusive or of all these blocks.
• For reads this is very good. The failure-free case has no penalty
(beyond the space overhead of the parity disk). The error case
requires N+1 (say 5) reads.
• A serious concern is the small write problem. Writing a single sector
requires 4 I/Os: read the old data sector, read the old parity,
compute the new parity (from the old data, the old parity, and the
new data), then write the new parity and the new data sector. Hence
one sector I/O becomes 4, which is a 300% penalty.
• Writing a full stripe is not bad. Compute the parity of the N (say 4)
data sectors to be written and then write the data sectors and the
parity sector. Thus 4 sector I/Os become 5, which is only a 25%
penalty and is smaller for larger N, i.e., larger stripes.
• A variation is to rotate the parity. That is, for some stripes disk 1 has
the parity, for others disk 2, etc. The purpose is to not have a single
parity disk since that disk is needed for all small writes and could
become a point of contention.
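The parity scheme described above can be sketched in a few lines. This is an illustrative model only (the class and the tiny three-byte "blocks" are invented); it shows why a lost block is recoverable as the XOR of the surviving data blocks and the parity block:

```java
// Sketch of RAID parity: the parity block is the bitwise XOR of the data
// blocks in a stripe, so any single missing block can be reconstructed.
public class RaidParity {

    // Bitwise XOR of two equal-length blocks.
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    // Parity of a whole stripe of data blocks.
    static byte[] parity(byte[][] stripe) {
        byte[] p = new byte[stripe[0].length];
        for (byte[] block : stripe) p = xor(p, block);
        return p;
    }

    public static void main(String[] args) {
        byte[][] stripe = {{1,2,3},{4,5,6},{7,8,9},{10,11,12}};
        byte[] p = parity(stripe);
        // Pretend disk 2 failed; rebuild its block from the others plus parity.
        byte[] rebuilt = xor(xor(xor(stripe[0], stripe[1]), stripe[3]), p);
        System.out.println(java.util.Arrays.equals(rebuilt, stripe[2])); // prints "true"
    }
}
```

The same XOR identity explains the small-write cost: new parity = old parity XOR old data XOR new data, which requires reading both old values before writing.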
Disk Arm Scheduling Algorithms
There are three components to disk response time: seek, rotational latency,
and transfer time. Disk arm scheduling is concerned with minimizing seek
time by reordering the requests.
These algorithms are relevant only if there are several I/O requests
pending. For many PCs this is not the case. For most commercial
applications, I/O is crucial and there are often many requests pending.
1. FCFS (First Come First Served): Simple but has long delays.
2. Pick: Same as FCFS but pick up requests for cylinders that are
passed on the way to the next FCFS request.
3. SSTF or SSF (Shortest Seek (Time) First): Greedy algorithm. Can
starve requests for outer cylinders and almost always favors middle
requests.
4. Scan (Look, Elevator): The method used by an old fashioned
jukebox (remember ``Happy Days'') and by elevators. The disk arm
proceeds in one direction picking up all requests until there are no
more requests in this direction at which point it goes back the other
direction. This favors requests in the middle, but can't starve any
requests.
5. C-Scan (C-look, Circular Scan/Look): Similar to Scan, but services
requests only when moving in one direction. When going in the
other direction, go directly to the furthest-away request. This doesn't
favor any spot on the disk. Indeed, it treats the cylinders as though
they were a clock, i.e., after the highest-numbered cylinder comes
cylinder 0.
6. N-step Scan: This is what the natural implementation of Scan gives.
• While the disk is servicing a Scan direction, the controller
gathers up new requests and sorts them.
• At the end of the current sweep, the new list becomes the next
sweep.
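The difference between these policies shows up in total seek distance. The sketch below compares FCFS, SSTF, and an upward-first SCAN variant on a made-up request queue (the cylinder numbers and starting head position are invented for illustration):

```java
// Compare total seek distance (in cylinders) for three disk-arm policies.
import java.util.*;

public class DiskScheduling {

    // First Come First Served: service requests in arrival order.
    static int fcfs(int head, List<Integer> reqs) {
        int total = 0;
        for (int c : reqs) { total += Math.abs(c - head); head = c; }
        return total;
    }

    // Shortest Seek Time First: greedily pick the nearest pending request.
    static int sstf(int head, List<Integer> reqs) {
        List<Integer> pending = new ArrayList<>(reqs);
        int total = 0;
        while (!pending.isEmpty()) {
            final int h = head;
            int next = Collections.min(pending,
                    Comparator.comparingInt(c -> Math.abs(c - h)));
            total += Math.abs(next - head);
            head = next;
            pending.remove(Integer.valueOf(next));
        }
        return total;
    }

    // Scan (elevator): sweep upward first, then reverse for the rest.
    static int scan(int head, List<Integer> reqs) {
        List<Integer> up = new ArrayList<>(), down = new ArrayList<>();
        for (int c : reqs) (c >= head ? up : down).add(c);
        Collections.sort(up);
        down.sort(Collections.reverseOrder());
        int total = 0, pos = head;
        for (int c : up)   { total += Math.abs(c - pos); pos = c; }
        for (int c : down) { total += Math.abs(c - pos); pos = c; }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> reqs = Arrays.asList(98, 183, 37, 122, 14, 124, 65, 67);
        System.out.println("FCFS: " + fcfs(53, reqs));
        System.out.println("SSTF: " + sstf(53, reqs));
        System.out.println("SCAN: " + scan(53, reqs));
    }
}
```

On this queue SSTF and SCAN both travel far less than FCFS, which is the point of reordering; SSTF's greediness is what allows the starvation mentioned above, while SCAN bounds the wait for every request.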
Minimizing Rotational Latency
Use Scan based on sector numbers rather than cylinder numbers. For
rotational latency, Scan is the same as C-Scan. Why?
Ans: Because the disk only rotates in one direction.
Security Mechanism
OS security mechanisms:
Memory Protection:
One of the important aspects of operating system security is memory
protection. Memory provides a powerful indirect way for an attacker to
circumvent security mechanisms, since every piece of information accessed
by any program must reside in memory at some point in time, and
hence may potentially be accessed in the absence of memory protection
mechanisms.
Memory protection is a way of controlling memory usage on a computer,
and is core to virtually every operating system. Its main purpose is to
prevent a process running on an operating system from accessing memory
that belongs to other processes or is used by the OS kernel. This prevents
a bug within one process from affecting other processes, and also prevents
malicious software from gaining unauthorized access to the system. For
example, suppose that process A is permitted access to a file F, while
process B is not. Process B can bypass this policy by attempting to read
F's content from A's memory immediately after A reads F. Alternatively,
B may attempt to modify the access control policy stored in the OS's
memory so that the OS thinks that B is permitted access to this file.
How to protect memory of one process from another?
The virtual memory mechanism supported on most OSes ensures that the
memory regions of different processes are logically disjoint. Virtual
addresses, which are logical addresses, are transformed into physical
memory addresses using address translation hardware. To speed up
translation, various caching mechanisms are utilized. First, most L1
processor caches are based on virtual addresses, so cache accesses don't
need address translation. Next, the paging hardware uses cache-like
mechanisms (TLBs) to avoid walking the page tables on every virtual
memory access.
In order to secure the virtual address translation mechanism, it is important
to ensure that processes cannot tamper with the address translation
mechanisms. To ensure this, processors have to provide some protection
primitives. Typically, this is done using the notion of privileged execution
modes.
Specifically, 2 modes of CPU execution are introduced: privileged and
unprivileged. (Processors may support multiple levels of privileges, but
today's OSes use only two levels.) Certain instructions, such as those
relating to I/O, DMA, interrupt processing, and page table translation are
permitted only in the privileged mode.
OSes rely on the protection mechanism provided by the processor as
follows. All user processes (including root processes) execute in
unprivileged mode, while the OS kernel executes in privileged mode.
Obviously, user-level processes need to access OS kernel functionality
from time to time. Typically, this is done using system calls, which
represent a call from unprivileged code to privileged code. Uncontrolled
calls across the privilege boundary can defeat security mechanisms, e.g., it
should not be possible for arbitrary user code to call a kernel function that
changes the page tables. For this reason, privilege transitions need to be
carefully controlled. Usually, “software trap” instructions are used to
effect transition from low to high privilege mode. (Naturally, no protection
is needed for transitioning from privileged to unprivileged mode.) On
Linux, software interrupt 0x80 is used for this purpose. When this
instruction is invoked, the processor starts executing the interrupt handler
code for this interrupt in the privileged mode. (Note that the changes to
interrupt handler should itself be permitted only in the privileged mode, or
else this mechanism could be subverted.) This code should perform
appropriate checks to ensure that the call is legitimate, and then carry it
out. This basically means that the parameters to system calls have to be
thoroughly checked.
UNIX Processes and Security:
Processes have different types of IDs. These IDs are inherited by a child
process from its parent, except in the case of setuid processes – in their
case, the effective user ID is set to be the same as the owner of the file
that is executed.
1. User ID:
a) Effective User ID (EUID)
Effective user id is used for all permission checking
operations.
b) Real User ID (RUID)
Real user ID is the one that represents the “real user” that
launched the process.
c) Saved User ID (SUID)
Saved userid stores the value of userid before a setuid
operation.
A privileged process (i.e., a process with euid 0) can change these three
UIDs to arbitrary values, while an unprivileged process can change them
only to one of the values of these three UIDs. This constraint prevents
unprivileged processes from assuming the UID of an arbitrary user, but
allows limited changes. For instance, an FTP server initially starts off with euid =
ruid = 0. When a user U logs in, the euid and ruid are set to U, but the
saved uid remains as root. This allows the FTP server to later change its
euid to 0 for the purpose of binding to a low-numbered port. (The original
FTP protocol requires this binding for each data connection.)
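The UID-changing rule just described can be captured in a toy model. This is not a real API (the class, fields, and method are invented): a privileged process (euid 0) may set its effective UID arbitrarily, while an unprivileged one may only set it to one of its current real, effective, or saved values:

```java
// Toy model of the UNIX rule for changing the effective uid: allowed for
// euid 0, or when the target matches one of real/effective/saved uids.
public class UidModel {
    int ruid, euid, suid;

    UidModel(int r, int e, int s) { ruid = r; euid = e; suid = s; }

    // Attempt to change the effective uid; returns whether it was permitted.
    boolean setEuid(int target) {
        if (euid == 0 || target == ruid || target == euid || target == suid) {
            euid = target;
            return true;
        }
        return false; // denied for unprivileged processes
    }

    public static void main(String[] args) {
        // FTP-server style scenario: after login as user 1000, the saved
        // uid is still 0, so the server can regain root privileges later.
        UidModel server = new UidModel(1000, 1000, 0);
        System.out.println(server.setEuid(0));   // prints "true"

        // A plain user process has no saved root uid and cannot become root.
        UidModel plain = new UidModel(1000, 1000, 1000);
        System.out.println(plain.setEuid(0));    // prints "false"
    }
}
```

The saved UID is exactly what makes the FTP example in the text work: dropping privilege is reversible only because the saved value keeps the old identity around.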
2. Group ID:
Group identifier (GID) is used to represent a specific group. As a single
user can belong to multiple groups, a single process can have multiple
group IDs. These are organized as a "primary" group ID and a list of
supplementary GIDs. The primary GID has three flavors (real, effective,
and saved), analogous to UIDs. All objects created by a process will have
the effective GID of the process. Supplementary GIDs are used only for
permission checking.
3. Group Passwords :
If a user is not listed as belonging to a group G, and there is a password
for G, the user can change her group by providing this group password.
Inter-processes communication:
A process can influence the behavior of another process by
communicating with it. From a security point of view, this is not an issue
if the two processes belong to the same user. (Any damage that can be
effected by the second process can be effected by the first process as well,
so there is no incentive for the first process to attack the second --- this is
true on standard UNIX systems, where application-specific access control
policies (say, DTE) aren't used.) If not, we need to be careful. We need to
pay particular attention to situations where an unprivileged process
communicates with a privileged process in ways that the privileged
process did not expect.
1. Parent to child communication
If the parent has a child with higher privilege, e.g., the child is a setuid
program, then some mechanism is needed to prevent the parent from
taking advantage of the privileged child. In particular, such a child
program should expect to receive parameters from an unprivileged process
and validate them. But it may not expect subversion attacks; for example,
the parent may modify the path (specified in an environment variable)
used to search for libraries. To prevent this, the loader typically ignores
these path specifications for setuid processes. Parents are still permitted
to send signals to children, even if their UIDs are different.
2. Signals are a mechanism for the OS to notify user-level processes about
exceptions, e.g., invalid memory access. Their semantics is similar to that
of interrupts ---- processes typically install a "signal handler," which can
be different for different signals. (UNIX defines about 30 such signals.)
When a signal occurs, process execution is interrupted, and control is
transferred to the handler for that signal. Once the handler finishes
execution, the execution of application code resumes at the point where it
was interrupted.
Signals can also be used for communication: one process can send a signal
to another process using the “kill” system call. Due to security
considerations, this is permitted only when the userid of the process
sending the signal is zero, or equals that of the receiving process.
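The permission rule for sending signals can be stated as a one-line predicate. The sketch below is a toy model, not the real kill(2) system call (the class and method names are invented):

```java
// Toy model of the signal-delivery permission check described above:
// allowed when the sender is root (uid 0) or the uids match.
public class KillCheck {

    static boolean mayKill(int senderUid, int receiverUid) {
        return senderUid == 0 || senderUid == receiverUid;
    }

    public static void main(String[] args) {
        System.out.println(mayKill(0, 1000));    // root may signal anyone: true
        System.out.println(mayKill(1000, 1000)); // same user: true
        System.out.println(mayKill(1000, 2000)); // different unprivileged user: false
    }
}
```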
3. Debugging and Tracing
OSes need to provide some mechanisms for debugging. On Linux, this
takes the form of the ptrace mechanism. It allows a debugger to read or
write arbitrary locations in the memory of a debugged process. It can also
read or set the values of registers used by the debugged process. The
interface allows code regions to be written as well --- typically, code
regions are protected and hence the debugged process won't be able to
overwrite code without first using a system call to modify the permissions
on the memory page(s) that contain the code. But the debugger is able to
change code as well.
Obviously, the debugging interface enables a debugging process to exert a
great deal of control over the debugged process. As such, UNIX allows
debugging only if the debugger and the debugged processes are both run
by the same user.
(On Linux, ptrace can also be used for system call interception. In this
case, every time the debugged process makes a system call, the call is
suspended inside the kernel, and the information delivered to the
debugger. At this point, the debugger can change this system call or its
parameters, and then allow the system call to continue. When the system
call is completed, the debugger is notified again, and it can change the
return results or modify the debugged process memory. It can then let the
system call return to the debugged process, which then resumes
execution.)
4. Network Connection
a) Binding: Programs use the socket abstraction for network
communication. In order for a socket, which represents a
communication endpoint, to become visible from outside, it needs to
be associated with a port number. (This holds for TCP and UDP, the
two main protocols used for communication on the Internet.)
Historically, ports below 1024 were considered “privileged” ports ---
binding to them required root privileges. The justification is in the
context of large, multi-user systems where a number of user
applications are running on the same system as a bunch of services.
The assumption was that user processes are not trusted, and could
try to masquerade as a server. (For instance, a user process
masquerading as a telnet server could capture passwords of other
users and forward them to the attacker.) To prevent this possibility,
trusted servers would use only ports below 1024. Since such ports
cannot be bound by normal user processes, this masquerading won't
be possible.
b) Connect:
A client initiates a connection. There are no access controls
associated with the connect operation on most contemporary OSes.
c) Accept :
Accept is used by a server to accept an incoming connection (i.e., in
response to a connect operation invoked by a client). No permission
checks are associated with this operation on most contemporary
OSes.
Boot Security:
A number of security-critical services get started up at boot time. It is
necessary to understand this sequence in order to identify the relevant
security issues.
1) Loader loads the kernel
The loader loads the kernel, and the init process starts. The PID of the
init process is 1.
2) Kernel modules get loaded and devices are initialized
Some kernel modules are loaded immediately; others are loaded explicitly
by boot scripts.
3) Boot scripts are stored at /etc/init.d
4) Run Levels
0 halt
1 single user
2 Full Multi-User mode (default)
3-5 Same as 2
6 Reboot
Scripts that will be run at different run levels can be different. To support
this, UNIX systems typically use one directory per run level (named
/etc/rcN.d/) for storing these scripts.
These directories contain symbolic links to the actual files stored in
/etc/init.d. Script names that start with “S” are run at startup, while those
starting with "K" are run at shutdown time. The order of running the
scripts is determined by their names --- for instance, S01 will be run
before S02, and so on.
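The naming convention above amounts to a filter-and-sort. The sketch below models it in plain Java (the script names and directory contents are invented for illustration):

```java
// Model of rc-script ordering: "S" scripts run at startup, sorted by name,
// so S01... runs before S02..., and "K" scripts are skipped at startup.
import java.util.*;

public class RcOrder {

    static List<String> startupOrder(List<String> scripts) {
        List<String> out = new ArrayList<>();
        for (String s : scripts) if (s.startsWith("S")) out.add(s);
        Collections.sort(out); // lexicographic: the two-digit prefix decides
        return out;
    }

    public static void main(String[] args) {
        List<String> rc2d = Arrays.asList("S12syslog", "K20nfs",
                                          "S01network", "S55sshd");
        System.out.println(startupOrder(rc2d));
        // prints "[S01network, S12syslog, S55sshd]"
    }
}
```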
Other UNIX security issues
1) Devices
a) Hard disk
b) /dev/mem & /dev/kmem : (virtual memory and kernel memory)
c) /dev/tty
Access to raw devices must be carefully controlled, or else it can
defeat higher level security primitives. For instance, by directly
accessing the contents of a hard drive, a process can modify any
thing on the file system, thereby bypassing any permissions set on
the files stored therein. Similarly, one process can interfere with the
network packets that belong to another user's process by directly
reading (or writing to) the network interface. Finally, memory can
be accessed indirectly through low-level devices. In UNIX, all these
devices are typically configured so that only root processes can
access them.
2) Mounting File Systems
When we want to attach a file system to an operating system, we
need to specify where in the directory structure we want to attach it;
this process is called mounting. The ability to mount raises several
security issues.
(a) For removable media (USB drives, CDROMs, etc.), an
ordinary user may create a setuid-to-root executable on a
different system (on which she has root access). By mounting
this file system on a machine on which she has no root access,
she can obtain root privileges by running the suid application.
So, one should be careful about granting mount privileges to
ordinary users. One common approach is to grant these
privileges while disabling the setuid option for file systems
mounted by ordinary users.
(b) UNIX allows the same file system to be mounted at more than
one mount point. When this is done, the user has effectively
created aliases for file names. For instance, if a filesystem is
mounted on /usr and /mnt/usr, then a file A in this filesystem
can be accessed using the name /usr/A and /mnt/usr/A.
3) Search Path
A search path is a sequence of directories that a system uses to
locate an object (program, library, or file). Because programs rely on
search paths, users must take care to set them appropriately.
Some systems have many types of search paths. In addition to
searching for executables, a common search path contains
directories that are used to search for libraries when the system
supports dynamic loading. If an attacker is able to influence this
search path, then he can induce other users (including root) to execute
code of his choice. For instance, suppose that an attacker A can
modify root's path to include /home/A at its beginning. Then, when
root types the command ls, the file /home/A/ls may get executed
with root privileges. Since the attacker created this file, it gives the
attacker the ability to run arbitrary code with root privileges.
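The attack works because path resolution is first-match-wins. The sketch below models lookup over an in-memory map of directories (all names, paths, and contents are invented for illustration):

```java
// Model of search-path resolution: the first directory on the path that
// contains the command wins, so an attacker-controlled leading entry
// shadows the real binary.
import java.util.*;

public class PathLookup {

    static String resolve(String cmd, List<String> path,
                          Map<String, Set<String>> dirs) {
        for (String dir : path) {
            Set<String> contents = dirs.getOrDefault(dir, Collections.emptySet());
            if (contents.contains(cmd)) return dir + "/" + cmd;
        }
        return null; // command not found anywhere on the path
    }

    public static void main(String[] args) {
        Map<String, Set<String>> dirs = new HashMap<>();
        dirs.put("/home/A", Set.of("ls"));      // attacker's trojan "ls"
        dirs.put("/bin", Set.of("ls", "cat"));  // the real binaries
        List<String> rootPath = List.of("/home/A", "/bin");
        System.out.println(resolve("ls", rootPath, dirs)); // prints "/home/A/ls"
    }
}
```

This is why administrative accounts should never have writable or relative directories early in their search paths.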
4) Capabilities. Modern UNIX systems have introduced some
flexibility in places where policies were hard-coded previously. For
instance, the ability to change file ownerships is now treated as a
capability within Linux. (These are not fully transferable, in the
sense of classical capabilities, but they can be inherited across a
fork.) A number of similar capabilities have been defined. (On
Linux, try "man capabilities.")
5) Network Access
Linux systems provide built-in firewalling capabilities. This is
administered using the iptables program. When this service is
enabled, iptables-related scripts are run at boot time. You can figure
out how to configure this by looking at the relevant scripts and the
documentation on iptables configuration. In addition to iptables,
additional mechanisms are available for controlling network access.
The most important among these are the hosts.allow and
hosts.deny files, which specify which hosts are allowed to connect to
the local system.
Database security
The main issue in database security is fine granularity – it is not enough
to permit or deny access to an entire database. SQL supports an access
control mechanism that can be used to limit access to tables in much the
same way that access can be limited to specific files using a conventional
access control specification, e.g., one user may be permitted to read a
table, another may be permitted to update it, and so on.
Sometimes, we want to have finer granularity of protection, e.g.,
suppressing certain columns and/or rows. This can be achieved using
database views. Views are a mechanism in databases to provide a
customized view of a database to particular users. Typically, a view can be
defined as the result of a database query. As a result, rows can be omitted,
or columns can be projected out using a query. Thus, by combining views
with SQL access control primitives, we can realize fairly sophisticated
access control objectives.
Statistical security and the inference problem
When dealing with sensitive data, it is often necessary to permit access to
aggregated data even when access to individual data items is too sensitive
to reveal. For instance, the census bureau collects a lot of sensitive
information about individuals. In general, no one should be able to access
detailed individual records. However, if we don't permit aggregate queries,
e.g., the number of people in a state who are African American, then the
whole purpose of conducting a census would be lost. The catch is that it
may be possible to identify sensitive information from the results of one or
more aggregate queries. This is called the inference problem. As an
example, consider a database that contains grade information for this
course. We may permit aggregate queries, e.g., average score on the final
exam. But if this average is computed over a small set, then it can reveal
sensitive information. To illustrate this, consider a class that has only a
single woman. By making a query that selects the students whose gender
is female, and asking for the average of these students, one can determine
the grade of a single individual in the class.
One can attempt to solve this problem by prescribing a minimum size on
the sets on which aggregates are computed. But an attacker can
circumvent this by computing aggregates on the complement of a set, e.g.,
by comparing the average of the whole class with the average for male
students, an attacker can compute the female student's grade. Another
possibility is to insert random errors in outputs. For instance, in the above
calculation, a small error in the average grades can greatly increase the
error in the inferred grade of the female student.
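The complement-set attack described above is just arithmetic. The sketch below works through it with invented numbers (class size, averages, and the method name are all illustrative):

```java
// Worked version of the grade-inference attack: two permitted aggregate
// queries (class average and male average) pin down the one female
// student's grade exactly.
public class InferenceAttack {

    // total points = classAvg * classSize; male total = maleAvg * (classSize - 1);
    // the difference is the remaining student's grade.
    static double inferredGrade(int classSize, double classAvg, double maleAvg) {
        return classAvg * classSize - maleAvg * (classSize - 1);
    }

    public static void main(String[] args) {
        int classSize = 10;        // 9 men, 1 woman (invented)
        double classAvg = 72.0;    // permitted aggregate query
        double maleAvg  = 70.0;    // permitted query on the complement set
        System.out.println(inferredGrade(classSize, classAvg, maleAvg));
        // prints "90.0"
    }
}
```

A minimum-set-size rule alone does not stop this, since both queried sets are large; only techniques such as perturbing the released aggregates blunt the inference.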
3.5 SUMMARY
3.6 EXERCISE
3.7 REFERENCES
4
ANDROID OPERATING SYSTEM
Unit Structure
4.0 Introduction
4.1 The Android Software Stack
4.2 The Linux Kernel
4.3 Libraries
4.4 Application Framework
4.5 Summary
4.6 Exercise
4.7 References
4.0 INTRODUCTION
• Android run time - The run time is what makes an Android phone
an Android phone rather than a mobile Linux implementation.
Including the core libraries and the Dalvik VM, the Android run
time is the engine that powers your applications and, along with the
libraries, forms the basis for the application framework.
• Core libraries - Although most Android application development is
written using the Java language, Dalvik is not a Java VM. The core
Android libraries provide most of the functionality available in the
core Java libraries, as well as the Android-specific libraries.
• Dalvik VM - Dalvik is a register-based Virtual Machine that’s been
optimized to ensure that a device can run multiple instances
efficiently. It relies on the Linux kernel for threading and low-level
memory management.
• Application framework - The application framework provides the
classes used to create Android applications. It also provides a
generic abstraction for hardware access and manages the user
interface and application resources.
• Application layer - All applications, both native and third-party, are
built on the application layer by means of the same API libraries.
The application layer runs within the Android run time, using the
classes and services made available from the application framework.
4.3 LIBRARIES
The SQLite library is used for data storage and is light in terms of mobile
memory footprint and task execution.
The WebKit library mainly provides the web-browsing engine and many
related features.
The surface manager library is responsible for rendering windows and
drawing surfaces of various apps on the screen.
The media framework library provides media codecs for audio and video.
OpenGL (Open Graphics Library) and SGL (Scalable Graphics Library)
are the graphics libraries for 3D and 2D rendering, respectively.
The FreeType library is used for rendering fonts.
Application Framework
It is a collection of APIs written in Java, which gives developers access to
the complete feature set of Android OS.
Developers have full access to the same framework APIs used by the core
applications, so that they can enhance more in terms of functionalities of
their application.
It enables and simplifies the reuse of core components and services, such as:
Activity Manager: Manages the lifecycle of apps and provides the
common navigation back stack.
Window Manager: Manages windows and drawing surfaces, and is an
abstraction of the surface manager library.
Content Providers: Enable applications to access data from other
applications or to share their own data, i.e., they provide a mechanism to
exchange data among apps.
View System: Contains the user-interface building blocks used to build an
application's UI, including lists, grids, text boxes, buttons, etc., and also
performs the event management of UI elements.
Package Manager: Manages various kinds of information related to the
application packages that are currently installed on the device.
Telephony Manager: Enables apps to use the phone capabilities of the device.
Resource Manager: Provides access to non-code resources (localized
Strings, bitmaps, Graphics and Layouts).
Location Manager: Deals with location awareness capabilities.
Notification Manager: Enables apps to display custom alerts in the status
bar.
Applications
The top of the Android application stack is occupied by the system apps
and the many other apps that users can download from Android's official
store, the Google Play Store. A set of core applications is pre-packaged
on the handset, such as an email client, SMS program, calendar, maps,
browser, contacts, and a few more. This layer uses all the layers below it
for the proper functioning of these mobile apps. As we can see, Android
organizes its functionality as a layered software stack, which makes
Android work fluently on many devices.
Media framework
A multimedia framework is a software framework that handles media on a
computer and across a network. It is meant to be used by applications
such as media players and audio or video editors, but can also be used to
build videoconferencing applications, media converters, and other
multimedia tools.
SQLite
SQLite is an open-source SQL database that stores data in a file on the
device. Android comes with a built-in SQLite database implementation.
SQLite supports standard relational database features. In order to access
this database, you don't need to establish any kind of connection to it,
such as JDBC or ODBC.
Database - Package
The main package is android.database.sqlite, which contains the classes to
manage your own databases.
Database - Creation
In order to create a database, you just need to call the method
openOrCreateDatabase with your database name and mode as parameters.
It returns an instance of SQLiteDatabase, which you have to receive in
your own object. Its syntax is given below:
SQLiteDatabase mydatabase = openOrCreateDatabase("your database
name", MODE_PRIVATE, null);
Apart from this, there are other methods, available on the Cursor object
returned by database queries, that help with this job. They are listed below:
1. getColumnCount(): returns the total number of columns of the table.
2. getColumnIndex(String columnName): returns the index number of a
column, given the name of the column.
3. getColumnName(int columnIndex): returns the name of the column,
given the index of the column.
4. getColumnNames(): returns an array of all the column names of the
table.
5. getCount(): returns the total number of rows in the cursor.
6. getPosition(): returns the current position of the cursor in the table.
7. isClosed(): returns true if the cursor is closed and false otherwise.
package com.example.sairamkrishna.myapplication;
import android.content.Context;
import android.content.Intent;
import android.support.v7.app.ActionBarActivity;
import android.os.Bundle;
import android.view.KeyEvent;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.AdapterView;
import android.widget.ArrayAdapter;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.ListView;
import java.util.ArrayList;
import java.util.List;
public class MainActivity extends ActionBarActivity {
public final static String EXTRA_MESSAGE = "MESSAGE";
private ListView obj;
DBHelper mydb;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mydb = new DBHelper(this);
ArrayList array_list = mydb.getAllCotacts();
ArrayAdapter arrayAdapter=new
ArrayAdapter(this,android.R.layout.simple_list_item_1, array_list);
obj = (ListView)findViewById(R.id.listView1);
obj.setAdapter(arrayAdapter);
obj.setOnItemClickListener(new OnItemClickListener(){
@Override
public void onItemClick(AdapterView<?> arg0, View arg1, int
arg2,long arg3) {
// TODO Auto-generated method stub
int id_To_Search = arg2 + 1;
Bundle dataBundle = new Bundle();
dataBundle.putInt("id", id_To_Search);
package com.example.sairamkrishna.myapplication;
import android.os.Bundle;
import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;
import android.content.Intent;
import android.database.Cursor;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;
public class DisplayContact extends Activity {
int from_Where_I_Am_Coming = 0;
private DBHelper mydb ;
TextView name ;
TextView phone;
TextView email;
TextView street;
TextView place;
int id_To_Update = 0;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_display_contact);
name = (TextView) findViewById(R.id.editTextName);
phone = (TextView) findViewById(R.id.editTextPhone);
email = (TextView) findViewById(R.id.editTextEmail);
street = (TextView) findViewById(R.id.editTextStreet);
place = (TextView) findViewById(R.id.editTextCity);
mydb = new DBHelper(this);
Bundle extras = getIntent().getExtras();
if(extras !=null) {
int Value = extras.getInt("id");
if(Value>0){
//means this is the view part not the add contact part.
Cursor rs = mydb.getData(Value);
id_To_Update = Value;
rs.moveToFirst();
String nam = rs.getString(rs.getColumnIndex(DBHelper.CONTACTS_COLUMN_NAME));
String phon = rs.getString(rs.getColumnIndex(DBHelper.CONTACTS_COLUMN_PHONE));
String emai = rs.getString(rs.getColumnIndex(DBHelper.CONTACTS_COLUMN_EMAIL));
String stree = rs.getString(rs.getColumnIndex(DBHelper.CONTACTS_COLUMN_STREET));
String plac = rs.getString(rs.getColumnIndex(DBHelper.CONTACTS_COLUMN_CITY));
if (!rs.isClosed()) {
rs.close();
}
Button b = (Button)findViewById(R.id.button1);
b.setVisibility(View.INVISIBLE);
name.setText((CharSequence)nam);
name.setFocusable(false);
name.setClickable(false);
phone.setText((CharSequence)phon);
phone.setFocusable(false);
phone.setClickable(false);
email.setText((CharSequence)emai);
email.setFocusable(false);
email.setClickable(false);
street.setText((CharSequence)stree);
street.setFocusable(false);
street.setClickable(false);
place.setText((CharSequence)plac);
place.setFocusable(false);
place.setClickable(false);
}
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
Bundle extras = getIntent().getExtras();
if(extras !=null) {
int Value = extras.getInt("id");
if(Value>0){
getMenuInflater().inflate(R.menu.display_contact, menu);
} else{
getMenuInflater().inflate(R.menu.menu_main, menu);
}
}
return true;
}
public boolean onOptionsItemSelected(MenuItem item) {
super.onOptionsItemSelected(item);
switch(item.getItemId()) {
case R.id.Edit_Contact:
Button b = (Button)findViewById(R.id.button1);
b.setVisibility(View.VISIBLE);
name.setEnabled(true);
name.setFocusableInTouchMode(true);
name.setClickable(true);
phone.setEnabled(true);
phone.setFocusableInTouchMode(true);
phone.setClickable(true);
email.setEnabled(true);
email.setFocusableInTouchMode(true);
email.setClickable(true);
street.setEnabled(true);
street.setFocusableInTouchMode(true);
street.setClickable(true);
place.setEnabled(true);
place.setFocusableInTouchMode(true);
place.setClickable(true);
return true;
case R.id.Delete_Contact:
AlertDialog.Builder builder = new AlertDialog.Builder(this);
AlertDialog d = builder.create();
d.setTitle("Are you sure");
d.show();
return true;
default:
return super.onOptionsItemSelected(item);
}
}
}
package com.example.sairamkrishna.myapplication;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Hashtable;
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.DatabaseUtils;
import android.database.sqlite.SQLiteOpenHelper;
import android.database.sqlite.SQLiteDatabase;
public class DBHelper extends SQLiteOpenHelper {
public static final String DATABASE_NAME = "MyDBName.db";
public static final String CONTACTS_TABLE_NAME = "contacts";
public static final String CONTACTS_COLUMN_ID = "id";
public static final String CONTACTS_COLUMN_NAME = "name";
public static final String CONTACTS_COLUMN_EMAIL = "email";
public static final String CONTACTS_COLUMN_STREET = "street";
public static final String CONTACTS_COLUMN_CITY = "place";
public static final String CONTACTS_COLUMN_PHONE = "phone";
private HashMap hp;
public DBHelper(Context context) {
super(context, DATABASE_NAME , null, 1);
}
@Override
public void onCreate(SQLiteDatabase db) {
// TODO Auto-generated method stub
db.execSQL(
"create table contacts " +
"(id integer primary key, name text,phone text,email text, street
text,place text)"
);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
// TODO Auto-generated method stub
db.execSQL("DROP TABLE IF EXISTS contacts");
onCreate(db);
}
public boolean insertContact (String name, String phone, String email,
String street,String place) {
SQLiteDatabase db = this.getWritableDatabase();
ContentValues contentValues = new ContentValues();
contentValues.put("name", name);
contentValues.put("phone", phone);
contentValues.put("email", email);
contentValues.put("street", street);
contentValues.put("place", place);
db.insert("contacts", null, contentValues);
return true;
}
// ... remaining DBHelper methods, such as getData() used by DisplayContact,
// are omitted here ...
}
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/textView"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:textSize="30dp"
android:text="Data Base" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Tutorials Point"
android:id="@+id/textView2"
android:layout_below="@+id/textView"
android:layout_centerHorizontal="true"
android:textSize="35dp"
android:textColor="#ff16ff01" />
<ImageView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/imageView"
android:layout_below="@+id/textView2"
android:layout_centerHorizontal="true"
android:src="@drawable/logo"/>
<ScrollView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/scrollView"
android:layout_below="@+id/imageView"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:layout_alignParentBottom="true"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true">
<ListView
android:id="@+id/listView1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true" >
</ListView>
</ScrollView>
</RelativeLayout>
Following is the content of the res/values/strings.xml file:
<item android:id="@+id/item1"
android:icon="@drawable/add"
android:title="@string/Add_New" >
</item>
</menu>
<item
android:id="@+id/Delete_Contact"
android:orderInCategory="100"
android:title="@string/delete"/>
</menu>
This is the default AndroidManifest.xml of this project:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.sairamkrishna.myapplication" >
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name=".MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER"
/>
</intent-filter>
</activity>
<activity android:name=".DisplayContact"/>
</application>
</manifest>
Webkit
WebKit is the open-source web browser engine used by Safari, Mail, the App
Store, and many other applications on macOS, iOS, and Linux. In the Android
software stack, WebKit provides the rendering engine behind the WebView
component and the stock web browser, letting applications display web
content without launching a separate browser.
OpenGL
Android supports OpenGL both through its framework API and the Native
Development Kit (NDK). This topic focuses on the Android framework
interfaces. For more information about the NDK, see the Android NDK.
There are two foundational classes in the Android framework that let you
create and manipulate graphics with the OpenGL ES API:
GLSurfaceView and GLSurfaceView.Renderer. If your goal is to use
OpenGL in your Android application, understanding how to implement
these classes in an activity should be your first objective.
GLSurfaceView
This class is a View where you can draw and manipulate objects using
OpenGL API calls and is similar in function to a SurfaceView. You can
use this class by creating an instance of GLSurfaceView and adding your
Renderer to it. However, if you want to capture touch screen events, you
should extend the GLSurfaceView class to implement the touch listeners,
as shown in the OpenGL training lesson, Responding to touch events.
GLSurfaceView.Renderer
This interface defines the methods required for drawing graphics in a
GLSurfaceView. You must provide an implementation of this interface as
a separate class and attach it to your GLSurfaceView instance using
GLSurfaceView.setRenderer().
The GLSurfaceView.Renderer interface requires that you implement the
following methods: onSurfaceCreated(), onDrawFrame(), and
onSurfaceChanged().
ContentProvider
Sometimes it is required to share data across applications. This is where
content providers become very useful.
Content providers let you centralize content in one place and have many
different applications access it as needed. A content provider behaves very
much like a database where you can query it, edit its content, as well as
add or delete content using insert(), update(), delete(), and query()
methods. In most cases this data is stored in an SQLite database.
A content provider is implemented as a subclass of ContentProvider
class and must implement a standard set of APIs that enable other
applications to perform transactions.
public class MyApplication extends ContentProvider {
Content URIs
To query a content provider, you specify the query string in the form of a
URI which has the following format −
<prefix>://<authority>/<data_type>/<id>
1. prefix − This is always set to content://
2. authority − This specifies the name of the content provider, for
example contacts, browser etc. For third-party content providers, this
could be the fully qualified name, such as
com.tutorialspoint.statusprovider
3. data_type − This indicates the type of data that this particular
provider provides. For example, if you are getting all the contacts from
the Contacts content provider, then the data path would be people and
the URI would look like this: content://contacts/people
4. id − This specifies the specific record requested. For example, if you
are looking for contact number 5 in the Contacts content provider then
the URI would look like this: content://contacts/people/5
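To make the four parts concrete, here is a minimal plain-Java sketch that splits a content URI string into prefix, authority, data type, and id. It is for illustration only; in a real Android application you would use the android.net.Uri class, and the class name ContentUriParts below is our own invention:

```java
public class ContentUriParts {
    // Split a content URI of the form <prefix>://<authority>/<data_type>/<id>
    public static String[] parse(String uri) {
        String[] schemeSplit = uri.split("://", 2);
        String prefix = schemeSplit[0];                    // e.g. "content"
        String[] path = schemeSplit[1].split("/");
        String authority = path[0];                        // e.g. "contacts"
        String dataType = path.length > 1 ? path[1] : "";  // e.g. "people"
        String id = path.length > 2 ? path[2] : "";        // e.g. "5"
        return new String[] { prefix, authority, dataType, id };
    }

    public static void main(String[] args) {
        String[] parts = parse("content://contacts/people/5");
        System.out.println(parts[0]); // content
        System.out.println(parts[1]); // contacts
        System.out.println(parts[2]); // people
        System.out.println(parts[3]); // 5
    }
}
```

Running main with the URI from the example above prints the four components on separate lines.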
• Next you will need to create your own database to keep the content.
Usually, Android uses an SQLite database, and the framework needs to
override the onCreate() method, which will use an SQLiteOpenHelper to
create or open the provider's database. When your application is
launched, the onCreate() handler of each of its Content Providers is
called on the main application thread.
• Next you will have to implement Content Provider queries to perform
different database-specific operations.
• getType() − This method returns the MIME type of the data at the
given URI.
Example
This example explains how to create your own ContentProvider. Follow
steps similar to those we followed while creating the Hello World example −
package com.example.MyApplication;
import android.net.Uri;
import android.os.Bundle;
import android.app.Activity;
import android.content.ContentValues;
import android.content.CursorLoader;
import android.database.Cursor;
import android.view.Menu;
import android.view.View;
import android.widget.EditText;
import android.widget.Toast;
public void onClickAddName(View view) {
ContentValues values = new ContentValues();
values.put(StudentsProvider.NAME,
((EditText)findViewById(R.id.editText2)).getText().toString());
values.put(StudentsProvider.GRADE,
((EditText)findViewById(R.id.editText3)).getText().toString());
Uri uri = getContentResolver().insert(StudentsProvider.CONTENT_URI, values);
Toast.makeText(getBaseContext(),
uri.toString(), Toast.LENGTH_LONG).show();
}
public void onClickRetrieveStudents(View view) {
// Retrieve student records
String URL =
"content://com.example.MyApplication.StudentsProvider";
Uri students = Uri.parse(URL);
Cursor c = getContentResolver().query(students,
null, null, null, "name");
if (c.moveToFirst()) {
do{
Toast.makeText(this,
c.getString(c.getColumnIndex(StudentsProvider._ID)) +
", " + c.getString(c.getColumnIndex(
StudentsProvider.NAME)) +
", " + c.getString(c.getColumnIndex(
StudentsProvider.GRADE)),
Toast.LENGTH_SHORT).show();
} while (c.moveToNext());
}
}
}
Create a new file StudentsProvider.java under the com.example.MyApplication
package; following is the content of
src/com.example.MyApplication/StudentsProvider.java −
package com.example.MyApplication;
import java.util.HashMap;
import android.content.ContentProvider;
import android.content.ContentUris;
import android.content.ContentValues;
import android.content.Context;
import android.content.UriMatcher;
import android.database.Cursor;
import android.database.SQLException;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;
import android.database.sqlite.SQLiteQueryBuilder;
import android.net.Uri;
import android.text.TextUtils;
public class StudentsProvider extends ContentProvider {
static final String PROVIDER_NAME =
"com.example.MyApplication.StudentsProvider";
static final String URL = "content://" + PROVIDER_NAME +
"/students";
static final Uri CONTENT_URI = Uri.parse(URL);
static final String _ID = "_id";
static final String NAME = "name";
static final String GRADE = "grade";
private static HashMap<String, String>
STUDENTS_PROJECTION_MAP;
static final int STUDENTS = 1;
static final int STUDENT_ID = 2;
static final UriMatcher uriMatcher;
static{
uriMatcher = new UriMatcher(UriMatcher.NO_MATCH);
uriMatcher.addURI(PROVIDER_NAME, "students", STUDENTS);
uriMatcher.addURI(PROVIDER_NAME, "students/#",
STUDENT_ID);
}
/**
* Database specific constant declarations
*/
private SQLiteDatabase db;
static final String DATABASE_NAME = "College";
static final String STUDENTS_TABLE_NAME = "students";
static final int DATABASE_VERSION = 1;
static final String CREATE_DB_TABLE =
" CREATE TABLE " + STUDENTS_TABLE_NAME +
" (_id INTEGER PRIMARY KEY AUTOINCREMENT, " +
" name TEXT NOT NULL, " +
" grade TEXT NOT NULL);";
/**
* Helper class that actually creates and manages
* the provider's underlying data repository.
*/
private static class DatabaseHelper extends SQLiteOpenHelper {
DatabaseHelper(Context context){
super(context, DATABASE_NAME, null,
DATABASE_VERSION);
}
@Override
public void onCreate(SQLiteDatabase db) {
db.execSQL(CREATE_DB_TABLE);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int
newVersion) {
db.execSQL("DROP TABLE IF EXISTS " +
STUDENTS_TABLE_NAME);
onCreate(db);
}
}
@Override
public boolean onCreate() {
Context context = getContext();
DatabaseHelper dbHelper = new DatabaseHelper(context);
/**
* Create a writable database which will trigger its
* creation if it doesn't already exist.
*/
db = dbHelper.getWritableDatabase();
return db != null;
}
@Override
public Uri insert(Uri uri, ContentValues values) {
/**
* Add a new student record
*/
long rowID = db.insert( STUDENTS_TABLE_NAME, "", values);
/**
* If record is added successfully
*/
if (rowID > 0) {
Uri _uri = ContentUris.withAppendedId(CONTENT_URI, rowID);
getContext().getContentResolver().notifyChange(_uri, null);
return _uri;
}
throw new SQLException("Failed to add a record into " + uri);
}
@Override
public Cursor query(Uri uri, String[] projection,
String selection,String[] selectionArgs, String sortOrder) {
SQLiteQueryBuilder qb = new SQLiteQueryBuilder();
qb.setTables(STUDENTS_TABLE_NAME);
switch (uriMatcher.match(uri)) {
case STUDENTS:
qb.setProjectionMap(STUDENTS_PROJECTION_MAP);
break;
case STUDENT_ID:
qb.appendWhere( _ID + "=" + uri.getPathSegments().get(1));
break;
default:
}
if (sortOrder == null || sortOrder.isEmpty()) {
// By default sort on student names
sortOrder = NAME;
}
Cursor c = qb.query(db, projection, selection,
selectionArgs, null, null, sortOrder);
// Register to watch a content URI for changes
c.setNotificationUri(getContext().getContentResolver(), uri);
return c;
}
@Override
public int delete(Uri uri, String selection, String[] selectionArgs) {
int count = 0;
switch (uriMatcher.match(uri)){
case STUDENTS:
count = db.delete(STUDENTS_TABLE_NAME, selection,
selectionArgs);
break;
case STUDENT_ID:
String id = uri.getPathSegments().get(1);
count = db.delete( STUDENTS_TABLE_NAME, _ID + " = " + id +
(!TextUtils.isEmpty(selection) ?
" AND (" + selection + ')' : ""), selectionArgs);
break;
default:
throw new IllegalArgumentException("Unknown URI " + uri);
}
getContext().getContentResolver().notifyChange(uri, null);
return count;
}
@Override
public int update(Uri uri, ContentValues values,
String selection, String[] selectionArgs) {
int count = 0;
switch (uriMatcher.match(uri)) {
case STUDENTS:
count = db.update(STUDENTS_TABLE_NAME, values,
selection, selectionArgs);
break;
case STUDENT_ID:
count = db.update(STUDENTS_TABLE_NAME, values,
_ID + " = " + uri.getPathSegments().get(1) +
(!TextUtils.isEmpty(selection) ?
" AND (" + selection + ')' : ""), selectionArgs);
break;
default:
throw new IllegalArgumentException("Unknown URI " + uri );
}
getContext().getContentResolver().notifyChange(uri, null);
return count;
}
@Override
public String getType(Uri uri) {
switch (uriMatcher.match(uri)){
/**
* Get all student records
*/
case STUDENTS:
return "vnd.android.cursor.dir/vnd.example.students";
/**
* Get a particular student
*/
case STUDENT_ID:
return "vnd.android.cursor.item/vnd.example.students";
default:
throw new IllegalArgumentException("Unsupported URI: " + uri);
}
}
}
Following is the modified content of the AndroidManifest.xml file. Here
we have added the <provider.../> tag to include our content provider:
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER"
/>
</intent-filter>
</activity>
<provider android:name="StudentsProvider"
android:authorities="com.example.MyApplication.StudentsProvider"/>
</application>
</manifest>
<TextView
android:id="@+id/textView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Content provider"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:textSize="30dp" />
<TextView
android:id="@+id/textView2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Tutorials point "
android:textColor="#ff87ff09"
android:textSize="30dp"
android:layout_below="@+id/textView1"
android:layout_centerHorizontal="true" />
<ImageButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/imageButton"
android:src="@drawable/abc"
android:layout_below="@+id/textView2"
android:layout_centerHorizontal="true" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/button2"
android:text="Add Name"
android:layout_below="@+id/editText3"
android:layout_alignRight="@+id/textView2"
android:layout_alignEnd="@+id/textView2"
android:layout_alignLeft="@+id/textView2"
android:layout_alignStart="@+id/textView2"
android:onClick="onClickAddName"/>
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText"
android:layout_below="@+id/imageButton"
android:layout_alignRight="@+id/imageButton"
android:layout_alignEnd="@+id/imageButton" />
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText2"
android:layout_alignTop="@+id/editText"
android:layout_alignLeft="@+id/textView1"
android:layout_alignStart="@+id/textView1"
android:layout_alignRight="@+id/textView1"
android:layout_alignEnd="@+id/textView1"
android:hint="Name"
android:textColorHint="@android:color/holo_blue_light" />
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText3"
android:layout_below="@+id/editText"
android:layout_alignLeft="@+id/editText2"
android:layout_alignStart="@+id/editText2"
android:layout_alignRight="@+id/editText2"
android:layout_alignEnd="@+id/editText2"
android:hint="Grade"
android:textColorHint="@android:color/holo_blue_bright" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Retrieve student"
android:id="@+id/button"
android:layout_below="@+id/button2"
android:layout_alignRight="@+id/editText3"
android:layout_alignEnd="@+id/editText3"
android:layout_alignLeft="@+id/button2"
android:layout_alignStart="@+id/button2"
android:onClick="onClickRetrieveStudents"/>
</RelativeLayout>
Make sure you have the following content in the res/values/strings.xml file:
AndroidManifest.xml
You need to provide READ_PHONE_STATE permission in the
AndroidManifest.xml file.
File: AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.javatpoint.telephonymanager"
android:versionCode="1"
android:versionName="1.0" >

<uses-sdk
android:minSdkVersion="8"
android:targetSdkVersion="17" />

<uses-permission
android:name="android.permission.READ_PHONE_STATE"/>

<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.javatpoint.telephonymanager.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>

</manifest>
LocationManager
public class LocationManager
extends Object
java.lang.Object
↳ android.location.LocationManager
This class provides access to the system location services. These services
allow applications to obtain periodic updates of the device's geographical
location, or to be notified when the device enters the proximity of a given
geographical location.
Unless otherwise noted, all Location API methods require the
Manifest.permission.ACCESS_COARSE_LOCATION or
Manifest.permission.ACCESS_FINE_LOCATION permissions. If
your application only has the coarse permission then providers will still
return location results, but the exact location will be obfuscated to a coarse
level of accuracy.
Requires the PackageManager#FEATURE_LOCATION feature which
can be detected using PackageManager.hasSystemFeature(String).
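For reference, these permissions are requested with <uses-permission> entries in AndroidManifest.xml. The fragment below is a minimal sketch; the package name com.example.locationdemo is a placeholder:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.locationdemo">

    <!-- Coarse location: results are obfuscated to a coarse accuracy -->
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <!-- Fine location: precise location results -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

</manifest>
```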
Summary
Constants
String ACTION_GNSS_CAPABILITIES_CHANGED
Broadcast intent action when GNSS capabilities change.
String EXTRA_GNSS_CAPABILITIES
Intent extra included with
ACTION_GNSS_CAPABILITIES_CHANGED broadcasts,
containing the new GnssCapabilities.
String EXTRA_LOCATION_ENABLED
Intent extra included with MODE_CHANGED_ACTION
broadcasts, containing the boolean enabled state of location.
String EXTRA_PROVIDER_ENABLED
Intent extra included with
PROVIDERS_CHANGED_ACTION broadcasts, containing
the boolean enabled state of the location provider that has
changed.
String EXTRA_PROVIDER_NAME
Intent extra included with
PROVIDERS_CHANGED_ACTION broadcasts, containing
the name of the location provider that has changed.
String FUSED_PROVIDER
Standard name of the fused location provider.
String GPS_PROVIDER
Standard name of the GNSS location provider.
String KEY_FLUSH_COMPLETE
Key used for an extra holding an integer request code when
location flush completion is sent using a PendingIntent.
String KEY_LOCATIONS
Key used for an extra holding a array of Locations when a
location change is sent using a PendingIntent.
String KEY_LOCATION_CHANGED
Key used for an extra holding a Location value when a
location change is sent using a PendingIntent.
String KEY_PROVIDER_ENABLED
Key used for an extra holding a boolean enabled/disabled
status value when a provider enabled/disabled event is
broadcast using a PendingIntent.
String KEY_PROXIMITY_ENTERING
Key used for the Bundle extra holding a boolean indicating
whether a proximity alert is entering (true) or exiting (false).
String KEY_STATUS_CHANGED
This constant was deprecated in API level 29. Status changes
are deprecated and no longer broadcast from Android Q
onwards.
String MODE_CHANGED_ACTION
Broadcast intent action when the device location enabled state
changes.
String NETWORK_PROVIDER
Standard name of the network location provider.
String PASSIVE_PROVIDER
A special location provider for receiving locations without
actively initiating a location fix.
String PROVIDERS_CHANGED_ACTION
Broadcast intent action when the set of enabled location
providers changes.
Constants
ACTION_GNSS_CAPABILITIES_CHANGED
Added in API level 31
public static final String ACTION_GNSS_CAPABILITIES_CHANGED
Broadcast intent action when GNSS capabilities change. This is most
common at boot time as GNSS capabilities are queried from the chipset.
Includes an intent extra, EXTRA_GNSS_CAPABILITIES, with the new
GnssCapabilities.
See also:
• EXTRA_GNSS_CAPABILITIES
• getGnssCapabilities()
Constant Value:
"android.location.action.GNSS_CAPABILITIES_CHANGED"
EXTRA_GNSS_CAPABILITIES
Added in API level 31
public static final String EXTRA_GNSS_CAPABILITIES
Intent extra included with
ACTION_GNSS_CAPABILITIES_CHANGED broadcasts, containing
the new GnssCapabilities.
See also:
• ACTION_GNSS_CAPABILITIES_CHANGED
Constant Value: "android.location.extra.GNSS_CAPABILITIES"
EXTRA_LOCATION_ENABLED
Added in API level 30
public static final String EXTRA_LOCATION_ENABLED
Intent extra included with MODE_CHANGED_ACTION broadcasts,
containing the boolean enabled state of location.
See also:
• MODE_CHANGED_ACTION
Constant Value: "android.location.extra.LOCATION_ENABLED"
EXTRA_PROVIDER_ENABLED
Added in API level 30
public static final String EXTRA_PROVIDER_ENABLED
Intent extra included with PROVIDERS_CHANGED_ACTION
broadcasts, containing the boolean enabled state of the location provider
that has changed.
See also:
• PROVIDERS_CHANGED_ACTION
• EXTRA_PROVIDER_NAME
Constant Value: "android.location.extra.PROVIDER_ENABLED"
EXTRA_PROVIDER_NAME
Added in API level 29
public static final String EXTRA_PROVIDER_NAME
Intent extra included with PROVIDERS_CHANGED_ACTION
broadcasts, containing the name of the location provider that has changed.
See also:
• PROVIDERS_CHANGED_ACTION
• EXTRA_PROVIDER_ENABLED
Constant Value: "android.location.extra.PROVIDER_NAME"
FUSED_PROVIDER
Added in API level 31
public static final String FUSED_PROVIDER
Standard name of the fused location provider.
If present, this provider may combine inputs from several other location
providers to provide the best possible location fix. It is implicitly used for
all requestLocationUpdates APIs that involve a Criteria.
Constant Value: "fused"
GPS_PROVIDER
Added in API level 1
public static final String GPS_PROVIDER
Standard name of the GNSS location provider.
If present, this provider determines location using GNSS satellites. The
responsiveness and accuracy of location fixes may depend on GNSS
signal conditions.
The extras Bundle for locations derived by this location provider may
contain the following key/value pairs:
• satellites - the number of satellites used to derive the fix
Constant Value: "gps"
KEY_FLUSH_COMPLETE
Added in API level 31
public static final String KEY_FLUSH_COMPLETE
Key used for an extra holding an integer request code when location flush
completion is sent using a PendingIntent.
See also:
• requestFlush(String, PendingIntent, int)
Constant Value: "flushComplete"
KEY_LOCATIONS
Added in API level 31
public static final String KEY_LOCATIONS
Key used for an extra holding an array of Locations when a location change
is sent using a PendingIntent. This key will only be present if the location
change includes multiple (ie, batched) locations, otherwise only
KEY_LOCATION_CHANGED will be present. Use
Intent#getParcelableArrayExtra(String) to retrieve the locations.
The array of locations will never be empty, and will be ordered from earliest
location to latest location, the same as with
LocationListener#onLocationChanged(List).
See also:
KEY_PROVIDER_ENABLED
public static final String KEY_PROVIDER_ENABLED
Key used for an extra holding a boolean enabled/disabled status value
when a provider enabled/disabled event is broadcast using a
PendingIntent.
See also:
• requestLocationUpdates(String, LocationRequest, PendingIntent)
Constant Value: "providerEnabled"
KEY_PROXIMITY_ENTERING
Added in API level 1
public static final String KEY_PROXIMITY_ENTERING
Key used for the Bundle extra holding a boolean indicating whether a
proximity alert is entering (true) or exiting (false).
Constant Value: "entering"
Resource Manager
The job of a resource manager is, quite simply, to manage all available
resources that your company has, especially employees. One of the many
responsibilities of a resource manager (more commonly known as a
human resource manager, or HR manager) is to assign the right people to a
job.
There are many more items which you use to build a good Android
application. Apart from coding for the application, you take care of
various other resources like static content that your code uses, such as
bitmaps, colors, layout definitions, user interface strings, animation
instructions, and more. These resources are always maintained separately
in various sub-directories under res/ directory of the project.
This tutorial explains how you can organize your application resources,
specify alternative resources, and access them in your applications.
Organize resource in Android Studio
MyProject/
app/
manifest/
AndroidManifest.xml
java/
MyActivity.java
res/
drawable/
icon.png
layout/
activity_main.xml
info.xml
values/
strings.xml
Sr.No. Directory & Resource Type
1 anim/
XML files that define property animations. They are saved in
res/anim/ folder and accessed from the R.anim class.
2 color/
XML files that define a state list of colors. They are saved in
res/color/ and accessed from the R.color class.
3 drawable/
Image files like .png, .jpg, .gif or XML files that are compiled into
bitmaps, state lists, shapes, animation drawable. They are saved in
res/drawable/ and accessed from the R.drawable class.
4 layout/
XML files that define a user interface layout. They are saved in
res/layout/ and accessed from the R.layout class.
5 menu/
XML files that define application menus, such as an Options Menu,
Context Menu, or Sub Menu. They are saved in res/menu/ and
accessed from the R.menu class.
6 raw/
Arbitrary files to save in their raw form. You need to
call Resources.openRawResource() with the resource ID, which
is R.raw.filename to open such raw files.
7 values/
XML files that contain simple values, such as strings, integers, and
colors. For example, here are some filename conventions for
resources you can create in this directory −
• arrays.xml for resource arrays, and accessed from
the R.array class.
• integers.xml for resource integers, and accessed from
the R.integer class.
• bools.xml for resource boolean, and accessed from
the R.bool class.
• colors.xml for color values, and accessed from
the R.color class.
• dimens.xml for dimension values, and accessed from
the R.dimen class.
• strings.xml for string values, and accessed from
the R.string class.
• styles.xml for styles, and accessed from the R.style class.
8 xml/
Arbitrary XML files that can be read at runtime by
calling Resources.getXML(). You can save various configuration files
here which will be used at run time.
Alternative Resources
Your application should provide alternative resources to support specific
device configurations. For example, you should include alternative
drawable resources ( i.e.images ) for different screen resolution and
alternative string resources for different languages. At runtime, Android
detects the current device configuration and loads the appropriate
resources for your application.
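For example, locale-qualified resource directories supply alternative strings: a device set to a Spanish locale loads res/values-es/strings.xml instead of res/values/strings.xml. The two files below are a hypothetical illustration of this naming convention:

```xml
<!-- res/values/strings.xml (default) -->
<resources>
    <string name="hello">Hello!</string>
</resources>

<!-- res/values-es/strings.xml (loaded automatically on Spanish locales) -->
<resources>
    <string name="hello">¡Hola!</string>
</resources>
```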
To specify configuration-specific alternatives for a set of resources,
follow these steps −
<TextView android:id="@+id/text"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hello, I am a TextView" />
<Button android:id="@+id/button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hello, I am a Button" />
</LinearLayout>
This application code will load this layout for an Activity, in the
onCreate() method as follows −
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
<resources>
<color name="opaque_red">#f00</color>
<string name="hello">Hello!</string>
</resources>
Now you can use these resources in the following layout file to set the text
color and text string as follows −
<EditText xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:textColor="@color/opaque_red"
android:text="@string/hello" />
If you go through the previous chapter, where the Hello World! example is
explained, once again, you will gain a better understanding of the concepts
covered in this chapter. We highly recommend revisiting the previous
chapter for a working example and seeing how various resources are used at
a very basic level.
Android Activity Lifecycle
An activity passes through a sequence of callback methods as it is created, paused, resumed, and destroyed −
Method      Description
onCreate    called when the activity is first created.
onStart     called when the activity is becoming visible to the user.
onResume    called when the activity will start interacting with the user.
onPause     called when the system is about to resume a previous activity.
onStop      called when the activity is no longer visible to the user.
onRestart   called after the activity has been stopped, prior to it being started again.
onDestroy   called before the activity is destroyed.
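The order in which these callbacks fire can be sketched as a plain-Java simulation (this is not Android code; each method simply records the call order that the lifecycle prescribes) −

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of activity lifecycle call order:
// launch -> onCreate, onStart, onResume; background -> onPause, onStop;
// return -> onRestart, onStart, onResume; exit -> onPause, onStop, onDestroy.
public class LifecycleSketch {
    final List<String> log = new ArrayList<>();

    void launch()     { log.add("onCreate"); log.add("onStart"); log.add("onResume"); }
    void background() { log.add("onPause"); log.add("onStop"); }
    void foreground() { log.add("onRestart"); log.add("onStart"); log.add("onResume"); }
    void finish()     { log.add("onPause"); log.add("onStop"); log.add("onDestroy"); }

    public static void main(String[] args) {
        LifecycleSketch a = new LifecycleSketch();
        a.launch();      // user opens the app
        a.background();  // user presses Home
        a.foreground();  // user returns to the app
        a.finish();      // user backs out
        System.out.println(String.join(" -> ", a.log));
    }
}
```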
File: activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="example.javatpoint.com.activitylifecycle.MainActivity">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello World!"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</android.support.constraint.ConstraintLayout>
Android Activity Lifecycle Example
This example traces the invocation of the activity's life cycle methods by writing a message to Logcat from each callback.
File: MainActivity.java
package example.javatpoint.com.activitylifecycle;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Log.d("lifecycle","onCreate invoked");
    }
    @Override
    protected void onStart() {
        super.onStart();
        Log.d("lifecycle","onStart invoked");
    }
    @Override
    protected void onResume() {
        super.onResume();
        Log.d("lifecycle","onResume invoked");
    }
    @Override
    protected void onPause() {
        super.onPause();
        Log.d("lifecycle","onPause invoked");
    }
    @Override
    protected void onStop() {
        super.onStop();
        Log.d("lifecycle","onStop invoked");
    }
    @Override
    protected void onRestart() {
        super.onRestart();
        Log.d("lifecycle","onRestart invoked");
    }
    @Override
    protected void onDestroy() {
        super.onDestroy();
        Log.d("lifecycle","onDestroy invoked");
    }
}
Output:
You will not see any output on the emulator or device screen. Open the Logcat window to see the lifecycle messages.
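Launching the activity and then backing out of it produces the Log.d messages from the code above in this order (tag "lifecycle"; timestamps and process IDs omitted) −

```text
onCreate invoked
onStart invoked
onResume invoked
onPause invoked
onStop invoked
onDestroy invoked
```

Pressing Home instead of Back stops after "onStop invoked", and returning to the app then adds "onRestart invoked", "onStart invoked", and "onResume invoked".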
4.5 SUMMARY
4.6 EXERCISE
4.7 REFERENCES