"Linux": A Technical Report ON
"Linux": A Technical Report ON
TECHNICAL REPORT
ON
“LINUX”
Submitted in partial fulfillment of the Master's Degree in Embedded
Systems at Jaipur National University
2010-2011
PREFACE
The Linux kernel is the core of the Red Hat Enterprise Linux operating system. It is the kernel's
responsibility to control hardware, enforce security, and allocate resources such as CPU and
memory. Most of the operating system is not Linux itself but rather a collection of applications that
make use of the facilities provided by the Linux kernel.
Basically, Linux is used for security purposes. Red Hat Enterprise Linux is released on an eighteen-
to twenty-four-month cycle. It is based on code developed by the open source community
and adds performance enhancements, intensive testing, and certification on products produced by
top independent software and hardware vendors such as Dell, IBM, Fujitsu, BEA, and Oracle.
Red Hat Enterprise Linux provides a high degree of standardization through its support for seven
processor architectures.
In this report we discuss Linux, Linux commands, and servers. Linux provides open
source facilities. Red Hat Network is a complete system management platform: a
framework of modules for easy software updates, system management, and monitoring.
ACKNOWLEDGEMENT
With the culmination of the proposed work and its compilation in this form, I have an opportunity
to express my deep regard and profound sense of gratitude to my learned and most respected
H.O.D. (Electronics and Communication) for his invaluable guidance and inspiring advice. He has
been a constant source of inspiration and kind help throughout this work.
MADHAV SHARMA
CONTENTS
Description
Properties of Linux
Linux vs Windows
Shell
YUM
User interface
Conclusion
Reference
INTRODUCTION
Red Hat Network is a complete system management platform. It is a framework of modules for
easy software updates, system management, and monitoring. There are currently four modules in
Red Hat Network: the update module, the management module, the provisioning module, and
the monitoring module.
Red Hat Enterprise Linux 5 supports nineteen languages, including English, Bengali, Chinese (Simplified),
Chinese (Traditional), French, German, Gujarati, Hindi, Italian, Japanese, Korean, Malayalam,
Marathi, Oriya, Portuguese, Russian, Spanish, and Tamil. A system's language can be selected
during installation, but the default is US English. The currently selected language is set with the
LANG shell variable.
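For example, the current setting can be inspected and changed from a shell (the locale values shown here are only illustrative):

$ echo $LANG
en_US.UTF-8
$ export LANG=hi_IN.UTF-8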
Red Hat offers a number of additional open source application products and operating system
enhancements which may be added to the standard Red Hat Enterprise Linux operating system.
Red Hat Enterprise Linux Advanced Platform is the most effective server solution; this product
includes support for the largest x86-compatible servers, unlimited virtualized guest operating
systems, storage virtualization, high-availability application and guest failover clusters, and the
highest level of technical support.
Red Hat Enterprise Linux, the basic server solution, supports servers with up to two CPU
sockets and up to four virtualized guest operating systems.
Linux is a kernel, not an operating system. Linux is open source software (OSS), which refers to
software whose source code is freely available to all. The purpose of OSS is to encourage
collaborative work, often through broad participation in software projects. The open source
software movement, now more than 20 years old, began with a commitment to make software
and the underlying source code freely available to all.
An early advocate for the open source movement was the Free Software Foundation, a group
created to fund the GNU Project, which was started to develop a UNIX-like operating system.
The GNU Project began by writing replacement tools for UNIX, eventually creating a complete
set of tools, libraries, and other material to advance its ideas of software freedom.
Linux kernel + GNU utilities = a complete, open source, UNIX-like operating system. Today most
of the utilities and applications included with Red Hat Linux are also covered by the GPL.
The kernel is the most fundamental part of the operating system, providing services to user-
level programs, such as the ability to communicate with hard disks and other pieces of hardware.
Most software distributed with Red Hat Enterprise Linux is distributed under the GPL. All the
software contained in Red Hat Enterprise Linux is free for the users.
DESCRIPTION
Properties of Linux:
A lot of the advantages of Linux are a consequence of Linux' origins, deeply rooted in UNIX,
except for the first advantage, of course:
• Linux is free:
As in free beer, they say. If you want to spend absolutely nothing, you don't even have to
pay the price of a CD. Linux can be downloaded in its entirety from the Internet
completely for free. No registration fees, no costs per user, free updates, and freely
available source code in case you want to change the behavior of your system.
The license commonly used is the GNU General Public License (GPL). The license says that
anybody who wants to do so has the right to change Linux and eventually to
redistribute a changed version, on the one condition that the code is still available after
redistribution. In practice, you are free to grab a kernel image, for instance to add support
for transportation machines or time travel, and sell your new code, as long as your
customers can still have a copy of that code.
A vendor who wants to sell a new type of computer and who doesn't know what kind of
OS his new machine will run (say the CPU in your car or washing machine), can take a
Linux kernel and make it work on his hardware, because documentation related to this
activity is freely available.
• Linux was designed to run continuously:
As with UNIX, a Linux system expects to run without rebooting all the time. That is
why a lot of tasks are executed at night or scheduled automatically for other calm
moments, resulting in higher availability during busier periods and a more balanced use of the
hardware. This property allows Linux to be used in environments where people
don't have the time or the possibility to control their systems night and day.
• Linux is secure and versatile:
The security model used in Linux is based on the UNIX idea of security, which is known
to be robust and of proven quality. But Linux is not only fit for use as a fort against
enemy attacks from the Internet: it will adapt equally to other situations, utilizing the
same high standards for security. Your development machine or control station will be as
secure as your firewall.
• Linux is scalable.
• The Linux OS and most Linux applications have very short debug-times:
Because Linux has been developed and tested by thousands of people, both errors and
people to fix them are usually found rather quickly. It sometimes happens that there are
only a couple of hours between discovery and fixing of a bug.
LINUX vs WINDOWS:
Windows has two main lines. The older flavors are referred to as "Win9x" and consist of
Windows 95, 98, 98SE and ME. The newer flavors are referred to as "NT class" and consist of
Windows NT3, NT4, 2000, XP and Vista. Going back in time, Windows 3.x preceded Windows
95 by a few years. And before that, there were earlier versions of Windows, but they were not
popular. Microsoft no longer supports Windows NT3, NT4, all the 9x versions and of course
anything older.
The flavors of Linux are referred to as distributions (often shortened to "distros"). All the Linux
distributions released around the same time frame will use the same kernel (the guts of the
Operating System). They differ in the add-on software provided, GUI, install process, price,
documentation and technical support. Both Linux and Windows come in desktop and server
editions.
Both operating systems provide a text-mode command interpreter. Windows users sometimes call it a DOS prompt.
Linux users refer to it as a shell. Each version of Windows has a single command interpreter, but
the different flavors of Windows have different interpreters. In general, the command interpreters
in the Windows 9x series are very similar to each other and the NT class versions of Windows
(NT, 2000, XP) also have similar command interpreters. There are however differences between
a Windows 9x command interpreter and one in an NT class flavor of Windows. Linux, like all
versions of Unix, supports multiple command interpreters, but it usually uses one called BASH
(Bourne Again Shell). Others are the Korn shell, the Bourne shell, ash and the C shell (pun, no
doubt, intended).
Cost:
For desktop or home use, Linux is very cheap or free; Windows is expensive. For server use,
Linux is very cheap compared to Windows. Microsoft allows a single copy of Windows to be
used on only one computer. Starting with Windows XP, they use software to enforce this rule
(Windows Product Activation at first, later Genuine Windows). In contrast, once you have
purchased Linux, you can run it on any number of computers for no additional charge.
Bugs:
All software has and will have bugs (programming mistakes). Linux has a reputation for fewer
bugs than Windows, but it certainly has its fair share. This is a difficult thing to judge and
finding an impartial source on this subject is also difficult. One comparison also addressed whether
known bugs are fixed faster with Linux or Windows; in brief, its author felt that bugs used to be fixed
faster in Linux, but that things have slowed down.
Multiple Users:
Linux is a multi-user system, Windows is not. That is, Windows is designed to be used by one
person at a time. Databases running under Windows allow concurrent access by multiple users,
but the Operating System itself is designed to deal with a single human being at a time. Linux,
like all Unix variants, is designed to handle multiple concurrent users. Windows, of course, can
run many programs concurrently, as can Linux. There is a multi-user version of Windows called
Terminal Server but this is not the Windows pre-installed on personal computers.
Windows must boot from the first hard disk. Here too Linux is better: it can boot from any hard
disk in the computer.
An operating system (OS) is a resource manager. It provides access to system resources (e.g. the
CPU, memory, disks, modems, printers, network cards, etc.) in a safe, efficient and abstract way.
For example, an OS ensures safe access to a printer by allowing only one application program to
send data directly to the printer at any one time. An OS encourages efficient use of the CPU by
suspending programs that are waiting for I/O operations to complete to make way for programs
that can use the CPU more productively. An OS also provides convenient abstractions (such as
files rather than disk locations) which isolate application programmers and users from the details
of the underlying hardware.
Application programs (e.g. word processors, spreadsheets) and system utility programs
(simple but useful application programs that come with the operating system, e.g. programs
which find text inside a group of files) make use of system calls. Applications and system
utilities are launched using a shell (a textual command line interface) or a graphical user
interface that provides direct user interaction.
Shells and GUIs :
Linux supports two forms of command input: through textual command line shells similar to
those found on most UNIX systems (e.g. sh - the Bourne shell, bash - the Bourne again shell and
csh - the C shell) and through graphical interfaces (GUIs) such as the KDE and GNOME
window managers. If you are connecting remotely to a server your access will typically be
through a command line shell.
System Utilities:
Virtually every system utility that you would expect to find on standard implementations of
UNIX (including every system utility described in the POSIX.2 specification) has been ported to
Linux. This includes commands such as ls, cp, grep, awk, sed, bc, wc, more, and so on. These
system utilities are designed to be powerful tools that do a single task extremely well (e.g. grep
finds text inside files while wc counts the number of words, lines and bytes inside a file). Users
can often solve problems by interconnecting these tools instead of writing a large monolithic
application program.
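As a small illustration of this approach (the file names are hypothetical), the following pipeline counts how many configuration files under /etc mention the word "network" by chaining grep and wc instead of writing a dedicated program:

$ grep -l network /etc/*.conf | wc -l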
Application programs:
Linux distributions typically come with several useful application programs as standard.
Examples include the emacs editor, xv (an image viewer), gcc (a C compiler), g++ (a C++
compiler), xfig (a drawing package), latex (a powerful typesetting language) and soffice
(StarOffice, which is an MS-Office style clone that can read and write Word, Excel and
PowerPoint files).
Redhat Linux also comes with rpm, the Redhat Package Manager which makes it easy to install
and uninstall application programs.
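Typical rpm usage looks like the following; the package name and version are only illustrative:

# rpm -ivh jpilot-1.8-1.i386.rpm     (install a package file)
# rpm -qa | grep jpilot              (query the installed package database)
# rpm -e jpilot                      (uninstall the package)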
Shells:
A shell is a program which reads and executes commands for the user. Shells also usually
provide features such as job control, input and output redirection, and a command language for
writing shell scripts. A shell script is simply an ordinary text file containing a series of
commands in a shell command language (just like a "batch file" under MS-DOS).
There are many different shells available on UNIX systems (e.g. sh, bash, csh, ksh, tcsh etc.),
and they each support a different command language. Here we will discuss the command
language for the Bourne shell sh since it is available on almost all UNIX systems (and is also
supported under bash and ksh).
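A minimal Bourne shell script, saved for instance in a file called hello.sh, might look like this:

#!/bin/sh
# greet every name given on the command line
for name in "$@"
do
    echo "Hello, $name"
done

It could then be run with sh hello.sh Alice Bob.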
FTP SERVER:
The File Transfer Protocol (FTP) is used as one of the most common means of copying files
between servers over the Internet. Most web based download sites use the built in FTP
capabilities of web browsers and therefore most server oriented operating systems usually
include an FTP server application as part of the software suite. Linux is no exception.
This section will show you how to convert your Linux box into an FTP server using the default
Very Secure FTP Daemon (VSFTPD) package included in Fedora.
FTP relies on a pair of TCP ports to get the job done. It operates in two connection channels as
I'll explain:
FTP Control Channel, TCP Port 21: All commands you send and the ftp server's responses to
those commands will go over the control connection, but any data sent back (such as "ls"
directory lists or actual file data in either direction) will go over the data connection.
FTP Data Channel, TCP Port 20: This port is used for all subsequent data transfers between
the client and server.
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active FTP,
the FTP server initiates a data transfer connection back to the client. For passive FTP, the
connection is initiated from the FTP client.
Active FTP
1. Your client connects to the FTP server by establishing an FTP control connection to port
21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2. Whenever the client requests data over the control connection, the server initiates data
transfer connections back to the client. The source port of these data transfer connections
is always port 20 on the server, and the destination port is a high port (greater than 1024)
on the client.
3. Thus the ls listing that you asked for comes back over the port 20 to high port connection,
not the port 21 control connection.
FTP active mode therefore transfers data in a counter-intuitive way to the TCP standard, as it
selects port 20 as its source port (not a random high port greater than 1024) and connects
back to the client on a random high port that has been pre-negotiated on the port 21 control
connection.
Active FTP may fail in cases where the client is protected from the Internet via many-to-one
NAT (masquerading). This is because the firewall will not know which of the many machines
behind it should receive the return connection.
Passive FTP
1. Your client connects to the FTP server by establishing an FTP control connection to port
21 of the server. Your commands such as ls and get are sent over that connection.
2. Whenever the client requests data over the control connection, the client initiates the data
transfer connections to the server. The source port of these data transfer connections is
always a high port on the client with a destination port of a high port on the server.
Passive FTP should be viewed as the server never making an active attempt to connect to the
client for FTP data transfers. Because the client always initiates the required connections, passive
FTP works better for clients protected by a firewall.
As Windows defaults to active FTP, and Linux defaults to passive, you'll probably have to
accommodate both forms when deciding upon a security policy for your FTP server.
FTP has a number of security drawbacks, but you can overcome them in some cases. You can
restrict an individual Linux user's access to non-anonymous FTP, and you can change the
configuration to not display the FTP server's software version information, but unfortunately,
though very convenient, FTP logins and data transfers are not encrypted.
For added security, you may restrict FTP access to certain users by adding them to the list of
users in the /etc/vsftpd.ftpusers file. The VSFTPD package creates this file with a number of
entries for privileged users that normally shouldn't have FTP access. As FTP doesn't encrypt
passwords, thereby increasing the risk of data or passwords being compromised, it is a good idea
to let these entries remain and add new entries for additional security.
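For example, to block an additional (hypothetical) account named guest1 from logging in over FTP, append it to the file as root:

# echo "guest1" >> /etc/vsftpd.ftpusers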
Anonymous Upload
If you want remote users to write data to your FTP server, then you should create a write-only
directory within /var/ftp/pub. This will allow your users to upload but not access other files
uploaded by other users. The commands you need are:
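A sketch of the kind of commands meant here, assuming the standard vsftpd anonymous root of /var/ftp:

# mkdir /var/ftp/pub/upload
# chmod 733 /var/ftp/pub/upload

The 733 mode lets anonymous users create files in the directory without being able to list or read what others have uploaded.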
VNC SERVER:
VNC or Virtual Network Computing is in fact a remote display system which allows viewing a
desktop environment not only on the local machine on which it is running, but from anywhere on
the Internet and from a wide variety of machines and architectures, including MS Windows and
several UNIX distributions. You could, for example, run MS Word on a Windows NT machine
and display the output on your Linux desktop. VNC provides servers as well as clients, so the
opposite also works and it may thus be used to display Linux programs on Windows clients.
VNC is probably the easiest way to have X connections on a PC. The following features make
VNC different from a normal X server or commercial implementations:
• No state is stored at the viewer side: you can leave your desk and resume from another
machine, continuing where you left off. When you are running a PC X server, and the PC
crashes or is restarted, all remote applications that you were running will die. With VNC,
they keep on running.
• It is small and simple, no installation needed, can be run from a floppy if needed.
• Platform independent with the Java client, runs on virtually everything that supports X.
• Sharable: one desktop may be displayed on multiple viewers.
• Free.
NFS SERVER:
The Network File System (NFS) is the native Linux network file system. NFS allows Linux systems to
share directories and files over the network. Some of the most notable benefits that NFS can
provide are:
• Local workstations use less disk space because commonly used data can be stored
on a single machine and still remain accessible to others over the network.
• There is no need for users to have separate home directories on every network
machine. Home directories could be set up on the NFS server and made available
throughout the network.
• Storage devices such as floppy disks, CDROM drives, and Zip® drives can be
used by other machines on the network. This may reduce the number of removable
media drives throughout the network.
How NFS Works:
NFS consists of at least two main parts: a server and one or more clients. The client remotely
accesses the data that is stored on the server machine. In order for this to function properly, a few
processes have to be configured and running:
• nfsd
• mountd
• rpcbind
Configuring NFS:
The /etc/exports file specifies which file systems NFS should export (sometimes referred
to as “share”). Each line in /etc/exports specifies a file system to be exported and which
machines have access to that file system.
1. Make sure that user and group IDs match on the client and server systems.
2. On the server, modify /etc/hosts.allow and /etc/hosts.deny so that the server will
accept portmap requests from the client network.
3. On the server, modify /etc/exports so that the server will make the specified
directories available to other machines on the network.
4. On the client, modify /etc/fstab so that the client will automatically mount the
exported directories when the client system starts (see the example below).
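A minimal sketch of the two files mentioned above; the host name, network address, and paths are illustrative assumptions:

On the server, /etc/exports:
/home/shared    192.168.1.0/24(rw,sync)

On the client, /etc/fstab:
nfsserver:/home/shared   /mnt/shared   nfs   defaults   0 0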
NFS utilities:
• exportfs -v: used to view the exports and their options on the local machine.
• rpcinfo -p hostname: used to probe the portmapper on hostname and print a list of all
registered RPC services.
YUM SERVER:
Yum is a tool for automating package maintenance for a network of workstations running any
operating system that uses the Red Hat Package Manager (RPM) system for distributing
packaged tools and applications. It is derived from yup, an automated package updater originally
developed for Yellow Dog Linux; hence its name: yum is "Yellowdog Updater, Modified".
Yup was originally written and maintained by Dan Burcaw, Bryan Stillwell, Stephen Edie, and
Troy Bengegerdes of Yellow Dog Linux (an RPM-based Linux distribution that runs on Apple
Macintoshes of various generations). Yum was written and is currently being maintained by Seth
Vidal and Michael Stenner, both of Duke University, although as an open source GPL project
many others have contributed code, ideas, and bug fixes (not to mention documentation :-). The
yum project acknowledges the (mostly) complete list of contributors, as does the AUTHORS
file in the distribution tarball.
Yum is a GNU General Public License (GPL) tool; it is freely available and can be used, modified, or
redistributed without any fee or royalty provided that the terms of its associated license are
followed.
Installing Yum:
Yum is installed (originally) more or less like any other RPM you add to an existing system. After
downloading one of the RPMs above you simply run:
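# rpm -ivh yum-X.noarch.rpm     (the exact package file name is illustrative)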
where X is, of course, the revision number you downloaded. Do this initially on the server(s) you
wish to use as yum repositories, and perhaps on a single test client only. If this install fails, be sure
that yum's dependencies are satisfied on your system. As of this writing its dependency list includes
packages such as python and rpm-python
on Red Hat 9. Version numbers will of course change, but this gives you an idea of what it will
need in order to install. If you simply cannot get an install to work, try a different RPM or seek
help from the yum mailing list.
Superuser Requirements
Linux and other Unices prevent ordinary users from being able to install, remove, or modify
software, libraries, or important configuration information. On a networked, professionally run
installation of workstations the need for this sort of control to ensure security and functionality is
obvious. On a privately owned and controlled workstation the need is less obvious but just as
profound. Even there the owner of the workstation needs to take extreme care whenever working
with the fundamental system configuration and package installation or they will likely break the
system or leave it vulnerable to crackers.
To ensure that an adequate degree of care is exercised when doing things that could render a
system unusable or vulnerable, Linux and other Unices require the owner of such a workstation to
assume root privileges, usually by means of the su command. In order to execute the privileged
commands to add or remove software from your system, you must therefore become root. To do
this, enter (in a tty window such as an xterm):
$ su
and enter the root password when prompted to do so. The prompt should then become the hash sign
'#', at which point you can run yum commands such as:
# yum list
The following commands alter the filesystem by installing, updating, or deleting files. The first time
yum is run, they will also download and cache all the header files from all the repositories listed
in /etc/yum.conf, which also requires root privileges. Thereafter the unprivileged commands
will generally work for all users, but the commands in the subsections below will only work
as root.
When you run yum install package1, yum checks to see if package1 is already installed and is the latest version. If not, it downloads
package1 and all its dependency packages (saving them in its cache directory) and installs
them. Additional packages can be listed on the same command line. The packages can be
specified with standard filesystem globs. Some examples:
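# yum install jpilot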
This will look for the jpilot package (which provides a lovely interface for Palm Pilots) and
install it if it exists in any of the configured yum repositories.
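# yum install festival\*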
This will install all the packages (e.g. festival and festival-devel) required to run the festival
speech generation program or write software that does speech synthesis. Note the "\" required to
escape the glob "*" character from the shell.
When you run yum update package1, yum checks to see if package1 is installed and is the latest version. If not, it downloads package1
and all its dependency packages (saving them in its cache directory) and re-installs them
(effectively upgrading them). Additional packages can be listed on the same command line. The
packages can be specified with standard filesystem globs. Some examples:
# yum update
This is one of the most important and useful yum commands, in some ways the command for which
yum was invented. This command updates all the installed packages on your system to the latest
version on your repository set. This enables you to keep your system current and up to date with
a simple script, and to update any package(s) at any time should it be necessary.
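# yum update jpilot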
This will look for the jpilot package (which provides a lovely interface for Palm Pilots) and
update it if a newer version exists in any of the configured yum repositories.
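# yum update festival\*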
This will update all the packages in the festival speech generation suite that happen to be
installed on the system. Note the "\" required to escape the glob "*" character from the shell.
The yum remove command removes package1 and all packages in the dependency tree that depend on
package1, possibly irreversibly as far as configuration data is concerned. Be certain that the list
of removed packages that it generates meets with your approval before proceeding. Additional
packages can be listed on the same command line, and the packages can be specified with
standard filesystem globs, although this makes the problem of certifying the list it generates for
removal even more difficult.
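For example, using the same illustrative package as above:

# yum remove jpilot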
When Linux is properly installed, there is no longer a need to use the mouse; the chances of you
needing a mouse are close to zero.
Linux file formats can be accessed in a variety of ways because they are free. Windows, on
the other hand, makes you lock your own data in secret formats that can only be accessed
with tools leased to you at the vendor's price. "What we will get with Microsoft is a
three-year lease on a health record we need to keep for 100 years."
Linux is open source, so you are unlikely to violate any license agreement. All the software
is happily yours. With MS Windows you likely already violate all kinds of licenses and you
could be pronounced a computer pirate if only a smart lawyer were after you. The worldwide
PC software piracy rate for 2004 was 35%, which means that roughly 3 out of 10 users are likely
to get into real trouble.
5. Transparent vs Proprietary
Linux beats Windows hands down on network features, as a development platform, in data
processing capabilities, and as a scientific workstation. The MS Windows desktop has a more
polished appearance, simpler general business applications, and many more games for kids
(less intellectual games compared to Linux's).
7. Customizable
Linux is customizable in a way that Windows is not. For example, NASlite is a version of
Linux that runs off a single floppy disk and converts an old computer into a file server. This
ultra small edition of Linux is capable of networking, file sharing and being a web server.
8. Flexibility
Windows must boot from a primary partition. Linux can boot from either a primary
partition or a logical partition inside an extended partition. Windows must boot from the
first hard disk. Linux can boot from any hard disk in the computer.
9. Mobility
Windows allows programs to store user information (files and settings) anywhere. This
makes it very hard to back up user data files and settings and to switch to a new
computer. In contrast, Linux stores all user data in the home directory, making it much
easier to migrate from an old computer to a new one. If home directories are segregated in
their own partition, you can even upgrade from one version of Linux to another without
having to migrate user data and settings.
Why isn’t Linux affected by viruses? Simply because its code has been open source for
more than a decade, tested by people all around the world, and not by a single development
team as in the case of Windows. This leads to lightning-fast finding and fixing of
exploitable holes in Linux. That, my friends, is why Linux has much stronger security and
far fewer exploits compared to Windows.
User interface:
Graphical user interface: A study released in 2003 by Relevantive AG indicates that “The usability
of Linux as a desktop system was judged to be nearly equal to that of Windows XP”.
INTER-PROCESS COMMUNICATION:
Processes communicate with each other and with the kernel to coordinate their activities. Linux
supports a number of Inter-Process Communication (IPC) mechanisms. Signals and pipes are
two of them, but Linux also supports the System V IPC mechanisms, named after the Unix
release in which they first appeared.
Signals
Signals are one of the oldest inter-process communication methods used by Unix systems.
They are used to signal asynchronous events to one or more processes. A signal could be
generated by a keyboard interrupt or an error condition such as the process attempting to access a
non-existent location in its virtual memory. Signals are also used by the shells to signal job
control commands to their child processes.
There is a set of defined signals that the kernel can generate or that can be generated by other
processes in the system, provided that they have the correct privileges. You can list a system's set
of signals using the kill command (kill -l); on my Intel Linux box this gives:
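 1) SIGHUP    2) SIGINT    3) SIGQUIT   4) SIGILL
 5) SIGTRAP   6) SIGABRT   7) SIGBUS    8) SIGFPE
 9) SIGKILL  10) SIGUSR1  11) SIGSEGV  12) SIGUSR2
13) SIGPIPE  14) SIGALRM  15) SIGTERM  17) SIGCHLD
18) SIGCONT  19) SIGSTOP  20) SIGTSTP  21) SIGTTIN
...
(output abbreviated; the exact list and numbering vary with the kernel and shell version)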
The numbers are different for an Alpha AXP Linux box. Processes can choose to ignore most of
the signals that are generated, with two notable exceptions: neither the SIGSTOP signal which
causes a process to halt its execution nor the SIGKILL signal which causes a process to exit can
be ignored. Otherwise though, a process can choose just how it wants to handle the various
signals. Processes can block the signals and, if they do not block them, they can either choose to
handle them themselves or allow the kernel to handle them. If the kernel handles the signals, it
will do the default actions required for this signal. For example, the default action when a
process receives the SIGFPE (floating point exception) signal is to core dump and then exit.
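From the shell, the same choices can be illustrated with the trap built-in, which installs a handler for a signal in a script (a minimal sketch):

#!/bin/sh
# handle SIGTERM ourselves, ignore SIGINT, leave everything else at the default
trap 'echo "caught SIGTERM, cleaning up"; exit 1' TERM
trap '' INT
while true; do sleep 1; done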
Signals have no inherent relative priorities. If two signals are generated for a process at the same
time then they may be presented to the process or handled in any order. Also there is no
mechanism for handling multiple signals of the same kind. There is no way that a process can tell
if it received 1 or 42 SIGCONT signals.
Linux implements signals using information stored in the task_struct for the process. The number
of supported signals is limited to the word size of the processor. Processors with a word size of 32
bits can have 32 signals whereas 64 bit processors like the Alpha AXP may have up to 64
signals. The currently pending signals are kept in the signal field with a mask of blocked signals
held in blocked. With the exception of SIGSTOP and SIGKILL, all signals can be blocked. If a
blocked signal is generated, it remains pending until it is unblocked. Linux also holds
information about how each process handles every possible signal and this is held in an array of
sigaction data structures pointed at by the task_struct for each process. Amongst other things it
contains either the address of a routine that will handle the signal or a flag which tells Linux that
the process either wishes to ignore this signal or let the kernel handle the signal for it. The
process modifies the default signal handling by making system calls and these calls alter the
sigaction for the appropriate signal as well as the blocked mask.
Not every process in the system can send signals to every other process; the kernel can, and super-
users can. Normal processes can only send signals to processes with the same uid and gid or to
processes in the same process group. Signals are generated by setting the appropriate bit in the
task_struct's signal field. If the process has not blocked the signal and is waiting but interruptible
(in state Interruptible) then it is woken up by changing its state to Running and making sure that
it is in the run queue. That way the scheduler will consider it a candidate for running when the
system next schedules. If the default handling is needed, then Linux can optimize the handling of
the signal. For example, if the signal is SIGWINCH (the X window changed focus) and the default
handler is being used, then there is nothing to be done.
Signals are not presented to the process immediately after they are generated; they must wait until the
process is running again. Every time a process exits from a system call its signal and blocked
fields are checked and, if there are any unblocked signals, they can now be delivered. This might
seem a very unreliable method but every process in the system is making system calls, for
example to write a character to the terminal, all of the time. Processes can elect to wait for
signals if they wish; they are suspended in state Interruptible until a signal is presented. The
Linux signal processing code looks at the sigaction structure for each of the current unblocked
signals.
If a signal's handler is set to the default action then the kernel will handle it. The SIGSTOP
signal's default handler will change the current process's state to Stopped and then run the
scheduler to select a new process to run. The default action for the SIGFPE signal will core
dump the process and then cause it to exit. Alternatively, the process may have specified its own
signal handler. This is a routine which will be called whenever the signal is generated and the
sigaction structure holds the address of this routine. The kernel must call the process's signal
handling routine and how this happens is processor specific but all CPUs must cope with the fact
that the current process is running in kernel mode and is just about to return to the process that
called the kernel or system routine in user mode. The problem is solved by manipulating the
stack and registers of the process. The process's program counter is set to the address of its signal
handling routine and the parameters to the routine are added to the call frame or passed in
registers. When the process resumes operation it appears as if the signal handling routine were
called normally.
Linux is POSIX compatible and so the process can specify which signals are blocked when a
particular signal handling routine is called. This means changing the blocked mask during the
call to the process's signal handler. The blocked mask must be returned to its original value
when the signal handling routine has finished. Therefore Linux adds a call to a tidy up routine
which will restore the original blocked mask onto the call stack of the signalled process. Linux
also optimizes the case where several signal handling routines need to be called by stacking them
so that each time one handling routine exits, the next one is called until the tidy up routine is
called.
Pipes
The common Linux shells all allow redirection. For example
$ ls | pr | lpr
pipes the output from the ls command listing the directory's files into the standard input of the pr
command which paginates them. Finally the standard output from the pr command is piped into
the standard input of the lpr command which prints the results on the default printer. Pipes then
are unidirectional byte streams which connect the standard output from one process into the
standard input of another process. Neither process is aware of this redirection and behaves just as
it would normally. It is the shell which sets up these temporary pipes between the processes.
Linux programs are normally dependent on exact kernel and library versions, and
therefore on a specific version of a specific distribution. In order to manage the complicated
dependencies that arise, most distributions have a package manager, often based upon RPM,
APT, or Gentoo's source-based Ebuild metapackages. Sometimes an installation can have a second
package management system which is incompatible with the primary system. Numerous
distribution-specific front-ends exist on top of the core formats allowing for GUI or command-
line package installation e.g. aptitude, Synaptic, Portage, YaST and YUM. Though rare, some
distributions create their own formats e.g. Pardus PiSi or Pacman.
Most package managers have a form of package signing usually based on PGP e.g. OpenPGP for
Debian packages. It is also possible to create a GUI installation package not depending on the
distributions by using Autopackage. Software can also be compiled from source code, which
does not require any kind of package-management system. However, compiling software from
its source code can be time-consuming and difficult. Source code is also typically tied to specific
library versions, and in many cases, source code can not be compiled without updating system
libraries, which can disable existing installed software that is dependent on exact builds of those
libraries. In some cases, conflicts arise where the latest version of one program depends on
having an older version of a specific library, and another program will depend on having a newer
version of the same library.
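On RPM-based systems, for instance, a downloaded package's signature can be verified before installation (the file name is hypothetical):

# rpm --checksig somepackage-1.0-1.noarch.rpm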
SECURITY:
Linux [has] ... the notion of an administrative (root) user that maintains and operates the
system, and desktop users who only run the software on the system, is completely ingrained in
most Linux distributions. Now it’s true that many Linux users ignore these features and run all
their software from a root-level account anyway, but that’s a choice that they’ve made. The
system defaults to protecting the operating system components from its user’s actions
(intentional or otherwise). That feature alone must account in large degree for the dearth of
viruses and other malicious vermin on Linux and UNIX platforms. Windows, on the other hand,
started life as a single user system, with that single user being all-powerful. Although that’s no
longer the case, the general attitude can still be found in many Windows-based software
products – many of which just can’t be installed and/or run properly without desktop
administrator privileges. This is all changing for the better, but it took Microsoft far too long to
adopt this default-secure configuration practice.
SWAP FILES:
A swap file (a.k.a. page file) is used by the Operating System when the demands on RAM
exceed the available capacity. Windows uses a hidden file for its swap file. By default, this file
resides in the same partition as the OS, although you can put it in another partition, after
Windows is installed. In Windows XP, the swap file resides initially on the C disk as a file called
pagefile.sys. Linux likes to use a dedicated partition for its swap space; however, advanced users
can opt to implement the swap file as a file in the same partition as the OS. I'm not sure if this
issue is clearly presented and explained when installing Linux. Probably not. Xandros v4, for
example, may use a separate swap partition or not, depending on the partition environment it
finds at install time. Xandros 4 does not explain any of this.
Updated August 2006: With Windows XP the default size of the swap file is 1.5 times the
amount of RAM in the machine at the time Windows was installed. I don't know how Linux
chooses a default swap file size. In Windows XP you can change the swap file size and location
with Control Panel -> System Properties -> Advanced tab -> Performance Settings -> Advanced
tab again -> Change button, which opens the Virtual Memory window. Be aware that this
window violates user interface standards. It is the only window I know of where clicking the OK
button after making a change, does not activate the change. To change the size of the page/swap
file, you must click the Set button. I don't know how to change the size of a Linux swap file.
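For what it is worth, a swap file can be created and enabled on a running Linux system from the command line; a sketch with an illustrative path and size:

# dd if=/dev/zero of=/swapfile bs=1M count=512
# mkswap /swapfile
# swapon /swapfile
# swapon -s

Changing the size then amounts to recreating the file with a different count and enabling it again.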
NETWORK CONFIGURATION:
The graphical helper tools edit a specific set of network configuration files, using a couple of
basic commands. The exact names of the configuration files and their location in the file system
are largely dependent on your Linux distribution and version. However, a couple of network
configuration files are common to all UNIX systems:
1. /etc/hosts
The /etc/hosts file always contains the local host IP address, 127.0.0.1, which is used for
interprocess communication. Never remove this line! It sometimes contains addresses of
additional hosts, which can be contacted without using an external naming service such
as DNS (the Domain Name System).
2. /etc/nsswitch.conf
The /etc/nsswitch.conf file defines the order in which to contact different name services. For
Internet use, it is important that dns shows up in the "hosts" line (see the example below).
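Typical contents of these two files look roughly like this (the host names and addresses are illustrative):

/etc/hosts:
127.0.0.1       localhost
192.168.1.10    asus.example.com   asus

/etc/nsswitch.conf (hosts line):
hosts:      files dns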
The ip command
The distribution-specific scripts and graphical tools are front-ends to ip (or ifconfig and route on
older systems) to display and configure the kernel's networking configuration.
The ip command is used for assigning IP addresses to interfaces, for setting up routes to the
Internet and to other networks, and for displaying TCP/IP configurations.
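Typical read-only usage looks like this:

$ ip addr show      (show addresses on all interfaces)
$ ip route show     (show the kernel routing table)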
While ip is the more modern way to configure a Linux system, ifconfig is still very popular. Use it
without options to display network interface information:
els@asus:~$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:70:31:2C:14
inet addr:60.138.67.31 Bcast:66.255.255.255 Mask:255.255.255.192
inet6 addr: fe80::250:70ff:fe31:2c14/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets: 31977764 errors:0 dropped:0 overruns:0 frame:0
TX packets: 51896866 errors:0 dropped:0 overruns:0 carrier:0
collisions:802207 txqueuelen:1000
RX bytes:2806974916 (2.6 GiB) TX bytes:2874632613 (2.6 GiB)
Interrupt:11 Base address:0xec00
lo        Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets: 765762 errors:0 dropped:0 overruns:0 frame:0
TX packets: 765762 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:624214573 (595.2 MiB) TX bytes:624214573 (595.2 MiB)
To check the route that packets follow to a network host, use the traceroute command.
Specific domain name information can be queried using the whois command, as is explained by
many whois servers.
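For example (the host and domain names are placeholders):

$ traceroute www.example.com
$ whois example.com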
FILE SYSTEM:
A filesystem is the methods and data structures that an operating system uses to keep track of
files on a disk or partition; that is, the way the files are organized on the disk. The word is also
used to refer to a partition or disk that is used to store the files, or to the type of the filesystem. Thus,
one might say "I have two filesystems", meaning one has two partitions on which one stores files,
or that one is "using the extended filesystem", meaning the type of the filesystem.
Windows uses FAT12, FAT16, FAT32 and/or NTFS with NTFS almost always being the best
choice. The FATx file systems are older and have assorted limitations on file and partition size
that make them problematical in the current environment. Linux also has a number of its own
native file systems. The default file system for Linux used to be ext2, now it is typically ext3.
Other supported file systems include XFS, JFS, JFFS and Reiser3. Reiser4 is in development.
The ext3 file system is being replaced by ext4. Among the improvements in ext4 is an increase
in the maximum filesystem size from 16 terabytes in ext3 to one exabyte. The largest file in ext3
is 2 terabytes, in ext4 it is 16 terabytes. OpenSolaris includes ZFS which seems like a drastic
change in file system design.
File systems can be either journaled or not. Non-journaled systems are subject to problems when
stopped abruptly. All the FAT variants and ext2 are non-journaled. After a crash, they should be
examined by their respective health check utilities (Scan Disk or Check Disk or fsck). In
contrast, when a journaled file system is stopped abruptly, recovery is automatic at the next
reboot. NTFS is journaled. Linux supports several journaled file systems: ext3, ext4, reiserfs and
jfs.
All the file systems use directories and subdirectories. Windows separates directories with a back
slash, Linux uses a normal forward slash. Windows file names are not case sensitive. Linux file
names are. For example "abc" and "aBC" are different files in Linux, whereas in Windows they
would refer to the same file. Case sensitivity has been a problem for this very web page, the
name of which is "Linux.vs.Windows.html". At times, people have tried to get to this page using
"linux.vs.windows.html" (all lower case), which resulted in a Page Not Found error. Eventually,
I created a new web page with the name in all lower case and this new page simply re-directs
you to the real page, the one you are reading now (with a capital L and W).
File Hierarchy:
Windows and Linux use different concepts for their file hierarchy. Windows uses a volume-
based file hierarchy, Linux uses a unified scheme. Windows uses letters of the alphabet to
represent different devices and different hard disk partitions. Under Windows, you need to know
what volume (C:, D:,...) a file resides on to select it; the file's physical location is part of its
name. In Linux all directories are attached to the root directory, which is identified by a forward-
slash, "/". For example, below are some second-level directories:
/bin/ ---- system binaries, user programs with normal user permissions
/sbin --- executables that need root permission
/data/ --- a user defined directory
/dev/ ---- system device tree
/etc/ ---- system configuration
/home/ --- users' subdirectories
/home/{username} akin to the Windows My Documents folder
/tmp/ ---- system temporary files
/usr/ ---- applications software
/usr/bin - executables for programs with user permission
/var/ ---- system variables
/lib --- libraries needed for installed programs to run
Every device and hard disk partition is represented in the Linux file system as a subdirectory of
the lone root directory. For example, the floppy disk drive in Linux might be /mnt/floppy. The
root directory lives in the root partition, but other directories (and the devices they represent) can
reside anywhere. Removable devices and hard disk partitions other than the root are attached
(i.e., "mounted") to subdirectories in the directory tree. This is done either at system initialization
or in response to a mount command.
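For instance, a CD-ROM might be attached and detached like this (the device name and mount point vary from system to system):

# mount /dev/cdrom /mnt/cdrom
# umount /mnt/cdrom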
There are no standards in Linux for which subdirectories are used for which devices. This
contrasts with Windows where the A disk is always the floppy drive and the C disk is almost
always the boot partition.
Hidden Files: Both support the concept of hidden files, which are files that, by default, are not
shown to the user when listing files in a directory. Linux implements this with a filename that
starts with a period. Windows tracks this as a file attribute in the file metadata (along with things
like the last update date). In both OSs the user can over-ride the default behavior and force the
system to list hidden files.
Case: Case sensitivity is the same with commands as with file names. When entering commands
in a DOS/command window under any version of Windows, "dir" is the same as "DIR". In Linux
"dir" is a different command than "DIR".
Filesystem Permissions
Both Windows NT-based systems and Linux-based systems support permissions on their default
filesystems. Windows' original FAT filesystem, however, does not support permissions. This
filesystem is available for use in both operating systems. Windows ME, Windows 98, Windows
95, and previous versions of Windows only operated on the FAT filesystem, and therefore do not
support permissions natively.
Most Linux distributions provide different user accounts for the various daemons. In common
practice, user applications are run on unprivileged accounts, to provide least user access. In some
distributions, administrative tasks can only be performed through explicit switching from the
user account to the root account, using tools such as su and sudo.
CONCLUSION
Red hat network is a complete system management platform. it is a framework of modules for
easy software update, system management, and monitoring. There are currently four modules in
Red hat network:
. The provisioning
Physical security is the first layer of security. Home users probably need
not worry about this too much. However, in a public environment, this aspect of security is a
much larger concern. Keep in mind that re-booting the system is only a ctrl-alt-del away if users
have access to the console.
REFERENCES
http://www.linuxhomenetworking.com/wiki/index.php
http://en.wikipedia.org/wiki/Linux
www.redhat.com
www.rpm.org
http://www.tux.org/lkml/
http://www.tux.org
ftp://ftp.redhat.com
ftp://ftp.kernel.org/pub/linux
http://www.kernel.org
http://x86.org/intel.doc/586manuals.htm
http://www.irisa.fr/prive/mentre/smp-faq/
http://www.teleport.com/~acpi/