Linux imp
By
Cloudblitz Technologies, Nagpur
CONTENT
RHCSA
Chapter 1: Introduction to Linux
Chapter 2: Basic Commands
Chapter 3: Editors in Linux
Chapter 4: Working with Text Files
Chapter 5: Hierarchy of Linux File Management System
Chapter 6: User and Group Management
Chapter 7: Configuring Permissions
Chapter 8: Managing Processes
Chapter 9: Archiving and Compression
Chapter 10: Managing Software
Chapter 11: Scheduling Tasks
Chapter 12: Configuring Logging
Chapter 13: Managing Partitions
Chapter 14: Managing LVM Logical Volumes
Chapter 15: Basic Kernel Management
Chapter 16: SSH
Chapter 17: Managing and Understanding Boot Procedure
Chapter 18: Resetting Root Password
Chapter 19: Working with LDAP
Chapter 20: Managing SELinux
Chapter 21: Configuring a Firewall
Chapter 22: Configuring Remote Mounts and FTP
Chapter 23: Configuring Time Services
Chapter 1: Introduction to Linux
Before learning Linux, we should know what an operating system is. An Operating System (OS) is a program that provides an environment to run other applications. The OS interacts with the hardware, acting as a mediator between the hardware and the user. On the basis of their utilization, operating systems can be classified as Desktop OS and Server OS.
History of Linux:
Evolution of Computer
In earlier days, computers were as big as houses, so you can imagine how difficult it was to operate them. Moreover, every computer had a different operating system, which made working on them even harder. Every piece of software was designed for a specific purpose and could not run on other computers. Computers were extremely costly, and ordinary people could neither afford nor understand them.
Evolution of UNIX
In 1969, a team of developers at Bell Labs started a project to create common software for all computers and named it 'UNIX'. It was simple and elegant, was written in the 'C' language instead of assembly language, and its code was reusable. Because the code was reusable, a part of it, now commonly called the 'kernel', was used to develop the operating system and other functions, and it could be used on different systems. Also, its source code was openly available.
Initially, UNIX was found only in large organizations such as government, universities, or large financial corporations with mainframes and minicomputers (a PC is a microcomputer).
UNIX Expansion
In the eighties, IBM, HP, and a dozen other companies started creating their own versions of UNIX. This resulted in a mess of UNIX dialects. Then, in 1983, Richard Stallman started the GNU project with the goal of creating a freely available UNIX-like operating system that could be used by everyone. But his project failed to gain popularity. Many other UNIX-like operating systems came into existence, but none of them was able to gain popularity either.
Evolution of Linux
In 1991, Linus Torvalds, a student at the University of Helsinki, Finland, wanted a freely available academic version of UNIX and started writing his own code. Later, this project became the Linux kernel. He wrote the program for his own PC because he wanted to run UNIX on a 386 Intel computer but couldn't afford it. He developed it on MINIX using the GNU C compiler. The GNU C compiler is still the main choice for compiling Linux code, but other compilers, such as the Intel C compiler, are also used.
He started it just for fun but ended up with a very large project. At first he wanted to name it 'Freax', but later it became 'Linux'.
He published the Linux kernel under his own license, which restricted commercial use. Linux takes most of its tools from GNU software, and those are under GNU copyright. In 1992, he released the kernel under the GNU General Public License.
Linux Today
Today, supercomputers, smartphones, desktops, web servers, tablets, laptops, and home appliances such as washing machines, DVD players, routers, modems, cars, and refrigerators use the Linux OS.
Linux vs UNIX,
- Source code: The source code of Linux is freely available to its users. The source code of UNIX is not available to the general public.
- Interface: Linux primarily uses a Graphical User Interface with an optional Command Line Interface. UNIX primarily uses a Command Line Interface.
- Portability: Linux is portable and can be executed on different hard drives. UNIX is not portable.
- Hardware: Linux is very flexible and can be installed on most home-based machines and servers. UNIX has rigid hardware requirements and hence cannot be installed on every other machine.
- Versions: Different versions of Linux are Ubuntu, Debian, OpenSUSE, RedHat, etc. Different versions of UNIX are AIX, HP-UX, BSD, IRIX, etc.
- Cost: Linux installation is economical and does not require much specific or high-end hardware. UNIX installation is comparatively costlier, as it requires more specific hardware circuitry.
- File systems: The file systems supported by Linux include xfs, ramfs, nfs, vfat, cramfs, ext3, ext4, ext2, ext, ufs, autofs, devpts, and ntfs. The file systems supported by UNIX include zfs, jfs, hfs, gpfs, xfs, and vxfs.
- Development: Linux is developed by an active Linux community worldwide. UNIX was developed by AT&T developers.
Architecture of Linux:
The Linux architecture consists of the innermost hardware layer, the kernel, the shell, and the outer application layer, as shown in fig 1.1. Each layer has different functionalities and uses.
Application Layer: - Users interact with the system through various applications such as office suites, games, etc. These applications run in the outer layer of the architecture.
Shell: - The shell provides an environment to run applications. It provides an interface for the user to interact with the hardware. We can say that it translates higher-level instructions into lower-level instructions.
Kernel: - The kernel is the program that actually communicates with the hardware. The shell and kernel together form the operating system.
Hardware: - All the hardware components such as the motherboard, CPU, hard disk, etc. come under this layer.
Chapter 2: Basic Commands
Commands are nothing but executable programs that perform the specific task written in them. These executable programs can be called by their name as per the provided syntax.
General Syntax:
# <command> -<options> -<argument> <multiple arguments>
Command - the command to run.
Options - adjust the behavior of the command.
Arguments - the object the command acts on, such as a file or folder name.
BASIC COMMANDS
#<command> --help Shows a short description of the manual page. --help is an option; thus, some commands may not support it.
Chapter 3: Editors in Linux
Editors are used to create new files and to edit or modify the content inside them. Simply put, editors are used to read and write data in an existing or newly created file. Editors can be classified on the basis of the interface they use, i.e. graphical editors and command line editors.
Graphical editor:
A graphical editor uses a graphical user interface. It is easy to use but consumes more memory than a command line editor. In Linux, the following are some well-known graphical editors,
1. gedit: - It is similar to Notepad in Windows. You can open gedit graphically from the application menu and also using the command $gedit.
2. kedit: - It is similar to gedit but contains some advanced features. Generally, we have to install kedit separately to use it.
3. Open Office: - Open Office is similar to MS Office. This Open Office is specially developed for Linux based operating systems.
Command line editor:
1. nano: - The nano editor is easy to use since it provides simple features to edit data in files.
Syntax-
# nano <filename>.<extension>
2. pico: - Syntax-
# pico <filename>
3. vi and vim: - vi (the visual editor) and vim (vi improved) are the most commonly used editors. vi and vim are largely the same, but vim is the advanced version of vi and contains some additional features. These editors work in four different modes,
Insert mode
Ex-mode
Command mode
Visual mode
Syntax-
# vim <filename>
Cursor Movement,
h for leftward navigation
j for downward navigation
k for upward navigation
l for rightward navigation
Command Mode: this is the default mode. Press Esc to exit from any other mode and enter command mode.
dd Delete current line
<n>dd Delete n no. of lines from current line
dw Delete current word
<n>dw Delete n no. of words from current word
yy Copy current line
<n>yy Copy n no. of lines from current line
yw Copy current word
<n>yw Copy n no. of words from current word
cc Cut current line and enter in insert mode
<n>cc Cut n no. of lines and enter in insert mode
cw Cut current word and enter in insert mode
<n>cw Cut n no. of words from current word and enter in
insert mode
p Paste
s Remove current character and enter in insert mode
S Remove current line and enter in insert mode
u Undo
Ctrl+r Redo
H Move cursor to the top of the screen
M Move cursor to the middle of the screen
L Move cursor to the bottom of the screen
G Move cursor to the end of the file
gg Move cursor to the beginning of the file
<n>gg Move cursor to the nth line
/<word> Search particular word/string/character
n Show next search result
N Show previous search result
File Creation:
#touch – The touch command is used to create files. Multiple files can be created using the touch command.
Syntax: #touch <filename>
Example:
Creating file in current directory
[root@server0 /]# touch file1.txt
Creating multiple files at different locations (the example below will create two files, one in /root/Desktop and the second in the /etc directory. You can add more file names along with their paths, separated by spaces.)
[root@server0 /]# touch /root/Desktop/file1.txt /etc/data.mp3
Creating multiple files at the same location but with different file names (the example below will create three files. You can add more file names inside the braces, separated by commas.)
[root@server0 /]# touch /root/{data.txt,file.txt,demo.mp3}
Creating multiple files with sequential numbers in their names (the example below will create one hundred files, named file1.txt to file100.txt.)
[root@server0 /]# touch /root/file{1..100}.txt
#mkdir – The mkdir command creates directories. Creating multiple directories is also possible using the mkdir command, as shown below.
Syntax: #mkdir <option> <path/directory_name>
Example,
Creating directory in / directory
[root@server0 /]# mkdir /dir1
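Creating multiple directories, and nested directories with the -p option (an illustrative sketch; the directory names are only examples),
[root@server0 /]# mkdir /dir2 /dir3
[root@server0 /]# mkdir -p /data/projects/2020
The -p option creates any missing parent directories, so /data and /data/projects are created automatically if they do not exist.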
cat – The cat command is used to print the contents of a file on the terminal. Reading a large file this way forces you to scroll back in the terminal, which requires a separate scrolling device (mouse). So, the cat command is most useful for reading smaller files with a few lines of data on the command line.
Example,
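For instance (a minimal sketch; /etc/hostname is used here only because it is a conveniently small file),
[root@server0 /]# cat /etc/hostname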
more – The more command provides line-by-line and page-by-page navigation in the downward direction, but upward scrolling is not possible.
Example,
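For instance (a sketch; any long file works, /etc/services is used here only as an example),
[root@server0 /]# more /etc/services
Press the space bar to move one page down, Enter to move one line down, and q to quit.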
less – The less command allows the navigation keys to scroll both up and down. Thus, it is more flexible than the other file-viewing commands.
Example,
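For instance (a sketch using the same example file),
[root@server0 /]# less /etc/services
Use the arrow keys or PgUp/PgDn to scroll in both directions, and q to quit.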
head – The head command shows a few lines from the top of a file. If the head command is used without any option, it shows the top ten lines by default. -n is used to give the count of lines to be shown.
Example,
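For instance (a sketch; /root/anaconda-ks.cfg is used because it also appears in the tail examples below),
[root@server0 /]# head /root/anaconda-ks.cfg
[root@server0 /]# head -n 4 /root/anaconda-ks.cfg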
tail – The tail command shows a few lines from the bottom of a file. If the tail command is used without any option, it shows the bottom ten lines by default. -n is used to give the count of lines to be shown.
Examples,
[root@server0 /]# tail /root/anaconda-ks.cfg
[root@server0 /]# tail -n 4 /root/anaconda-ks.cfg
SORT: - The sort command displays its result in ascending or descending order. Without options, data is shown in ascending order.
Options,
-r to show output in reverse order
-k <n> to sort output by the nth column
Example,
[root@server0 /]# sort file1.txt
[root@server0 /]# sort -r file1.txt
COPY: - The copy operation is used to copy files and directories in Linux from one location to another. It copies the contents of one file to another. If the destination file does not exist at the given location, a new file is created automatically.
Syntax,
#cp <option> <source> <destination>
Options,
-f : forcefully
-v : verbose/view
-r : recursive (to copy a directory)
-a : preserve permissions when copying (archive)
Example,
Copy one file content to another file,
[root@server0 /]# cp /root/anaconda-ks.cfg ~/Desktop/kickstart.txt
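Copying a directory requires the -r option (an illustrative sketch; the directory names are only examples),
[root@server0 /]# cp -rv /etc/skel /root/skel-backup
The -a option could be used instead of -r to also preserve ownership, permissions, and timestamps.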
MOVE AND RENAME: - Both the move and rename operations can be performed using the 'mv' command. It moves files and directories from one location to another. It is possible to move and rename at the same time.
Syntax,
#mv <option> <source> <destination>
Example,
Move files/directories from one location to another
[root@server0 /]# mv /root/anaconda-ks.cfg /mnt/ move single file
[root@server0 /]# mv /media ~/Desktop/ move directory
[root@server0 /]# mv /root/* /mnt/ move all files
Redirectors:
Redirectors are used to write terminal output into a file. The output generated by any command on the terminal can be redirected into an existing file. If the file does not exist, a new file is created automatically. Following are some redirectors,
Single Redirector (>): The single redirector replaces the existing data in the file with the newly redirected data. It overwrites the contents of the existing file.
Double Redirector (>>): The double redirector keeps the existing data, and the newly redirected data is added at the end of the file. It appends the redirected data to the existing file.
Syntax,
# <command_to_generate_output> [> or >>] <new/existing_file>
Example,
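A minimal sketch (date.txt is just an example file name),
[root@server0 /]# date > /root/date.txt     overwrite the file
[root@server0 /]# date >> /root/date.txt    append to the file
[root@server0 /]# cat /root/date.txt
After these two commands the file contains two timestamp lines, because the second command appended instead of overwriting.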
Pipe (|) – It passes the output of the first command as input to the second command, which then executes on it.
Example,
[root@server0 ~]# dmidecode | less
[root@server0 ~]# ls / | wc
Chapter 5: Hierarchy of Linux File Management System
In Linux, files are well managed using the file management system. The Linux file management system manages files in a hierarchical structure in which “/” (slash) is the main or root directory (the root node of the hierarchy). All other directories come beneath the “/” directory.
In RHEL 7.0, there are nineteen default directories created by the system itself. These directories are present just under the “/” directory. Below is a list of these directories and their uses,
Directory Description
/root Home directory of root user. In this directory, root user can store its
personal files.
/home It stores home directories of local users. Home directories are assigned to
each user separately and no other user can access home directories of other
users (Except root user).
/lib Library file information. This directory is actually a soft link to the /usr/lib directory.
/lib64 Same as the /lib directory, but it stores 64-bit architecture library file information. It is a link to /usr/lib64.
/bin It stores binary executable files. These binary executable files are nothing
but the commands. It is a link of /usr/bin directory. (this directory contains
commands that can be used by local users)
/sbin It stores system binary executable files. It is same as bin directory except
that only super user has permission to execute commands from sbin. It is
also a link of /usr/sbin directory. (This directory contains commands that
only root user can use.)
/usr User-related files, such as documentation files, manual pages, etc. This directory also contains the lib, lib64, bin, and sbin directories, whose links are available in the main directory, i.e. in the “/” directory.
/opt This directory is used for optional add-on software. Sometimes the path for an environment variable is also set from this directory.
/proc Processes information. This directory also stores RAM and CPU related
information.
/boot It stores boot loader program and all other boot related files.
/dev /dev directory stores device information and their block files.
Chapter 6: User and Group Management
A user is a person who utilizes a computer or network service. Linux is said to be secure because one user cannot access the files of another user without permission. There are three types of users,
1. Super user: Super users are those users who have all privileges on the Linux system. On all Linux systems, by default, there is the user root, also known as the superuser. This account is used for managing Linux. Root, for instance, can create other user accounts on the system. For some tasks, root privileges are required. Some examples are installing software, managing users, and creating partitions on disk devices.
2. System user: System accounts are used by the services on a Linux system. These accounts are generally created when services are installed on the system.
3. Standard user: Local user accounts or standard user accounts are for the people who need to work on a system and who need limited access to the resources on that system. These user accounts typically have a password that is used for authenticating the user to the system.
Adding a new local user means creating a user account. A user can be added by the root user or using the root user’s privileges. Whenever a new user is added, some files are affected; these files hold user-account-related information. Also, whenever a new user is created, its home directory and mail spool are generated by default. New users are created using some skeleton files located in the /etc/skel directory. These files are hidden and are copied into the home directory of the new user.
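For instance, creating a user with default settings (a sketch; the user name shubham matches the listing shown below),
[root@ip-172-31-19-5 ~]# useradd shubham
[root@ip-172-31-19-5 ~]# ls /home
shubham
The home directory /home/shubham is created automatically and populated with the skeleton files listed next.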
Skeleton files,
.bash_logout: if this file is missing, the user will be unable to log out from the system.
.bash_profile: if this file is missing, the home directory will not be assigned to the new user.
.bashrc: if this file is missing, the user will be unable to log in to the system.
[root@ip-172-31-19-5 ~]# ls -a /etc/skel skeleton files
. .. .bash_logout .bash_profile .bashrc
[root@ip-172-31-19-5 ~]# ls -a /home/shubham skeleton that copied in
. .. .bash_logout .bash_profile .bashrc home directory
(Note: Switching user using the ‘su’ command opens a new sub-shell with a different user login, but the previous user remains logged in. You have to log out of each shell manually.)
Password Management,
passwd: A password is the secret phrase that is used to log in to the system. The ‘passwd’ command is used to assign or change the password of a user. Whenever a password is assigned to a user, it is stored in the /etc/shadow file in encrypted format. Only the root user can change the password of any user; local users can change only their own password. A password should follow some rules, such as,
- Password must be at least 8 characters long
- It should not contain the user name
- It cannot be the same as an old password
- Any dictionary word is not allowed
- Password should not be too simplistic
Syntax, #passwd change current user’s password
#passwd <user_name> assign or change other user’s password by root user
Example,
Changing root user’s password,
[root@ip-172-31-19-5 ~]# passwd
Changing password for user root.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
Changing current user’s password (local user changing its own password)
[shubham@ip-172-31-19-5 ~]$ passwd
Changing password for user shubham.
Changing password for shubham.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Changing another user’s password from a local user’s account (this generates an error because only the root user has the privilege to change other users’ passwords; a sketch of the result is shown below),
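A sketch of what this looks like (the exact error text may differ between versions),
[shubham@ip-172-31-19-5 ~]$ passwd atul
passwd: Only root can specify a user name.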
/etc/shadow file: The /etc/shadow file stores the passwords and password policies of all users. It contains nine fields, and each field is separated by a colon. In order, these fields are: the user login name, the encrypted password, the date of the last password change (in days since 1 Jan 1970), the minimum password age, the maximum password age, the warning period, the inactivity period, the account expiration date, and a reserved field.
Group Administration/Management
Linux users can be a member of two different kinds of groups. First, there is the primary group.
Every user must be a member of a primary group and there is only one primary group. When creating
files, the primary group becomes group owner of these files.
Users can also access all files their primary group has access to. The user’s primary group
membership is defined in /etc/passwd; the group itself is stored in the /etc/group configuration file.
Besides the mandatory primary group, users can be a member of one or more secondary groups as
well. Secondary groups are important to get access to files. If the group a user is a member of has
access to specific files, the user will get access to these files also.
#groupadd – The ‘groupadd’ command is used to add a secondary or supplementary group to the system. Group information is stored in the /etc/group file.
Syntax,
# groupadd <groupname>
Example,
[root@ip-172-31-37-64 ~]# groupadd IBM
[root@ip-172-31-37-64 ~]# tail -1 /etc/group
IBM:x:1005:
/etc/group file: This file contains all groups’ information. The file has four fields, and each field is separated by a colon (:). Following are the fields of the group file,
①:②:③:④
1. Group name: As is suggested by the name of the field, this contains the name of the group.
2. Redirected group password: A feature that is hardly used anymore. A group password
can be used by users that want to join the group on a temporary basis, so that access to
files the group has access to is allowed.
3. Group id (GID): A unique numeric group identification number.
4. List of members: Here you find the names of users that are a member of this group as a
secondary group. Note that it does not show users that are a member of this group as
their primary group.
Adding a group with customized settings,
Syntax, # groupadd <option> <parameter> <groupname>
Options, -g :- Group id
-o :- Non unique
-f :- Forcefully
Example,
[root@ip-172-31-37-64 ~]# groupadd -g 2005 TCS
[root@ip-172-31-37-64 ~]# tail -1 /etc/group
TCS:x:2005:
User Administration/Management
/etc/passwd file: This file stores user profile information. It contains 7 fields, as follows,
①:②:③:④:⑤:⑥:⑦
1. User login name: This is a unique name for the user. User names are important to match
a user to his password, which is stored separately in /etc/shadow. On Linux, there can be
no spaces in the user name.
2. Password link from shadow file: since the /etc/passwd file is readable by all users, for security purposes the password is stored in the /etc/shadow file.
3. User id (UID): Each user has a unique user ID (UID). This is a numeric ID. It is the UID that
really determines what a user can do. When permissions are set for a user, the UID is
stored in the file metadata (and not the user name). UID 0 is reserved for root. The lower
UIDs (typically up to 999) are used for system accounts, and the higher UIDs (from 1000
on by default), are reserved for people that need to connect directly to the server. [Note:
/etc/login.defs contain default setting for user creation.]
4. Primary group’s id (GID): On Linux, each user is a member of at least one group. This group is referred to as the primary group.
5. Comment field: The Comment field, as you can guess, is used to add comments for user
accounts. This field is optional, but it can be used to describe what a user account is
created for.
6. Home Directory: This is the initial directory where the user is placed after logging in, also referred to as the home directory. If the user account is used by a person, this is where the person would store his personal files and programs.
7. Login shell: This is the program that is started after the user has successfully connected to a server. For most users this will be /bin/bash, the default Linux shell. For system user accounts, it will typically be a shell like /sbin/nologin. The /sbin/nologin command is a specific command that silently denies access to users.
Adding a user with customized settings,
Syntax, # useradd <options> <parameters> <username>
Options, -u :- User id
-g :- Primary group
-c :- Comment
-d :- Home directory
-s :- Login shell
-G :- Secondary group
-r :- System user
-e :- Account expiry date
-o :- Non unique
Example,
Create user with customized user id,
[root@ip-172-31-19-5 ~]# useradd -u 2211 amit
[root@ip-172-31-19-5 ~]# tail -1 /etc/passwd
amit:x:2211:2211::/home/amit:/bin/bash
Changing the comment field of an existing user (the usermod command modifies an existing account and accepts the same options as useradd),
[root@ip-172-31-37-64 ~]# usermod -c "windows" sumit
[root@ip-172-31-37-64 ~]# tail -1 /etc/passwd
sumit:x:6021:6021:windows:/home/sumit:/bin/bash
Delete user account along with its home directory and mail account,
[root@ip-172-31-19-5 ~]# userdel -r shubham
/etc/gshadow file: This file is used to store group passwords. It also stores the admin and member list of each group. It contains four fields,
①:②:③:④
1. Group name
2. Encrypted Password
3. Admin of group
4. Member list
#gpasswd – The ‘gpasswd’ command is used to set a password on a group. It can also be used to add members and assign an admin to the group.
Syntax,
# gpasswd <option> <parameter> <groupname>
Options, -a :- Add members in group
-M :- Set list of members in group
-A :- Assign user as group admin
Example,
Assign or change group password,
[root@ip-172-31-19-5 ~]# gpasswd TCS
Changing the password for group TCS
New Password:
Re-enter new password:
[root@ip-172-31-19-5 ~]# tail -1 /etc/gshadow
TCS:$6$b4a1sbtiZDy$arjapZjNWW2u.EE2D49ZI2k8VtT7WNZ3zRNkmg0ByFIrJrXbjMZe8fQ0U0R
QfTG/RrXOKAukFC5ganx0k00MO1::
Set the list of members in a group (the old list of members will be replaced with the new list),
[root@ip-172-31-19-5 ~]# gpasswd -M atul,shubham TCS
[root@ip-172-31-19-5 ~]# tail -2 /etc/group
TCS:x:2218:atul,shubham
amit:x:2219:
Chapter 7: Configuring Permissions
Linux file system security restricts users from accessing files and directories. A user requires permissions to access a file or directory. The “ls -l” or “ll” command can be used to check the security of any file or directory. This command displays the contents of a directory along with their security details. Some of the security attributes are shown below,
Example,
[root@ip-172-31-19-5 ~]# ll /root
total 8
-rw-------. 1 root root 6577 Jan 28 2019 original-ks.cfg
The above example displays the contents of the directory along with their security details. Each entry contains ten fields, as mentioned below,
1. File type.
2. Owner Permissions.
3. Group Permissions.
4. Other User Permission.
5. Link Count.
6. Owner of file/directory.
7. Group Owner of File or Directory.
8. File Size.
9. Last Modification Date and Time.
10. File/Directory Name.
File Types in Linux: There are seven types of files, indicated by the first character of the listing: - (regular file), d (directory), l (symbolic link), c (character device), b (block device), s (socket), and p (named pipe).
Link Count: Also called the reference count. It shows the number of links that have been created for a file or directory.
The default link count of a directory is 2, whereas the default link count of a file is 1. Whenever a new subdirectory is created, the link count of its immediate parent directory increases by 1.
There are two types of symbolic link,
a. Hard Link
b. Soft Link
Below is the difference between hard and soft links,
- The inode number of a hard link is the same as the inode number of the original file. The inode number of a soft link is different from the inode number of the original file.
- A hard link contains the actual data of the file. A soft link contains the path of the original file, not the actual data.
- Thus, the size of a hard link and the original file is the same. The size of a soft link depends on the path length.
- Creating or removing a hard link increases or decreases the link count by 1. Creating or removing a soft link does not affect the link count.
- Removing the original file only reduces the link count and does not affect any of its hard links. Removing the original file breaks the soft link, since the soft link then points to a non-existing file.
- Creating a hard link of a directory is not possible. Creating a soft link of a directory is possible.
Syntax to create a hard link,
# ln <original_file> <hardlink_name>
Syntax to create a soft link,
# ln -s <original_file> <softlink_name>
Example,
Create a hard link and a soft link,
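A minimal sketch, assuming a file /root/data.txt exists,
[root@ip-172-31-19-5 ~]# ln /root/data.txt /root/data-hard.txt
[root@ip-172-31-19-5 ~]# ln -s /root/data.txt /root/data-soft.txt
[root@ip-172-31-19-5 ~]# ls -li /root/data*
Running ls with -i shows that the hard link shares the inode number of the original file, while the soft link has its own inode and is listed with the arrow notation data-soft.txt -> /root/data.txt.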
Managing Permissions: Fields 2, 3, and 4 represent the permissions for the owner, the group owner, and other users. Each of these fields contains three basic permissions, which allow a user to read, write, and execute files. The effect of these permissions differs when applied to a file or a directory. If applied to a file, the read permission gives the user the right to open the file for reading, so the user can read its contents. The “chmod” command is used to change these basic permissions.
Example,
Give write permission to group,
[root@ip-172-31-41-212 ~]# mkdir /nagpur
[root@ip-172-31-41-212 ~]# chmod g+w /nagpur/
[root@ip-172-31-41-212 ~]# ls -ld /nagpur/
drwxrwxr-x. 2 root root 6 May 25 06:14 /nagpur/
Remove read permission for group and assign read and write permission to other users,
[root@ip-172-31-41-212 ~]# chmod g-r,o=rw samplefile3.txt
[root@ip-172-31-41-212 ~]# ls -l samplefile3.txt
-rwx---rw-. 1 root root 0 May 29 09:29 samplefile3.txt
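Permissions can also be set numerically, where read = 4, write = 2, and execute = 1 for each of the three fields (a sketch; samplefile3.txt is the file used above),
[root@ip-172-31-41-212 ~]# chmod 640 samplefile3.txt
[root@ip-172-31-41-212 ~]# ls -l samplefile3.txt
-rw-r-----. 1 root root 0 May 29 09:29 samplefile3.txt
Here 640 means rw- for the owner (4+2), r-- for the group (4), and --- for others (0).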
Default Permission: When a regular user creates a file, it is given permission rw-rw-r-- (664) by default. A directory is given the permission rwxrwxr-x (775). For the root user, file and directory permissions are rw-r--r-- (644) and rwxr-xr-x (755), respectively. These default values are determined by the value of umask. Type umask to see what your umask value is.
If you ignore the leading zero for the moment, the umask value is subtracted from what are considered the fully open permissions for a file (666) or a directory (777). A umask value of 002 results in a permission of 775 (rwxrwxr-x) for a directory. That same umask results in a file permission of 664 (rw-rw-r--).
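A quick worked example of the subtraction described above (note that in practice the execute bit is never added to plain files),
directory: 777 - 002 = 775 (rwxrwxr-x), file: 666 - 002 = 664 (rw-rw-r--) for a regular user's default umask of 0002
directory: 777 - 022 = 755 (rwxr-xr-x), file: 666 - 022 = 644 (rw-r--r--) for the root user's default umask of 0022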
Special Permission: We have three types of special permissions, i.e. SUID, SGID, and the sticky bit.
SUID – numeric value 4, relative value u+s. On files: the user executes the file with the permissions of the file owner. On directories: no meaning.
SGID – numeric value 2, relative value g+s. On files: the user executes the file with the permissions of the group owner. On directories: files created in the directory get the same group owner.
Sticky Bit – numeric value 1, relative value o+t. On files: no meaning. On directories: prevents users from deleting files that belong to other users.
SUID (set user identity): SUID (Set owner User ID upon execution) is a special type of file permission given to a file. Normally in Linux/UNIX, when a program runs, it inherits access permissions from the logged-in user. SUID gives a user temporary permission to run a program/file with the permissions of the file owner rather than of the user who runs it. In simple words, users get the file owner's permissions, as well as the owner's UID and GID, when executing the file/program/command.
Syntax,
# chmod u+s <file_name>
Example,
(As we saw in basic commands, the dmidecode command is used to get hardware information about the system. But dmidecode is owned by the root user and present in the /sbin directory, so no local user can run it successfully. The following example illustrates this scenario and then sets the SUID permission on the command. After applying the SUID permission, any user can execute this command.)
[shubham@ip-172-31-19-21 ~]$ dmidecode
# dmidecode 3.0
/sys/firmware/dmi/tables/smbios_entry_point: Permission denied
Scanning /dev/mem for entry point.
/dev/mem: Permission denied
[shubham@ip-172-31-19-21 ~]$ logout
[root@ip-172-31-19-21 ~]# which dmidecode
/sbin/dmidecode
[root@ip-172-31-19-21 ~]# ll /sbin/dmidecode
-rwxr-xr-x 1 root root 110608 Jul 31 2018 /sbin/dmidecode
[root@ip-172-31-19-21 ~]# chmod u+s /sbin/dmidecode
[root@ip-172-31-19-21 ~]# ll /sbin/dmidecode
-rwsr-xr-x 1 root root 110608 Jul 31 2018 /sbin/dmidecode
[root@ip-172-31-19-21 ~]# su - shubham
Last login: Thu Dec 12 05:38:58 UTC 2019 on pts/0
[shubham@ip-172-31-19-21 ~]$ dmidecode
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.
11 structures occupying 359 bytes.
Table at 0x000EB01F.
SGID (set group identity): When the SGID (Set Group ID upon execution) bit is set on a directory, any file created in that directory, whether by the root user or a local user, automatically gets the directory's group ownership. In other words, when we apply the SGID bit to a particular directory, it passes its group ownership on to all files and directories created inside it.
Syntax,
# chmod g+s <file_name>
Example,
[root@ip-172-31-19-21 ~]# mkdir /demo
[root@ip-172-31-19-21 ~]# chgrp TCS /demo
[root@ip-172-31-19-21 ~]# ll -d /demo
drwxr-xr-x 2 root TCS 6 Dec 12 05:55 /demo
[root@ip-172-31-19-21 ~]# touch /demo/file
[root@ip-172-31-19-21 ~]# ll /demo/file
-rw-r--r-- 1 root root 0 Dec 12 05:56 /demo/file
[root@ip-172-31-19-21 ~]# chmod g+s /demo
[root@ip-172-31-19-21 ~]# ll -d /demo
drwxr-sr-x 2 root TCS 31 Dec 12 05:57 /demo
[root@ip-172-31-19-21 ~]# touch /demo/file1
[root@ip-172-31-19-21 ~]# ll /demo
total 0
-rw-r--r-- 1 root root 0 Dec 12 05:56 file
-rw-r--r-- 1 root TCS 0 Dec 12 05:57 file1
Sticky Bit: The sticky bit is mainly used on folders in order to avoid deletion of the folder's contents by other users, even though they have write permission on the folder. If the sticky bit is enabled on a folder, its contents can be deleted only by the owner who created them and by the root user. No one else can delete other users' data in this folder (where the sticky bit is set). This is a security measure to avoid deletion of critical folders and their content (sub-folders and files), even though other users have full permissions.
Syntax,
# chmod o+t <file_name>
Example,
[root@ip-172-31-19-21 ~]# mkdir /demo
[root@ip-172-31-19-21 ~]# chmod 777 /demo
[root@ip-172-31-19-21 ~]# chmod o+t /demo
[root@ip-172-31-19-21 ~]# su - shubham
[shubham@ip-172-31-19-21 ~]$ touch /demo/file.txt
[shubham@ip-172-31-19-21 ~]$ logout
[root@ip-172-31-19-21 ~]# su - chetan
[chetan@ip-172-31-19-21 ~]$ rm -f /demo/file.txt
rm: cannot remove ‘/demo/file.txt’: Operation not permitted
ACL (access control list): ACLs are used to set permissions on a file or directory for a specific user or a specific group. We can assign different permissions to multiple users on the same file or directory. An access control list (ACL) provides an additional, more flexible permission mechanism for file systems. It is designed to assist with UNIX file permissions. An ACL allows you to give permissions to any user or group on any disk resource.
Think of a scenario in which a particular user is not a member of a group created by you, but you still want to give that user some read or write access. How can you do it without making the user a member of the group? This is where Access Control Lists come into the picture; ACLs help us do this trick. Basically, ACLs are used to make a flexible permission mechanism in Linux.
From the Linux man pages, ACLs are used to define more fine-grained discretionary access rights for files and directories.
setfacl and getfacl are used for setting up ACL and showing ACL respectively.
Syntax, (to apply ACL)
# setfacl -m u:<user_name>:<permissions> <file_name> for user
# setfacl -m g:<grp_name>:<permissions> <file_name> for group
Syntax, (to check ACL)
# getfacl <file_name>
Examples,
From a user perspective,
In the example sketched below, user 'W3' is not a member of the WIPRO group, and the normal group permissions do not give access, but we can still grant 'W3' specific permissions on the file.
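A sketch of that scenario (the file name /project.txt, the group WIPRO, and the user W3 are only illustrative),
[root@ip-172-31-19-21 ~]# setfacl -m u:W3:rw /project.txt
[root@ip-172-31-19-21 ~]# getfacl /project.txt
getfacl: Removing leading '/' from absolute path names
# file: project.txt
# owner: root
# group: WIPRO
user::rw-
user:W3:rw-
group::r--
mask::rw-
other::r--
Now W3 can read and write the file even though the group and other permissions alone would not allow it.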
SUDO permission: The sudo (Super User DO) command in Linux is generally used as a prefix to commands that only the superuser is allowed to run. If you prefix a command with sudo, it runs with elevated privileges; in other words, it allows a user with the proper permissions to execute a command as another user, such as the superuser. This is the equivalent of the “run as administrator” option in Windows. sudo also lets us have multiple administrators. To allow a user to use the sudo command, the user must be listed in the “/etc/sudoers” file, or the user should belong to the wheel group. The wheel group is the default group whose members are allowed to use the sudo command (see the sketch after the example below).
Syntax,
# sudo <command_line>
Examples,
[root@ip-172-31-19-21 ~]# su - amit
[amit@ip-172-31-19-21 ~]$ useradd shubham
-bash: /usr/sbin/useradd: Permission denied
[amit@ip-172-31-19-21 ~]$ sudo useradd shubham
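One common way to grant these rights is to add the user to the wheel group (a sketch; on RHEL the wheel group is enabled in /etc/sudoers by default),
[root@ip-172-31-19-21 ~]# usermod -aG wheel amit
[root@ip-172-31-19-21 ~]# id amit
uid=2211(amit) gid=2211(amit) groups=2211(amit),10(wheel)
After logging in again, amit can run administrative commands by prefixing them with sudo, as in the example above.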
Chapter 8: Managing Processes
For everything that happens on a Linux server, a process is started. For that reason, process
management is among the key skills that an administrator has to master. To do this efficiently, it is
important to know which type of process you are dealing with.
A major distinction can be made between two process types:
■ Shell jobs are commands started from the command line. They are associated with the shell
that was current when the process was started. Shell jobs are also referred to as interactive
processes.
■ Daemons are processes that provide services. They normally are started when a computer
is booted and often (but certainly not in all cases) they are running with root privileges.
TIP: Do not set process priority to -20; it risks blocking other processes from getting served.
TIP: Use kill -l to show a list of available signals that can be used with kill.
Linux Process States Overview
State Meaning
R (running) The process is running or runnable (on the run queue).
S (sleeping) The process is in an interruptible sleep, waiting for an event to complete.
D (uninterruptible sleep) The process is in an uninterruptible sleep, usually waiting for I/O.
T (stopped) The process has been stopped, for example by a job control signal.
Z (zombie) The process has terminated but has not yet been cleaned up by its parent.
The top command is used to show Linux processes. It provides a dynamic, real-time view of the running system. Usually, this command shows summary information about the system and the list of processes or threads currently managed by the Linux kernel. As soon as you run this command, it opens an interactive command mode in which the upper half contains statistics of processes and resource usage and the lower half contains a list of the currently running processes. Pressing q simply exits the command mode.
Controlling Jobs
A job is a process that the shell manages. Each job is assigned a sequential job ID. Because a job is a
process, each job has an associated PID. There are three types of job statuses:
Foreground: When you enter a command in a terminal window, the command occupies that
terminal window until it completes. This is a foreground job.
Background: When you enter an ampersand (&) symbol at the end of a command line, the
command runs without occupying the terminal window. The shell prompt is displayed
immediately after you press Return. This is an example of a background job.
Stopped: If you press Control + Z for a foreground job, or enter the stop command for a
background job, the job stops. This job is called a stopped job.
Example,
Running job in background
[root@localhost ~]# sleep 200 &
[2] 2261
[root@localhost ~]# jobs - listing jobs
[1]- Running sleep 1000 &
[2]+ Running sleep 200 &
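A stopped job can be resumed in the background with bg, and any job can be brought back to the foreground with fg (a sketch continuing the listing above),
[root@localhost ~]# fg %2
sleep 200
^Z
[2]+ Stopped sleep 200
[root@localhost ~]# bg %2
[2]+ sleep 200 &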
Killing Processes
Usually, a process terminates on its own when it is done with its task or when you ask it to quit. However, sometimes a process can hang or consume a lot of CPU or RAM. In this situation, you may want to manually “kill” the process. In order to kill a process, you should first locate the details of the process. You can do this through the following commands:
top, ps, pidof and pgrep.
(We had already seen how to get details of processes using top and ps command in this chapter.)
Getting process details using pgrep,
pgrep command searches for processes currently running on the system, based on a
complete or partial process name, or other specified attributes.
Examples,
Killing process using process id,
[root@localhost ~]# pgrep -a sleep
2261 sleep 1000
2368 sleep 200
[root@localhost ~]# kill -9 2261
[1]+ Killed sleep 1000
Chapter 9: Archiving and Compression
Archiving is the process of combining multiple files and directories (of the same or different sizes) into one file. On the other hand, compression is the process of reducing the size of a file or directory. Archiving is usually used as part of a system backup or when moving data from one system to another. One of the oldest and most common commands for creating and working with backup archives is the tar command. With tar, users can gather large amounts of data into a single unit known as an archive.
Syntax:
tar <-options> <archive_fileName>.tar <files_to_be_archived>
-c -> create an archive.
-f -> file name. (Compulsory option)
-v -> verbose or view.
-t -> list the contents of an archive.
-x -> extract the contents of an archive.
-p -> preserve permissions when extracting files or directories.
-C -> extract the contents of an archive into another directory.
Examples,
Archive file using tar command
[root@localhost ~]# du -sh /etc disk usage or size of file/dir
34M /etc
[root@localhost ~]# tar -cvf /backup.tar /etc
tar: Removing leading `/' from member names
/etc/
/etc/netconfig
/etc/dracut.conf.d/
/etc/egl/
/etc/egl/egl_external_platform.d/
/etc/rc4.d
...
[root@localhost ~]# du -sh /backup.tar
30M /backup.tar
Examples,
Compression using tar command,
[root@localhost ~]# du -sh /etc
34M /etc
[root@localhost ~]# tar -czvf /backup1.tar.gz /etc using gzip method
[root@localhost ~]# du -sh /backup1.tar.gz
8.4M /backup1.tar.gz
[root@localhost ~]# tar -cjvf /backup2.tar.bz2 /etc using bzip2 method
[root@localhost ~]# du -sh /backup2.tar.bz2
7.0M /backup2.tar.bz2
[root@localhost ~]# tar -cJvf /backup3.tar.xz /etc using xz method
[root@localhost ~]# du -sh /backup3.tar.xz
5.7M /backup3.tar.xz
When we use the gzip command for compression, the original archive file gets compressed and replaced with the new compressed file. This helps you avoid wasting storage space on both the original file and the compressed file. Also, the new compressed file automatically gets the .gz extension. Similarly, the bzip2 and xz commands also generate a compressed file and replace the old file, as shown in the example below.
Compression using bzip2 command,
[root@localhost ~]# tar -cvf /etc.tar /etc
[root@localhost ~]# ls /
etc.tar
[root@localhost ~]# du -sh /etc.tar
30M /etc.tar
[root@localhost ~]# bzip2 /etc.tar
[root@localhost ~]# ls /
etc.tar.bz2
[root@localhost ~]# du -sh /etc.tar.bz2
7.5M /etc.tar.bz2
(try it yourself)
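Listing and extracting an archive (a sketch; /restore is just an example target directory that must already exist),
[root@localhost ~]# tar -tvf /backup.tar list the contents
[root@localhost ~]# mkdir /restore
[root@localhost ~]# tar -xvf /backup.tar -C /restore
Because tar stripped the leading '/' when the archive was created, the files are extracted under /restore/etc/.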
Search and Filter utility in Linux
Search utilities are used to search files from the system where as filter utilities filters the
output. Following are some filter tools that can we use,
cat – displays the text from file as output.
[root@localhost ~]# cat /flower.txt
Rose
Lotus
Lotus
Lily
Daisy
Daisy
Jasmine
Marigold
Tulip
sort – Sorts the lines alphabetically by default but there are many options available
to modify the sorting mechanism
[root@localhost ~]# sort /flower.txt
Daisy
Daisy
Jasmine
Lily
Lotus
Lotus
Marigold
Rose
Tulip
uniq – Removes duplicate lines. uniq has a limitation: it can only remove consecutive duplicate lines.
[root@localhost ~]# uniq /flower.txt
Rose
Lotus
Lily
Daisy
Jasmine
Marigold
Tulip
sed – sed stands for stream editor. It allows us to apply search and replace operation
on our data effectively. sed is quite an advanced filter and all its options can be seen
on its man page.
[root@localhost ~]# sed -i s/^/#/ /flower.txt
[root@localhost ~]# cat /flower.txt
#Rose
#Lotus
#Lotus
#Lily
#Daisy
#Daisy
#Jasmine
#Marigold
#Tulip
wc – wc command gives the number of lines, words and characters in the data.
Options,
-l Show Line Count
-w Display word count
-m Show character count
Example,
[root@localhost ~]# wc /flower.txt
9 9 57 /flower.txt
grep – The grep command searches for lines matching a given pattern and prints them.
Options,
-c Output the count of matching lines only.
-v Invert the match (show non-matching lines).
-n Precede each matching line with its line number.
Example,
[root@localhost ~]# grep "Da" /flower.txt
#Daisy
#Daisy
[root@localhost ~]# grep -c "Da" /flower.txt
2
[root@localhost ~]# grep -v "Da" /flower.txt
#Rose
#Lotus
#Lotus
#Lily
#Jasmine
#Marigold
#Tulip
[root@localhost ~]# grep -c -v "Da" /flower.txt
7
Following are some search utilities,
find – The find command is one of the most important and frequently used command-line utilities in Linux operating systems. The find command is used to search for and locate files and directories based on the conditions you specify. find can be used in a variety of ways: you can find files by permissions, users, groups, file type, date, size, and other possible criteria.
Syntax:
#find <search_path> <options> <required-parameters>
Options,
Options Description
-name <file_name> Search for a file with the specified name
-perm <mode> File's permission bits are exactly mode (octal or symbolic)
-size <N/+N/-N> Find files with specific size (size > or size <)
-user <name> File is owned by user specified
-uid <uid> Files numeric user id is the same as uid
-group <grp_name> File is owned by group specified
-gid <gid> The file belongs to group with the ID n
-amin <n/+n/-n> The file was last accessed n minutes ago
-mmin <n/+n/-n> File's data was last modified n minutes ago
-atime <n/+n/-n> The file was last accessed n days ago
-cmin <n/+n/-n> The file was last changed n minutes ago
-ctime <n/+n/-n> The file was last changed more than n days ago
-mtime <n/+n/-n> File's data was last modified n days ago
-empty The file is empty
-executable The file is executable
-readable Find files which are readable
-writable Search for files that can be written to
-type <type> Search for a particular type (f,d,l,c,b,s,p)
-nouser Search for a file with no user attached to it
-exec <cmd> Executes the provided command on each file that meets the above criteria
Example,
[root@server1 ~]# find /etc -name passwd
/etc/passwd
[root@server1 ~]# find /home/ -perm 644
/home/ec2-user/.bash_logout
[root@server1 ~]# find / -size +100M
/usr/lib/locale/locale-archive
[root@server1 ~]# find / -user cbz
/home/cbz
[root@server1 ~]# find / -uid 1005
/home/cbz
[root@server1 ~]# find / -group admin
/root/demo.txt
[root@server1 ~]# find / -gid 1006
/root/demo.txt
[root@server1 ~]# ll -l /boot/grub/menu.lst
[root@server1 ~]# find /boot/ -amin -1
[root@server1 ~]# vi /etc/hosts
[root@server1 ~]# find /etc -mmin -1
/etc/hosts
[root@server1 home]# find / -name authorized_keys -exec cp -rv {} /home \;
[root@server1 home]# find / -type f -name passwd -exec rm -rf {} \;
Chapter 10: Managing Software
Software Management in Linux:
In Linux, software is available in the form of packages (packages are collections of programs). Installing a package simply means extracting the files from the archive and putting them on the system. Package management is the method of installing and maintaining this software.
Some packages require a shared library or another package, called a dependency. Since there are many families of Linux, different distribution families use different packaging systems.
Following are some commonly used package systems:
o Red Hat Packages (*.rpm)
o Debian Packages (*.deb)
o Ubuntu Packages (*.pkg)
There are two types of utilities we can use: low-level tools and high-level tools. A low-level tool manages installation, update, and uninstallation of package files, whereas a high-level tool can install a package together with its dependencies.
#rpm: rpm is the RedHat Package Manager, used to install package files on a RedHat Linux system. It only installs the named package, not its dependencies. It requires the full pathname of the package file for installation.
Syntax:
# rpm <options> <full_path_of_package_file>
Options:
-i install package file
-v verbose
-h show hash bar
-U Upgrade package
-q query package
-e erase package
Example:
Installing a package from the /package directory and querying it,
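A sketch of what these commands typically look like (the package file name is only an example),
# rpm -ivh /package/tree-1.6.0-10.el7.x86_64.rpm
# rpm -q tree
# rpm -qi tree
The -q option reports whether the package is installed, and -qi prints detailed information about it.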
#yum: yum is an open source CLI (and GUI) tool for rpm-based systems. It allows users to easily install, update, remove, or search packages on the system. Yum uses numerous repositories to install packages automatically, and it can also resolve dependency issues. Yum does not require the complete pathname of a package for installation.
Because yum uses repositories to get the packages, you should either keep repositories on your system or have access to remote repositories. A repository is nothing but the metadata of all the packages. Sometimes packages do not get installed because their repository is not available. In such a case, you can create your own repository, or you can install the package using rpm; but if you use rpm, all the dependencies have to be installed manually.
Syntax:
#yum <option/action> <package-name> [<-y/-d/-n>]
Actions:
install install package
update update package
list list all package
info package info.
search general info. of package
remove uninstall package
history shows history
groupinstall install group of packages
groupupdate update group of packages
grouplist list all groups of packages
groupremove remove installed group of packages
repolist list repositories
clean clean yum cache
Creating your own repository:
STEP 1: install the packages required for creating a repository, if they are not already installed.
#rpm -ivh createrepo
#rpm -ivh deltarpm
STEP 3: create a new repository file in the default repository path, i.e. /etc/yum.repos.d/
#vim /etc/yum.repos.d/clientdemo.repo
[exampleID] -----repo ID
name=Sampledemo -----repo name
baseurl=file:///Packages -----url of packages (http:// or ftp:// for remote url)
enabled=1 -----1 for enable, 0 for disable the repo
gpgcheck=0 -----1 for gpgcheck on, 0 for gpgcheck off
STEP 4: check the repository
#yum repolist
#yum clean all -----to clean yum cache
Examples:
Install httpd package
#yum install httpd
#yum install tree -y --- install without asking permission
Install group of packages
#yum grouplist --- list all groups
#yum groupinstall “Basic Web Server”
Chapter 11: Scheduling Tasks
Job scheduling is a feature that allows a user to submit a command or program for execution at a specified time in the future. On a Linux server, it is important that certain tasks run at certain times. The execution of the command or program can be one-time or periodic, based on a predetermined time schedule. For example, scheduling system maintenance commands to run during non-working hours is good practice, as it does not disrupt normal business activities.
In Linux, we have three methods to schedule a job,
at – one-time execution
crontab – periodic execution
anacron – periodic execution
1. # at
The at command uses the atd service for executing jobs. The at command queues tasks in /var/spool/at and executes them when they are scheduled. After execution, tasks are removed from the queue. After writing the desired jobs, you can save them using the shortcut key Ctrl+D.
Syntax, # at "<time> <date>"
Example,
Scheduling an at job,
[root@localhost ~]# at "14:30 31 jan 2020"
at> touch /root/file.txt
at> mkdir /root/Practice
at> <EOT>
job 3 at Fri Jan 31 14:30:00 2020
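Pending at jobs can be listed and removed with the related commands (a short sketch),
[root@localhost ~]# atq list queued jobs
3 Fri Jan 31 14:30:00 2020 a root
[root@localhost ~]# atrm 3 remove job number 3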
2. # crontab
Crontab is similar to the Task Scheduler in Windows. In Linux, we schedule recurring jobs using crontab. The crontab job scheduling technique is very useful for creating backups, scanning the system, performing jobs on a daily, weekly, or monthly basis, etc. A daemon called crond runs in the background and checks the configuration files every minute in order to execute the commands or shell scripts specified in the crontab if the time matches the specified time. Crontab can execute jobs repeatedly at a specified time interval.
Crond executes cron jobs on a regular basis if they comply with the format defined in the /etc/crontab file. Crontables for users are located in the /var/spool/cron directory. A cron table entry includes six fields separated by space or tab characters. The first five fields specify the times at which to run the command, and the sixth field is the absolute pathname of the command to be executed. These fields are described in the /etc/crontab file (see the field layout sketched after the file listing below).
/etc/crontab file,
# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
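The field layout referred to above is the standard cron format; in /etc/crontab each job line also includes a user-name field before the command,
# minute (0-59) | hour (0-23) | day of month (1-31) | month (1-12 or jan-dec) | day of week (0-7 or sun-sat) | command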
Syntax,
# crontab <option>
Options,
-e Edit cron table
-l List cron table
-r Remove crontable
-u Specify user
-e –u Edit crontable for specific user
-l –u List crontable of specific user
-r –u Remove crontable of specific user
Example,
Suppose, we have to schedule following jobs
1. Create a file named FLOWER.txt in the /root/Downloads directory at 10.30 pm on 15 Aug.
2. Display the "Welcome to Cloudblitz" message on the terminal at midnight every Saturday.
3. Display the "HELLO" message on the terminal every hour on the 10th of Jan, Feb, and March.
Listing crontable,
[root@localhost ~]# crontab -l
30 22 15 aug * /bin/touch /root/Downloads/FLOWER.txt
0 0 * * sat /bin/echo "Welcome to Cloudblitz"
0 * 10 jan,feb,mar * /bin/echo "HELLO"
Scheduling a job for the user natasha: creating a file in natasha's home directory at 10 am on Jan 31st (a sketch is shown below).
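A sketch of how this could be done (the path assumes natasha's home directory is /home/natasha),
[root@localhost ~]# crontab -e -u natasha
0 10 31 jan * /bin/touch /home/natasha/myfile.txt
[root@localhost ~]# crontab -l -u natasha
0 10 31 jan * /bin/touch /home/natasha/myfile.txt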
Chapter 12: Configuring Logging
A typical rsyslog log line contains the following elements,
■ Date and time: Every log message starts with a timestamp, which can be used for filtering.
■ Host: The host the message originated from. This is relevant because rsyslogd can be configured to handle remote logging as well.
■ Service or process name: The name of the service or process that generated the message.
■ Message content: The content of the message, which contains the exact message that has been logged.
Live Log File Monitoring,
Syntax,
# tail -f <logfile>
Example,
[root@ip-172-31-24-16 ~]# tail -f /var/log/messages
Jan 30 04:47:19 ip-172-31-24-16 dhclient[3012]: XMT: Solicit on eth0, interval
114430ms.
Jan 30 04:49:14 ip-172-31-24-16 dhclient[3012]: XMT: Solicit on eth0, interval
114820ms.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Created slice User Slice of root.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Starting User Slice of root.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Started Session 3 of user root.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Starting Session 3 of user root.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Removed slice User Slice of root.
Jan 30 04:50:01 ip-172-31-24-16 systemd: Stopping User Slice of root.
Using logger,
Most services write information to the log files all by themselves. The logger command enables users to write messages to rsyslog from the command line, so log entries can also be written manually.
Syntax,
# logger <option> <Message>
Example, (writing a log entry with the priority option,)
[root@ip-172-31-24-16 ~]# logger -p local3.err "Danger"
[root@ip-172-31-32-167 ~]# tail -3 /var/log/messages
Jan 30 06:10:41 ip-172-31-32-167 systemd-logind: New session 5 of user ec2-
user.
Jan 30 06:10:41 ip-172-31-32-167 systemd: Starting Session 5 of user ec2-user.
Jan 30 06:11:14 ip-172-31-32-167 ec2-user: Danger
Configuring rsyslogd
To make sure that the information that needs to be logged is written to the location where
you want to find it, you can configure the rsyslogd service through the /etc/rsyslog.conf file.
Understanding rsyslogd Configuration Files
Like many other services on RHEL 7, the configuration for rsyslogd is not defined in just one configuration file. The /etc/rsyslog.conf file is the central location where rsyslogd is configured. From this file, the content of the directory /etc/rsyslog.d is included. This directory can be populated by installing RPM packages on a server. When looking for specific log configuration, make sure to always consider the contents of this directory also.
If specific options need to be passed to the rsyslogd service on startup, you can do
this by using the /etc/sysconfig/rsyslog file. This file by default contains one line, which reads
SYSLOGD_OPTIONS. On this line, you can specify rsyslogd startup parameters. The
SYSLOGD_OPTIONS variable is included in the systemd configuration file that starts rsyslogd.
Theoretically, you could change startup parameters in this file as well, but that is not recommended.
Understanding rsyslog.conf Sections
The rsyslog.conf file is used to specify what should be logged and where it should be
logged. To do this, you’ll find different sections in the configuration file:
■ #### MODULES ####: rsyslogd is modular. Modules are included to enhance the supported
features in rsyslogd.
■ #### GLOBAL DIRECTIVES ####: This section is used to specify global parameters, such as
the location where auxiliary files are written or the default timestamp format.
■ #### RULES ####: This is the most important part of the rsyslog.conf file. It contains the
rules that specify what information should be logged to which destination.
Some of the rsyslog severity levels and their meanings are,
notice Used for informational messages about items that might become an issue later.
warning / warn Something is suboptimal, but there is no real error yet.
emerg / panic Message generated when the availability of the service is discontinued.
Rotating Log Files
To prevent syslog messages from filling up your system completely, the log messages can be rotated.
That means that when a certain threshold has been reached, the old log file is closed and a new log
file is opened. The logrotate utility is started periodically through the crond service to take care of
rotating log files. The default settings for log rotation are kept in the file /etc/logrotate.conf.
Logs can be rotated with customized settings that are kept in the /etc/logrotate.d/ directory. Sample content for a logrotate setting is sketched below.
Logs can also be rotated forcefully using the logrotate command followed by its configuration file; logrotate reads the configuration file and performs the rotation immediately.
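A sketch of a custom logrotate entry (the log file name and the values are only illustrative),
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
}
To force rotation immediately, the configuration file can be passed to logrotate with the -f option,
# logrotate -f /etc/logrotate.conf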
By default, the journal is stored in the file /run/log/journal. The entire /run directory is used for current
process status information only, which means that the journal is cleared when the system reboots.
Setting journald Parameters Through /etc/systemd/journald.conf
[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=login
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
STEP 1: Create the directory by typing mkdir /var/log/journal.
STEP 2: Before journald can write the journal to this directory, you have to set ownership. Type
chown root:systemd-journal /var/log/journal, followed by
chmod 2755 /var/log/journal.
STEP 3: Next, you can either reboot your system (restarting the systemd-journald service is
not enough) or use the killall -USR1 systemd-journald command.
STEP 4: The systemd journal is now persistent across reboots. If you want to see the log
messages since last reboot, use journalctl -b .
Chapter 13: Managing Partitions
A file system is an organized structure of data-holding files and directories residing on a storage device. The process of adding a new file system into the existing directory tree is called mounting, and the directory is called the mount point. Hard disks and other storage devices are normally divided into partitions.
In Linux, different types of hard disks have different representations for their partitions. These representations are special files called block devices. The block device files are stored in the /dev directory. Disk partitioning is the method of dividing hard drives into multiple logical storage units referred to as partitions. There are two schemes of disk partitioning: 1. the MBR partitioning scheme and 2. the GPT partitioning scheme.
MBR Partitioning Scheme: The Master Boot Record (MBR) partitioning scheme dictates how disks should
be partitioned on a system. MBR uses the standard BIOS partition table and therefore has a size limit of 2 TB.
The MBR is 512 bytes in size, of which 64 bytes are used for the partition table. Each partition entry
requires 16 bytes, so the MBR scheme supports a maximum of four primary partitions (or 3 primary and
1 extended).
GPT Partitioning Scheme: The GUID Partition Table (GPT) overcomes the limitations of the MBR
partition table. It has a size limit of 8 ZB, and creating more than four primary partitions is also possible.
Primary: A primary partition is one in which an operating system can be installed. In MBR, a maximum of four
primary partitions can be created. The primary partition that is used to boot the system is called the
active partition (the active partition is the partition from which the operating system is loaded).
Extended: An extended partition breaks the limit of four partitions. Using an extended partition we can create
a number of logical partitions. The extended partition holds the logical partitions. Only one extended partition is
allowed.
Logical: Logical partitions are created inside the extended partition. RHEL 6 supports a maximum of 12
logical partitions, whereas RHEL 7 supports a maximum of 60 logical partitions. To create a logical
partition, you must first create an extended partition.
MANAGING MBR PARTITION:
For MBR partitioning scheme, fdisk partition editor is used.
Creating Partition
1. Specify the disk device on which the partition is to be created:
[root@localhost Desktop]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
2. The above step opens the MBR partition editor; enter m to get help about the available commands.
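Steps 3 and 4 consist of actually creating the partition; a hedged illustration of what that fdisk dialog typically looks like (the size +500M is only an example):
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-..., default 2048): <Enter>
Last sector, +sectors or +size{K,M,G}: +500M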
5. The partition will not be created until the changes have been saved. Give the 'w' command to save the
changes.
Command (m for help): w
File Systems
A file system is the way in which files are named, stored, retrieved, and organized on a
storage device or partition. Following are some commonly used file systems nowadays:
xfs – A high-performance filesystem originally developed by Silicon Graphics that works
extremely well with large files. This filesystem is the default for RHEL 7. NASA still uses this file
system on their 300 TB server.
ext2 – The ext2 filesystem was introduced in 1993 and was the first default file system in several
Linux distributions. It overcomes the limitations of the legacy ext file system. The maximum
supported file size is 16 GB to 2 TB. The journaling feature is not available. ext2 is normally used on
flash-based storage like pen drives, SD cards, etc.
ext3 – It was introduced in 2001 with all the features of ext2 plus an additional journaling feature.
It also provides the facility to upgrade from ext2 to ext3 without having to back up and restore
data.
ext4 – It is the highly anticipated successor to ext3. ext4 was introduced in 2008 with backward
compatibility. It supports a maximum file size of 16 TB. It has an option to turn off the journaling feature.
vfat – Microsoft extended FAT filesystem.
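Creating File System
A file system can be created on the new partition with mkfs.ext4 (a minimal example, assuming the partition created above is /dev/sdb1):
[root@localhost Desktop]# mkfs.ext4 /dev/sdb1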
OR
[root@localhost Desktop]# mkfs -t ext4 /dev/sdb1
List the assigned file system disks (blkid shows the filesystem type as well as the partition UUID):
[root@localhost Desktop]# blkid
Mounting Partition
Mounting is the process of attaching a partition to the system at a specific directory. The directory
is called the mount point. There are two methods of mounting: temporary mounting mounts the
partition temporarily and it is unmounted automatically after a reboot, whereas permanent
mounting keeps the partition mounted across reboots.
Temporary mount,
[root@localhost Desktop]# mount /dev/sdb1 /hello    -- /hello is the directory called the mount point
Permanent mount,
[root@localhost Desktop]# vim /etc/fstab
# /etc/fstab
# Created by anaconda on Tue Jun 4 07:23:02 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c38f98a3-3d9e-4621-99f9-5006264d72cc /     xfs  defaults 1 1
UUID=757aa47c-0bd7-42a0-9a8e-a379998c0975 /boot xfs  defaults 1 2
UUID=f1344dcb-2f8b-4311-9b93-7dc26a16e8fb swap  swap defaults 0 0
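To make the /dev/sdb1 mount from the example above persistent, an entry such as the following would typically be appended (a hedged example; the mount point /hello follows the earlier example, and the device name is used here instead of a UUID):
/dev/sdb1    /hello    ext4    defaults    0 0
After saving the file, # mount -a mounts everything listed in fstab and so verifies the new entry.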
Unmounting Partition
Temporary unmounting,
[root@localhost Desktop]# umount /dev/sdb1
Permanent unmounting can be done by removing the entry from the fstab file, unmounting the partition, and then running # mount -a to check the remaining entries.
Removing Partition
Before removing a partition, first unmount all the partitions that you want to remove. Then remove the
partition using the fdisk partition editor (opened again with fdisk /dev/sdb).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
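Deleting the partition inside the fdisk editor typically looks as follows (a hedged illustration; partition number 1 is only an example):
Command (m for help): d
Partition number (1-4): 1
Command (m for help): w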
Swap space in Linux is used when the amount of physical memory is getting full. If the system
needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space.
(Swap should not be considered a replacement for RAM.)
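A minimal sketch of creating and activating swap space on a spare partition (assuming a partition /dev/sdb2 exists and is intended for swap):
[root@localhost Desktop]# mkswap /dev/sdb2
[root@localhost Desktop]# swapon /dev/sdb2
[root@localhost Desktop]# free -m          -- verify that the swap total has increased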
LVM (Logical Volume Management) offers the following advantages:
■ More space can be added to a logical volume from the volume group while the volume is still
in use.
■ More physical volumes can be added to a volume group if the volume group begins to run out
of space. The physical volumes can be from the same or different disks.
■ Data can be moved from one physical volume to another, so you can remove smaller disks and
replace them with larger ones while the filesystems are still in use.
To reduce a logical volume, first unmount the logical volume and stop any processes running on
that partition. Reduction is possible only for the ext family of file systems.
[root@localhost Desktop]# umount /dev/vg_demo/lv1           -- unmount the lv
[root@localhost Desktop]# fsck.ext4 -f /dev/vg_demo/lv1     -- check the file system
[root@localhost Desktop]# e2fsck -f /dev/vg_demo/lv1        -- if an error occurs for the above command
[root@localhost Desktop]# lvreduce -L -50M /dev/vg_demo/lv1 -- reduce the logical volume
[root@localhost Desktop]# resize2fs /dev/vg_demo/lv1        -- save and resize in the partition table
Reducing volume groups means removing physical volumes from the volume group. Only unused
physical partitions can be removed.
[root@localhost Desktop]# vgreduce vg_demo /dev/sdb3
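If the physical volume being removed still holds data, its extents would normally be moved to the remaining physical volumes first (a hedged note; pvmove is a standard LVM command, and the device name follows the example above):
[root@localhost Desktop]# pvmove /dev/sdb3    -- move all extents off /dev/sdb3 before vgreduce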
Kernel packages can be upgraded, or a new version of the kernel package can be added to the existing
system. The example below shows how to update the kernel.
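On RHEL 7 the kernel is normally updated with yum, which installs the new kernel alongside the old one rather than replacing it (a minimal sketch; the running kernel only changes after a reboot):
[root@localhost ~]# yum update kernel -y
[root@localhost ~]# uname -r              -- show the currently running kernel version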
SSH, also known as Secure Shell or Secure Socket Shell, is a network protocol that
gives users, particularly system administrators, a secure way to access a computer over an unsecured
network. SSH also refers to the suite of utilities that implement the SSH protocol. Secure Shell provides
strong authentication and encrypted data communications between two computers connecting over
an open network such as the internet. SSH is widely used by network administrators for managing
systems and applications remotely, allowing them to log into another computer over a network,
execute commands and move files from one computer to another.
SSH refers both to the cryptographic network protocol and to the suite of utilities that
implement that protocol. SSH uses the client-server model, connecting a secure shell client
application, the end at which the session is displayed, with an SSH server, the end at which the session
runs. SSH implementations often include support for application protocols used for terminal
emulation or file transfers. SSH can also be used to create secure tunnels for other application
protocols. An SSH server, by default, listens on the standard Transmission Control Protocol (TCP) port
22.
Secure Shell Capabilities,
■ Secure remote access to SSH-enabled network systems or devices, for users as well as automated processes
■ Secure and interactive file transfer sessions
■ Automated and secured file transfers
■ Secure issuance of commands on remote devices or systems
■ Secure management of network infrastructure components
Default Port: 22
Configuration file : /etc/ssh/sshd_config
Package required: openssh-server, openssh-clients, openssh
Daemon Service: sshd
There are two types of SSH authentication:
1. Password Based Authentication – It uses password for allowing users to access their shell
remotely.
2. Key Based Authentication – It uses key pairs (i.e. public key and private key pair) for
authentication.
Syntax:
[root@server0 ~]# ssh user@server_ip
Example:
[root@server0 ~]# ssh [email protected]
[email protected]'s password:
You can enable or disable password based authentication by changing its configuration:
[root@server0 ~]# vi /etc/ssh/sshd_config
...
60 # To disable tunneled clear text passwords, change to no here!
61 #PasswordAuthentication yes
62 #PermitEmptyPasswords no
63 PasswordAuthentication yes
...
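After changing settings in /etc/ssh/sshd_config, the sshd service must be restarted for the change to take effect (a standard follow-up step, shown as a brief illustration):
[root@server0 ~]# systemctl restart sshd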
SSH key pairs are two cryptographically secure keys that can be used to authenticate a client to
an SSH server. Each key pair consists of a public key and a private key. The private key is retained by
the client and should be kept absolutely secret. Any compromise of the private key will allow the
attacker to log into servers that are configured with the associated public key without additional
authentication. As an additional precaution, the key can be encrypted on disk with a passphrase.
The associated public key can be shared freely without any negative consequences. The public key
can be used to encrypt messages that only the private key can decrypt. This property is employed as
a way of authenticating using the key pair.
The public key is uploaded to a remote server that you want to be able to log into with SSH. The
key is added to a special file within the user account you will be logging into
called ~/.ssh/authorized_keys.
When a client attempts to authenticate using SSH keys, the server can test the client on whether
they are in possession of the private key. If the client can prove that it owns the private key, a shell
session is spawned or the requested command is executed.
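Before the private key can be transferred, a key pair must exist and the public key must be placed in ~/.ssh/authorized_keys. A minimal sketch following the flow of this example, where the keys are generated on the server itself (an assumption based on the transfer step below):
[root@server0 ~]# ssh-keygen                                        -- creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
[root@server0 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys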
Transfer the private key to the client using any media such as a pen drive, mail, FTP, etc.
Here we are transferring the key using scp:
[root@server0 ~]# scp ~/.ssh/id_rsa [email protected]:/root    -- IP address of the client as the destination
Now, client can access server’s shell through ssh using private key,
Syntax: (To access secure shell using key-based authentication)
[root@server0 ~]# ssh -i private_key user@server_ip
Example
[root@client ~]# ssh -i /root/id_rsa [email protected]
Below are some examples that help you understand more about SSH.
You can execute a single command remotely without taking full SSH shell access:
[root@client ~]# ssh [email protected] mkdir /dir_name2
To get access to graphical applications using secure shell,
[root@client ~]# ssh -X [email protected]
[root@server0 ~]# firefox &    -- graphical application
The default SSH port can also be changed by editing /etc/ssh/sshd_config:
...
15 # semanage port -a -t ssh_port_t -p tcp #PORTNUMBER    -- execute this command by replacing #PORTNUMBER with the new port number, i.e. 2020
16 #
17 Port 2020
...
After pressing the power button, the system starts in the background until we see the login screen on
the display. The boot procedure occurs in six stages.
1. BIOS :
BIOS – The Basic Input/Output System is a firmware interface that controls not only the booting process
but also provides low-level control of attached peripheral devices. When the
system is powered on, it reads all the device settings and executes the POST (Power-On Self-Test)
process to recognize the hardware devices and to test and initialize the system hardware components.
This process is also called the System Integrity Check. After a successful POST process, it searches for a boot
loader program on hard drives, CD-ROM, floppy disk, etc. and loads the boot loader program into memory.
The boot sequence can be changed by interrupting this process (such as by pressing the "Delete" key). Once the boot
loader program is detected and loaded into memory, control is given to the boot loader
program.
2. MBR :
The Master Boot Record is placed in the first sector of the Linux boot hard drive, and this information
is loaded into memory by the BIOS. The MBR is only 512 bytes in size and it contains
the machine code instructions for booting the operating system, called the boot loader, along with
the partition table and a validation check. Once the BIOS finds and loads the boot loader (GRUB2)
program into memory, it hands control of the boot process to it. Simply put, the MBR
(Master Boot Record) loads and executes the GRUB2 boot loader.
3. GRUB2:
GRUB stands for Grand Unified Bootloader. GRUB2 is the default boot loader program in all the latest
versions, such as Red Hat/CentOS 7 and also Ubuntu from version 9.10. It replaces the older GRUB
boot loader, also known as GRUB Legacy. The GRUB2 configuration file is located at /boot/grub2/grub.cfg and
is automatically generated by grub2-mkconfig using templates from /etc/grub.d and settings from
/etc/default/grub. The boot loader (GRUB2 for RHEL 7) starts the RHEL 7 kernel and the initial RAM disk
(initrd). GRUB2 is installed in the boot sector of your server's hard drive and is configured to load a
Linux kernel and the initramfs; the initrd is an initial root file system that is mounted prior to the
real root file system on a Linux system. If you have multiple kernel images installed on your system, you
can choose which one is to be executed. GRUB displays a splash screen and waits for a few seconds; if you
don't enter anything, it loads the default kernel image as specified in the GRUB configuration file. GRUB
has knowledge of the file system. So, in simple terms, GRUB just loads and executes the kernel and initrd
images.
4. KERNEL
The Linux kernel is the central core of the OS and it is the first program loaded when the system starts
up. While the system is starting, the kernel loads all the necessary kernel modules and drivers from initrd.img
and then starts the system's first process, systemd, on RHEL 7. It mounts the root file system as specified by "root="
in grub.conf. The command below shows the systemd process ID (PID).
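A hedged example of such a command; either form shows that systemd runs with PID 1:
[root@localhost ~]# ps -p 1
[root@localhost ~]# pidof systemd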
5. SYSTEMD
The systemd process is the first process (PID 1) to run on RHEL 7 systems; it initializes the system and
launches all the services that were once started by the traditional init (/etc/init.d) process. The systemd
process reads the configuration file /etc/systemd/system/default.target and then loads the OS into the
targeted run level (target). This tells systemd to start everything in
/usr/lib/systemd/system/basic.target before starting the other multi-user services.
6. RUNLEVEL PROGRAMS
Systemd uses 'targets' instead of runlevels. By default, there are two main targets: multi-user.target (a non-graphical multi-user system, comparable to runlevel 3) and graphical.target (a multi-user system with a graphical interface, comparable to runlevel 5).
To recover the root password from this point, use the following procedure.
1. Remount /sysroot as read-write.
switch_root:/# mount -o remount,rw /sysroot
2. Switch into a chroot jail, where /sysroot is treated as the
root of the file system tree.
switch_root:/# chroot /sysroot
3. Set a new root password.
sh-4.2# passwd root
4. Make sure that the authentication update is applied successfully by having all unlabeled files
relabeled during boot.
sh-4.2# touch /.autorelabel
Chapter 19: Working with LDAP
LDAP, the Lightweight Directory Access Protocol, is an Internet protocol that email and other
programs use to look up information from a server. LDAP is mostly used by medium-to-large
organizations. If you belong to one that has an LDAP server, you can use it to look up contact info and
the like. An LDAP server is mostly used to create a centralized user account system so that users can access
their accounts from any system in the network. Following is an example of the client-side configuration of the
LDAP service.
Example,
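A hedged sketch of what a client-side setup commonly looks like on RHEL 7 using authconfig (the package, server name, and base DN here are illustrative placeholders, not values from the original example):
[root@desktop0 ~]# yum install -y sssd
[root@desktop0 ~]# authconfig --enableldap --enableldapauth --ldapserver=ldap://server.example.com --ldapbasedn="dc=example,dc=com" --enablemkhomedir --update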
Security Enhanced Linux (SELinux) is a security enhancement module on top of Linux. It provides
additional security measures, is included by default, and is set to enforcing mode in RHEL. If SELinux is
enabled and nothing else has been configured, all system calls are denied. To specify what exactly is
allowed, a policy is used. In this policy, rules define which source domain is allowed to access which
target domain. The source domain is the subject that is trying to access something. Typically, these
are processes or users. The target domain is the object that is accessed. Typically, these are files,
directories, or network ports. To define exactly what is allowed, context labels are used. These labels
are the essence of SELinux because they are used to define the access rules.
SELinux Core Elements:
Element          Use
Policy           A collection of rules that define which source has access to which target.
Source domain    The object that is trying to access a target. Typically a user or a process.
Target domain    The thing that a source domain is trying to access. Typically a file or a port.
Context          A security label that is used to categorize objects in SELinux.
Rule             A specific part of the policy that determines which source domain has which access permissions to which target domain.
Labels           Same as context label; defined to determine which source domain has access to which target domain.
Operational Modes:
There are three modes of operating SELinux: Enforcing, Permissive, and Disabled.
Enforcing mode: In enforcing mode, SELinux is turned on and all the security policy rules are enforced.
Permissive mode: In permissive mode, SELinux is turned on but the security policy rules are not
enforced. When a security policy rule should deny access, access is still allowed; however, a
message is sent to a log file noting that access should have been denied. This mode can be useful
for testing new applications, testing new SELinux policy rules, and troubleshooting a particular problem.
Disabled mode: In disabled mode, SELinux is turned off. Only DAC rules are applied for access control.
No logs are generated. Disabled mode can be useful in circumstances where enhanced security
is not required.
Note: You cannot disable SELinux using a command; you have to disable it from the configuration file.
The main SELinux configuration file is /etc/sysconfig/selinux. The setenforce command changes the
operational mode temporarily, whereas a mode set in the main configuration file is applied permanently.
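Checking and temporarily switching the mode (standard SELinux commands, shown as a brief illustration):
[root@desktop0 ~]# getenforce               -- show the current mode
[root@desktop0 ~]# setenforce 0             -- switch to permissive mode until the next reboot
[root@desktop0 ~]# setenforce 1             -- switch back to enforcing mode
For a permanent change, set SELINUX=enforcing, permissive, or disabled in /etc/sysconfig/selinux and reboot.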
Security Contexts:
An SELinux security context is a method to classify objects (such as files) and subjects (such as
processes and users). SELinux contexts, also called labels, allow SELinux policy rules to define how subjects
access objects. Context settings are an important part of SELinux operations. The context is a label
that can be applied to different elements:
■ Files and directories
■ Ports
■ Processes
■ Users
Context labels define the nature of the item, and SELinux rules are created to match context labels of
source objects to the context labels of target objects. So, setting correct context labels is a very
important skill for system administrators.
Monitor the context label of a file/directory,
[root@desktop0 ~]# ls -l myfile    -- show DAC rules for myfile
-rw-r--r--. 1 root root 0 Jul 24 15:26 myfile
[root@desktop0 ~]# ls -Z myfile    -- show SELinux context for myfile
-rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 myfile
More examples,
[root@desktop0 ~]# ps -eZ | grep bash    -- shows the SELinux context for processes (e.g. bash)
[root@desktop0 ~]# id                    -- shows the uid, gid, and SELinux context for the current user
Every context label always consists of three different parts:
■ User: The user can be recognized by _u in the context label; it is set to system_u on most directories.
SELinux users are not the same as Linux users.
■ Role: The role can be recognized by _r in the context label. Most objects are labeled with the
object_r role. In advanced SELinux management, specific SELinux users can be assigned permissions
to specific SELinux roles.
■ Type: The type context can be recognized by _t in the context label.
The secon command can be used to view the current SELinux security context:
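A minimal illustration (without arguments, secon prints the context of the current process):
[root@desktop0 ~]# secon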
To set a context permanently, change the context using the semanage command and then run the restorecon
command to apply it. The restorecon command resets the correct context and also updates the running
SELinux contexts.
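A hedged example of the semanage step for the /demo directory used below (the type httpd_sys_content_t is purely illustrative; the appropriate type depends on which service needs to access the directory):
[root@desktop0 ~]# semanage fcontext -a -t httpd_sys_content_t "/demo(/.*)?"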
[root@desktop0 ~]# restorecon -vFR /demo
Note: All security contexts are reset when the system is booted if the /.autorelabel file exists; that file triggers a full relabel during boot.
Using Boolean Settings to Modify SELinux Settings
In the SELinux policy, there are many rules. Some of these rules allow specific activity, whereas
other rules deny that activity. Changing rules is not easy, and that is why SELinux Booleans are
provided to easily change the behavior of a rule.
An example of a Boolean is ftpd_anon_write, which by default is set to off. That means that
even if you have configured your FTP server to allow anonymous writes, the Boolean will still deny it,
and the anonymous user cannot upload any files. If a conflict exists between the setting of a parameter
in a service configuration file and in a Boolean, the Boolean always takes precedence. But Booleans
are easy to change.
To get a list of Booleans on your system, use getsebool -a . If you are looking for Booleans that
are set for a specific service, use grep to filter down the results.
An alternative way to show current Boolean settings is by using the semanage boolean -l
command. This command provides some more details, because it shows the current Boolean setting
and the default Boolean setting.
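Listing and changing a Boolean, shown as a brief illustration (ftpd_anon_write is the Boolean discussed above; the -P option makes the change persistent across reboots):
[root@desktop0 ~]# getsebool -a | grep ftp
[root@desktop0 ~]# setsebool -P ftpd_anon_write on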
Chapter 21: Configuring a Firewall
In computing, a firewall is a network security system that monitors and controls incoming and
outgoing network traffic based on predetermined security rules. A firewall typically establishes a
barrier between a trusted internal network and untrusted external network, such as the Internet.
Firewalls are often categorized as either hardware firewalls or software firewalls. Hardware
firewalls (also called as network firewalls) filter traffic between two or more networks and run on
network hardware. Software firewalls (also called as host-based firewall) run on host computers and
control network traffic in and out of those machines. Network traffic is controlled by allowing or blocking ports.
In Linux, firewalld is the firewall management service used to implement persistent
network traffic rules; it is controlled with the firewall-cmd command-line client. It provides command-line and graphical interfaces and is available in the
repositories of most Linux distributions. firewalld uses zones and services to manage traffic
dynamically, so it can update rules without breaking existing sessions and connections. By default,
the firewall allows all outbound traffic whereas it denies inbound traffic on ports that have not been opened.
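Some commonly used firewall-cmd operations, shown as a hedged illustration (the service and port values are examples only):
[root@desktop0 ~]# firewall-cmd --permanent --add-service=http     -- allow a predefined service
[root@desktop0 ~]# firewall-cmd --permanent --add-port=8080/tcp    -- allow a specific port
[root@desktop0 ~]# firewall-cmd --reload                           -- apply the permanent rules to the running firewall
[root@desktop0 ~]# firewall-cmd --list-all                         -- show the active configuration of the default zone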
Port Forwarding,
The example rule below forwards traffic from port 80 to port 12345 on the same server.
[root@desktop0 ~]# firewall-cmd --add-forward-port=port=80:proto=tcp:toport=12345
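1. Forwarding traffic to another machine (as in the example that follows) normally also requires masquerading to be enabled on the zone first; a hedged illustration of that step:
[root@desktop0 ~]# firewall-cmd --zone=public --add-masquerade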
2. Add the forward rule. This example forwards traffic from local port 80 to port 8080 on a
remote server located at the IP address: 198.51.100.0
[root@desktop0 ~]# firewall-cmd --zone="public" --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=198.51.100.0
File Transfer Protocol (FTP) is a standard internet protocol for transferring files between
computers on the internet over TCP/IP connections. Here we are going to configure FTP using vsftpd
(very secure FTP) service and upload/download files over the network.
Configuration Basics:
Package name = vsftpd
Default Port = 21 tcp
Configuration file = /etc/vsftpd/vsftpd.conf
Default root directory = /var/ftp/
SELinux context = public_content_rw_t
Daemon/Service Name = vsftpd
Note: Assume that the IP of the server = 172.25.0.11 and the IP of the client = 172.25.0.10
Configuring an FTP Anonymous Drop Box
[root@server0 ~]# yum install vsftpd -y
[root@server0 ~]# vim /etc/vsftpd/vsftpd.conf
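For an anonymous drop box, the following directives are typically enabled in this file (a hedged sketch using standard vsftpd settings; the exact values may differ):
anonymous_enable=YES
write_enable=YES
anon_upload_enable=YES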
The FTP server uses the directory /var/ftp as the default document root. In this directory,
create a subdirectory with the name uploads (i.e. /var/ftp/uploads). Give permission 730 to
/var/ftp/uploads to set the correct permissions, and set the group owner to the group ftp. On an
anonymous drop box, users can write files, but they cannot read them.
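The directory setup described above, expressed as commands (a minimal sketch of the same steps, plus starting the service as an assumed final step):
[root@server0 ~]# mkdir /var/ftp/uploads
[root@server0 ~]# chgrp ftp /var/ftp/uploads
[root@server0 ~]# chmod 730 /var/ftp/uploads
[root@server0 ~]# systemctl start vsftpd
[root@server0 ~]# systemctl enable vsftpd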
Accessing from client,
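Connecting from the client typically looks like this (a hedged illustration; the anonymous login follows from the drop-box setup above):
[root@desktop0 ~]# ftp 172.25.0.11
Name (172.25.0.11:root): anonymous
Password: <press Enter>
230 Login successful.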
ftp> ls
227 Entering Passive Mode (172,31,49,176,244,22).
150 Here comes the directory listing.
-rw-r--r-- 1 0 0 0 May 21 08:48 file
drwxr-xr-x 2 0 0 6 Apr 01 04:55 pub
226 Directory send OK.
(To upload a file over FTP, you must have permission to upload files.)
Chapter 23: Configuring Time Services
The network time protocol (NTP) synchronizes the time of a computer client or server to
another server or within a few milliseconds of Coordinated Universal Time (UTC). Chrony is a flexible
implementation of the Network Time Protocol (NTP). It is used to synchronize the system clock from
different NTP servers, reference clocks, or via manual input. In Linux, we use the chrony service to
implement NTP. Following is an example of synchronizing the time of one system with an NTP server.
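The server entry below is added by editing the chrony configuration file (its standard location on RHEL 7):
[root@desktop0 ~]# vim /etc/chrony.conf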
server 3.classroom.example.com
:wq
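After saving the file, restart the chronyd service and verify synchronization (standard chrony commands, shown as a brief illustration):
[root@desktop0 ~]# systemctl restart chronyd
[root@desktop0 ~]# chronyc sources -v        -- list the time sources chrony is using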