Linux Programming and Data Mining Lab Manual PDF
LAB MANUAL
COMPUTER SCIENCE AND ENGINEERING
Program Outcomes
PO1 Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.
PO2 Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO3 Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4 Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with
an understanding of the limitations.
PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
PO7 Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms
of the engineering practice.
PO9 Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
PO11 Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
Program Specific Outcomes
PSO1 Professional Skills: The ability to research, understand and implement computer programs in the
areas related to algorithms, system software, multimedia, web design, big data analytics, and
networking for efficient analysis and design of computer-based systems of varying complexity.
PSO2 Problem-Solving Skills: The ability to apply standard practices and strategies in software project
development using open-ended programming environments to deliver a quality product for business
success.
PSO3 Successful Career and Entrepreneurship: The ability to employ modern computer languages,
environments, and platforms in creating innovative career paths, to be an entrepreneur, and a zest for
higher studies.
LINUX PROGRAMMING AND DATA MINING LAB SYLLABUS
LINUX PROGRAMMING
1 a) Write a shell script that accepts a file name, starting and ending line numbers as arguments and displays all the lines between the given line numbers.
  b) *Illustrate by writing a script that prints the message "Hello World" in bold and blink effect, and in different colors like red, brown, etc. using echo commands.
2 a) Write a shell script that deletes all lines containing a specified word in one or more files supplied as arguments to it.
  b) *Illustrate by writing a script using a for loop to print the following patterns:
     i) *      ii) 1
        **        22
        ***       333
        ****      4444
        *****     55555
3 a) Write a shell script that displays a list of all the files in the current directory to which the user has read, write and execute permissions.
  b) *Illustrate how to redirect the standard input (stdin) and the standard output (stdout) of a process, so that scanf() reads from the pipe and printf() writes into the pipe.
4 a) Write a shell script that receives any number of file names as arguments, checks if every argument supplied is a file or a directory and reports accordingly. Whenever the argument is a file, the number of lines in it is also reported.
  b) *Illustrate by writing a C program where a process forks a child, and the child creates another child process by using fork and then suddenly terminates itself.
5 Write a shell script that accepts a list of file names as its arguments, counts and reports the occurrence of each word that is present in the first argument file on other argument files.
6 Write a shell script to list all of the directory files in a directory.
7 Write a shell script to find the factorial of a given integer.
8 Write an awk script to count the number of lines in a file that do not contain vowels.
9 Write an awk script to find the number of characters, words and lines in a file.
10 Write a C program that makes a copy of a file using standard I/O and system calls.
11 Implement in C the following UNIX commands using system calls:
   a) cat b) ls c) mv
12 Write a program that takes one or more file/directory names as command line input and reports the following information on the file:
   a) File type b) Number of links c) Time of last access d) Read, write and execute permissions
13 Write a C program to emulate the UNIX ls -l command.
14 Write a C program to list, for every file in a directory, its inode number and file name.
15 Write a C program that demonstrates redirection of standard output to a file. Ex: ls > f1.
16 Write a C program to create a child process and allow the parent to display "parent" and the child to display "child" on the screen.
17 Write a C program to create a zombie process.
18 Write a C program that illustrates how an orphan is created.
19 Write a C program that illustrates how to execute two commands concurrently with a command pipe. Ex: ls -l | sort
20 Write C programs that illustrate communication between two unrelated processes using a named pipe.
21 Write a C program to create a message queue with read and write permissions to write 3 messages to it with different priority numbers.
22 Write a C program that receives the messages (from the above message queue as specified in (21)) and displays them.
23 Write a C program to allow cooperating processes to lock a resource for exclusive use, using
   a) Semaphores b) flock or lockf system calls.
24 Write a C program that illustrates suspending and resuming processes using signals.
25 Write a C program that implements a producer-consumer system with two processes.
26 Write client and server programs (using C) for interaction between server and client processes using Unix domain sockets (using semaphores).
27 Write client and server programs (using C) for interaction between server and client processes using Internet domain sockets.
28 Write a C program that illustrates two processes communicating using shared memory.
DATA MINING
1 List all the categorical (or nominal) attributes and the real-valued attributes separately.
2 What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes.
3 *What attributes do you think might be crucial in making the bank assessment?
4 One type of model that you can create is a Decision Tree - train a Decision Tree using the complete dataset as the training data. Report the model obtained after training.
5 Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each of the examples in the dataset. What % of examples can you classify correctly? (This is also called testing on the training set.) Why do you think you cannot get 100% training accuracy?
6 *Find out the correctly classified instances, root mean squared error, kappa statistic, and mean absolute error for the weather data set.
7 Is testing on the training set as you did above a good idea? Why or why not?
8 One approach for solving the problem encountered in the previous question is using cross-validation. Describe briefly what cross-validation is. Train a Decision Tree again using cross-validation and report your results. Does your accuracy increase/decrease? Why?
9 Check to see if the data shows a bias against "foreign workers" (attribute 20), or "personal-status" (attribute 9). One way to do this (perhaps rather simple minded) is to remove these attributes from the dataset and see if the decision tree created in those cases is significantly different from the full dataset case which you have already done. To remove an attribute you can use the preprocess tab in Weka's GUI Explorer. Did removing these attributes have any significant effect? Discuss.
10 *Load the 'weather.arff' dataset in Weka and run the ID3 classification algorithm. What problem do you have and what is the solution?
11 Another question might be, do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 (and 21, the class attribute (naturally)). Try out some combinations. (You had removed two attributes in problem 7. Remember to reload the arff data file to get all the attributes initially before you start selecting the ones you want.)
12 Sometimes, the cost of rejecting an applicant who actually has good credit (case 1) might be higher than accepting an applicant who has bad credit (case 2). Instead of counting the misclassifications equally in both cases, give a higher cost to the first case (say cost 5) and a lower cost to the second case. You can do this by using a cost matrix in Weka. Train your Decision Tree again and report the Decision Tree and cross-validation results. Are they significantly different from the results obtained in problem 6 (using equal cost)?
13 Do you think it is a good idea to prefer simple decision trees instead of having long complex decision trees? How does the complexity of a Decision Tree relate to the bias of the model?
14 *Run the J48 and IBk classifiers using (i) the cross-validation strategy with various fold levels and (ii) the holdout strategy with three percentage levels. Compare the accuracy results.
15 You can make your Decision Trees simpler by pruning the nodes. One approach is to use Reduced Error Pruning. Explain this idea briefly. Try reduced error pruning for training your Decision Trees using cross-validation (you can do this in Weka) and report the Decision Tree you obtain. Also, report your accuracy using the pruned model. Does your accuracy increase?
16 (Extra Credit): How can you convert a Decision Tree into "if-then-else" rules? Make up your own small Decision Tree consisting of 2-3 levels and convert it into a set of rules. There also exist different classifiers that output the model in the form of rules; one such classifier in Weka is rules.PART. Train this model and report the set of rules obtained. Sometimes just one attribute can be good enough in making the decision, yes, just one! Can you predict what attribute that might be in this dataset? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR.
17 *Run J48 and Naïve Bayes classifiers on the following datasets and determine the accuracy:
   1. vehicle.arff
   2. kr-vs-kp.arff
   3. glass.arff
   4. wave-form-5000.arff
   On which datasets does Naïve Bayes perform better?
ATTAINMENT OF PROGRAM OUTCOMES
& PROGRAM SPECIFIC OUTCOMES
Each experiment is listed below with the Program Outcomes (POs) and Program Specific Outcomes (PSOs) attained.
LINUX PROGRAMMING
1 a) Write a shell script that accepts a file name, starting and ending line numbers as arguments and displays all the lines between the given line numbers. b) *Illustrate by writing a script that prints the message "Hello World" in bold and blink effect, and in different colors like red, brown, etc. using echo commands. (POs: PO1, PO2; PSOs: PSO1)
2 a) Write a shell script that deletes all lines containing a specified word in one or more files supplied as arguments to it. b) *Illustrate by writing a script using a for loop to print the star and number patterns given in experiment 2 of the syllabus. (POs: PO1; PSOs: PSO1)
3 a) Write a shell script that displays a list of all the files in the current directory to which the user has read, write and execute permissions. b) *Illustrate how to redirect the standard input (stdin) and the standard output (stdout) of a process, so that scanf() reads from the pipe and printf() writes into the pipe. (POs: PO1, PO2; PSOs: PSO1)
4 a) Write a shell script that receives any number of file names as arguments, checks if every argument supplied is a file or a directory and reports accordingly. Whenever the argument is a file, the number of lines in it is also reported. b) *Illustrate by writing a C program where a process forks a child, and the child creates another child process by using fork and then suddenly terminates itself. (POs: PO1; PSOs: PSO1)
5 Write a shell script that accepts a list of file names as its arguments, counts and reports the occurrence of each word that is present in the first argument file on other argument files. (POs: PO1; PSOs: PSO1)
6 Write a shell script to list all of the directory files in a directory. (POs: PO1; PSOs: PSO1)
7 Write a shell script to find the factorial of a given integer. (POs: PO1; PSOs: PSO1)
8 Write an awk script to count the number of lines in a file that do not contain vowels. (POs: PO1; PSOs: PSO1)
9 Write an awk script to find the number of characters, words and lines in a file. (POs: PO1; PSOs: PSO1)
10 Write a C program that makes a copy of a file using standard I/O and system calls. (POs: PO1; PSOs: PSO1)
11 Implement in C the following UNIX commands using system calls: a) cat b) ls c) mv (POs: PO1, PO2; PSOs: PSO1)
12 Write a program that takes one or more file/directory names as command line input and reports the following information on the file: a) File type b) Number of links c) Time of last access d) Read, write and execute permissions (POs: PO1, PO2; PSOs: PSO1)
13 Write a C program to emulate the UNIX ls -l command. (POs: PO1; PSOs: PSO1)
14 Write a C program to list, for every file in a directory, its inode number and file name. (POs: PO1; PSOs: PSO1)
15 Write a C program that demonstrates redirection of standard output to a file. Ex: ls > f1. (POs: PO1; PSOs: PSO1)
16 Write a C program to create a child process and allow the parent to display "parent" and the child to display "child" on the screen. (POs: PO1; PSOs: PSO1)
17 Write a C program to create a zombie process. (POs: PO1; PSOs: PSO1)
18 Write a C program that illustrates how an orphan is created. (POs: PO1; PSOs: PSO1)
19 Write a C program that illustrates how to execute two commands concurrently with a command pipe. Ex: ls -l | sort (POs: PO1; PSOs: PSO1)
20 Write C programs that illustrate communication between two unrelated processes using a named pipe. (POs: PO1; PSOs: PSO1)
21 Write a C program to create a message queue with read and write permissions to write 3 messages to it with different priority numbers. (POs: PO1; PSOs: PSO1)
22 Write a C program that receives the messages (from the above message queue as specified in (21)) and displays them. (POs: PO1; PSOs: PSO1)
23 Write a C program to allow cooperating processes to lock a resource for exclusive use, using a) Semaphores b) flock or lockf system calls. (POs: PO1, PO2; PSOs: PSO1)
24 Write a C program that illustrates suspending and resuming processes using signals. (POs: PO1, PO2; PSOs: PSO1)
25 Write a C program that implements a producer-consumer system with two processes. (POs: PO1, PO2, PO4; PSOs: PSO1, PSO2)
26 Write client and server programs (using C) for interaction between server and client processes using Unix domain sockets (using semaphores). (POs: PO1, PO2, PO3, PO4; PSOs: PSO1, PSO2)
27 Write client and server programs (using C) for interaction between server and client processes using Internet domain sockets. (POs: PO1, PO2, PO3, PO4; PSOs: PSO1, PSO2)
28 Write a C program that illustrates two processes communicating using shared memory. (POs: PO1, PO2, PO3, PO4; PSOs: PSO1, PSO2)
DATA MINING
18 List all the categorical (or nominal) attributes and the real-valued attributes separately. (POs: PO1, PO2; PSOs: PSO1)
19 What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes. (POs: PO1, PO2; PSOs: PSO1, PSO2)
20 *What attributes do you think might be crucial in making the bank assessment? (POs: PO1, PO2, PO12; PSOs: PSO1, PSO2)
21 One type of model that you can create is a Decision Tree - train a Decision Tree using the complete dataset as the training data. Report the model obtained after training. (POs: PO1, PO2, PO5; PSOs: PSO1)
22 Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each of the examples in the dataset. What % of examples can you classify correctly? (This is also called testing on the training set.) Why do you think you cannot get 100% training accuracy? (POs: PO1, PO2; PSOs: PSO1, PSO2)
23 *Find out the correctly classified instances, root mean squared error, kappa statistic, and mean absolute error for the weather data set. (POs: PO1, PO2, PO5; PSOs: PSO1)
24 Is testing on the training set as you did above a good idea? Why or why not? (POs: PO1, PO2, PO5, PO12; PSOs: PSO1)
25 One approach for solving the problem encountered in the previous question is using cross-validation. Describe briefly what cross-validation is. Train a Decision Tree again using cross-validation and report your results. Does your accuracy increase/decrease? Why? (POs: PO1, PO2, PO5; PSOs: PSO1)
26 Check to see if the data shows a bias against "foreign workers" (attribute 20), or "personal-status" (attribute 9). One way to do this (perhaps rather simple minded) is to remove these attributes from the dataset and see if the decision tree created in those cases is significantly different from the full dataset case which you have already done. To remove an attribute you can use the preprocess tab in Weka's GUI Explorer. Did removing these attributes have any significant effect? Discuss. (POs: PO1, PO2, PO4, PO5; PSOs: PSO1)
27 *Load the 'weather.arff' dataset in Weka and run the ID3 classification algorithm. What problem do you have and what is the solution? (POs: PO1, PO2, PO5; PSOs: PSO1, PSO2)
28 Another question might be, do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 (and 21, the class attribute (naturally)). Try out some combinations. (You had removed two attributes in problem 7. Remember to reload the arff data file to get all the attributes initially before you start selecting the ones you want.) (POs: PO1, PO2, PO4, PO12; PSOs: PSO1)
29 Sometimes, the cost of rejecting an applicant who actually has good credit (case 1) might be higher than accepting an applicant who has bad credit (case 2). Instead of counting the misclassifications equally in both cases, give a higher cost to the first case (say cost 5) and a lower cost to the second case. You can do this by using a cost matrix in Weka. Train your Decision Tree again and report the Decision Tree and cross-validation results. Are they significantly different from the results obtained in problem 6 (using equal cost)? (POs: PO1, PO2, PO5, PO12; PSOs: PSO1, PSO2)
30 Do you think it is a good idea to prefer simple decision trees instead of having long complex decision trees? How does the complexity of a Decision Tree relate to the bias of the model? (POs: PO1, PO2, PO12; PSOs: PSO1, PSO2)
31 *Run the J48 and IBk classifiers using (i) the cross-validation strategy with various fold levels and (ii) the holdout strategy with three percentage levels. Compare the accuracy results. (POs: PO1, PO2, PO4, PO5; PSOs: PSO1, PSO2)
32 You can make your Decision Trees simpler by pruning the nodes. One approach is to use Reduced Error Pruning. Explain this idea briefly. Try reduced error pruning for training your Decision Trees using cross-validation (you can do this in Weka) and report the Decision Tree you obtain. Also, report your accuracy using the pruned model. Does your accuracy increase? (POs: PO1, PO2, PO4, PO5, PO12; PSOs: PSO1, PSO2)
33 (Extra Credit): How can you convert a Decision Tree into "if-then-else" rules? Make up your own small Decision Tree consisting of 2-3 levels and convert it into a set of rules. There also exist different classifiers that output the model in the form of rules; one such classifier in Weka is rules.PART. Train this model and report the set of rules obtained. Sometimes just one attribute can be good enough in making the decision, yes, just one! Can you predict what attribute that might be in this dataset? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR. (POs: PO1, PO2, PO4, PO5, PO12; PSOs: PSO1, PSO2)
34 *Run J48 and Naïve Bayes classifiers on the following datasets and determine the accuracy: 1. vehicle.arff 2. kr-vs-kp.arff 3. glass.arff 4. wave-form-5000.arff. On which datasets does Naïve Bayes perform better? (POs: PO1, PO2, PO4, PO5, PO12; PSOs: PSO1, PSO2)
LINUX PROGRAMMING AND DATA MINING LABORATORY
OBJECTIVE:
The Linux programming laboratory course covers the major methods of Inter-Process Communication (IPC), which are the basis of all client/server applications under Linux, along with Linux utilities, working with the Bourne Again Shell (bash), files, processes and signals. There will be extensive programming exercises in shell scripts. It also emphasizes various concepts in multithreaded programming and socket programming.
Data mining tools allow predicting future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The data mining laboratory course is designed to exercise data mining techniques such as classification, clustering and pattern mining with varied datasets and dynamic parameters. The Weka data mining tool is used for the purpose of acquainting the students with the basic environment of data mining tools.
OUTCOMES:
Upon the completion of Linux Programming and Data Mining practical course, the student will be able
to:
LINUX PROGRAMMING
EXPERIMENT 1
1.1 OBJECTIVE
a) Write a shell script that accepts a file name, starting and ending line numbers as arguments and
displays all the lines between the given line numbers.
b) *Illustrate by writing a script that prints the message "Hello World" in bold and blink effect, and in different colors like red, brown, etc. using echo commands.
1.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
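The script for part (a) is not reproduced in this extract; a minimal sketch under the stated requirements is given below (the argument order file name, starting line, ending line is an assumption):

# part (a): display the lines of $1 between line numbers $2 and $3
if [ $# -ne 3 ]
then
    echo "Usage: $0 filename startline endline"
    exit 1
fi
sed -n "$2,$3p" "$1"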
echo -e "\033[32m Hello World"   # green text
echo -e "\033[33m Hello World"   # yellow/brown text
echo -e "\033[34m Hello World"   # blue text
echo -e "\033[35m Hello World"   # magenta text
echo -e "\033[36m Hello World"   # cyan text
echo -e -n "\033[0m "            # reset attributes back to normal
echo -e "\033[41m Hello World"   # red background
echo -e "\033[42m Hello World"   # green background
echo -e "\033[43m Hello World"   # yellow background
echo -e "\033[44m Hello World"   # blue background
echo -e "\033[45m Hello World"   # magenta background
echo -e "\033[46m Hello World"   # cyan background
echo -e "\033[0m Hello World"    # back to normal
EXPERIMENT 2
2.1 OBJECTIVE
a) Write a shell script that deletes all lines containing a specified word in one or more files
supplied as arguments to it.
b) *Illustrate by writing a script using a for loop to print the star and number patterns given in the syllabus.
2.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
INPUT:
sh prog2.sh 3.sh
enter the word
echo
OUTPUT:
The given input filename is : 3.sh
It displays all the lines other than pattern matching
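The script for part (a) is not included in this extract; a minimal sketch that matches the sample run above is shown below (the prompt text is an assumption, and the non-matching lines are displayed as in the OUTPUT; redirecting the result to a temporary file and moving it back over the original would delete the lines in place):

# part (a): show every line that does not contain the given word, for each file argument
echo "enter the word"
read word
for f in "$@"
do
    echo "The given input filename is : $f"
    grep -v "$word" "$f"
done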
# pattern (i): triangle of stars; for pattern (ii) print "$i" instead of "*"
for i in 1 2 3 4 5
do
  for j in $(seq 1 $i); do echo -n "*"; done
  echo ""
done
EXPERIMENT 3
3.1 OBJECTIVE
a) Write a shell script that displays a list of all the files in the current directory to which the user has read,
write and execute permissions.
b) Illustrate to redirect the standard input (stdin) and the standard output (stdout) of a process, so that
scanf () reads from the pipe and printf () writes into the pipe?
3.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
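The script for part (a) is not reproduced in this extract; a minimal sketch is given below:

# part (a): list files in the current directory with read, write and execute permission for the user
for f in *
do
    if [ -r "$f" ] && [ -w "$f" ] && [ -x "$f" ]
    then
        echo "$f"
    fi
done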
EXPERIMENT 4
4.1 OBJECTIVE
a) Write a shell script that receives any number of file names as arguments checks if every argument
supplied is a file or a directory and reports accordingly. Whenever the argument is a file, the number
of lines on it is also reported.
b) *Illustrate by writing a C program where a process forks a child, and the child creates another child process by using fork and then suddenly terminates itself.
4.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
INPUT:
sh prog4.sh
OUTPUT:
enter the name
file 3.sh
number of lines 9
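The script for part (a) is missing from this extract; a minimal sketch that reports on each argument is given below (the exact wording of the messages is an assumption):

# part (a): report whether each argument is a file or a directory; for files, also report the line count
for arg in "$@"
do
    if [ -d "$arg" ]
    then
        echo "$arg is a directory"
    elif [ -f "$arg" ]
    then
        echo "file $arg"
        echo "number of lines $(wc -l < "$arg")"
    else
        echo "$arg is neither a file nor a directory"
    fi
done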
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t childpid;
    int status;
    pid_t mypid = getpid();

    /* "Parent" forks "Child 1" */
    childpid = fork();
    if ( childpid == -1 ) {
        perror("Cannot proceed. fork() error");
        return 1;
    }
if (childpid == 0) {
printf("Child 1: I inherited my parent's pid as %d.\n", mypid);
mypid = getpid();
printf("Child 1: getppid() tells my parent is %d. My own pid instead is %d.\n", getppid(), mypid);
/* forks another child */
childpid = fork();
if ( childpid == -1 ) {
perror("Cannot proceed. fork() error");
return 1;
}
if (childpid == 0) {
/* this is the child of the first child, thus "Child 2" */
printf("Child 2: I hinerited my parent's PID as %d.\n", mypid);
mypid = getpid();
printf("Child 2: getppid() tells my parent is %d. My own pid instead is %d.\n", getppid(),
mypid);
childpid = fork();
if ( childpid == -1 ) {
perror("Cannot proceed. fork() error");
return 1;
}
if (childpid == 0) {
/* "Child 3" sleeps 30 seconds then terminates 12, hopefully before its parent "Child 2" */
printf("Child 3: I hinerited my parent's PID as %d.\n", mypid);
mypid = getpid();
printf("Child 3: getppid() tells my parent is %d. My own pid instead is %d.\n", getppid(),
mypid);
sleep(30);
return 12;
} else /* the parent "Child 2" suddenly returns 15 */ return 15;
} else {
/* this is still "Child 1", which waits for its child to exit */
while ( waitpid(childpid, &status, WNOHANG) == 0 ) sleep(1);
if ( WIFEXITED(status) ) printf("Child1: Child 2 exited with exit status %d.\n",
WEXITSTATUS(status));
else printf("Child 1: child has not terminated correctly.\n");
}
} else {
/* then we're the parent process, "Parent" */
printf("Parent: fork() went ok. My child's PID is %d\n", childpid);
/* wait for the child to terminate and report about that */
wait(&status);
if ( WIFEXITED(status) ) printf("Parent: child has exited with status %d.\n",
WEXITSTATUS(status));
else printf("Parent: child has not terminated normally.\n");
}
return 0;
}
4.6 LAB ASSIGNMENT
1. Write a shell script to count number of txt,c and shell programs present in current directory.
2. Write a shell script to count number of only files present in current directory.
EXPERIMENT 5
5.1 OBJECTIVE
Write a shell script that accepts a list of file names as its arguments, counts and reports the occurrence
of each word that is present in the first argument file on other argument files.
5.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
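The script itself is not reproduced here; a minimal sketch is shown below (grep -o -w, as in GNU grep, is assumed to be available):

# count occurrences of every word of the first argument file in the remaining argument files
first=$1
shift
for word in $(cat "$first")
do
    for f in "$@"
    do
        count=$(grep -o -w "$word" "$f" | wc -l)
        echo "$word occurs $count time(s) in $f"
    done
done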
EXPERIMENT 6
6.1 OBJECTIVE
Write a shell script to list all of the directory files in a directory.
6.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
INPUT:
sh Lp6.sh
Enter dir name
Presanna
OUTPUT:
Files in prasanna are
3.sh
4.sh
pp2.txt
EXPERIMENT 7
7.1 OBJECTIVE
Write a shell script to find factorial of a given number.
7.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
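The script is not reproduced here; a minimal sketch using shell arithmetic is given below:

# read an integer and compute its factorial with a while loop
echo "enter a number"
read n
fact=1
i=1
while [ $i -le $n ]
do
    fact=$((fact * i))
    i=$((i + 1))
done
echo "factorial of $n is $fact"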
EXPERIMENT 8
8.1 OBJECTIVE
Write an awk script to count the number of lines in a file that do not contain vowels.
8.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
INPUT:
awk -f prog8.awk lp1.sh
Displaying number of lines in a file that do not contain vowels
OUTPUT:
The total lines in a file that do not contain vowels:1
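The awk program is not reproduced here; a minimal sketch that matches the sample output is given below (saved as prog8.awk and run with awk -f):

# count the lines that contain no vowel, upper or lower case
BEGIN { count = 0 }
!/[aeiouAEIOU]/ { count++ }
END { print "The total lines in a file that do not contain vowels:" count }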
EXPERIMENT 9
9.1 OBJECTIVE
Write an awk script to find the number of characters, words and lines in a file.
9.2 RESOURCE/REQUIREMENTS
Linux operating system ,vi-editor, shell-interpreter
INPUT:
awk -f prog9.awk lp5.sh
OUTPUT:
The total number of characters, words and lines in a file is:
Words:12
Lines:3
Chars:39
EXPERIMENT 10
10.1 OBJECTIVE
Write a C program that makes a copy of a file using system calls
10.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, shell interpreter
INPUT:
cc prog10.c
./a.out
enter the source file name:
file1
enter the destination file name:
file2
OUTPUT:
The copy of the file is successful
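The program is not reproduced here; a minimal sketch using the open/read/write system calls and the prompts from the sample run is given below:

/* copy the source file to the destination file block by block */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char src[100], dst[100], buf[1024];
    int in, out;
    ssize_t n;

    printf("enter the source file name:\n");
    scanf("%99s", src);
    printf("enter the destination file name:\n");
    scanf("%99s", dst);

    in = open(src, O_RDONLY);
    out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        exit(1);
    }
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    printf("The copy of the file is successful\n");
    return 0;
}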
11.A.1 OBJECTIVE
Write a C Program to Implement the Unix command cat using system calls.
11.A.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, shell interpreter
INPUT:
cc prog11a.c
./a.out unit1
OUTPUT:
Displays content of file
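The source of prog11a.c is not included in this extract; a minimal cat-like sketch using open/read/write is shown below (the file name is taken from the command line):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[1024];
    int fd;
    ssize_t n;

    if (argc != 2) {
        printf("usage: %s filename\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* write each block to standard output */
        write(1, buf, n);
    close(fd);
    return 0;
}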
11.B.1 OBJECTIVE
Write a C Program to Implement the Unix command ls using system calls
11. B.3 PROGRAM LOGIC
1. Open a directory of given directory name
2. Scan directory and Read file and display filename to output stream
3. Repeat step 2 till eof directory reach.
INPUT:
Cc prog11c.c
OUTPUT:
Current Working Directory =/home/prasanna
Number of files:2
Lp1.sh
lp2.sh
11. B.7 POST-LAB QUESTIONS
1. What is the difference between system call and library functions
11.C.1 OBJECTIVE
Write a C Program to Implement the Unix command mv using system calls.
INPUT:
cc mv.c file1 file2
OUTPUT:
# creates file2 and copies the content of file1 to file2 and removes file1
EXPERIMENT 12
12.1 OBJECTIVE
Write a C program that takes one or more file or directory names as command line input and reports
the following information on the file.
1. file type
2. number of links
3. read, write and execute permissions
4. time of last access
(Note: use the stat/fstat system calls)
12.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
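The program is not reproduced here; a minimal sketch using stat() for each command line argument is given below (the output format is an assumption):

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    struct stat sb;
    int i;

    for (i = 1; i < argc; i++) {
        if (stat(argv[i], &sb) < 0) {
            perror(argv[i]);
            continue;
        }
        printf("%s:\n", argv[i]);
        printf("  file type          : %s\n", S_ISDIR(sb.st_mode) ? "directory" :
                                              S_ISREG(sb.st_mode) ? "regular file" : "other");
        printf("  number of links    : %ld\n", (long)sb.st_nlink);
        printf("  last access        : %s", ctime(&sb.st_atime));
        printf("  owner permissions  : %c%c%c\n",
               (sb.st_mode & S_IRUSR) ? 'r' : '-',
               (sb.st_mode & S_IWUSR) ? 'w' : '-',
               (sb.st_mode & S_IXUSR) ? 'x' : '-');
    }
    return 0;
}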
EXPERIMENT 13
13.1 OBJECTIVE
Write a C program to emulate the Unix ls -l command.
13.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
EXPERIMENT 14
14.1 OBJECTIVE
Write a C program to list for every file in a directory, its inode number and file name
14.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
cc inode.c –o inode
./inode
OUTPUT:
FILE NAME INODE NUMBER
………….. 4195164
File2.c 4195164
….
File1.c 4195164
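The program is not reproduced here; a minimal sketch using opendir()/readdir() and the d_ino field is shown below (the directory defaults to the current one):

#include <stdio.h>
#include <dirent.h>

int main(int argc, char *argv[])
{
    DIR *dp;
    struct dirent *entry;
    const char *dir = (argc > 1) ? argv[1] : ".";

    dp = opendir(dir);
    if (dp == NULL) {
        perror("opendir");
        return 1;
    }
    printf("FILE NAME          INODE NUMBER\n");
    while ((entry = readdir(dp)) != NULL)        /* one line per directory entry */
        printf("%-18s %lu\n", entry->d_name, (unsigned long)entry->d_ino);
    closedir(dp);
    return 0;
}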
EXPERIMENT 15
15.1 OBJECTIVE
Write a C program that demonstrates redirection of standard output to a file. Ex: ls >f1.
/* freopen example: redirecting stdout */
15.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
cc 15.c –o file1
./file1
OUTPUT:
This sentence is redirected to a file which is given at output
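The program is not reproduced here; a minimal sketch built around the freopen() hint in the objective is given below (the output file name defaults to f1):

#include <stdio.h>

int main(int argc, char *argv[])
{
    const char *name = (argc > 1) ? argv[1] : "f1";

    if (freopen(name, "w", stdout) == NULL) {   /* every printf after this goes to the file */
        perror("freopen");
        return 1;
    }
    printf("This sentence is redirected to a file which is given at output\n");
    fclose(stdout);
    return 0;
}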
EXPERIMENT 16
16.1 OBJECTIVE
Write a C program to create a child process and allow the parent to display “parent” and the child
to display “child” on the screen.
16.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
$cc fork.c ... run the program.
./a.out
OUTPUT:
I'm the original process with PID 13292 and PPID 13273.
I'm the parent process with PID 13292 and PPID 13273.
My child's PID is 13293.
I'm the child process with PID 13293 and PPID 13292.
PID 13293 terminates. ... child terminates.
PID 13292 terminates. ... parent terminates.
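The program is not reproduced here; a minimal sketch that produces output like the sample run above is given below:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid;

    printf("I'm the original process with PID %d and PPID %d.\n", getpid(), getppid());
    pid = fork();
    if (pid == 0) {
        printf("I'm the child process with PID %d and PPID %d.\n", getpid(), getppid());
    } else {
        printf("I'm the parent process with PID %d and PPID %d.\n", getpid(), getppid());
        printf("My child's PID is %d.\n", pid);
        wait(NULL);                            /* wait for the child to finish */
    }
    printf("PID %d terminates.\n", getpid());
    return 0;
}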
EXPERIMENT 17
17.1 OBJECTIVE
Write a C program to create a Zombie process.
17.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
$ cc prog17.c
./a.out& ... execute the program in the background.
[1] 13545
OUTPUT:
$ ps
PID TT STAT TIME COMMAND
13535 p2 s 0:00 -ksh(ksh) ...the shell
13545 p2 s 0:00 zombie.exe ...the parent process
13536 p2 z 0:00 <defunct> ...the zombie child process
13537 p2 R 0:00 ps
$ kill 13545 ... kill the parent process.
[1] Terminated zombie.exe
$ ps ... notice the zombie is gone now.
PID TT STAT TIME COMMAND
13535 p2 s 0:00 -csh(csh)
13548 p2 R 0:00 ps
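The program is not reproduced here; a minimal sketch is given below: the child exits immediately while the parent never calls wait(), so ps reports the child as <defunct> until the parent is killed.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        exit(0);                /* child terminates at once */
    while (1)
        sleep(1);               /* parent stays alive without reaping the child */
    return 0;
}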
EXPERIMENT 18
18.1 OBJECTIVE
Write a C program that illustrates how an orphan is created.
18.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
$cc prog18.c
./a.out ... run the program.
OUTPUT:
I'm the original process with PID 13364 and PPID 13346.
I'm the parent process with PID 13364 and PPID 13346.
PID 13364 terminates.
I'm the child process with PID 13365 and PPID 1. ...orphaned!
PID 13365 terminates. ... child terminates.
EXPERIMENT 19
19.1 OBJECTIVE
Write a C program that illustrates how to execute two commands concurrently with a command
pipe. Eg. ls-l|sort.
19.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
break;
case S_IFIFO: printf(" p");
break;
}
if(S_IRUSR & b.st_mode)
printf(" r");
else
printf(" -");
if(S_IWUSR & b.st_mode)
printf(" w");
else
printf(" -");
if(S_IXUSR & b.st_mode)
printf(" x");
else
printf(" -");
if(S_IRGRP & b.st_mode)
printf(" r");
else
printf(" -");
if(S_IWGRP & b.st_mode)
printf(" w");
else
printf(" -");
if(S_IXGRP & b.st_mode)
printf(" x");
else
printf(" -");
if(S_IROTH & b.st_mode)
printf(" r");
else
printf(" -");
if(S_IWOTH & b.st_mode)
printf(" w");
else
printf(" -");
if(S_IXOTH & b.st_mode)
printf(" x");
else
printf(" -");
printf("%3d ",b.st_nlink);
printf("%4d ",b.st_uid);
printf("%4d ",b.st_gid);
printf("%6d ",b.st_size);
printf("%9ld",b.st_ctime);
printf(" %s\n",p->d_name);
}
}
INPUT:
vi 19.c
cc 19.c
./a.out
OUTPUT:
- r w - r w - r - - 1 500 500 1506 1380610351 19.c
- r w - r w - r - - 1 500 500 0 1380523478 2
- r w - r w - r - - 1 500 500 0 1380523478 3
- r w x r w x r - x 1 500 500 6038 1380610357 a.out
- r w - r w - r - - 1 500 500 0 1380523478 1
- r w - r w - r - - 1 500 500 421 1380524812 12.c
- r w - r w - r - - 1 500 500 0 1380523478 4
d r w x - - - - - - 15 500 500 4096 1380609957 ..
d r w x r w x r - x 2 500 500 4096 1380610357 .
- r w - r w - r - - 1 500 500 347 1380523684 13.c
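The listing and output above emulate ls -l; the pipe part of the objective is not shown in this extract. A minimal sketch that connects ls -l to sort through a pipe with fork(), dup2() and execlp() is given below (it assumes ls and sort are on the PATH):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];

    if (pipe(fd) < 0) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                 /* first child runs "ls -l" and writes into the pipe */
        dup2(fd[1], 1);
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp ls");
        return 1;
    }
    if (fork() == 0) {                 /* second child runs "sort" and reads from the pipe */
        dup2(fd[0], 0);
        close(fd[0]); close(fd[1]);
        execlp("sort", "sort", (char *)NULL);
        perror("execlp sort");
        return 1;
    }
    close(fd[0]); close(fd[1]);        /* parent closes both ends and waits for the children */
    wait(NULL);
    wait(NULL);
    return 0;
}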
EXPERIMENT 20
20.1 OBJECTIVE
Write a C program in which a parent writes a message to a pipe and the child reads the message.
20.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
cc prog20.c
./a.out
OUTPUT:
CHILD: Writing to the pipe
CHILD:Exiting
PARENT:reading from the pipe
PARENT:Received Data is : Hello World, I am child
EXPERIMENT 21
21.1 OBJECTIVE
Write a C program (sender.c) to create a message queue with read and write permissions to write 3
messages to it with different priority numbers.
21.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
cc prog21.c
OUTPUT:
Enter the message to send: hi
Enter the message to send: hello, how are you
Enter the message to send: bye
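The sender program is not reproduced here; a minimal sketch using msgget()/msgsnd() is given below (the key value 1234 and the use of mtype as the priority number are assumptions):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct mq_message {
    long mtype;                 /* used here as the priority number */
    char mtext[100];
};

int main(void)
{
    int msqid = msgget((key_t)1234, IPC_CREAT | 0666);   /* read/write permissions */
    struct mq_message msg;
    long prio;

    if (msqid < 0) {
        perror("msgget");
        return 1;
    }
    for (prio = 1; prio <= 3; prio++) {                  /* three messages, three priorities */
        msg.mtype = prio;
        printf("Enter the message to send: ");
        fgets(msg.mtext, sizeof(msg.mtext), stdin);
        if (msgsnd(msqid, &msg, strlen(msg.mtext) + 1, 0) < 0) {
            perror("msgsnd");
            return 1;
        }
    }
    return 0;
}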
EXPERIMENT 22
22.1 OBJECTIVE
Write a C program (receiver.c) that receives the messages (from the above message queue as
specified in (21)) and displays them.
22.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
INPUT:
cc prog22.c
./a.out
OUTPUT:
Message received from sender is: hi
Message received from sender is: hello, how are you
Message received from sender is: bye
EXPERIMENT 23
23.1 OBJECTIVE
Write a C program to allow cooperating processes to lock a resource for exclusive use, using
a) Semaphores b) flock or lockf system calls.
23.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
EXPERIMENT 24
24.1 OBJECTIVE
Write a C program that illustrates suspending and resuming processes using signals.
24.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
EXPERIMENT 25
25.1 OBJECTIVE
Write a C program that implements a producer-consumer system with two processes. (using
Semaphores).
25.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
semop(sem_set_id,&sem_op,1);
printf(“producer:‟%d‟\n”,i);
fflush(stdout);
}
break;
default:
for(i=0;i<num_loops;i++)
{
printf(“consumer:‟%d‟\n”,i);
fflush(stdout);
sem_op.sem_num=0;
sem_op.sem_op=1;
sem_op.sem_flg=0;
semop(sem_set_id,&sem_op,1);
if(rand()>3*(rano_max14));
{
delay.tv_sec=0;
delay.tv_nsec=10;
nanosleep(&delay,null);
}
}
break;
}
return 0;
}
EXPERIMENT 26
26.1 OBJECTIVE
Write client and server programs (using c) for interaction between server and client processes
using Unix Domain sockets.
26.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
close(connection_fd);
return 0;
}
int main(void)
{
struct sockaddr_un address;
int socket_fd, connection_fd;
socklen_t address_length;
pid_t child;
unlink("./demo_socket");
address.sun_family = AF_UNIX;
snprintf(address.sun_path, UNIX_PATH_MAX, "./demo_socket");
if(bind(socket_fd,
(struct sockaddr *) &address,
sizeof(struct sockaddr_un)) != 0)
{
printf("bind() failed\n");
return 1;
}
if(listen(socket_fd, 5) != 0)
{
printf("listen() failed\n");
return 1;
}
while((connection_fd = accept(socket_fd,
(struct sockaddr *) &address,
&address_length)) > -1)
{
child = fork();
if(child == 0)
{
/* now inside newly created connection handling process */
return connection_handler(connection_fd);
}
/* parent: close the connected socket and wait for the next client */
close(connection_fd);
}
close(socket_fd);
unlink("./demo_socket");
return 0;
}
Client.c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <string.h>
int main(void)
{
struct sockaddr_un address;
int socket_fd, nbytes;
char buffer[256];
socket_fd = socket(PF_UNIX, SOCK_STREAM, 0);
if(socket_fd < 0)
{
printf("socket() failed\n");
return 1;
}
address.sun_family = AF_UNIX;
snprintf(address.sun_path, UNIX_PATH_MAX, "./demo_socket");
if(connect(socket_fd,
(struct sockaddr *) &address,
sizeof(struct sockaddr_un)) != 0)
{
printf("connect() failed\n");
return 1;
}
EXPERIMENT 27
27.1 OBJECTIVE
Write client and server programs (using c) for interaction between server and client processes
using Internet Domain sockets.
27.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
Server.c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>
int main(int argc, char *argv[])
{
int listenfd = 0, connfd = 0;
struct sockaddr_in serv_addr;
char sendBuff[1025];
time_t ticks;
listenfd = socket(AF_INET, SOCK_STREAM, 0);
memset(&serv_addr, '0', sizeof(serv_addr));
memset(sendBuff, '0', sizeof(sendBuff));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(5000);
bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr));
listen(listenfd, 10);
while(1)
{
connfd = accept(listenfd, (struct sockaddr*)NULL, NULL);
ticks = time(NULL);
snprintf(sendBuff, sizeof(sendBuff), "%.24s\r\n", ctime(&ticks));
write(connfd, sendBuff, strlen(sendBuff));
close(connfd);
sleep(1);
}
}
Client.c
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
int main(int argc, char *argv[])
{
int sockfd = 0, n = 0;
char recvBuff[1024];
struct sockaddr_in serv_addr;
if(argc != 2)
{
printf("\n Usage: %s <ip of server> \n",argv[0]);
return 1;
}
memset(recvBuff, '0',sizeof(recvBuff));
if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
printf("\n Error : Could not create socket \n");
return 1;
}
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(5000);
if(inet_pton(AF_INET, argv[1], &serv_addr.sin_addr)<=0)
{
printf("\n inet_pton error occured\n");
return 1;
}
if(n < 0)
{
printf("\n Read error \n");
}
return 0;
}
27.5 PRE-LAB QUESTIONS
1. Explain about IPV6 socket address structure and compare it with IPV4 and unix socket address
structures.
EXPERIMENT 28
28.1 OBJECTIVE
Implement shared memory form of IPC
28.2 RESOURCE/REQUIREMENTS
Linux operating system, vi –editor, c-compiler
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#define SHMSZ 27
main()
{ char c;
int shmid;
key_t key;
char *shm, *s;
key = 5678;
if ((shmid = shmget(key, SHMSZ, IPC_CREAT | 0666)) < 0)
{ perror("shmget");
exit(1); }
if ((shm = shmat(shmid, NULL, 0)) == (char *) -1)
{ perror("shmat");
exit(1); }
s = shm;
for (c = 'a'; c <= 'z'; c++)
*s++ = c;
*s = '\0';
while (*shm != '*')        /* wait until the reader overwrites the first character */
sleep(1);
exit(0);
}
shm_client.c
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#define SHMSZ 27
main()
{
int shmid;
key_t key;
char *shm, *s;
key = 5678;
if ((shmid = shmget(key, SHMSZ, 0666)) < 0) {
perror("shmget");
exit(1); }
if ((shm = shmat(shmid, NULL, 0)) == (char *) -1) {
perror("shmat");
exit(1); }
for (s = shm; *s != '\0'; s++)
putchar(*s);
putchar('\n');
*shm = '*';
exit(0);
}
DATA MINING LAB
Credit Risk Assessment
Description: The business of banks is making loans. Assessing the credit worthiness of an applicant is of
crucial importance. You have to develop a system to help a loan officer decide whether the credit of a
customer is good, or bad. A bank's business rules regarding loans must consider two opposing factors. On the one hand, a bank wants to make as many loans as possible. Interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make too many bad loans. Too many bad loans could lead to the collapse of the bank. The bank's loan policy must involve a compromise: not too strict, and not too lenient.
To do the assignment, you first and foremost need some knowledge about the world of credit. You can
acquire such knowledge in a number of ways.
1. Knowledge Engineering. Find a loan officer who is willing to talk. Interview her and try to
represent her knowledge in the form of production rules.
2. Books. Find some training manuals for loan officers or perhaps a suitable textbook on finance.
Translate this knowledge from text form to production rule form.
3. Common sense. Imagine yourself as a loan officer and make up reasonable rules which can be
used to judge the credit worthiness of a loan applicant.
4. Case histories. Find records of actual cases where competent loan officers correctly judged when, and when not to, approve a loan application.
The German Credit Data :
Actual historical credit data is not always easy to come by because of confidentiality rules. Here is one
such dataset (original) Excel spreadsheet version of the German credit data (download from web).
In spite of the fact that the data is German, you should probably make use of it for this assignment,
(Unless you really can consult a real loan officer !)
A few notes on the German dataset :
DM stands for Deutsche Mark, the unit of currency, worth about 90 cents Canadian (but looks
and acts like a quarter).
Owns_telephone. German phone rates are much higher than in Canada so fewer people own
telephones.
Foreign_worker. There are millions of these in Germany (many from Turkey). It is very hard to
get German citizenship if you were not born of German parents.
There are 20 attributes used in judging a loan applicant. The goal is to classify the applicant into
one of two categories, good or bad.
Subtasks : (Turn in your answers to the following tasks)
EXPERIMENT-1
1.1 OBJECTIVE:
List all the categorical (or nominal) attributes and the real-valued attributes separately.
1.2 RESOURCES:
1.3 PROCEDURE:
1.4 OUTPUT:
EXPERIMENT-2
2.1 OBJECTIVE:
Which attributes do you think might be crucial in making the credit assessment? Come up with some
simple rules in plain English using your selected attributes.
2.2 RESOURCES:
2.3 THEORY:
Association rule mining is defined as follows. Let I = {i1, i2, ..., in} be a set of n binary attributes called items. Let D be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form X => Y where X, Y ⊆ I and X ∩ Y = ∅. The sets of items (for short, itemsets) X and Y are called the antecedent (left hand side or LHS) and consequent (right hand side or RHS) of the rule respectively.
To illustrate the concepts, we use a small example from the supermarket domain.
The set of items is I = {milk, bread, butter, beer} and a small database containing the items (1 codes presence and 0 absence of an item in a transaction) is shown in the table to the right. An example rule for the supermarket could be {milk, bread} => {butter}, meaning that if milk and bread are bought, customers also buy butter.
Note: this example is extremely small. In practical applications, a rule needs a support of several hundred
transactions before it can be considered statistically significant, and datasets often contain thousands or
millions of transactions.
To select interesting rules from the set of all possible rules, constraints on various measures of
significance and interest can be used. The best known constraints are minimum thresholds on support and
confidence. The support supp(X) of an itemset X is defined as the proportion of transactions in the data set
which contain the itemset. In the example database, the itemset {milk, bread} has a support of 2 / 5 = 0.4
since it occurs in 40% of all transactions (2 out of 5 transactions).
The confidence of a rule is defined as conf(X => Y) = supp(X ∪ Y) / supp(X). For example, the rule {milk, bread} => {butter} has a confidence of 0.2 / 0.4 = 0.5 in the database, which means that for 50% of the transactions containing milk and bread the rule is correct.
Confidence can be interpreted as an estimate of the probability P(Y | X), the probability of finding the
RHS of the rule in transactions under the condition that these transactions also contain the LHS .
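As a small worked illustration of these two measures (not part of the Weka exercise), the sketch below recomputes the support and confidence quoted above for the rule {milk, bread} => {butter} over an assumed 5-transaction basket database; the transaction values are illustrative assumptions chosen to reproduce the 0.4 and 0.5 figures.

#include <stdio.h>

int main(void)
{
    /* columns: milk, bread, butter, beer; one row per transaction (assumed toy data) */
    int db[5][4] = {
        {1, 1, 1, 0},
        {0, 1, 0, 0},
        {0, 0, 0, 1},
        {1, 1, 0, 0},
        {0, 1, 0, 0}
    };
    int both = 0, all3 = 0, i;

    for (i = 0; i < 5; i++) {
        if (db[i][0] && db[i][1]) {
            both++;                       /* transactions containing {milk, bread} */
            if (db[i][2])
                all3++;                   /* ... that also contain butter */
        }
    }
    printf("supp({milk, bread}) = %.2f\n", both / 5.0);
    printf("conf({milk, bread} => {butter}) = %.2f\n", (double)all3 / both);
    return 0;
}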
ALGORITHM:
Association rule mining is to find out association rules that satisfy the predefined minimum support and
confidence from a given database. The problem is usually decomposed into two sub problems. One is to
find those itemsets whose occurrences exceed a predefined threshold in the database; those itemsets are
called frequent or large itemsets. The second problem is to generate association rules from those large
itemsets with the constraints of minimal confidence.
Suppose one of the large itemsets is Lk = {I1, I2, ..., Ik}. Association rules with this itemset are generated in the following way: the first rule is {I1, I2, ..., Ik-1} => {Ik}; by checking the confidence, this rule can be determined as interesting or not. Then other rules are generated by deleting the last item in the antecedent and inserting it into the consequent, and the confidences of the new rules are checked to determine their interestingness. This process is iterated until the antecedent becomes empty.
Since the second subproblem is quite straightforward, most of the research focuses on the first subproblem. The Apriori algorithm finds the frequent itemsets L in database D.
· Join Step.
· Prune Step.
Apriori Pseudocode
Apriori(T, ε)
    k <- 2
    while L(k-1) ≠ Φ
        C(k) <- Generate(L(k-1))
        for each transaction t ∈ T
            C(t) <- Subset(C(k), t)
            for each candidate c ∈ C(t)
                count[c] <- count[c] + 1
        L(k) <- { c ∈ C(k) : count[c] ≥ ε }
        k <- k + 1
    return ∪k L(k)
2.4 PROCEDURE:
7) Select Start button
8) Now we can see the sample rules.
2.5 OUTPUT:
EXPERIMENT-3
3.1 OBJECTIVE:
*What attributes do you think might be crucial in making the bank assessment?
3.2 RESOURCES:
3.3 PROCEDURE:
3.4 OUTPUT:
EXPERIMENT-4
4.1 OBJECTIVE:
One type of model that you can create is a decision tree. Train a decision tree using the complete dataset
as the training data. Report the model obtained after training.
4.2 RESOURCES:
4.3 THEORY:
Classification is a data mining function that assigns items in a collection to target categories or classes.
The goal of classification is to accurately predict the target class for each case in the data. For example, a
classification model could be used to identify loan applicants as low, medium, or high credit risks. A
classification task begins with a data set in which the class assignments are known. For example, a
classification model that predicts credit risk could be developed based on observed data for many loan
applicants over a period of time.
In addition to the historical credit rating, the data might track employment history, home ownership or
rental, years of residence, number and type of investments, and so on. Credit rating would be the target,
the other attributes would be the predictors, and the data for each customer would constitute a case.
Classifications are discrete and do not imply order. Continuous, floating point values would indicate a
numerical, rather than a categorical, target. A predictive model with a numerical target uses a regression
algorithm, not a classification algorithm. The simplest type of classification problem is binary
classification. In binary classification, the target attribute has only two possible values: for example, high
credit rating or low credit rating. Multiclass targets have more than two values: for example, low,
medium, high, or unknown credit rating. In the model build (training) process, a classification algorithm
finds relationships between the values of the predictors and the values of the target. Different
classification algorithms use different techniques for finding relationships. These relationships are
summarized in a model, which can then be applied to a different data set in which the class assignments
are unknown.
Classification models are tested by comparing the predicted values to known target values in a set of test
data. The historical data for a classification project is typically divided into two data sets: one for building
the model; the other for testing the model. Scoring a classification model results in class assignments and
probabilities for each case. For example, a model that classifies customers as low, medium, or high value
would also predict the probability of each classification for each customer. Classification has many
applications in customer segmentation, business modeling, marketing, credit analysis, and biomedical and
drug response modeling.
Different Classification Algorithms: Oracle Data Mining provides the following algorithms for
classification:
Decision Tree - Decision trees automatically generate rules, which are conditional statements that
reveal the logic used to build the tree.
Naive Bayes - Naive Bayes uses Bayes' Theorem, a formula that calculates a probability by counting
the frequency of values and combinations of values in the historical data.
4.4 PROCEDURE:
4.5 OUTPUT:
The decision tree constructed by using the implemented C4.5 algorithm
EXPERIMENT-5
5.1 OBJECTIVE:
Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each
of the examples in the dataset. What % of examples can you classify correctly? (This is also called
testing on the training set) Why do you think you cannot get 100 % training accuracy?
5.2 RESOURCES:
5.3 THEORY:
Naive Bayes classifier assumes that the presence (or absence) of a particular feature of a class is unrelated
to the presence (or absence) of any other feature. For example, a fruit may be considered to be an apple if
it is red, round, and about 4" in diameter. Even though these features depend on the existence of the other
features, a naive Bayes classifier considers all of these properties to independently contribute to the
probability that this fruit is an apple.
An advantage of the naive Bayes classifier is that it requires a small amount of training data to estimate
the parameters (means and variances of the variables) necessary for classification. Because independent
variables are assumed, only the variances of the variables for each class need to be determined and not the entire covariance matrix.
The naive Bayes probabilistic model is a conditional model P(C | F1, ..., Fn) over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables F1 through Fn. The problem is that if the number of features n is large or when a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable. Using Bayes' theorem,
P(C | F1, ..., Fn) = p(C) p(F1, ..., Fn | C) / p(F1, ..., Fn)
In practice we are only interested in the numerator of that fraction, since the denominator does not depend
on C and the values of the features Fi are given, so that the denominator is effectively constant. The
numerator is equivalent to the joint probability model p(C,F1........Fn) which can be rewritten as follows,
using repeated applications of the definition of conditional probability:
p(C, F1, ..., Fn) = p(C) p(F1|C) p(F2|C, F1) p(F3, ..., Fn | C, F1, F2)
= p(C) p(F1|C) p(F2|C, F1) p(F3|C, F1, F2) ... p(Fn | C, F1, F2, F3, ..., Fn-1)
Now the "naive" conditional independence assumptions come into play: assume that each feature Fi is
conditionally independent of every other feature Fj for j≠i .
This means that p(Fi | C, Fj) = p(Fi | C) and so the joint model can be expressed as
p(C, F1, ..., Fn) = p(C) p(F1|C) p(F2|C) ... p(Fn|C) = p(C) Πi p(Fi|C)
This means that under the above independence assumptions, the conditional distribution over the class
variable C can be expressed like this:
where Z is a scaling factor dependent only on F1.........Fn, i.e., a constant if the values of the feature
variables are known.
Models of this form are much more manageable, since they factor into a so called class prior p(C) and
independent probability distributions p(Fi|C). If there are k classes and if a model for each p(Fi|C=c) can
be expressed in terms of r parameters, then the corresponding naive Bayes model has (k − 1) + n r k
parameters. In practice, often k = 2 (binary classification) and r = 1 (Bernoulli variables as features) are
common, and so the total number of parameters of the naive Bayes model is 2n + 1, where n is the
number of binary features used for prediction
D : the set of training tuples, where each tuple is represented by an n-dimensional attribute vector X = (x1, x2, x3, ..., xn).
5.4 PROCEDURE:
5.5 OUTPUT:
Weighted Avg.
=== Confusion Matrix ===
    a    b    <-- classified as
  245   29  |  a = YES
   17  309  |  b = NO
EXPERIMENT-6
6.1 OBJECTIVE:
*Find out the correctly classified instances, root mean squared error, kappa statistics, and mean absolute
error for weather data set?
6.2 RESOURCES:
6.3 THEORY:
Naive Bayes classifier assumes that the presence (or absence) of a particular feature of a class is unrelated
to the presence (or absence) of any other feature. For example, a fruit may be considered to be an apple if
it is red, round, and about 4" in diameter. Even though these features depend on the existence of the other
features, a naive Bayes classifier considers all of these properties to independently contribute to the
probability that this fruit is an apple.
An advantage of the naive Bayes classifier is that it requires a small amount of training data to estimate
the parameters (means and variances of the variables) necessary for classification. Because independent
variables are assumed, only the variances of the variables for each class need to be determined and not the entire covariance matrix.
The naive Bayes probabilistic model is a conditional model P(C | F1, ..., Fn) over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables F1 through Fn. The problem is that if the number of features n is large or when a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable. Using Bayes' theorem,
P(C | F1, ..., Fn) = p(C) p(F1, ..., Fn | C) / p(F1, ..., Fn)
In practice we are only interested in the numerator of that fraction, since the denominator does not depend
on C and the values of the features Fi are given, so that the denominator is effectively constant. The
numerator is equivalent to the joint probability model p(C,F1........Fn) which can be rewritten as follows,
using repeated applications of the definition of conditional probability:
p(C, F1, ..., Fn)
= p(C) p(F1, ..., Fn | C)
= p(C) p(F1 | C) p(F2, ..., Fn | C, F1)
= p(C) p(F1 | C) p(F2 | C, F1) p(F3, ..., Fn | C, F1, F2)
= p(C) p(F1 | C) p(F2 | C, F1) p(F3 | C, F1, F2) ... p(Fn | C, F1, F2, ..., Fn-1)
Now the "naive" conditional independence assumptions come into play: assume that each feature Fi is
conditionally independent of every other feature Fj for j≠i .
This means that p(Fi | C, Fj) = p(Fi | C) for j ≠ i, and so the joint model can be expressed as
p(C, F1, ..., Fn) = p(C) p(F1 | C) p(F2 | C) ... p(Fn | C) = p(C) ∏i p(Fi | C)
Under the above independence assumptions, the conditional distribution over the class variable C can
therefore be expressed as
p(C | F1, ..., Fn) = (1/Z) p(C) ∏i p(Fi | C)
where Z is a scaling factor that depends only on F1, ..., Fn, i.e., a constant if the values of the feature
variables are known.
Models of this form are much more manageable, since they factor into a so-called class prior p(C) and
independent probability distributions p(Fi | C). If there are k classes and if a model for each p(Fi | C = c)
can be expressed in terms of r parameters, then the corresponding naive Bayes model has (k − 1) + n·r·k
parameters. In practice, k = 2 (binary classification) and r = 1 (Bernoulli variables as features) are
common, and so the total number of parameters of the naive Bayes model is 2n + 1, where n is the
number of binary features used for prediction.
• D : the set of training tuples
– each tuple is an n-dimensional attribute vector X = (x1, x2, x3, ..., xn)
Under the independence assumption, P(X | Ci) is computed as the product over k = 1, ..., n of P(xk | Ci).
6.4 PROCEDURE:
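As a complement to the Explorer steps, the following minimal sketch prints exactly the statistics asked for in the objective (correctly classified instances, kappa, mean absolute error, root mean squared error) using the WEKA Java API. It assumes weka.jar is on the classpath and that the file name weather.nominal.arff points to a local copy of the weather data; both are assumptions for this sketch, not part of the manual.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class NaiveBayesWeather {
    public static void main(String[] args) throws Exception {
        // Load the ARFF file; the path is an assumption for this sketch.
        Instances data = new DataSource("weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);   // class = last attribute (play)

        NaiveBayes nb = new NaiveBayes();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(nb, data, 10, new Random(1));   // 10-fold cross-validation

        System.out.println("Correctly classified  : " + eval.pctCorrect() + " %");
        System.out.println("Kappa statistic       : " + eval.kappa());
        System.out.println("Mean absolute error   : " + eval.meanAbsoluteError());
        System.out.println("Root mean squared err : " + eval.rootMeanSquaredError());
        System.out.println(eval.toMatrixString());
    }
}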
6.5 OUTPUT:
(Classifier output: summary statistics and the detailed accuracy-by-class table, including the Weighted Avg. row.)
=== Confusion Matrix ===
   a    b   <-- classified as
 245   29 |  a = YES
  17  309 |  b = NO
EXPERIMENT-7
7.1 OBJECTIVE:
Is testing on the training set as you did above a good idea? Why or why not?
7.2 RESOURCES:
7.3 PROCEDURE:
1) In Test options, select the Supplied test set radio button
2) Click Set
3) Choose the file which contains records that were not in the training set we used to create the model.
4) Click Start. (WEKA will run this test data set through the model we already created.)
5) Compare the output results with those of the 4th experiment.
7.4 OUTPUT:
The exact figures vary with the data set used; the following is a representative run.
The important numbers to focus on are those next to "Correctly Classified Instances" (92.3 percent) and
"Incorrectly Classified Instances" (7.6 percent). Other important numbers are in the "ROC Area" column,
in the first row (0.936). Finally, the "Confusion Matrix" shows the number of false positives and false
negatives: here there are 29 false positives and 17 false negatives.
Based on the accuracy rate of 92.3 percent, we can say that, on initial analysis, this is a good model.
One final step in validating our classification tree is to run the test set through the model and check the
accuracy it achieves there.
Comparing the "Correctly Classified Instances" from this test set with the "Correctly Classified Instances"
from the training set shows how well the model generalizes; if the two figures are close, the model is
unlikely to break down on unknown data, i.e., when future data is applied to it.
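The same supplied-test-set evaluation can also be run outside the Explorer. Below is a minimal sketch using the WEKA Java API; the file names train.arff and test.arff are placeholders for the training and test sets used in this experiment, and weka.jar is assumed to be on the classpath.

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SuppliedTestSet {
    public static void main(String[] args) throws Exception {
        // File names are placeholders; substitute the data sets from the experiment.
        Instances train = new DataSource("train.arff").getDataSet();
        Instances test  = new DataSource("test.arff").getDataSet();
        train.setClassIndex(train.numAttributes() - 1);
        test.setClassIndex(test.numAttributes() - 1);

        J48 tree = new J48();
        tree.buildClassifier(train);               // build the model on the training set only

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(tree, test);            // evaluate on the supplied (unseen) test set
        System.out.println(eval.toSummaryString());
        System.out.println(eval.toMatrixString());
    }
}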
EXPERIMENT-8
8.1 OBJECTIVE:
One approach to solving the problem encountered in the previous question is to use cross-validation.
Describe briefly what cross-validation is. Train a Decision Tree again using cross-validation and report
your results. Does your accuracy increase or decrease? Why?
8.2 RESOURCES:
8.3 THEORY:
Decision tree learning, used in data mining and machine learning, uses a decision tree as a predictive
model which maps observations about an item to conclusions about the item's target value. In these tree
structures, leaves represent classifications and branches represent conjunctions of features that lead to
those classifications. In decision analysis, a decision tree can be used to visually and explicitly represent
decisions and decision making. In data mining, a decision tree describes data but not decisions; rather, the
resulting classification tree can be an input for decision making. This section deals with decision trees in
data mining.
Decision tree learning is a common method used in data mining. The goal is to create a model that
predicts the value of a target variable based on several input variables. Each interior node corresponds to
one of the input variables; there are edges to children for each of the possible values of that input variable.
Each leaf represents a value of the target variable given the values of the input variables represented by
the path from the root to the leaf.
A tree can be "learned" by splitting the source set into subsets based on an attribute value test. This
process is repeated on each derived subset in a recursive manner called recursive partitioning. The
recursion is completed when all the tuples in the subset at a node have the same value of the target variable, or when
splitting no longer adds value to the predictions. In data mining, trees can be described also as the
combination of mathematical and computational techniques to aid the description, categorization and
generalization of a given set of data.
The dependent variable, Y, is the target variable that we are trying to understand, classify, or generalize.
The vector x is composed of the input variables x1, x2, x3, etc. that are used for that task.
8.4 PROCEDURE:
6) Go to the Classify tab.
7) Under Classifier, choose the "trees" group.
8) Select J48.
9) Under Test options, select "Cross-validation".
10) Set "Folds", e.g. 10.
11) If needed, select the class attribute.
12) Click Start.
13) The output details appear in the Classifier output pane.
14) Compare the output results with those of the 4th experiment.
15) Check whether the accuracy increased or decreased.
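The same cross-validation run can be reproduced through the WEKA Java API. The sketch below is only an illustration: weka.jar is assumed to be on the classpath, and credit-g.arff is a placeholder name for the German credit data set used in these experiments.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48CrossValidation {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // data set name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));  // "Folds" = 10, as in step 10)
        System.out.println(eval.toSummaryString("=== 10-fold cross-validation ===", false));
        System.out.println(eval.toMatrixString());
    }
}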
8.5 OUTPUT:
Relative absolute error          33.6511 %
(Detailed accuracy by class, including the Weighted Avg. row)
=== Confusion Matrix ===
   a    b   <-- classified as
 236   38 |  a = YES
  23  303 |  b = NO
EXPERIMENT-9
9.1 OBJECTIVE
Check to see if the data shows a bias against "foreign workers" (attribute 20) or "personal-status"
(attribute 9). One way to do this (perhaps rather simple-minded) is to remove these attributes from the
dataset and see if the decision tree created in those cases is significantly different from the one for the
full dataset, which you have already built. To remove an attribute you can use the Preprocess tab in
Weka's GUI Explorer. Did removing these attributes have any significant effect? Discuss.
9.2 RESOURCES:
9.3 PROCEDURE:
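In the Explorer, the attributes can be ticked and removed in the Preprocess tab before rebuilding the tree. As a minimal API-level sketch of the same idea, the code below removes attributes 9 and 20 with the Remove filter and re-evaluates J48; weka.jar on the classpath and the file name credit-g.arff are assumptions of this sketch.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class RemoveAttributes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // German credit data; name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        Remove remove = new Remove();
        remove.setAttributeIndices("9,20");        // personal-status and foreign-worker (1-based indices)
        remove.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, remove);
        reduced.setClassIndex(reduced.numAttributes() - 1);

        Evaluation eval = new Evaluation(reduced);
        eval.crossValidateModel(new J48(), reduced, 10, new Random(1));
        System.out.println(eval.toSummaryString());   // compare with the full-attribute tree
    }
}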
9.4 OUTPUT:
EXPERIMENT-10
10.1 OBJECTIVE:
*Load the 'weather.arff' dataset in Weka and run the ID3 classification algorithm. What problem do you
have and what is the solution?
10.2 RESOURCES:
10.3 PROCEDURE:
10.4 OUTPUT:
When the ID3 classifier (trees > Id3) is run on 'weather.arff', WEKA reports that the classifier cannot
handle the data: Id3 works only with nominal attributes (and no missing values), while 'weather.arff'
contains the numeric attributes temperature and humidity. The solution is either to load the nominal
version of the data set ('weather.nominal.arff') or to convert the numeric attributes to nominal ones first,
for example with the unsupervised Discretize filter in the Preprocess tab, and then run ID3 again.
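A minimal sketch of the second solution (discretize, then run Id3) through the Java API is given below. Note the assumptions: weka.jar on the classpath, the file weather.arff available locally, and the Id3 class present as weka.classifiers.trees.Id3, which ships with older WEKA releases and in recent versions must first be installed via the simpleEducationalCode package.

import weka.classifiers.Classifier;
import weka.classifiers.trees.Id3;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class Id3Weather {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("weather.arff").getDataSet();  // numeric temperature/humidity
        data.setClassIndex(data.numAttributes() - 1);

        // Id3 accepts only nominal attributes, so discretize the numeric ones first.
        Discretize disc = new Discretize();
        disc.setInputFormat(data);
        Instances nominal = Filter.useFilter(data, disc);
        nominal.setClassIndex(nominal.numAttributes() - 1);

        Classifier id3 = new Id3();
        id3.buildClassifier(nominal);
        System.out.println(id3);   // print the learned tree
    }
}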
EXPERIMENT-11
11.1 OBJECTIVE:
Another question might be, do you really need to input so many attributes to get good results? Maybe
only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 (and 21, the class
attribute (naturally)). Try out some combinations. (You had removed two attributes in problem 7.
Remember to reload the arff data file to get all the attributes initially before you start selecting the ones
you want).
11.2 RESOURCES:
11.3 PROCEDURE:
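In the Explorer this is done by ticking the unwanted attributes in the Preprocess tab and clicking Remove. Equivalently, the Remove filter with an inverted selection keeps only the listed attributes; the sketch below is an illustration only, with weka.jar on the classpath and the file name credit-g.arff assumed.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class KeepSelectedAttributes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // file name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        Remove keep = new Remove();
        keep.setAttributeIndices("2,3,5,7,10,17,21");  // attributes to keep (21 = class attribute)
        keep.setInvertSelection(true);                 // invert the selection: remove everything else
        keep.setInputFormat(data);
        Instances subset = Filter.useFilter(data, keep);
        subset.setClassIndex(subset.numAttributes() - 1);

        Evaluation eval = new Evaluation(subset);
        eval.crossValidateModel(new J48(), subset, 10, new Random(1));
        System.out.println(eval.toSummaryString());    // compare with the full-attribute result
    }
}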
11.4 OUTPUT:
EXPERIMENT-12
12.1 OBJECTIVE:
Sometimes, the cost of rejecting an applicant who actually has good credit (case 1) might be higher than
that of accepting an applicant who has bad credit (case 2). Instead of counting the misclassifications
equally in both cases, give a higher cost to the first case (say, cost 5) and a lower cost to the second case.
You can do this by using a cost matrix in Weka. Train your Decision Tree again and report the Decision
Tree and cross-validation results. Are they significantly different from the results obtained in problem 6
(using equal costs)?
12.2 RESOURCES:
12.3 PROCEDURE:
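One way to script this is with the CostSensitiveClassifier meta-classifier. The sketch below is only an outline: weka.jar on the classpath and the file name credit-g.arff are assumptions, and the exact CostMatrix/Evaluation method names used here (setCell, totalCost, the two-argument Evaluation constructor) should be verified against the Javadoc of the installed WEKA version.

import java.util.Random;
import weka.classifiers.CostMatrix;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.CostSensitiveClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CostSensitiveTree {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // file name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        // 2x2 cost matrix: rows = actual class, columns = predicted class.
        // Cost 5 for rejecting a good applicant, cost 1 for accepting a bad one.
        CostMatrix costs = new CostMatrix(2);
        costs.setCell(0, 1, 5.0);   // actual good, predicted bad
        costs.setCell(1, 0, 1.0);   // actual bad, predicted good

        CostSensitiveClassifier csc = new CostSensitiveClassifier();
        csc.setClassifier(new J48());
        csc.setCostMatrix(costs);

        Evaluation eval = new Evaluation(data, costs);   // cost-sensitive evaluation
        eval.crossValidateModel(csc, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
        System.out.println("Total cost: " + eval.totalCost());
    }
}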
12.4 OUTPUT:
EXPERIMENT-13
13.1 OBJECTIVE:
Do you think it is a good idea to prefer simple decision trees instead of having long complex decision
trees? How does the complexity of a Decision Tree relate to the bias of the model?
13.2 RESOURCES:
13.3 PROCEDURE:
This will depend on the attribute set and on the relationships among the attributes that we want to study,
and it can be judged from the data set and the user requirements. In general, a simpler (shallower) tree has
higher bias but lower variance and is less likely to overfit, whereas a long, complex tree has lower bias
but higher variance; preferring simple decision trees is therefore usually a good idea, as long as they do
not underfit the data.
EXPERIMENT-14
14.1 OBJECTIVE:
*Run the J48 and IBk classifiers using the cross-validation strategy with various fold levels and compare
the accuracy results. Then use the hold-out strategy with three percentage levels and compare the
accuracy results.
14.2 RESOURCES:
14.3 THEORY:
Reduced-error pruning:
• Each node of the (over-fit) tree is examined for pruning.
• A node is pruned (removed) only if the resulting pruned tree performs no worse than the original over
the validation set.
• Pruning a node consists of:
– removing the sub-tree rooted at the pruned node,
– making the pruned node a leaf node, and
– assigning the pruned node the most common classification of the training instances attached to
that node.
• Nodes are pruned iteratively:
– always select the node whose removal most increases the decision tree's accuracy over the
validation set;
– stop when further pruning decreases the decision tree's accuracy over the validation set.
An example of the kind of classification rule a decision tree encodes:
IF (Children = yes) ∧ (Income >= 30000) THEN (Car = yes)
14.4 PROCEDURE:
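In the Explorer, the comparison is made by switching the Test options between "Cross-validation" (varying Folds) and "Percentage split" (varying the split percentage) for both classifiers. The sketch below does the same through the Java API; weka.jar on the classpath, the file name credit-g.arff, the fold levels 5/10/15, and the split percentages 50/66/80 are all assumptions chosen for illustration.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareJ48IBk {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // file name is an assumption
        data.setClassIndex(data.numAttributes() - 1);
        Classifier[] models = { new J48(), new IBk(3) };

        // Cross-validation with various fold levels.
        for (Classifier c : models) {
            for (int folds : new int[] { 5, 10, 15 }) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(c, data, folds, new Random(1));
                System.out.printf("%s, %d folds: %.2f %%%n",
                        c.getClass().getSimpleName(), folds, eval.pctCorrect());
            }
        }

        // Hold-out with three percentage levels.
        for (Classifier c : models) {
            for (double pct : new double[] { 0.50, 0.66, 0.80 }) {
                Instances rand = new Instances(data);
                rand.randomize(new Random(1));
                int trainSize = (int) Math.round(rand.numInstances() * pct);
                Instances train = new Instances(rand, 0, trainSize);
                Instances test = new Instances(rand, trainSize, rand.numInstances() - trainSize);
                c.buildClassifier(train);                 // train on the hold-out training portion
                Evaluation eval = new Evaluation(train);
                eval.evaluateModel(c, test);              // evaluate on the remaining portion
                System.out.printf("%s, %.0f %% train: %.2f %%%n",
                        c.getClass().getSimpleName(), pct * 100, eval.pctCorrect());
            }
        }
    }
}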
14.5 OUTPUT:
EXPERIMENT-15
15.1 OBJECTIVE:
You can make your Decision Trees simpler by pruning the nodes. One approach is to use Reduced Error
Pruning. Explain this idea briefly. Try reduced-error pruning when training your Decision Trees using
cross-validation (you can do this in Weka) and report the Decision Tree you obtain. Also report your
accuracy using the pruned model. Does your accuracy increase?
15.2 RESOURCES:
15.3 THEORY:
Reduced-error pruning (see the theory given under Experiment 14).
15.4 PROCEDURE:
14) If needed, select the class attribute.
15) Click Start.
16) The output details appear in the Classifier output pane.
17) Right-click the entry in the result list and select the "Visualize tree" option.
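The reducedErrorPruning option of J48 ticked in the GUI corresponds to setReducedErrorPruning(true) in the API. A minimal sketch follows; weka.jar on the classpath, the file name credit-g.arff, and the internal pruning-fold count of 3 are assumptions for illustration.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ReducedErrorPruningJ48 {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // file name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();
        tree.setReducedErrorPruning(true);   // same as ticking reducedErrorPruning in the GUI
        tree.setNumFolds(3);                 // folds used internally to hold out the pruning set

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());   // compare with the unpruned/default J48 accuracy

        tree.buildClassifier(data);          // build once on all data to inspect the pruned tree
        System.out.println(tree);
    }
}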
15.5 OUTPUT:
EXPERIMENT-16
16.1 OBJECTIVE:
(Extra Credit): How can you convert a Decision Tree into "if-then-else" rules? Make up your own small
Decision Tree consisting of 2-3 levels and convert it into a set of rules. There also exist different
classifiers that output the model in the form of rules; one such classifier in Weka is rules.PART. Train this
model and report the set of rules obtained. Sometimes just one attribute can be good enough in making
the decision, yes, just one! Can you predict what attribute that might be in this dataset? The OneR
classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error).
Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART, and OneR.
16.2 RESOURCES:
16.3 PROCEDURE:
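Each root-to-leaf path of a decision tree becomes one rule whose conditions are the tests along the path. For illustration only, one possible two-level tree on the weather data (outlook at the root, humidity and windy below it) converts to: IF outlook = sunny AND humidity = high THEN play = no; IF outlook = sunny AND humidity = normal THEN play = yes; IF outlook = overcast THEN play = yes; IF outlook = rainy AND windy = true THEN play = no; ELSE play = yes. The sketch below trains J48, PART, and OneR and prints their models and cross-validated accuracies; weka.jar on the classpath and the file name credit-g.arff are assumptions.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.OneR;
import weka.classifiers.rules.PART;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RulesComparison {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit-g.arff").getDataSet();  // file name is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] models = { new J48(), new PART(), new OneR() };
        for (Classifier c : models) {
            c.buildClassifier(data);
            System.out.println("=== " + c.getClass().getSimpleName() + " ===");
            System.out.println(c);   // PART and OneR print their rule sets; J48 prints the tree
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("Accuracy: %.2f %%%n%n", eval.pctCorrect());   // basis for ranking the three
        }
    }
}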
16.4 OUTPUT:
J48
OneR
PART
EXPERIMENT-17
17.1 OBJECTIVE:
*Run J48 and Naïve Bayes classifiers on the following datasets and determine the accuracy:
1. vehicle.arff
2. kr-vs-kp.arff
3. glass.arff
4. waveform-5000.arff
On which datasets does the Naïve Bayes perform better?
17.2 RESOURCES:
17.3 PROCEDURE:
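The comparison can be automated by looping over the four ARFF files and running both classifiers with 10-fold cross-validation. The sketch below is an illustration only; weka.jar on the classpath and the local paths to the listed data sets are assumptions.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareOnDatasets {
    public static void main(String[] args) throws Exception {
        // Paths are assumptions; point them at local copies of the listed ARFF files.
        String[] files = { "vehicle.arff", "kr-vs-kp.arff", "glass.arff", "waveform-5000.arff" };
        for (String f : files) {
            Instances data = new DataSource(f).getDataSet();
            data.setClassIndex(data.numAttributes() - 1);
            for (Classifier c : new Classifier[] { new J48(), new NaiveBayes() }) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(c, data, 10, new Random(1));
                System.out.printf("%-20s %-12s %.2f %%%n",
                        f, c.getClass().getSimpleName(), eval.pctCorrect());
            }
        }
    }
}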
17.4 OUTPUT:
J48
OneR
PART