GCC Lab 1
EX.NO. : 1
CREATION OF WEB SERVICE FOR CALCULATOR
DATE :
AIM
To develop a web service for a calculator.
PROCEDURE
Step 1. Open the NetBeans IDE and choose File -> New Project.
Step 2. Choose Java Web, select Web Application, and click Next.
Step 3. Enter the project name, click Next, and select the server (either Tomcat or GlassFish).
Step 4. Click Next and select Finish.
Step 5. Right click the WebApplication (project name), select New, and choose Java Class.
Step 6. Type the following code
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
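The body of the calculator class is not included in this extract; a minimal sketch of a JAX-WS calculator service using the imports above (the class name, operation names, and parameter names are illustrative):
@WebService(serviceName = "Calculator")
public class Calculator {

    // exposed as the "add" operation in the generated WSDL
    @WebMethod(operationName = "add")
    public int add(@WebParam(name = "a") int a, @WebParam(name = "b") int b) {
        return a + b;
    }

    @WebMethod(operationName = "subtract")
    public int subtract(@WebParam(name = "a") int a, @WebParam(name = "b") int b) {
        return a - b;
    }
}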
OUTPUT
Give some values in the fields and check the output by pressing the Enter key.
Finally select the WSDL link
RESULT
Thus the calculator web service program was executed successfully.
EX.NO. : 2
OGSA COMPLIANT WEB SERVICE
DATE :
AIM
To develop an OGSA-compliant web service.
PROCEDURE
Step 1: Choose File -> New Project to open the New Project wizard.
Step 2: Choose POM Project from the Maven category and click Next.
Step 3: Type MavenOSGiCDIProject as the project name and click Finish. When you click Finish, the
IDE creates the POM project and opens the project in the Projects window.
Step 4: Expand the Project Files node in the Projects window, double-click pom.xml to open the file
in the editor, make the following modifications, and save.
In pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/xsd/maven-4.0.0.xsd"
xmlns:xsi="http://www.w3.org/2001/XML chema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany</groupId>
<artifactId>MavenOSGiCDIProject</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencyManagement><dependencies><dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
<version>4.2.0</version>
<scope>provided</scope>
</dependency></dependencies></dependencyManagement>
</project>
Step 5: Creating the OSGi Bundle Projects
Choose File -> New Project to open the New Project wizard.
Step 6: Choose OSGi Bundle from the Maven category. Click Next.
Step 7: Type MavenHelloServiceApi for the Project Name, click Browse and select the MavenOSGiCDIProject POM project as the Location, and click Finish.
The IDE creates the bundle project and opens it in the Projects window. Check the build plugins in pom.xml under Project Files.
The org.osgi.core artifact is also added as a dependency by default and can be viewed under Dependencies.
Step 8: Build the MavenHelloServiceApi project as follows:
1. Right click the MavenHelloServiceApi project node in the Projects window and choose Properties.
2. Select the Sources category in the Project Properties dialog box.
3. Set the Source/Binary Format to 1.6, confirm that the Encoding is UTF-8, and click OK.
4. Right click the Source Packages node in the Projects window and choose New -> Java Interface.
5. Type Hello for the Class Name and give Finish; the interface contents are sketched below.
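The interface body itself is not reproduced in this extract; a minimal sketch, inferred from the HelloImpl implementation in Step 9 (the package name is assumed to mirror the project name):
package com.mycompany.mavenhelloserviceapi;

public interface Hello {
    String sayHello(String name);
}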
Step 9: Creating the MavenHelloServiceImpl implementation bundle. Here you will create the
MavenHelloServiceImpl bundle in the POM project.
1. Choose File -> New Project to open the New Project Wizard
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceImpl for the Project Name
4. Click Browse and select the MavenOSGiCDIProject POM project as the Location. Click Finish.(As
earlier step).
5. Right click the project node in the Projects window and choose Properties.
6. Select the Sources category in the Project Properties dialog box.
7. Set the Source/Binary Format to 1.6 and confirm that the Encoding is UTF-8. Click OK.
8. Right click Source Packages node in the Projects window and choose New -> Java Class.
9. Type HelloImpl for the Class Name.
10. Select com.mycompany.mavenhelloserviceimpl as the Package. Click Finish.
11. Type the following and save your changes.
package com.mycompany.mavenhelloserviceimpl;

// import resolved after adding the MavenHelloServiceApi dependency (see step 12 below)
import com.mycompany.mavenhelloserviceapi.Hello;

/* @author linux */
public class HelloImpl implements Hello {

    public String sayHello(String name) {
        return "Hello " + name;
    }
}
When you implement Hello, the IDE will display an error that you need to resolve by adding the
MavenHelloServiceApi project as a dependency.
12. Right click the Dependencies folder of MavenHelloServiceImpl in the Projects window and choose
Add Dependency.
13. Click the Open Projects tab in the Add Library dialog.
14. Select the MavenHelloServiceApi OSGi bundle and click Add.
15. Expand the com.mycompany.mavenhelloserviceimpl package and double-click Activator.java to
open the file in the editor.
The IDE automatically creates the Activator.java bundle activator class, which manages the lifecycle of the bundle. By
default it includes start() and stop() methods.
Modify the start() and stop() methods in the bundle activator class by adding the
following lines.
package com.mycompany.mavenhelloserviceimpl;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
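The modified activator body is not included in this extract; a minimal sketch that registers the HelloImpl service when the bundle starts, following the usual NetBeans OSGi tutorial pattern (the exact registration code is an assumption):
import com.mycompany.mavenhelloserviceapi.Hello;

public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        System.out.println("HelloActivator::start");
        // register the implementation under the Hello service interface
        context.registerService(Hello.class.getName(), new HelloImpl(), null);
        System.out.println("HelloActivator::registration of Hello service successful");
    }

    public void stop(BundleContext context) throws Exception {
        // services registered by this bundle are unregistered automatically when it stops
        System.out.println("HelloActivator::stop");
    }
}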
When you build the project, the IDE creates the JAR files in the target folder and also installs the
snapshot JAR in the local repository.
In the Files window, expanding the target folder of each of the two bundle projects shows the
two JAR archives (MavenHelloServiceApi-1.0-SNAPSHOT.jar and MavenHelloServiceImpl-1.0-
SNAPSHOT.jar).
Info: Installed /home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
5. Repeat the copy step for MavenHelloServiceImpl-1.0-SNAPSHOT.jar, copying it to
/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles (the GlassFish
installation directory).
6. You can see the output in the GlassFish server log.
RESULT
Thus a new OGSA-compliant web service has been developed and executed successfully.
EX.NO. : 3
GRID SERVICE USING APACHE AXIS
DATE :
AIM
To develop a Grid Service using Apache Axis
PROCEDURE
Step1. Prerequisites
a. Install Apache Axis2. Download it from: http://mirror.fibergrid.in/apache/axis/axis2/java/core/1.7.3/
b. Extract the Axis2 archive.
c. Open Eclipse, click Window, choose Preferences, and select the Axis2 preference under Web
Services. Finally, map the extracted path of Axis2 and click Apply and OK.
d. Download Apache Tomcat and install the service by extracting the downloaded archive. Check it in
a terminal by moving to the Tomcat folder:
$ bin/startup.sh
Check the Tomcat service in a web browser by visiting localhost:8080.
Step 2. Open Eclipse and select New -> Dynamic Web Project.
Step 3. Enter the name as AddNew, select the Tomcat 7.0 server runtime environment, and set the Dynamic web
module version to 2.5. In Configuration, select Modify and tick Axis2 Web Services. Give Finish.
Step 4. Right click the project and add a new class named Welcome, then give Finish.
Step 5. Type the service code into Welcome.java and save it (see the sketch below).
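The Welcome class listing is not reproduced in this extract (only its closing braces survived); a minimal sketch of what a simple service class for this project might look like (the method names and contents are illustrative):
public class Welcome {

    public int add(int a, int b) {
        return a + b;
    }

    public String greet(String name) {
        return "Welcome " + name;
    }
}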
Step 6. Right click Welcome.java and select New --> Web Service.
Under Web Service runtime, click Apache Axis and select Apache Axis2 with Tomcat 7.0; select the
publish and monitor check boxes and finally give Finish.
Step 7. Right click the project and select Run As -> Run on Server.
Step 8. Give Finish after checking the fields selected above in the figure. As output, a page will open
automatically in the web browser at localhost:8080/AddNew/.
Once you click the service, all the methods will show as output.
Step 9. Creating the .aar (Axis Archive) file and deploying the service.
a. In the Eclipse workspace, go to the Welcome folder at
/home/linux/workspace/AddNew/WebContent/WEB-INF/services/Welcome. Go to that directory
through the terminal and give the command:
$ jar cvf Welcome.aar com META-INF
b. Then copy the axis2.war file, which is inside the Apache Axis2 WAR distribution (downloaded
earlier), to the webapps directory of Apache Tomcat.
c. Now start the Tomcat service through the terminal (bin/startup.sh). There will now be a new directory
called axis2 inside the webapps folder. Go to http://localhost:8080/axis2/ and you can find the
home page of the Axis2 Web Application.
d. Now click the Administration link and log in using username admin and password axis2.
There you can find the Upload Service link at the top left, where you can upload the created
Welcome.aar file. This is equivalent to manually copying Welcome.aar to the webapps/axis2/WEB-
INF/services folder.
e. Now you can list the services by visiting localhost:8080/axis2/services/listServices, and you will be
able to see the newly added service.
RESULT
Thus the program for Grid Service using Apache Axis was successfully executed.
EX.NO. : 4
APPLICATIONS USING JAVA OR C/C++ GRID APIs
DATE :
AIM
To develop an application in Java using Grid APIs.
PROCEDURE
Step 1: Import all the necessary java packages and name the file as GridLayoutDemo.java
Step 2: Set up components to preferred size
Step 3: Add buttons to experiment with Grid Layout
Step 4: Add controls to set up horizontal and vertical gaps
Step 5: Process the Apply gaps button press
Step 6: Create the GUI
Step 7: Create and set up the window,Set up the content pane and Display the Window
Step 8: Schedule a job for the event dispatch thread
Step 9: Show the application's GUI
PROGRAM
import java.awt.*;
import javax.swing.*;
public class MyGridLayout {
    JFrame f;

    MyGridLayout() {
        f = new JFrame();
        JButton b1 = new JButton("1");
        JButton b2 = new JButton("2");
        JButton b3 = new JButton("3");
        JButton b4 = new JButton("4");
        JButton b5 = new JButton("5");
        JButton b6 = new JButton("6");
        JButton b7 = new JButton("7");
        JButton b8 = new JButton("8");
        JButton b9 = new JButton("9");
        f.add(b1); f.add(b2); f.add(b3); f.add(b4); f.add(b5);
        f.add(b6); f.add(b7); f.add(b8); f.add(b9);
        f.setLayout(new GridLayout(3, 3)); // setting grid layout of 3 rows and 3 columns
        f.setSize(300, 300);
        f.setVisible(true);
    }

    public static void main(String[] args) {
        new MyGridLayout();
    }
}
OUTPUT
RESULT
Thus the program to develop an application in Java using Grid APIs was successfully executed.
EX.NO. : 5
SECURED APPLICATIONS USING GLOBUS TOOLKIT
DATE :
AIM
To develop secured applications using the basic security mechanisms available in the Globus toolkit.
PROCEDURE
Step 1. Install and set up the Certificate Authority. Open a terminal, move to the root user, and give
the commands:
root@linux:~# apt-get install
root@linux:~# sudo grid-ca-create -noint
Certificate Authority Setup
This script will setup a Certificate Authority for signing Globus users certificates. It will also generate a
simple CA package that can be distributed to the users of the CA.
The CA information about the certificates it distributes will be kept in:
/var/lib/globus/simple_ca
The unique subject name for this CA is:
cn=Globus Simple CA, ou=simpleCA-ubuntu, ou=GlobusTest, o=Grid
Insufficient permissions to install CA into the trusted certificate directory (tried
${sysconfdir}/grid-security/certificates and ${datadir}/certificates)
Creating RPM source tarball... done
globus_simple_ca_388f6778.tar.gz
Configure the subject name
The grid-ca-create program next prompts you for information about the name of the CA you wish to
create:
root@linux:~# sudo grid-ca-create
It will ask for a few things at the command prompt; provide the following:
i. Permission
ii. Unique subject name
iii. Mail id
iv. Expiration date
v. Password
Generating Debian Packages
Get into the default simple_ca path /var/lib/globus/simple_ca
Examining a Certificate Request
To examine a certificate request, use the command openssl req -text -in <request file>. Get into the
path /etc/grid-security/:
root@linux:/etc/grid-security# openssl req -noout -text -in hostcert_request.pem
Signing a Certificate Request:
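The signing commands themselves are not shown in this extract; with SimpleCA the request is typically signed with grid-ca-sign (the output file name is an assumption):
root@linux:/etc/grid-security# grid-ca-sign -in hostcert_request.pem -out hostsigned.pem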
Revoking a Certificate
SimpleCA does not yet provide a convenient interface to revoke a signed certificate, but it can be
done with the openssl command.
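A hedged sketch of how such a revocation can be done with openssl (the configuration file and certificate paths are assumptions):
# revoke the signed certificate using the CA's openssl configuration
openssl ca -config /etc/grid-security/grid-ca-ssl.conf -revoke signed_cert.pem
# regenerate the certificate revocation list (CRL)
openssl ca -config /etc/grid-security/grid-ca-ssl.conf -gencrl -out ca_crl.pem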
dpkg-buildpackage: host architecture amd64
dpkg-source --before-build globus-simple-ca-388f6778
debian/rules clean
test -x debian/rules
dh_clean
dh_clean debian/*.install
dpkg-source -b globus-simple-ca-388f6778 dpkg-source: warning: no source format specified in
debian/source/format, see dpkg-source(1)
dpkg-source: info: using source format `1.0'
dh_clean: dh_clean -k is deprecated; use dh_prep instead [ -d
/tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates ] || \
mkdir -p /tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates
rm -f debian/globus-simple-ca-388f6778.install || true touch debian/globus-simple-ca-
388f6778.install
for file in 388f6778.0 388f6778.signing_policy globus-host-ssl.conf.388f6778 globus-user-
ssl.conf.388f6778 grid-security.conf.388f6778; do \
if [ -f "$file" ]; then \
cp "$file" "/tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates" ; \
dh_lintian -pglobus-simple-ca-388f6778
dh_bugfiles -pglobus-simple-ca-388f6778
dh_install -pglobus-simple-ca-388f6778
dh_link -pglobus-simple-ca-388f6778
dh_installmime -pglobus-simple-ca-388f6778 dh_installgsettings -pglobus-simple-ca-388f6778
dh_strip -pglobus-simple-ca-388f6778
dh_compress -pglobus-simple-ca-388f6778
dh_fixperms -pglobus-simple-ca-388f6778
dh_makeshlibs -pglobus-simple-ca-388f6778
dh_installdeb -pglobus-simple-ca-388f6778
dh_perl -pglobus-simple-ca-388f6778
dh_shlibdeps -pglobus-simple-ca-388f6778
dh_gencontrol -pglobus-simple-ca-388f6778
dpkg-gencontrol: warning: Depends field of package globus-simple-ca-388f6778: unknown
substitution variable ${shlibs:Depends}
# only call dh_scour for packages in main
if grep -q '^Component:[[:space:]]*main' /CurrentlyBuilding 2>/dev/null; then dh_scour -
pglobus-simple-ca-388f6778 ; fi
dh_md5sums -pglobus-simple-ca-388f6778
dh_builddeb -pglobus-simple-ca-388f6778 dpkg-deb: building package `globus-simple-ca-
388f6778' in `../globus-simple-ca-388f6778_0.0_all.deb'.
dpkg-genchanges>../globus-simple-ca-388f6778_0.0_amd64.changes dpkg-genchanges:
including full source code in upload dpkg-source --after-build globus-simple-ca-388f6778
dpkg-buildpackage: full upload; Debian-native package (full source is included)
388f6778 -- Can use the same 8digit certificate to all machine
linux@linux:~$ dpkg -i globus-simple-ca-388f6778_0.0_all.deb ### used for loading toother
machines through pendrive
linux@linux:~$ sudo dpkg -i globus-simple-ca-388f6778_0.0_all.deb Selecting previously
unselected package globus-simple-ca-388f6778. (Reading database ... 260415 files and
directories currently installed.) Preparing to unpack globus-simple-ca-388f6778_0.0_all.deb ...
Unpacking globus-simple-ca-388f6778 (0.0) ... Setting up globus-simple-ca-388f6778 (0.0) ...
linux@linux:~$ cd .globus/simpleCA/
linux@linux:~/.globus/simpleCA$ sudo cp globus-* grid-* /etc/grid-security/
linux@linux:~/.globus/simpleCA$ ls -l /etc/grid-security/ total 28
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
linux@linux:~/.globus/simpleCA$ hostname
Ubuntu
writing new private key to '/home/gcclab/.globus/userkey.pem'
Enter PEM pass phrase:
140306478339744:error:28069065:lib(40):UI_set_result:result too small:ui_lib.c:869:You must
type in 4 to 1024 characters
140306478339744:error:0906406D:PEM routines:PEM_def_callback:problems getting
password:pem_lib.c:111:
140306478339744:error:0907E06F:PEM routines:DO_PK8PKEY:read key:pem_pk8.c:130:
Error number 1 was returned by
/usr/bin/openssl
linux@linux:~/.globus$ grid-cert-request -force
/home/linux/.globus/usercert.pem already exists /home/linux/.globus/userkey.pem already exists
Enter your name, e.g., John Smith: bala m
A certificate request and private key is being created.
You will be asked to enter a PEM pass phrase. This pass phrase is akin to your account
password, and is used to protect your key file.
If you forget your pass phrase, you will need to obtain a new certificate.
Generating a 1024 bit RSA private key
..................................................................................................++++++
....++++++
writing new private key to '/home/gcclab/.globus/userkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated into your certificate
request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a
few fields but you can leave some blank For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-ubuntu]:
Level 2 Organizational Unit [local]:
Name (E.g., John M. Smith) []:
A private key and a certificate request has been generated with the subject:
/O=Grid/OU=GlobusTest/OU=simpleCA-ubuntu/OU=local/CN=bala m
If the CN=bala m is not appropriate, rerun this script with the -force -cn "Common Name"
options.
If you enter '.', the field will be left blank.
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-ubuntu]:
Name (E.g., John M. Smith) []:
A private host key and a certificate request has been generated with the subject:
/O=Grid/OU=GlobusTest/OU=simpleCA-ubuntu/CN=host/bala.globus.in
----------------------------------------------------------
The private key is stored in /etc/grid-security/hostkey.pem
The request is stored in /etc/grid-security/hostcert_request.pem
Please e-mail the request to the Globus Simple CA gcclab@ubuntu
You may use a command similar to the following:
cat /etc/grid-security/hostcert_request.pem | mail gcclab@ubuntu
Only use the above if this machine can send AND receive e-mail. if not, please mail using some
other method.
Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at gcclab@ubuntu
linux@linux:~/.globus$ ls -l
total 12
gcclab@ubuntu:~/.globus$ ls -l /etc/grid-security/
total 36
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
linux@linux:~/.globus$ cp usercert_request.pem usercert.pem
gcclab@ubuntu:~/.globus$ ls -l /etc/grid-security/
total 36
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
gcclab@ubuntu:~/.globus$ ls -l
total 16
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
linux@linux:~/.globus$ cd /etc/grid-security
gcclab@ubuntu:/etc/grid-security$ sudo cp hostcert_request.pem hostcert.pem
linux@linux:/etc/grid-security$ ls -l
total 40
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 1349 Jul 2 08:16 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem
RESULT
Thus a secured application using the basic security mechanisms available in the Globus toolkit was
developed and executed successfully.
EX.NO. : 6 GRID PORTAL AND IMPLEMENT IT WITH AND WITHOUT
GRAM CONCEPT
DATE :
AIM
To develop a Grid portal and implement it with and without GRAM concept.
PROCEDURE
It has been noted multiple times that the likely user interface to grid applications will be through portals,
specifically Web portals. A grid portal may be constructed as a Web page interface to provide
easy access to grid applications. The Web user interface provides user authentication, job
submission, job monitoring, and retrieval of job results.
The login.html produces the login screen, where the user enters the user ID and password. The
control is passed to the Login Servlet with the user ID and password as input arguments. The
user is authenticated by the servlet. If successful, the user is presented with a welcome screen
with the welcome.html file. Otherwise, the user is presented with an unsuccessful login screen
with the unsuccessfulLogin.html file
Globus Resource Allocation Manager (GRAM)
When a job is submitted by a client, the request is sent to the remote host and handled by a
gatekeeper daemon. The gatekeeper creates a job manager to start and monitor the job. When the
job is finished, the job manager sends the status information back to the client and terminates.
The GRAM subsystem consists of the following elements:
The globusrun command and associated APIs
Resource Specification Language (RSL)
The gatekeeper daemon
The job manager
Dynamically-Updated Request Online Coallocator (DUROC)
Each of these elements is described briefly below.
The globusrun command
The globusrun command (or its equivalent API) submits a job to a resource within the grid. This
command is typically passed an RSL string (see below) that specifies parameters and other
properties required to successfully launch and run the job.
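For illustration, a simple submission with globusrun might look like the following (the host name is a placeholder and the RSL string is a minimal example):
globusrun -o -r remotehost.example.org "&(executable=/bin/date)"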
Resource Specification Language (RSL)
RSL is a language used by clients to specify the job to be run. All job submission requests are
described in an RSL string that includes information such as the executable file; its parameters;
information about redirection of stdin, stdout, and stderr; and so on. Basically it provides a
standard way of specifying all of the information required to execute a job, independent of the
target environment. It is then the responsibility of the job manager on the target system to parse
the information and launch the job in the appropriate way. The syntax of RSL is very
straightforward. Each statement is enclosed within parenthesis. Comments are designated with
parenthesis and asterisks, for example, (* this is a comment *). Supported attributes include the
following:
rsl_substitution: Defines variables
executable: The script or command to be run
arguments: Information or flags to be passed to the executable
stdin: Specifies the remote URL and local file used for the executable
stdout: Specifies the remote file to place standard output from the job
stderr: Specifies the remote file to place standard error from the job
queue: Specifies the queue to submit the job (requires a scheduler)
count: Specifies the number of executions
directory: Specifies the directory to run the job
project: Specifies a project account for the job (requires a scheduler)
dryRun: Verifies the RSL string but does not run the job
maxMemory: Specifies the maximum amount of memory in MB required for the job
minMemory: Specifies the minimum amount of memory in MB required for the job
hostCount: Specifies the number of nodes in a cluster required for the job
environment: Specifies environment variables that are required for the job
jobType: Specifies the type of job: single process, multi-process, mpi, or condor
maxTime: Specifies the maximum execution wall or cpu time for one execution
maxWallTime: Specifies the maximum wall time for one execution
maxCpuTime: Specifies the maximum cpu time for one execution
gramMyjob: Specifies whether the gram myjob interface starts one process/thread (independent) or more (collective)
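A short illustrative RSL string using a few of these attributes (the executable and file name are placeholders):
& (executable = /bin/hostname)
  (arguments = -f)
  (count = 2)
  (stdout = output.txt)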
RESULT
Thus the program to develop a Grid portal was executed successfully.
CLOUD COMPUTING LAB
INTRODUCTION
cart, checkout, and payment mechanism running on a merchant's server. App Cloud (from
salesforce.com) and the Google App Engine are examples of PaaS.
Command Line Interface (CLI)
XML-RPC API
OpenNebula Ruby and Java Cloud APIs
The aim of a Private Cloud is not to expose to the world a cloud interface to sell capacity over
the Internet, but to provide local cloud users and administrators with a flexible and agile private
infrastructure to run virtualized service workloads within the administrative domain.
OpenNebula virtual infrastructure interfaces expose user and administrator functionality for
virtualization, networking, image and physical resource configuration, management, monitoring
and accounting.
EX.NO. : 1
VIRTUAL MACHINE CREATION
DATE :
AIM
To understand the procedure to create a virtual machine.
PROCEDURE
kvm-amd.ko : AMD Processor
kvm-intel.ko : Intel Processor
kvm.ko : Kernel object
$ ls /dev/kvm
/dev/kvm
virsh # nodeinfo
Step 4 : Create the VMS
$ virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c
ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant
ubuntuHardy
(If you get an "Open disk image" permission error, fix the image permissions:)
$ sudo chmod 777 hardy1.qcow2
Step 5 : Run the Virtual machine
$ virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c
ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant
ubuntuHardy
$ sudo virt-manager
OUTPUT
RESULT
Thus the program has been executed successfully for virtual machine creation.
EX.NO. : 2
VIRTUAL MACHINE WITH DIFFERENT CONFIGURATION
DATE :
AIM
To create and run the virtual machine of different configuration. Check how many virtual
machines can be utilized at particular time.
PROCEDURE
Step 1: Check that your CPU supports hardware virtualization.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
Step 2: To see if your processor is 64-bit or not.
$ egrep -c ' lm ' /proc/cpuinfo
Step 3: Now see if your running kernel is 64-bit or not.
$ uname -a
Step 4: To install the KVM, execute the following command.
$ sudo apt-get install qemu-kvm
$ sudo apt-get install libvirt-bin
$ sudo apt-get install ubuntu-vm-builder
$ sudo apt-get install bridge-utils
Step 5: Verify the KVM installation has been successful or not.
$ virsh -c qemu:///system list
Step 6: Installing a GUI for KVM.
$ sudo apt-get install virt-manager
Step 7: Creating a KVM guest machine.
$ virt-manager
Step 8: Then start creating a new virtual machine by hitting the New button. Enter the name
of your virtual machine, select your installation media type, and click Forward.
Step 9: Then you will have to set the amount of RAM and the number of CPUs that will be available to that
virtual machine.
Step 10:Finally, you will get a confirmation screen that shows the details of your virtual
machine. Then click finish button.
Step 11: Repeat the same procedure to create multiple virtual machines.
RESULT
Thus virtual machines of different configurations have been created, and how many virtual machines
can be utilized at a particular time has been checked successfully.
EX.NO. : 3
C PROGRAM - VIRTUAL MACHINE
DATE :
AIM
To install a C compiler and execute a program for the addition of two matrices in the virtual
machine.
PROCEDURE
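The procedure steps are not included in this extract; a typical sequence on an Ubuntu guest, assuming the GNU C compiler from the distribution repositories and an illustrative file name, would be:
$ sudo apt-get install gcc        # install the C compiler inside the virtual machine
$ nano matrixadd.c                # type the matrix addition program (see the source code below)
$ gcc matrixadd.c -o matrixadd    # compile it
$ ./matrixadd                     # run it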
SOURCE CODE
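The original listing is not reproduced in this extract; a minimal sketch of a matrix addition program (the matrix size and values are illustrative):
#include <stdio.h>

int main(void) {
    int a[2][2] = {{1, 2}, {3, 4}};
    int b[2][2] = {{5, 6}, {7, 8}};
    int c[2][2];
    int i, j;

    /* add the matrices element by element and print the result */
    for (i = 0; i < 2; i++) {
        for (j = 0; j < 2; j++) {
            c[i][j] = a[i][j] + b[i][j];
            printf("%d ", c[i][j]);
        }
        printf("\n");
    }
    return 0;
}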
OUTPUT
RESULT
Thus the C program has been executed successfully.
EX.NO. : 4
VIRTUAL MACHINE MIGRATION
DATE :
AIM
To show virtual machine migration from one node to the other based on certain conditions.
PROCEDURE
Step 2: Install rsync and screen packages on source and target virtual machines
The migration of application packages and files in this process will use rsync over ssh between
source and target virtual machines. The actual transfer of files between virtual machines can take
some time, so I also recommend using screen so that you can easily re-attach to an in-progress
migration session if you are inadvertently disconnected.
Ensure that rsync and screen packages are installed on both the source and target virtual
machines with these commands:
sudo apt-get install rsync
sudo apt-get install screen
Step 3: Add a consistent user account to both source and target virtual machines
To facilitate the migration process, ensure that you have a consistent user account configured on
both source and target virtual machines with sudo enabled. The newly provisioned target virtual
machines from Step 3 already include a user named azureuser with sudo enabled. To configure
this same user on each source virtual machine, use the following commands:
sudo groupadd -g 500 azureuser
sudo useradd -u 500 -g 500 -m -s /bin/bash azureuser
sudo passwd azureuser
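The remaining transfer steps are not included in this extract; a hedged sketch of the kind of rsync-over-ssh command the procedure describes, run inside a screen session (the target host name and exclude list are placeholders):
# start a re-attachable session for the long-running transfer
screen -S migration
# copy the filesystem to the target VM over ssh, preserving ownership and permissions,
# staying on one filesystem and skipping the paths listed in exclude.txt
sudo rsync -aHAXx --numeric-ids -e ssh --exclude-from=exclude.txt / azureuser@target-vm:/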
Credits: Kudos to Kevin Carter who wrote a great article a couple years ago that provided a
useful starting point for building a list of directories and files to consider excluding as part of a
Linux-to-Linux migration process!
RESULT
Thus the program to implement migration of virtual machine was executed successfully.
EX.NO. : 5
CREATION OF SINGLE NODE CLUSTER USING HADOOP
DATE :
AIM
To install Hadoop 2.7.1 and use it to create a single node cluster.
PROCEDURE
Step 1: Before installing or downloading anything, it is always better to update the package lists using the
following command
Command: sudo apt-get update
Step 2 : Install Java 7 or 8
Command: sudo apt-get install default-jdk
Step 3 :We can check JAVA is properly installed or not using following command:
Command: java -version
Step 4: First create a group named hadoop.
Command: sudo addgroup hadoop
Step 4 :Add dedicated hadoop user
Command: sudo adduser --ingroup hadoop hduser (Enter the password & Retype the
password or don’t write password)
Step 5 :Hadoop requires SSH access to manage its nodes. SSH setup is required to do different
operations on a cluster such as starting, stopping, distributed daemon shell operations. So, Install
ssh
Command: sudo apt-get install ssh
Step 6: Just find which ssh?
Command: which ssh
Step 7 : just find which sshd
Command: which sshd
Step 7 : Login as hduser
Command: su hduser
Step 8 : Enter password : Password is hduser
Step 9 :The cd (ChangeDirectory) command will change from your current directory to any
directory you specify.
Command :cd
Step 10:To authenticate different users of Hadoop, it is required to provide public/private key
pair for a Hadoop user and share it with different users. So, Generate a key of hduser
Command: ssh-keygen -t rsa (Note: Press Enter for default)
Step 11 : Add the generated key to the authorized keys.
Command : cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Step 12 : Login to localhost
Command: ssh localhost
Step 13 :Add hduser to the sudo group
Command :su it (change to admin first) and enter admin password.
Command : sudo adduser hduser sudo and enter hduser password.
Step 14:Run a command as Admin
Command: sudo su hduser
Step 15: Now Download the Hadoop package from the below link
Command : wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
Step 16 : Extract the Hadoop package. (Note : Try this in it or mce login)
Command: tar xvzf hadoop-2.7.1.tar.gz
Step 17 : Move the downloaded package to /usr/local (Note : Try this in it or mce login)
Command : sudo mv hadoop-2.7.1 /usr/local/hadoop (enter the it or mce password). Verify by
browsing to the folder /usr/local/hadoop in the file manager.
Step 18 : Change the owner of a file or directory
Command: sudo chown -R hduser:hadoop /usr/local/hadoop (enter the password).
Step 19 : View the path of jvm installed
Command: update-alternatives --config java
Step 20: Edit the bash_profile to set path.
Command: sudo nano ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Step 21: Update .bashrc file to apply changes
Command: source ~/.bashrc
Step 22: To set up the JAVA_HOME variables
Command: sudo nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh (to open the file)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Step 23: Create the Hadoop temporary directory
Command: sudo mkdir -p /app/hadoop/tmp
Step 24: Change the owner of a file or directory
Command: sudo chown hduser:hadoop /app/hadoop/tmp
Step 25 :Setup Configuration Files needs to be modified
Command: sudo nano /usr/local/hadoop/etc/hadoop/core-site.xml (to open the file)
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and authority
determine the FileSystem implementation. The uri's scheme determines the config property
(fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
Step 26: Make a copy of mapred-site.xml.template
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-
site.xml
Step 27 : Update the mapreduce xml file(This may be done in Root login)
Command: sudo nano /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description> The host and port that the MapReduce job tracker runs at. If "local", then jobs are
run in-process as a single map and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Step 28: Make a directory for namenode and datanode
Command: sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
Command: sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
Step 29: Change the ownership of the file or directory
Command: sudo chown -R hduser:hadoop /usr/local/hadoop_store
Step 30: Update the hdfs-site xml file
Command: sudo nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.The actual number of replications can be specified when
the file is created. The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
Step 31:Update the Yarn file
Command: sudo nano /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Step 32: Format the Hadoop file system
Command: hadoop namenode -format
Step 33 :Start Hadoop Daemons
Command :start-all.sh
Step 34 :We can check all daemons are properly started using following command:
Command :jps
Step 35 :Stop hadoop Daemons
Command : stop-all.sh
OUTPUT
RESULT
Thus the program for single node cluster has been developed and executed successfully.
EX.NO. : 6
MOUNT THE ONE NODE HADOOP CLUSTER USING FUSE
DATE :
AIM
To mount the one node Hadoop cluster using FUSE and access files on HDFS in the same
way as we do on Linux operating systems.
PROCEDURE
Step 1:
FUSE (Filesystem in Userspace) enables you to write a normal user application as a bridge for a
traditional filesystem interface.
The hadoop-hdfs-fuse package enables you to use your HDFS cluster as if it were a traditional
filesystem on Linux. It is assumed that you have a working HDFS cluster and know the
hostname and port that your NameNode exposes.
To install fuse-dfs on Ubuntu systems:
hdpuser@jiju-PC:~$ wget http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
--2016-07-24 09:10:33-- http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
Resolving archive.cloudera.com (archive.cloudera.com)... 151.101.8.167
Connecting to archive.cloudera.com (archive.cloudera.com)|151.101.8.167|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3508 (3.4K) [application/x-debian-package]
Saving to: ‘cdh5-repository_1.0_all.deb’
100%[======================================>] 3,508 --.-K/s in 0.09s
2016-07-24 09:10:34 (37.4 KB/s) - ‘cdh5-repository_1.0_all.deb’ saved [3508/3508]
Step 2:
hdpuser@jiju-PC:~$ sudo dpkg -i cdh5-repository_1.0_all.deb
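The package installation and mount commands are not shown in this extract; with the CDH repository added, a typical sequence would be the following (the NameNode host, port, and mount point are assumptions based on the surrounding text):
hdpuser@jiju-PC:~$ sudo apt-get update
hdpuser@jiju-PC:~$ sudo apt-get install hadoop-hdfs-fuse
hdpuser@jiju-PC:~$ mkdir -p /home/hdpuser/hdfs
hdpuser@jiju-PC:~$ sudo hadoop-fuse-dfs dfs://localhost:8020 /home/hdpuser/hdfs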
hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
/home/hdpuser/hdfs/ new
hdpuser@jiju-PC:~$ mkdir /home/hdpuser/hdfs/example
hdpuser@jiju-PC:~$ ls -ll /home/hdpuser/hdfs/
total 8
hdpuser@jiju-PC:~$ sudo umount /home/hdpuser/hdfs
NOTE: You can now add a permanent HDFS mount which persists through reboots.
For example:
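The example line itself is missing from this extract; in the Cloudera documentation the permanent mount added to /etc/fstab takes roughly this form (host, port, and mount point are assumptions):
hadoop-fuse-dfs#dfs://localhost:8020 /home/hdpuser/hdfs fuse allow_other,usetrash,rw 2 0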
RESULT
Thus FUSE has been installed and the one node Hadoop cluster has been mounted successfully.
EX.NO. : 7
PROGRAM USING API'S ON HADOOP
DATE :
AIM
To write a program using the APIs of Hadoop and to interact with it.
PROCEDURE
Step 1: Start the hadoop services by giving the following command in terminal
$ sbin/start-all.sh
$ jps
Step 2: Open web browser and open
localhost:50070
localhost:8088
Step 3: Creating folder in web interface (HDFS) from terminal.
$ bin/hadoop fs -mkdir /bala
Wait until the command executes.
Step 4: Open the localhost:50070
Utilities --> Browse the file system.
A folder has been created with the name we gave in the terminal.
bin/hadoop ----> represents the location of hdfs
fs ---> file system
-mkdir ------> create a folder
/ ------> root in hdfs
bala ----> folder name
Step 5: Loading the data into the folder we created in hdfs
$ bin/hadoop fs -copyFromLocal /home/bala/Pictures /bala2
Open the web browser and, under Utilities, browse the file system and check whether the content has
been moved.
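The steps above use the HDFS shell; the same folder creation and file copy can also be done programmatically through the Hadoop FileSystem Java API. A minimal sketch (the class name is illustrative and the paths mirror the commands above):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsApiDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // default file system as configured in core-site.xml
        conf.set("fs.defaultFS", "hdfs://localhost:54310");
        FileSystem fs = FileSystem.get(conf);

        // create a folder in HDFS (same as: hadoop fs -mkdir /bala)
        fs.mkdirs(new Path("/bala"));

        // copy a local directory into HDFS (same as: hadoop fs -copyFromLocal ...)
        fs.copyFromLocalFile(new Path("/home/bala/Pictures"), new Path("/bala2"));

        fs.close();
    }
}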
OUTPUT
NameNode
DataNode
RESULT
Thus an API program has been developed for creating a folder and copying files into it.
EX.NO. : 8
STORAGE CONTROLLER INSTALLATION
DATE :
AIM
To find the procedure to install a storage controller and interact with it.
PROCEDURE
Step 9: Access a command window and navigate to the directory where you extracted the files or
where you copied the installer directory. Run the appropriate script. If you do not want the
license agreement to display, use the -i option when you run the script. For example,
StorageControlInstall.sh -i.
Important: If you are not using IBM DB2 managed by Systems Director, then
the DB2 user ID used must have DB2 Administrator privileges.
On Microsoft Windows systems, run the script StorageControlInstall.bat.
On Linux and AIX systems, run the script StorageControlInstall.sh.
Step 10: Restart Systems Director as directed.
RESULT
Thus the program to install storage controller was executed successfully.
EX.NO. : 9
WORD COUNT USING HADOOP
DATE :
AIM
To write a program to perform a word count using Hadoop.
PROCEDURE
Step 7: In the same folder WordCountTutorial, create a new folder tutorial_classes to hold the
java class files
Step 8: Open the terminal and type the following :
Command: export HADOOP_CLASSPATH=$(hadoop classpath)
Step 9 : echo $HADOOP_CLASSPATH
Step 10: Create a directory on HDFS
Command: hadoop fs -mkdir /WordCountTutorial
Step 11: Create a directory inside it for the input
Command: hadoop fs -mkdir /WordCountTutorial/Input
Step 12 : Check it : localhost:50070
Step 13: Upload the input file to that directory
Command: hadoop fs -put /it/Desktop/WordCountTutorial/input_data/input.txt (just drag and
drop) /WordCountTutorial/Input
Step 14: Now Check it : localhost:50070
Step 15: Change the current directory to the tutorial directory
Command: cd /it/Desktop/WordCountTutorial
Step 16 : Compile the java code
Command: javac -classpath ${HADOOP_CLASSPATH} -d tutorial_classes WordCount.java (drag and
drop the tutorial_classes folder, then leave a space, and drag and drop WordCount.java)
Step 17: Check the class files created
Step 18: Put the output files in one jar file
Command: jar -cvf firstTutorial.jar -C tutorial_classes/ .
Step 19: Now run the jar file on hadoop
hadoop jar <JAR_FILE> <CLASS_NAME> <HDFS_INPUT_DIRECTORY> <HDFS_OUTPUT_DIRECTORY>
Command: hadoop jar firstTutorial.jar (drag and drop the jar file) WordCount
/WordCountTutorial/Input /WordCountTutorial/Output
Then the program executed successfully.
Step 20: To view the output
Command: hadoop dfs -cat <HDFS_OUTPUT_DIRECTORY>/*
hadoop dfs -cat /WordCountTutorial/Output/*
SOURCE CODE
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "wordcount");
        j.setJarByClass(WordCount.class);
        j.setMapperClass(MapForWordCount.class);
        j.setReducerClass(ReduceForWordCount.class);
        j.setOutputKeyClass(Text.class);
        j.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context con) throws IOException, InterruptedException {
            String line = value.toString();
            String[] words = line.split(",");
            for (String word : words) {
                Text outputKey = new Text(word.toUpperCase().trim());
                IntWritable outputValue = new IntWritable(1);
                con.write(outputKey, outputValue);
            }
        }
    }

    public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            con.write(word, new IntWritable(sum));
        }
    }
}
OUTPUT
RESULT
Thus the program for word count has been developed and executed successfully.