
GRID COMPUTING LAB

GLOBUS TOOLKIT INSTALLATION


Step 1: Java updates
Command: sudo add-apt-repository ppa:webupd8team/java
Step 2: sudo apt-get update
Step 3: sudo apt-get install oracle-java8-installer
Step 4: java -version
Step 5: javac -version
Step 6: sudo apt-get install oracle-java8-set-default
Step 7: sudo add-apt-repository ppa:ubuntu-toolchain-r/test
Step 8: sudo apt-get update
Step 9: sudo apt-get install gcc-6 gcc-6-base
Step 10: sudo apt-get install perl
Step 11: sudo apt-get install sed make libssl-dev pkg-config
Step 12: wget http://www.globus.org/ftppub/gt6/installers/repo/deb/globus-toolkit-repo_latest_all.deb
Step 13: sudo dpkg -i globus-toolkit-repo_latest_all.deb
Step 14: sudo apt-get update
Step 15: sudo apt-get install globus-data-management-client
Step 16: sudo apt-get install globus-gridftp
Step 17: sudo apt-get install globus-gram5
Step 18: sudo apt-get install globus-gsi
Step 19: sudo apt-get install globus-data-management-server
Step 20: sudo apt-get install globus-data-management-sdk
Step 21: sudo apt-get install globus-resource-management-server
Step 22: sudo apt-get install globus-resource-management-client
Step 23: sudo apt-get install globus-resource-management-sdk
Step 24: sudo apt-get install myproxy
Step 25: sudo apt-get install gsi-openssh
Step 26: sudo apt-get install globus-gridftp globus-gram5 globus-gsi myproxy myproxy-server myproxy-admin
Step 27: Install NetBeans:
Step 28: wget http://download.netbeans.org/netbeans/8.1/final/bundles/netbeans-8.1-javaee-linux.sh
Step 29: chmod +x netbeans-8.1-javaee-linux.sh
Step 30: ./netbeans-8.1-javaee-linux.sh
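After Step 30, a quick sanity check of the toolkit install can be done (globus-version ships with GT6 and prints the installed toolkit version, e.g. 6.0):
Command: globus-version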

EX.NO. : 1
CREATION OF WEB SERVICE FOR CALCULATOR
DATE :

AIM
Write a program to develop a web service for calculator.

PROCEDURE

Step 1: Open NetBeans and go to New.

Step 2. Choose Java Web, select Web Application and click Next.

Step 3. Enter the project name, click Next and select the server (either Tomcat or GlassFish).

Step 4. Click Next and select Finish.

Step 5. Right-click the WebApplication (project name), select New, and choose Java Class.

Step 6. Type the following code
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName="MathService", targetNamespace = "http://my.org/ns/")


public class MathService
{
@WebMethod(operationName = "hello")
public String hello(@WebParam(name="name")String txt){
return "Hello"+txt+"!";
}
@WebMethod(operationName = "addSer")
public String addSer(@WebParam(name="value1")int v1, @WebParam(name = "value2")int v2)
{
return "Answer:" +(v1+v2)+"!";
}
@WebMethod(operationName = "subSer")
public String subSer(@WebParam(name="value1")int v1, @WebParam(name = "value2")int v2)
{
return "Answer:" +(v1-v2)+"!";
}
@WebMethod(operationName = "mulSer")
public String mulSer(@WebParam(name="value1")int v1, @WebParam(name = "value2")int v2)
{
return "Answer:" +(v1*v2)+"!";
}
@WebMethod(operationName = "divSer")
public String divSer(@WebParam(name="value1")int v1, @WebParam(name = "value2")int v2)
{
float res = 0;
try
{
res = ((float)v1)/((float) v2);
5
return "Answer:" +res+"!";
}
catch(ArithmeticException e){
System.out.println("Can't be divided by Zero"+e);
return "Answer:" +e.getMessage().toString()+"!!!";
}
}
}

Step 7. Run the project by pressing the F6 key or the Run button.


Step 8. Open a web browser and check that the tester page is available at:
http://localhost:8080/WebApplication2/MathService?Tester
(MathService in MathService?Tester is the Java class name; WebApplication2 is the project name.)
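To exercise the service outside the tester page, client stubs can be generated from the WSDL with wsimport and the operations called from plain Java. This is a minimal sketch; the generated names (the client package, MathService_Service, getMathServicePort) depend on the WSDL, so check them against the sources wsimport emits.

$ wsimport -keep -p client http://localhost:8080/WebApplication2/MathService?WSDL

// MathClient.java -- illustrative JAX-WS client for the calculator service
public class MathClient {
public static void main(String[] args) {
// Class and method names below come from the wsimport-generated sources
client.MathService_Service service = new client.MathService_Service();
client.MathService port = service.getMathServicePort();
System.out.println(port.addSer(5, 3)); // expected: Answer:8!
}
}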

OUTPUT

Give some values in the fields and check the output by pressing the Enter key.

Finally select the WSDL link

RESULT
Thus the web service for a calculator was developed and executed successfully.

EX.NO. : 2
OGSA-COMPLIANT WEB SERVICE
DATE :

AIM

Write a program to develop a new OGSA-compliant web service.

PROCEDURE

Step 1: Choose New Project from the main menu

Step 2: Select POM Project from the Maven category.

Step 3: Type MavenOSGiCDIProject as the project name and click Finish. The IDE creates the POM project and opens it in the Projects window.

Step 4: Expand the Project Files node in the Projects window and double-click pom.xml to open the file in the editor; make the following modification and save.
In pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany</groupId>
<artifactId>MavenOSGiCDIProject</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
<version>4.2.0</version>
<scope>provided</scope>
</dependency>
</dependencies>
</dependencyManagement>
</project>
Step 5: Creating OSGi Bundle Projects
Choose File -> New Project to open the New Project Wizard.
Step 6: Choose OSGi Bundle from the Maven category. Click Next.

Step 7: Type MavenHelloServiceApi as the Project Name for the OSGi bundle.

The IDE creates the bundle project and opens it in the Projects window. You can check the build plugins in pom.xml under Project Files.
The IDE also adds the org.osgi.core artifact as a default dependency, which can be viewed under Dependencies.

Step 8: Build the MavenHelloServiceApi project:
1. Right-click the MavenHelloServiceApi project node in the Projects window and choose Properties.

2. Select the Sources category in the Project Properties dialog box.
3. Set the Source/Binary Format to 1.6, confirm that the Encoding is UTF-8 and click OK.

4. Right-click the Source Packages node in the Projects window and choose New -> Java Interface.
5. Type Hello for the Class Name.

6. Select com.mycompany.mavenhelloserviceapi as the Package. Click finish.


7. Add the following sayHello method to the interface and save the changes.
package com.mycompany.mavenhelloserviceapi;

public interface Hello {
String sayHello(String name);
}
8. Right click the project node in the project window and choose build.
9. After building the project, open the Files window and expand the project node; you can see that MavenHelloServiceApi-1.0-SNAPSHOT.jar has been created in the target folder.

Step 9: Creating the MavenHelloServiceImpl Implementation Bundle
Here you will create the MavenHelloServiceImpl bundle in the POM project.
1. Choose File -> New Project to open the New Project Wizard
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceImpl for the Project Name
4. Click Browse and select the MavenOSGiCDIProject POM project as the Location. Click Finish (as in the earlier step).
5. Right click the project node in the Projects window and choose Properties.
6. Select the Sources category in the Project Properties dialog box.

7. Set the Source/Binary Format to 1.6 and confirm that the Encoding is UTF-8. Click OK.
8. Right click Source Packages node in the Projects window and choose New -> Java Class.
9. Type HelloImpl for the Class Name.
10. Select com.mycompany.mavenhelloserviceimpl as the Package. Click Finish.
11. Type the following and save your changes.
package com.mycompany.mavenhelloserviceimpl;

import com.mycompany.mavenhelloserviceapi.Hello;

/* @author linux */
public class HelloImpl implements Hello {
public String sayHello(String name) {
return "Hello " + name;
}
}
When you implement Hello, the IDE will display an error that you need to resolve by adding the
MavenHelloServiceApi project as a dependency.
12. Right click the Dependencies folder of MavenHelloServiceImpl in the Projects window and choose
Add Dependency.
13. Click the Open Projects tab in the Add Library dialog.
14. Select the MavenHelloServiceApi OSGi bundle. Click Add.

15. Expand the com.mycompany.mavenhelloserviceimpl package, double-click Activator.java and open the file in the editor.
The IDE automatically generates the Activator.java bundle activator class, which manages the bundle's lifecycle. By default it includes start() and stop() methods.
Modify the start() and stop() methods in the bundle activator class by adding the following lines.
package com.mycompany.mavenhelloserviceimpl;

import com.mycompany.mavenhelloserviceapi.Hello;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

public void start(BundleContext context) throws Exception {
// Register the Hello service when the bundle starts
System.out.println("HelloActivator::start");
context.registerService(Hello.class.getName(), new HelloImpl(), null);
System.out.println("HelloActivator::registration of Hello Service successful");
}

public void stop(BundleContext context) throws Exception {
// Release the Hello service when the bundle stops
context.ungetService(context.getServiceReference(Hello.class.getName()));
System.out.println("HelloActivator stopped");
}
}
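For reference, a consumer bundle could look the service up through the OSGi service registry. This is a minimal sketch (error handling omitted) using the same OSGi 4.2 APIs already used above; it is not part of the original exercise:

import com.mycompany.mavenhelloserviceapi.Hello;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class ClientActivator implements BundleActivator {
public void start(BundleContext context) throws Exception {
// Look up the Hello service registered by the provider bundle
ServiceReference ref = context.getServiceReference(Hello.class.getName());
if (ref != null) {
Hello hello = (Hello) context.getService(ref);
System.out.println(hello.sayHello("OSGi"));
context.ungetService(ref);
}
}
public void stop(BundleContext context) throws Exception { }
}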

Step 10: Building and Deploying the OSGi Bundles


Here you will build the OSGi bundles and deploy the bundles to GlassFish
1. Right click the MavenOSGiCDIProject folder in the Projects window and choose Clean and Build.

When you build the project, the IDE creates the JAR files in the target folder and also installs the snapshot JARs in the local repository.
In the Files window, expanding the target folder of each of the two bundle projects shows the two JAR archives (MavenHelloServiceApi-1.0-SNAPSHOT.jar and MavenHelloServiceImpl-1.0-SNAPSHOT.jar).

2. Start the GlassFish server (if not already started)


3. Copy MavenHelloServiceApi-1.0-SNAPSHOT.jar to /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles (the GlassFish installation directory).
4. You can see output similar to the following in the GlassFish Server log in the output window.

Info: Installed /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
5. Repeat the copy step for MavenHelloServiceImpl-1.0-SNAPSHOT.jar to /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles (the GlassFish installation directory).
6. You can see the following output in the GlassFish server log:

Info: Installed /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-1.0-SNAPSHOT.jar
Info: HelloActivator::start
Info: HelloActivator::registration of Hello Service successful
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-1.0-SNAPSHOT.jar

RESULT
Thus a new OGSA-compliant web service was developed and executed successfully.

EX.NO. : 3
GRID SERVICE USING APACHE AXIS
DATE :

AIM
To develop a Grid Service using Apache Axis

PROCEDURE

Step 1. Prerequisites
a. Install Apache Axis2. Download it from http://mirror.fibergrid.in/apache/axis/axis2/java/core/1.7.3/
b. Extract the Axis2 archive.
c. Open Eclipse, click Window, choose Preferences and select the Axis2 preference under Web Services. Map the extracted Axis2 path, then click Apply and OK.

d. Download Apache Tomcat and install the service by extracting the archive. Check it in a terminal by moving to the Tomcat folder:

$ bin/startup.sh
Check the Tomcat service in a web browser by visiting http://localhost:8080
Step 2. Open Eclipse and select new dynamic web project by selecting new.

Step 3. Enter the name as AddNew, select Tomcat 7.0 as the target server environment and 2.5 as the Dynamic web module version. Under Configuration select Modify and tick Axis2 Web Services. Click Finish.

Step 4. Right-click the project, add a new class named Welcome and click Finish.

Step 5. Type the following sample code in the class.


package com.Add.Good;

public class Welcome {
public int add(int x, int y)
{
return x + y;
}
}

Step 6. Right-click Welcome.java and select New --> Web Service.
Select Apache Axis2 as the Web service runtime and Tomcat 7.0 as the server, tick the publish and monitor check boxes, and click Finish.

Step 7. Right-click the project and select Run As -> Run on Server.

Step 8. Click Finish, keeping the fields selected as shown in the figure. A page will open automatically in the web browser at localhost:8080/AddNew/

Once you click the service, all the methods will show as output.
Step 9. Creating the .aar (Axis Archive) file and deploying the service
a. In the Eclipse workspace, go to the Welcome folder at /home/linux/workspace/AddNew/WebContent/WEB-INF/services/Welcome. Go to that directory in a terminal and give the command:
$ jar cvf Welcome.aar com META-INF

b. Copy the axis2.war file from the Apache Axis2 WAR distribution (downloaded earlier) to the webapps directory of Apache Tomcat.
c. Start the Tomcat service through the terminal (bin/startup.sh). A new directory called axis2 appears inside the webapps folder. Go to http://localhost:8080/axis2/ and you can find the homepage of the Axis2 Web Application.

d. Click the Administration link and log in with username admin and password axis2. Use the Upload Service link at the top left to upload the created Welcome.aar file. This is equivalent to manually copying Welcome.aar to the webapps/axis2/WEB-INF/services folder.
e. Now list the services by visiting localhost:8080/axis2/services/listServices; you will see the newly added service.
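As a quick smoke test (assuming Axis2's default REST binding is enabled), the add operation can also be invoked directly from the browser:
http://localhost:8080/axis2/services/Welcome/add?x=5&y=3
The response is a small XML document whose return element contains 8.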

RESULT
Thus the program for Grid Service using Apache Axis was successfully executed.

EX.NO. : 4
APPLICATIONS USING JAVA OR C/C++ GRID APIs
DATE :

AIM
To develop an application in Java using Grid APIs.

PROCEDURE

Step 1: Import all the necessary java packages and name the file as GridLayoutDemo.java
Step 2: Set up components to preferred size
Step 3: Add buttons to experiment with Grid Layout
Step 4: Add controls to set up horizontal and vertical gaps
Step 5: Process the Apply gaps button press
Step 6: Create the GUI
Step 7: Create and set up the window, set up the content pane and display the window
Step 8: Schedule a job for the event dispatch thread
Step 9: Show the application's GUI
PROGRAM
import java.awt.*;
import javax.swing.*;
public class MyGridLayout{
JFrame f;
MyGridLayout(){
f=new JFrame();
JButton b1=new JButton("1");
JButton b2=new JButton("2");
JButton b3=new JButton("3");
JButton b4=new JButton("4");
JButton b5=new JButton("5");
JButton b6=new JButton("6");
JButton b7=new JButton("7");
JButton b8=new JButton("8");
JButton b9=new JButton("9");
f.add(b1);f.add(b2);f.add(b3);f.add(b4);f.add(b5);
f.add(b6);f.add(b7);f.add(b8);f.add(b9);

f.setLayout(new GridLayout(3,3));
//setting grid layout of 3 rows and 3 columns
f.setSize(300,300);
f.setVisible(true);
}
public static void main(String[] args) {
new MyGridLayout();
}
}

OUTPUT

Figure 1: Horizontal, Left-to-Right
Figure 2: Horizontal, Right-to-Left

RESULT
Thus the program to develop an application in Java using Grid APIs was successfully executed.

EX.NO. : 5
SECURED APPLICATIONS USING GLOBUS TOOLKIT
DATE :

AIM
To develop a secured applications using a basic security mechanisms available in Globus toolkit.

PROCEDURE

Step 1. Installing and setup of the Certificate Authority. Open a terminal, move to the root user and give the commands:
root@linux:~# apt-get install
root@linux:~# sudo grid-ca-create -noint
Certificate Authority Setup
This script will setup a Certificate Authority for signing Globus users certificates. It will also generate a
simple CA package that can be distributed to the users of the CA.
The CA information about the certificates it distributes will be kept in:
/var/lib/globus/simple_ca
The unique subject name for this CA is:
cn=Globus Simple CA, ou=simpleCA-ubuntu, ou=GlobusTest, o=Grid
Insufficient permissions to install CA into the trusted certificate directory (tried ${sysconfdir}/grid-security/certificates and ${datadir}/certificates)
Creating RPM source tarball... done

globus_simple_ca_388f6778.tar.gz
Configure the subject name
The grid-ca-create program next prompts you for information about the name of the CA you wish to create:
root@linux:~# sudo grid-ca-create

It will ask for a few things at the command prompt; supply them:
i. Permission
ii. Unique subject name
iii. Mail id
iv. Expiration date
v. Password
Generating Debian Packages
Get into the default simple_ca path /var/lib/globus/simple_ca
Examining a Certificate Request
To examine a certificate request, use the openssl req command. Get into the path /etc/grid-security/:
root@linux:/etc/grid-security# openssl req -noout -text -in hostcert_request.pem

Signing a Certificate Request:

root@linux:/var/lib/globus/simple_ca# grid-ca-sign -in certreq.pem -out cert.pem

Revoking a Certificate

SimpleCA does not yet provide a convenient interface to revoke a signed certificate, but it can be
done with the openssl command.

root@linux:/var/lib/globus/simple_ca# openssl ca -config grid-ca-ssl.conf -revoke newcerts/01.pem
Using configuration from /home/simpleca/.globus/simpleCA/grid-ca-ssl.conf
Enter pass phrase for /home/simpleca/.globus/simpleCA/private/cakey.pem:
Revoking Certificate 01.
Data Base Updated
Renewing a CA
root@linux:/var/lib/globus/simple_ca# openssl req -key private/cakey.pem -new -x509 -days 1825 -out newca.pem -config grid-ca-ssl.conf
output:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-elephant.globus.org]:
Name (E.g., John M. Smith) []:bala
root@linux:/var/lib/globus/simple_ca# grid-ca-package -d -cadir ~/.globus/simple_ca/
Creating RPM source tarball... done
globus_simple_ca_388f6778.tar.gz
Creating debian binary...dpkg-buildpackage: source package globus-simple-ca-388f6778
dpkg-buildpackage: source version 0.0
dpkg-buildpackage: source distribution UNRELEASED
dpkg-buildpackage: source changed by gcclab <gcclab@>

dpkg-buildpackage: host architecture amd64
dpkg-source --before-build globus-simple-ca-388f6778
debian/rules clean
test -x debian/rules
dh_clean
dh_clean debian/*.install
dpkg-source -b globus-simple-ca-388f6778 dpkg-source: warning: no source format specified in
debian/source/format, see dpkg-source(1)
dpkg-source: info: using source format `1.0'

dpkg-source: warning: source directory 'globus-simple-ca-388f6778' is not <sourcepackage>-<upstreamversion> 'globus-simple-ca-388f6778-0.0'
dpkg-source: info: building globus-simple-ca-388f6778 in globus-simple-ca-388f6778_0.0.tar.gz
dpkg-source: info: building globus-simple-ca-388f6778 in globus-simple-ca-388f6778_0.0.dsc
dpkg-source: warning: missing information for output field Standards-Version debian/rules build
test -x debian/rules
mkdir -p "."
debian/rules binary
test -x debian/rules
dh_testroot
dh_clean -k
dh_clean: dh_clean -k is deprecated; use dh_prep instead
dh_installdirs -A
mkdir -p "."
Adding cdbs dependencies to debian/globus-simple-ca-388f6778.substvars dh_installdirs -
pglobus-simple-ca-388f6778 dh_testdir
dh_testroot
dh_clean -k

dh_clean: dh_clean -k is deprecated; use dh_prep instead [ -d
/tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates ] || \
mkdir -p /tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates
rm -f debian/globus-simple-ca-388f6778.install || true touch debian/globus-simple-ca-
388f6778.install
for file in 388f6778.0 388f6778.signing_policy globus-host-ssl.conf.388f6778 globus-user-
ssl.conf.388f6778 grid-security.conf.388f6778; do \
if [ -f "$file" ]; then \
cp "$file" "/tmp/globus_simple_ca.FHnB8mnm/globus-simple-ca-388f6778/debian/tmp/etc/grid-
security/certificates" ; \

echo "debian/tmp/etc/grid-security/certificates/$file" etc/grid-security/certificates>>


debian/globus-simple-ca-388f6778.install; \ fi ; \
done
dh_installdocs -pglobus-simple-ca-388f6778
dh_installexamples -pglobus-simple-ca-388f6778
dh_installman -pglobus-simple-ca-388f6778
dh_installinfo -pglobus-simple-ca-388f6778
dh_installmenu -pglobus-simple-ca-388f6778
dh_installcron -pglobus-simple-ca-388f6778
dh_installinit -pglobus-simple-ca-388f6778
dh_installdebconf -pglobus-simple-ca-388f6778
dh_installemacsen -pglobus-simple-ca-388f6778
dh_installcatalogs -pglobus-simple-ca-388f6778
dh_installpam -pglobus-simple-ca-388f6778
dh_installlogrotate -pglobus-simple-ca-388f6778
dh_installlogcheck -pglobus-simple-ca-388f6778
dh_installchangelogs -pglobus-simple-ca-388f6778
dh_installudev -pglobus-simple-ca-388f6778

dh_lintian -pglobus-simple-ca-388f6778
dh_bugfiles -pglobus-simple-ca-388f6778
dh_install -pglobus-simple-ca-388f6778
dh_link -pglobus-simple-ca-388f6778
dh_installmime -pglobus-simple-ca-388f6778 dh_installgsettings -pglobus-simple-ca-388f6778
dh_strip -pglobus-simple-ca-388f6778
dh_compress -pglobus-simple-ca-388f6778
dh_fixperms -pglobus-simple-ca-388f6778
dh_makeshlibs -pglobus-simple-ca-388f6778
dh_installdeb -pglobus-simple-ca-388f6778
dh_perl -pglobus-simple-ca-388f6778
dh_shlibdeps -pglobus-simple-ca-388f6778
dh_gencontrol -pglobus-simple-ca-388f6778
dpkg-gencontrol: warning: Depends field of package globus-simple-ca-388f6778: unknown
substitution variable ${shlibs:Depends}
# only call dh_scour for packages in main
if grep -q '^Component:[[:space:]]*main' /CurrentlyBuilding 2>/dev/null; then dh_scour -
pglobus-simple-ca-388f6778 ; fi
dh_md5sums -pglobus-simple-ca-388f6778
dh_builddeb -pglobus-simple-ca-388f6778 dpkg-deb: building package `globus-simple-ca-
388f6778' in `../globus-simple-ca-388f6778_0.0_all.deb'.
dpkg-genchanges>../globus-simple-ca-388f6778_0.0_amd64.changes dpkg-genchanges:
including full source code in upload dpkg-source --after-build globus-simple-ca-388f6778
dpkg-buildpackage: full upload; Debian-native package (full source is included)
388f6778 -- the same 8-digit CA hash can be used on all machines
linux@linux:~$ dpkg -i globus-simple-ca-388f6778_0.0_all.deb ### used for loading to other machines through a pen drive
linux@linux:~$ sudo dpkg -i globus-simple-ca-388f6778_0.0_all.deb Selecting previously
unselected package globus-simple-ca-388f6778. (Reading database ... 260415 files and
directories currently installed.) Preparing to unpack globus-simple-ca-388f6778_0.0_all.deb ...
Unpacking globus-simple-ca-388f6778 (0.0) ... Setting up globus-simple-ca-388f6778 (0.0) ...

linux@linux:~$ cd .globus/simpleCA/
linux@linux:~$ cd .globus/simpleCA/
linux@linux:~/.globus/simpleCA$ sudo cp globus-* grid-* /etc/grid-security/
linux@linux:~/.globus/simpleCA$ ls -l /etc/grid-security/ total 28
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
linux@linux:~/.globus/simpleCA$ hostname
ubuntu

root@linux:~/.globus/simpleCA# gedit /etc/hosts


192.168.0.28 bala.globus.in
192.168.0.10 baas.globus.in
Password -- bala
Create the fully qualified domain name:
linux@linux:~/.globus/simpleCA$ sudo bash
root@linux:~/.globus/simpleCA# gedit /etc/hosts
linux@linux:~/.globus/simpleCA$ grid-cert-request
Enter your name, e.g., John Smith: bala m
A certificate request and private key is being created.
You will be asked to enter a PEM pass phrase. This pass phrase is akin to your account
password, and is used to protect your key file.
If you forget your pass phrase, you will need to obtain a new certificate.
Generating a 1024 bit RSA private key
.................................++++++
........++++++

writing new private key to '/home/gcclab/.globus/userkey.pem'
Enter PEM pass phrase:
140306478339744:error:28069065:lib(40):UI_set_result:result too small:ui_lib.c:869:You must
type in 4 to 1024 characters
140306478339744:error:0906406D:PEM routines:PEM_def_callback:problems getting
password:pem_lib.c:111:
140306478339744:error:0907E06F:PEM routines:DO_PK8PKEY:read key:pem_pk8.c:130:
Error number 1 was returned by
/usr/bin/openssl
linux@linux:~/.globus/simpleCA$ grid-cert-request -force
/home/linux/.globus/usercert.pem already exists
/home/linux/.globus/userkey.pem already exists
Enter your name, e.g., John Smith: bala m
A certificate request and private key is being created.
You will be asked to enter a PEM pass phrase. This pass phrase is akin to your account
password, and is used to protect your key file.
If you forget your pass phrase, you will need to obtain a new certificate.
Generating a 1024 bit RSA private key
..................................................................................................++++++
....++++++
writing new private key to '/home/gcclab/.globus/userkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-ubuntu]:
Level 2 Organizational Unit [local]:
Name (E.g., John M. Smith) []:

A private key and a certificate request has been generated with the subject:
/O=Grid/OU=GlobusTest/OU=simpleCA-ubuntu/OU=local/CN=bala m
If the CN=bala m is not appropriate, rerun this script with the -force -cn "Common Name"
options.

Your private key is stored in /home/gcclab/.globus/userkey.pem
Your request is stored in /home/gcclab/.globus/usercert_request.pem
Please e-mail the request to the Globus Simple CA gcclab@ubuntu
You may use a command similar to the following:
cat /home/gcclab/.globus/usercert_request.pem | mail gcclab@ubuntu
Only use the above if this machine can send AND receive e-mail. If not, please mail using some other method.

Your certificate will be mailed to you within two working days.


If you receive no response, contact Globus Simple CA at gcclab@ubuntu

linux@linux:~/.globus/simpleCA$ cd newcerts/
linux@linux:~/.globus/simpleCA/newcerts$ ls
linux@linux:~/.globus/simpleCA/newcerts$ cd ..
linux@linux:~/.globus/simpleCA$ pwd
/home/gcclab/.globus/simpleCA
linux@linux:~/.globus/simpleCA$ cd ..
linux@linux:~/.globus$ pwd
/home/linux/.globus
linux@linux:~/.globus$ ls -l
linux@linux:~/.globus$ sudo grid-cert-request -host bala.globus.in
Generating a 1024 bit RSA private key
.......................................++++++
...................++++++
writing new private key to '/etc/grid-security/hostkey.pem'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-ubuntu]:
Name (E.g., John M. Smith) []:
A private host key and a certificate request has been generated with the subject:
/O=Grid/OU=GlobusTest/OU=simpleCA-ubuntu/CN=host/bala.globus.in
----------------------------------------------------------
The private key is stored in /etc/grid-security/hostkey.pem
The request is stored in /etc/grid-security/hostcert_request.pem
Please e-mail the request to the Globus Simple CA gcclab@ubuntu
You may use a command similar to the following:
cat /etc/grid-security/hostcert_request.pem | mail gcclab@ubuntu
Only use the above if this machine can send AND receive e-mail. If not, please mail using some other method.
Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at gcclab@ubuntu
linux@linux:~/.globus$ ls -l
total 12
gcclab@ubuntu:~/.globus$ ls -l /etc/grid-security/
total 36
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem

-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem


drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir

linux@linux:~/.globus$ cp usercert_request.pem usercert.pem
gcclab@ubuntu:~/.globus$ ls -l /etc/grid-security/
total 36
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
gcclab@ubuntu:~/.globus$ ls -l
total 16

drwx------ 6 gcclab gcclab 4096 Jul 2 07:43 simpleCA


-rw-r--r-- 1 gcclab gcclab 1351 Jul 2 08:12 usercert.pem
-rw-r--r-- 1 gcclab gcclab 1351 Jul 2 08:05 usercert_request.pem
-r-------- 1 gcclab gcclab 1041 Jul 2 08:05 userkey.pem
linux@linux:~/.globus$ cp usercert_request.pem usercert.pem
linux@linux:~/.globus$ ls -l
total 16
drwx------ 6 gcclab gcclab 4096 Jul 2 07:43 simpleCA
-rw-r--r-- 1 gcclab gcclab 1351 Jul 2 08:13 usercert.pem
-rw-r--r-- 1 gcclab gcclab 1351 Jul 2 08:05 usercert_request.pem
-r-------- 1 gcclab gcclab 1041 Jul 2 08:05 userkey.pem
linux@linux:~/.globus$ ls -l /etc/grid-security/
total 36
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates

-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 0 Jul 2 08:09 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem
drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy
lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
linux@linux:~/.globus$ cd /etc/grid-security
gcclab@ubuntu:/etc/grid-security$ sudo cp hostcert_request.pem hostcert.pem
linux@linux:/etc/grid-security$ ls -l
total 40
drwxr-xr-x 2 root root 4096 Jul 2 07:50 certificates
-rw-r--r-- 1 root root 2929 Jul 2 07:53 globus-host-ssl.conf
-rw-r--r-- 1 root root 3047 Jul 2 07:53 globus-user-ssl.conf
-rw-r--r-- 1 root root 2929 Jul 2 07:53 grid-ca-ssl.conf
-rw-r--r-- 1 root root 1251 Jul 2 07:53 grid-security.conf
-rw-r--r-- 1 root root 1349 Jul 2 08:16 hostcert.pem
-rw-r--r-- 1 root root 1349 Jul 2 08:09 hostcert_request.pem
-r-------- 1 root root 916 Jul 2 08:09 hostkey.pem

drwxr-xr-x 2 root root 4096 Nov 29 2013 myproxy


lrwxrwxrwx 1 root root 19 Jul 2 02:29 sshftp -> /etc/gridftp-sshftp
drwxr-xr-x 2 root root 4096 Dec 2 2013 vomsdir
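With the user and host certificates in place, the setup can be exercised end to end by creating a short-lived proxy credential (assuming the signed usercert.pem and userkey.pem are in ~/.globus):

linux@linux:~$ grid-proxy-init -verify -debug

grid-proxy-init prompts for the PEM pass phrase and, on success, writes a proxy certificate to /tmp and verifies it against the CA certificate installed in /etc/grid-security/certificates.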

RESULT
Thus a secured application using the basic security mechanisms available in the Globus Toolkit was developed and executed successfully.

EX.NO. : 6 GRID PORTAL AND IMPLEMENT IT WITH AND WITHOUT
GRAM CONCEPT
DATE :

AIM
To develop a Grid portal and implement it with and without GRAM concept.

PROCEDURE

It has been noted multiple times that the likely user interface to grid applications will be through portals, specifically Web portals. A grid portal may be constructed as a Web page interface to provide easy access to grid applications. The Web user interface provides user authentication, job submission, job monitoring, and the results of the job.

The login.html produces the login screen, where the user enters the user ID and password. The
control is passed to the Login Servlet with the user ID and password as input arguments. The
user is authenticated by the servlet. If successful, the user is presented with a welcome screen
with the welcome.html file. Otherwise, the user is presented with an unsuccessful login screen
with the unsuccessfulLogin.html file.
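For illustration, a minimal sketch of the LoginServlet described above (the parameter names and the credential check are placeholders; a real portal would validate against MyProxy or a user database):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
String user = req.getParameter("userid");
String pass = req.getParameter("password");
// Placeholder check -- replace with real authentication
if ("admin".equals(user) && "secret".equals(pass)) {
resp.sendRedirect("welcome.html");
} else {
resp.sendRedirect("unsuccessfulLogin.html");
}
}
}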

Globus Resource Allocation Manager (GRAM)
When a job is submitted by a client, the request is sent to the remote host and handled by a
gatekeeper daemon. The gatekeeper creates a job manager to start and monitor the job. When the
job is finished, the job manager sends the status information back to the client and terminates.
The GRAM subsystem consists of the following elements:
 The globusrun command and associated APIs
 Resource Specification Language (RSL)
 The gatekeeper daemon
 The job manager
 Dynamically-Updated Request Online Coallocator (DUROC)
Each of these elements is described briefly below.
The globusrun command
The globusrun command (or its equivalent API) submits a job to a resource within the grid. This
command is typically passed an RSL string (see below) that specifies parameters and other
properties required to successfully launch and run the job.
Resource Specification Language (RSL)
RSL is a language used by clients to specify the job to be run. All job submission requests are described in an RSL string that includes information such as the executable file; its parameters; information about redirection of stdin, stdout, and stderr; and so on. Basically it provides a standard way of specifying all of the information required to execute a job, independent of the target environment. It is then the responsibility of the job manager on the target system to parse the information and launch the job in the appropriate way. The syntax of RSL is very straightforward. Each statement is enclosed within parentheses. Comments are designated with parentheses and asterisks, for example, (* this is a comment *). Supported attributes include the following (an example follows the list):
rsl_substitution: Defines variables
executable: The script or command to be run
arguments: Information or flags to be passed to the executable
stdin: Specifies the remote URL and local file used for the executable
stdout: Specifies the remote file to place standard output from the job
stderr: Specifies the remote file to place standard error from the job
queue: Specifies the queue to submit the job (requires a scheduler)
count: Specifies the number of executions
directory: Specifies the directory to run the job
project: Specifies a project account for the job (requires a scheduler)
dryRun: Verifies the RSL string but does not run the job
maxMemory: Specifies the maximum amount of memory in MB required for the job
minMemory: Specifies the minimum amount of memory in MB required for the job
hostCount: Specifies the number of nodes in a cluster required for the job
environment: Specifies environment variables that are required for the job
jobType: Specifies the type of job: single process, multi-process, mpi, or condor
maxTime: Specifies the maximum execution wall or cpu time for one execution
maxWallTime: Specifies the maximum wall time for one execution
maxCpuTime: Specifies the maximum cpu time for one execution
gramMyjob: Specifies whether the gram myjob interface starts one process/thread (independent) or more (collective)
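For illustration, a minimal RSL string and submission might look like the following (grid.example.org is a placeholder gatekeeper contact; -r names the target resource and -o streams the job's stdout back to the client):

& (executable = /bin/date)
  (count = 1)
  (stdout = date.out)

$ globusrun -o -r grid.example.org '&(executable=/bin/date)(count=1)'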

RESULT
Thus the program to develop Grid Portal was successfully executed.

CLOUD COMPUTING LAB

INTRODUCTION

What is cloud computing?


Cloud computing means that instead of all the computer hardware and software you're using
sitting on your desktop, or somewhere inside your company's network, it's provided for you as a
service by another company and accessed over the Internet, usually in a completely seamless
way. Exactly where the hardware and software is located and how it all works doesn't matter to
you, the user—it's just somewhere up in the nebulous "cloud" that the Internet represents.
Cloud computing is a buzzword that means different things to different people. For some, it's just
another way of describing IT (information technology) "outsourcing"; others use it to mean any
computing service provided over the Internet or a similar network; and some define it as any
bought-in computer service you use that sits outside your firewall.

Types of cloud computing


IT people talk about three different kinds of cloud computing, where different services are being
provided for you. Note that there's a certain amount of vagueness about how these things are
defined and some overlap between them.

 Infrastructure as a Service (IaaS) means you're buying access to raw computing hardware over the Net, such as servers or storage. Since you buy what you need and pay as you go, this is often referred to as utility computing. Ordinary web hosting is a simple example of IaaS: you pay a monthly subscription or a per-megabyte/gigabyte fee to have a hosting company serve up files for your website from their servers.
 Software as a Service (SaaS) means you use a complete application running on someone
else's system. Web-based email and Google Documents are perhaps the best-known examples.
Zoho is another well-known SaaS provider offering a variety of office applications online.
 Platform as a Service (PaaS) means you develop applications using Web-based tools so they run on systems software and hardware provided by another company. So, for example, you might develop your own ecommerce website but have the whole thing, including the shopping cart, checkout, and payment mechanism running on a merchant's server. App Cloud (from salesforce.com) and the Google App Engine are examples of PaaS.

Advantages and disadvantages of cloud computing


Advantages
The pros of cloud computing are obvious and compelling. If your business is selling books or
repairing shoes, why get involved in the nitty gritty of buying and maintaining a complex
computer system? If you run an insurance office, do you really want your sales agents wasting
time running anti-virus software, upgrading word-processors, or worrying about hard-drive
crashes? Do you really want them cluttering your expensive computers with their personal
emails, illegally shared MP3 files, and naughty YouTube videos—when you could leave that
responsibility to someone else? Cloud computing allows you to buy in only the services you
want, when you want them, cutting the upfront capital costs of computers and peripherals. You
avoid equipment going out of date and other familiar IT problems like ensuring system security
and reliability. You can add extra services (or take them away) at a moment's notice as your
business needs change. It's really quick and easy to add new applications or services to your
business without waiting weeks or months for the new computer (and its software) to arrive.
Drawbacks
Instant convenience comes at a price. Instead of purchasing computers and software, cloud
computing means you buy services, so one-off, upfront capital costs become ongoing operating
costs instead. That might work out much more expensive in the long-term.
If you're using software as a service (for example, writing a report using an online word
processor or sending emails through webmail), you need a reliable, high-speed, broadband
Internet connection functioning the whole time you're working. That's something we take for
granted in countries such as the United States, but it's much more of an issue in developing
countries or rural areas where broadband is unavailable.
An Introduction to Cloud Computing with OpenNebula
An OpenNebula Private Cloud provides infrastructure users with an elastic platform for fast
delivery and scalability of services to meet dynamic demands of service end-users. Services are
hosted in VMs, and then submitted, monitored and controlled in the Cloud by using Sunstone or
any of the OpenNebula interfaces:

 Command Line Interface (CLI)
 XML-RPC API
 OpenNebulaRuby and Java Cloud APIs

The aim of a Private Cloud is not to expose to the world a cloud interface to sell capacity over the Internet, but to provide local cloud users and administrators with a flexible and agile private infrastructure to run virtualized service workloads within the administrative domain.
OpenNebula virtual infrastructure interfaces expose user and administrator functionality for
virtualization, networking, image and physical resource configuration, management, monitoring
and accounting.

EX.NO. : 1
VIRTUAL MACHINE CREATION
DATE :

AIM
Write a program to understand procedure to create the virtual machine.

PROCEDURE

Step1 : KVM INSTALLATION


Check that your CPU supports hardware virtualization
To run KVM, you need a processor that supports hardware virtualization. Intel and AMD both
have developed extensions for their processors, deemed respectively
Intel VT-x (code name Vanderpool) and AMD-V (code name Pacifica). To see if your
processor supports one of these, you can review the output from this command:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If 0 it means that your CPU doesn't support hardware virtualization.
If 1 (or more) it does - but you still need to make sure that virtualization is enabled in
the BIOS.
Use a 64 bit kernel (if possible)
Running a 64 bit kernel on the host operating system is recommended but not required.
To serve more than 2GB of RAM to your VMs, you must use a 64-bit kernel; on a 32-bit kernel install, you'll be limited to 2GB RAM at maximum for a given VM.
To see if your processor is 64-bit, you can run this command:
$ egrep -c ' lm ' /proc/cpuinfo
If 0 is printed, it means that your CPU is not 64-bit.
If 1 or higher, it is. Now see if your running kernel is 64-bit, just issue the following command:
$ uname -m
x86_64 indicates a running 64-bit kernel. If you see i386, i486, i586 or i686, you're running a 32-bit kernel.
$ ls /lib/modules/3.16.0-30-generic/kernel/arch/x86/kvm/

kvm-amd.ko : AMD Processor
kvm-intel.ko : Intel Processor
kvm.ko : Kernel object
$ ls /dev/kvm
/dev/kvm

Step 2 : Install Necessary Packages


a. qemu-kvm
b. libvirt-bin
c. bridge-utils
d. virt-manager
e. qemu-system
$ sudo apt-get install qemu-kvm
$ sudo apt-get install libvirt-bin
$ sudo apt-get install bridge-utils
$ sudo apt-get install virt-manager
$ sudo apt-get install qemu-system
To check the package installation:
$ dpkg -l | grep qemu-kvm
$ virsh
virsh# exit
Step 3 : Verify Installation
You can test if your install has been successful with the following command:
$ virsh -c qemu:///system list
Id Name State
----------------------------------
If on the other hand you get something like this:
$ virsh -c qemu:///system list
libvir: Remote error : Permission denied
error: failed to connect to the hypervisor
virsh # version

virsh # nodeinfo
Step 4 : Create the VMS
$ virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant ubuntuHardy
(or) If an "Open disk image" permission error appears:
$ sudo chmod 777 hardy1.qcow2
Step 5 : Run the Virtual machine
$ virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant ubuntuHardy
$ sudo virt-manager
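Once created, the guest can also be managed from the command line with virsh (standard libvirt usage; the domain name hardy matches the -n flag used above):
$ virsh -c qemu:///system list --all
$ virsh -c qemu:///system start hardy
$ virsh -c qemu:///system shutdown hardy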

OUTPUT

RESULT
Thus the procedure for virtual machine creation has been executed successfully.

EX.NO. : 2
VIRTUAL MACHINE WITH DIFFERENT CONFIGURATION
DATE :

AIM
To create and run the virtual machine of different configuration. Check how many virtual
machines can be utilized at particular time.

PROCEDURE
Step 1: Check that your CPU supports hardware virtualization.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
Step 2: To see if your processor is 64-bit or not.
$ egrep -c ' lm ' /proc/cpuinfo
Step 3: Now see if your running kernel is 64-bit or not.
$ uname –a
Step 4: To install the KVM, execute the following command.
$ sudo apt-get install qemu-kvm
$ sudo apt-get install libvirt-bin
$ sudo apt-get install ubuntu-vm-builder
$ sudo apt-get install bridge-utils
Step 5: Verify whether the KVM installation has been successful.
$ virsh -c qemu:///system list
Step 6: Installing a GUI for KVM.
$ sudo apt-get install virt-manager
Step 7: Creating a KVM guest machine.
$ virt-manager

Step 8: Then start creating a new virtual machine by hitting the New button. Enter the name of your virtual machine, select your installation media type and click Forward.

Step 9: Then you will have to set the amount of RAM and the number of CPUs that will be available to the virtual machine.

Step 10: Finally, you will get a confirmation screen that shows the details of your virtual machine. Then click the Finish button.

Step 11: Repeat the same procedure to create multiple virtual machines.

RESULT

Thus virtual machines of different configurations were created, and how many virtual machines can be utilized at a particular time was checked successfully.

EX.NO. : 3
C PROGRAM - VIRTUAL MACHINE
DATE :

AIM
Write code to install a C compiler and execute addition of two matrices program in the virtual
machine.

PROCEDURE

Step 1: To login into Guest OS in KVM

Step 2: To write and execute your own C Program in gcc compiler.


Install the C compiler using the command:
$ sudo apt-get install gcc

SOURCE CODE
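A minimal matrix-addition program in C that can be used for this exercise (fixed 2x2 matrices for brevity; save as matrix_add.c inside the guest):

#include <stdio.h>

int main(void) {
    int a[2][2] = {{1, 2}, {3, 4}};
    int b[2][2] = {{5, 6}, {7, 8}};
    int sum[2][2];

    /* Add corresponding elements of a and b */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            sum[i][j] = a[i][j] + b[i][j];

    /* Print the resulting matrix */
    printf("Sum of the two matrices:\n");
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++)
            printf("%d ", sum[i][j]);
        printf("\n");
    }
    return 0;
}

Compile and run:
$ gcc matrix_add.c -o matrix_add
$ ./matrix_add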

OUTPUT

RESULT
Thus the C program has been executed successfully.

EX.NO. : 4
VIRTUAL MACHINE MIGRATION
DATE :

AIM
To show virtual machine migration based on a certain condition from one node to the other.

PROCEDURE

Step 1: Update source and target virtual machines to the latest package versions


To help ensure that platform images between cloud providers are running the same version of
key operating system packages, update these packages to the latest versions on both source and
target virtual machines.
sudo apt-get update
sudo apt-get upgrade

Step 2: Install rsync and screen packages on source and target virtual machines
The migration of application packages and files in this process will use rsync over ssh between
source and target virtual machines. The actual transfer of files between virtual machines can take
some time, so I also recommend using screen so that you can easily re-attach to an in-progress
migration session if you are inadvertently disconnected.
Ensure that rsync and screen packages are installed on both the source and target virtual
machines with these commands:
sudo apt-get install rsync
sudo apt-get install screen

Step 3: Add a consistent user account to both source and target virtual machines
To facilitate the migration process, ensure that you have a consistent user account configured on
both source and target virtual machines with sudo enabled. The newly provisioned target virtual
machines from Step 3 already include a user named azureuser with sudo enabled. To configure
this same user on each source virtual machine, use the following commands:

sudo groupadd -g 500 azureuser
sudo useradd -u 500 -g 500 -m -s /bin/bash azureuser
sudo passwd azureuser

Step 4: Start a screen session for the migration


On the source virtual machine, enter a new screen session for the migration by using the
following command:
sudo screen -S AzureMigration
If you are disconnected from the source virtual machine during the migration process, you can
reconnect to the detached screen session by using the following command after signing in again
to the source virtual machine:
sudo screen -r

Step 5: Build an exclusion list of directories and files


During the migration, we want to be careful to skip any files that include configuration
information relating to the identity of the source virtual machines, such as IP addresses,
hostnames, ssh keys, etc. For the Ubuntu-based virtual machines that we migrated, we used the
following commands on each source virtual machine to build our list of directories and files to
exclude from the migration process:
EXCLUDEFILE=/tmp/exclude.file
EXCLUDELIST='/boot /etc/fstab /etc/hostname /etc/issue /etc/hosts
/etc/sudoers /etc/networks /etc/network/* /etc/resolv.conf
/etc/ssh/* /etc/sysctl.conf /etc/mtab /etc/udev/rules.d/*
/lock /net /tmp'
EXCLUDEPATH=$(echo $EXCLUDELIST | sed 's/\ /\\n/g')
echo -e $EXCLUDEPATH > $EXCLUDEFILE
find / -name '*cloud-init*' >> $EXCLUDEFILE
find / -name '*cloud-config*' >> $EXCLUDEFILE
find / -name '*cloud-final*' >> $EXCLUDEFILE
The actual list of directories and files that you exclude may vary from this list, based on the
Linux distro version, packages and applications that you are migrating.

Credits: Kudos to Kevin Carter who wrote a great article a couple years ago that provided a
useful starting point for building a list of directories and files to consider excluding as part of a
Linux-to-Linux migration process!

Step 6: Stop applications during migration


To minimize application data changes from occurring during the migration process, stop the
related applications and daemons on the source virtual machines. The application that we
migrated was a web application built using Apache2, so we simply stopped the related Apache2
daemon.
sudo service apache2 stop

Step 7: Migrate the application files and data


From each source virtual machine, migrate application files and data using two rsync passes
over ssh. The first pass performs the bulk of the data transfer, whereas the second pass uses
checksums to confirm that all files were transferred successfully.
TARGETVM="insert_target_vm_public_ip_address"
rsync -e "ssh" -rlpEAXogDtSzh -P -x –exclude-from="$EXCLUDEFILE" –rsync-
path="sudorsync" –verbose –progress / azureuser@$TARGETVM:/
rsync -e "ssh" -crlpEAXogDtSzh -P -x –exclude-from="$EXCLUDEFILE" –rsync-
path="sudorsync" –verbose –progress / azureuser@$TARGETVM:/

Step 8: Restart each target virtual machine


After both rsync passes have completed, restart each target virtual machine to complete the
migration process.
ssh azureuser@$TARGETVM
shutdown -r now

RESULT
Thus the program to implement migration of virtual machine was executed successfully.

EX.NO. : 5
CREATION OF SINGLE NODE CLUSTER USING HADOOP
DATE :

AIM
Write a program to install Hadoop 2.7.1 and using this create a Single Node Cluster.

PROCEDURE

Step 1: Before installing or downloading anything, it is always better to update, using the following command:
Command: sudo apt-get update
Step 2 : Install Java 7 or 8
Command: sudo apt-get install default-jdk
Step 3 :We can check JAVA is properly installed or not using following command:
Command: java -version
Step 4: First create a group named hadoop.
Command: sudo addgroup hadoop
Step 4 :Add dedicated hadoop user
Command: sudo adduser --ingroup hadoop hduser (Enter the password & Retype the
password or don’t write password)
Step 5 :Hadoop requires SSH access to manage its nodes. SSH setup is required to do different
operations on a cluster such as starting, stopping, distributed daemon shell operations. So, Install
ssh
Command: sudo apt-get install ssh

Step 6: Just find which ssh?
Command: which ssh
Step 7 : just find which sshd
Command: which sshd
Step 7 : Login as hduser
Command: su hduser
Step 8 : Enter password : Password is hduser
Step 9 :The cd (ChangeDirectory) command will change from your current directory to any
directory you specify.
Command :cd
Step 10:To authenticate different users of Hadoop, it is required to provide public/private key
pair for a Hadoop user and share it with different users. So, Generate a key of hduser
Command: ssh-keygen -t rsa (Note: Press Enter for default)
Step 11 : Add the generated key to the authorized keys.
Command: cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Step 12 : Login to localhost
Command: ssh localhost
Step 13 :Add hduser to the sudo group

Command :su it (change to admin first) and enter admin password.
Command : sudo adduser hduser sudo and enter hduser password.
Step 14:Run a command as Admin
Command: sudo su hduser
Step 15: Now Download the Hadoop package from the below link
Command: wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
Step 16 : Extract the Hadoop package. (Note : Try this in it or mce login)
Command: tar xvzf hadoop-2.7.1.tar.gz
Step 17: Move the extracted package to /usr/local (Note: Try this in the it or mce login)
Command: sudo mv hadoop-2.7.1 /usr/local/hadoop
Enter the it or mce password. Verify via Folder: MyComputer/usr/local/hadoop
Step 18 : Change the owner of a file or directory
Command: sudo chown -R hduser:hadoop /usr/local/hadoop (enter the password)
Step 19 : View the path of jvm installed
Command: update-alternatives --config java
Step 20: Edit the bash_profile to set path.
Command: sudo nano ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Step 21: Update .bashrc file to apply changes
Command: source ~/.bashrc
Step 22: To set up the JAVA_HOME variables

Command: sudo nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh (to open the file)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

Step 23: Make a directory as below


Command: sudo mkdir -p /app/hadoop/tmp

Step 24: Change the owner of a file or directory
Command: sudo chown hduser:hadoop /app/hadoop/tmp
Step 25 :Setup Configuration Files needs to be modified
Command: sudo nano /usr/local/hadoop/etc/hadoop/core-site.xml (to open the file)
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and authority
determine the FileSystem implementation. The uri's scheme determines the config property
(fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
Step 26: Make a copy of mapred-site.xml.template.
Command: cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
Step 27: Update the mapred-site.xml file. (This may be done in root login.)
Command: sudo nano /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description> The host and port that the MapReduce job tracker runs at. If "local", then jobs are
run in-process as a single map and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
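Note: mapred.job.tracker is the old MRv1 (JobTracker) setting; on Hadoop 2.x it is the mapreduce.framework.name property set to yarn that actually directs jobs to YARN. Keeping both entries, as above, is harmless on a single node cluster.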
Step 28: Make directories for the namenode and datanode.
Command: sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
Command: sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
Step 29: Change the ownership of the directory.
Command: sudo chown -R hduser:hadoop /usr/local/hadoop_store
Step 30: Update the hdfs-site.xml file.
Command: sudo nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can be specified when
the file is created. The default is used if replication is not specified at create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
Step 31: Update the yarn-site.xml file.
Command: sudo nano /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Step 32: Format the Hadoop file system.
Command: hadoop namenode -format
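Note: on Hadoop 2.x this command still works but prints a deprecation warning; the equivalent newer form is:
Command: hdfs namenode -format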
Step 33: Start the Hadoop daemons.
Command: start-all.sh
Step 34: Check that all daemons started properly using the following command:
Command: jps
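On a healthy single node setup, jps should typically list NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager and Jps itself; the process IDs vary from run to run.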
Step 35: Stop the Hadoop daemons.
Command: stop-all.sh
OUTPUT
RESULT
Thus the single node Hadoop cluster has been set up and executed successfully.
EX.NO. : 6
MOUNT THE ONE NODE HADOOP CLUSTER USING FUSE
DATE :
AIM
To mount the one node Hadoop cluster using FUSE and access files on HDFS in the same
way as we do on Linux operating systems.
PROCEDURE
Step 1:
FUSE (Filesystem in Userspace) enables you to write a normal user application as a bridge for a
traditional filesystem interface.
The hadoop-hdfs-fuse package enables you to use your HDFS cluster as if it were a traditional
filesystem on Linux. It is assumed that you have a working HDFS cluster and know the
hostname and port that your NameNode exposes.
To install fuse-dfs on Ubuntu systems:
hdpuser@jiju-PC:~$ wget http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
--2016-07-24 09:10:33-- http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
Resolving archive.cloudera.com (archive.cloudera.com)... 151.101.8.167
Connecting to archive.cloudera.com (archive.cloudera.com)|151.101.8.167|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3508 (3.4K) [application/x-debian-package]
Saving to: ‘cdh5-repository_1.0_all.deb’
100%[======================================>] 3,508 --.-K/s in 0.09s
2016-07-24 09:10:34 (37.4 KB/s) - ‘cdh5-repository_1.0_all.deb’ saved [3508/3508]
Step 2:
hdpuser@jiju-PC:~$ sudo dpkg -i cdh5-repository_1.0_all.deb
Selecting previously unselected package cdh5-repository.
(Reading database ... 170607 files and directories currently installed.)
Preparing to unpack cdh5-repository_1.0_all.deb ...
Unpacking cdh5-repository (1.0) ...
Setting up cdh5-repository (1.0) ...
gpg: keyring `/etc/apt/secring.gpg' created
gpg: keyring `/etc/apt/trusted.gpg.d/cloudera-cdh5.gpg' created
gpg: key 02A818DD: public key "Cloudera Apt Repository" imported
gpg: Total number processed: 1
gpg: imported: 1
Step 3:
hdpuser@jiju-PC:~$ sudo apt-get update
Step 4:
hdpuser@jiju-PC:~$ sudo apt-get install hadoop-hdfs-fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
hdpuser@jiju-PC:~$ sudo mkdir -p /home/hdpuser/hdfs
[sudo] password for hdpuser:
hdpuser@jiju-PC:~$ sudo hadoop-fuse-dfs dfs://localhost:54310 /home/hdpuser/hdfs/
INFO /data/jenkins/workspace/generic-package-ubuntu64-14-04/CDH5.8.0-Packaging-Hadoop-2016-07-12_15-43-10/hadoop-2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /home/hdpuser/hdfs/
hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
hdpuser@jiju-PC:~$ mkdir /home/hdpuser/hdfs/new
hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
new
hdpuser@jiju-PC:~$ mkdir /home/hdpuser/hdfs/example
hdpuser@jiju-PC:~$ ls -ll /home/hdpuser/hdfs/
total 8
drwxr-xr-x 2 hdpuser 99 4096 Jul 24 15:28 example
drwxr-xr-x 2 hdpuser 99 4096 Jul 24 15:19 new
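Because the mounted directory behaves like an ordinary part of the Linux filesystem, the usual shell tools work on it directly. For example (sample.txt is a hypothetical local file used only for illustration):
hdpuser@jiju-PC:~$ cp ~/sample.txt /home/hdpuser/hdfs/new/
hdpuser@jiju-PC:~$ cat /home/hdpuser/hdfs/new/sample.txt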
To unmount the file system, use the umount command:
hdpuser@jiju-PC:~$ sudo umount /home/hdpuser/hdfs
NOTE: You can now add a permanent HDFS mount which persists through reboots.
To add a system mount:
Open /etc/fstab and add a line like the following to the bottom: (sudo vi /etc/fstab)
hadoop-fuse-dfs#dfs://<name_node_hostname>:<namenode_port> <mount_point> fuse allow_other,usetrash,rw 2 0
For example:
hadoop-fuse-dfs#dfs://localhost:54310 /home/hdpuser/hdfs fuse allow_other,usetrash,rw 2 0
Test to make sure everything is working properly:
$ mount <mount_point>
hdpuser@jiju-PC:~$ sudo mount /home/hdpuser/hdfs
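If the fstab entry is correct, the mount command above should attach HDFS without further arguments; listing the mounted filesystems is a quick way to confirm it, e.g.:
hdpuser@jiju-PC:~$ mount | grep hdfs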
RESULT
Thus FUSE has been installed and the one node Hadoop cluster mounted successfully.
EX.NO. : 7
PROGRAM USING API'S ON HADOOP
DATE :
AIM
To write a program using the APIs on Hadoop and to interact with it.
PROCEDURE
Step 1: Start the hadoop services by giving the following command in terminal
$ sbin/start-all.sh
$ jps
Step 2: Open web browser and open
localhost:50070
localhost:8088
Step 3: Create a folder in HDFS from the terminal; it will appear in the web interface.
$ bin/hadoop fs -mkdir /bala
Wait until the command executes.
Step 4: Open localhost:50070
Utilities --> Browse the file system.
A folder has been created with the name we gave in the terminal.
bin/hadoop ----> path of the hadoop binary (the entry point to HDFS commands)
fs ---> file system
-mkdir ------> create a folder
/ ------> root in hdfs
bala ----> folder name
Step 5: Load the data into the folder we created in HDFS.
$ bin/hadoop fs -copyFromLocal /home/bala/Pictures /bala2
Open the web browser and, under Utilities, browse the file system to check whether the content has been copied.
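The same check can be done from the terminal; listing the HDFS root and the target folder should show the new directory and the copied files:
$ bin/hadoop fs -ls /
$ bin/hadoop fs -ls /bala2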
OUTPUT
NameNode
DataNode
RESULT
Thus an API program has been developed for creating a folder and copying files into it.
EX.NO. : 8
STORAGE CONTROLLER INSTALLATION
DATE :
AIM
To find the procedure to install a storage controller and interact with it.
PROCEDURE
To install Storage Control, follow these steps.
Step 1: Ensure that Systems Director 6.3 is installed and running.
Step 2: Ensure that you are logged in to Systems Director with a user ID that has administrative privileges.
Step 3: Windows only: Restart the DB2 Management server.
a. Go to Start > Administrative Tools > Services.
b. Select DB2 Management Service from the services window and restart it.
Step 4: If you want to download and install Storage Control, go to step 5. If you want to install Storage Control from read-only media, such as a CD or mounted .iso image, go to step 8.
Step 5: From the Systems Director summary page, click the link Try Storage Control in the upper right corner.
Step 6: A download page opens. Download the appropriate file for your operating system.
Step 7: Extract the files to the directory where you want to install Storage Control, then go to step 9.
Step 8: Copy the Storage Control installer directory from the CD or the mounted .iso image into a temporary directory close to the system root, for example /SCInstall on AIX or Linux and C:\SCInstall on Windows.
Note: Storage Control cannot be installed from a read-only file system because the installation process extracts files into the same directory where the installation script runs.
Step 9: Access a command window and navigate to the directory where you extracted the files or where you copied the installer directory. Run the appropriate script. If you do not want the license agreement to display, use the -i option when you run the script, for example StorageControlInstall.sh -i.
Important: If you are not using IBM DB2 managed by Systems Director, then the DB2 user ID used must have DB2 Administrator privileges.
On Microsoft Windows systems, run the script StorageControlInstall.bat.
On Linux and AIX systems, run the script StorageControlInstall.sh.
Step 10: Restart Systems Director as directed.
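For illustration only (using the /SCInstall temporary directory suggested in step 8; your path may differ), a typical non-interactive run on Linux would look like:
cd /SCInstall
./StorageControlInstall.sh -i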
RESULT
Thus the procedure to install the storage controller was carried out successfully.
EX.NO. : 9
WORD COUNT USING HADOOP
DATE :
AIM
Write a program to perform word count using Hadoop.
PROCEDURE
Step 1: Make sure Hadoop is installed.
Command: hadoop version
Step 2: Make sure Java is installed.
Command: java -version
Step 3: Create a folder named WordCountTutorial on the desktop.
Step 4: In that folder, save the java program WordCount.java.
Step 5: In the same folder WordCountTutorial, create another folder input_data.
Step 6: In this input_data folder, add your own text file input.txt, for example as sketched below.
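As a small illustration (this sample content is an assumption, not part of the original exercise), input.txt can hold comma-separated words, since the mapper in the source code below splits each line on commas:
apple,banana,apple
banana,cherry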
Step 7: In the same folder WordCountTutorial, create a new folder tutorial_classes to hold the
java class files
Step 8: Open the terminal and type the following:
Command: export HADOOP_CLASSPATH=$(hadoop classpath)
Step 9: Verify it.
Command: echo $HADOOP_CLASSPATH
Step 10: Create a directory on HDFS.
Command: hadoop fs -mkdir /WordCountTutorial
Step 11: Create a directory inside it for the input.
Command: hadoop fs -mkdir /WordCountTutorial/Input
Step 12 : Check it : localhost:50070
Step 13: Upload the input file to that directory.
Command: hadoop fs -put /it/Desktop/WordCountTutorial/input_data/input.txt /WordCountTutorial/Input (you can drag and drop the file into the terminal to insert its path)
Step 14: Now Check it : localhost:50070
Step 15: Change the current directory to the tutorial directory.
Command: cd /it/Desktop/WordCountTutorial
Step 16: Compile the java code.
Command: javac -classpath ${HADOOP_CLASSPATH} -d tutorial_classes WordCount.java (drag and drop the tutorial_classes folder and WordCount.java to fill in their full paths)
Step 17: Check the class files created
Step 18: Put the output files in one jar file
Command: jar -cvf firstTutorial.jar -C tutorial_classes/ .
Step 19: Now run the jar file on hadoop.
Usage: hadoop jar <JAR_FILE> <CLASS_NAME> <HDFS_INPUT_DIRECTORY> <HDFS_OUTPUT_DIRECTORY>
Command: hadoop jar firstTutorial.jar WordCount /WordCountTutorial/Input /WordCountTutorial/Output (drag and drop the jar file to fill in its path)
The program should then execute successfully.
Step 20: To view the output:
Usage: hadoop dfs -cat <HDFS_OUTPUT_DIRECTORY>/*
Command: hadoop dfs -cat /WordCountTutorial/Output/*
SOURCE CODE
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
public static void main(String[] args) throws Exception
{
Configuration c=new Configuration();
String[] files=new GenericOptionsParser(c,args).getRemainingArgs();
Path input=new Path(files[0]);
Path output=new Path(files[1]);
Job j=new Job(c,"wordcount");
j.setJarByClass(WordCount.class);
j.setMapperClass(MapForWordCount.class);
j.setReducerClass(ReduceForWordCount.class);
j.setOutputKeyClass(Text.class);
j.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(j, input);
FileOutputFormat.setOutputPath(j, output);
System.exit(j.waitForCompletion(true)?0:1);
}
public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
public void map(LongWritable key, Text value, Context con) throws IOException, InterruptedException
{
String line = value.toString();
String[] words=line.split(",");
for(String word: words )
{
Text outputKey = new Text(word.toUpperCase().trim());
IntWritable outputValue = new IntWritable(1);
con.write(outputKey, outputValue);
}
}
}
public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable>
{
public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException
{
int sum = 0;
for(IntWritable value : values)
{
sum += value.get();
}
con.write(word, new IntWritable(sum));
}
}
}
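To trace the logic with the sample input.txt suggested in Step 6: the mapper emits (APPLE,1), (BANANA,1), (APPLE,1) for the first line and (BANANA,1), (CHERRY,1) for the second; the reducer sums the values per key, so the final output (one word and its count per line) would read:
APPLE 2
BANANA 2
CHERRY 1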
OUTPUT
RESULT
Thus the program for word count has been developed and executed successfully.