CCS335 Lab Manual
PREPARED BY
LIST OF EXPERIMENTS
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not
present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual
machine.
7. Install Hadoop single node cluster and run simple applications like word count.
TABLE OF CONTENTS
Aim:
To install VirtualBox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.
PROCEDURE:
6. Once the installation is complete, the VirtualBox icon appears on the desktop screen.
APPLICATIONS:
There are various applications of cloud computing in today's networked world. Many search engines
and social websites use the concept of cloud computing, such as www.amazon.com, hotmail.com,
facebook.com, linkedin.com, etc. The advantages of cloud computing with respect to scalability include
reduced risk, low-cost testing, the ability to segment the customer base, and auto-scaling based on
application load.
RESULT:
Thus virtual machines of different configurations were created and run successfully.
EX.NO.: 2
DATE:
Install a C compiler in the virtual machine created using VirtualBox and execute simple programs
Aim:
To install a C compiler in the virtual machine created using VirtualBox and
execute simple programs.
PROCEDURE:
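The procedure in the original manual is a sequence of installer screenshots. As a sketch of the same steps in script form (assuming a Debian/Ubuntu guest where gcc is provided by the build-essential package; the file names are illustrative), a C program can be written, compiled, and run like this:

```python
import os
import shutil
import subprocess
import tempfile

C_SOURCE = r'''
#include <stdio.h>
int main(void) {
    printf("Hello from the guest VM!\n");
    return 0;
}
'''

def build_and_run():
    """Write hello.c, compile it with gcc, and return the program's output."""
    if shutil.which("gcc") is None:
        # On Debian/Ubuntu guests: sudo apt-get install build-essential
        raise RuntimeError("gcc not found; install a C compiler first")
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "hello.c")
        exe = os.path.join(tmp, "hello")
        with open(src, "w") as f:
            f.write(C_SOURCE)
        # Equivalent of: gcc hello.c -o hello
        subprocess.run(["gcc", src, "-o", exe], check=True)
        # Equivalent of: ./hello
        return subprocess.run([exe], capture_output=True, text=True).stdout

if __name__ == "__main__":
    if shutil.which("gcc"):
        print(build_and_run(), end="")
```

In the lab itself you would simply run the two shell commands shown in the comments inside the guest's terminal.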
APPLICATIONS:
Programs compiled inside the virtual machine run just as they would on a physical machine, which is the basis for running them in a grid environment.
RESULT:
Thus a C compiler was installed in the virtual machine and simple programs were executed successfully.
Aim:
To install Google App Engine, create a hello world app, and build other simple web applications using
Python/Java.
Procedure:
Figure – Deselect the “Google Web ToolKit“, and link your GAE Java SDK via the “configure SDK” link.
Click Finish; the Google Plugin for Eclipse will generate a sample project automatically.
3. Hello World
Review the generated project directory.
HelloWorld/
  src/
    ...Java source code...
    META-INF/
      ...other configuration...
  war/
    ...JSPs, images, data files...
    WEB-INF/
      ...app configuration...
      lib/
        ...JARs for libraries...
      classes/
        ...compiled classes...
The extra file here is appengine-web.xml; Google App Engine needs it to run and deploy the application.
File : appengine-web.xml
</appengine-web-app>
4. Run it locally
Right click on the project and run as “Web Application“.
Eclipse console :
//...
INFO: The server is running at http://localhost:8888/
30 Mac 2012 11:13:01 PM com.google.appengine.tools.development.DevAppServerImpl start
INFO: The admin console is running at http://localhost:8888/_ah/admin
Access the URL http://localhost:8888/ to see the output.
In this demonstration, I created an application ID named "mkyong123" and put it in appengine-web.xml.
File : appengine-web.xml
</appengine-web-app>
Figure 1.2 – Sign in with your Google account and click on the Deploy button.
Figure 1.3 – If everything is fine, the hello world web application will be deployed to this URL –
http://mkyong123.appspot.com/
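The walkthrough above uses the Java SDK; for the Python route mentioned in the aim, the generated hello world boils down to a small WSGI handler. A minimal sketch (the function names are illustrative, and the port is chosen only to match the dev server port used above):

```python
from wsgiref.simple_server import make_server

def hello_app(environ, start_response):
    """Minimal WSGI 'hello world', similar to what the GAE SDK scaffolds."""
    body = b"Hello, World!"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

def serve(port=8888):
    """Serve the app locally; the GAE dev server in this lab also uses 8888."""
    with make_server("localhost", port, hello_app) as server:
        print("Serving on http://localhost:%d/" % port)
        server.serve_forever()
```

Calling serve() and browsing to http://localhost:8888/ shows the same "Hello, World!" response the deployed app returns.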
Result:
Thus Google App Engine was installed and a hello world web application was created and deployed successfully.
Aim:
To simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not
present in CloudSim.
Steps:
CloudSimExample1 finished!
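In CloudSim a new scheduling algorithm is normally added by extending DatacenterBroker and changing how cloudlets are bound to VMs. The core of one such policy, shortest-job-first with earliest-finish-time VM selection, is sketched below in Python purely to illustrate the logic; the class and field names are simplified stand-ins, not CloudSim's API.

```python
from dataclasses import dataclass

@dataclass
class Cloudlet:
    cid: int
    length: int   # workload size in million instructions (MI)

@dataclass
class Vm:
    vid: int
    mips: int     # processing speed in MIPS

def sjf_schedule(cloudlets, vms):
    """Shortest-job-first: dispatch the shortest cloudlets first,
    each to the VM that would finish it earliest."""
    finish_time = {vm.vid: 0.0 for vm in vms}   # projected busy-until time per VM
    speed = {vm.vid: vm.mips for vm in vms}
    plan = []
    for cl in sorted(cloudlets, key=lambda c: c.length):
        # Pick the VM with the earliest projected finish time for this cloudlet.
        vid = min(finish_time, key=lambda v: finish_time[v] + cl.length / speed[v])
        finish_time[vid] += cl.length / speed[vid]
        plan.append((cl.cid, vid))
    return plan
```

In the real experiment this binding loop would live in the broker's cloudlet-submission logic, with bindCloudletToVm applied to each (cloudlet, VM) pair of the plan.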
RESULT:
Thus a cloud scenario was simulated using CloudSim and a scheduling algorithm not present in CloudSim was run successfully.
Aim:
To use the GAE Launcher to launch web applications.
Steps:
Now you need to create a simple application. We could use the "+" option to have the
launcher make us an application – but instead we will do it by hand to get a better
sense of what is going on.
Make a folder for your Google App Engine applications. I am going to make the
folder on my Desktop called "apps" – the path to this folder is:
Once you have selected your application, press Run. After a few moments your application
will start and the launcher will show a little green icon next to your application. Then press
Browse to open a browser pointing at your application, which is running at
http://localhost:8080/
Paste http://localhost:8080 into your browser and you should see your application as
follows:
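For reference, the application being run here is the classic two-file app: app.yaml (routing configuration) and index.py (the handler). The sketch below reproduces their usual shape for this exercise; treat the exact contents, including the application name, as illustrative rather than authoritative.

```python
# app.yaml (configuration file in the same application folder):
#
#   application: ae-01-trivial
#   version: 1
#   runtime: python
#   api_version: 1
#   handlers:
#   - url: /.*
#     script: index.py

# index.py -- the dev server sends whatever is printed back to the
# browser on each GET request (CGI style).
def render():
    """Build the CGI-style response: a header, a blank line, then the body."""
    return "Content-Type: text/plain\n\nHello there Chuck\n"

print(render(), end="")
```

The blank line after the Content-Type header is what separates headers from the body; removing it is one of the classic mistakes this section's error-handling discussion is about.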
Just for fun, edit index.py to change the name "Chuck" to your own.
You can watch the internal log of the actions that the web server is performing when you
are interacting with your application in the browser. Select your application in the Launcher
and press the Logs button to bring up a log window:
Each time you press Refresh in your browser, you can see it retrieving the output
with a GET request.
With two files to edit, there are two general categories of errors that you may encounter.
If you make a mistake in the app.yaml file, App Engine will not start and your launcher will show
a yellow icon near your application:
To get more detail on what is going wrong, take a look at the log for the application:
In this instance, the mistake is mis-indenting the last line in app.yaml (line 8).
If you make a syntax error in the index.py file, a Python traceback error will appear in your browser.
The error you need to see is likely to be in the last few lines of the output – in this case I made a
Python syntax error on line one of our one-line application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file, you must fix the mistake and attempt
to start the application again.
If you make a mistake in a file like index.py, you can simply fix the file and press refresh
in your browser – there is no need to restart the server.
Result:
Thus web applications were launched successfully using the GAE Launcher.
EX.NO:6 Find a procedure to transfer the files from one virtual machine
to another virtual machine.
DATE:
Aim:
To find a procedure to transfer files from one virtual machine
to another virtual machine.
Steps:
1. You can copy a few (or more) lines with the copy & paste mechanism.
For this you need to share the clipboard between the host OS and the guest OSes by installing
Guest Additions on both virtual machines (setting the clipboard to bidirectional and restarting them).
You copy from the first guest OS into the clipboard that is shared with the host OS.
Then you paste from the host OS into the second guest OS.
2. You can enable drag and drop too with the same method (click on the machine:
Settings, General, Advanced, Drag and Drop: set to Bidirectional).
3. You can have common Shared Folders on both virtual machines and use one of
the shared directories as a buffer for copying.
Installing Guest Additions also gives you the possibility to set up Shared Folders. As soon as you put
a file in a shared folder from the host OS or from a guest OS, it is immediately visible to the
other. (Keep in mind that date/time problems can arise for the files when the
virtual machines have different clock settings.)
If you share the same folder on several machines, you can exchange files directly by copying
them into this folder.
4. You can use the usual methods for copying files between two different computers with a
client-server application (e.g. scp with sshd active for Linux, or WinSCP; information about
SSH servers is available online).
You need an active server (sshd) on the receiving machine and a client on the
sending machine. Of course you need to have the authorization set up (via password
or, better, via an automatic authentication method).
Note: many Linux/Ubuntu distributions install sshd by default; you can see if it is running
with pgrep sshd from a shell. You can install it with sudo apt-get install openssh-server.
5. You can mount part of the file system of one virtual machine on the other via NFS or
SSHFS, or you can share files and directories with Samba. You may find the article
Sharing files between guest and host without VirtualBox shared folders, with detailed
step-by-step instructions, interesting.
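Whichever of the methods above you use, it is good practice to verify that a file arrived intact by comparing checksums on the two machines. A minimal Python sketch (the file name in the usage comment is illustrative):

```python
import hashlib

def sha256sum(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks
    so that large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Run the same computation on both VMs after the transfer; matching
# digests mean the copy is intact, e.g.:
#   print(sha256sum("payload.bin"))
```

The same check is available from the shell on most Linux guests via sha256sum, so the two sides do not even need to use the same tool.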
You should remember that you are dealing with a small network of machines with different
operating systems, and in particular:
Each virtual machine has its own operating system running and acts like a
physical machine.
Each virtual machine is an instance of a program owned by a user in the hosting
operating system and is subject to that user's restrictions in the hosting OS.
E.g., say Hastur and Meow are users of the hosting machine, but they have not
allowed each other to see their directories (no read/write/execute authorization). When each
of them runs a virtual machine, for the hosting OS those virtual machines are two normal
programs owned by Hastur and Meow and cannot see the private directories of the other
user. This is a restriction due to the hosting OS. It is easy to overcome: it is enough to grant
read/write/execute authorization on a directory, or to choose a different directory in which
Downloaded by Bike Lover ([email protected])
lOMoARcPSD|48853542
PROCEDURE:
Steps:
1. Open Browser, type localhost:9869
2. Login using username: oneadmin, password: opennebula
3. Then follow the steps to migrate VMs
a. Click on infrastructure
b. Select clusters and enter the cluster name
c. Then select host tab, and select all host
d. Then select Vnets tab, and select all vnet
e. Then select datastores tab, and select all datastores
f. And then choose host under infrastructure tab
g. Click on + symbol to add new host, name the host then click on create.
4. On Instances, select the VMs to migrate, then follow these steps:
a. Click on the 8th icon; a drop-down list is displayed.
b. Select Migrate; a popup window is displayed.
c. Select the target host to migrate to, then click Migrate.
Before migration
Host:SACET
Host:one-sandbox
After Migration:
Host:one-sandbox
Host:SACET
APPLICATIONS:
Easily migrate virtual machines from one PC to another.
Result:
Thus the file transfer between virtual machines was completed successfully.
Aim:
To install a Hadoop single-node cluster and run simple
applications like word count.
Steps:
Install Hadoop
Step 1: Download the Java 8 package and save the file in your home directory.
Step 2: Extract the Java Tar File.
Command: tar -xvf jdk-8u101-linux-i586.tar.gz
Step 5: Add the Hadoop and Java paths in the bash file (.bashrc). Open .bashrc:
Command: vi .bashrc
For applying all these changes to the current Terminal, execute the source command.
Command: source .bashrc
To make sure that Java and Hadoop have been properly installed on your system and can be
accessed through the Terminal, execute the java -version and hadoop version commands.
Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls
All the Hadoop configuration files are located in hadoop-2.7.3/etc/hadoop directory as you can
see in the snapshot below:
Step 7: Open core-site.xml and edit the property mentioned below inside configuration tag:
core-site.xml informs Hadoop daemon where NameNode runs in the cluster. It contains
configuration settings of Hadoop core such as I/O settings that are common to HDFS &
MapReduce.
Command: vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Step 8: Edit hdfs-site.xml and set the properties mentioned below inside the
configuration tag:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permission</name>
    <value>false</value>
  </property>
</configuration>
Step 9: Edit the mapred-site.xml file and edit the property mentioned below
In some cases the mapred-site.xml file is not available, so we have to create it
from the mapred-site.xml template.
Command: vi mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Step 10: Edit yarn-site.xml and edit the property mentioned below inside
configuration tag:
yarn-site.xml contains configuration settings for the ResourceManager and NodeManager, such as
application memory management sizes, shuffle handling for MapReduce, etc.
Command: vi yarn-site.xml
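The manual does not show the body of this file; for a single-node setup the minimal configuration commonly used with Hadoop 2.x is the shuffle auxiliary service shown below (verify the property names against your distribution's documentation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```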
hadoop-env.sh contains the environment variables that are used in the script to run Hadoop
like Java home path, etc.
Command: vi hadoop-env.sh
Command: cd
Command: cd hadoop-2.7.3
This formats the HDFS via the NameNode. This command is executed only the first time.
Formatting the file system means initializing the directory specified by the dfs.name.dir
variable.
Never format a running Hadoop file system; you will lose all the data stored in
HDFS.
Step 13: Once the NameNode is formatted, go to hadoop-2.7.3/sbin directory and start all the daemons.
Command: cd hadoop-2.7.3/sbin
Either you can start all daemons with a single command or do it individually.
Command: ./start-all.sh
Start NameNode:
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files
stored in HDFS and tracks all the files stored across the cluster.
On startup, each DataNode connects to the NameNode and responds to requests from
the NameNode for different operations.
Start ResourceManager:
ResourceManager is the master that arbitrates all the available cluster resources and
thus helps in managing the distributed applications running on the YARN system.
Its job is to manage each NodeManager and each application's
ApplicationMaster.
Start NodeManager:
The NodeManager on each machine is the framework agent responsible for
managing containers, monitoring their resource usage, and reporting the same to the
ResourceManager.
Start JobHistoryServer:
JobHistoryServer is responsible for servicing all job-history-related requests from clients.
Step 14: To check that all the Hadoop services are up and running, run the
below command.
Command: jps
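The word count job itself is normally launched with the examples JAR bundled with Hadoop (e.g. hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount <input> <output>; check the exact path in your installation). The map/reduce logic behind it can be sketched in plain Python, for instance to understand the job before running it on the cluster; the function names here are illustrative:

```python
from collections import Counter

def map_phase(lines):
    """Mapper: emit a (word, 1) pair for every word, as Hadoop's
    TokenizerMapper does in the bundled WordCount example."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reducer: sum the counts for each word, as IntSumReducer does."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

def word_count(lines):
    """Run the two phases over an iterable of text lines."""
    return reduce_phase(map_phase(lines))
```

On a real cluster the mapper and reducer run on different nodes and the framework groups the (word, 1) pairs by key between the two phases; the local sketch collapses that shuffle step into a single dictionary.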
Result:
Thus the Hadoop single-node cluster was installed and simple applications were executed successfully.
AIM:
To write a program to run a container from Docker hub.
PROCEDURE:
Run a container from Docker Hub
Run docker -h:
$ docker -h
Flag shorthand -h has been deprecated, please use --help
Usage:
...
Management Commands:
builder Manage builds
config Manage Docker configs
container Manage containers
engine Manage the docker engine
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
The Docker command line can be used to manage several features of the Docker Engine. In this lab, we will
mainly focus on the container command.
If podman is installed, you can run the alternative command for comparison:
sudo podman -h
docker version
Client:
Version: 19.03.6
...
We are going to use the Docker CLI to run our first container. Open a terminal.
KiB Mem : 2046768 total, 173308 free, 117248 used, 1756212 buff/cache
KiB Swap: 1048572 total, 1048572 free, 0 used. 1548356 avail Mem
# Lots of processes!
docker ps -a
5e1bf0e6b926bd73a66f98b3cbe23d04189c16a43d55dd46b8486359f6fdf048
Nginx is a lightweight web server. You can access it on port 8080 on your localhost.
Access the nginx server on localhost:8080:
curl localhost:8080
This will return the HTML home page of Nginx:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
If you are using play-with-docker, look for the 8080 link near the top of the page; if you run a Docker
client with access to a local browser, open localhost:8080 there.
Run a MongoDB server
Now, run a MongoDB server. We will use the official MongoDB image from Docker Hub. Instead of
using the latest tag (which is the default if no tag is specified), we will use a specific version of the
mongo image.
$ docker container run --detach --publish 8081:27017 --name mongo mongo:4.4
Unable to find image 'mongo:4.4' locally
4.4: Pulling from library/mongo
d13d02fa248d: Already exists
bc8e2652ce92: Pull complete
3cc856886986: Pull complete
c319e9ec4517: Pull complete
b4cbf8808f94: Pull complete
cb98a53e6676: Pull complete
f0485050cd8a: Pull complete
ac36cdc414b3: Pull complete
61814e3c487b: Pull complete
523a9f1da6b9: Pull complete
3b4beaef77a2: Pull complete
Digest: sha256:d13c897516e497e898c229e2467f4953314b63e48d4990d3215d876ef9d1fc7c
Status: Downloaded newer image for mongo:4.4
d8f614a4969fb1229f538e171850512f10f490cb1a96fca27e4aa89ac082eba5
Access localhost:8081 to see some output from mongo.
curl localhost:8081
which will return a warning from MongoDB,
It looks like you are trying to access MongoDB over HTTP on the native driver port. If
you are using play-with-docker, look for the 8080 link near the top of the page.
d6777df89fea nginx "nginx -g 'daemon ..." Less than a second ago Up 2 seconds 0.0.0.0:8080->80/tcp nginx
ead80a0db505 mongo "docker-entrypoint..." 17 seconds ago Up 19 seconds 0.0.0.0:8081->27017/tcp mongo
af549dccd5cf ubuntu "top" 5 minutes ago Up 5 minutes priceless_kepler
Step 3: Clean Up
First get a list of the containers running using docker container ls.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6777df89fea nginx "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes 0.0.0.0:8080->80/tcp nginx
ead80a0db505 mongo "docker-entrypoint..." 3 minutes ago Up 3 minutes 0.0.0.0:8081->27017/tcp mongo
af549dccd5cf ubuntu "top" 8 minutes ago Up 8 minutes priceless_kepler
Next, run docker container stop [container id] for each container in the list. You can also use the
names of the containers that you specified before.
$ docker container stop d67 ead af5
d67
ead
af5
1. Remove the stopped containers
docker system prune is a really handy command to clean up your system. It will remove any
stopped containers, unused volumes and networks, and dangling images.
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Deleted Containers:
7872fd96ea4695795c41150a06067d605f69702dbcb9ce49492c9029f0e1b44b
60abd5ee65b1e2732ddc02b971a86e22de1c1c446dab165462a08b037ef7835c
31617fdd8e5f584c51ce182757e24a1c9620257027665c20be75aa3ab6591740
etc.), communication between different entities, implementation of broker policies, etc. This toolkit
provides:
Support for modeling and simulation of large-scale computing environments such as federated cloud
data centers and virtualized server hosts, with customizable policies for provisioning host resources
to virtual machines, and energy-aware computational resources.
A self-contained platform for modeling a cloud's service brokers and its provisioning and
allocation policies.
It supports the simulation of network connections among simulated system elements.
Support for simulation of federated cloud environments that inter-network resources from both
private and public domains.
Availability of a virtualization engine that aids in the creation and management of
multiple independent and co-hosted virtual services on a data center node.
Flexibility to switch between space-shared and time-shared allocation of processing cores
to virtualized services.