
EX NO: 1

DATE:
Install VirtualBox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.

Aim:
To install VirtualBox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.

Procedure:

Steps to install VirtualBox:


1. Download the VirtualBox installer (.exe), run it, and click the Next button.
2. Click the Next button.
3. Click the Next button.
4. Click the Yes button.
5. Click the Install button.
6. When the installation is complete, the VirtualBox icon appears on the desktop screen.
Steps to import the OpenNebula sandbox:
1. Open VirtualBox.
2. Select File > Import Appliance.
3. Browse to the OpenNebula-Sandbox-5.0.ova file.
4. Go to Settings, select USB and choose USB 1.1.
5. Start the OpenNebula virtual machine.
6. Log in with username: root, password: opennebula.
Steps to create a virtual machine through OpenNebula:
1. Open a browser and type localhost:9869.
2. Log in with username: oneadmin, password: opennebula.
3. Click on Instances, select VMs, then follow these steps to create a virtual machine:
a. Click the + symbol.
b. Select the user oneadmin.
c. Enter the VM name, number of instances and CPU.
d. Click the Create button.
e. Repeat steps c and d to create more than one VM.
Applications:
There are various applications of cloud computing in today's networked world. Many search engines and social websites use the concept of cloud computing, for example www.amazon.com, hotmail.com, facebook.com, linkedin.com, etc. The advantages of cloud computing with respect to scalability include reduced risk, low-cost testing, the ability to segment the customer base, and auto-scaling based on application load.

Result:
EX NO: 2
DATE:
Install a C compiler in the virtual machine created using VirtualBox and execute simple programs.

Aim:
To install a C compiler in the virtual machine created using VirtualBox and execute simple programs.

Procedure:

Steps to import the .ova file:

1. Open VirtualBox.
2. Select File > Import Appliance.
3. Browse to the ubuntu_gt6.ova file.
4. Go to Settings, select USB and choose USB 1.1.
5. Start the ubuntu_gt6 virtual machine.
6. Log in with username: dinesh, password: 99425.
Steps to run a C program:

1. Open the terminal.
2. Type cd /opt/axis2/axis2-1.7.3/bin and press Enter.
3. Type gedit hello.c and enter the program.
4. Type gcc hello.c to compile it.
5. Type ./a.out to run it.

In detail:

1. Type cd /opt/axis2/axis2-1.7.3/bin and press Enter.
2. Type gedit first.c.
3. Type the C program (a sample hello-world program is shown below).
4. Run the C program.
5. Display the output.
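The lab record does not reproduce the program text; a minimal hello.c of the kind compiled above might look like this (the printed message is an assumption, any short program will do):

/* hello.c - minimal C program used to verify that gcc works in the VM */
#include <stdio.h>

int main(void)
{
    printf("Hello, world\n");   /* any short message will do */
    return 0;
}

Compile with gcc hello.c and run with ./a.out, exactly as in the steps above.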

Applications:
Any C program can be compiled and run in this way inside a virtual machine, just as in a grid environment.

Result:
Click Finish; the Google Plugin for Eclipse will generate a sample project automatically.

3. Hello World
Review the generated project directory.
To deploy, see the following steps:

Figure 1.1 – Click on the GAE deploy button on the toolbar.

Figure 1.2 – Sign in with your Google account.

Once you have selected your application, press Run. After a few moments your application will start and the Launcher will show a little green icon next to your application. Then press Browse to open a browser pointing at your application, which is running at http://localhost:8080/

Paste http://localhost:8080 into your browser and you should see your application as follows:

Just for fun, edit the index.py to change the name “Chuck” to your own name
and press Refresh in the browser to verify your updates.

Watching the Log

You can watch the internal log of the actions that the web server is performing
when you are interacting with your application in the browser. Select your
application in the Launcher and press the Logs button to bring up a log window:

Each time you press Refresh in your browser, you can see it retrieving the output with a GET request.
Dealing With Errors

With two files to edit, there are two general categories of errors that you may encounter. If you make a mistake in the app.yaml file, App Engine will not start and your Launcher will show a yellow icon near your application:

To get more detail on what is going wrong, take a look at the log for the application:
In this instance, the mistake is mis-indenting the last line in app.yaml (line 8).
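For orientation, an App Engine Python app.yaml of the kind this tutorial edits has roughly the following shape (the application id and values shown here are assumptions, not the tutorial's exact file); the final script line is the one whose indentation matters:

application: yourappid
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: index.py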
If you make a syntax error in the index.py file, a Python traceback will appear in your browser.

The error you need to see is likely to be in the last few lines of the output; in this case I made a Python syntax error on line one of our one-line application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file, you must fix the mistake and attempt to start the application again.
If you make a mistake in a file like index.py, you can simply fix the file and press Refresh in your browser; there is no need to restart the server.
Shutting Down the Server
To shut down the server, use the Launcher, select your application and press the
Stop button.

Result:
Before migration:
Host: SACET
Host: one-sandbox

After migration:
Host: one-sandbox
Host: SACET

APPLICATIONS:
Easily migrate your virtual machine from one PC to another.

Result:
EX NO: 7
DATE:
Find a procedure to launch a virtual machine using TryStack (online OpenStack demo version).

Aim:
To find a procedure to launch a virtual machine using TryStack.
Steps:

OpenStack is an open-source cloud computing platform. It is primarily used for deploying an infrastructure as a service (IaaS) solution like Amazon Web Services (AWS). In other words, you can build your own AWS by using OpenStack. If you want to try out OpenStack, TryStack is the easiest and free way to do it.
In order to try OpenStack on TryStack, you must register by joining the TryStack Facebook group. Acceptance into the group takes a couple of days because it is approved manually. After you have been accepted into the TryStack group, you can log in to TryStack.

TryStack.org Homepage

I assume that you have already joined the Facebook group and logged in to the dashboard. After you log in to TryStack, you will see the Compute Dashboard:
OpenStack Compute Dashboard
Overview: What will we do?

In this post, I will show you how to run an OpenStack instance. The instance will be accessible through the internet (it will have a public IP address). The final topology will look like:
EX NO: 8
DATE:
Install a Hadoop single-node cluster and run simple applications like wordcount.

Aim:
To install a Hadoop single-node cluster and run simple applications like wordcount.

Steps:

Install Hadoop

Step 1: Download the Java 8 package (jdk-8u101-linux-i586.tar.gz) and save the file in your home directory.

Step 2: Extract the Java Tar File.

Command: tar -xvf jdk-8u101-linux-i586.tar.gz

Fig: Hadoop Installation – Extracting Java Files


Step 3: Download the Hadoop 2.7.3 Package.

Command: wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Fig: Hadoop Installation – Downloading Hadoop

Step 4: Extract the Hadoop tar File.

Command: tar -xvf hadoop-2.7.3.tar.gz


Step 5: Add the Hadoop and Java paths in the bash file (.bashrc). Open the .bashrc file and add the Hadoop and Java paths as shown below.

Command: vi .bashrc

Fig: Hadoop Installation – Setting Environment Variable
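The figure is not reproduced here; the lines typically added to .bashrc look like the following (the exact paths are assumptions and depend on where Java and Hadoop were extracted, here the home directory):

# Assumed extraction locations in the home directory
export JAVA_HOME=$HOME/jdk1.8.0_101
export HADOOP_HOME=$HOME/hadoop-2.7.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin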


Then, save the bash file and close it.

To apply all these changes to the current terminal, execute the source command.
Command: source .bashrc

Fig: Hadoop Installation – Refreshing environment variables

To make sure that Java and Hadoop have been properly installed on your system and can be
accessed through the Terminal, execute the java -version and hadoop version commands.

Command: java -version


Fig: Hadoop Installation – Checking Java Version
Command: hadoop version

Fig: Hadoop Installation – Checking Hadoop Version

Step 6: Edit the Hadoop Configuration files.

Command: cd hadoop-2.7.3/etc/hadoop/

Command: ls

All the Hadoop configuration files are located in the hadoop-2.7.3/etc/hadoop directory, as you can see in the snapshot below:

Fig: Hadoop Installation – Hadoop Configuration Files


Step 7: Open core-site.xml and add the property mentioned below inside the configuration tag:

core-site.xml informs the Hadoop daemons where the NameNode runs in the cluster. It contains configuration settings of the Hadoop core, such as I/O settings that are common to HDFS and MapReduce.

Command: vi core-site.xml

Fig: Hadoop Installation – Configuring core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Step 8: Edit hdfs-site.xml and add the property mentioned below inside the configuration tag:

hdfs-site.xml contains configuration settings of the HDFS daemons (i.e. the NameNode, DataNode and Secondary NameNode). It also includes the replication factor and block size of HDFS.

Command: vi hdfs-site.xml
Fig: Hadoop Installation – Configuring hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permission</name>
    <value>false</value>
  </property>
</configuration>

Step 9: Edit the mapred-site.xml file and add the property mentioned below inside the configuration tag:

mapred-site.xml contains configuration settings of the MapReduce application, like the number of JVMs that can run in parallel, the size of the mapper and reducer processes, the CPU cores available to a process, etc.

In some cases, the mapred-site.xml file is not available, so we have to create it from the mapred-site.xml.template file.

Command: cp mapred-site.xml.template mapred-site.xml

Command: vi mapred-site.xml

Fig: Hadoop Installation – Configuring mapred-site.xml


<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Step 10: Edit yarn-site.xml and add the properties mentioned below inside the configuration tag:

yarn-site.xml contains configuration settings of the ResourceManager and NodeManager, like the application memory management size, the operations needed on programs and algorithms, etc.

Command: vi yarn-site.xml

Fig: Hadoop Installation – Configuring yarn-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Step 11: Edit hadoop-env.sh and add the Java path as mentioned below:
Command: vi hadoop-env.sh
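The only change needed in hadoop-env.sh is the JAVA_HOME line; a sketch, assuming Java was extracted into the home directory as in Step 2, is:

export JAVA_HOME=/home/<user>/jdk1.8.0_101   # replace <user> and the path with your actual Java location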

Fig: Hadoop Installation – Configuring hadoop-env.sh

Step 12: Go to the Hadoop home directory and format the NameNode.

Command: cd

Command: cd hadoop-2.7.3

Command: bin/hadoop namenode -format

Fig: Hadoop Installation – Formatting NameNode

This formats HDFS via the NameNode. This command is executed only the first time. Formatting the file system means initializing the directory specified by the dfs.name.dir variable.

Never format an up-and-running Hadoop file system; you will lose all the data stored in HDFS.

Step 13: Once the NameNode is formatted, go to hadoop-2.7.3/sbin directory and start all the daemons.

Command: cd hadoop-2.7.3/sbin

You can either start all the daemons with a single command or start them individually.

Command: ./start-all.sh

The above command is a combination of start-dfs.sh, start-yarn.sh and mr-jobhistory-daemon.sh.

Or you can run all the services individually as below:


Start NameNode:

The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files stored in HDFS and tracks all the files stored across the cluster.

Command: ./hadoop-daemon.sh start namenode

Start DataNode:

On startup, a DataNode connects to the NameNode and responds to requests from the NameNode for different operations.

Command: ./hadoop-daemon.sh start datanode

Fig: Hadoop Installation – Starting DataNode

Start ResourceManager:

ResourceManager is the master that arbitrates all the available cluster resources and thus helps in managing the distributed applications running on the YARN system. Its job is to manage each NodeManager and each application's ApplicationMaster.

Command: ./yarn-daemon.sh start resourcemanager

Fig: Hadoop Installation – Starting ResourceManager

Start NodeManager:

The NodeManager is the per-machine framework agent responsible for managing containers, monitoring their resource usage and reporting the same to the ResourceManager.

Command: ./yarn-daemon.sh start nodemanager


Fig: Hadoop Installation – Starting NodeManager

Start JobHistoryServer:

JobHistoryServer is responsible for servicing all job-history-related requests from clients.

Command: ./mr-jobhistory-daemon.sh start historyserver

Step 14: To check that all the Hadoop services are up and running, run the below
command.

Command: jps

Fig: Hadoop Installation – Checking Daemons
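If all the daemons started successfully, the jps output lists entries like the following (the numeric process IDs differ on every run and are shown here as <pid>):

<pid> NameNode
<pid> DataNode
<pid> ResourceManager
<pid> NodeManager
<pid> JobHistoryServer
<pid> Jps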


Step 15: Now open the Mozilla browser and go
to localhost:50070/dfshealth.html to check the NameNode interface.

Fig: Hadoop Installation – Starting WebUI

Congratulations, you have successfully installed a single-node Hadoop cluster.
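The wordcount run itself is not shown above; a typical sequence from the hadoop-2.7.3 directory, assuming a local text file named input.txt (the file name and the /input and /output HDFS paths here are assumptions), is:

Command: bin/hdfs dfs -mkdir -p /input
Command: bin/hdfs dfs -put input.txt /input
Command: bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
Command: bin/hdfs dfs -cat /output/part-r-00000

The last command prints each word in the input file together with its count.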

Result:
