CloudStack Installation
Release 4.5.0
Contents

Choosing a Deployment Architecture

Quick Installation Guide

Source Installation
3.1 Building from Source

General Installation
4.1 Installation overview
4.2 Management Server Installation

Configuration
5.1 Configuring your CloudStack Installation

Hypervisor Setup
6.1 Host Hyper-V Installation
6.2 Host KVM Installation
6.3 Host LXC Installation
6.4 Host VMware vSphere Installation
6.5 Host Citrix XenServer Installation

Network Setup
7.1 Network Setup

Storage Setup
8.1 Storage Setup
8.2 Small-Scale Setup
8.3 Large-Scale Setup
8.4 Storage Architecture
8.5 Network Configuration For Storage

Optional Installation
9.1 Additional Installation Options
9.2 About Password and Key Encryption
This is the Apache CloudStack installation guide. For the Documentation home, the Administration Guide, or the Release Notes, please see:
Documentation home
Administration Guide
Release Notes
Note: In this guide we first go through some design and architectural choices to build your cloud. Then we dive into
a single node quick start guide to give you a feel for the installation process. The source installation steps are given in
the follow-on section for people who want to build their own packages. Otherwise you can use the general installation
which makes use of community maintained package repositories. The rest of the guide goes through the configuration
of the data-center and the setup of the network, storage and hypervisors.
CHAPTER 1
Choosing a Deployment Architecture
Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is replicated in real
time to the secondary Management Server installation in Data Center 2.
This diagram illustrates a setup with a separate storage network. Each server has four NICs, two connected to pod-level
network switches and two connected to storage network switches.
There are two ways to configure the storage network:
Bonded NICs and redundant switches can be deployed for NFS. In NFS deployments, redundant switches and bonded NICs still result in one network (one CIDR block + default gateway address).
iSCSI can take advantage of two separate storage networks (two CIDR blocks, each with its own default gateway). A multipath iSCSI client can fail over and load balance between separate storage networks.
This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC bonding configuration
involves only one network. MPIO involves two separate networks.
Feature                                             XenServer    vSphere       KVM - RHEL   LXC   HyperV   Bare Metal
Network Throttling                                  Yes          Yes           No           No    ?        N/A
Security groups in zones that use basic networking  Yes          No            Yes          Yes   ?        No
iSCSI                                               Yes          Yes           Yes          Yes   Yes      N/A
FibreChannel                                        Yes          Yes           Yes          Yes   Yes      N/A
Local Disk                                          Yes          Yes           Yes          Yes   Yes      Yes
HA                                                  Yes          Yes (Native)  Yes          ?     Yes      N/A
                           XenServer             vSphere        KVM - RHEL                  LXC   HyperV
Format for Disks           VHD                   VMDK           QCOW2                             VHD
iSCSI support              CLVM                  VMFS           Yes, via Shared Mountpoint  No    No
Fiber Channel support      Yes, via existing SR  VMFS           Yes, via Shared Mountpoint  No    No
NFS support                Yes                   Yes            Yes                         Yes   No
Local storage support      Yes                   Yes            Yes                         Yes   Yes
Storage over-provisioning  NFS and iSCSI         NFS and iSCSI  NFS                         No    No
SMB/CIFS                   No                    No             No                          No    Yes
XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not
support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a
result the CloudStack can still support storage over-provisioning by running on thin-provisioned storage volumes.
KVM supports Shared Mountpoint storage. A shared mountpoint is a file system path local to each server in a given
cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint
is assumed to be a clustered filesystem such as OCFS2. In this case the CloudStack does not attempt to mount or
unmount the storage as is done with NFS. CloudStack requires that the administrator ensure that the storage is available.
With NFS storage, CloudStack manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type.
Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is
enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual
Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration.
CloudStack supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers
in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first
approaches capacity.
be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) *
(per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation to the
cluster.
Warning: The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through
your hypervisor vendors support channel, and apply patches as soon as possible after they are released. CloudStack
will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date
with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to
date with patches.
CHAPTER 2
Quick Installation Guide
2.1.2 Environment
Before you install CloudStack, you need to prepare the environment. We will go over those steps now.
Operating System
Using the CentOS 6.5 x86_64 minimal install ISO, you'll need to install CentOS 6 on your hardware. The defaults
will generally be acceptable for this installation.
Once this installation is complete, you'll want to connect to your freshly installed machine via SSH as the root user.
Note that you should not allow root logins in a production environment, so be sure to turn off remote logins once you
have finished the installation and configuration.
Configuring the network
By default the network will not come up on your hardware and you will need to configure it to work in your environment. Since we specified that there will be no DHCP server in this environment we will be manually configuring your
network interface. We will assume, for the purposes of this exercise, that eth0 is the only network interface that will
be connected and used.
Connecting via the console you should login as root. Check the file /etc/sysconfig/network-scripts/ifcfg-eth0, it will
look like this by default:
DEVICE="eth0"
HWADDR="52:54:00:B9:A6:C0"
NM_CONTROLLED="yes"
ONBOOT="no"
Unfortunately, this configuration will not permit you to connect to the network, and is also unsuitable for our purposes
with CloudStack. We want to configure that file so that it specifies the IP address, netmask, etc., as shown in the
following example:
Note: You should not use the Hardware Address (aka the MAC address) from our example for your configuration. It
is network interface specific, so you should keep the address already provided in the HWADDR directive.
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
Note: IP Addressing - Throughout this document we are assuming that you will have a /24 network for your CloudStack implementation. This can be any RFC 1918 network. However, we are assuming that you will match the machine
address that we are using. Thus we may use 172.16.10.2 and because you might be using the 192.168.55.0/24 network
you would use 192.168.55.2
Now that we have the configuration files properly set up, we need to run a few commands to start up the network:
# chkconfig network on
# service network start
Hostname
CloudStack requires that the hostname be properly set. If you used the default options in the installation, then your
hostname is currently set to localhost.localdomain. To test this we will run:
# hostname --fqdn
To rectify this situation - we'll set the hostname by editing the /etc/hosts file so that it follows a similar format to this
example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.2 srvr1.cloud.priv
After you've modified that file, go ahead and restart the network using:
# service network restart
Now recheck with the hostname --fqdn command and ensure that it returns a FQDN response.
SELinux
At the moment, for CloudStack to work properly SELinux must be set to permissive. We want to both configure this
for future boots and modify it in the current running system.
To configure SELinux to be permissive in the running system we need to run the following command:
# setenforce 0
To ensure that it remains in that state we need to configure the file /etc/selinux/config to reflect the permissive state, as
shown in this example:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
NTP
NTP configuration is a necessity for keeping all of the clocks in your cloud servers in sync. However, NTP is not
installed by default. So we'll install and configure NTP at this stage. Installation is accomplished as follows:
# yum -y install ntp
The actual default configuration is fine for our purposes, so we merely need to enable it and set it to start on boot as
follows:
# chkconfig ntpd on
# service ntpd start
NFS
Our configuration is going to use NFS for both primary and secondary storage. We are going to go ahead and setup
two NFS shares for those purposes. We'll start out by installing nfs-utils.
# yum -y install nfs-utils
We now need to configure NFS to serve up two different shares. This is handled comparatively easily in the /etc/exports
file. You should ensure that it has the following content:
/secondary *(rw,async,no_root_squash,no_subtree_check)
/primary *(rw,async,no_root_squash,no_subtree_check)
You will note that we specified two directories that don't exist (yet) on the system. We'll go ahead and create those
directories and set permissions appropriately on them with the following commands:
# mkdir /primary
# mkdir /secondary
CentOS 6.x releases use NFSv4 by default. NFSv4 requires that the domain setting match on all clients. In our case,
the domain is cloud.priv, so ensure that the domain setting in /etc/idmapd.conf is uncommented and set as follows:
Domain = cloud.priv
Now you'll need to uncomment the configuration values in the file /etc/sysconfig/nfs:
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
Now we need to configure the firewall to permit incoming NFS connections. Edit the file /etc/sysconfig/iptables
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
Now you can restart the iptables service with the following command:
# service iptables restart
We now need to configure the nfs service to start on boot and actually start it on the host by executing the following
commands:
# service rpcbind start
# service nfs start
# chkconfig rpcbind on
# chkconfig nfs on
With MySQL now installed we need to make a few configuration changes to /etc/my.cnf. Specifically we need to add
the following options to the [mysqld] section:
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Now that MySQL is properly configured we can start it and configure it to start on boot as follows:
# service mysqld start
# chkconfig mysqld on
Installation
We are now going to install the management server. We do that by executing the following command:
# yum -y install cloudstack-management
With the application itself installed we can now set up the database. We'll do that with the following command and options:
# cloudstack-setup-databases cloud:password@localhost --deploy-as=root -i 172.16.10.2
When this process is finished, you should see a message like CloudStack has successfully initialized the database.
Now that the database has been created, we can take the final step in setting up the management server by issuing the
following command:
# cloudstack-setup-management
That concludes our setup of the management server. We still need to configure CloudStack, but we will do that after
we get our hypervisor set up.
KVM Configuration
We have two different parts of KVM to configure, libvirt, and QEMU.
QEMU Configuration
KVM configuration is relatively simple, with only a single item to change: the QEMU VNC configuration. This is
done by editing /etc/libvirt/qemu.conf and ensuring the following line is present and uncommented.
vnc_listen=0.0.0.0
Libvirt Configuration
CloudStack uses libvirt for managing virtual machines. Therefore it is vital that libvirt is configured correctly. Libvirt
is a dependency of cloud-agent and should already be installed.
1. In order to have live migration working libvirt has to listen for unsecured TCP connections. We also need to
turn off libvirt's attempt to use Multicast DNS advertising. Both of these settings are in /etc/libvirt/libvirtd.conf.
Set the following parameters:
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
2. Turning on listen_tcp in libvirtd.conf is not enough; we also need to modify /etc/sysconfig/libvirtd:
Uncomment the following line:
#LIBVIRTD_ARGS="--listen"
3. Restart libvirt
# service libvirtd restart
For the sake of completeness you should check if KVM is running OK on your machine:
# lsmod | grep kvm
kvm_intel              55496  0
kvm                   337772  1 kvm_intel
That concludes our installation and configuration of KVM, and we'll now move to using the CloudStack UI for the
actual configuration of our cloud.
2.1.5 Configuration
As we noted before, we will be using security groups to provide isolation, and by default that implies that we'll be using
a flat layer-2 network. The simplicity of our setup also means that we can use the quick installer.
UI Access
To get access to CloudStack's web interface, merely point your browser to http://172.16.10.2:8080/client. The default
username is admin, and the default password is password. You should see a splash screen that allows you to choose
several options for setting up CloudStack. You should choose the Continue with Basic Setup option.
You should now see a prompt requiring you to change the password for the admin user. Please do so.
Setting up a Zone
A zone is the largest organizational entity in CloudStack, and we'll be creating one now; this should be the screen that
you see in front of you. There are five pieces of information that we need:
1. Name - we will set this to the ever-descriptive Zone1 for our cloud.
2. Public DNS 1 - we will set this to 8.8.8.8 for our cloud.
3. Public DNS 2 - we will set this to 8.8.4.4 for our cloud.
4. Internal DNS1 - we will also set this to 8.8.8.8 for our cloud.
5. Internal DNS2 - we will also set this to 8.8.4.4 for our cloud.
Note: CloudStack distinguishes between internal and public DNS. Internal DNS is assumed to be capable of resolving
internal-only hostnames, such as your NFS server's DNS name. Public DNS is provided to the guest VMs to resolve
public IP addresses. You can enter the same DNS server for both types, but if you do so, you must make sure that
both internal and public IP addresses can route to the DNS server. In our specific case we will not use any names for
resources internally, and we have indeed set them to look to the same external resource so as to not add a nameserver
setup to our list of requirements.
Pod Configuration
Now that we've added a Zone, the next step that comes up is a prompt for information regarding a pod, which asks
for several items.
1. Name - We'll use Pod1 for our cloud.
2. Gateway - We'll use 172.16.10.1 as our gateway
3. Netmask - We'll use 255.255.255.0
4. Start/end reserved system IPs - we will use 172.16.10.10-172.16.10.20
5. Guest gateway - We'll use 172.16.10.1
6. Guest netmask - We'll use 255.255.255.0
7. Guest start/end IP - We'll use 172.16.10.30-172.16.10.200
Cluster
Now that we've added a Zone, we need only add a few more items for configuring the cluster.
1. Name - We'll use Cluster1
2. Hypervisor - Choose KVM
You should be prompted to add the first host to your cluster at this point. Only a few bits of information are needed.
1. Hostname - we'll use the IP address 172.16.10.2 since we didn't set up a DNS server.
2. Username - we'll use root
3. Password - enter the operating system password for the root user
Primary Storage
With your cluster now set up, you should be prompted for primary storage information. Choose NFS as the storage
type and then enter the following values in the fields:
1. Name - We'll use Primary1
2. Server - We'll be using the IP address 172.16.10.2
3. Path - We'll define /primary as the path we are using
Secondary Storage
If this is a new zone, you'll be prompted for secondary storage information - populate it as follows:
1. NFS server - We'll use the IP address 172.16.10.2
2. Path - We'll use /secondary
Now, click Launch and your cloud should begin setup - it may take several minutes depending on your internet
connection speed for setup to finalize.
That's it, you are done with installation of your Apache CloudStack cloud.
CHAPTER 3
Source Installation
GPG
The CloudStack project provides a detached GPG signature of the release. To check the signature, run the following
command:
$ gpg --verify apache-cloudstack-4.5.0-src.tar.bz2.asc
If the signature is valid you will see a line of output that contains Good signature.
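If gpg instead complains that the public key is not found, you will first need to import the project's KEYS file; a typical sequence, assuming you downloaded KEYS from the same mirror as the release, is:

$ gpg --import KEYS
$ gpg --verify apache-cloudstack-4.5.0-src.tar.bz2.asc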
MD5
In addition to the cryptographic signature, CloudStack has an MD5 checksum that you can use to verify that the download
matches the release. You can verify this hash by comparing it against the checksum file distributed with the release (see the example below).
If the comparison successfully completes you should see no output. If there is any output, then there is a difference
between the hash you generated locally and the hash that was pulled from the server.
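A minimal sketch of the MD5 check, assuming the .md5 checksum file from the download page has been saved next to the tarball (adjust the filename to whatever your mirror provides):

$ gpg --print-md MD5 apache-cloudstack-4.5.0-src.tar.bz2 | diff - apache-cloudstack-4.5.0-src.tar.bz2.md5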
SHA512
In addition to the MD5 hash, the CloudStack project provides a SHA512 cryptographic hash to aid in assurance of the
validity of the downloaded release. You can verify this hash by comparing it against the SHA512 checksum file distributed with the release (see the example below).
If the comparison successfully completes you should see no output. If there is any output, then there is a
difference between the hash you generated locally and the hash that was pulled from the server.
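The SHA512 check follows the same pattern, again assuming the checksum file sits next to the tarball:

$ gpg --print-md SHA512 apache-cloudstack-4.5.0-src.tar.bz2 | diff - apache-cloudstack-4.5.0-src.tar.bz2.sha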
7. genisoimage
8. rpmbuild or dpkg-dev
sudo apt-get update
sudo apt-get install python-software-properties
sudo apt-get update
sudo apt-get install ant debhelper openjdk-7-jdk tomcat6 libws-commons-util-java genisoimage python
While we have defined, and you have presumably already installed, the bootstrap prerequisites, there are a number of
build-time prerequisites that need to be resolved. CloudStack uses Maven for dependency resolution. You can resolve
the build-time dependencies for CloudStack by running:
$ mvn -P deps
Now that we have resolved the dependencies we can move on to building CloudStack and packaging it into DEBs
by issuing the following command.
$ dpkg-buildpackage -uc -us
This command will build the following debian packages. You should have all of the following:
cloudstack-common-4.5.0.amd64.deb
cloudstack-management-4.5.0.amd64.deb
cloudstack-agent-4.5.0.amd64.deb
cloudstack-usage-4.5.0.amd64.deb
cloudstack-awsapi-4.5.0.amd64.deb
cloudstack-cli-4.5.0.amd64.deb
cloudstack-docs-4.5.0.amd64.deb
The next step is to copy the DEBs to the directory where they can be served over HTTP. We'll use
/var/www/cloudstack/repo in the examples, but change the directory to whatever works for you.
$ sudo mkdir -p /var/www/cloudstack/repo/binary
$ sudo cp *.deb /var/www/cloudstack/repo/binary
$ cd /var/www/cloudstack/repo/binary
$ sudo sh -c 'dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz'
Note: You can safely ignore the warning about a missing override file.
Now you should have all of the DEB packages and Packages.gz in the binary directory and available over HTTP.
(You may want to use wget or curl to test this before moving on to the next step.)
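As a quick sanity check (server.url below is just the placeholder host name used throughout this section; substitute your own), a HEAD request against the package index should succeed:

$ curl -I http://server.url/cloudstack/repo/binary/Packages.gz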
Configuring your machines to use the APT repository
Now that we have created the repository, you need to configure your machine to make use of the APT repository. You
can do this by adding a repository file under /etc/apt/sources.list.d. Use your preferred editor to create
/etc/apt/sources.list.d/cloudstack.list with this line:
deb http://server.url/cloudstack/repo/binary ./
Now that you have the repository info in place, you'll want to run another update so that APT knows where to find the
CloudStack packages.
$ sudo apt-get update
Next, you'll need to install build-time dependencies for CloudStack with Maven. We're using Maven 3, so you'll want
to grab Maven 3.0.5 (Binary tar.gz) and uncompress it in your home directory (or whatever location you prefer):
$ cd ~
$ tar zxvf apache-maven-3.0.5-bin.tar.gz
$ export PATH=~/apache-maven-3.0.5/bin:$PATH
Maven also needs to know where Java is, and expects the JAVA_HOME environment variable to be set:
$ export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
You probably want to ensure that your environment variables will survive a logout/reboot. Be sure to update
~/.bashrc with the PATH and JAVA_HOME variables.
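For instance, appending the two exports used above to ~/.bashrc (the paths assume the Maven 3.0.5 unpack location and OpenJDK 7 install shown earlier):

$ echo 'export PATH=~/apache-maven-3.0.5/bin:$PATH' >> ~/.bashrc
$ echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64' >> ~/.bashrc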
Building RPMs for CloudStack is fairly simple. Assuming you already have the source downloaded and have uncompressed the tarball into a local directory, you're going to be able to generate packages in just a few minutes.
Note: Packaging has Changed. If you've created packages for CloudStack previously, you should be aware that the
process has changed considerably since the project has moved to using Apache Maven. Please be sure to follow the
steps in this section closely.
Generating RPMS
Now that we have the prerequisites and source, you will cd to the packaging/centos63/ directory and run the packaging script:
$ cd packaging/centos63
$ ./package.sh
That will run for a bit and then place the finished packages in dist/rpmbuild/RPMS/x86_64/.
You should see the following RPMs in that directory:
cloudstack-agent-4.5.0.el6.x86_64.rpm
cloudstack-awsapi-4.5.0.el6.x86_64.rpm
cloudstack-cli-4.5.0.el6.x86_64.rpm
cloudstack-common-4.5.0.el6.x86_64.rpm
cloudstack-docs-4.5.0.el6.x86_64.rpm
cloudstack-management-4.5.0.el6.x86_64.rpm
cloudstack-usage-4.5.0.el6.x86_64.rpm
While RPM is a useful packaging format, it is most easily consumed from Yum repositories over a network. The next
step is to create a Yum repo with the finished packages:
$ mkdir -p ~/tmp/repo
$ cd ../..
$ cp dist/rpmbuild/RPMS/x86_64/*rpm ~/tmp/repo/
$ createrepo ~/tmp/repo
The files and directories within ~/tmp/repo can now be uploaded to a web server and serve as a yum repository.
Configuring your systems to use your new yum repository
Now that your yum repository is populated with RPMs and metadata we need to configure the machines that need to
install CloudStack. Create a file named /etc/yum.repos.d/cloudstack.repo with this information:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://webserver.tld/path/to/repo
enabled=1
gpgcheck=0
Completing this step will allow you to easily install CloudStack on a number of machines across the network.
1. Once you've built CloudStack with the noredist profile, you can package it using the Building RPMs from
Source or Building DEB packages instructions.
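For reference, a noredist build is typically invoked along the following lines; the -Dnoredist property is assumed here to be the switch for that profile, so check the Building from Source section for the exact command used for your release:

$ mvn clean install -Dnoredist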
CHAPTER 4
General Installation
4.1.1 Introduction
Who Should Read This
For those who have already gone through a design phase and planned a more sophisticated deployment, or those who
are ready to start scaling up a trial installation. With the following procedures, you can start using the more powerful
features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as
load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware
vSphere.
Installation Steps
For anything more than a simple trial installation, you will need guidance for a variety of configuration choices. It is
strongly recommended that you read the following:
Choosing a Deployment Architecture
Choosing a Hypervisor: Supported Features
Network Setup
Storage Setup
Best Practices
1. Make sure you have the required hardware ready. See Minimum System Requirements
2. Install the Management Server (choose single-node or multi-node). See Management Server Installation
Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor:
Warning: Be sure you fulfill the additional hypervisor requirements and installation steps provided in this
Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements
for XenServer are listed under Citrix XenServer Installation.
This should return a fully qualified hostname such as management1.lab.example.org. If it does not, edit
/etc/hosts so that it does.
3. Make sure that the machine can reach the Internet.
ping www.cloudstack.org
5. Repeat all of these steps on every host where the Management Server will be installed.
There is an RPM package repository for CloudStack so you can easily install on RHEL-based platforms.
If you're using an RPM-based system, you'll want to add the Yum repository so that you can install CloudStack with
Yum.
Yum repository information is found under /etc/yum.repos.d. You'll see several .repo files in this directory,
each one denoting a specific repository.
To add the CloudStack repository, create /etc/yum.repos.d/cloudstack.repo and insert the following
information.
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.5/
enabled=1
gpgcheck=0
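Once the file is in place, you can optionally confirm that yum sees the new repository; this makes no changes to the system:

yum repolist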
You can add a DEB package repository to your apt sources with the following commands. Please note that only
packages for Ubuntu 12.04 LTS (precise) are being built at this time.
Use your preferred editor and open (or create) /etc/apt/sources.list.d/cloudstack.list. Add the
community provided repository to the file:
deb http://cloudstack.apt-get.eu/ubuntu precise 4.5
Your DEB package repository should now be configured and ready for use.
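As with any newly added APT source, refresh the package index before installing:

sudo apt-get update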
Install on CentOS/RHEL
yum install cloudstack-management
Install on Ubuntu
sudo apt-get install cloudstack-management
2. Open the MySQL configuration file. The configuration file is /etc/my.cnf or /etc/mysql/my.cnf,
depending on your OS.
Insert the following lines in the [mysqld] section.
You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied
by the number of Management Servers you are deploying. This example assumes one Management Server.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Note: You can also create a file /etc/mysql/conf.d/cloudstack.cnf and add these directives there.
Don't forget to add [mysqld] on the first line of the file.
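A minimal /etc/mysql/conf.d/cloudstack.cnf following that note would simply repeat the directives above under a [mysqld] header:

[mysqld]
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'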
3. Start or restart MySQL to put the new configuration into effect.
On RHEL/CentOS, MySQL doesn't automatically start after installation. Start it manually.
service mysqld start
5. CloudStack can be blocked by security mechanisms, such as SELinux. Disable SELinux to ensure that the
Agent has all the required permissions.
(b) Set the SELINUX variable in /etc/selinux/config to permissive. This ensures that the permissive setting will be maintained after a system reboot.
In RHEL or CentOS:
vi /etc/selinux/config
Change the SELINUX line to this:
SELINUX=permissive
(c) Set SELinux to permissive starting immediately, without requiring a system reboot.
setenforce permissive
6. Set up the database. The following command creates the cloud user on the database.
cloudstack-setup-databases cloud:<dbpassword>@localhost \
--deploy-as=root:<password> \
-e <encryption_type> \
-m <management_server_key> \
-k <database_key> \
-i <management_server_ip>
In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no
password although that is not recommended.
In deploy-as, specify the username and password of the user deploying the database. In the following
command, it is assumed the root user is deploying the database and creating the cloud user.
(Optional) For encryption_type, use file or web to indicate the technique used to pass in the database
encryption password. Default: file. See About Password and Key Encryption.
(Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack properties file. Default: password. It is highly recommended that you replace
this with a more secure value. See About Password and Key Encryption.
(Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in
the CloudStack database. Default: password. It is highly recommended that you replace this with a more
secure value. See About Password and Key Encryption.
(Optional) For management_server_ip, you may explicitly specify cluster management server node IP. If
not specified, the local IP address will be used.
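Putting the options together, a typical invocation with only the required pieces might look like the following; the passwords shown are placeholders, not recommendations:

cloudstack-setup-databases cloud:dbpassword@localhost \
--deploy-as=root:rootpassword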
When this script is finished, you should see a message like Successfully initialized the database.
Note: If the script is unable to connect to the MySQL database, check the localhost loopback address in
/etc/hosts. It should be pointing to the IPv4 loopback address 127.0.0.1 and not the IPv6 loopback
address ::1. Alternatively, reconfigure MySQL to bind to the IPv6 loopback interface.
7. If you are running the KVM hypervisor on the same machine with the Management Server, edit /etc/sudoers
and add the following line:
Defaults:cloud !requiretty
8. Now that the database is set up, you can finish configuring the OS for the Management Server. This command
will set up iptables, sudoers, and start the Management Server.
cloudstack-setup-management
You should get the output message CloudStack Management Server setup is done.
Install the Database on a Separate Node
This section describes how to install MySQL on a standalone machine, separate from the Management Server. This
technique is intended for a deployment that includes several Management Server nodes. If you have a single-node
Management Server deployment, you will typically use the same node for MySQL. See Install the Database on the
Management Server Node.
Note: The management server doesn't require a specific distribution for the MySQL node. You can use a distribution
or Operating System of your choice. Using the same distribution as the management server is recommended, but not
required. See Management Server, Database, and Storage System Requirements.
1. Install MySQL from the package repository from your distribution:
yum install mysql-server
sudo apt-get install mysql-server
2. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following lines in the [mysqld] section. You can put these lines below the datadir line. The max_connections parameter
should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes
two Management Servers.
Note: On Ubuntu, you can also create an /etc/mysql/conf.d/cloudstack.cnf file and add these directives there.
Don't forget to add [mysqld] on the first line of the file.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=700
log-bin=mysql-bin
binlog-format = 'ROW'
bind-address = 0.0.0.0
Warning: On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly
recommended that you set a root password as a security precaution. Run the following command to secure
your installation. You can answer Y to all questions except Disallow root login remotely?. Remote root
login is required to set up the databases.
mysql_secure_installation
5. If a firewall is present on the system, open TCP port 3306 so external MySQL connections can be established.
On Ubuntu, UFW is the default firewall. Open the port with this command:
ufw allow mysql
On RHEL/CentOS:
(a) Edit the /etc/sysconfig/iptables file and add the following line at the beginning of the INPUT chain.
-A INPUT -p tcp --dport 3306 -j ACCEPT
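After saving the file, reload the firewall so the new rule takes effect, following the same pattern used elsewhere in this guide:

service iptables restart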
When this script is finished, you should see a message like Successfully initialized the database.
8. Now that the database is set up, you can finish configuring the OS for the Management Server. This command
will set up iptables, sudoers, and start the Management Server.
cloudstack-setup-management
You should get the output message CloudStack Management Server setup is done.
vi /etc/exports
/nfs/share/secondary *(rw,async,no_root_squash,no_subtree_check)
4. On the management server, create a mount point for secondary storage. For example:
mkdir -p /mnt/secondary
5. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS
share paths below with your own.
mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
2. On the Management Server host, create two directories that you will use for primary and secondary storage. For
example:
mkdir -p /export/primary
mkdir -p /export/secondary
vi /etc/exports
/export *(rw,async,no_root_squash,no_subtree_check)
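To publish the new export without waiting for a reboot, you can then re-export the shares (exportfs is part of nfs-utils, which these steps assume is already installed):

exportfs -a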
Add the following lines at the beginning of the INPUT chain, where <NETWORK> is the network that you'll be using:
-A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 662 -j ACCEPT
8. If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the
hypervisor host and Management Server.
vi /etc/idmapd.conf
Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file
with your own domain. In the example below, the domain is company.com.
Domain = company.com
(c) Log back in to the hypervisor host and try to mount the /export directories. For example, substitute your
own management server name:
mkdir /primary
mount -t nfs <management-server-name>:/export/primary /primary
umount /primary
mkdir /secondary
mount -t nfs <management-server-name>:/export/secondary /secondary
umount /secondary
4. Configure the database client. Note the absence of the deploy-as argument in this case. (For more details about
the arguments to this command, see Install the Database on a Separate Node.)
cloudstack-setup-databases cloud:dbpassword@dbhost \
-e encryption_type \
-m management_server_key \
-k database_key \
-i management_server_ip
For XenServer:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt/secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-xen.vhd.bz2 \
-h xenserver \
-s <optional-management-server-secret-key> \
-F
For vSphere:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt/secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-vmware.ova \
-h vmware \
-s <optional-management-server-secret-key> \
-F
For KVM:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt/secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-kvm.qcow2.bz2 \
-h kvm \
-s <optional-management-server-secret-key> \
-F
For LXC:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt/secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-kvm.qcow2.bz2 \
-h lxc \
-s <optional-management-server-secret-key> \
-F
2. If you are using a separate NFS server, perform this step. If you are using the Management Server as the NFS
server, you MUST NOT perform this step.
When the script has finished, unmount secondary storage and remove the created directory.
umount /mnt/secondary
rmdir /mnt/secondary
CHAPTER 5
Configuration
2. By the end of the installation procedure, the Management Server should have been started. Be sure that the
Management Server installation was successful and complete.
3. Now add the new region to region 1 in CloudStack.
(a) Log in to CloudStack in the first region as root administrator (that is, log in to <region.1.IP.address>:8080/client).
(b) In the left navigation bar, click Regions.
(c) Click Add Region. In the dialog, fill in the following fields:
ID. A unique identifying number. Use the same number you set in the database during Management
Server installation in the new region; for example, 2.
Name. Give the new region a descriptive name.
Endpoint. The URL where you can log in to the Management Server in the new region. This has the
format <region.2.IP.address>:8080/client.
4. Now perform the same procedure in reverse. Log in to region 2, and add region 1.
5. Copy the account, user, and domain tables from the region 1 database to the region 2 database.
In the following commands, it is assumed that you have set the root password on the database, which is a
CloudStack recommended best practice. Substitute your own MySQL root password.
(a) First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
(b) Then run this command to put the data onto the region 2 database:
# mysql -u root -p<mysql_password> -h <region2_db_host> cloud < region1.sql
2. Once the Management Server is running, add your new region to all existing regions by repeatedly using the
Add Region button in the UI. For example, if you were adding region 3:
(a) Log in to CloudStack in the first region as root administrator (that is, log in to <region.1.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client.
(b) Log in to CloudStack in the second region as root administrator (that is, log in to <region.2.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client.
3. Repeat the procedure in reverse to add all existing regions to the new region. For example, for the third region,
add the other two existing regions:
(a) Log in to CloudStack in the third region as root administrator (that is, log in to <region.3.IP.address>:8080/client).
(b) Add a region with ID 1, the name of region 1, and the endpoint <region.1.IP.address>:8080/client.
(c) Add a region with ID 2, the name of region 2, and the endpoint <region.2.IP.address>:8080/client.
4. Copy the account, user, and domain tables from any existing region's database to the new region's database.
In the following commands, it is assumed that you have set the root password on the database, which is a
CloudStack recommended best practice. Substitute your own MySQL root password.
(a) First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
(b) Then run this command to put the data onto the new region's database. For example, for region 3:
# mysql -u root -p<mysql_password> -h <region3_db_host> cloud < region1.sql
Network Offering                                Description
DefaultSharedNetworkOfferingWithSGService       If you want to enable security groups for guest traffic isolation, choose this. (See Using Security Groups to Control Traffic to VMs.)
DefaultSharedNetworkOffering                    If you do not need security groups, choose this.
DefaultSharedNetscalerEIPandELBNetworkOffering  If you have installed a Citrix NetScaler appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load Balancing features, choose this. With the EIP and ELB features, a basic zone with security groups enabled can offer 1:1 static NAT and load balancing.
Network Domain. (Optional) If you want to assign a special domain name to the guest VM network,
specify the DNS suffix.
Public. A public zone is available to all users. A zone that is not public will be assigned to a particular
domain. Only users in that domain will be allowed to create guest VMs in this zone.
2. Choose which traffic types will be carried by the physical network.
The traffic types are management, public, guest, and storage traffic. For more information about the types, roll
over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with
some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also
change the network name if desired.
3. Assign a network traffic label to each traffic type on the physical network. These labels must match the labels
you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type
icon. A popup dialog appears where you can type the label, then click OK.
These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors,
the labels can be configured after the zone is created.
4. Click Next.
5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out.
Provide the requested details to set up the NetScaler, then click Next.
IP address. The NSIP (NetScaler IP) address of the NetScaler device.
Username/Password. The authentication credentials to access the device. CloudStack uses these credentials to access the device.
Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler
SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
Public interface. Interface of NetScaler that is configured to be part of the public network.
Private interface. Interface of NetScaler that is configured to be part of the private network.
Number of retries. Number of times to attempt a command on the device before considering the operation
failed. Default is 2.
Capacity. Number of guest networks/accounts that will share this NetScaler device.
Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated
is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT
capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the
following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click
Next.
Gateway. The gateway in use for these IP addresses.
Netmask. The netmask associated with this IP range.
VLAN. The VLAN that will be used for public traffic.
Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be
allocated for access to guest VMs.
7. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of
what a pod is, see About Pods.
To configure the first pod, enter the following, then click Next:
Pod Name. A name for the pod.
Reserved system gateway. The gateway for the hosts in that pod.
Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more
information, see System Reserved IP Addresses.
8. Configure the network for guest traffic. Provide the following, then click Next:
Guest gateway. The gateway that the guests should use.
Guest netmask. The netmask in use on the subnet the guests will use.
Guest start IP/End IP. Enter the first and last IP addresses that define a range that CloudStack can assign
to guests.
We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a
different subnet.
If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
9. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview
of what a cluster is, see About Clusters.
To configure the first cluster, enter the following, then click Next:
Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software
that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give
information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in
vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere.
Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
10. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview
of what a host is, see About Hosts.
Note: When you add a hypervisor host to CloudStack, the host must not have any VMs already running.
Before you can configure the host, you need to install the hypervisor software on the host. You will need to know
which version of the hypervisor software is supported by CloudStack and what additional configuration
is required to ensure the host will work with CloudStack. To find these installation details, see:
Citrix XenServer Installation and Configuration
VMware vSphere Installation and Configuration
KVM Installation and Configuration
To configure the first host, enter the following, then click Next:
Host Name. The DNS name or IP address of the host.
Username. The username is root.
Password. This is the password for the user named above (from your XenServer or KVM install).
Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example,
you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this
host to be used only for VMs with the high availability feature enabled. For more information, see
HA-Enabled Virtual Machines as well as HA for Hosts.
11. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers
later. For an overview of what primary storage is, see About Primary Storage.
To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining
fields in the screen vary depending on what you choose here.
10. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers
later. For an overview of what primary storage is, see About Primary Storage.
To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A
provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary
storage that has tags T1 and T2.
11. In a new zone, CloudStack adds the first secondary storage server for you. For an overview of what secondary
storage is, see About Secondary Storage.
Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and
installing the latest CloudStack System VM template. See Adding Secondary Storage :
NFS Server. The IP address of the server or fully qualified domain name of the server.
Path. The exported path from the server.
12. Click Launch.
7. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
8. Click OK.
Add Cluster: vSphere
Host management for vSphere is done through a combination of vCenter and the CloudStack admin UI. CloudStack
requires that all hosts be in a CloudStack cluster, but the cluster may consist of a single host. As an administrator you
must decide if you would like to use clusters of one host or of multiple hosts. Clusters of multiple hosts allow for
features like live migration. Clusters also require shared storage such as NFS or iSCSI.
For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to
CloudStack. Follow these requirements:
Do not put more than 8 hosts in a vSphere cluster
Make sure the hypervisor hosts do not have any VMs already running before you add them to CloudStack.
To add a vSphere cluster to CloudStack:
1. Create the cluster of hosts in vCenter. Follow the vCenter instructions to do this. You will create a cluster that
looks something like this in vCenter.
8. Provide the following information in the dialog. The fields below make reference to the values from vCenter.
Cluster Name: Enter the name of the cluster you created in vCenter. For example, cloud.cluster.2.2.1
vCenter Username: Enter the username that CloudStack should use to connect to vCenter. This user must
have all the administrative privileges.
CPU overcommit ratio: Enter the CPU overcommit ratio for the cluster. The value you enter determines
the CPU consumption of each VM in the selected cluster. By increasing the over-provisioning ratio, more
resource capacity will be used. If no value is specified, the value is defaulted to 1, which implies no
over-provisioning is done.
RAM overcommit ratio: Enter the RAM overcommit ratio for the cluster. The value you enter determines
the memory consumption of each VM in the selected cluster. By increasing the over-provisioning ratio,
more resource capacity will be used. If no value is specified, the value is defaulted to 1, which implies no
over-provisioning is done.
vCenter Host: Enter the hostname or IP address of the vCenter server.
vCenter Password: Enter the password for the user named above.
vCenter Datacenter: Enter the vCenter datacenter that the cluster is in. For example, cloud.dc.VM.
Override Public Traffic: Enable this option to override the zone-wide public traffic for the cluster you
are creating.
Public Traffic vSwitch Type: This option is displayed only if you enable the Override Public Traffic
option. Select a desirable switch. If the vmware.use.dvswitch global parameter is true, the default option
will be VMware vNetwork Distributed Virtual Switch.
If you have enabled Nexus dvSwitch in the environment, the following parameters for dvSwitch configuration are displayed:
Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.
Nexus dvSwitch Username: The username required to access the Nexus VSM appliance.
Nexus dvSwitch Password: The password associated with the username specified above.
Override Guest Traffic: Enable this option to override the zone-wide guest traffic for the cluster you are
creating.
Guest Traffic vSwitch Type: This option is displayed only if you enable the Override Guest Traffic option.
Select a desirable switch.
If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
If you have enabled Nexus dvSwitch in the environment, the following parameters for dvSwitch configuration are displayed:
Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.
Nexus dvSwitch Username: The username required to access the Nexus VSM appliance.
Nexus dvSwitch Password: The password associated with the username specified above.
There might be a slight delay while the cluster is provisioned. It will automatically display in the UI.
2. Now add the hypervisor host to CloudStack. The technique to use varies depending on the hypervisor.
Adding a Host (XenServer or KVM)
Adding a Host (vSphere)
Adding a Host (XenServer or KVM)
XenServer and KVM hosts can be added to a cluster at any time.
Requirements for XenServer and KVM Hosts
Warning: Make sure the hypervisor host does not have any VMs already running before you add it to CloudStack.
Configuration requirements:
Each cluster must contain only hosts with the identical hypervisor.
For XenServer, do not put more than 8 hosts in a cluster.
For KVM, do not put more than 16 hosts in a cluster.
For hardware requirements, see the installation section for your hypervisor in the CloudStack Installation Guide.
XenServer Host Additional Requirements
If network bonding is in use, the administrator must cable the new host identically to other hosts in the cluster.
For all additional hosts to be added to the cluster, run the following command. This will cause the host to join the
master in a XenServer pool.
# xe pool-join master-address=[master IP] master-username=root master-password=[your password]
Note: When copying and pasting a command, be sure the command has pasted as a single line before executing.
Some document viewers may introduce unwanted line breaks in copied text.
With all hosts added to the XenServer pool, run the cloud-setup-bonding script. This script will complete the configuration
and setup of the bonds on the new hosts in the cluster.
1. Copy the script from the Management Server in /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
2. Run the script:
# ./cloud-setup-bonding.sh
1. If you have not already done so, install the hypervisor software on the host. You will need to know which version
of the hypervisor software is supported by CloudStack and what additional configuration is required to
ensure the host will work with CloudStack. To find these installation details, see the appropriate section for your
hypervisor in the CloudStack Installation Guide.
2. Log in to the CloudStack UI as administrator.
3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want
to add the host.
4. Click the Compute tab. In the Clusters node, click View All.
5. Click the cluster where you want to add the host.
6. Click View Hosts.
7. Click Add Host.
8. Provide the following information.
Host Name. The DNS name or IP address of the host.
Username. Usually root.
Password. This is the password for the user named above, from your XenServer or KVM install.
Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance. For example,
you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to
be used only for VMs with the high availability feature enabled. For more information, see HA-Enabled
Virtual Machines as well as HA for Hosts.
There may be a slight delay while the host is provisioned. It should automatically display in the UI.
9. Repeat for additional hosts.
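Host additions can also be scripted with the addHost API. A minimal CloudMonkey sketch; the zone, pod, and cluster IDs, the host URL, and the credentials are placeholders to replace with values from your own deployment:

$ cloudmonkey add host zoneid=<zone-id> podid=<pod-id> clusterid=<cluster-id> \
    hypervisor=KVM url=http://kvm1.lab.example.org username=root password=<host-root-password>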
Adding a Host (vSphere)
For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to
CloudStack. See Add Cluster: vSphere.
If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs.
Adding Primary Storage
When you create a new zone, the first primary storage is added as part of that procedure. You can add primary storage
servers at any time, such as when adding a new cluster or adding more servers to an existing cluster.
Warning: When using preallocated storage for primary storage, be sure there is nothing on the storage (ex. you
have an empty SAN volume or an empty NFS share). Adding the storage to CloudStack will destroy any existing
data.
1. Log in to the CloudStack UI (see Log In to the UI).
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want
to add the primary storage.
3. Click the Compute tab.
4. In the Primary Storage node of the diagram, click View All.
5. Click Add Primary Storage.
6. Provide the following information in the dialog. The information required varies depending on your choice in
Protocol.
Scope. Indicate whether the storage is available to all hosts in the zone or only to hosts in a single cluster.
Pod. (Visible only if you choose Cluster in the Scope field.) The pod for the storage device.
Cluster. (Visible only if you choose Cluster in the Scope field.) The cluster for the storage device.
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. For Hyper-V, choose
SMB.
Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device.
Server (for VMFS). The IP address or DNS name of the vCenter server.
Path (for NFS). In NFS this is the exported path from the server.
Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore name. The
format is / datacenter name / datastore name. For example, /cloud.dc.VM/cluster1datastore.
Path (for SharedMountPoint). With KVM this is the path on each host where this primary storage
is mounted. For example, /mnt/primary.
SMB Username (for SMB/CIFS): Applicable only if you select SMB/CIFS provider. The username of
the account which has the necessary permissions to the SMB shares. The user must be part of the Hyper-V
administrator group.
SMB Password (for SMB/CIFS): Applicable only if you select SMB/CIFS provider. The password associated with the account.
SMB Domain(for SMB/CIFS): Applicable only if you select SMB/CIFS provider. The Active Directory
domain that the SMB share is a part of.
SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside CloudStack.
Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example,
iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set
or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides
primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that
has tags T1 and T2.
7. Click OK.
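Primary storage can also be added with the createStoragePool API instead of the UI. A minimal CloudMonkey sketch for an NFS pool; the IDs, server name, and export path are placeholders:

$ cloudmonkey create storagepool zoneid=<zone-id> podid=<pod-id> clusterid=<cluster-id> \
    scope=cluster name=cluster1-primary url=nfs://nfs1.lab.example.org/export/primary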
Configuring a Storage Plug-in
Note: Primary storage that is based on a custom plug-in (ex. SolidFire) must be added through the CloudStack API
(described later in this section). There is no support at this time through the CloudStack UI to add this type of primary
storage (although most of its features are available through the CloudStack UI).
Note: The SolidFire storage plug-in for CloudStack is part of the standard CloudStack install. There is no additional
work required to add this component.
Adding primary storage that is based on the SolidFire plug-in enables CloudStack to provide hard quality-of-service
(QoS) guarantees.
When used with Compute or Disk Offerings, an administrator is able to build an environment in which a root or data
disk that a user creates leads to the dynamic creation of a SolidFire volume, which has guaranteed performance. Such
a SolidFire volume is associated with one (and only ever one) CloudStack volume, so performance of the CloudStack
volume does not vary depending on how heavily other tenants are using the system.
The createStoragePool API has been augmented to support pluggable storage providers. The following is a list of
parameters to use when adding storage to CloudStack that is based on the SolidFire plug-in:
command=createStoragePool
scope=zone
zoneId=[your zone id]
name=[name for primary storage]
hypervisor=Any
provider=SolidFire
capacityIops=[whole number of IOPS from the SAN to give to CloudStack]
capacityBytes=[whole number of bytes from the SAN to give to CloudStack]
The url parameter is somewhat unique in that its value can contain additional key/value pairs.
url=[key/value pairs detailed below (values are URL encoded; for example, = is represented as %3D)]
MVIP%3D[Management Virtual IP Address] (can be suffixed with :[port number])
SVIP%3D[Storage Virtual IP Address] (can be suffixed with :[port number])
clusterAdminUsername%3D[cluster admin's username]
clusterAdminPassword%3D[cluster admin's password]
clusterDefaultMinIops%3D[Min IOPS (whole number) to set for a volume; used if Min IOPS is not specified
by administrator or user]
clusterDefaultMaxIops%3D[Max IOPS (whole number) to set for a volume; used if Max IOPS is not specified
by administrator or user]
clusterDefaultBurstIopsPercentOfMaxIops%3D[Burst IOPS is determined by (Min IOPS * clusterDefaultBurstIopsPercentOfMaxIops parameter) (can be a decimal value)]
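For illustration, an assembled call might look like the following CloudMonkey sketch. The addresses, credentials, capacity figures, and the exact separator and encoding used inside the url value are assumptions to adapt to your SolidFire environment:

$ cloudmonkey create storagepool scope=zone zoneid=<zone-id> name=SolidFire-Primary \
    provider=SolidFire hypervisor=Any capacityiops=4000000 capacitybytes=2251799813685248 \
    url='MVIP%3D192.168.56.7%3A443;SVIP%3D10.10.10.7%3A3260;clusterAdminUsername%3Dadmin;clusterAdminPassword%3Dsecret;clusterDefaultMinIops%3D200;clusterDefaultMaxIops%3D300;clusterDefaultBurstIopsPercentOfMaxIops%3D1.5'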
Create NFS Secondary Staging Store. This box must always be checked.
Warning: Even if the UI allows you to uncheck this box, do not do so. This checkbox and the three
fields below it must be filled in. Even when Swift or S3 is used as the secondary storage provider, an
NFS staging storage in each zone is still required.
Zone. The zone where the NFS Secondary Staging Store is to be located.
SMB Username: Applicable only if you select SMB/CIFS provider. The username of the account which
has the necessary permissions to the SMB shares. The user must be part of the Hyper-V administrator
group.
SMB Password: Applicable only if you select SMB/CIFS provider. The password associated with the
account.
SMB Domain: Applicable only if you select SMB/CIFS provider. The Active Directory domain that the
SMB share is a part of.
NFS server. The name of the zone's Secondary Staging Store.
Path. The path to the zone's Secondary Staging Store.
Adding an NFS Secondary Staging Store for Each Zone
Every zone must have at least one NFS store provisioned; multiple NFS servers are allowed per zone. To provision an
NFS Staging Store for a zone:
1. Log in to the CloudStack UI as root administrator.
2. In the left navigation bar, click Infrastructure.
3. In Secondary Storage, click View All.
4. In Select View, choose Secondary Staging Store.
5. Click the Add NFS Secondary Staging Store button.
6. Fill out the dialog box fields, then click OK:
Zone. The zone where the NFS Secondary Staging Store is to be located.
NFS server. The name of the zone's Secondary Staging Store.
Path. The path to the zone's Secondary Staging Store.
(b) In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only
the provided CentOS template is available.
(c) Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
(d) In data disk offering, if desired, add another data disk. This is a second volume that will be available to but
not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after
rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
(e) In default network, choose the primary network for the guest. In a trial installation, you would have only
one option here.
(f) Optionally give your VM a name and a group. Use any descriptive text you would like.
(g) Click Launch VM. Your VM will be created and started. It might take some time to download the template
and complete the VM startup. You can watch the VM's progress in the Instances screen.
4. To use the VM, click the View Console button.
For more information about using VMs, including instructions for how to allow incoming network traffic to the
VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines
in the Administrator's Guide.
Congratulations! You have successfully completed a CloudStack Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.
Commonly adjusted global configuration parameters include the following:

management.network.cidr: A CIDR that describes the network that the management CIDRs reside on. This variable must be set for deployments that use vSphere. It is recommended to be set for other deployments as well. Example: 192.168.3.0/24.

xen.setup.multipath: For XenServer nodes, this is a true/false variable that instructs CloudStack to enable iSCSI multipath on the XenServer Hosts when they are added. This defaults to false. Set it to true if you would like CloudStack to enable multipath. If this is true for an NFS-based deployment, multipath will still be enabled on the XenServer host. However, this does not impact NFS operation and is harmless.

secstorage.allowed.internal.sites: This is used to protect your internal network from rogue attempts to download arbitrary files using the template download feature. This is a comma-separated list of CIDRs. If a requested URL matches any of these CIDRs, the Secondary Storage VM will use the private network interface to fetch the URL. Other URLs will go through the public interface. We suggest you set this to one or two hardened internal machines where you keep your templates. For example, set it to 192.168.1.66/32.

use.local.storage: Determines whether CloudStack will use storage that is local to the Host for data disks, templates, and snapshots. By default CloudStack will not use this storage. You should change this to true if you want to use local storage and you understand the reliability and feature drawbacks of choosing local storage.

host: This is the IP address of the Management Server. If you are using multiple Management Servers you should enter a load balanced IP address that is reachable via the private network.

default.page.size: Maximum number of items per page that can be returned by a CloudStack API command. The limit applies at the cloud level and can vary from cloud to cloud. You can override this with a lower value on a particular API call by using the page and pagesize API command parameters. For more information, see the Developer's Guide. Default: 500.

ha.tag: The label you want to use throughout the cloud to designate certain hosts as dedicated HA hosts. These hosts will be used only for HA-enabled VMs that are restarting due to the failure of another host. For example, you could set this to ha_host. Specify the ha.tag value as a host tag when you add a new host to the cloud.

vmware.vcenter.session.timeout: Determines the vCenter session timeout value. The default value is 20 minutes. Increase the timeout value to avoid timeout errors in VMware deployments where certain VMware operations take more than 20 minutes.
Setting Global Configuration Parameters
Use the following steps to set global configuration parameters. These values will be the defaults in effect throughout
your CloudStack deployment.
1. Log in to the UI as administrator.
2. In the left navigation bar, click Global Settings.
3. In Select View, choose one of the following:
Global Settings. This displays a list of the parameters with brief descriptions and current values.
Hypervisor Capabilities. This displays a list of hypervisor versions with the maximum number of guests
supported for each.
4. Use the search box to narrow down the list to those you are interested in.
5. In the Actions column, click the Edit icon to modify a value. If you are viewing Hypervisor Capabilities, you
must click the name of the hypervisor first to display the editing screen.
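Global settings can also be changed through the updateConfiguration API rather than the UI. A minimal CloudMonkey sketch; the parameter name and value shown are only an illustration:

$ cloudmonkey update configuration name=expunge.delay value=120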
The following parameters can be set at account, cluster, or zone scope rather than only globally.

Account scope:

remote.access.vpn.client.iprange: The range of IPs to be allocated to remotely access the VPN clients. The first IP in the range is used by the VPN server.

allow.public.user.templates: If false, users will not be able to create public templates.

use.system.public.ips: If true and if an account has one or more dedicated public IP ranges, IPs are acquired from the system pool after all the IPs dedicated to the account have been consumed.

use.system.guest.vlans: If true and if an account has one or more dedicated guest VLAN ranges, VLANs are allocated from the system pool after all the VLANs dedicated to the account have been consumed.

Cluster scope:

cluster.storage.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of allocated storage utilization above which alerts are sent that the storage is below the threshold.

cluster.storage.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of storage utilization above which alerts are sent that the available storage is below the threshold.

cluster.cpu.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of CPU utilization above which alerts are sent that the available CPU is below the threshold.

cluster.memory.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of memory utilization above which alerts are sent that the available memory is below the threshold.

cluster.cpu.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of CPU utilization above which allocators will disable that cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.

cluster.memory.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of memory utilization above which allocators will disable that cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.

cpu.overprovisioning.factor: Used for CPU over-provisioning calculation; the available CPU will be the mathematical product of actualCpuCapacity and cpu.overprovisioning.factor.

mem.overprovisioning.factor: Used for memory over-provisioning calculation.

vmware.reserve.cpu: Specify whether or not to reserve CPU when not over-provisioning; in case of CPU over-provisioning, CPU is always reserved.

vmware.reserve.mem: Specify whether or not to reserve memory when not over-provisioning; in case of memory over-provisioning, memory is always reserved.

Zone scope:

pool.storage.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of allocated storage utilization above which allocators will disable that pool because the available allocated storage is below the threshold.

pool.storage.capacity.disablethreshold: The percentage, as a value between 0 and 1, of storage utilization above which allocators will disable the pool because the available storage capacity is below the threshold.

storage.overprovisioning.factor: Used for storage over-provisioning calculation; available storage will be the mathematical product of actualStorageSize and storage.overprovisioning.factor.

network.throttling.rate: Default data transfer rate in megabits per second allowed in a network.

guest.domain.suffix: Default domain name for VMs inside a virtual network with a router.

router.template.xen: Name of the default router template on XenServer.

router.template.kvm: Name of the default router template on KVM.

router.template.vmware: Name of the default router template on VMware.

enable.dynamic.scale.vm: Enable or disable dynamic scaling of a VM.

use.external.dns: Bypass internal DNS, and use the external DNS1 and DNS2.

blacklisted.routes: Routes that are blacklisted cannot be used for creating static routes for a VPC Private Gateway.
CHAPTER 6
Hypervisor Setup
Hyper-V Requirements

Server Roles: Hyper-V. After the Windows Server 2012 R2 installation, ensure that Hyper-V is selected from Server Roles. For more information, see Installing Hyper-V.

Share Location: Ensure that folders are created for Primary and Secondary storage. The SMB share and the hosts should be part of the same domain. If you are using a Windows SMB share, the location of the file share for the Hyper-V deployment will be the new folder created in \Shares on the selected volume. You can create sub-folders for both CloudStack Primary and Secondary storage within the share location. When you select the profile for the file shares, ensure that you select SMB Share - Applications. This creates the file shares with settings appropriate for Hyper-V.

Hosts should be part of the same Active Directory domain.

A domain user needs Full control on the SMB file share.

If you are using Hyper-V 2012 R2, manually create an external virtual switch before adding the host to CloudStack. If the Hyper-V host is added to the Hyper-V Manager, select the host, then click Virtual Switch Manager, then New Virtual Switch. In the External Network, select the desired NIC adapter and click Apply. If you are using Windows 2012 R2, the virtual switch is created automatically.

Take note of the name of the virtual switch. You need to specify it when configuring the CloudStack physical network labels.

Add the Hyper-V domain users to the Hyper-V Administrators group. A domain user should have full control on the SMB share that is exported for primary and secondary storage. This domain user should be part of the Hyper-V Administrators and Local Administrators groups on the Hyper-V hosts that are to be managed by CloudStack. The Hyper-V Agent service runs with the credentials of this domain user account. Specify the credentials of this domain user when adding a host to CloudStack.
This command creates the self-signed certificate and adds it to the certificate store LocalMachine\My.
(b) Add the created certificate to port 8250 for https communication:
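A PowerShell sketch of these two steps; the DNS name, certificate thumbprint, and application GUID below are placeholders to replace with your own values:

# (a) Create a self-signed certificate in the LocalMachine\My store
New-SelfSignedCertificate -DnsName apachecloudstack -CertStoreLocation Cert:\LocalMachine\My

# (b) Bind that certificate to port 8250 for HTTPS, using the thumbprint printed by the command above
netsh http add sslcert ipport=0.0.0.0:8250 certhash=<certificate-thumbprint> appid="{00112233-4455-6677-8899-aabbccddeeff}"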
If you are using Windows 2012 R2, virtual switch is created automatically.
This should return a fully qualified hostname such as kvm1.lab.example.org. If it does not, edit /etc/hosts so
that it does.
3. Make sure that the machine can reach the Internet.
$ ping www.cloudstack.org
In RHEL or CentOS:
$ yum install cloudstack-agent
In Ubuntu:
$ apt-get install cloudstack-agent
The host is now ready to be added to a cluster. This is covered in a later section, see Adding a Host. It is recommended
that you continue to read the documentation before adding the host!
If you're using a non-root user to add the KVM host, please add the user to the sudoers file:
cloudstack ALL=NOPASSWD: /usr/bin/cloudstack-setup-agent
defaults:cloudstack !requiretty
host-model:
guest.cpu.mode=host-model
host-passthrough:
guest.cpu.mode=host-passthrough
guest.cpu.features=vmx
Note: host-passthrough may lead to migration failure. If you have this problem, you should use host-model or custom.
guest.cpu.features will force CPU features as a required policy, so make sure to put only those features that are provided
by the host CPU.
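Step 2 below refers to turning on listen_tcp in libvirtd.conf; that edit typically means setting values along these lines in /etc/libvirt/libvirtd.conf (a sketch; adapt the exact values to your environment):

listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
mdns_adv = 0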
2. Turning on listen_tcp in libvirtd.conf is not enough; we also have to change the daemon parameters.
On RHEL or CentOS, modify /etc/sysconfig/libvirtd and uncomment the following line:
#LIBVIRTD_ARGS="--listen"
On Ubuntu, modify /etc/default/libvirt-bin and change the line
libvirtd_opts="-d"
so it looks like:
libvirtd_opts="-d -l"
3. Restart libvirt
In RHEL or CentOS:
$ service libvirtd restart
In Ubuntu:
$ service libvirt-bin restart
(b) Set the SELINUX variable in /etc/selinux/config to permissive. This ensures that the permissive setting will be maintained after a system reboot.
In RHEL or CentOS:
$ vi /etc/selinux/config
Change the SELINUX line to:
SELINUX=permissive
(c) Then set SELinux to permissive starting immediately, without requiring a system reboot.
$ setenforce permissive
Note: This section details how to configure bridges using the native implementation in Linux. Please refer to the next
section if you intend to use OpenVswitch
In order to forward traffic to your instances you will need at least two bridges: public and private.
By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each
hypervisor.
The most important factor is that you keep the configuration consistent on all your hypervisors.
Network example
There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one
for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
1. VLAN 100 for management of the hypervisor
2. VLAN 200 for public network of the instances (cloudbr0)
3. VLAN 300 for private network of the instances (cloudbr1)
On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1
Note: The Hypervisor and Management server don't have to be in the same subnet!
The required packages were installed when libvirt was installed, so we can proceed to configuring the network.
First we configure the management VLAN interface (eth0.100), which carries the hypervisor's IP address:

$ vi /etc/sysconfig/network-scripts/ifcfg-eth0.100

DEVICE=eth0.100
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0.200
DEVICE=eth0.200
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0.300
DEVICE=eth0.300
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr1
Now that we have the VLAN interfaces configured, we can add the bridges on top of them.
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
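A sketch of the bridge definition, consistent with the cloudbr0 example shown later in this chapter; create an equivalent ifcfg-cloudbr1 for the private bridge:

DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes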
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything
works properly.
Warning: Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a
configuration error and the network stops functioning!
Configure in Ubuntu
All the required packages were installed when you installed libvirt, so we only have to configure the network.
$ vi /etc/network/interfaces
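A sketch that follows the VLAN and bridge layout of this example and matches the cloudbr1 fragment shown further below; the addresses are the example values used above:

auto lo
iface lo inet loopback

# Management VLAN interface of the hypervisor
auto eth0.100
iface eth0.100 inet static
    address 192.168.42.11
    netmask 255.255.255.0
    gateway 192.168.42.1

# Public network
auto cloudbr0
iface cloudbr0 inet manual
    bridge_ports eth0.200
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports eth0.300
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1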
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything
works properly.
Warning: Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a
configuration error and the network stops functioning!
The most important factor is that you keep the configuration consistent on all your hypervisors.
Preparing
To make sure that the native bridge module will not interfere with openvswitch the bridge module should be added
to the blacklist. See the modprobe documentation for your distribution on where to find the blacklist. Make sure the
module is not loaded either by rebooting or executing rmmod bridge before executing next steps.
The network configurations below depend on the ifup-ovs and ifdown-ovs scripts which are part of the openvswitch
installation. They should be installed in /etc/sysconfig/network-scripts/
Network example
There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one
for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
1. VLAN 100 for management of the hypervisor
2. VLAN 200 for public network of the instances (cloudbr0)
3. VLAN 300 for private network of the instances (cloudbr1)
On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1
Note: The Hypervisor and Management server don't have to be in the same subnet!
Configure OpenVswitch
The network interfaces using OpenVswitch are created using the ovs-vsctl command. This command will configure
the interfaces and persist them to the OpenVswitch database.
First we create a main bridge connected to the eth0 interface. Next we create three fake bridges, each connected to a
specific vlan tag.
# ovs-vsctl add-br cloudbr
# ovs-vsctl add-port cloudbr eth0
# ovs-vsctl set port cloudbr trunks=100,200,300
# ovs-vsctl add-br mgmt0 cloudbr 100
# ovs-vsctl add-br cloudbr0 cloudbr 200
# ovs-vsctl add-br cloudbr1 cloudbr 300
The required packages were installed when openvswitch and libvirt were installed, so we can proceed to configuring the
network.
First we configure eth0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything
works properly.
Warning: Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a
configuration error and the network stops functioning!
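A sketch of the kind of rules typically opened for a CloudStack KVM host (SSH, the agent port, libvirt, the VNC console range, and the libvirt migration range); verify the ports against your own deployment before saving them:

$ iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT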
These iptables settings are not persistent across reboots, so we have to save them first.
$ iptables-save > /etc/sysconfig/iptables
Note: By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not
enable the firewall.
This should return a fully qualified hostname such as kvm1.lab.example.org. If it does not, edit /etc/hosts so
that it does.
3. Make sure that the machine can reach the Internet.
$ ping www.cloudstack.org
In RHEL or CentOS:
$ yum install cloudstack-agent
In Ubuntu:
$ apt-get install cloudstack-agent
1. Configure the agent settings. The settings are in /etc/cloudstack/agent/agent.properties.
2. Optional: If you would like to use direct networking (instead of the default bridge networking), configure these
lines:
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.DirectVifDriver
network.direct.source.mode=private
network.direct.device=eth0
The host is now ready to be added to a cluster. This is covered in a later section, see Adding a Host. It is recommended
that you continue to read the documentation before adding the host!
2. Turning on listen_tcp in libvirtd.conf is not enough; we also have to change the daemon parameters.
On RHEL or CentOS, modify /etc/sysconfig/libvirtd and uncomment the following line:
#LIBVIRTD_ARGS="--listen"
On Ubuntu, modify /etc/default/libvirt-bin and change the line
libvirtd_opts="-d"
so it looks like:
libvirtd_opts="-d -l"
3. In order to have the VNC Console work we have to make sure it will bind on 0.0.0.0. We do this by editing
/etc/libvirt/qemu.conf
Make sure this parameter is set:
vnc_listen = "0.0.0.0"
4. Restart libvirt
In RHEL or CentOS:
$ service libvirtd restart
In Ubuntu:
$ service libvirt-bin restart
(b) Set the SELINUX variable in /etc/selinux/config to permissive. This ensures that the permissive setting will be maintained after a system reboot.
In RHEL or CentOS:
$ vi /etc/selinux/config
Change the SELINUX line to:
SELINUX=permissive
(c) Then set SELinux to permissive starting immediately, without requiring a system reboot.
$ setenforce permissive
Note: This section details how to configure bridges using the native implementation in Linux. Please refer to the next
section if you intend to use OpenVswitch
In order to forward traffic to your instances you will need at least two bridges: public and private.
By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each
hypervisor.
The most important factor is that you keep the configuration consistent on all your hypervisors.
Network example
There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one
for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
1. VLAN 100 for management of the hypervisor
2. VLAN 200 for public network of the instances (cloudbr0)
3. VLAN 300 for private network of the instances (cloudbr1)
On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1
Note: The Hypervisor and Management server don't have to be in the same subnet!
The required packages were installed when libvirt was installed, so we can proceed to configuring the network.
First we configure eth0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
Now that we have the VLAN interfaces configured, we can add the bridges on top of them.
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything
works properly.
Warning: Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a
configuration error and the network stops functioning!
Configure in Ubuntu
All the required packages were installed when you installed libvirt, so we only have to configure the network.
$ vi /etc/network/interfaces
# Private network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth0.300
bridge_fd 5
bridge_stp off
bridge_maxwait 1
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything
works properly.
Warning: Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a
configuration error and the network stops functioning!
These iptables settings are not persistent across reboots, so we have to save them first.
$ iptables-save > /etc/sysconfig/iptables
Note: By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not
enable the firewall.
Hardware requirements:
The host must be certified as compatible with vSphere. See the VMware Hardware Compatibility Guide at
http://www.vmware.com/resources/compatibility/search.php.
All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled).
All hosts within a cluster must be homogeneous. That means the CPUs must be of the same type, count, and
feature flags.
64-bit x86 CPU (more cores results in better performance)
Hardware virtualization support required
4 GB of memory
36 GB of local disk
At least 1 NIC
Statically allocated IP Address
vCenter Server requirements:
Processor - 2 CPUs 2.0GHz or higher Intel or AMD x86 processors. Processor requirements may be higher if
the database runs on the same machine.
Memory - 3GB RAM. RAM requirements may be higher if your database runs on the same machine.
Disk storage - 2GB. Disk requirements may be higher if your database runs on the same machine.
Microsoft SQL Server 2005 Express disk requirements. The bundled database requires up to 2GB free disk
space to decompress the installation archive.
Networking - 1Gbit or 10Gbit.
For more information, see vCenter Server and the vSphere Client Hardware Requirements.
Other requirements:
VMware vCenter Standard Edition 4.1, 5.0, 5.1 or 5.5 must be installed and available to manage the vSphere
hosts.
vCenter must be configured to use the standard port 443 so that it can communicate with the CloudStack Management Server.
You must re-install VMware ESXi if you are going to re-use a host from a previous install.
CloudStack requires VMware vSphere 4.1, 5.0, 5.1 or 5.5. VMware vSphere 4.0 is not supported.
All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled). All hosts within a cluster must
be homogeneous. That means the CPUs must be of the same type, count, and feature flags.
The CloudStack management network must not be configured as a separate virtual network. The CloudStack
management network is the same as the vCenter management network, and will inherit its configuration. See
Configure vCenter Management Network.
CloudStack requires ESXi and vCenter. ESX is not supported.
Ideally all resources used for CloudStack must be used for CloudStack only. CloudStack should not share an
instance of ESXi or storage with other management consoles. Do not share the same storage volumes that will
be used by CloudStack with a different set of ESXi servers that are not managed by CloudStack.
Put all target ESXi hypervisors in dedicated clusters in a separate Datacenter in vCenter.
Ideally, clusters that will be managed by CloudStack should not contain any other VMs. Do not run the management server or vCenter on the cluster that is designated for CloudStack use. Create a separate cluster for use of
CloudStack and make sure that there are no VMs in this cluster.
All of the required VLANs must be trunked into all network switches that are connected to the ESXi hypervisor hosts. These would include the VLANs for Management, Storage, vMotion, and guest VLANs. The
guest VLAN (used in Advanced Networking; see Network Setup) is a contiguous range of VLANs that will be
managed by CloudStack.
Before you begin, gather the following vCenter information:
The vCenter user to be used (this user must have admin privileges).
The password for the above user.
The name of the datacenter.
The name of the cluster.

Also gather the following networking information:
The VLAN on which all your ESXi hypervisors reside.
An IP address range in the ESXi VLAN. One address per Virtual Router is used from this range.
A range of public IP addresses available for CloudStack use. These addresses will be used by the virtual router on CloudStack to route private traffic to external networks.
A contiguous range of non-routable VLANs. One VLAN will be assigned for each customer.
Optional items: NIC bonding and multipath storage.
In the host configuration tab, click the Hardware/Networking link to bring up the networking configuration page as
above.
Configure Virtual Switch
During the initial installation of an ESXi host a default virtual switch vSwitch0 is created. You may need to create
additional vSwitches depending on your required architecture. CloudStack requires all ESXi hosts in the cloud to use
consistently named virtual switches. If you change the default virtual switch name, you will need to configure one or
more CloudStack configuration variables as well.
Separating Traffic
CloudStack allows you to configure three separate networks per ESXi host. CloudStack identifies these networks by
the name of the vSwitch they are connected to. The networks for configuration are public (for traffic to/from the
public internet), guest (for guest-guest traffic), and private (for management and usually storage traffic). You can use
the default virtual switch for all three, or create one or two other vSwitches for those traffic types.
If you want to separate traffic in this way you should first create and configure vSwitches in vCenter according to
the vCenter instructions. Take note of the vSwitch names you have used for each traffic type. You will configure
CloudStack to use these vSwitches.
Increasing Ports
By default a virtual switch on ESXi hosts is created with 56 ports. We recommend setting it to 4088, the maximum
number of ports allowed. To do that, click the Properties... link for virtual switch (note this is not the Properties link
for Networking).
In vSwitch properties dialog, select the vSwitch and click Edit. You should see the following dialog:
In this dialog, you can change the number of switch ports. After you've done that, ESXi hosts are required to reboot
in order for the setting to take effect.
If the ESXi hosts have multiple VMKernel ports, and ESXi is not using the default value Management Network as
the management network name, you must follow these guidelines to configure the management network port group so
that CloudStack can find it:
Use one label for the management network port across all ESXi hosts.
In the CloudStack UI, go to Configuration - Global Settings and set vmware.management.portgroup to the
management network label from the ESXi hosts.
Extend Port Range for CloudStack Console Proxy
(Applies only to VMware vSphere version 4.x)
You need to extend the range of firewall ports that the console proxy works with on the hosts. This is to enable the
console proxy to work with VMware-based VMs. The default additional port range is 59000-60000. To extend the
port range, log in to the VMware ESX service console on each host and run the following commands:
esxcfg-firewall -o 59000-60000,tcp,in,vncextras
esxcfg-firewall -o 59000-60000,tcp,out,vncextras
For a smoother configuration of Nexus 1000v switch, gather the following information before you start:
vCenter credentials
Nexus 1000v VSM IP address
Nexus 1000v VSM Credentials
Ethernet port profile names
vCenter Credentials Checklist: you will need the vCenter credentials noted above, and vCenter must be reachable on the standard secure HTTP port, 443.
The following information specified in the Nexus Configure Networking screen is displayed in the Details tab of the
Nexus dvSwitch in the CloudStack UI:
Control Port Group VLAN ID The VLAN ID of the Control Port Group. The control VLAN is used for communication between the VSM and the VEMs.
Management Port Group VLAN ID The VLAN ID of the Management Port Group. The management VLAN corresponds to the mgmt0 interface that is used to establish and maintain the connection between the VSM and VMware
vCenter Server.
Packet Port Group VLAN ID The VLAN ID of the Packet Port Group. The packet VLAN forwards relevant data
packets from the VEMs to the VSM.
Note: The VLANs used for control, packet, and management port groups can be the same.
For more information, see Cisco Nexus 1000V Getting Started Guide.
VSM Configuration Checklist
Whether you create a Basic or Advanced zone configuration, ensure that you always create an Ethernet port
profile on the VSM after you install it and before you create the zone.
The Ethernet port profile created to represent the physical network or networks used by an Advanced zone
configuration must trunk all the VLANs, including guest VLANs, the VLANs that serve the native VLAN, and
the packet/control/data/management VLANs of the VSM.
The Ethernet port profile created for a Basic zone configuration does not trunk the guest VLANs because
the guest VMs do not get their own VLANs provisioned on their network interfaces in a Basic zone.
An Ethernet port profile configured on the Nexus 1000v virtual switch should not include, in its set of system
VLANs, any of the VLANs configured or intended to be configured for use by VMs or VM resources
in the CloudStack environment.
You do not have to create any vEthernet port profiles; CloudStack does that during VM deployment.
Ensure that you create required port profiles to be used by CloudStack for different traffic types of CloudStack,
such as Management traffic, Guest traffic, Storage traffic, and Public traffic. The physical networks configured
during zone creation should have a one-to-one relation with the Ethernet port profiles.
For information on creating a port profile, see Cisco Nexus 1000V Port Profile Configuration Guide.
Assigning Physical NIC Adapters
Assign ESXi hosts physical NIC adapters, which correspond to each physical network, to the port profiles. In each
ESXi host that is part of the vCenter cluster, observe the physical networks assigned to each port profile and note
down the names of the port profile for future use. This mapping information helps you when configuring physical networks during the zone configuration on CloudStack. These Ethernet port profile names are later specified as VMware
Traffic Labels for different traffic types when configuring physical networks during the zone configuration. For more
information on configuring physical networks, see Configuring a vSphere Cluster with Nexus 1000v Virtual Switch.
Adding VLAN Ranges
Determine the public VLAN, System VLAN, and Guest VLANs to be used by CloudStack. Ensure that you add
them to the port profile database. Corresponding to each physical network, add the VLAN range to port profiles. In
the VSM command prompt, run the switchport trunk allowed vlan<range> command to add the VLAN ranges to the
port profile.
For example:
switchport trunk allowed vlan 1,140-147,196-203
In this example, the allowed VLANs added are 1, 140-147, and 196-203
You must also add all the public and private VLANs or VLAN ranges to the switch. This range is the VLAN range
you specify in your zone.
Note: Before you run the vlan command, ensure that the configuration mode is enabled in Nexus 1000v virtual
switch.
For example:
If you want the VLAN 200 to be used on the switch, run the following command:
vlan 200
If you want the VLAN range 1350-1750 to be used on the switch, run the following command:
vlan 1350-1750
After the zone is created, if you want to create an additional cluster along with Nexus 1000v virtual switch in the
existing zone, use the Add Cluster option. For information on creating a cluster, see Add Cluster: vSphere.
In both these cases, you must specify the following parameters to configure Nexus virtual switch:
Parameters and descriptions:

Cluster Name: Enter the name of the cluster you created in vCenter. For example, cloud.cluster.
vCenter Host: Enter the host name or the IP address of the vCenter host where you have deployed the Nexus virtual switch.
vCenter User name: Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
vCenter Password: Enter the password for the user named above.
vCenter Datacenter: Enter the vCenter datacenter that the cluster is in. For example, cloud.dc.VM.
Nexus dvSwitch IP Address: The IP address of the VSM component of the Nexus 1000v virtual switch.
Nexus dvSwitch Username: The admin name to connect to the VSM appliance.
Nexus dvSwitch Password: The corresponding password for the admin user specified above.
The virtual switch to use can be specified in the Public Traffic vSwitch Type field when you add a VMware VDS-enabled cluster, or as the switch name in the traffic label when updating the switch type in a zone.
The traffic label format in the latter case is [[Name of vSwitch/dvSwitch/EthernetPortProfile][,VLAN ID[,vSwitch Type]]]
The possible values for traffic labels are:
empty string
dvSwitch0
dvSwitch0,200
dvSwitch1,300,vmwaredvs
myEthernetPortProfile,,nexusdvs
dvSwitch0,,vmwaredvs
The three fields to fill in are:
Name of the virtual / distributed virtual switch at vCenter.
The default value depends on the type of virtual switch:
vSwitch0: If type of virtual switch is VMware vNetwork Standard virtual switch
dvSwitch0: If type of virtual switch is VMware vNetwork Distributed virtual switch
epp0: If type of virtual switch is Cisco Nexus 1000v Distributed virtual switch
VLAN ID to be used for this traffic wherever applicable.
This field is used only for public traffic as of now. In the case of guest traffic this field is ignored and
can be left empty. By default an empty string is assumed, which translates to an untagged
VLAN for that specific traffic type.
Type of virtual switch. Specified as string.
Possible valid values are vmwaredvs, vmwaresvs, nexusdvs.
vmwaresvs: Represents VMware vNetwork Standard virtual switch
vmwaredvs: Represents VMware vNetwork distributed virtual switch
nexusdvs: Represents Cisco Nexus 1000v distributed virtual switch.
If nothing is specified (left empty), the zone-level default virtual switch is used, based on the value of
the global parameter you specify.
Following are the global configuration parameters:
vmware.use.dvswitch: Set to true to enable any kind (VMware DVS and Cisco Nexus 1000v) of distributed
virtual switch in a CloudStack deployment. If set to false, the virtual switch that can be used in that CloudStack
deployment is Standard virtual switch.
vmware.use.nexus.vswitch: This parameter is ignored if vmware.use.dvswitch is set to false. Set to true to
enable Cisco Nexus 1000v distributed virtual switch in a CloudStack deployment.
Enabling Virtual Distributed Switch in CloudStack
To make a CloudStack deployment VDS enabled, set the vmware.use.dvswitch parameter to true by using the Global
Settings page in the CloudStack UI and restart the Management Server. Unless you enable the vmware.use.dvswitch
parameter, you cannot see any UI options specific to VDS, and CloudStack ignores the VDS-specific parameters that
you specify. Additionally, CloudStack uses VDS for virtual network infrastructure if the value of the vmware.use.dvswitch
parameter is true and the value of the vmware.use.nexus.vswitch parameter is false. Another global parameter that defines
VDS configuration is vmware.ports.per.dvportgroup. This is the default number of ports per VMware dvPortGroup in
a VMware environment. The default value is 256. This number is directly associated with the number of guest networks you
can create.
CloudStack supports orchestration of virtual networks in a deployment with a mix of Virtual Distributed Switch,
Standard Virtual Switch and Nexus 1000v Virtual Switch.
Configuring Distributed Virtual Switch in CloudStack
You can configure VDS by adding the necessary resources while a zone is created.
Alternatively, at the cluster level, you can create an additional cluster with VDS enabled in the existing zone. Use the
Add Cluster option. For more information, see Add Cluster: vSphere.
In both these cases, you must specify the following parameters to configure VDS:
Parameters and descriptions:

Cluster Name: Enter the name of the cluster you created in vCenter. For example, cloud.cluster.
vCenter Host: Enter the name or the IP address of the vCenter host where you have deployed the VMware VDS.
vCenter User name: Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
vCenter Password: Enter the password for the user named above.
vCenter Datacenter: Enter the vCenter datacenter that the cluster is in. For example, cloud.dc.VM.
Override Public Traffic: Enable this option to override the zone-wide public traffic for the cluster you are creating.
Public Traffic vSwitch Type: This option is displayed only if you enable the Override Public Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
Public Traffic vSwitch Name: Name of the virtual switch to be used for the public traffic.
Override Guest Traffic: Enable this option to override the zone-wide guest traffic for the cluster you are creating.
Guest Traffic vSwitch Type: This option is displayed only if you enable the Override Guest Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
Guest Traffic vSwitch Name: Name of the virtual switch to be used for guest traffic.
CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any
system that is not up to date with patches.
All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
Must support HVM (Intel-VT or AMD-V enabled in BIOS)
64-bit x86 CPU (more cores results in better performance)
Hardware virtualization support required
4 GB of memory
36 GB of local disk
At least 1 NIC
Statically allocated IP Address
When you deploy CloudStack, the hypervisor host must not have any VMs already running
Warning: The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
# vi /etc/ntp.conf
Add one or more server lines in this file with the names of the NTP servers you want to use. For example:
server 0.xenserver.pool.ntp.org
server 1.xenserver.pool.ntp.org
server 2.xenserver.pool.ntp.org
server 3.xenserver.pool.ntp.org
# xe-install-supplemental-pack xenserver-cloud-supp.iso
4. If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
# xe-switch-network-backend bridge
The output should look like this, although the specific file name will be different (scsi-<scsiID>):
lrwxrwxrwx 1 root root 9 Mar 16 13:47
/dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -> ../../sdc
The output should look like this, although the specific ID will be different:
e6849e96-86c3-4f2c-8fcc-350cc711be3d
7. Create the FiberChannel SR. In name-label, use the unique ID you just generated.
# xe sr-create type=lvmohba shared=true
device-config:SCSIid=360a98000503365344e6f6177615a516b
name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
This command returns a unique ID for the SR, like the following example (your ID will be different):
7a143820-e893-6c6a-236e-472da6ee66bf
8. To create a human-readable description for the SR, use the following command. In uuid, use the SR ID returned
by the previous command. In name-description, set whatever friendly text you prefer.
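A sketch of the typical form of this command; the SR UUID is the example ID from the previous step, and the description is a placeholder:

# xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel SR"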
Make note of the values you will need when you add this storage to CloudStack later (see Add Primary Storage). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will
enter the name-label you set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
9. (Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the documentation provided by
the SAN vendor.
different NICs on the hosts in a cluster. For example, the public network can be on eth0 on node A and eth1 on
node B. However, the XenServer name-label for the public network must be identical across all hosts. The following
examples set the network label to cloud-public. After the management server is installed and running you must
configure it with the name of the chosen network label (e.g. cloud-public); this is discussed in Management Server
Installation.
If you are using two NICs bonded together to create a public network, see NIC Bonding for XenServer (Optional).
If you are using a single dedicated NIC to provide public network access, follow this procedure on each new host that
is added to CloudStack before adding the host.
1. Run xe network-list and find the public network. This is usually attached to the NIC that is public. Once you
find the network make note of its UUID. Call this <UUID-Public>.
2. Run the following command.
# xe network-param-set name-label=cloud-public uuid=<UUID-Public>
3. Repeat these steps for each additional guest network, using a different name-label and uuid each time.
Separate Storage Network for XenServer (Optional)
You can optionally set up a separate storage network. This should be done first on the host, before implementing the
bonding steps below. This can be done using one or two available NICs. With two NICs bonding may be done as
above. It is the administrator's responsibility to set up a separate storage network.
Give the storage network a different name-label than what will be given for other networks.
For the separate storage network to work correctly, it must be the only interface that can ping the primary storage
device's IP address. For example, if eth0 is the management network NIC, ping -I eth0 <primary storage device IP>
must fail. In all deployments, secondary storage devices must be pingable from the management network NIC or bond.
If a secondary storage device has been placed on the storage network, it must also be pingable via the storage network
NIC or bond on the hosts as well.
You can set up two separate storage networks as well. For example, if you intend to implement iSCSI multipath,
dedicate two non-bonded NICs to multipath. Each of the two networks needs a unique name-label.
If no bonding is done, the administrator must set up and name-label the separate storage network on all hosts (masters
and slaves).
Here is an example to set up eth5 to access a storage network on 172.16.0.0/24.
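A sketch using the standard xe CLI; the PIF UUID and the address are placeholders:

# xe pif-list host-name-label='hostname' device=eth5
# xe pif-reconfigure-ip uuid=<UUID-of-eth5-PIF> mode=static IP=172.16.0.55 netmask=255.255.255.0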
The administrator must bond the management network NICs prior to adding the host to CloudStack.
Creating a Private Bond on the First Host in the Cluster
Use the following steps to create a bond in XenServer. These steps should be run on only the first host in a cluster.
This example creates the cloud-private network with two physical NICs (eth0 and eth1) bonded into it.
1. Find the physical NICs that you want to bond together.
# xe pif-list host-name-label='hostname' device=eth0
# xe pif-list host-name-label='hostname' device=eth1
These commands show the eth0 and eth1 NICs and their UUIDs. Substitute the ethX devices of your choice.
Call the UUIDs returned by the above command slave1-UUID and slave2-UUID.
2. Create a new network for the bond. For example, a new network with name cloud-private.
This label is important. CloudStack looks for a network by a name you configure. You must use the same
name-label for all hosts in the cloud for the management network.
# xe network-create name-label=cloud-private
# xe bond-create network-uuid=[uuid of cloud-private created above]
pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the management network.
Public Network Bonding
Bonding can be implemented on a separate, public network. The administrator is responsible for creating a bond for
the public network if that network will be bonded and will be separate from the management network.
Creating a Public Bond on the First Host in the Cluster
These steps should be run on only the first host in a cluster. This example creates the cloud-public network with two
physical NICs (eth2 and eth3) bonded into it.
1. Find the physical NICs that you want to bond together.
# xe pif-list host-name-label='hostname' device=eth2
# xe pif-list host-name-label='hostname' device=eth3
These commands show the eth2 and eth3 NICs and their UUIDs. Substitute the ethX devices of your choice.
Call the UUIDs returned by the above command slave1-UUID and slave2-UUID.
2. Create a new network for the bond. For example, a new network with name cloud-public.
This label is important. CloudStack looks for a network by a name you configure. You must use the same
name-label for all hosts in the cloud for the public network.
# xe network-create name-label=cloud-public
# xe bond-create network-uuid=[uuid of cloud-public created above]
pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the public network.
Adding More Hosts to the Cluster
With the bonds (if any) established on the master, you should add additional, slave hosts. Run the following command
for all additional hosts to be added to the cluster. This will cause the host to join the master in a single XenServer pool.
# xe pool-join master-address=[master IP] master-username=root
master-password=[your password]
With all hosts added to the pool, run the cloud-setup-bond script. This script will complete the configuration and set
up of the bonds across all hosts in the cluster.
1. Copy
the
script
from
the
Management
Server
in
/usr/share/cloudstackcommon/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
2. Run the script:
# ./cloud-setup-bonding.sh
Now the bonds are set up and configured properly across the cluster.
(b) You might need to change the OS type settings for VMs running on the upgraded hosts.
If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2, change any VMs that have the OS
type CentOS 5.5 (32-bit), Oracle Enterprise Linux 5.5 (32-bit), or Red Hat Enterprise Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to
Other Linux (64-bit).
If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2, change any VMs that have the OS type
CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise
Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to
Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other
Linux (64-bit).
If you upgraded from XenServer 5.6 to XenServer 6.0.2, do all of the above.
(c) Restart the Management Server and Usage Server. You only need to do this once for all clusters.
# service cloudstack-management start
# service cloudstack-usage start
Troubleshooting: If you see the error "can't eject CD", log in to the VM, unmount the CD, and then run the script again.
5. Upgrade the XenServer software on all hosts in the cluster. Upgrade the master first.
(a) Live migrate all VMs on this host to other hosts. See the instructions for live migration in the Administrator's Guide.
Troubleshooting: You might see the following error when you migrate a VM:
b6cf79c8-02ee-050b-922f-49583d9f1a14
Troubleshooting: If you see the following error message, you can safely ignore it.
mv: cannot stat `/etc/cron.daily/logrotate`: No such file or directory
(f) Plug in the storage repositories (physical block devices) to the XenServer host:
# for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
Note: If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts,
and eject this host from XenServer pool.
6. Repeat these steps to upgrade every host in the cluster to the same version of XenServer.
7. Run the following command on one host in the XenServer cluster to clean up the host tags:
# for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done
Note: When copying and pasting a command, be sure the command has pasted as a single line before executing.
Some document viewers may introduce unwanted line breaks in copied text.
8. Reconnect the XenServer cluster to CloudStack.
(a) Log in to the CloudStack UI as root.
(b) Navigate to the XenServer cluster, and click Actions > Manage.
(c) Watch the status to see that all the hosts come up.
9. After all hosts are up, run the following on one host in the cluster:
# /opt/xensource/bin/cloud-clean-vlan.sh
CHAPTER 7
Network Setup
Feature                     Basic Network                        Advanced Network
Number of networks          Single network                       Multiple networks
Firewall type               Physical                             Physical and Virtual
Load balancing              Physical                             Physical and Virtual
Isolation type              Layer 3                              Layer 2 and Layer 3
VPN support                 No                                   Yes
Port forwarding             Physical                             Physical and Virtual
1:1 NAT                     Physical                             Physical and Virtual
Source NAT                  No                                   Physical and Virtual
Userdata                    Yes                                  Yes
Network usage monitoring    sFlow / netFlow at physical router   Hypervisor and Virtual Router
DNS and DHCP                Yes                                  Yes
The two types of networking may be in use in the same cloud. However, a given zone must use either Basic Networking
or Advanced Networking.
Different types of network traffic can be segmented on the same physical network. Guest traffic can also be segmented
by account. To isolate traffic, you can use separate VLANs. If you are using separate VLANs on a single physical
network, make sure the VLAN tags are in separate numerical ranges.
(Example VLAN allocation table: VLAN ID ranges such as 800-899, 900-999, and greater than 1000, each listed with its traffic type and scope.)
1. Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to
999, vtp transparent mode is not strictly required.
vtp mode transparent
vlan 200-999
exit
2. Configure GigabitEthernet1/0/1.
interface GigabitEthernet1/0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 201
exit
2. VLAN 201 is used to route untagged private IP addresses for pod 1, and pod 1 is connected to this layer-2
switch.
interface range ethernet all
switchport mode general
switchport general allowed vlan add 300-999 tagged
exit
Cisco 3750
The following steps show how a Cisco 3750 is configured for pod-level layer-2 switching.
1. Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to
999, vtp transparent mode is not strictly required.
vtp mode transparent
vlan 300-999
exit
2. Configure all ports to dot1q and set 201 as the native VLAN.
interface range GigabitEthernet 1/0/1-24
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 201
exit
By default, Cisco passes all VLANs. Cisco switches complain if the native VLAN IDs are different when two ports are connected together. That's why you must specify VLAN 201 as the native VLAN on the layer-2 switch.
be interface-specific. For example, here is the configuration where the public zone is untrust and the
private zone is trust:
root@cloud-srx# show firewall
filter trust {
    interface-specific;
}
filter untrust {
    interface-specific;
}
(b) Add the firewall filters to your public interface. For example, a sample configuration output (for public
interface ge-0/0/3.0, public security zone untrust, and private security zone trust) is:
ge-0/0/3 {
    unit 0 {
        family inet {
            filter {
                input untrust;
                output trust;
            }
            address 172.25.0.252/16;
        }
    }
}
10. Make sure all VLANs are brought to the private interface of the SRX.
11. After the CloudStack Management Server is installed, log in to the CloudStack UI as administrator.
12. In the left navigation bar, click Infrastructure.
13. In Zones, click View More.
14. Choose the zone you want to work with.
15. Click the Network tab.
16. In the Network Service Providers node of the diagram, click Configure. (You might have to scroll down to see
this.)
17. Click SRX.
18. Click the Add New SRX button (+) and provide the following:
IP Address: The IP address of the SRX.
Username: The user name of the account on the SRX that CloudStack should use.
Password: The password of the account.
Public Interface: The name of the public interface on the SRX. For example, ge-0/0/2. A .x at the end
of the interface indicates the VLAN that is in use.
Private Interface: The name of the private interface on the SRX. For example, ge-0/0/1.
Usage Interface: (Optional) Typically, the public interface is used to meter traffic. If you want to use a
different interface, specify its name here.
Number of Retries: The number of times to attempt a command on the SRX before failing. The default
value is 2.
Timeout (seconds): The time to wait for a command on the SRX before considering it failed. Default is
300 seconds.
Public Network: The name of the public network on the SRX. For example, untrust.
Private Network: The name of the private network on the SRX. For example, trust.
Capacity: The number of networks the device can handle.
Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated
is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
19. Click OK.
20. Click Global Settings. Set the parameter external.network.stats.interval to indicate how often you want CloudStack to fetch network usage statistics from the Juniper SRX. If you are not using the SRX to gather network usage statistics, set it to 0.
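If you prefer to set this value through the API instead of the UI, the unauthenticated integration port described later in this guide can be used; a sketch, assuming the integration port has been set to 8096 and an example interval of 300 seconds:
http://localhost:8096/client/api?command=updateConfiguration&name=external.network.stats.interval&value=300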
External Guest Firewall Integration for Cisco VNMC (Optional)
Cisco Virtual Network Management Center (VNMC) provides centralized multi-device and policy management for
Cisco Network Virtual Services. You can integrate Cisco VNMC with CloudStack to leverage the firewall and NAT
service offered by ASA 1000v Cloud Firewall. Use it in a Cisco Nexus 1000v dvSwitch-enabled cluster in CloudStack.
In such a deployment, you will be able to:
Configure Cisco ASA 1000v firewalls. You can configure one per guest network.
Use Cisco ASA 1000v firewalls to create and apply security profiles that contain ACL policy sets for both
ingress and egress traffic.
Use Cisco ASA 1000v firewalls to create and apply Source NAT, Port Forwarding, and Static NAT policy sets.
CloudStack supports Cisco VNMC on Cisco Nexus 1000v dvSwitch-enabled VMware hypervisors.
Using Cisco ASA 1000v Firewall, Cisco Nexus 1000v dvSwitch, and Cisco VNMC in a Deployment
Guidelines
Cisco ASA 1000v firewall is supported only in Isolated Guest Networks.
Cisco ASA 1000v firewall is not supported on VPC.
Cisco ASA 1000v firewall is not supported for load balancing.
When a guest network is created with Cisco VNMC firewall provider, an additional public IP is acquired along
with the Source NAT IP. The Source NAT IP is used for the rules, whereas the additional IP is used for the
ASA outside interface. Ensure that this additional public IP is not released. You can identify this IP as soon as
the network is in the Implemented state and before acquiring any further public IPs. The additional IP is the one that
is not marked as Source NAT. You can find the IP used for the ASA outside interface by looking at the Cisco
VNMC used in your guest network.
Use the public IP address range from a single subnet. You cannot add IP addresses from different subnets.
Only one ASA instance per VLAN is allowed because multiple VLANs cannot be trunked to ASA ports.
Therefore, you can use only one ASA instance in a guest network.
Only one Cisco VNMC per zone is allowed.
Supported only in Inline mode deployment with load balancer.
The ASA firewall rule is applicable to all the public IPs in the guest network. Unlike the firewall rules created
on virtual router, a rule created on the ASA device is not tied to a specific public IP.
Use a version of Cisco Nexus 1000v dvSwitch that supports the vservice command. For example: nexus-1000v.4.2.1.SV1.5.2b.bin
Cisco VNMC requires the vservice command to be available on the Nexus switch to create a guest network in
CloudStack.
Prerequisites
1. Configure Cisco Nexus 1000v dvSwitch in a vCenter environment.
Create Port profiles for both internal and external network interfaces on Cisco Nexus 1000v dvSwitch. Note
down the inside port profile, which needs to be provided while adding the ASA appliance to CloudStack.
For information on configuration, see Configuring a vSphere Cluster with Nexus 1000v Virtual Switch.
2. Deploy and configure Cisco VNMC.
For more information, see Installing Cisco Virtual Network Management Center and Configuring Cisco Virtual
Network Management Center.
3. Register Cisco Nexus 1000v dvSwitch with Cisco VNMC.
For more information, see Registering a Cisco Nexus 1000V with Cisco VNMC.
4. Create Inside and Outside port profiles in Cisco Nexus 1000v dvSwitch.
For more information, see Configuring a vSphere Cluster with Nexus 1000v Virtual Switch.
5. Deploy and configure the Cisco ASA 1000v appliance.
For more information, see Setting Up the ASA 1000V Using VNMC.
Typically, you create a pool of ASA 1000v appliances and register them with CloudStack.
Specify the following while setting up a Cisco ASA 1000v instance:
VNMC host IP.
Ensure that you add ASA appliance in VNMC mode.
Port profiles for the Management and HA network interfaces. These need to be pre-created on the Cisco Nexus
1000v dvSwitch.
Internal and external port profiles.
The Management IP for Cisco ASA 1000v appliance. Specify the gateway such that the VNMC IP is
reachable.
Administrator credentials
VNMC credentials
6. Register Cisco ASA 1000v with VNMC.
After Cisco ASA 1000v instance is powered on, register VNMC from the ASA console.
Using Cisco ASA 1000v Services
1. Ensure that all the prerequisites are met.
See Prerequisites.
2. Add a VNMC instance.
See Adding a VNMC Instance.
Cluster: The VMware cluster to which you are adding the ASA 1000v instance.
Ensure that the cluster is Cisco Nexus 1000v dvSwitch enabled.
10. Click OK.
Creating a Network Offering Using Cisco ASA 1000v
To have Cisco ASA 1000v support for a guest network, create a network offering as follows:
1. Log in to the CloudStack UI as a user or admin.
2. From the Select Offering drop-down, choose Network Offering.
3. Click Add Network Offering.
4. In the dialog, make the following choices:
Name: Any desired name for the network offering.
Description: A short description of the offering that can be displayed to users.
Network Rate: Allowed data transfer rate in MB per second.
Traffic Type: The type of network traffic that will be carried on the network.
Guest Type: Choose whether the guest network is isolated or shared.
Persistent: Indicate whether the guest network is persistent or not. A network that you can provision
without having to deploy a VM on it is termed a persistent network.
VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private
Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology
that resembles a traditional physical network. For more information on VPCs, see About Virtual Private
Clouds.
Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this
offering is used.
Supported Services: Use Cisco VNMC as the service provider for Firewall, Source NAT, Port Forwarding,
and Static NAT to create an Isolated guest network offering.
System Offering: Choose the system service offering that you want virtual routers to use in this network.
Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated
only when the first virtual machine starts in the network.
5. Click OK
The network offering is created.
Reusing ASA 1000v Appliance in new Guest Networks
You can reuse an ASA 1000v appliance in a new guest network after the necessary cleanup. Typically, ASA 1000v is
cleaned up when the logical edge firewall is cleaned up in VNMC. If this cleanup does not happen, you need to reset
the appliance to its factory settings for use in new guest networks. As part of this, enable SSH on the appliance and
store the SSH credentials by registering on VNMC.
1. Open a command line on the ASA appliance:
(a) Run the following:
ASA1000V(config)# reload
You are prompted with the following message:
"System config has been modified. Save? [Y]es/[N]o:"
(b) Enter N.
You will get the following confirmation message:
"Proceed with reload? [confirm]"
Private interface: Interface of device that is configured to be part of the private network.
Number of Retries: Number of times to attempt a command on the device before considering the operation
failed. Default is 2.
Capacity: The number of networks the device can handle.
Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated
is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
13. Click OK.
The installation and provisioning of the external load balancer is finished. You can proceed to add VMs and NAT or
load balancing rules.
Destination Port            Protocol        Persistence Required?
8080 (or 20400 with AJP)    HTTP (or AJP)   Yes
8250                        TCP             Yes
8096                        HTTP            No
In addition to the above settings, the administrator is responsible for changing the host global configuration value from the Management Server IP address to the load balancer virtual IP address. If the host value is not set to the VIP for port 8250 and one of your management servers crashes, the UI is still available but the system VMs will not be able to contact the
management server.
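As an illustration only (CloudStack does not mandate a particular load balancer), a minimal HAProxy stanza for the persistent port 8250 agent traffic might look like the following; the backend names and addresses are hypothetical:
listen cloudstack-agent
    bind *:8250
    mode tcp
    balance source
    server mgmt1 192.168.10.11:8250 check
    server mgmt2 192.168.10.12:8250 check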
Based on your deployment's needs, choose the appropriate value of guest.vlan.bits. Set it as described in the Edit the Global Configuration Settings (Optional) section and restart the Management Server.
CHAPTER 8
Storage Setup
Storage Type     XenServer                        vSphere                  KVM
NFS              Supported                        Supported                Supported
iSCSI            Supported                        Supported via VMFS       Supported via Clustered Filesystems
Fiber Channel    Supported via Pre-existing SR    Supported                Supported via Clustered Filesystems
Local Disk       Supported                        Supported                Supported
The use of the Cluster Logical Volume Manager (CLVM) for KVM is not officially supported with CloudStack.
Primary storage is likely to have to support mostly random read/write I/O once a template has been deployed. Secondary storage is only going to experience sustained sequential reads or writes.
In clouds which will experience a large number of users taking snapshots or deploying VMs at the same time, secondary storage performance will be important to maintain a good user experience.
It is important to start the design of your storage with a rough profile of the workloads which it will be required to
support. Care should be taken to consider the IOPS demands of your guest VMs as much as the volume of data to be
stored and the bandwidth (MB/s) available at the storage interfaces.
4. In order for the primary storage management interface to communicate with the primary storage, the interfaces
on the primary storage arrays must be in the same CIDR as the primary storage management interface.
5. Therefore the primary storage must be in a different subnet to the management network
Figure 3: Hypervisor Communications with Separated Storage Traffic
Other Primary Storage Types
If you are using PreSetup or SharedMountPoints to connect to IP-based storage then the same principles apply; if the primary storage and primary storage interface are in a different subnet to the management subnet then the hypervisor will use the primary storage interface to communicate with the primary storage.
4. If you have more than 16TB of storage on one host, create multiple EXT3 file systems and multiple NFS exports.
Individual EXT3 file systems cannot exceed 16TB.
5. After the /export directory is created, run the following command to configure it as an NFS export.
# echo "/export <CIDR>(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
Removing the async flag: The async flag improves performance by allowing the NFS server to respond before
writes are committed to the disk. Remove the async flag in your mission-critical production deployment.
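For example, with a hypothetical management network CIDR of 172.16.1.0/24, the export line and a reload of the export table would look like:
# echo "/export 172.16.1.0/24(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
# exportfs -a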
6. Run the following command to enable NFS service.
# chkconfig nfs on
8. Edit the /etc/sysconfig/iptables file and add the following lines at the beginning of the INPUT chain.
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
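After editing the file, the firewall rules typically need to be re-applied and saved; a sketch using the RHEL/CentOS service commands used elsewhere in this guide:
# service iptables restart
# service iptables save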
1. Install iscsiadm and enable the iSCSI service.
# yum install iscsi-initiator-utils
# service iscsi start
# chkconfig --add iscsi
# chkconfig iscsi on
2. Discover the iSCSI target.
# iscsiadm -m discovery -t st -p <iSCSI Server IP address>:3260
3. Log in.
# iscsiadm -m node -T <Complete Target Name> -l -p <Group IP>:3260
Removing the async flag: The async flag improves performance by allowing the NFS server to respond before
writes are committed to the disk. Remove the async flag in your mission-critical production deployment.
CHAPTER 9
Optional Installation
3. Once installed, start the Usage Server with the following command.
# service cloudstack-usage start
# chkconfig cloudstack-usage on
The server_id must be unique with respect to other servers. The recommended way to achieve this is to give the
master an ID of 1 and each slave a sequential number greater than 1, so that the servers are numbered 1, 2, 3,
etc.
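For reference, a sketch of the corresponding [mysqld] settings on the master; the binary log options are an assumption based on standard MySQL replication setups and should be adapted to your deployment:
server_id=1
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
log-bin=mysql-bin
binlog-format = 'ROW'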
3. Restart the MySQL service. On RHEL/CentOS systems, use:
# service mysqld restart
4. Create a replication account on the master and give it privileges. We will use the cloud-repl user with the
password password. This assumes that master and slave run on the 172.16.1.0/24 network.
# mysql -u root
mysql> create user 'cloud-repl'@'172.16.1.%' identified by 'password';
mysql> grant replication slave on *.* TO 'cloud-repl'@'172.16.1.%';
mysql> flush privileges;
mysql> flush tables with read lock;
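The file and position referred to in the following steps come from the master status output; a sketch, using the same example values that appear in the change master statement later in this section:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      412 |              |                  |
+------------------+----------+--------------+------------------+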
8. Note the file and the position that are returned by your instance.
9. Exit from this session.
10. Complete the master setup. Returning to your first session on the master, release the locks and exit MySQL.
mysql> unlock tables;
11. Install and configure the slave. On the slave server, run the following commands.
# yum install mysql-server
# chkconfig mysqld on
12. Edit my.cnf and add the following lines in the [mysqld] section below datadir.
server_id=2
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
14. Instruct the slave to connect to and replicate from the master. Replace the IP address, password, log file, and
position with the values you have used in the previous steps.
mysql> change master to
    -> master_host='172.16.1.217',
    -> master_user='cloud-repl',
    -> master_password='password',
    -> master_log_file='mysql-bin.000001',
    -> master_log_pos=412;
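Replication then needs to be started on the slave, and its state can be checked with standard MySQL statements; a minimal sketch:
mysql> start slave;
mysql> show slave status\G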
16. Optionally, open port 3306 on the slave as was done on the master earlier.
This is not required for replication to work. But if you choose not to do this, you will need to do it when failover
to the replica occurs.
Failover
This will provide for a replicated database that can be used to implement manual failover for the Management Servers.
CloudStack failover from one MySQL instance to another is performed by the administrator. In the event of a database
failure you should:
1. Stop the Management Servers (via service cloudstack-management stop).
2. Change the replica's configuration to be a master and restart it.
3. Ensure that the replica's port 3306 is open to the Management Servers.
4. Make a change so that the Management Server uses the new database. The simplest process here is to put the IP
address of the new database server into each Management Server's /etc/cloudstack/management/db.properties (see the sketch after this list).
5. Restart the Management Servers:
# service cloudstack-management start
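A sketch of the relevant line in db.properties, assuming the db.cloud.host property name used by CloudStack and a hypothetical replica address:
db.cloud.host=172.16.1.218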
To enable the EC2 and S3 compatible services you need to set the configuration variables enable.ec2.api and enable.s3.api to true. You do not have to enable both at the same time; enable the ones you need. This can be done via
the CloudStack GUI by going to Global Settings, or via the API.
The snapshot below shows you how to use the GUI to enable these services.
Using the CloudStack API, the easiest way is to use the so-called integration port, on which you can make unauthenticated
calls. In Global Settings set the port to 8096 and subsequently call the updateConfiguration method. The following
URLs show you how:
http://localhost:8096/client/api?command=updateConfiguration&name=enable.ec2.api&value=true
http://localhost:8096/client/api?command=updateConfiguration&name=enable.s3.api&value=true
You will also need to define compute service offerings with names compatible with the Amazon EC2 instance type
API names (e.g. m1.small, m1.large). This can be done via the CloudStack GUI. Go to Service Offerings, select
Compute Offering, and either create a new compute offering or modify an existing one, ensuring that the name matches
an EC2 instance type API name. The snapshot below shows you how:
Note: (Optional) The AWS API listens for requests on port 7080. If you prefer the AWS API to listen on another port,
you can change it as follows:
3. Change the port to whatever port you want to use, then save the files.
4. Restart the Management Server.
If you re-install CloudStack, you will have to re-enable the services and, if need be, update the port.
export EC2_CERT=/path/to/cert.pem
export EC2_PRIVATE_KEY=/path/to/private_key.pem
export EC2_URL=http://localhost:7080/awsapi
export EC2_HOME=/path/to/EC2_tools_directory
Example:
ec2-run-instances 2 -z us-test1 -n 1-3 --connection-timeout 120 --request-timeout 120
differences are noted. The underlying SOAP call for each command is also given, for those who have built tools using
those calls.
Table 1. Elastic IP API mapping
EC2 command                 SOAP call
ec2-allocate-address        AllocateAddress
ec2-associate-address       AssociateAddress
ec2-describe-addresses      DescribeAddresses
ec2-disassociate-address    DisassociateAddress
ec2-release-address         ReleaseAddress
SOAP call for the availability zone mapping:
DescribeAvailabilityZones
SOAP calls for the image mapping:
CreateImage
DeregisterImage
DescribeImages
RegisterImage
SOAP calls for the image attribute mapping:
DescribeImageAttribute
ModifyImageAttribute
ResetImageAttribute
SOAP calls for the instance mapping:
DescribeInstances
RunInstances
RebootInstances
StartInstances
StopInstances
TerminateInstances
SOAP call for the instance attribute mapping:
DescribeInstanceAttribute
SOAP calls for the key pair mapping:
CreateKeyPair
DeleteKeyPair
DescribeKeyPairs
ImportKeyPair
SOAP call for the password mapping:
GetPasswordData
SOAP calls for the security group mapping:
AuthorizeSecurityGroupIngress
CreateSecurityGroup
DeleteSecurityGroup
DescribeSecurityGroups
RevokeSecurityGroupIngress
SOAP calls for the snapshot mapping:
CreateSnapshot
DeleteSnapshot
DescribeSnapshots
EC2 command            SOAP call
ec2-attach-volume      AttachVolume
ec2-create-volume      CreateVolume
ec2-delete-volume      DeleteVolume
ec2-describe-volume    DescribeVolume
ec2-detach-volume      DetachVolume
Examples
There are many tools available to interface with an AWS-compatible API. In this section we provide a few examples
that users of CloudStack can build upon.
Boto Examples
Boto is one of them. It is a Python package available at https://github.com/boto/boto. In this section we provide two
examples of Python scripts that use Boto and have been tested with the CloudStack AWS API Interface.
First is an EC2 example. Replace the Access and Secret Keys with your own and update the endpoint.
Example 1. An EC2 Boto example
#!/usr/bin/env python

import sys
import os
import boto
import boto.ec2

region = boto.ec2.regioninfo.RegionInfo(name="ROOT", endpoint="localhost")
apikey='GwNnpUPrO6KgIdZu01z_ZhhZnKjtSdRwuYd4DvpzvFpyxGMvrzno2q05MB0ViBoFYtdqKd'
secretkey='t4eXLEYWw7chBhDlaKf38adCMSHx_wlds6JfSx3z9fSpSOm0AbP9Moj0oGIzy2LSC8iw'

def main():
    '''Establish connection to EC2 cloud'''
    conn = boto.connect_ec2(aws_access_key_id=apikey,
                            aws_secret_access_key=secretkey,
                            is_secure=False,
                            region=region,
                            port=7080,
                            path="/awsapi",
                            api_version="2010-11-15")

    '''Get list of images that I own'''
    images = conn.get_all_images()
    print images
    myimage = images[0]

    '''Pick an instance type'''
    vm_type='m1.small'
    reservation = myimage.run(instance_type=vm_type, security_groups=['default'])

if __name__ == '__main__':
    main()
Second is an S3 example. The S3 interface in CloudStack is obsolete. If you need an S3 interface you should look
at systems like RiakCS, Ceph or GlusterFS. This example is here for completeness and can be adapted to other S3
endpoints.
Example 2. An S3 Boto Example
#!/usr/bin/env python

import sys
import os
from boto.s3.key import Key
from boto.s3.connection import S3Connection
from boto.s3.connection import OrdinaryCallingFormat

apikey='ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_yLA5gOw'
secretkey='IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCH-IRxXc3w'

cf = OrdinaryCallingFormat()

def main():
    '''Establish connection to S3 service'''
    conn = S3Connection(aws_access_key_id=apikey, aws_secret_access_key=secretkey, \
                        is_secure=False, \
                        host='localhost', \
                        port=7080, \
                        calling_format=cf, \
                        path="/awsapi/rest/AmazonS3")

    try:
        bucket = conn.create_bucket('cloudstack')
        k = Key(bucket)
        k.key = 'test'
        try:
            k.set_contents_from_filename('/Users/runseb/Desktop/s3cs.py')
        except:
            print 'could not write file'
            pass
    except:
        bucket = conn.get_bucket('cloudstack')
        k = Key(bucket)
        k.key = 'test'
        try:
            k.get_contents_to_filename('/Users/runseb/Desktop/foobar')
        except:
            print 'Could not get file'
            pass

    try:
        bucket1 = conn.create_bucket('teststring')
        k = Key(bucket1)
        k.key = 'foobar'
        k.set_contents_from_string('This is my silly test')
    except:
        bucket1 = conn.get_bucket('teststring')
        k = Key(bucket1)
        k.key = 'foobar'
        k.get_contents_as_string()

if __name__ == '__main__':
    main()
UserAuthenticators property in the same files. If non-OSS components, such as VMware environments, are to be deployed, modify the UserPasswordEncoders and UserAuthenticators lists in
the nonossComponentContext.xml file; for OSS environments, such as XenServer or KVM, modify
the componentContext.xml file. It is recommended to make uniform changes across both files.
When a new authenticator or encoder is added, you can add them to this list.
While doing so, ensure that the new authenticator or encoder is specified as a bean in both these files. The administrator can change the ordering of both these properties as preferred to change the order of schemes. Modify
the following list properties, available in client/tomcatconf/nonossComponentContext.xml.in or
client/tomcatconf/componentContext.xml.in as applicable, to the desired order:
<property name="UserAuthenticators">
<list>
<ref bean="SHA256SaltedUserAuthenticator"/>
<ref bean="MD5UserAuthenticator"/>
<ref bean="LDAPUserAuthenticator"/>
<ref bean="PlainTextUserAuthenticator"/>
</list>
</property>
<property name="UserPasswordEncoders">
<list>
<ref bean="SHA256SaltedUserAuthenticator"/>
<ref bean="MD5UserAuthenticator"/>
<ref bean="LDAPUserAuthenticator"/>
<ref bean="PlainTextUserAuthenticator"/>
</list>
</property>
In the above default ordering, SHA256Salt is used first for UserPasswordEncoders. If the module is found and
encoding returns a valid value, the encoded password is stored in the user table's password column. If it fails for
any reason, the MD5UserAuthenticator will be tried next, and the order continues. For UserAuthenticators,
SHA256Salt authentication is tried first. If it succeeds, the user is logged into the Management Server. If it fails, MD5
is tried next, and attempts continue until one of them succeeds and the user logs in. If none of them works, the user
is returned an invalid credentials message.
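For example, to prefer LDAP authentication over the local password schemes, an administrator could move the LDAP bean to the top of the UserAuthenticators list; a sketch using only the beans already shown above (the encoder list would normally be left unchanged):
<property name="UserAuthenticators">
    <list>
        <ref bean="LDAPUserAuthenticator"/>
        <ref bean="SHA256SaltedUserAuthenticator"/>
        <ref bean="MD5UserAuthenticator"/>
        <ref bean="PlainTextUserAuthenticator"/>
    </list>
</property>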