
Proxmox Ceph
7/14/2017 9 Comments


The following article describes the steps to create a PROXMOX Cluster with "High Availability" in a Virtualized environment. The objective is to thoroughly understand PROXMOX Cluster, Ceph and "High Availability", among other features and benefits, before bringing it to a production environment.
NOTE:
Part of this documentation takes as a reference the official PROXMOX videos on its YouTube account and the documentation at https://pve.proxmox.com/wiki/Main_Page
For those who have asked about the tools we use to edit these articles:
Ubuntu Desktop, Inkscape, Gimp, Shutter, LibreOffice Writer, free fonts from Google Fonts and Weebly are used.

PROXMOX + Cluster + Ceph + HA Laboratory


Steps to follow to activate Cluster PROXMOX + Ceph and High Availability

1. What is Ceph?
2. Enable "Nested Virtualization" in PROXMOX
3. Creation of "Virtual Machines" in PROXMOX
4. Prepare "Virtual Machines" for Cluster
5. Activate Cluster PROXMOX
6. Prepare links in "Virtual Machines" for Ceph
7. Add Hard Disks in Nodes for Ceph
8. Install Ceph libraries and Service Activation
9. Present "Space Blocks" to the Cluster
10. "Migration" and "High Availability" tests



1. What is Ceph?
Graph 1.1 - Ceph conceptual description
This graph shows two Servers/Nodes with PROXMOX, forming a Cluster.
Each "Node" through Ceph makes its storage units available to the Cluster, thus allowing the creation of a
"Storage Ceph" with common access for the Nodes.
In other terms: each "Node" becomes a "Storage Unit" and, similar (not equal) to a RAID 5, Ceph replicates the information between the "Nodes", giving the Cluster a Storage with "Fault Tolerance" and High Performance.

The "Nodes" can store in the "Storage Ceph" the "Virtual Disks" of the "Virtual Machines (VMs) and Containers (LXC)", enabling in the Cluster the possibility of "Migrating/Moving" the "Virtual Machines" between Nodes and defining "High Availability" rules.
This is a brief summary of what we can expect from Ceph for this lab. It is suggested to go into more technical detail through the official page: www.ceph.com and Wikipedia (for more information click on each link).

Additional: It is important to know how the PROXMOX Cluster works; for this reason we suggest seeing the link:

PROXMOX NAS/SAN. In the following article we present the basic concepts of storage based on NAS/SAN and how PROXMOX works in virtualized environments.

2. Enable "Nested Virtualization" in PROXMOX



The lab requires three "Virtual Machines" and the Host Hypervisor is at your discretion; however, in this article we suggest configuring a PROXMOX 4.x/5.x server with "Nested Virtualization".
Nested Virtualization: "Nested Virtualization" refers to running one or more hypervisors inside another hypervisor, thus optimizing our resources. For more information click on this Wikipedia link.

If you want to review the steps to install PROXMOX , click on the following link:

Installation, configuration and creation of Cluster in PROXMOX. Basic installation of


PROXMOX on a physical server and activation of a Cluster with 2 nodes.
NOTE: It is very important to validate the Processor of the computer where we are going to install

PROXMOX, verifying that it has "Hardware Virtualization Extensions" .

Well, now yes... After so much mystery, we start with the configuration...
2.1 Connection via SSH
We connect via SSH through a Linux terminal or, on Windows, with PuTTY. You can also use the web interface and access the PROXMOX Shell:
Web Interface -> Left Side, Click Server icon -> Right Panel, Click Shell

Depending on our processor, we execute the following commands:

2.2.1 For Intel processors:

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

modprobe -r kvm_intel

modprobe kvm_intel

After executing these commands, the following statement should return "Y":

cat /sys/module/kvm_intel/parameters/nested

2.2.2 For AMD processors:


echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

modprobe -r kvm_amd

modprobe kvm_amd

After executing these commands, the following statement should return "Y":

cat /sys/module/kvm_amd/parameters/nested

It is not necessary to restart the server; however, to make sure everything works as expected, it is suggested to restart the server and validate that everything starts properly.

3. Creation of "Virtual Machines" in PROXMOX


After enabling "Nested Virtualization" on the PROXMOX host, we start with the creation of "Virtual Machines" with
the following characteristics:

3.1 Specifications "Virtual Machines"


Memory (RAM): A minimum of 1GB of RAM is recommended.
Processors: Important! Select the type of "Processor" for the "Virtual Machine" as "HOST"; this parameterization exposes the properties of the "Physical Processor" to the "Virtual Processor", as is the case of "Intel VMX / AMD SVM".

Graph 3.2 - Virtual Machines with PROXMOX

We start with the installation process of PROXMOX VE on each Server/Node.

NOTE: To simplify this guide we are going to follow the COSYSCO.NETWORK company scheme. Later you can recreate the scenario with your organization's data.

3.3 Specifications of IP's and Hostname

First Virtual Server (Primary):


Hostname: nodeX000.cosysco.network

IP Network: 10.42.0.200

Second Virtual Server:


Hostname: nodeX001.cosysco.network

IP Network: 10.42.0.201
Third Virtual Server:
Hostname: nodeX002.cosysco.network

IP Network: 10.42.0.202

Important!
After installing the three servers, it is convenient to update.
Run in a terminal or Proxmox Shell:
apt-get update && apt-get -y dist-upgrade && apt-get remove --purge && apt-get -y
autoremove --purge && apt-get clean && apt-get autoclean

4. Prepare "Virtual Machines" for Cluster


In a PROXMOX Cluster, one of the Nodes must be defined as "Orchestrator" in order to centralize the work; however, each Node has its own Web administrator.
One of the advantages of creating a "Cluster" is enabling "High Availability", which, in short, means defining automatic migration rules for the "Virtual Machines" between the Nodes in case of contingencies. For this, communication between the members of the Cluster is very important, and this task is carried out by "CoroSync" (click for more technical information). The official PROXMOX documentation suggests separating the Cluster communication (CoroSync) from the Network and Storage links.

4.1 Add VNIC's to Nodes


With the PROXMOX web administrator of the HOST, we are going to add a Network card to each "Virtual Machine", with the characteristic that it is connected to a VLAN. In this example we follow the COSYSCO.NETWORK scheme and select VLAN 7.
In the "Virtual Machines" the new "Virtual Card" should appear automatically.


Technical Information: Linux Bridge (vmbrX) is a "Virtual Bridge" that switches with the physical card of our physical server. It behaves like a Virtual Switch, allowing the "Virtual Cards" of the "Virtual Machines" to connect to the physical interface. Additionally, it allows defining "Virtual Networks (VLANs)", isolating the communication of the "Virtual Cards" within the group that is defined. This example shows the new cards isolated and switched in group 7, or VLAN 7.

Graph 4.2 - Network Card Configuration for Cluster (CoroSync)

After adding the "Virtual Cards" to the Nodes, it is necessary to enter the Web administrator of each unit and configure the Network card with a valid IP. Take into account that the card is grouped in VLAN 7.

4.3 Check VNIC on each Node

In the three Nodes we are going to activate the new card following the COSYSCO.NETWORK scheme:
NodeX000 -> 10.0.0.200
NodeX001 -> 10.0.0.201
NodeX002 -> 10.0.0.202

4.4 VNIC settings for CoroSync



After making these changes, it is convenient to add a "Network Cards" map to each Node. With this we ensure that the services of the PROXMOX Cluster are switched through the appropriate channels.

4.5 Hosts configuration


We edit in each Node the file that allows us to define and resolve addresses:

vi /etc/hosts

We add the following:

# Nodes Cluster PROXMOX


10.42.0.200 nodeX000.cosysco.network nodeX000 pvelocalhost
10.42.0.201 nodeX001.cosysco.network nodeX001 pvelocalhost
10.42.0.202 nodeX002.cosysco.network nodeX002 pvelocalhost

# NIC's for CoroSync PROXMOX


10.0.0.200 nodeX000-corosync.cosysco.network nodeX000-corosync
10.0.0.201 nodeX001-corosync.cosysco.network nodeX001-corosync
10.0.0.202 nodeX002-corosync.cosysco.network nodeX002-corosync

Important!
After this configuration it is necessary to restart the three Nodes.

5. Activate Cluster PROXMOX

The activation of the PROXMOX Cluster is a fairly simple process that takes 2 steps:
Creation of the Cluster in one of the Nodes which will become the "Primary Node".
Add the rest of the Nodes to the Cluster pointing to the "Primary Node".

In previous "911-ubuntu" publications on "Installing PROXMOX and how to create a Cluster", we presented a simple scenario
where the Nodes have only one "Network Card", for this reason it did not require more diligence. There is a separate link in this
documentation for Cluster communication (CoroSync)
.

Exchange of "Public Keys" between Nodes and Cluster Activation


PROXMOX uses the "Alias" of the Nodes to intertwine the services and carry out the required procedures via SSH; for this reason we will ensure that each Node has the "Public Keys" of each of the members of the Cluster.

We will repeat the following steps for each Node:

5.1 We enter the NodeX000


ssh root@nodeX000

Get PublicKey Nodes:


ssh root@nodeX000 exit
ssh root@nodeX001 exit
ssh root@nodeX002 exit

Get PublicKey CoroSync:


ssh root@nodeX000-corosync exit
ssh root@nodeX001-corosync exit
ssh root@nodeX002-corosync exit

We leave NodeX000

5.2 We enter the NodeX001


ssh root@nodeX001

Get PublicKey Nodes:


ssh root@nodeX000 exit
ssh root@nodeX001 exit
ssh root@nodeX002 exit

Get PublicKey CoroSync:


ssh root@nodeX000-corosync exit
ssh root@nodeX001-corosync exit
ssh root@nodeX002-corosync exit

We leave the NodeX001

5.3 We enter the NodeX002


ssh root@nodeX002

Get PublicKey Nodes:


ssh root@nodeX000 exit
ssh root@nodeX001 exit
ssh root@nodeX002 exit

Get PublicKey CoroSync:


ssh root@nodeX000-corosync exit
ssh root@nodeX001-corosync exit
ssh root@nodeX002-corosync exit

We leave the NodeX002

5.4 Create Cluster and Add Nodes

We enter the NodeX000:

ssh root@nodeX000

We create the cosyscoLabs Cluster:

pvecm create cosyscoLabs -bindnet0_addr 10.0.0.200 -ring0_addr nodeX000-corosync

We enter the NodeX001:

ssh root@nodeX001

We add NodeX001 to the Cluster, pointing to the Primary Node:

pvecm add nodeX000-corosync -ring0_addr nodeX001-corosync

We enter the NodeX002:

ssh root@nodeX002

We add NodeX002 to the Cluster, pointing to the Primary Node:

pvecm add nodeX000-corosync -ring0_addr nodeX002-corosync


5.5 View of the PROXMOX Cluster


After carrying out the described steps, the Node that we activated as Primary should present a new structure in the "Server View" panel, showing the three Nodes.
In short, each Node maintains asynchronous communication with the Cluster, reporting its status and that of the "Virtual Machines" it hosts. Communication management is done by CoroSync through the cards that are switched on VLAN 7.

6. Prepare links in "Virtual Machines" for Ceph


To enable "High Availability" in PROXMOX it is necessary to have a minimum of three Nodes and a Storage that
centralizes the "Virtual Disks" of the "Virtual Machines" and "Containers".
In "Business Environments" we will find SAN/NAS provided "Space Blocks" (Lun's FC & iSCSI, NFS) with "Fault
Tolerance", allowing the connection of multiple PROXMOX Nodes and enabling "High Availability" between them. More
information: PROXMOX NAS/SAN. In the following article we are going to present the basic concepts of storage
based on NAS/SAN and how PROXMOX works in virtualized environments.

Note: The concepts that we are going to use to enable "High Availability" in the Cluster are similar for environments
with SAN/NAS.

6.1 Add Network cards to the "Virtual Machines".

With the PROXMOX web administrator of the HOST, we are going to add to each "Virtual Machine" two Network cards, with the characteristic that they are connected to a VLAN.
In this example we follow the COSYSCO.NETWORK scheme and select VLAN 13.
In the "Virtual Machines" the new "Virtual Cards" should appear automatically.


Graph 6.2 - Ceph communication bridge

After adding the "Virtual Cards" to the Nodes, it is necessary to enter the Cluster Web Administrator and add a "Linux
Bond" to each unit that intertwines the two new cards. Take note that both cards are grouped in VLAN 13.

For this part of the article, the Nodes are in a Cluster and it is much easier to access each of them. So, following the
COSYSCO.NETWORK schema, we are going to configure it as follows:

6.3 Add new VNIC's


In each Node, we enter the Network section and check that the two new "Virtual Cards" exist and are inactive. Take note of their names.


6.4 Add "Linux Bond"


The next step is to create a "Linux Bond" Virtual Interface for multiple NICs, which in short links two or more network cards.

Technical Information: Linux Bond is presented as a Virtual Interface that links two or more "Network Cards", providing a link tolerant to "Faults" and high performance in "Data Transfer". To use "Ceph Storage" it is necessary to have links that switch efficiently.

We are going to configure "Linux Bond" in each Cluster Node as follows:

6.5 Linux Bond Configuration


IP Address: the following configuration is suggested (COSYSCO.NETWORK scheme):
NodeX000 -> 10.10.0.200
NodeX001 -> 10.10.0.201
NodeX002 -> 10.10.0.202
Slaves: Enter the name of the two cards; in this example they are identified as ens20 and ens21.
Mode: Broadcast, indicates that data is transmitted on all interlaced "slave" cards.
A sketch of the resulting configuration is shown right after this list.
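For orientation only, a minimal sketch of the stanza that ends up in /etc/network/interfaces on NodeX000 once the Bond is saved; the names ens20/ens21 and the /24 mask are assumptions taken from this example:

auto bond0
iface bond0 inet static
        address 10.10.0.200
        netmask 255.255.255.0
        slaves ens20 ens21
        bond_miimon 100
        bond_mode broadcast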


7. Add Hard Drives in Nodes for Ceph


Graph 7.1 - Add Storage Units for Ceph
The storage unit (Hard Drive) where PROXMOX was installed is used specifically for the Operating System, the PROXMOX Hypervisor layer, Swap & KSM, storing installer ISOs, LXC Templates, local VMs, among other things. This unit cannot be used as part of the "Storage Ceph"; for this reason it is necessary to add "Extra Storage Units" to the Nodes for the provisioning of space of the "Storage Ceph" of the Cluster.

7.2 Add HDD


In each PROXMOX Node we are going to add two "Extra Storage Units". The point of using more than one unit is to
provide "Fault Tolerance" with Ceph. The "Storage Units" must be equal to or greater than the smallest disk assigned to
Ceph.
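From the HOST shell, a hedged equivalent of adding the two extra units to one of the nested VMs; the VMID 100, the storage local-lvm and the 32GB size are assumptions to adapt:

qm set 100 -virtio1 local-lvm:32    # first extra unit for Ceph
qm set 100 -virtio2 local-lvm:32    # second extra unit for Ceph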

7.3 Review of new HDD


After adding the "Extra Storage Units" to each Node, it is convenient to review their correct integration and size.


Important!
After configuring "Linux Bond" and adding the "Extra Storage Units" it is recommended to
restart each Node to start with the installation of the Ceph libraries and activation of the
service.

8. Install Ceph libraries and Service Activation


After restarting the Nodes, we install the Ceph libraries.

8.1.1 PROXMOX Version 4.x


We enter each Node to install the Ceph libraries with the following command lines:

ssh root@nodeX000
pveceph install --version jewel

ssh root@nodeX001
pveceph install --version jewel

ssh root@nodeX002
pveceph install --version jewel

We activate the Ceph-Monitor service on the link created with "Linux Bond".
NOTE: This is done only on the Primary Node, which in this example is nodeX000:

ssh root@nodeX000
pveceph init --network 10.10.0.0/24
pveceph createmon


8.1.2 PROXMOX Version 5.x


We enter each Node to install the Ceph libraries with the following command lines:

ssh root@nodeX000
pveceph install --version luminous

ssh root@nodeX001
pveceph install --version luminous

ssh root@nodeX002
pveceph install --version luminous

We activate the Ceph-Monitor service on the link created with "Linux Bond".
NOTE: This is done only on the Primary Node, which in this example is nodeX000:

ssh root@nodeX000
pveceph init --network 10.10.0.0/24
pveceph createmon

8.2 Status of Storage Ceph


We enter the main Node of the Cluster; it shows us the status of the "Storage Ceph", which at this moment is in "Alert" because it does not find any Ceph-OSDs. The OSDs are the services associated with the "Extra Storage Units" and are also responsible for the communication between the "Storage Ceph", the Disks and data replication.
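The same status can also be consulted from the Primary Node's shell with standard commands:

pveceph status
ceph -s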

8.3 Ceph Monitor


The next step is to enable "Ceph Monitoring" to the Remaining Nodes. This action automatically activates the Ceph-
Monitor, in the Nodes remotely.

NOTE: All the Cluster Nodes that share the "Extra Storage Units" resource must be integrated.
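If you prefer the shell over the Web administrator, a hedged sketch of the same action executed on each remaining Node of this example:

ssh root@nodeX001
pveceph createmon

ssh root@nodeX002
pveceph createmon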


8.4 Create Ceph-OSD


In each Node we are going to create the Ceph-OSDs, which are the services associated with the "Extra Storage Units". Each time we click "Create OSD" it shows us the available units.

8.5 OSD View


As we finish with each Node, it should show us a scenario similar to this example. NOTE: You can press "Reload" to refresh the view.


9. Present "Space Blocks" to the Cluster


After enabling "Extra Storage Units", the next step is to present the "Storage Ceph" to the Cluster as a "Storage RBD".
Before starting with the configuration, it is convenient to explain the relationship between Pools and RBD in order to
understand what we are going to configure with the PROXMOX Web Administrator.
Pools: In "Proxmox Storage" Pool is the term used to describe a GROUP of "Storage Units" to store Information.
In the case of Ceph, it corresponds to the group of "OSD's" defined for the space provisioning of the "Storage
Ceph".
RBD: RADOS Block Device, are "Virtual Storage Blocks" that represent the Ceph Pools before the Cluster.
"Virtual Storage Block" : It is the simulation of "Storage Units", allowing "LXC Virtual Machines and Containers"
to simply see a Hard Disk with Available Space.

9.1 View Ceph Storage


Our Storage Ceph should be presented in the following way, indicating that everything works properly.


9.2.1 Create Ceph Pool


In this example we create a pool named "ceph-vms" which will store the "Virtual Disks" of the "Virtual Machines". It is recommended to use the values shown in the image. Click here for more information.

9.2.2 Technical information:


Ceph builds small containers called "Objects" which in turn are grouped (pg_num) for storage and replication in the OSDs.
On the Create Ceph Pool screen, the Size, Min. Size, Crush RuleSet and pg_num fields are presented. In short, they define the pool of objects and their maximum and minimum replication in the OSDs. To complete this information, we will be working on an additional article that describes in detail the parameterization of these fields as well as the monitoring of this type of infrastructure.
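For reference only, a hedged CLI sketch of creating the same pool from the Primary Node; the size, min_size and pg_num values are assumptions, since the exact values appear in the image:

pveceph createpool ceph-vms --size 3 --min_size 2 --pg_num 64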

9.3 Create RBD (RADOS Block Device)


Next we create the Storage Ceph as RBD.

9.4.1 Create RDB for ceph-vms


It is suggested to enter the values shown on the screen.


9.4.2 Description of screen fields "9.4.1 Create RBD for Ceph-vms"


ID: Enter the name that we are going to give the Storage Ceph RBD. In this example ceph-vms

Pool: Name of the pool to which the Storage will be associated. In this example it is ceph-vms
Monitor(s): It is important to enter the IP of each "Linux Bond". Following the COSYSCO.NETWORK scheme:
NodeX000 -> 10.10.0.200
NodeX001 -> 10.10.0.201
NodeX002 -> 10.10.0.202
NOTE: The IPs are entered separated by a space.
Nodes: Nodes that will have access.
Enable: To Enable or Disable Storage.
Content: Indicates the type of content that will be accepted.
Disk Image is for "Virtual Machines" and Container is for "LXC".
LXC has extra parameters, for this reason it is convenient that the "Virtual Machines" do not coexist with LXC in
the same Pool, they must be separated.
KRBD: Enables multiple disk and snapshot support for LXC.
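For orientation, this is roughly how the resulting entry looks in /etc/pve/storage.cfg after saving the form; a sketch based on this example's values, not a capture from the article:

rbd: ceph-vms
        monhost 10.10.0.200 10.10.0.201 10.10.0.202
        pool ceph-vms
        content images
        krbd 0
        username admin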

9.5 Review of new Storage


After carrying out the previous steps, the new Storage should appear in the Cluster and in each Node. At first it appears that it cannot connect; to solve this situation we continue with step 9.6.


9.6 Copy Administration key


We enter the Primary Node:

ssh root@nodeX000

We create the Ceph key directory:

mkdir /etc/pve/priv/ceph

We copy the key with the name ceph-vms:

cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-vms.keyring
If you add another Pool and Storage you must copy the key with the name of the new Storage, Example:
ceph-NUEVO.keyring

10. "Migration" and "High Availability" tests


We are reaching the final phase of this article, where the fun begins and we justify the time spent by seeing the results.
At this stage we are going to check the correct operation of the Cluster and the Storage Ceph, migrating "Virtual
Machines" between the Nodes to then activate "High Availability" rules.

Before starting, it is convenient to point out the following:


"Nested Virtualization" is a scenario for testing and landing concepts. We cannot use it for a Production
Environment or expect good performance, even more so if we give the environment little resource.
Due to the limitation in Memory and Processors, it is suggested to use "Virtual Machines" with basic services.
In this documentation we are going to use a "Virtual Machine" with the following:
Ubuntu Server 16.04 with Apache Server
1 processor 64bit
32GB of HDD created in ceph-vms
384MB of RAM

You can also use LXC, which optimizes resources by consuming a minimum of RAM and CPU.
Important: Don't forget to create a Pool and Storage RBD for LXC and enable the KRBD extended properties.

10.1 Review Storage RBD


The "Storage RDB" should show us the available space. If in one case it continues without connection, it is suggested to
review the step "​9.6 Copy Administration key".
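An additional check from any Node's shell (standard command) to confirm that the Storage is active and reporting space:

pvesm status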

10.2 Create "Virtual Machine"


In the following example a VM is created and left powered off to start with basic tests.
Note: In this documentation we have restored a "VM" to make our work easier.


10.3 Migrate in Offline mode


The next step is to validate the migration of the VM in "Offline" mode (the Online option is deactivated), checking that everything works properly.
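The same "Offline" migration can also be launched from the shell of the Node hosting the VM; a hedged example assuming VMID 100:

qm migrate 100 nodeX001    # add --online to migrate while the VM is running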

10.4 Review Migration task


After validating that the "Offline" migration works properly, we carry out the same step to transfer to NodeX002. Next
step: Back to nodeX001 and then nodeX000.

As long as the "Offline" migration , both outward and return, works properly, we can conclude that the communication
between the Nodes and Storage works properly. The next step is to turn on the VM and carry out the same tests, taking
into account that this scenario only shows the concepts and that at some point the VM may stay due to the limited
POWERED BY
nature of the scenario
.

10.5 Enabling High Availability

One of the advantages of creating a "Cluster" is enabling "High Availability", which, in short, means defining automatic migration rules for the "Virtual Machines" between the Nodes in case of contingencies. More information:
https://pve.proxmox.com/wiki/High_Availability

10.6 Create "HA Group"


Before activating "High Availability" rules, at least one HA group must be created, which in short is to identify a set of
Nodes to be presented to the rules. Sometimes it is necessary to have several groups with the same Nodes with
different values ​in Priority.

10.7 Add HA Rule


To finish, we must create the "High Availability Rules" through Resources, which are nothing more than records indicating the behavior of the "Virtual Machine" or "LXC Container" in case of eventualities on the Node where it is hosted.


10.8 Description of screen fields "10.7 Add HA Rules"


Max. Restart: Maximum number of attempts to restart the VM or LXC on a Node after a failed start.
Max. Relocate: Maximum number of relocation attempts of the VM or LXC on a Node before jumping to the next one.
Request State: We select Started.
Started: Defines that the "Virtual Machine or LXC" must be started on the Node it is transferred to.
Stopped: Defines that the "Virtual Machine or LXC" is transferred and remains powered off.
Disabled: Defines that the "Virtual Machine or LXC" will try to recover on the same Node.
A CLI sketch of an equivalent rule is shown after this list.

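For reference, a hedged CLI sketch of an equivalent rule created with ha-manager; the VMID 100 and the group name are assumptions, and option names may vary slightly between PROXMOX versions:

ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1 --group cosyscoHA
ha-manager status    # shows the state of the HA resources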
10.9 "High Availability" test run


Well friends, we have reached the end and now it is up to you to experiment with the Virtualized Environment.

10.10 "High Availability" Rule View


After activating "High Availability" the "Virtual Machine" is turned on.

10.11 Simulate "Cluster/CoroSync" error


It is recommended to deactivate the "Cluster/CoroSync" card; this simulates the communication drop of the Node, allowing us to observe the "High Availability" behavior.

10.12 Alerts HA Rule


After disconnecting the communication card from "Cluster/CoroSync" the alert is enabled.

10.13 Automatic Migration


After a few minutes, we will observe that the "Virtual Machine" moves to another node.


Finished PROXMOX Ceph

PROXMOX Storage Directory with SSHFS. How to Generate Backups, Restore, Import and
Export Virtual Machines in a "Remote Directory". This article aims to demonstrate how easy it can be to manage
this type of task through a graphical environment .



9 Comments

Maxwell - 3/4/2018 09:15:22

Majestic explanation, very well detailed. I would like this article to cover using SAN/NAS instead
of Ceph. Thank you for everything you post. A hug.


Rafael - 3/4/2018 22:27:49

Excellent Handbook.
It worked perfectly for me.

Thank you.


systems - 4/26/2018 09:59:49

Hello, the guide that you have assembled is impressive.

I get lost in step 8


This unit cannot be used to be part of the "Storage Ceph" for this reason it is necessary to add
"Extra Storage Units" to the Nodes for the provisioning of space of the "Storage Ceph" of the
Cluster.

I have an extra drive on two of the four nodes I have, but when I try to create the OSD it doesn't
detect it.

Could you help me please?

Thanks in advance.

Greetings.


Nico - 6/10/2018 02:03:10

One of the best online guides I've seen from proxmox.


David - 3/28/2019 14:13:59

Thanks for the information, finally an excellently explained article for Proxmox.


Ricardo - 4/30/2019 05:46:45

I wanted to congratulate you for the hard work put into the post.


I'm going to take a look at your whole website, very interesting.

Thank you!


Out - 6/26/2019 12:59:48

Very good guide, I plan to implement it in a client!

We had already created the Cluster with Proxmox, but the HA option would really help the
business a lot



Stanley - 10/6/2019 12:55:07

Excellent explanation.

I would like to see how you do the migration from XenServer 6.X to ProxMox 6, since it is

something that I am trying to do and your guide would be fantastic.

Greetings and congratulations.



Editor: Juan Estuardo Hernandez, Consultant in Free Software, Organization and Methods.
