RedHat Virtualisation Overview
Spatial Services
Ian Evans
August 2021
Table of Contents
Background
Datacentres
Clusters
Storage
GPU Systems
Networking
Additional Tasks
Add another Disk
Grow the Root LVM File System
Snmp - Solarwinds
ServiceNow Discovery
Falcon-sensor
Application Deployments
Satellite
Host Collections
Subscriptions
Spatial Services
Revenue NSW
Activation Keys
Sync Plans and Status
Content Views
Patching Overview
Useful Commands
Check Registration Status
Updating the Virtual/Physical Machines
Addenda
Bathurst RHV Hardware
Background
● Currently Spatial Services has Datacentres at Bathurst and at Silverwater
● As such there are separate RHEV Manager Consoles at both sites:
https://srv-bx-revm.lands.nsw/ovirt-engine/
https://srv-sw-revm.lands.nsw/ovirt-engine/
Datacentres
The topology of RHVM is essentially the same on both instances, with Development, Testing
and Production Datacentres, plus a VDI Datacentre.
Clusters
Similarly, there are corresponding Clusters for each Datacentre:
Storage
Storage is all SAN/Fibre Channel, aside from a common NFS export Storage Domain.
GPU Systems
There are a number of specialised GPU systems in both Bathurst and Silverwater. These
primarily host specialised Windows 10 desktop/workstation instances, some utilising
NVIDIA GPU hardware, and they use dedicated LUNs from a storage perspective.
Networking
In the existing systems, whether Bathurst or Silverwater, there is no separate network
segmentation.
Creating a Virtual Machine
In the past we have used automated templates; however, the frequency of virtual
machine builds is currently relatively small, so the current practice has been to “build from
scratch”. The plan, however, is to build a “golden image” along the lines of what has been
done with Revenue, where a minimalist image/template has been built and customised as per
PEN Test recommendations - this can then be used as a base image to template/clone from.
Example Virtual Machine Build Process
Build a RedHat 7 virtual machine to the following specs
● CPU: 2
● Memory: 4 GB memory
● Disk: 25 GB root partition, 30 GB /data partition
We will then extend the root partition by 10GB as an exercise.
The server will be built in the Bathurst UAT Datacentre, with IP, DNS etc. being configured as
well.
Procedure Prerequisites
● Determine server name and allocate IP address
● Check storage is available in the relevant domain
Naming Standards
Server naming standards in Spatial are as follows:
srv-[bx|sw]-xxx[d|q|p]x
srv = server
bx|sw = Bathurst | Silverwater
xxx = type of server (tas = tomcat application server, aws = apache web server)
d|q|p = Dev, UAT, Prod tier
x = number in a (possible) cluster
Examples: srv-bx-awsp3, srv-bx-noded1, srv-bx-sasq1 …
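A quick way to sanity-check a proposed name against this standard is a small shell helper. This is a sketch only – the regex is an assumption inferred from the examples above, not an official specification:

```shell
# Check a proposed server name against the Spatial naming standard.
# The pattern is inferred from the examples (srv-bx-awsp3 etc.) and is
# an assumption, not an official spec.
valid_name() {
  echo "$1" | grep -Eq '^srv-(bx|sw)-[a-z0-9]{2,4}[dqp][0-9]$'
}

valid_name srv-bx-awsp3  && echo "srv-bx-awsp3 ok"
valid_name srv-bx-noded1 && echo "srv-bx-noded1 ok"
valid_name srv-yy-awsp3  || echo "srv-yy-awsp3 rejected"
```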
IP Address Allocation
Within the Bathurst DC, on the old network there is a supernet with respect to IP addresses
for Redhat Virtual systems:
SuperNet 10.114.64.0/22
10.114.64.1 Gateway
10.114.64.2-40 Static RHEV-H
10.114.64.41-254 DHCP Scope
10.114.65.0-255 VM Production Static
10.114.66.0-255 VM UAT Static
10.114.67.0-255 VM DEV Static
So for our example we will name the server srv-bx-ac3q1 and allocate 10.114.66.78 as the
IP
(Question – how do we determine the IP address 😊)
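The band an address falls into can be read off its third octet. A small helper to illustrate the table above – a sketch that only encodes the supernet layout shown, not an authoritative allocation tool:

```shell
# Map a 10.114.64.0/22 address to its allocation band by third octet,
# per the supernet table above (a sketch, not an authoritative tool).
tier_for_ip() {
  case "$1" in
    10.114.64.*) echo "Infrastructure / DHCP scope" ;;
    10.114.65.*) echo "VM Production Static" ;;
    10.114.66.*) echo "VM UAT Static" ;;
    10.114.67.*) echo "VM DEV Static" ;;
    *)           echo "outside 10.114.64.0/22" ;;
  esac
}

tier_for_ip 10.114.66.78   # the example server -> VM UAT Static
```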
Creating the Virtual Machine
Log in to the relevant Red Hat Virtualisation Manager, check you have storage in the
relevant Storage domain and proceed as follows:
Proceed to Compute > Virtual Machines
Select New and Enter
Select relevant Cluster via the Pulldown --> BX_UaT_Cluster
Select Operating System --> Red Hat Enterprise Linux 7.x x64
Instance Type: --> Custom
Optimised for: --> Server
Name: --> srv-bx-ac3q1
Description: --> Demo AC3 Build Machine
Comment: --> Group or other useful info
Create Instance Image: --> VM disk partition
Define size: --> 25 GiB (and check the storage domain)
Allocation Policy: --> Preallocated
OK – this will now create the disk
Select the Network Interface --> ovirtmgmt/ovirtmgmt
Now proceed to the System Tab on the left hand side menu
Set Memory Size: --> 4096
Set Virtual CPUs: --> 2
Set Time zone: --> GMT+10 E. Australia Standard Time
Check the Provide Custom SN Policy --> Needed for ServiceNow (Check Vm ID)
You will now be returned to the list of VMs, where you should be able to find the machine you
just created – it will be down
● Select the VM
● Select Run Once via the top menu pulldown --> you should see the DVD
attached
● You can change the boot sequence here, but there is no need to if you select the
menu option
Select the VM and right click; when it goes green you should be able to open the
console via the remote viewer
Select Menu Item 2 and the DVD should boot – you can watch the boot as per normal
Shortly you should see a standard Red Hat Install sequence:
On the following screen there are some options to check and or modify as appropriate:
Date and Time Australia/Sydney
Keyboard English US
Language English (Australia)
Installation Source Local media
Software Selection Minimal Install
Installation Destination Usually can allow Auto
Kdump Usually disable
Network To be configured – set name, IP, GW etc.
Security Policy NA
Network Settings
Select Manual for IPv4
Add IP Address: 10.114.66.78/22 GW 10.114.64.1
DNS Servers: 10.114.76.176 10.114.76.177
Search Domains: lands.nsw
Disable IPv6
Time servers (NTP) are then configured as follows – e.g. in /etc/chrony.conf or /etc/ntp.conf:
#------------------------------------------------------------------------------#
# Time Servers
#----------------------------- ntp.csinfra.nsw.gov.au -------------------------#
#server 10.83.75.54
#----------------------------- ntp[1|2].csinfra.nsw.gov.au --- SilverWater ----#
server 10.86.254.11
server 10.86.254.12
#----------------------------- ntp[3|4].csinfra.nsw.gov.au --- Unanderra -----#
server 10.84.254.11
server 10.84.254.12
#------------------------------------------------------------------------------#
# End
#------------------------------------------------------------------------------#
Registration
Now register the VM so we can patch and/or install relevant software
yum -y localinstall http://srv-bx-sat.lands.nsw/pub/katello-ca-consumer-latest.noarch.rpm
If the system time is wrong, set it first, e.g. date MMDDhhmm
Use the activation key matching the RHEL release:
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL5"
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL6"
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL7"
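Since the three keys only differ by major release, the right one can be derived from the OS release string. A sketch, assuming the DEV-Key-RHEL&lt;major&gt; naming above – in practice you would feed it the contents of /etc/redhat-release:

```shell
# Derive the activation key name from a RHEL release string.
# Assumes the DEV-Key-RHEL<major> naming used above; in practice read
# the string from /etc/redhat-release.
key_for_release() {
  echo "$1" | sed -E 's/.*release ([0-9]+).*/DEV-Key-RHEL\1/'
}

key_for_release "Red Hat Enterprise Linux Server release 7.9 (Maipo)"
# -> DEV-Key-RHEL7
```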
Install Agents
● yum install katello-agent
● yum install ovirt-guest-agent-common
● yum update
● Reboot
The system should now be ready for installation of additional software, additional
disks and the like.
Additional Tasks
Add another Disk
Once a second disk has been added in RHVM and then partitioned, formatted and mounted, df -h should show the new /data file system:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 22G 1.9G 20G 9% /
/dev/sda1 1014M 137M 878M 14% /boot
/dev/sdb1 30G 33M 30G 1% /data
Grow the Root LVM File System
We currently have the 25 GB initial root file system, however we want to extend it by
10 GB – so from 25 GB to 35 GB
fdisk -l /dev/sda
Edit the VM in RHVM and Extend the size of the root disk by 10GB
fdisk /dev/sda
● Delete the LVM root partition - /dev/sda2
● Create a new partition – in effect recreate it, but with a later end point
● Set the type to LVM (8e) and write the table
● Run partprobe (or reboot) so the kernel re-reads the partition table
pvresize /dev/sda2
pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name rhel
PV Size <34.00 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 8703
Free PE 2560
Allocated PE 6143
PV UUID Zx3Ixr-Y8oQ-ylYz-v819-GsIJ-eqEE-okcgVA
lvdisplay
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID FdlG7g-1uSA-5tMa-Mcmo-UJRm-hgf7-1UIa9M
LV Write Access read/write
LV Creation host, time srv-bx-ac3q1, 2021-08-17 00:30:37 +1000
LV Status available
# open 1
LV Size <21.50 GiB
Current LE 5503
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
Now extend the logical volume into the new free space and grow the file system (XFS in a default RHEL 7 layout):
lvextend -l +100%FREE /dev/rhel/root
xfs_growfs /
lvdisplay
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID FdlG7g-1uSA-5tMa-Mcmo-UJRm-hgf7-1UIa9M
LV Write Access read/write
LV Creation host, time srv-bx-ac3q1, 2021-08-17 00:30:37 +1000
LV Status available
# open 1
LV Size <31.50 GiB
Current LE 8063
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
Snmp - Solarwinds
The snmpd configuration used for Solarwinds monitoring is as follows:
#-----------------------------------------------------------------------------#
# Name /etc/snmp/snmpd.conf
#-----------------------------------------------------------------------------#
# History
#-----------------------------------------------------------------------------#
# 11.03.2021 IE Initial review and tidy up re Solarwinds Deployment
#-----------------------------------------------------------------------------#
#-----------------------------------------------------------------------------#
# AGENT BEHAVIOUR
#-----------------------------------------------------------------------------#
# agentAddress udp:127.0.0.1:161
#-----------------------------------------------------------------------------#
agentAddress udp:161
#-----------------------------------------------------------------------------#
# SNMPv3 AUTHENTICATION
#-----------------------------------------------------------------------------#
# Created in /var/lib/net-snmp/snmpd.conf
# Full read only access for SNMPv3
#-----------------------------------------------------------------------------#
view systemonly included .1
rouser authOnlyUser
rouser DCS-RO authpriv
#-----------------------------------------------------------------------------#
# End
#-----------------------------------------------------------------------------#
The SNMPv3 user lives in /var/lib/net-snmp/snmpd.conf. You can create it via: createUser DCS-RO SHA "SNMPR3adOnly" AES
SNMPR3adOnly … or simply copy that line into /var/lib/net-snmp/snmpd.conf.
#------------------------------------------------------------------------------#
# New Additions may be required
#------------------------------------------------------------------------------#
# /bin/ls, \
# /bin/cat /etc/httpd/conf.d/manual.conf, \
# /bin/cat /etc/httpd/conf.d/site-ssl.conf, \
# /bin/cat /etc/httpd/conf.d/wsgi.conf, \
# /bin/cat /etc/httpd/conf.d/php.conf, \
# /bin/cat /etc/httpd/conf.d/welcome.conf, \
# /bin/cat /etc/httpd/conf.d/perl.conf \
#------------------------------------------------------------------------------#
Falcon-sensor
DCS has standardised on Crowdstrike Falcon-sensor as the preferred solution with respect
to anti-virus and malware protection. As such it is a “next gen” product and does not work in
the same way as more traditional virus/pattern scanners work.
#------------------------------------------------------------------------------#
Short and Sweet Version
#------------------------------------------------------------------------------#
scp -p /apps/IE/Falcon/falcon-sensor-6.16.0-11307.el7.x86_64.rpm SERVER:/usr/local/src (7)
scp -p /apps/IE/Falcon/falcon-sensor-6.16.0-11307.el6.x86_64.rpm SERVER:/usr/local/src (6)
/opt/CrowdStrike/falconctl -s --cid=50721C88ED5446B2B2D059373B8A37C1-92
#------------------------------------------------------------------------------#
To check status and, if needed, stop and start:
#------------------------------------------------------------------------------#
systemctl status falcon-sensor
systemctl stop falcon-sensor
systemctl start falcon-sensor
systemctl status falcon-sensor
OR
service falcon-sensor status
service falcon-sensor stop
service falcon-sensor start
service falcon-sensor status
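The choice between the two forms depends on whether the host runs systemd (RHEL 7) or SysV init (RHEL 6). A small helper that picks the right invocation – a sketch that just prints the command to run rather than running it:

```shell
# Print the appropriate status command for falcon-sensor depending on
# whether systemd is available (RHEL 7) or not (RHEL 6). A sketch --
# it echoes the command rather than executing it.
falcon_status_cmd() {
  if command -v systemctl >/dev/null 2>&1; then
    echo "systemctl status falcon-sensor"
  else
    echo "service falcon-sensor status"
  fi
}

falcon_status_cmd
```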
Application Deployments
Application tasks such as installation, configuration and deployment are apparently out of
scope with respect to AC3 responsibilities; however, deployments to the UAT and
PROD tiers are currently carried out by the OPS Team in Spatial (although Revenue is different
again). As such, these tasks may come within scope for AC3 at some point. Typically
deployments are requested/controlled via Jira tickets - see example below:
● Implementation Overview:
● Refer to release notes for details. Brief implementation steps:
1) Ensure tomcat is shutdown
2) Rename the old gpr.war file in the webapps folder to gpr.war.old for quick rollback
3) Delete the gpr folder from the webapps folder
4) Copy the new war file to the tomcat webapps folder -- /usr/local/tomcat/webapps/gpr.war
5) Restart tomcat
● Backout Overview:
a) Stop tomcat -- /usr/local/tomcat/bin/shutdown.sh
b) delete /usr/local/tomcat/webapps/gpr folder and clean temp directories
c) Restore the old deployment files
cp /usr/local/tomcat/webapps/gpr.war.old /usr/local/tomcat/webapps/gpr.war
cp /usr/local/tomcat/webapps/gpr-log4j.xml.old /usr/local/tomcat/webapps/gpr-log4j.xml
cp /usr/local/tomcat/webapps/gpr-tomcat.properties.old /usr/local/tomcat/webapps/gpr-tomcat.properties
d) Ensure the symlink /usr/local/tomcat/lib/gpr-tomcat.properties reads the
gpr-tomcat.properties file; similarly check for the xml file
e) Restart gpr
● Testing Overview:
1. Log on to the GPR System via valuationtest.six.nsw.gov.au
2. Verify the displayed version is 3.0.0.99
3. Verify all functionalities work properly.
Satellite
Satellite is used both in Spatial Services and also in Revenue NSW primarily for patch
management and deployment. Spatial Satellite is a single instance hosted on the RHVM
infrastructure, whereas in Revenue there is Satellite and an additional Capsule server.
Host Collections
In Spatial Services we define a number of Host Collections which equate to the Hypervisor
Hosts, the Deployment Tiers and the VDI infrastructure
Subscriptions
The subscriptions are of three types:
Premium Smart Virtualisation - for production Hypervisor Tier (Linux VMs)
Standard Smart Virtualisation - for DEV and UAT tiers
Standard RHLinux - for hosting Windows VDI solutions
The Standard RH Linux subscriptions are allocated to a number of Dell Blades in Silverwater
and also to the specialist GPU 2RU servers with the Nvidia GPU cards, 3 systems in
Bathurst and 2 systems in Silverwater.
Spatial maintain Subscriptions and support with Red Hat via a third party - Glintech
Revenue maintain Subscriptions and support with HPE - L3 with Red Hat.
Spatial Services
Revenue NSW
Activation Keys
Spatial maintain Activation Keys for each Tier and/or system type - Revenue have something
similar.
Sync Plans and Status
Repositories can be synchronised on demand or scheduled and then promoted from the
Library to DEV, UAT and Production - note Revenue also have a Systest tier.
Content Views
Content Views allow control of what content is published to what tier and therefore provide a
mechanism to publish to each Tier in a controlled manner.
Patching Overview
For an upcoming “outage and/or patch weekend” the following steps would be carried out:
Useful Commands
In some cases you may need to wait for Satellite to update the status of the VM .. even
overnight
yum update
3. RabbitMQ Servers
The main issue here is reboot order – they are a pair, but if you are going through a
reboot/patch cycle you need to take the secondary down first, then the primary; then
bring up the primary, then bring up the secondary
srv-bx-rmqp1/srv-bx-rmqp2
srv-bx-rmqq1/srv-bx-rmqq2
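The ordering above can be sketched as a small loop – stop the secondary first and the primary last, then start in the reverse order. The hostnames come from the pairs above, and the echo is a placeholder for the real ssh/shutdown step:

```shell
# Reboot ordering for a RabbitMQ pair: stop secondary then primary,
# start primary then secondary. The echo stands in for the actual
# ssh/systemctl step.
reboot_order() {
  p=$1; s=$2
  for host in "$s" "$p"; do echo "stop  $host"; done
  for host in "$p" "$s"; do echo "start $host"; done
}

reboot_order srv-bx-rmqp1 srv-bx-rmqp2
```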
Addenda