RedHat Virtualisation Overview

This document provides an overview of RedHat Virtualisation (RHV) at Spatial Services, including creating virtual machines, installing operating systems, and additional configuration tasks. It discusses the RHV infrastructure with datacenters in Bathurst and Silverwater, clusters, storage, and networking. The process of building a virtual machine is outlined, including prerequisites, naming standards, allocating an IP address, creating the VM, installing Red Hat Enterprise Linux 7, and additional configuration steps. Satellite management of hosts and subscriptions is also briefly mentioned.

RedHat Virtualisation Overview

Spatial Services
Ian Evans
August 2021
Table of Contents
Background
Datacentres
Clusters
Storage
GPU Systems
Networking

Creating a Virtual Machine
Procedure Prerequisites
Naming Standards
IP Address Allocation
Creating the Virtual Machine
Installing the Operating System
Registration
Install Agents

Additional Tasks
Add another Disk
Grow the Root LVM File System
Snmp - Solarwinds
ServiceNow Discovery
Falcon-sensor
Application Deployments

Satellite
Host Collections
Subscriptions
Spatial Services
Revenue NSW
Activation Keys
Sync Plans and Status
Content Views
Patching Overview
Useful Commands
Check Registration Status
Updating the Virtual/Physical Machines

Addenda
Bathurst RHV Hardware
Silverwater RHV Hardware
Background
● Currently Spatial Services has Datacentres at Bathurst and Silverwater
● As such there are separate RHEV Manager Consoles at both sites
https://srv-bx-revm.lands.nsw/ovirt-engine/
https://srv-sw-revm.lands.nsw/ovirt-engine/

Datacentres
The topology of RHVM is basically the same on both instances, with Development, Testing and Production Datacentres, plus a VDI Datacentre.

Clusters
Similarly, there are corresponding Clusters for each Datacentre:

Storage
Storage is all SAN/Fibre Channel, aside from a common NFS export storage domain.
GPU Systems
There are a number of specialised GPU systems in both Bathurst and Silverwater – these are primarily used to host specialised Windows 10 desktop/workstation instances, some utilising NVIDIA GPU hardware, and they use dedicated LUNs from a storage perspective.

Networking
In the existing systems, whether at Bathurst or Silverwater, there is no separate network segmentation.
Creating a Virtual Machine
In the past we have used automated templates; however, the frequency of virtual machine builds is currently relatively small, so the practice has been to “build from scratch”. The plan, though, was to build a “golden image” based on what has been done with Revenue, where a minimalist image/template has been built and customised as per PEN test recommendations – this can then be used as a base image to template/clone from.
Example Virtual Machine Build Process
Build a RedHat 7 virtual machine to the following specs
● CPU: 2
● Memory: 4 GB memory
● Disk: 25 GB root partition, 30 GB /data partition
We will then extend the root partition by 10GB as an exercise.
Server will be built in the Bathurst UAT Datacenter, with IP, DNS etc being configured as
well.

Procedure Prerequisites
● Determine server name and allocate IP address
● Check storage is available in the relevant domain

Naming Standards
Server naming standards in Spatial are as follows:
srv-[bx|sw]-xxx[d|q|p]x
srv = server
bx|sw = Bathurst |Silverwater
xxx = type of server (tas = tomcat application server, aws = apache web
server)
d|q|p = Dev, UAT, Prod tier
x = number within a (possible) cluster
Examples: srv-bx-awsp3, srv-bx-noded1, srv-bx-sasq1 …
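The standard above can be expressed as a regular expression for sanity-checking candidate names – a quick sketch only (the type codes are open-ended, so only the overall shape is validated; `valid_name` is an illustrative helper, not an existing tool):

```shell
#!/bin/sh
# Validate a server name against srv-[bx|sw]-xxx[d|q|p]x:
# "srv", site (bx|sw), a short type code, tier letter (d|q|p), cluster number.
valid_name() {
  echo "$1" | grep -Eq '^srv-(bx|sw)-[a-z0-9]+[dqp][0-9]+$'
}

valid_name srv-bx-awsp3 && echo "srv-bx-awsp3: ok"
valid_name srv-bx-foo   || echo "srv-bx-foo: invalid (no tier letter/number)"
```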

IP Address Allocation
Within the Bathurst DC, on the old network there is a supernet with respect to IP addresses
for Redhat Virtual systems:

SuperNet 10.114.64.0/22
10.114.64.1 Gateway
10.114.64.2-40 Static RHEV-H
10.114.64.41-254 DHCP Scope
10.114.65.0-255 VM Production Static
10.114.66.0-255 VM UAT Static
10.114.67.0-255 VM DEV Static

So for our example we will name the server srv-bx-ac3q1 and allocate 10.114.66.78 as the
IP
(Question – how do we determine the IP address 😊)
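One way to approach the question above: the tier letter in the server name maps to a static range in the allocation table, and a candidate address can then be checked against DNS/ping before use. The helper below is a hypothetical sketch (not an existing tool) encoding that table:

```shell
#!/bin/sh
# Map a deployment tier letter (d|q|p) to its static VM range in the
# Bathurst 10.114.64.0/22 supernet, per the allocation table above.
tier_range() {
  case "$1" in
    p) echo "10.114.65.0-255 (VM Production Static)" ;;
    q) echo "10.114.66.0-255 (VM UAT Static)" ;;
    d) echo "10.114.67.0-255 (VM DEV Static)" ;;
    *) echo "unknown tier: $1" >&2; return 1 ;;
  esac
}

tier_range q   # srv-bx-ac3q1 is UAT, so pick a free address in 10.114.66.x
# Before allocating, confirm the address is not already in DNS or in use:
#   host 10.114.66.78 && echo "already in DNS"
#   ping -c1 -W1 10.114.66.78 && echo "already answering"
```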
Creating the Virtual Machine
Log in to the relevant Red Hat Virtualisation Manager, check you have storage in the
relevant Storage domain and proceed as follows:
Proceed to Compute > Virtual Machines
Select New and Enter
Select relevant Cluster via the Pulldown --> BX_UaT_Cluster
Select Operating System --> Red Hat Enterprise Linux 7.x x64
Instance Type: --> Custom
Optimised for: --> Server
Name: --> srv-bx-ac3q1
Description: --> Demo AC3 Build Machine
Comment: --> Group or other useful info
Create Instance Image: --> VM disk partition
Define size: --> 25 GiB (and check storage domain )
Allocation Policy: --> Preallocated
OK – this will now create the disk
Select the Network Interface --> ovirtmgmt/ovirtmgmt


Now proceed to the System Tab on the left hand side menu
Set Memory Size: --> 4096
Set Virtual CPUs: --> 4
Set Time zone: --> GMT+10 E. Australia Standard Time
Check the Provide Custom SN Policy --> Needed for ServiceNow (Check Vm ID)

Other Tabs can basically be left “As Is”

Installing the Operating System


At this stage we have prepared the virtual machine with appropriate settings and an empty disk image; the next stage is to install the operating system.
Select Boot Options
● Attach CD --> Select say rhel-server-7.8-x86_64-dvd.iso
● Enable boot menu to select device
● Press OK

You will now be returned to the list of VMs, where you should be able to find the machine you just created – it will be down.
● Select the VM
● Select Run Once via the top menu pull-down --> you should see the DVD attached
● You can change the boot sequence here, but there is no need to if you select the boot menu option
Select the VM, right-click Run, and when it goes green you should be able to open the console via the remote viewer.

Select Menu Item 2 and the DVD should boot – you can watch the boot as per normal
Shortly you should see a standard Red Hat Install sequence:

On the following screen there are some options to check and or modify as appropriate:
Date and Time Australia/Sydney
Keyboard English US
Language English (Australia)
Installation Source Local media
Software Selection Minimal Install
Installation Destination Usually can allow Auto
Kdump Usually disable
Network To be configured – set name, IP, GW etc etc
Security Policy NA

Network Settings
Select Manual re IPv4
Add IP Address: 10.114.66.78/22 GW 10.114.64.1
DNS Servers: 10.114.76.176 10.114.76.177
Search Domains: lands.nsw
Disable IPv6

The NIC should show as Connected …
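For reference, once the system is up the same network settings can be applied or corrected from the command line with nmcli – a sketch, assuming the connection is named eth0 (check the real name with `nmcli device status`):

```shell
# Apply the example network settings via NetworkManager's CLI.
# "eth0" is an assumption - substitute the actual connection name.
nmcli connection modify eth0 \
  ipv4.method manual \
  ipv4.addresses 10.114.66.78/22 \
  ipv4.gateway 10.114.64.1 \
  ipv4.dns "10.114.76.176 10.114.76.177" \
  ipv4.dns-search lands.nsw \
  ipv6.method ignore
nmcli connection up eth0
```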


● Select Done
● Begin Installation
● Set the default root password (if the password is flagged as simple, Done must be clicked twice to confirm)
After a couple of minutes the install should be finished and the system can be rebooted and
checked for network connectivity etc
Reboot – this time boot off the hard disk
● Log in via the console
● Check network connectivity
● Shut down the VM
● Unmount the ISO DVD
● Boot the VM and complete the base configuration
Final Configuration Steps
There are a number of final steps to finish configuring the VM – these include:
● Disable SELinux – vi /etc/selinux/config
● Disable iptables, ip6tables and firewalld as appropriate
systemctl stop firewalld; systemctl disable firewalld
● Configure time synchronisation – traditionally ntpd, but chronyd is now the default
#------------------------------------------------------------------------------#
# Name: /etc/chrony.conf
#------------------------------------------------------------------------------#
# History
#------------------------------------------------------------------------------#
# 02.04.2020 IE Revenue Time Servers
# 05.08.2020 AS New NTP Cluster – ntp.csinfra.nsw.gov.au (10.83.75.54)CR7658
# 14.09.2020 IE Repoint to the 4 individual switches - not the loadbalancer
#------------------------------------------------------------------------------#
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
hwtimestamp *
keyfile /etc/chrony.keys
logdir /var/log/chrony
log measurements statistics tracking

#------------------------------------------------------------------------------#
# Time Servers
#----------------------------- ntp.csinfra.nsw.gov.au -------------------------#
#server 10.83.75.54
#----------------------------- ntp[1|2].csinfra.nsw.gov.au --- SilverWater ----#
server 10.86.254.11
server 10.86.254.12
#----------------------------- ntp[3|4].csinfra.nsw.gov.au --- Unanderra -----#
server 10.84.254.11
server 10.84.254.12
#------------------------------------------------------------------------------#
# End
#------------------------------------------------------------------------------#
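With the configuration above in place, the service can be enabled and checked with standard chrony commands – a sketch:

```shell
# Enable and start chronyd, then confirm the configured servers are reachable.
systemctl enable --now chronyd
chronyc sources -v    # lists the configured NTP servers and their sync state
chronyc tracking      # shows current offset, stratum and reference source
```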
Registration
Now register the VM with Satellite so we can patch and/or install relevant software. Use the one activation key and release matching the VM's tier and RHEL version:
yum -y localinstall http://srv-bx-sat.lands.nsw/pub/katello-ca-consumer-latest.noarch.rpm
If the system time is wrong, set it first (e.g. date MMDDhhmm)
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL5"
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL6"
subscription-manager register --org="Spatial_Services" --activationkey="DEV-Key-RHEL7"

subscription-manager register --org="Spatial_Services" --activationkey="UAT-Key-RHEL5"


subscription-manager register --org="Spatial_Services" --activationkey="UAT-Key-RHEL6"
subscription-manager register --org="Spatial_Services" --activationkey="UAT-Key-RHEL7"

subscription-manager register --org="Spatial_Services" --activationkey="PROD-Key-RHEL5"


subscription-manager register --org="Spatial_Services" --activationkey="PROD-Key-RHEL6"
subscription-manager register --org="Spatial_Services" --activationkey="PROD-Key-RHEL7"

subscription-manager release --set=5Server


subscription-manager release --set=6Server
subscription-manager release --set=7Server

subscription-manager repos --list


+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-7-server-rpms
Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/os
Enabled: 1

Repo ID: rhel-7-server-satellite-tools-6.4-rpms


Repo Name: Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.4/os
Enabled: 0

Repo ID: rhel-7-server-rh-common-rpms


Repo Name: Red Hat Enterprise Linux 7 Server - RH Common (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/rh-common/os
Enabled: 0

subscription-manager repos --enable=rhel-7-server-satellite-tools-6.4-rpms


subscription-manager repos --enable=rhel-7-server-rh-common-rpms

subscription-manager repos --list-enabled


+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-7-server-rpms
Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/os
Enabled: 1

Repo ID: rhel-7-server-satellite-tools-6.4-rpms


Repo Name: Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.4/os
Enabled: 1

Repo ID: rhel-7-server-rh-common-rpms


Repo Name: Red Hat Enterprise Linux 7 Server - RH Common (RPMs)
Repo URL:
https://srv-bx-sat.lands.nsw/pulp/repos/Spatial_Services/UAT/Base-Repo-RHEL7/content/dist/rhel/server/7/7Server/$basearch/rh-common/os
Enabled: 1

Install Agents
● yum install katello-agent
● yum install ovirt-guest-agent-common
● yum update
● Reboot
System should now be ready for installation of additional software, additional
disks and the like.
Additional Tasks

Add another Disk


A common task is to add an additional disk
● Select the virtual machine
● Edit the Virtual machine
● Go to General and Add a disk – Create additional 30GB disk (Image)
● Possibly add a description – Data or whatever
● Once the disk has been created check you can see it
fdisk -l
● Create a partition table
fdisk /dev/sdb
n
p
Enter Enter (accept the defaults)
w
● Now format the file system
mkfs.xfs /dev/sdb1
● Modify /etc/fstab

Create mount point and mount


#------------------------------------------------------------------------------#
# /etc/fstab
#------------------------------------------------------------------------------#
# Created by anaconda on Sat Apr 25 08:46:45 2020
#------------------------------------------------------------------------------#
/dev/mapper/rhel-root                     /      xfs   defaults 0 0
UUID=56ead803-7763-4c8c-93db-c581d93bb731 /boot  xfs   defaults 0 0
/dev/mapper/rhel-swap                     swap   swap  defaults 0 0
#------------------------------------------------------------------------------#
/dev/sdb1                                 /data  xfs   defaults 0 0
#------------------------------------------------------------------------------#
mkdir /data
mount /data

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 22G 1.9G 20G 9% /
/dev/sda1 1014M 137M 878M 14% /boot
/dev/sdb1 30G 33M 30G 1% /data
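The whole add-a-disk sequence can also be scripted non-interactively with parted instead of fdisk's menus – a sketch, assuming the new disk really is /dev/sdb (verify with `lsblk` first):

```shell
# Non-interactive version of the steps above. Destroys data on $DISK -
# confirm with lsblk that this is the newly added, empty disk.
DISK=/dev/sdb
parted -s "$DISK" mklabel msdos mkpart primary xfs 1MiB 100%
mkfs.xfs "${DISK}1"
mkdir -p /data
echo "${DISK}1 /data xfs defaults 0 0" >> /etc/fstab
mount /data
df -h /data
```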
Grow the Root LVM File System
We currently have the 25GB initial root file system; however, we want to extend it by 10GB – from 25GB to 35GB.
fdisk -l /dev/sda

Disk /dev/sda: 26.8 GB, 26843545600 bytes, 52428800 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000eb42

Device Boot Start End Blocks Id System


/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 52428799 25164800 8e Linux LVM

Edit the VM in RHVM and Extend the size of the root disk by 10GB

fdisk /dev/sda
● Delete the LVM root partition - /dev/sda2
● Create a new partition – in effect recreate but with changed end point
● Set type to LVM (8e)

Device Boot Start End Blocks Id System


/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 73400319 35650560 8e Linux LVM

Now either reboot or run partprobe


pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name rhel
PV Size <24.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 6143
Free PE 0
Allocated PE 6143
PV UUID Zx3Ixr-Y8oQ-ylYz-v819-GsIJ-eqEE-okcgVA

pvresize /dev/sda2
pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name rhel
PV Size <34.00 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 8703
Free PE 2560
Allocated PE 6143
PV UUID Zx3Ixr-Y8oQ-ylYz-v819-GsIJ-eqEE-okcgVA

lvdisplay
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID FdlG7g-1uSA-5tMa-Mcmo-UJRm-hgf7-1UIa9M
LV Write Access read/write
LV Creation host, time srv-bx-ac3q1, 2021-08-17 00:30:37 +1000
LV Status available
# open 1
LV Size <21.50 GiB
Current LE 5503
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0

lvextend -l +100%FREE /dev/rhel/root


Size of logical volume rhel/root changed from <21.50 GiB (5503 extents) to <31.50 GiB
(8063 extents).
Logical volume rhel/root successfully resized.

lvdisplay
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID FdlG7g-1uSA-5tMa-Mcmo-UJRm-hgf7-1UIa9M
LV Write Access read/write
LV Creation host, time srv-bx-ac3q1, 2021-08-17 00:30:37 +1000
LV Status available
# open 1
LV Size <31.50 GiB
Current LE 8063
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0

Finally, grow the file system itself – the command depends on the FS type (resize2fs for ext4, xfs_growfs for XFS; the root FS here is XFS):


resize2fs /dev/rhel/root
xfs_growfs /
df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 8.7M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/rhel-root 32G 1.9G 30G 6% /
/dev/sdb1 30G 33M 30G 1% /data
/dev/sda1 1014M 137M 878M 14% /boot
tmpfs 395M 0 395M 0% /run/user/0
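Condensed, the grow sequence above is as follows – a sketch that assumes the layout shown (root PV on /dev/sda2, XFS root), with the fdisk delete/recreate of sda2 (same start sector, larger end) already done:

```shell
# After enlarging the disk in RHVM and recreating the partition:
partprobe                            # or reboot, to re-read the partition table
pvresize /dev/sda2                   # grow the physical volume into the new space
lvextend -l +100%FREE /dev/rhel/root # give all free extents to the root LV
xfs_growfs /                         # root is XFS; use resize2fs for ext4
df -h /
```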
Snmp - Solarwinds
SNMP is used as the means for Solarwinds to monitor infrastructure. As a base standard, v3 is the preferred option as it provides an authentication mechanism rather than just a community string.
Install snmp rpms (if necessary)
yum install net-snmp net-snmp-utils

Ensure service is stopped


RHEL5 service snmpd stop
RHEL6 service snmpd stop
RHEL7 systemctl stop snmpd

Copy new snmpd.conf file from /apps/IE/SNMP/snmpd.conf.base TARGET:/etc/snmp/snmpd.conf


RHEL5   Copy new net-snmp snmpd.conf file from /apps/IE/SNMP/net-snmp-snmpd.conf to TARGET:/var/net-snmp/snmpd.conf
RHEL6/7 Copy new net-snmp snmpd.conf file from /apps/IE/SNMP/net-snmp-snmpd.conf to TARGET:/var/lib/net-snmp/snmpd.conf

Then start the daemon

RHEL5/6 service snmpd start


RHEL7 systemctl start snmpd

Check with an SNMP WALK if you wish:


snmpwalk -v 2c -c c0ll3ct 10.200.39.17
snmpwalk -v3 -l authPriv -u DCS-RO -a SHA -A SNMPR3adOnly -x AES -X SNMPR3adOnly server

#-----------------------------------------------------------------------------#
# Name /etc/snmp/snmpd.conf
#-----------------------------------------------------------------------------#
# History
#-----------------------------------------------------------------------------#
# 11.03.2021 IE Initial review and tidy up re Solarwinds Deployment
#-----------------------------------------------------------------------------#

#-----------------------------------------------------------------------------#
# AGENT BEHAVIOUR
#-----------------------------------------------------------------------------#
# agentAddress udp:127.0.0.1:161
#-----------------------------------------------------------------------------#
agentAddress udp:161

#-----------------------------------------------------------------------------#
# SNMPv3 AUTHENTICATION
#-----------------------------------------------------------------------------#
# Created in /var/lib/net-snmp/snmpd.conf
# Full read only access for SNMPv3
#-----------------------------------------------------------------------------#
view systemonly included .1
rouser authOnlyUser
rouser DCS-RO authpriv
#-----------------------------------------------------------------------------#
# End
#-----------------------------------------------------------------------------#

Contents of net-snmp-snmpd.conf (you can create the user via: createUser DCS-RO SHA "SNMPR3adOnly" AES SNMPR3adOnly … or simply copy the line below into /var/lib/net-snmp/snmpd.conf):

createUser DCS-RO SHA "SNMPR3adOnly" AES SNMPR3adOnly


ServiceNow Discovery
For ServiceNow discovery and service mapping, an ITOM user must be created on each system with a public key to allow passwordless access.
Additionally, a customised sudoers file should be installed, ideally as /etc/sudoers.d/svc-sn-dis
# ServiceNow Discovery - ITOM Project - install in sudoers.d (preferably)
#------------------------------------------------------------------------------#
# History
#------------------------------------------------------------------------------#
# 11.08.2021 IE Add some specific cat files, ss query, and ls location
# 12.08.2021 IE Move to /etc/sudoers.d/svc-sn-dis or svn-sn-dis-prod
#------------------------------------------------------------------------------#
svc-sn-dis ALL = (root) NOPASSWD:/bin/cat /etc/corosync/corosync.conf, \
/usr/bin/cat /var/lib/pacemaker/cib/cib.xml, /bin/cat \
/etc/VRTSvcs/conf/config/main.cf, /bin/netstat, /bin/ps, \
/sbin/dmidecode, /sbin/dmsetup ls, /sbin/dmsetup table *, \
/sbin/fdisk -l, /sbin/lsof, /sbin/multipath -ll, \
/usr/bin/dmsetup ls, /usr/sbin/dmidecode, /usr/sbin/dmsetup ls,\
/usr/sbin/dmsetup table *, /usr/sbin/fdisk -l, /usr/sbin/lsof, \
/usr/sbin/multipath -ll, /sbin/clustat -v, /sbin/clustat -x, \
/usr/bin/find, /usr/bin/ls, /usr/bin/stat, /usr/bin/cut -d, \
/usr/bin/cut -f1, /usr/sbin/ldm list, /usr/sbin/ldm \
list-rsrc-group -a, /usr/sbin/ldm -V, /sbin/iscsiadm list \
initiator-node, /sbin/iscsiadm list target -S, /usr/sbin/sneep \
-T, /usr/bin/prtvtoc /dev/rdsk/, /usr/sbin/virtinfo -a, \
/usr/sbin/fcinfo, /usr/sbin/ss -tlnp, /usr/sbin/ss -tenp

#------------------------------------------------------------------------------#
# New Additions may be required
#------------------------------------------------------------------------------#
# /bin/ls, \
# /bin/cat /etc/httpd/conf.d/manual.conf, \
# /bin/cat /etc/httpd/conf.d/site-ssl.conf, \
# /bin/cat /etc/httpd/conf.d/wsgi.conf, \
# /bin/cat /etc/httpd/conf.d/php.conf, \
# /bin/cat /etc/httpd/conf.d/welcome.conf, \
# /bin/cat /etc/httpd/conf.d/perl.conf \
#------------------------------------------------------------------------------#
Falcon-sensor
DCS has standardised on Crowdstrike falcon-sensor as the preferred solution for anti-virus and malware protection. It is a “next gen” product and does not work in the same way as more traditional virus/pattern scanners.

#------------------------------------------------------------------------------#
Short and Sweet Version
#------------------------------------------------------------------------------#
scp -p /apps/IE/Falcon/falcon-sensor-6.16.0-11307.el7.x86_64.rpm SERVER:/usr/local/src (7)
scp -p /apps/IE/Falcon/falcon-sensor-6.16.0-11307.el6.x86_64.rpm SERVER:/usr/local/src (6)

yum -y localinstall /usr/local/src/falcon-sensor-6.16.0-11307.el7.x86_64.rpm (RHEL7)


yum -y localinstall /usr/local/src/falcon-sensor-6.16.0-11307.el6.x86_64.rpm (RHEL6)

/opt/CrowdStrike/falconctl -s --cid=50721C88ED5446B2B2D059373B8A37C1-92

systemctl status falcon-sensor OR chkconfig falcon-sensor status <--RHEL7 or RHEL6


/opt/CrowdStrike/falconctl -s --aph=cls-bx-proxy --app=3128   <-- set proxy if needed
/opt/CrowdStrike/falconctl -g --aph --app                     <-- confirm config or not :-)
/opt/CrowdStrike/falconctl -s --apd=FALSE                     <-- activate proxy if needed

#------------------------------------------------------------------------------#
To check status, and stop/start if needed:
#------------------------------------------------------------------------------#
systemctl status falcon-sensor
systemctl stop falcon-sensor
systemctl start falcon-sensor
systemctl status falcon-sensor

OR
service falcon-sensor status
service falcon-sensor stop
service falcon-sensor start
service falcon-sensor status
Application Deployments
Application tasks such as installation, configuration and deployment are apparently out of scope with respect to AC3 responsibilities; however, deployments to the UAT and PROD tiers are certainly carried out by the OPS Team in Spatial (Revenue is different again). As such, these tasks may come within scope for AC3 at some point. Typically deployments are requested/controlled via Jira tickets – see the example below:

● Implementation Overview:
● Refer to the release notes for details. Brief implementation steps:
1) Ensure tomcat is shut down
2) Rename the old gpr.war file in the webapps folder to gpr.war.old for quick rollback
3) Delete the gpr folder from the webapps folder
4) Copy the new war file to the tomcat webapps folder -- /usr/local/tomcat/webapps/gpr.war
5) Restart tomcat
● Backout Overview:
a) Stop tomcat -- /usr/local/tomcat/bin/shutdown.sh
b) Delete the /usr/local/tomcat/webapps/gpr folder and clean the temp directories
c) Restore the old deployment files
cp /usr/local/tomcat/webapps/gpr.war.old /usr/local/tomcat/webapps/gpr.war
cp /usr/local/tomcat/webapps/gpr-log4j.xml.old /usr/local/tomcat/webapps/gpr-log4j.xml
cp /usr/local/tomcat/webapps/gpr-tomcat.properties.old /usr/local/tomcat/webapps/gpr-tomcat.properties
d) Ensure the symlink /usr/local/tomcat/lib/gpr-tomcat.properties reads the gpr-tomcat.properties file; similarly check the xml file
e) Restart gpr
● Testing Overview:
1. Log on to the GPR system via valuationtest.six.nsw.gov.au
2. Verify the displayed version is 3.0.0.99
3. Verify all functionality works properly.
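The implementation steps above could be collected into a small script – a sketch only (paths are from the ticket; NEW_WAR is an illustrative variable, and startup.sh is assumed to be the standard tomcat counterpart of the shutdown.sh the ticket names):

```shell
# Sketch of the gpr deployment steps from the ticket above.
TC=/usr/local/tomcat
NEW_WAR=/tmp/gpr.war   # illustrative path to the release artifact

"$TC/bin/shutdown.sh"                                # ensure tomcat is shut down
mv "$TC/webapps/gpr.war" "$TC/webapps/gpr.war.old"   # keep for quick rollback
rm -rf "$TC/webapps/gpr"                             # remove the exploded folder
cp "$NEW_WAR" "$TC/webapps/gpr.war"                  # deploy the new war
"$TC/bin/startup.sh"                                 # restart tomcat
```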
Satellite
Satellite is used both in Spatial Services and also in Revenue NSW primarily for patch
management and deployment. Spatial Satellite is a single instance hosted on the RHVM
infrastructure, whereas in Revenue there is Satellite and an additional Capsule server.

https://srv-bx-sat.lands.nsw/users/login Version: 6.8.1 Bathurst


https://vsatp1.internal.osr.nsw.gov.au/users/login Version: 6.4.2.2 Unanderra
https://vcapp1.internal.osr.nsw.gov.au/users/login Silverwater

Host Collections
In Spatial Services we define a number of Host Collections, which equate to the Hypervisor Hosts, the Deployment Tiers and the VDI infrastructure.

Subscriptions
The subscriptions are of three types:
Premium Smart Virtualisation - for production Hypervisor Tier (Linux VMs)
Standard Smart Virtualisation - for DEV and UAT tiers
Standard RH Linux - for hosting Windows VDI solutions

The Standard RH Linux subscriptions are allocated to a number of Dell Blades in Silverwater
and also to the specialist GPU 2RU servers with the Nvidia GPU cards, 3 systems in
Bathurst and 2 systems in Silverwater.

Spatial maintain Subscriptions and support with Red Hat via a third party - Glintech.
Revenue maintain Subscriptions and support with HPE, with L3 support from Red Hat.
Spatial Services

Revenue NSW

Activation Keys
Spatial maintain Activation Keys for each Tier and/or system type - Revenue have something
similar.
Sync Plans and Status
Repositories can be synchronised on demand or scheduled and then promoted from the
Library to DEV, UAT and Production - note Revenue also have a Systest tier.

Content Views
Content Views allow control of what content is published to what tier and therefore provide a
mechanism to publish to each Tier in a controlled manner.
Patching Overview
For an upcoming “outage and/or patch weekend” the following steps would be carried out:

● Refresh existing content views for RHEL 6, 7, 8 (note any RHEL 4/5 systems are normally left as-is)
● Publish a new version to the Library
● Promote the new version into DEV Tier
● Update some DEV systems with reboot to test if any obvious issues
● Update to the rest of DEV tier - can be done during business hours, although reboot
may need to happen after hours depending on the Developers
● Carry out basic smoke testing
● Check with developers, application owners for any obvious issues
● Promote this version to UAT Tier
● Rinse and repeat through the UAT Tier - again can mostly be done during business
hours in the week/weeks preceding the PROD Patch Weekend
● Production Tier - normally done on the Outage/Patch weekend
● Promote same version to PROD Tier
> Reboot
> Patch
> Reboot
> Smoke test
> Application owner testing and validation

● Address any miscreants
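Where the console steps above need scripting, Satellite's hammer CLI offers equivalents – a sketch; the content view name here is illustrative (borrowed from the repo paths earlier), not confirmed from this document:

```shell
# Publish a new content-view version and promote it through the tiers.
ORG="Spatial_Services"
CV="Base-Repo-RHEL7"   # illustrative content view name - substitute the real one

hammer content-view publish --organization "$ORG" --name "$CV"
hammer content-view version promote --organization "$ORG" \
    --content-view "$CV" --to-lifecycle-environment DEV
# ...repeat the promote with UAT, then PROD, as each tier is validated
```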

Useful Commands

Check Registration Status


subscription-manager status
+-------------------------------------------+
System Status Details
+-------------------------------------------+
Overall Status: Current

If there is an issue you may need to do some of the following:


yum clean all
rm -rf /var/cache/yum
Or re-register the VM with Satellite:
subscription-manager unregister
subscription-manager register --org="Spatial_Services" --activationkey="XXX-Key-RHELX"

In some cases you may need to wait for Satellite to update the status of the VM .. even
overnight

Updating the Virtual/Physical Machines


There are a number of ways of running the updates: command line on the VM, a for loop from say srv-bx-opsadm, using something like PAC Manager or SuperPutty, or via Satellite itself. However, my understanding is that you will be using BigFix, which will somehow integrate into Satellite. In essence all you will really be doing is:

yum update

However be aware of the following points:


● Red Hat 4 or 5 systems do not work via subscription-manager/Satellite - these are few and far between, but there are still a few legacy RHEL 5 systems
● CentOS systems - there are a number of CentOS systems: some legacy, to which no updates apply, and some more recent, which can either be wired to Satellite or go DIRECT to upstream repositories via the proxy server. The proxy can be set permanently, but often I do not have it set and just set it via ENV when I want to do an update. Again, you can probably wire these up directly to BigFix.
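Setting the proxy just for one update session, as described, looks like this – a sketch using the cls-bx-proxy:3128 front-end mentioned in the Falcon section (confirm the correct proxy for the host's site):

```shell
# Point yum at the proxy for this shell session only - nothing is persisted.
export http_proxy=http://cls-bx-proxy:3128
export https_proxy=http://cls-bx-proxy:3128
yum update
```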

There are a couple of special cases re updates:


1. HAPROXY Servers
Production and UAT are done as HA pairs - two physical machines in an
active/passive configuration. Normally I would patch/update the standby first to see
any issues, then failover to the standby and patch the primary node.
srv-bx-hap1/srv-bx-hap2
srv-bx-haq1/srv-bx-haq2

2. Proxy Servers in Bathurst


There are two Squid proxy servers in Bathurst - srv-bx-proxy3 and srv-bx-proxy4 - which can be referenced individually, but are usually referenced via a front-end HAProxy name, cls-bx-proxy. So there is the option of disabling access to one of the proxies and doing patching during business hours; however, these proxies are now mainly used for server access, as normal users proxy via the McAfee DCS proxy.

3. RabbitMQ Servers
The main issue here is reboot order. They are a pair, but if you are going through a reboot/patch cycle you need to take the secondary down first, then the primary; then bring up the primary, then bring up the secondary.
srv-bx-rmqp1/srv-bx-rmqp2
srv-bx-rmqq1/srv-bx-rmqq2
Addenda

Bathurst RHV Hardware

Silverwater RHV Hardware
