Some Useful Commands
--------------------------------------------------------------------
#Networks & Subnet & SRIOV Port Creation:
--------------------------------------------------------------------
#Openstack Commands:
openstack flavor create --ram <> --vcpus <> --disk <> \
  --property hw:cpu_cores='<>' --property hw:cpu_sockets='<>' \
  --property hw:cpu_thread_policy='require' --property hw:mem_page_size='1048576' \
  --property hw:numa_nodes='1' --property hw:numa_id=<> \
  --property aggregate_instance_extra_specs:<key>='value' <Flavor-name>
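The section title also covers network, subnet, and SRIOV port creation; a minimal sketch with the standard OpenStack CLI (the network name, physnet, VLAN ID, and CIDR are placeholders):
openstack network create --provider-network-type vlan --provider-physical-network <physnet> --provider-segment <vlan-id> <net-name>
openstack subnet create --network <net-name> --subnet-range <cidr> <subnet-name>
openstack port create --network <net-name> --vnic-type direct <port-name>   # vnic-type direct makes the port SRIOV-backed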
---------------------------------------------------------------------
List and configure SRIOV interfaces on a compute node, plus some other useful information:
Compute Node:
ip link set <physical-int> promisc on                 # enable promiscuous mode on the PF
ip link set <physical-int> vf <vf-num> trust on       # allow the VF to change its MAC/promisc settings
ip link set <physical-int> vf <vf-num> state auto     # VF link state follows the PF
ip link set <physical-int> vf <vf-num> mac <>         # pin the VF MAC address
/sys/class/net/<physical-int>/device/sriov/<vf-number>    # per-VF sysfs attributes (driver-dependent)
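To actually list the VFs (the "List" part of the heading), the standard iproute2/sysfs queries should work:
ip link show <physical-int>                              # shows each VF with its MAC, VLAN, and trust state
cat /sys/class/net/<physical-int>/device/sriov_numvfs    # number of VFs currently enabled
lspci | grep -i "virtual function"                       # VF PCI devices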
/opt/opwv/integraVNF/bin/run-vnfc-phase.sh create
Un-registration:
1. Stop all SCA services on the VM to be re-installed:
/etc/init.d/oamsca-v8.2 stop
2. Make sure there are no processes left running under the opwv user:
ps -ef | grep -i opwv
3. Run the unregister command from the VM itself:
/opt/opwv/registrationServer/bin/runUnRegClient.sh
4. Log in to the OAM GUI and search for the FE name in both systems, OAM 8.2 and
IntegraAndes, and remove any entries found. This step is only needed for FECP nodes;
expect one entry in each system.
5. Modify the FECP hosts records for the FE on both OAM01 and OAM02 with the new IP
details (a quick sanity check is sketched below).
6. Instantiate the new FE.
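A quick sanity check for step 5, before instantiating the new FE (standard commands; the FE name and IP are placeholders):
getent hosts <fe-name>    # confirm the updated hosts record resolves to the new IP
ping -c 3 <new-fe-ip>     # confirm the new IP is reachable from OAM01/OAM02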
----------------------------------------------------------------------
OWM Resizing Steps (a generic NSO sketch follows the subsections below):
OAM:
1) On CVIM: create the flavors, the aggregate list, and the new network.
2) Un-register all VMs from OAM01 and OAM02
3) Update the VNF-Info (openwave_OAM_1-vnf-info.xml and openwave_OAM_2-vnf-info.xml) of
OAM01 and OAM02.
4) Un-deploy VNF-Info of OAM VMs and remove them from NSO as well.
5) Load merge OAM01 and OAM02 VNF-Info
FEs:
1) Update the VNF-Info and the VNFD (openwave_FE.yaml) so they contain the newly created
network (for FEs only).
2) Re-package the VNF package under /var/opt/ncs/vnfpackages/openwave_FE/.
3) Un-deploy the VNF-Info (openwave_FE_1-vnf-info.xml, openwave_FE_2-vnf-info.xml,
openwave_FE_3-vnf-info.xml, openwave_FE_4-vnf-info.xml) of all VMs and remove them from
NSO as well.
4) Remove VNFD (openwave_FE.yaml) from NSO.
5) Upload the newly modified VNFD.yaml
6) Load merge VNF-Info of all FEs.
7) Register the VMs on OAM.
LVS:
1) Update VNF-Info (openwave_LVS_1-vnf-info.xml, openwave_LVS_2-vnf-info.xml).
2) Un-deploy VNF-Info of LVS VMs and remove them from NSO as well.
3) Load merge LVS01 and LVS02 VNF-Info
VOS:
1) Update VNF-Info (openwave_VOS_1-vnf-info.xml).
2) Un-deploy VNF-Info of VOS VMs and remove them from NSO as well.
3) Load merge VOS VNF-Info
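All four subsections above follow the same NSO cycle; a minimal sketch in ncs_cli (the vnf-info name and XML path are placeholders; un-deploy and load merge are the same operations used in the NRFU notes below):
ncs_cli -C
nfv vnf-info <vnf-info-name> un-deploy
config
no nfv vnf-info <vnf-info-name>
load merge /path/to/<name>-vnf-info.xml
commit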
------------------------------------------------------------------------
Archiving & Splitting a huge file:
tar -cvzf 3.4.6.tar.gz <>
tar -tvf 3.4.6.tar.gz
tar -xvzf 3.4.6.tar.gz -C <> =======> make sure the target directory already exists.
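For the splitting part of the title, a sketch with coreutils split (the 1G chunk size is illustrative):
split -b 1G 3.4.6.tar.gz 3.4.6.tar.gz.part_     # split into 1 GB chunks
cat 3.4.6.tar.gz.part_* > 3.4.6.tar.gz          # reassemble on the target host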
------------------------------------------------------------------------
Adding static routes (CLI transcript):
[edit]
LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.28.184.151/32 interface
internet next-hop 10.98.250.181
[ok][2022-02-21 09:20:06]
[edit]
LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.24.225.89/32 interface
internet next-hop 10.98.250.181
[ok][2022-02-21 09:20:39]
LOGIN@TESTBED_ELEMENTS% commit
Commit complete.
------------------------------------------------------------------------
Creating a windows image for openstack:
How to fix Windows could not parse or process the unattend answer file for Pass Specialize:
https://techglimpse.com/solve-windows-parse-process-unattend-answer-file/
VirtIO Installation:
https://techglimpse.com/create-windows-10-qcow2-image-openstack/?msclkid=31cfcce2ac1d11ecbe73de801cf7ca66
Cloud init:
https://cloudbase.it/cloudbase-init/#download
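A hedged outline of the build itself, using standard qemu-img and OpenStack CLI (disk size and names are placeholders; VirtIO driver installation is covered by the link above):
qemu-img create -f qcow2 win10.qcow2 40G
openstack image create --disk-format qcow2 --container-format bare --property os_type=windows --file win10.qcow2 <image-name>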
-------------------------------------------------------------------------
VM Migration States:
Queued: The Compute service has accepted the request to migrate an instance, and migration
is pending.
Preparing: The Compute service is preparing to migrate the instance.
Running: The Compute service is migrating the instance.
Post-migrating: The Compute service has built the instance on the destination Compute node
and is releasing resources on the source Compute node.
Completed: The Compute service has completed migrating the instance and finished releasing
resources on the source Compute node.
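These are the states you see after triggering a migration; a sketch of the trigger and watch commands (live-migration flags vary between client releases):
openstack server migrate --live-migration <server>   # newer OSC syntax; older clients use --live <host>
openstack server migration list --server <server>    # watch the states listed above
openstack server show <server> -c status -c OS-EXT-SRV-ATTR:host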
-------------------------------------------------------------------------
NRFU Notes:
on ESC:
mona is a subsystem of ESC; it monitors all deployed VMs.
health.sh
escadm status
cat /opt/cisco/esc/esc_database/snmp.conf
vimmanager is the ESC subsystem that communicates with the VIM.
tail -f /var/log/esc/etsi-vnfm/e*
tail -f /var/log/esc/escmanager.log
escadm ip_trans
/opt/cisco/esc/confd/bin/confd_cli -u admin -C
show running-config esc_datamodel tenants tenant etsi_tenant deployments deployment <>
escadm vim show
escadm etsi restart
oct4-esc-1(config)# no esc_datamodel tenants tenant etsi_tenant deployments deployment oct4-esc-sol3-vnf-info-te_cirros_autoscaling_01
oct4-esc-1(config)# commit
curl -k -u nfvo:G3pwF7
On NSO:
ncs_cli -C
show running-config snmp
devices device oct4-cvim connect
show alarms
alarms purge-alarms
show devices list
show packages package tailf-hcc
show high-availability status
show running-config high-availability
show ncs-state ha
high-availability be-master
show hcc bgp
high-availability enable
show running-config nfv vnfd te_cirros_mon_err
show running-config nfv vnf-info te_cirros_mon_err_01
show nfv vnf-info-plan <>
show nfv internal sol3-deployment-plan <>
show nfv internal sol3-deployment-result <>
# nfv vnf-info te_cirros_mon_err_01 scale aspect-id default_sa number-of-steps 1 type SCALE_<IN/OUT>
show running-config devices device oct4-esc-sol3 config vnf-subscriptions
nfv vnfd <> version <>
nfv settings etsi-sol3 server ip-address
nfv vnf-info te_cirros_autoscaling_01 un-deploy
no nfv vnf-info <>
nfv internal sol3-deployment-plan <> plan component sol3-unmanaged-vm-device <> force-back-track
nfv internal sol3-deployment-plan <> plan component vnf <> force-back-track
nfv internal sol3-deployment-plan <> plan component self self force-back-track
nfv internal sol3-deployment-plan <> plan component vdu <> force-back-track
#unhide nfvo
(config)# no nfv internal sol3-deployment <>
nfv internal sol3-deployment-result oct4-esc-sol3-vnf-info-te_cirros_autoscaling_01 actions terminate
nfv cisco-nfvo:actions vnf lcm retry service <> lcm-id <>
netstat -lntp | grep 9090
#(config) nfv vnf-info te_cirros_autoscaling_01 touch
--> Scaling must be enabled on the descriptor first, via the deployment flavour in the VNFD.
systemctl restart ncs
On VNF-INFO:
<additional-parameters>
  <id>VIM_ZONE</id>
  <value>nova:OCT4-VIM1-CN-004</value>
</additional-parameters>
on cvim:
MGMT:
ciscovim cloud-sanity list test all
cat /root/openstack-configs/setup_data.yaml
ciscovim list-nodes
ciscovim remove-computes --setupfile <> <>
ciscovim list-secrets --getpassword ADMIN_USER_PASSWORD
ciscovim list-secrets
ciscovim list-steps
on compute:
docker ps    (aliased to dp)
ciscovim add-compute --setupfile <> <>
on AO:
virsh destroy <>
virsh start <>
scope chassis
show cups-utilization
on cirros:
udhcpc eth1
on compute:
tcpdump -i <> -vvvn
===================================
On VM: dd if=/dev/zero of=/dev/zero &    # generate CPU load, e.g. for scaling tests
VNFD (TOSCA):
There can be many VNF-infos per VNFD, based on the number of required VMs.
VDUs exist inside the VNFD.
=========
On NSO:
su - admin
ncs_cli -C
show high-availability
=============
On ESC:
sudo escadm portal start
/opt/cisco/esc/confd/bin/confd_cli -u admin -C
config
no esc_datamodel tenants tenant etsi_tenant deployments deployment auto-esc-sol3-vnf-inf<>
commit
lsof -i -P
======================
ssh OCT4-VIM1-CTRL-1
cephmon
ceph -s
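If ceph -s reports warnings, a few standard Ceph follow-ups:
ceph health detail    # expand the warning/error summary
ceph osd tree         # OSD up/down layout per host
ceph df               # pool and raw capacity usage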
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++