Some Useful Commands

#Domain & Project Creation:

openstack domain create --description "Domain description" <domain-name>

openstack project create --domain <domain-name> \
  --description "Project Description" <project-name>

openstack user create --email <> --description <> --domain <domain-name> \
  --password <password> <username>

openstack role add --user <user-id> --domain <domain-name> admin
openstack role add --user <user-id> --project <project-name> admin

--------------------------------------------------------------------
#Networks & Subnet & SRIOV Port Creation:

openstack network create --share --external --provider-network-type vlan \
  --provider-segment <> --provider-physical-network <> <network-name>

openstack subnet create --subnet-range <> --network <network-name> <subnet-name>

openstack port create --vnic-type direct \
  --fixed-ip subnet=<subnet-name>,ip-address=<IP-address> --network <network-name> \
  --binding-profile trusted=true --disable-port-security <port-name>

--------------------------------------------------------------------
#Openstack Commands:

openstack domain create --description "<Domain-Description>" <Domain-name>

openstack project create --domain <Domain-name> \
  --description "<Project-Description>" <Project-name>

openstack user create --email "<User-email>" --description "<User-Description>" \
  --domain <Domain-name> --password "<user-password>" <user-name>

openstack role add --user <user-id> --domain <Domain-name> <role-name>
openstack role add --user <user-id> --project <Project-name> <role-name>

openstack aggregate create <aggregate-name>
openstack aggregate add host <aggregate-name> <host>
openstack aggregate set --property <key>='value' <aggregate>

os flavor create --ram <> --vcpus <> --disk <> --property hw:cpu_cores='<>' \
  --property hw:cpu_sockets='<>' --property hw:cpu_thread_policy='require' \
  --property hw:mem_page_size='1048576' --property hw:numa_nodes='1' \
  --property hw:numa_id=<> --property aggregate_instance_extra_specs:<key>='value' \
  <Flavor-name>

Possible Flavor Metadata:
https://docs.openstack.org/nova/rocky/user/flavors.html

os volume create --size <> --non-bootable <volume-name>

os image create --file <path-of-image> --container-format bare --disk-format qcow2 <Target-Name>

openstack server create --image <image-name> --flavor <Flavor-name> \
  --nic net-id=<>,v4-fixed-ip=<Fixed-Ip> --port=<port-id> --config-drive True \
  --user-data <Day0-Config-Path> --availability-zone nova:<Target-Compute> \
  <Instance-name> --wait

openstack server add volume <Instance-name> <volume-name>

openstack server create --image <image-name> --flavor <Flavor-name> \
  --nic net-id=$(openstack network list --name <Network-name> -c ID -f value),v4-fixed-ip=<Fixed-Ip> \
  --user-data <Day0-Config-Path> --config-drive True <Instance-name>

nova interface-attach --port-id <port-id> <instance-id>

nova interface-detach <instance-id> <port-id>

nova interface-attach --net-id <network-id> --fixed-ip <Fixed-Ip> <instance-id>

openstack server resize --flavor <Flavor-name> <Instance-name>
openstack server resize confirm <Instance-name>

---------------------------------------------------------------------
List SRIOV Interfaces on a compute node and some other useful information:

lstopo-no-graphics | grep -e NUMANode -e sriov
libvirt virsh nodeinfo
libvirt virsh capabilities
libvirt virsh nodecpumap
numactl -H

Compute Node:
cat /etc/nova/nova.conf | grep vcpu_pin_set
cat /proc/cmdline

MGMT Node:
cat openstack-configs/setup_data.yaml
cat openstack-configs/openstack_config.yaml

Compute Node:
ip link set <physical-int> promisc on
ip link set <physical-int> vf <vf-num> trust on
ip link set <physical-int> vf <vf-num> state auto
ip link set <physical-int> vf <vf-num> mac <>
/sys/class/net/<physical-int>/device/sriov/<vf-number>
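The hw:numa_id flavor property used earlier has to match the NUMA node the SR-IOV NIC actually sits on. Besides lstopo, a minimal sketch (assuming standard Linux sysfs paths; interfaces without a PCI backing device, such as lo, report "n/a" and 0) that lists each interface's NUMA node and configured VF count:

```shell
# Walk sysfs and report NUMA locality and VF count per network interface.
for dev in /sys/class/net/*; do
  name=$(basename "$dev")
  # numa_node is absent for non-PCI devices; -1 means no locality reported
  numa=$(cat "$dev/device/numa_node" 2>/dev/null || echo "n/a")
  # sriov_numvfs exists only on SR-IOV-capable ports
  vfs=$(cat "$dev/device/sriov_numvfs" 2>/dev/null || echo "0")
  echo "$name numa=$numa vfs=$vfs"
done
```

On SR-IOV-capable ports, sriov_numvfs shows how many VFs are currently configured; a numa_node value of -1 means the platform does not report locality for that device.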

A static network interface definition file (example: ens4.2305):
DEVICE="ens4.2305"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=192.168.45.2
NETMASK=255.255.255.0
NETWORK=192.168.45.0
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
----------------------------------------------------------------------
FE Registration & Un-Registration:
Registration:
puppet apply /etc/puppet/modules/opwv_virtualized_host/tests/onboot.pp

puppet apply /etc/puppet/modules/integraVNF/manifests/onboot.pp

puppet apply /etc/puppet/modules/integravnf/manifests/onboot.pp

/opt/opwv/integraVNF/bin/run-vnfc-phase.sh create

Un-registration:
1. Stop all sca services on VM to be re-installed
/etc/init.d/oamsca-v8.2 stop
2. Make sure there is no running process left with opwv user
ps -ef | grep -i opwv
3. Run unregister command from VM itself
/opt/opwv/registrationServer/bin/runUnRegClient.sh

4. Log in to the OAM GUI and search for the FE name in both systems (OAM 8.2 and
IntegraAndes); remove any entries found. (This step is only needed for FECP nodes.)
You should expect one entry for each system.

5. On OAM01 and OAM02, take a backup of the /etc/hosts file:
cp -p /etc/hosts /etc/hosts.backup

6. Modify FECP hosts records for the FE on both OAM01 and OAM02 with new IP details
7. Instantiate new FE

----------------------------------------------------------------------
OWM Resizing Steps:
OAM:
1) On CVIM: create the flavors, the aggregate list, and the new network.
2) Un-register all VMs from OAM01 and OAM02.
3) Update the VNF-Info (openwave_OAM_1-vnf-info.xml and openwave_OAM_2-vnf-info.xml) of
OAM01 and OAM02.
4) Un-deploy the VNF-Info of the OAM VMs and remove them from NSO as well.
5) Load merge the OAM01 and OAM02 VNF-Info.

FEs:
1) Update the VNF-Info and the VNFD (openwave_FE.yaml) to contain the newly created
network (for FEs only).
2) Package the VNF package under /var/opt/ncs/vnfpackages/openwave_FE/.
3) Un-deploy the VNF-Info (openwave_FE_1-vnf-info.xml, openwave_FE_2-vnf-info.xml,
openwave_FE_3-vnf-info.xml, openwave_FE_4-vnf-info.xml) of all VMs and remove them from
NSO as well.
4) Remove the VNFD (openwave_FE.yaml) from NSO.
5) Upload the newly modified VNFD.yaml.
6) Load merge the VNF-Info of all FEs.
7) Register the VMs on OAM.

LVS:
1) Update the VNF-Info (openwave_LVS_1-vnf-info.xml, openwave_LVS_2-vnf-info.xml).
2) Un-deploy the VNF-Info of the LVS VMs and remove them from NSO as well.
3) Load merge the LVS01 and LVS02 VNF-Info.

VOS:
1) Update the VNF-Info (openwave_VOS_1-vnf-info.xml).
2) Un-deploy the VNF-Info of the VOS VMs and remove them from NSO as well.
3) Load merge the VOS VNF-Info.
------------------------------------------------------------------------
Archiving & Splitting a huge file:
tar -cvzf 3.4.6.tar.gz <>
tar -tvf 3.4.6.tar.gz
tar -xvzf 3.4.6.tar.gz -C <> =======> make sure the target directory already exists.

split -b 10000m <>.tar.gz "3.6.2.tar.gz.part"
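Before extraction on the target host, the parts have to be reassembled in order; cat with a shell glob does this because the part suffixes (partaa, partab, ...) sort lexically. A self-contained sketch of the round trip (file names and sizes here are just examples, not the 10000m parts above):

```shell
# Work in a scratch directory with a small example payload
tmp=$(mktemp -d) && cd "$tmp"
mkdir payload && head -c 1048576 /dev/urandom > payload/data.bin

tar -czf 3.6.2.tar.gz payload                     # create the archive
split -b 300k 3.6.2.tar.gz "3.6.2.tar.gz.part"    # split into ~300 KB parts

cat 3.6.2.tar.gz.part* > reassembled.tar.gz       # glob expands in sorted order

# Byte-for-byte comparison of original and reassembled archives
cmp 3.6.2.tar.gz reassembled.tar.gz && echo "archives match"
```

The same `cat parts* > whole.tar.gz` step is what you run on the destination after transferring the parts.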


------------------------------------------------------------------------
DPI AL Day0-Config:
set data-plane engine-mode plos
set system timezone Africa/Cairo
set system hostname <Hostname>
set system authentication users cli admin
set system authentication users cli elements
set system authentication users cli operator
set system authentication users cli readonly
set system network interface admin enabled
set system network interface admin ipv4 static address <IP-address>
set system network interface admin ipv4 static prefix-length 29
set system network interface admin ipv4 static router <Gateway>
set system network interface admin physical-interfaces eth0
set system network interface aux_service ipv4 static address <IP-address>
set system network interface aux_service ipv4 static prefix-length 29
set system network interface aux_service physical-interfaces eth1
set system ntp server 172.20.24.20
set system snmp receivers 10.98.36.5:3304
set system snmp receivers 10.98.36.9:3304
set system snmp enabled
set system snmp traps true
set system snmp community sandvine@tb
------------------------------------------------------------------------
Setting a static route on DPI Element Node & DNS:
LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.33.167.114/32 interface internet next-hop 10.98.250.181
[ok][2022-02-21 09:19:18]

[edit]
LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.28.184.151/32 interface internet next-hop 10.98.250.181
[ok][2022-02-21 09:20:06]

[edit]
LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.24.225.89/32 interface internet next-hop 10.98.250.181
[ok][2022-02-21 09:20:39]

LOGIN@TESTBED_ELEMENTS% set system network static-routes 52.74.59.224/32 interface internet next-hop 10.98.250.181
[ok][2022-02-21 09:21:05]

LOGIN@TESTBED_ELEMENTS% commit
Commit complete.

LOGIN@TESTBED_ELEMENTS% set system network nameserver 10.8.8.8
[ok][2022-02-21 09:22:31]

[edit]
LOGIN@TESTBED_ELEMENTS% commit
Commit complete.
------------------------------------------------------------------------
Creating a Windows image for OpenStack:

How to create a Windows image for OpenStack:
https://techglimpse.com/create-windows-10-qcow2-image-openstack/

How to fix "Windows could not parse or process the unattend answer file for Pass Specialize":
https://techglimpse.com/solve-windows-parse-process-unattend-answer-file/

VirtIO installation (covered in the same guide):
https://techglimpse.com/create-windows-10-qcow2-image-openstack/

Cloud-init:
https://cloudbase.it/cloudbase-init/#download
-------------------------------------------------------------------------
VM Migration States:
Queued: The Compute service has accepted the request to migrate an instance, and migration
is pending.
Preparing: The Compute service is preparing to migrate the instance.
Running: The Compute service is migrating the instance.
Post-migrating: The Compute service has built the instance on the destination Compute node
and is releasing resources on the source Compute node.
Completed: The Compute service has completed migrating the instance and finished releasing
resources on the source Compute node.
-------------------------------------------------------------------------
NRFU Notes:
on ESC:
MONA is a subsystem of ESC; it monitors all VMs.
health.sh
escadm status
cat /opt/cisco/esc/esc_database/snmp.conf
vimmanager communicates with VIM
tail -f /var/log/esc/etsi-vnfm/e*
tail -f /var/log/esc/escmanager.log
escadm ip_trans
/opt/cisco/esc/confd/bin/confd_cli -u admin -C
show running-config esc_datamodel tenants tenant etsi_tenant deployments deployment <>
escadm vim show
escadm etsi restart
oct4-esc-1(config)# no esc_datamodel tenants tenant etsi_tenant deployments deployment oct4-esc-sol3-vnf-info-te_cirros_autoscaling_01
oct4-esc-1(config)# commit
curl -k -u nfvo:G3pwF7

puppet apply /etc/puppet/modules/integravnf/manifests/onboot.pp

On NSO:
ncs_cli -C
show running-config snmp
devices device oct4-cvim connect
show alarms
alarms purge-alarms
show devices list
show packages package tailf-hcc
show high-availability status
show running-config high-availability
show ncs-state ha
high-availability be-master
show hcc bgp
high-availability enable
show running-config nfv vnfd te_cirros_mon_err
show running-config nfv vnf-info te_cirros_mon_err_01
show nfv vnf-info-plan <>
show nfv internal sol3-deployment-plan <>
show nfv internal sol3-deployment-result <>
# nfv vnf-info te_cirros_mon_err_01 scale aspect-id default_sa number-of-steps 1 type SCALE_<IN/OUT>
show running-config devices device oct4-esc-sol3 config vnf-subscriptions
nfv vnfd <> version <>
nfv settings etsi-sol3 server ip-address
nfv vnf-info te_cirros_autoscaling_01 un-deploy
no nfv vnf-info <>
nfv internal sol3-deployment-plan <> plan component sol3-unmanaged-vm-device <> force-back-track
nfv internal sol3-deployment-plan <> plan component vnf <> force-back-track
nfv internal sol3-deployment-plan <> plan component self self force-back-track
nfv internal sol3-deployment-plan <> plan component vdu <> force-back-track
#unhide nfvo
(config)# no nfv internal sol3-deployment <>
nfv internal sol3-deployment-result oct4-esc-sol3-vnf-info-te_cirros_autoscaling_01 actions terminate
nfv cisco-nfvo:actions vnf lcm retry service <> lcm-id <>
netstat -lntp | grep 9090
#(config) nfv vnf-info te_cirros_autoscaling_01 touch
--> Scaling should be enabled first on the descriptor.
--> The deployment flavor is defined on the VNFD.
systemctl restart ncs
On VNF-INFO:
<additional-parameters>
<id>VIM_ZONE</id>
<value>nova:OCT4-VIM1-CN-004</value>
</additional-parameters>

on cvim:
MGMT:
ciscovim cloud-sanity list test all
cat /root/openstack-configs/setup_data.yaml
ciscovim list-nodes
ciscovim remove-computes --setupfile <> <>
ciscovim list-secrets --getpassword ADMIN_USER_PASSWORD
ciscovim list-secrets
ciscovim list-steps
on compute:
docker ps (aliased as dp)
ciscovim add-compute --setupfile <> <>
on AO:
virsh destroy <>
virsh start <>
scope chassis
show cups-utilization

on cirros:
udhcpc eth1

on compute:
tcpdump -i <> -vvvn

===================================
On VM: dd if=/dev/zero of=/dev/zero &
The VNFD is a TOSCA descriptor.
Many VNF-Infos are created, based on the number of required VMs.
VDUs exist inside the VNFD.

=========
On Nso:
su - admin
ncs_cli -C
show high-availability

show nfv vnf-info-plan <>
nfv vnf-info <> un-deploy
nfv vnf-info <> re-deploy
zombies service /nfv/cisco-nfvo:internal/sol3-deployment
show nfv vnf-info-plan <>
nfv cisco-nfvo:action vnf lcm retry service <> lcm-id <>
show nfv internal sol3-deployment-result <>

=============
On ESC:
sudo escadm portal start
/opt/cisco/esc/confd/bin/confd_cli -u admin -C
config
no esc_datamodel tenants tenant etsi_tenant deployments deployment auto-esc-sol3-vnf-inf<>
commit

lsof -i -P
======================
ssh OCT4-VIM1-CTRL-1
cephmon
ceph -s
