VMware Telco Cloud Automation User Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2023 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
1 Introduction
Common Abbreviations
API Documentation
Deployment Architecture
Supported Features on Different VIM Types
2 Getting Started
Viewing the Dashboard
5 Kubernetes Policies
Overview of Kubernetes Policies
CNF Global Permission Enforcement
Types of Kubernetes Policies
Lifecycle of an RBAC Policy
Create a Policy Manually
Edit a Policy
Clone a Policy
Download a Policy
Delete a Policy
Finalize a Policy
Grant a Policy
Edit a Policy Grant
Delete a Policy Grant
View VIM Policy Grants
View Policy Grants For The CNF Package
31 Appendix
Enable Virtual Hyper-Threading
A1: PTP Overview
A2: Host Profile for PTP and ACC100
Prerequisites for ACC100 and PTP
Obtaining the Custom File for ACC100
Host Profile for PTP in Passthrough mode and ACC100
Host Profile for PTP over VF and ACC100
Applying Host Profile to Cell Site Group
CSAR Configuration for PTP and ACC100
Symmetric Layout - Dual Socket Two NUMA System
Hyper-threading and NUMA
ACC 100 Support for ESXi 8.0 Upgrade
PTP Notifications
Install O-Cloud DaemonSet
Integrate Sidecar with DU Pod
Setup User/Group/Storage Policy in vCenter Server for vSphere CSI
1 Introduction
The VMware Telco Cloud Automation User Guide provides information about how to use
VMware Telco Cloud Automation™. Steps to add your virtual and container infrastructure, and to
create and manage network functions and services, are covered in this guide.
Intended Audience
This information is intended for Telco service providers and users who want to use VMware
Telco Cloud Automation for designing and onboarding network functions and services. It is also
intended for users who want to transition to the cloud-native architecture with Container-as-a-
Service (CaaS) automation, and manage the Kubernetes clusters from a centralized system. To
deploy and activate the VMware Telco Cloud Automation Manager and TCA-CP services, see the
VMware Telco Cloud Automation Deployment Guide.
VMware Telco Cloud Automation provides:
n A native integration for Virtualized Infrastructure Managers (VIMs) and cloud products such as VMware vCloud NFV, vSphere-based clouds, VMware on mega-cloud providers, and Kubernetes clouds. These integrations streamline your CSP orchestrations and optimize your NFV Infrastructure (NFVI) resource use.
n A standard-driven generic VNF manager (G-VNFM) and NFV Orchestration (NFVO) modular
components to integrate any multi-vendor Management and Network Orchestration (MANO)
architecture.
n VMware Telco Cloud Manager™ - Provides Telcos with NFV-MANO capabilities and enables
the automation of deployment and configuration of Network Functions and Network
Services.
n VMware Telco Cloud Automation Control Plane (TCA-CP) - Provides the infrastructure for
placing workloads across clouds using VMware Telco Cloud Automation.
This chapter includes the following topics:
n Common Abbreviations
n API Documentation
n Deployment Architecture
Common Abbreviations
Some frequently used abbreviations in this guide are listed here with their descriptions.
NFV
Network Functions Virtualization - The process of decoupling a network function from its
proprietary hardware appliance and running it as a software application in a virtual machine.
VNF
A Virtual Network Function (VNF) is a collection of virtual machines interconnected with virtual
links. A VNF exposes its functionality through external connection points. It is managed by a
Virtual Network Function Manager (VNFM), and it can be composed into a higher-level Network
Service (NS) by a Network Function Virtualization Orchestrator (NFVO).
Network Service
A Network Service is a collection of network functions: Virtual (VNF), Cloud-Native (CNF), or
Physical (PNF); interconnected with virtual or physical links. It is managed by an NFVO. A
network function exposes its functionality through external connection points.
CNF
A Cloud-Native Network Function (CNF) is a containerized network function that uses cloud-
native principles. CNFs are designed to run inside containers. Containerization makes it possible
to run services and onboard applications on the same cluster, while directing network traffic to
correct pods.
NFVI
Network Functions Virtualization Infrastructure - Is the foundation of the overall NFV architecture.
It provides the physical compute, storage, and networking hardware that hosts the VNFs. Each
NFVI block can be thought of as an NFVI node and many nodes can be deployed and controlled
geographically.
MANO
Management and Orchestration - Manages the resources in the infrastructure, orchestration, and
life cycle operations of VNFs, CNFs, and Network Services.
VIM
Virtualized Infrastructure Manager - Is a functional block of the MANO and is responsible
for controlling, managing, and monitoring the NFVI compute, storage, and network hardware,
the software for the virtualization layer, and the virtualized resources. The VIM manages the
allocation and release of virtual resources, and the association of virtual to physical resources,
including the optimization of resources.
NFVO
NFV Orchestrator - Is a central component of an NFV-based solution. It brings together different
functions to make a single orchestration service that encompasses the whole framework and has
a well-organized resource use.
VNFM
A VNF Manager (VNFM) is responsible for the lifecycle management of Virtual Network
Functions (VNF). It interacts with VIM, NFVO, and Network Function Catalog during lifecycle
management operations.
NFD
Network Function Descriptor - Is a deployment template that describes a network function
deployment and operational requirement. It is used to create a network function where life-cycle
management operations are performed.
SVNFM
Specific VNFM. SVNFMs are tightly coupled with the VNFs they manage.
GVNFM
Generic VNFM.
Kubernetes Pods
Kubernetes Pods are inspired by pods found in nature (pea pods or whale pods). The Pods are
groups of containers that share networking and storage resources from the same node. They are
created with an API server and placed by a controller. Each Pod is assigned an IP address, and all
the containers in the Pod share storage, IP address, and port space (network namespace).
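For illustration only, a minimal Pod manifest with two containers that share the Pod network namespace and an emptyDir volume might look like the following; the names and images are hypothetical examples, not values used by VMware Telco Cloud Automation.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                  # hypothetical Pod name
spec:
  volumes:
  - name: shared-data                # scratch volume shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data

Both containers are reachable through the same Pod IP address and can communicate with each other over localhost.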
CSI
Container Storage Interface. A specification designed to enable persistent storage volume
management on Container Orchestrators (COs) such as Kubernetes. The specification allows
storage systems to integrate with containerized workloads running on Kubernetes. Using CSI,
storage providers, such as VMware, can write and deploy plug-ins for storage systems in
Kubernetes without a need to modify any core Kubernetes code.
CNI
Container Network Interface. The CNI connects Pods across nodes, acting as an interface
between a network namespace and a network plug-in or a network provider and a Kubernetes
network.
TCA-CP
VMware Telco Cloud Automation Control Plane. Previously known as VMware HCX for Telco
Cloud.
API Documentation
You can also operate VMware Telco Cloud Automation using APIs.
To view the VMware Telco Cloud Automation API Explorer, click the Help (?) icon in the top-right corner of the VMware Telco Cloud Automation user interface and select API Documentation.
Deployment Architecture
VMware Telco Cloud Automation implements an architecture that is outlined and defined at a high level through logical building blocks and core components.
Figure: Deployment architecture showing vCenter Server (authentication), the VMware Telco Cloud Automation Manager, SVNFM integration, and RabbitMQ.
n vCenter Server is used for authenticating and signing in to VMware Telco Cloud Automation.
n VMware Telco Cloud Automation supports registration of SOL 003-based SVNFMs.
n VMware Telco Cloud Automation Control Plane (TCA-CP) is deployed on the VIM and paired
with VMware Telco Cloud Automation Manager.
n VMware Telco Cloud Automation Manager connects with TCA-CP to communicate with the
VIMs. The VIMs are cloud platforms such as vCloud Director, vSphere, Kubernetes Cluster, or
VMware Integrated OpenStack.
n vRealize Orchestrator is registered with TCA-CP and is used to run NFV workflows. You can
register for each VIM or for the entire network of VIMs. For information about registering
vRealize Orchestrator with TCA-CP, see VMware Telco Cloud Automation Deployment Guide.
n RabbitMQ is used to track VMware Cloud Director and VMware Integrated OpenStack
notifications and is required only for deployments on these clouds.
Supported Kubernetes versions: 1.22.9, 1.22.13, 1.22.17, 1.23.10, 1.23.16, and 1.24.10.
2 Getting Started
Complete these high-level tasks to start using VMware Telco Cloud Automation.
For steps to install and set up these components, see the VMware Telco Cloud Automation
Deployment Guide.
2 Create roles and assign permissions. See Chapter 4 Managing Roles and Permissions.
Clouds
Alarms
Displays alarms that are in the Critical and Warning states.
Network Functions
Displays the number of instantiated and not instantiated network functions and catalogs. To
go to the Network Function Catalog page, click the Catalog icon.
Network Services
Displays the number of instantiated and not instantiated network services. To go to the
Network Service Catalog page, click the Catalog icon.
Resource Allocation
Displays the percentage of CPU, memory, and storage allocated across the clouds.
Resource Utilization
Displays the percentage of CPU, memory, and storage resources used across the clouds.
3 Add an Active Directory
You can add an Active Directory in VMware Telco Cloud Automation.
VMware Telco Cloud Automation now supports authentication through vCenter and Active
Directory. You can configure Active Directory for a new deployment or you can upgrade the
already deployed VMware Telco Cloud Automation to the latest version and configure the Active
Directory settings in the upgraded VMware Telco Cloud Automation.
You can log in to the VMware Telco Cloud Automation Appliance Manager and configure
the Active Directory settings to integrate VMware Telco Cloud Automation with your Active
Directory server.
Note Ensure that the logon user name is less than or equal to 20 characters. If the logon user
name is more than 20 characters, the login works but the group retrieval of the user fails, causing
the login to VMware Telco Cloud Automation to fail.
Prerequisites
Note
n When using the Active Directory server, ensure that the Active Directory server is reachable from VMware Telco Cloud Automation Manager.
n Active Directory is available only for Telco Cloud Automation Manager and not for Telco Cloud Automation Control Plane.
n Only the users associated with adminGroupName can inherit the system administrator privileges
on VMware Telco Cloud Automation.
n Ensure that you have access to VMware Telco Cloud Automation Appliance Manager.
n Ensure that you have users and groups created in Active Directory server.
n To add an Active Directory for a new deployment of VMware Telco Cloud Automation, see Add an Active Directory for New Deployment.
n To add an Active Directory for an existing deployment of VMware Telco Cloud Automation, see Add an Active Directory for Existing Deployment.
Follow this procedure to add Active Directory support in a newly deployed VMware Telco Cloud Automation Manager.
Procedure
2 Enter the required details for Activation, Datacenter Location, and System Name.
3 Click Continue to save the changes and continue with the deployment.
4 To add the authentication details, select the Active Directory option on the Select
Authentication Provider page.
5 Add the following details on the Connect Your Active Directory for TCA page:
Note You can add the Active Directory configuration for both VMware Telco Cloud
Automation Manager and the VMware Telco Cloud Automation Appliance Manager.
n Base Distinguished Name for Users - The base distinguished name for the users of the
LDAP directory.
n Base Distinguished Name for Groups - The base distinguished name for the groups of
the LDAP directory.
n Admin User Distinguished Name - The base distinguished name for the administrator of
the LDAP directory.
n Admin Group Name - Name of the administrator group of the LDAP directory.
6 Click Save to save the changes and continue with the deployment.
Follow this procedure to add Active Directory support in an existing VMware Telco Cloud Automation Manager.
Procedure
Note
n You can add the Active Directory configuration for both VMware Telco Cloud Automation
Manager and the VMware Telco Cloud Automation Appliance Manager.
n Switching the authentication provider from the existing vCenter to Active Directory adds
Active Directory and deletes vCenter and SSO configurations. It also removes the access
to VMware Telco Cloud Automation for the existing users configured in vCenter and
permissions set in VMware Telco Cloud Automation.
n Base Distinguished Name for Users - The base distinguished name for the users of the
LDAP directory.
n Base Distinguished Name for Groups - The base distinguished name for the groups of
the LDAP directory.
n Admin User Distinguished Name - The base distinguished name for the administrator of
the LDAP directory.
n Admin Group Name - Name of the administrator group of the LDAP directory.
What to do next
Modify the user group for each permission and set it to the Active Directory group. For example, for a system admin user, you can change the user group from vsphere/sysadmin to cn=admingroup,ou=groups,dc=server,dc=net. For details, see Create Permission.
4 Managing Roles and Permissions
A role is a predefined set of privileges. Privileges define the rights to perform actions and read
properties. For example, the Virtual Infrastructure Administrator role allows a user to read, add,
edit, and delete VIMs. This role also allows the user to perform all the life-cycle management
operations on a Kubernetes cluster template and a Kubernetes cluster instance.
As a vCenter Server user, when you configure vCenter Server in the VMware Telco Cloud
Automation appliance, you are assigned the System Administrator role to access VMware Telco
Cloud Automation. Use this role to create roles and permissions for your users.
n Enabling Users and User Groups to Access VMware Telco Cloud Automation
n Tokens
Note Ensure that the logon user name is less than or equal to 20 characters. If the logon user
name is more than 20 characters, the login works but the group retrieval of the user fails, causing
the login to VMware Telco Cloud Automation to fail.
To enable a specific vCenter Server user or a user group to access and use VMware Telco Cloud
Automation, you must perform the following steps:
3 Assign the appropriate Roles to the user or user group. A Role determines the privileges that
the user or user group receives for accessing VMware Telco Cloud Automation.
4 To restrict access for your user or user group to specific objects, you can define the
restrictions in the Advance Filter criteria.
Users or user groups with the assigned Role can access and use VMware Telco Automation, and
perform tasks according to the specified permissions.
As a System Administrator, you can restrict a user to access only specific objects. For example,
you can assign permissions to VNF Administrators to access only specific VNFs. The Advance
Filter option allows you to provide object-level permissions to roles.
The two major object groups that have an implicit parent-child relationship are:
n Filters that are applied to objects at the parent level are also applied to child objects. For
example, you create permissions for your VNF Administrator with filters to view the VNF
Catalogs of a vendor. When the VNF Administrator logs in, they can view the VNF Catalogs
and the VNFs that belong to the vendor. Here, the parent object is the VNF Catalog and the
child object is the VNF.
You can enable Advance Filter and assign object-level permissions when you create or edit
permissions. For steps to create permissions, see Create Permission.
System-defined Privileges
The following tables list the system-defined privileges:
System Audit - Read privileges for all operations.
Includes: Virtual Infrastructure Audit, Partner System Read, Network Service Instance Read, Network Service Catalog Read, Network Function Catalog Read, Network Function Instance Read, Role Audit, Workflow Read, System Audit
Objects: All
Role Audit - Read privileges for all Roles operations.
Includes: Role Audit
Objects: Roles and Permissions
Network Function Catalog Design - Design privileges for Network Function Catalog.
Includes: Network Function Catalog Read, Workflow Read, Workflow Design, Network Function Catalog Design
Objects: Network Function Catalog, Workflow Catalog

Network Function Catalog Instantiate - Instantiation privileges for Network Function Catalog.
Includes: Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Workflow Read, Network Function Catalog Instantiate
Objects: Network Function Catalog, Workflow Catalog

Network Function Instance Read - Read privileges for Network Function Instance.
Includes: Network Function Instance Read
Objects: Network Function Instance, Network Function Catalog

Network Function Instance Lifecycle Management - Lifecycle management privileges for Network Function Instance.
Includes: Network Function Instance Read, Network Function Catalog Instantiate, Network Function Catalog Read, Virtual Infrastructure Consume, Workflow Read, Workflow Execute, Network Function Instance Lifecycle Management
Objects: Network Function Instance, Workflow Catalog, Workflow Instance
Network Service Catalog Design - Design privileges for Network Service Catalog.
Includes: Network Service Catalog Read, Network Function Catalog Read, Workflow Read, Workflow Design, Network Service Catalog Design
Objects: Network Service Catalog, Workflow Catalog

Network Service Catalog Read - Read privileges for Network Service Catalog.
Includes: Network Function Catalog Read, Workflow Read, Network Service Catalog Read
Objects: Network Service Catalog, Workflow Catalog

Network Service Catalog Instantiate - Instantiation privileges for Network Service Catalog.
Includes: Network Service Catalog Read, Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Network Service Instance Read, Workflow Read, Network Service Catalog Instantiate
Objects: Network Service Catalog, Workflow Catalog

Network Service Instance Lifecycle Management - Lifecycle Management privileges for Network Service Instance.
Includes: Network Service Catalog Instantiate, Network Service Catalog Read, Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Network Function Catalog Instantiate, Workflow Read, Workflow Execute, Network Service Instance Lifecycle Management
Objects: Network Service Instance, Workflow Catalog, Workflow Instance

Network Service Instance Read - Read privileges for Network Service Instance.
Includes: Network Service Instance Read
Objects: Network Service Instance, Network Service Catalog
System-defined Roles
The following table lists the system-defined roles.
Role Privileges
Create a Role
Create a role and assign specific permissions.
Prerequisites
Procedure
2 From the top-right corner, click the drop-down menu next to the User icon. Go to
Authorization > Roles.
4 Enter the role name, an optional description, and select the privileges to be associated with
that role.
5 Click Save.
Results
Your role is created successfully and is displayed under the list of roles.
What to do next
n To delete a role, click Delete. Before you can delete a role, you must delete all its associated permissions.
Create Permission
Create permissions that are applicable only to specific users and user groups.
Prerequisites
Procedure
2 From the top-right corner, click the drop-down menu next to the User icon. Go to
Authorization > Permissions.
n User(s) / User Group(s) - Enter the user name or the group name to associate the
permission with. To validate the user and user group name and to associate the
permissions, click Validate.
Note
n When using Active Directory by group, you can provide the group in the following
format cn=admingroup,ou=groups,dc=server,dc=net.
n When using Active Directory by username, you can provide the user name in the
following format userName@ad.
n When using vCenter, the format to enter the group name is domain\groupName.
n Configure Advanced Filters - Select this option if you want to add advanced filters such
as specific object type, attribute, metric, and their values. For example, you can associate
the permissions that you create for a Network Function Deployer to access a specific
Network Function Catalog, a Network Function Instance, Network Service Catalog,
Network Service Instance, or a Virtual Infrastructure. Click Add. You can also filter objects
in the catalog based on tags by adding specific tags and values to permissions.
5 Click Save.
Results
Your permission is created successfully and is displayed under the list of permissions.
Tokens
VMware Telco Cloud Automation generates a token each time a user remotely accesses a
Kubernetes cluster or a VMware Telco Cloud Automation Control Plane (TCA-CP).
n Virtual Infrastructure SSH Token: This token is generated when you use login credentials or
the embedded terminal session for accessing a Kubernetes cluster.
n Virtual Infrastructure REST: This token is generated when you use the Download Kube
Config option for accessing a Kubernetes cluster.
n Network Function SSH: This token is generated when you use login credentials or the
embedded terminal session for accessing a Network Function.
n Network Function REST: This token is generated when you use the Download Kube Config
option for accessing a Network Function.
n TCA-M: This token is generated when you use the Show Login Credentials or Open Terminal
options for accessing the TCA-CP.
To view more information about a token, click the drop-down arrow against the token. A token
that is not utilized expires after eight hours. A system administrator can revoke a token at any
time.
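For reference, the file retrieved through the Download Kube Config option follows the standard kubeconfig format, with the generated token placed in the user entry. The following is a minimal sketch; the cluster name, server address, and placeholder values are hypothetical.

apiVersion: v1
kind: Config
clusters:
- name: workload-cluster                     # hypothetical cluster name
  cluster:
    server: https://192.0.2.10:6443           # hypothetical API server endpoint
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: tca-user                             # hypothetical user entry
  user:
    token: <token generated by VMware Telco Cloud Automation>
contexts:
- name: workload-context
  context:
    cluster: workload-cluster
    user: tca-user
current-context: workload-context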
5 Kubernetes Policies
You can control the access to computational resources at various levels, such as:
n Cluster-level access control (binary): Defines whether the user can access the cluster or not.
n Namespace-level access control (binary): Defines whether a user can access the namespace
or not.
Telco Cloud Automation allows you to create different security domains within a single
Kubernetes cluster. These security domains are associated with users, network function
packages, or instances.
n Edit a Policy
n Clone a Policy
n Download a Policy
n Delete a Policy
n Finalize a Policy
n Grant a Policy
n Determine the privileges required for a CNF instantiation and LCM operation.
A service account provides non-interactive and non-human access to services within the
Kubernetes cluster. Application Pods, system components, and entities, whether internal or
external to the cluster, use specific service account credentials. TCA uses service accounts to
communicate with Kubernetes.
The service accounts are generated in TCA during the Kubernetes VIM registration process or
during the workload cluster creation.
The following diagram illustrates the usage of service accounts for accessing the Kubernetes API.
Figure: HELM, Pods, and Operators (through custom resource instances) use service accounts to access the Kubernetes API.
TCA uses the service account provided during VIM registration for the following purposes:
The PODs might use the service account to access the APIs.
Operators are software extensions to Kubernetes that use custom resources to manage
applications and their components. Operators follow Kubernetes principles, mainly the
control loop principle. A custom resource models the Kubernetes application, which has a
desired state and an actual state. The operator implements a custom controller to ensure
that the desired state is equal to the actual state.
The controller resides in a pod and interacts with Kubernetes API in a control loop to move
the actual state to the desired state. The operator may perform (based on the designed CNF)
scheduled jobs on the application. For example, it creates consistent backups. Operators
are shipped in helm charts, including the custom resource definitions and the associated
controllers.
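To illustrate the desired-state pattern described above, an operator packaged in a Helm chart might define a custom resource such as the following. The API group, kind, and fields here are hypothetical and only show what a custom resource instance looks like.

apiVersion: backup.example.com/v1            # hypothetical API group and version
kind: AppBackup                              # hypothetical custom resource kind
metadata:
  name: nightly-backup
  namespace: cnf-namespace                   # hypothetical namespace
spec:
  schedule: "0 2 * * *"                      # desired state: take a backup at 02:00 every day
  retention: 7                               # keep the last seven backups

The operator's controller watches objects of this kind and reconciles the actual state (the existing backups) toward the desired state declared in the spec.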
After the HELM resource construction phase is completed, TCA is not aware of what the
operator does on the Kubernetes cluster, similar to a POD with access to the Kubernetes API.
The purpose of the Kubernetes policy in TCA is to control the access level of each of these
entities to Kubernetes. Kubernetes prevents privilege escalation for their clients, which means
that a service account cannot create another service account with a higher level of privilege
than it already has. TCA builds on this principle by providing these entities with a restricted
service account instead of the unrestricted service account. Kubernetes policy controls the level
of restriction.
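As a sketch of how such a restriction is expressed in Kubernetes (not the exact objects that TCA generates), a namespace-restricted service account combines a ServiceAccount, a Role limited to one namespace, and a RoleBinding. All names below are hypothetical.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cnf-restricted-sa                    # hypothetical service account
  namespace: cnf-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cnf-namespace-access                 # hypothetical role restricted to one namespace
  namespace: cnf-namespace
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cnf-namespace-access-binding         # hypothetical binding
  namespace: cnf-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cnf-namespace-access
subjects:
- kind: ServiceAccount
  name: cnf-restricted-sa
  namespace: cnf-namespace

Because Kubernetes prevents privilege escalation, HELM operations that run under this service account cannot grant or obtain permissions beyond what the Role allows.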
The level of restriction is defined through the permission model within TCA, which is illustrated in
the following diagram.
Figure: Permission model within the Kubernetes cluster - user permissions with filters control CNF LCM (installation/scale) to selected VIMs and namespaces, for example filter=(name=VMware_.*); a CNF maps to namespaces and the resources within them.
n CNF LCM / READ: Controls lifecycle operation execution, deletion, and read access to a CNF
instance. See Figure 5-1. Diagram 1.
n VIM: Controls the VIM instance to which you can deploy network functions. See Figure 5-2.
Diagram 2.
n Namespace: Controls which namespaces you can use in the cluster. If the CNF is restricted to contain Kubernetes resources that reside in a namespace, then applying namespace-based RBAC is sufficient. However, if the CNF needs to read (get, list, watch) or manage resources (create, update, patch, and delete) outside its namespaces, then Kubernetes policies need to be applied. See Figure 5-3. Diagram 3.
Figure: A CNF relates to namespaces through its HELM releases; namespace resources are reachable through namespace access, while cluster resources require additional policy-based access.
CNF permission enforcement aims at running the HELM commands in the context of a restricted
service account. This restricted service account requires minimum permissions to perform the
LCM operations. The limited service account created based on namespace access alone might not be adequate, as it does not provide access to cluster-level resources.
Figure: The NF deployer performs CNF LCM through HELM, which runs under a service account bound to roles and cluster roles through role bindings.
2 TCA creates a service account with the necessary permissions (role bindings + roles and
cluster roles).
This step ensures that the CNF does not access any other resource than the one allowed by
the role binding.
A virtual infrastructure administrator assigns the required privileges by extending the RBAC
permission model with policy and policy grants, which is illustrated in the following diagram.
Figure: A policy grant connects a VIM, a CNF package filter, and a policy that provides access to resources, for example through a ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
rules:
- verbs: [ "create" ]
  apiGroups: [ "apiextensions.k8s.io/v1" ]
  resources: [ "CustomResourceDefinition" ]
  resourceNames: [ "NokiaMME1" ]
RBAC Policy
RBAC policy allows you to regulate access to computer or network resources based on the roles
of individual users. See Lifecycle of an RBAC Policy.
PSA Policy
PSA policy allows you to regulate access to computer or network resources by enforcing POD
security standards. You can implement the POD security at the cluster level or at the namespace
level by using the namespace labels.
The three levels of Pod Security are privileged, baseline, and restricted. If multiple PSA policies
are applied to a CNF, then a policy that has a more permissive Pod Security Standard is applied
to the CNF.
Note Both PSA and RBAC policies are applied only to CNFs that are in restricted mode, either instantiated on a restricted VIM or set to Restricted manually.
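Pod Security Admission is typically enforced through namespace labels. The following is a minimal sketch with a hypothetical namespace name; the label keys and the three levels (privileged, baseline, restricted) are standard Kubernetes values.

apiVersion: v1
kind: Namespace
metadata:
  name: cnf-namespace                              # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject Pods that violate the baseline standard
    pod-security.kubernetes.io/warn: restricted    # additionally warn when Pods violate the restricted standard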
The following table lists the privileges and the corresponding accessible objects.
When you create a policy, it moves to the draft state with an expiration date set for the policy
automatically. In the draft state, you can edit a policy, and every time you edit a policy, the
expiration date is extended.
Note
n The draft policy is automatically deleted if you do not finalize it before the expiration date.
Figure: Policy lifecycle - a policy in the draft state can be edited; publishing moves it to the final state, on which policy grants depend.
The policy and policy grant are used during LCM operations to prepare the context in which
HELM is executed. Before executing a HELM operation, TCA creates or updates a service account
and its corresponding roles, cluster roles, or role bindings to represent a context in which the
CNF should be running. Based on policies and policy grants, TCA creates a set of CNF-specific
roles or cluster roles and role bindings. These will make it possible for the service account
to access global resources. Roles are created based on HELM to namespace mapping in the
instantiated VNF to provide Read-Write access to the namespaces in which the CNF resides.
These service accounts reside on a TCA-specific namespace and are labeled with the policy
grant ID or the CNF instance ID. Proper labeling of the service accounts allows you to update or
delete them when you no longer require them.
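As a sketch of the labeling described above, such a service account might look like the following. The namespace, name, and label keys are hypothetical and are not the exact identifiers that TCA uses.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cnf-instance-sa                        # hypothetical service account created for one CNF instance
  namespace: tca-system                        # hypothetical TCA-specific namespace
  labels:
    policy-grant-id: pg-1234                   # hypothetical label identifying the policy grant
    cnf-instance-id: cnf-5678                  # hypothetical label identifying the CNF instance

Selecting service accounts by these labels is what allows TCA to update or remove them when the corresponding policy grant or CNF instance is deleted.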
A policy defines a set of Roles and ClusterRoles that provide additional access to the Kubernetes
resources. Since the Kubernetes resource names vary for every instance, and the policy
templates are fixed, TCA allows a policy to be applied for multiple CNF instances.
Procedure
The following table illustrates the policy and sample policy definition.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: SomeOtherAppNamespace
  purpose: GrantAccessForOtherAppSevices
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
7 Click Next.
8 In the Add Policy Details page, browse and upload the YAML file that contains policy details
or enter the policy details similar to the sample provided in the preceding table.
9 Click Finish.
Edit a Policy
You can edit a policy to make the changes as required.
Procedure
3 Click the vertical ellipsis of the policy that you want to edit and click Edit.
4 Make the required changes to the Name, Description, or Type fields and click Next.
5 Click Finish.
Clone a Policy
You can clone any policy and change the name, description, and type of the policy to suit the
new policy.
Procedure
3 Click the vertical ellipsis of the policy that you want to clone and click Clone.
4 Make the required changes to the Name, Description, and Type fields and click Next.
5 Click Finish.
Download a Policy
You can download a Kubernetes RBAC or PSA policy as a JSON file.
Procedure
3 Click the vertical ellipsis of the policy that you want to download and click Download.
Delete a Policy
You can delete a policy when you no longer require it.
Procedure
3 Click the vertical ellipsis of the policy that you want to delete and click Delete.
4 Click Delete.
Finalize a Policy
Before finalizing a policy, ensure that no further changes are required, as you cannot make any
changes to the policy after finalizing it. You can grant a policy to a user only after finalizing it.
Procedure
3 Click the vertical ellipsis of the policy that you want to finalize and click Finalize.
4 Click Finalize.
Grant a Policy
A policy grant applies the requirements defined in a policy to a selected VIM. A VIM administrator grants the policy. Granting a policy establishes a connection between the policy, the VIM on which the policy is granted, and filters of the objects to which the grant applies.
Procedure
3 Click the vertical ellipsis of the finalized policy that you want to grant and click Grant.
5 Select the cloud to which you want to grant the policy and click Next.
6 From the ObjectType drop-down, select Network Function Catalog, Network Function
Instance, or Lifecycle Operation.
Each object type has different attributes. The following table illustrates the attributes and
their description for each object type.
The following operators are available for each object type. You can select the required
operator.
n Equals to
n Not equals to
n Any of
n Matches.
Note If there are no filters for Network Function Catalog, the filters within the policy
grant match every Network Function Instance created from the given template. This is also
applicable to Network Function Instances and Lifecycle Operations.
10 Click Finish.
Procedure
3 Click the vertical ellipsis of the cloud instance in which you want to edit the policy grant and click View Policy Grants.
4 In the Policy Grants tab, click the vertical ellipsis of the policy grant you want to edit and click Edit.
5 Click Next.
6 From the ObjectType drop-down, select Network Function Catalog, Network Function
Instance, or Lifecycle Operation.
10 Click Finish.
Procedure
3 Click the vertical ellipsis of the cloud instance in which you want to delete the grant and click View Policy Grants.
4 In the Policy Grants tab, click the vertical ellipsis of the grant you want to delete and click Delete.
5 Click Delete.
Procedure
3 Click the vertical ellipsis of the cloud instance in which you want to view the VIM policy grants and click View Policy Grants.
Procedure
3 Click the CNF package for which you want to view the policy grants.
Grants.
A CNF template processor determines the global privileges or namespaces for a CNF.
RBAC policies are generated based on the CNF package Helm chart resources. If resources created or accessed by the CNF are outside the namespace, TCA creates a new RBAC rule for that resource.
Note
n Some of the resource names may be generated with the Helm release name or random
names from the Helm chart. Therefore, the CNF deployer or VIM Administrator should review
the automatically generated policies.
n Helm inspection may sometimes fail to detect the custom resource details if the resources are
deployed outside Helm. In such a scenario, a warning message is displayed in the description
of the generated policy template.
Procedure
3 Click the CNF package for which you want to create a policy automatically.
5 In the Inventory Details tab, click the browse icon in the Select Cloud field.
6 Click the radio button of the cloud instance that you want to select and click OK.
n Select Repository URL: Click this radio button to automatically display the repository URL.
n Specify Repository URL: Click this radio button to enter the repository URL, username,
and password in the respective fields.
8 In the Inputs tab, provide input value for all the input parameters such as pf,
PHC2SYS_CONFIG_FILE, and PTP4L_CONFIG_FILE.
9 In the Review tab, review all the parameters and click Create Policy.
n policyType: KUBERNETES_RBAC
n name
n description
policyType: KUBERNETES_RBAC
name: Policy 1
description: My favourite policy
definition:
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: SomeOtherAppNamespace
    purpose: GrantAccessForOtherAppSevices
  rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list"]
Procedure
3 Click the CNF package from which you want to import the policies.
5 Select the policies that are embedded in the Network Function package and that you want to import from the CNF package.
Note You can edit the imported policies until they are granted.
6 Working with Tags
Tags are labels that contain user-defined keys and values. You can attach tags to catalogs, instances, and virtual infrastructure. Tagging makes it easier to search and sort resources, and to assign specific rules to a resource.
Tags help in managing, grouping, and filtering resources in a catalog that have similar properties, or in restricting access to resources to a certain group of users. For example, you can apply relevant tags when instantiating a network function catalog and filter the network function instances using these tags. Or, you can assign an SSD tag to your network functions. This way, you can encourage users to deploy these network functions only on VIMs that have SSD as the storage profile.
Users with the Tag Admin privilege can create, edit, or delete tags. The System Administrator
and Role Administrator roles have the Tag Admin privilege by default.
Note Existing tags from VMware Telco Cloud Automation version 1.8 and earlier are exported
and added to the list of tags in VMware Telco Cloud Automation 1.9 during upgrade.
7 Configuring Your Virtual Infrastructure
Before creating and instantiating network functions and services, you must add your virtual
infrastructure to VMware Telco Cloud Automation.
Note VMware Telco Cloud Automation supports vSphere, VMware Cloud Director, Kubernetes
Cluster, VMware Tanzu, VMware Integrated OpenStack, VMware Cloud on AWS, Google VMware
Engine (GVE), and Microsoft Azure VMware Solution (AVS).
You can add a virtual infrastructure from the Infrastructure > Virtual Infrastructure page. The
Virtual Infrastructure page provides a graphical representation of clouds that are distributed
geographically. Details about the cloud such as Cloud Name, Cloud URL, Cloud Type, Tenant
Name, Connection Status, and Tags are also displayed. To view more information such as TCA-
CP URL, Location, User Name, Network Function Inventory, and so on, click the > icon on a
desired cloud.
Prerequisites
n To perform this task, you must have the Virtual Infrastructure Admin privileges.
Procedure
3 Select the type of cloud. Based on the cloud type you select, enter the following virtual
infrastructure details:
Note VMware Telco Cloud Automation auto-imports self-signed certificates. To import, click
Import from the pop-up window and continue.
a For VMware Cloud Director and VMware Integrated OpenStack (VMware VIO):
Cloud URL - Enter the TCA-CP cloud appliance URL. This URL is used for making HTTP requests.
Tags - Select the key and value pairs from the drop-down menus. To add more tags, click the + symbol.
Tenant Name - Enter the organization name for vCloud Director. Enter the project name for VIO.
b For Kubernetes Cluster:
Cloud URL - Enter the TCA-CP cloud appliance URL. This URL is used for making HTTPS requests.
Cluster Name - Enter the cluster name that you provided when registering the Kubernetes Cluster in TCA-CP Manager.
Kubernetes Config - Enter the YAML kubeconfig file for your Kubernetes Cluster.
c For VMware vSphere, Microsoft Azure VMware Solution (AVS), and Google VMware
Engine (GVE):
Cloud URL - Enter the TCA-CP cloud appliance URL. This URL is used for making HTTP requests.
d For VMware Cloud on AWS:
VMware Telco Cloud Automation Control Plane URL - Enter the TCA-CP cloud appliance URL. This URL is used for making HTTP requests.
EC2 Region - Enter the region of your Elastic Compute Cloud (EC2) systems.
4 Optionally, you can add tags to your cloud. Tags are used for filtering and grouping clouds,
network functions, and network services.
5 Click Validate.
6 Click Add.
Results
You have added the cloud to your virtual infrastructure. You can see an overview of your virtual
infrastructure on the Infrastructure > Virtual Infrastructure page together with a map showing
the physical location of each cloud.
What to do next
To configure additional clouds in your virtual infrastructure, click + Add. To modify your existing
infrastructure, click Edit or Delete.
For VMware Cloud Director, vSphere, and VIO, you must configure the deployment profiles for
your cloud.
Prerequisites
You must have the Virtual Infrastructure Admin privileges to perform this task.
Procedure
2 Navigate to Infrastructure > Virtual Infrastructure and select the options symbol against the
virtual infrastructure.
Compute Profiles allow you to specify the underlying resource where the network functions
are deployed.
n Storage Profile - Select the storage profile from the pop-up window.
n Location - Enter the cloud location to add the compute profile. To add the compute
profile to the current cloud, select Same as VIM.
6 Click Add.
Results
The compute profile is added to your cloud. To view the compute profile, navigate to
Infrastructure > Virtual Infrastructure and click the > icon against the cloud name.
The Resource Status column in the Virtual Infrastructure page displays the resource use of those
clouds that are configured with vCloud Director, vSphere, or VIO VIMs.
What to do next
To edit a compute profile, navigate to Infrastructure > Virtual Infrastructure and click the cloud
name. In the cloud details page, go to the desired compute profile and click the Edit icon.
Prerequisites
You must have the Virtual Infrastructure Admin privileges to perform this task.
Procedure
2 Navigate to Infrastructure > Virtual Infrastructure and select the desired virtual infrastructure
to edit.
Prerequisites
You must have the Virtual Infrastructure Admin privileges to perform this task.
Procedure
2 Navigate to Infrastructure > Virtual Infrastructure and select the options symbol against the
virtual infrastructure.
a To synchronize only the missing information, for example, alarms, CNF inventory, worker node IPs, PM reports, and the Harbor repository in partner systems, select Partial Sync from the drop-down menu.
b To synchronize the entire virtual infrastructure inventory information, for example, alarms, CNF inventory, worker node IPs, PM reports, and the Harbor repository in partner systems, click Full Sync.
5 Click OK.
8 Viewing Your Cloud Topology
VMware Telco Cloud Automation provides a visual topology of your cloud sites across
geographies. It enables administrators to manage network functions and services.
To view your cloud sites and services, perform the following steps:
Procedure
Results
The Clouds page displays the cloud sites that are registered to VMware Telco Cloud Automation.
What to do next
To view details of a cloud site such as Cloud Name, Cloud Type, User Name, and Status, point to
the cloud site.
9 Working with Infrastructure Automation
Infrastructure Automation can deploy the entire SDDC at the Central, Regional, or Cell Site. It automatically deploys the SDDC components, such as vCenter, NSX, vSAN, vRO, vRLI, and TCA-CP, on the target hosts. It simplifies the deployment and management of the telecommunication infrastructure.
n Deployment Configurations
n Managing Domains
n Viewing Tasks
You can manage the telecommunication infrastructure through Infrastructure Automation. It also
deploys the application on various sites based on the site-specific requirements.
1 Prerequisites
All sites are ready. You can configure and initiate the network functions.
Prerequisites
Infrastructure Automation validates various prerequisites before beginning the actual
deployment.
Different sites have different prerequisites that must be fulfilled before beginning the actual deployment. Infrastructure Automation validates all these prerequisites to ensure an easy and fast deployment.
Note Ensure that you have different vSAN disk sizes for cache and capacity tiers, else the cloud
builder may not select the correct cache disk.
Host
n All the hosts in a domain are homogeneous.
n Each host has a minimum of one solid-state disk (SSD) and three solid-state disk/hard disk
drives for vSAN.
n Each host requires two physical NICs connected to the same physical switch.
Physical Switch
n Jumbo Frames enabled on the Physical Switch.
n Each ESXi server has a minimum of two physical NICs connected to the switch in trunk mode.
Access to all the VLANs (Management Network, vMotion Network, vSAN, NSX Host Overlay
Network, NSX Edge Overlay Network, Uplink 1 and Uplink 2) on the trunk port.
Domain
n Each domain has a minimum of four hosts.
n DNS name configured for all the appliances in all the domains.
n ESXi servers are installed for each domain through the PXE server or an ISO image.
n A common web server to access the software images at the central site.
Network
n VMware Telco Cloud Automation and VMware Telco Cloud Automation Control Plane can require unrestricted communication to connect.tec.vmware.com and hybridity-depot.vmware.com over TCP port 443 for license activation and updates.
n VMware Telco Cloud Automation uses different ports for different services. For details, see
VMware Telco Cloud Automation Ports.
n Unique VLANs are created for the following networks on the physical switch:
Management Network (MTU 1500) - Used to connect the management components of the software like vCenter, ESXi, NSX Manager, VMware Telco Cloud Automation, and VMware Telco Cloud Automation Control Plane.
vMotion Network (MTU 9000) - Used for the live migration of virtual machines. It is an L2 routable network and used only for the vMotion traffic within a data center.
vSAN (MTU 9000) - Used for the vSAN traffic. It is an L2 routable network and is used only for the vSAN traffic within a data center.
NSX Host Overlay Network (MTU 9000) - Used for the NSX Edge overlay traffic. Requires a routable with Host overlay VLAN in the same site. This network requires a DHCP server to provide IPs to the NSX host overlay vmk interfaces. The DHCP pool should equal the number of ESXi hosts on this network.
NSX Edge Overlay Network (MTU 9000) - Used for the overlay traffic between the hosts and Edge Appliances.
Uplink 1 (MTU 9000) - Used for the uplink traffic. Uplink 1 is in the same subnet as the Top of Rack (ToR) switch uplink address.
Uplink 2 (MTU 9000) - A redundant path for the uplink traffic. Uplink 2 is in the same subnet as the Top of Rack switch uplink address.
n Each ESXi server has a minimum of two physical NICs connected to the switch in trunk mode.
Access to all the VLANs (Management Network, vMotion Network, vSAN, NSX Host Overlay
Network, NSX Edge Overlay Network, Uplink 1 and Uplink 2) on the trunk port.
n Configure the same NTP server on both the Cloud Builder and the ESXi host.
n Run the command ntpq -pn on both the Cloud Builder and the ESXi host and check if the
output of the NTP server shows *.
n Name resolution through DNS for all appliances in all the domains.
n DNS records for all appliances with forward and reverse resolution.
Note You can create custom naming schemes for the appliances. You can also select
the naming schemes from Appliance Naming Scheme from the drop-down menu when
configuring the global parameters or override the naming schemes when configuring
domains. The options available for the appliance naming scheme are:
n {applianceName}-{domain-Name}
n {applianceName}
n Custom
For example: If the naming scheme is set to {appliancename}-{domainname}, the name for a
Virtual Center appliance is vc-cdc1.telco.example.com, where:
n vc is the appliance name.
License
The licenses for the following components are required:
n VMware vSAN
Note The actual license requirements may change based on the components installed.
Software Version
vCenter Server 7.0 U1a, 7.0 U1c, 7.0 U2, 7.0 U2d, 7.0 U3k, 8.0b, 8.0u1
ESXi 7.0 U1a, 7.0 U1c, 7.0 U2, 7.0 U2d, 7.0 U3k, 8.0b, 8.0u1
Note For the RAN sites, you have to manually upgrade VMware vCenter and ESXi. For details,
see vCenter Upgrade.
You can use the specification template to provide infrastructure details for automated
deployment. You can also upload a new specification or download the current specification of
the deployed infrastructure.
Note For the changes required in cloud native deployment, see Specification File for Cloud
Native.
Procedure
3 To upload a new specification for the infrastructure, click Upload Spec and select the new
specification file.
Prerequisites
Download the cloud native specification file from the Telco Cloud Automation.
Procedure
u Open the Specification file and configure the following parameters for cloud native
deployment.
Parameter Description
pscUserGroup The username that creates the Kubernetes clusters in the cloud native VMware Telco Cloud Automation. You can specify this parameter under the settings section or the domains section. The pscUserGroup parameter under the settings section acts as a global value, and the pscUserGroup parameter under a domain overrides the value for that specific domain.
Note You must specify the pscUserGroup. You can specify the pscUserGroup either in settings, or in domains, or in both the settings and domains.
TCA_BOOTSTRAPPER The bootstrapper for the cloud native VMware Telco Cloud Automation.
Add the following details:
n type
n name
n ipIndex
n rootpassword
n adminpassword
TCA_MANAGEMENT_CLUSTER The cluster manager for the cloud native VMware Telco Cloud Automation.
Add the following details:
n type
n name
n ipIndex
n clusterPassword
TCA_CP The load balancer for VMware Telco Cloud Automation control plane (TCA-
CP).
Add the following details:
n type
n name
n ipIndex
Parameter Description
TCA Load balancer for VMware Telco Cloud Automation manager in the cloud
native VMware Telco Cloud Automation.
Add the following details:
n type
n name
n ipIndex
n caCert
Note
n Encode the CA certificate with BASE64 encoding.
n For adding the images (.OVA files) for cloud builder deployment, see
Add Images or OVF.
Note
n You can use the domain settings to override the values provided in the settings.
n You cannot override the appliance type TCA_BOOTSTRAPPER appliance in the management
domain of a central site.
n You cannot override the appliance type TCA in the workload domain of a central site.
{
"domains": [
{
"name": "cdc",
"type": "CENTRAL_SITE",
"subType": "MANAGEMENT",
"enabled": true,
"preDeployed": {
"preDeployed": false
},
"minimumHosts": 3,
"location": {
"city": "Bengal\u016bru",
"country": "India",
"address": "",
"longitude": 77.56,
"latitude": 12.97
},
"switches": [
{
"name": "cdc-dvs001",
"uplinks": [
{
"pnic": "vmnic0"
},
{
"pnic": "vmnic1"
}
]
}
],
"services": [
{
"name": "networking",
"type": "nsx",
"enabled": true,
"nsxConfig": {
"shareTransportZonesWithParent": false
}
},
{
"name": "storage",
"type": "vsan",
"enabled": true,
"vsanConfig": {
"vsanDedup": false
}
}
],
"networks": [
{
"switch": "cdc-dvs001",
"type": "management",
"name": "management",
"segmentType": "vlan",
"vlan": 3406,
"mtu": 1500,
"mac_learning_enabled": false,
"gateway": "172.17.6.253",
"prefixLength": 24,
"_comments": [
"If K8S master/worker nodes will be installed on this network,
then it requires DHCP configured on the network"
]
},
{
"switch": "cdc-dvs001",
"type": "vMotion",
"name": "vMotion",
"segmentType": "vlan",
"vlan": 3408,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.8.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.8.10",
"end": "172.17.8.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "vSAN",
"name": "vSAN",
"segmentType": "vlan",
"vlan": 3409,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.9.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.9.10",
"end": "172.17.9.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "nsxHostOverlay",
"name": "nsxHostOverlay",
"segmentType": "vlan",
"vlan": 3407,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.7.253",
"prefixLength": 24,
"_comments": [
"This network requires DHCP configured on the network"
]
},
{
"switch": "cdc-dvs001",
"type": "nsxEdgeOverlay",
"name": "nsxEdgeOverlay",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.10.10",
"end": "172.17.10.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "uplink",
"name": "uplink1",
"segmentType": "vlan",
"vlan": 3411,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.11.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.11.100",
"172.17.11.101"
]
},
{
"switch": "cdc-dvs001",
"type": "uplink",
"name": "uplink2",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.10.100",
"172.17.10.101"
]
}
],
"applianceOverrides": [
{
"name": "tb1-cdc-cb",
"enabled": true,
"id": "app-cc834fe9-2f5f-4d7c-9538-4f6cf84a0c3b",
"nameOverride": "tb1-cdc-cb",
"type": "CLOUD_BUILDER",
"ipIndex": 32,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-sddcmgr",
"enabled": true,
"id": "app-94dc5b6f-f034-4d01-be12-a9919bb851e9",
"nameOverride": "tb1-cdc-sddcmgr",
"type": "SDDC_MANAGER",
"ipIndex": 33,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vc",
"size": "small",
"enabled": true,
"id": "app-20ae3412-d7bb-46fb-a213-3eee4980c59b",
"nameOverride": "tb1-cdc-vc",
"type": "VC",
"ipIndex": 31,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vro",
"enabled": true,
"id": "app-652abcba-954f-4ef2-b66d-ef3ac80ac923",
"nameOverride": "tb1-cdc-vro",
"type": "VRO",
"ipIndex": 40,
"rootPassword": "Base64 encoded password"
},
{
"name": "nsx-cdc",
"size": "large",
"enabled": true,
"id": "app-2d8b171b-8ed0-4093-9492-918e9cbb8881",
"nameOverride": "tb1-cdc-nsx",
"type": "NSX_MANAGER",
"ipIndex": 34,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx001",
"enabled": true,
"id": "app-69d10093-e451-40d4-8d11-091c87978037",
"nameOverride": "tb1-cdc-nsx01",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 35
},
{
"name": "nsx002",
"enabled": true,
"id": "app-ba4fdd21-7f99-4162-939e-7158f82bb4cd",
"nameOverride": "tb1-cdc-nsx02",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 36
},
{
"name": "nsx003",
"enabled": true,
"id": "app-f7fd0803-546a-43ae-8b8c-2112c128b12e",
"nameOverride": "tb1-cdc-nsx03",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 37
},
{
"name": "edgecluster001",
"size": "large",
"enabled": true,
"id": "app-0bd34f11-7970-44eb-9ce0-e969e9a4ef80",
"nameOverride": "edge-cdc",
"tier0Mode": "ACTIVE_STANDBY",
"type": "NSX_EDGE_CLUSTER",
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx-edge001",
"enabled": true,
"id": "app-4f44afa4-e83d-4129-9fef-1854d762fc67",
"nameOverride": "tb1-cdc-edge01",
"parent": "edge-cdc",
"type": "NSX_EDGE",
"ipIndex": 38
},
{
"name": "nsx-edge002",
"enabled": true,
"id": "app-c3311d3a-0931-4b77-9f3f-d4e976e0e88f",
"nameOverride": "tb1-cdc-edge02",
"parent": "edge-cdc",
"type": "NSX_EDGE",
"ipIndex": 39
},
{
"name": "tb1-cdc-mgmt-clus",
"enabled": true,
"id": "app-313a7384-55d5-42ba-aa1e-023b935a3770",
"nameOverride": "tb1-cdc-mgmt-clus",
"type": "TCA_MANAGEMENT_CLUSTER",
"ipIndex": 45,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-bootstrapper-clus",
"enabled": true,
"id": "app-d7376ba4-fc02-4612-8fee-62f1df817b86",
"nameOverride": "tb1-cdc-bootstrapper-clus",
"type": "BOOTSTRAPPER_CLUSTER",
"ipIndex": 46,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-tca",
"enabled": true,
"id": "app-b90c397a-c33b-4eb7-80dc-d7fc072b1e13",
"nameOverride": "tb1-tca",
"type": "TCA",
"ipIndex": 42,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-tcacp",
"enabled": true,
"id": "app-21d4a277-de90-4c19-a2ea-19d67aa48f36",
"nameOverride": "tb1-cdc-tcacp",
"type": "TCA_CP",
"ipIndex": 43,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vrli",
"enabled": true,
"id": "app-c2acb9ef-9e9c-4f79-b792-fbc5016132e7",
"nameOverride": "tb1-cdc-vrli",
"type": "VRLI",
"ipIndex": 41,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "vsannfs",
"enabled": false,
"id": "app-85e35807-12c9-471e-bb5d-11c68c039af5",
"nameOverride": "tb1-cdcvsanfs",
"type": "VSAN_NFS",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}
{nodeIndex}-{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1
operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1
operation."
],
"rootPassword": "Base64 encoded password"
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
},
{
"name": "rdc",
"type": "REGIONAL_SITE",
"subType": "MANAGEMENT",
"enabled": false,
"preDeployed": {
"preDeployed": false
},
"minimumHosts": 3,
"location": {
"city": "Bengal\u016bru",
"country": "India",
"address": "",
"longitude": 77.56,
"latitude": 12.97
},
"licenses": {
"vc": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"nsx": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"esxi": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"vsan": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"tca_cp": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"vrli": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
]
},
"switches": [
{
"name": "rdc-dvs001",
"uplinks": [
{
"pnic": "vmnic0"
},
{
"pnic": "vmnic1"
}
]
}
],
"services": [
{
"name": "networking",
"type": "nsx",
"enabled": true,
"nsxConfig": {
"shareTransportZonesWithParent": false
}
},
{
"name": "storage",
"type": "vsan",
"enabled": true,
"vsanConfig": {
"vsanDedup": false
}
}
],
"networks": [
{
"switch": "rdc-dvs001",
"type": "management",
"name": "management",
"segmentType": "vlan",
"vlan": 3406,
"mtu": 1500,
"mac_learning_enabled": false,
"gateway": "172.17.6.253",
"prefixLength": 24,
"_comments": [
"If K8S master/worker nodes will be installed on this network,
then it requires DHCP configured on the network"
]
},
{
"switch": "rdc-dvs001",
"type": "vMotion",
"name": "vMotion",
"segmentType": "vlan",
"vlan": 3408,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.8.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.8.21",
"end": "172.17.8.30"
}
]
},
{
"switch": "rdc-dvs001",
"type": "vSAN",
"name": "vSAN",
"segmentType": "vlan",
"vlan": 3409,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.9.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.9.21",
"end": "172.17.9.30"
}
]
},
{
"switch": "rdc-dvs001",
"type": "nsxHostOverlay",
"name": "nsxHostOverlay",
"segmentType": "vlan",
"vlan": 3407,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.7.253",
"prefixLength": 24,
"_comments": [
"This network requires DHCP configured on the network"
]
},
{
"switch": "rdc-dvs001",
"type": "nsxEdgeOverlay",
"name": "nsxEdgeOverlay",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.10.21",
"end": "172.17.10.30"
}
]
},
{
"switch": "rdc-dvs001",
"type": "uplink",
"name": "uplink1",
"segmentType": "vlan",
"vlan": 3411,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.11.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.11.102",
"172.17.11.103"
]
},
{
"switch": "rdc-dvs001",
"type": "uplink",
"name": "uplink2",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.10.102",
"172.17.10.103"
]
}
],
"applianceOverrides": [
{
"name": "tb1-cdc-cb",
"enabled": true,
"id": "app-17d69bcf-a3c4-4f74-b9c9-777f7857afd8",
"nameOverride": "tb1-rdc-cb",
"type": "CLOUD_BUILDER",
"ipIndex": 52,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-sddcmgr",
"enabled": true,
"id": "app-7dbaab47-6995-4147-b652-4722c23cfa69",
"nameOverride": "tb1-rdc-sddcmgr",
"type": "SDDC_MANAGER",
"ipIndex": 53,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vc",
"size": "small",
"enabled": true,
"id": "app-b5ead9d7-0ac5-4a24-9b61-763527b3391f",
"nameOverride": "tb1-rdc-vc",
"type": "VC",
"ipIndex": 51,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vro",
"enabled": true,
"id": "app-890f0dd2-08c9-4b95-83d3-a4272ea93886",
"nameOverride": "tb1-rdc-vro",
"type": "VRO",
"ipIndex": 60,
"rootPassword": "Base64 encoded password"
},
{
"name": "nsx-cdc",
"size": "large",
"enabled": true,
"id": "app-cfa7e716-6056-4843-924d-bdb950878e6a",
"nameOverride": "tb1-rdc-nsx",
"type": "NSX_MANAGER",
"ipIndex": 54,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx001",
"enabled": true,
"id": "app-1e02e9bd-a526-4343-a217-7e0b494b0c22",
"nameOverride": "tb1-rdc-nsx01",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 55
},
{
"name": "nsx002",
"enabled": true,
"id": "app-f4738cd2-7414-441c-9b0a-303962a784af",
"nameOverride": "tb1-rdc-nsx02",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 56
},
{
"name": "nsx003",
"enabled": true,
"id": "app-c7395a79-7390-4d8e-a47b-ceaa020fb138",
"nameOverride": "tb1-rdc-nsx03",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 57
},
{
"name": "edgecluster001",
"size": "large",
"enabled": true,
"id": "app-f9b7b4aa-ec57-406d-aad1-b0d237f3866f",
"tier0Mode": "ACTIVE_STANDBY",
"type": "NSX_EDGE_CLUSTER",
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx-edge001",
"enabled": true,
"id": "app-ec56d220-a465-42ac-9a21-774b0c8fbc81",
"nameOverride": "tb1-cc-edge01",
"parent": "edgecluster001",
"type": "NSX_EDGE",
"ipIndex": 70
},
{
"name": "nsx-edge002",
"enabled": true,
"id": "app-7536f85c-3d57-4854-b1a9-444408f77582",
"nameOverride": "tb1-cc-edge02",
"parent": "edgecluster001",
"type": "NSX_EDGE",
"ipIndex": 71
},
{
"name": "tb1-cdc-bootstrapper",
"enabled": true,
"id": "app-551ee02b-b947-400d-b655-9c0b9db21813",
"nameOverride": "tb1-cdc-bootstrapper",
"type": "TCA_BOOTSTRAPPER",
"ipIndex": 44,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-mgmt-clus",
"enabled": true,
"id": "app-24357516-acb4-40f4-872e-bc0ee56c917f",
"nameOverride": "tb1-rdc-mgmt-clus",
"type": "TCA_MANAGEMENT_CLUSTER",
"ipIndex": 64,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-bootstrapper-clus",
"enabled": true,
"id": "app-89649f4c-8787-4fe6-8afb-7ffc5f622aad",
"nameOverride": "tb1-rdc-bootstrapper",
"type": "BOOTSTRAPPER_CLUSTER",
"ipIndex": 63,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-tcacp",
"enabled": true,
"id": "app-713f4f8f-5603-4c9f-9811-796b2523c6fc",
"nameOverride": "tb1-rdc-tcacp",
"type": "TCA_CP",
"ipIndex": 62,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vrli",
"enabled": true,
"id": "app-51ebc34e-3e46-4462-a836-c538ddd3847b",
"nameOverride": "tb1-rdc-vrli",
"type": "VRLI",
"ipIndex": 61,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "vsannfs",
"enabled": false,
"id": "app-cd581ed8-f481-4ac4-ace3-ce84fade5d93",
"type": "VSAN_NFS",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}
{nodeIndex}-{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1
operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1
operation."
],
"rootPassword": "Base64 encoded password"
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
}
],
"settings": {
"ssoDomain": "vsphere.local",
"pscUserGroup": "Administrators",
"enableCsiZoning": false,
"validateCloudBuilderSpec": true,
"csiRegionTagNamingScheme": "region-{domainName}",
"clusterCsiZoneTagNamingScheme": "zone-{domainName}",
"hostCsiZoneTagNamingScheme": "zone-{hostname}",
"dnsSuffix": "telco.net",
"airgapServer": {
"fqdn": "airgap-server.telco.net",
"caCert":
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZvVENDQTRtZ0F3SUJBZ0lKQU4rcEtkajNCdGFiTUEwR0NTcU
dTSWIzRFFFQkRRVUFNR2N4Q3pBSkJnTlYKQkFZVEFsVlRNUkF3RGdZRFZRUUlEQWROZVZOMFlYUmxNUkV3RHdZRFZRU
UhEQWhOZVVOdmRXNTBlVEVPTUF3RwpBMVVFQ2d3RlRYbFBjbWN4RFRBTEJnTlZCQXNNQkUxNVFuVXhGREFTQmdOVkJB
TU1DMlY0WVcxd2JHVXVZMjl0Ck1CNFhEVEl5TURReU9ERXhNelF6TmxvWERUTXlNRFF5TlRFeE16UXpObG93WnpFTE1
Ba0dBMVVFQmhNQ1ZWTXgKRURBT0JnTlZCQWdNQjAxNVUzUmhkR1V4RVRBUEJnTlZCQWNNQ0UxNVEyOTFiblI1TVE0d0
RBWURWUVFLREFWTgplVTl5WnpFTk1Bc0dBMVVFQ3d3RVRYbENkVEVVTUJJR0ExVUVBd3dMWlhoaGJYQnNaUzVqYjIwd
2dnSWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElDRHdBd2dnSUtBb0lDQVFEWkV4M044VEs3NXk4RU5kVFd0WEl1cjFJ
R3Q0Z3oKaStEZmdCemR1NkJscnNSZ3RSc0UrcDR3Y0xzQ3B5NjJHNStsb0pLL0U5dlFoQWRQVkxvK1lBdlZXTEVkNjk
wdApQcW5iWHpDU3U0QjRHWVZ4Tytjd0ZlTTN5ZXBjYklDK2NGNVcrdndDaDZvaVZjS1RBVjNXeXIrVVd6TXYvem1VCj
dNNHdHbTY3VTJNOFJHR0JNY0FLOFBjblNwRzl5S01QcHA5eFVQZUx1UlhHalB6VFlXTGkySll4aERva3NLQysKVHYwT
25rTkQyUnM3UDZhU2VmSkJROTdvcVpxQllva0o4TjYzaTJpemcySDczM2F4S0Y4WVNUS2NibG5kQVVSNQpPVUMxMHZ3
OTNxaHdCekZVM1RrZzR1cUxvd3dxOHI0MC92VXE5Z2M3eFF2RlFNU3JvcldHVUphZjJHQkRzbUFRCmlXQnpIVmgvTk5
GdlkzQXBnLzhCRXpKRE9LUGxSTDlpQTZTUzFxaGlOVGlwZ3VEV0U3THVDeWJPd1l2QnN0SlIKd0ZIN0s1SDJWSkVjbF
RVdkZkZjJQZWJRU2tXLy9VeTFzQlVtRTcySXNQL2k3S0dhQ1dDUVZ4MHIzUXkwclVneQoxWFFtWlFsbUw5ZVpOc2Q5e
k9EYnk2eVlmL1Z4N1Z2b1FDQWtRZzJqYlVnTmJuTWZ4dWVuaFFHWjI0cW1XWXRqCnFoakJWcjBTU1lwUk5reGdwc2Vi
M3Y0bkRyNU1XczRzUldjWmlpOHZmdTZMUnNJclA1TERlMDRzaGtCeVJmZWYKQ2Z3MXFhc3FIalB6Z1g3N3pTTW9CSk5
LR2NUOFU4SEJKZ1Z2TWQ1bVFrbE1yVzYrNUJrMEpvK0FtM2xyb0tiNwppNWxVWnNPNzJiN29WUUlEQVFBQm8xQXdUak
FkQmdOVkhRNEVGZ1FVOEpBSnBpdUZtOGFDNDhTcnl0WkZNcENMCmZtVXdId1lEVlIwakJCZ3dGb0FVOEpBSnBpdUZtO
GFDNDhTcnl0WkZNcENMZm1Vd0RBWURWUjBUQkFVd0F3RUIKL3pBTkJna3Foa2lHOXcwQkFRMEZBQU9DQWdFQVRQaFFH
Rml4RzBNeGh0SEtkVzhQTHVwbGM4YlBtSmZuWnpVMApaUkRjRzVKNjhNT01CRW1Uc2lHY2h4djU0enF1RzB2ZHVhNHc
vRjhVYXd3bGk4Tkw3anlpYTRuU1oxbEczajAwClEzU1dCbk5kMmFVc1U2TGxrTkpHTFNsU2hYMDNEcGlHdXQxYzRrbl
djdGxzTkRoSm5ESUhzdzNDU1UrYjZKb1IKREJjbE9YVFBhT25GV2ZRMzhJc3Q5Nlk0dWxETXZLdEo2YkduOUtQdldIT
kNTeCswVFIzNkVYVWVzeTliOWR4RQpJYTFEbENlSFRja1AzOXMzTzkxeElXZE0xK1NDRXlHUklMOHZBK3BHTnk3RUJF
Rzlsd3ZvYWhKdFNlbHkyYU9ZCjZJbkVCaG0rL1pFNGtOc282VkVmblJKZnY2bVBRRlAwZTJJanI2aTI4NmNGOFQ5Wkh
pL2hyS3U0djdvSVpSNEoKbEFuTzBmQkNCcFZhL2NJa1R6WXhzSUZFTUVzTHFCSkJZaEZpWWdsVmthTVJiNnZWTW5yNE
l2bHI0VGRObytZTApDSXlmR3N2NWdyYzNZb1JiZ09vY3lYYkpvQmdBdy9pK3ZwMzllNU94ZWR1R3hwRGI0Z0hyNHkze
UdkVE4xWWVDCnJJR3FPdm5rYzZWcWNGbXpLakZndDNLSDQ4V3JoSWg2aU90ZFhQV3l1ektyWGdwSFI3WTRNdUN5K001
THFabXAKdGpzZVNYTEN0OCs2MVhLRGNFZEtLc3ltL2JPbEp1TDJVOW9VaUdFaVp6Q0wycFdxMWU0Z3doNTlwWWRJaUY
yQgpXRzhQaUx1eXZuOG9EZkEwdklIaUhVYlVDdkVkYXNSZTB2Z3JiMGwwSjBHVWlnM3J0MHZsNm4zMG1aa1gzVUs4Cj
BsS0NoSFE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
},
"ntpServers": [
"172.17.6.14"
],
"dnsServers": [
"172.17.6.13"
],
"applianceNamingScheme": "{applianceName}",
"proxy": {
"enabled": false
},
"appliancesSharedWithManagementDomain": [
{
"type": "VRLI",
"enabled": false
}
]
},
"appliances": [
{
"type": "CLOUD_BUILDER",
"id": "app-f988dfbb-8392-436f-a66c-22deaec7919c",
"name": "tb1-cdc-cb",
"ipIndex": 32,
"enabled": true,
"adminPassword": "Base64 encoded password",
"type": "NSX_MANAGER_NODE",
"id": "app-2a285da3-0a18-474d-851b-0a1b84d31646",
"name": "nsx003",
"ipIndex": 37,
"parent": "nsx-cdc"
},
{
"type": "NSX_EDGE_CLUSTER",
"id": "app-6d8710c3-8004-4c95-a760-220febe7a358",
"name": "edgecluster001",
"size": "large",
"tier0Mode": "ACTIVE_STANDBY",
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"type": "TCA_BOOTSTRAPPER",
"id": "app-21890852-0f98-4fae-88bd-db316179e905",
"name": "tb1-cdc-bootstrapper",
"ipIndex": 44,
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA_MANAGEMENT_CLUSTER",
"id": "app-12634788-3203-4d33-8a01-05f1a9166a89",
"name": "tb1-cdc-mgmt-clus",
"ipIndex": 45,
"clusterPassword": "Base64 encoded password",
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "BOOTSTRAPPER_CLUSTER",
"id": "app-0bcbc44e-ac2e-45ed-8f53-e8d5002e030d",
"name": "tb1-cdc-bootstrapper-clus",
"ipIndex": 46,
"clusterPassword": "Base64 encoded password",
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA",
"id": "app-f2d56bde-b2de-48d4-b6ef-372c46a4f3a5",
"name": "tb1-tca",
"ipIndex": 42,
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA_CP",
"id": "app-8c13c502-ce1c-463d-a6c3-541b36e76558",
"name": "tb1-cdc-tcacp",
"ipIndex": 43,
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "NSX_EDGE",
"id": "app-14f9bc62-bbcc-4e19-aae5-fba346bada85",
"name": "nsx-edge001",
"ipIndex": 38,
"parent": "edgecluster001"
},
{
"type": "NSX_EDGE",
"id": "app-777e027a-cfcd-46ee-a389-4d747786545a",
"name": "nsx-edge002",
"ipIndex": 39,
"parent": "edgecluster001"
},
{
"type": "VRLI",
"id": "app-5d1559fa-0850-429f-b4e3-0e1707e2d3b6",
"name": "tb1-cdc-vrli",
"ipIndex": 41,
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "VSAN_NFS",
"id": "app-cc6659f3-1ef2-4d61-a388-14ba5afaa6c9",
"name": "vsannfs",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"enabled": true,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}{nodeIndex}-
{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1 operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1 operation."
],
"rootPassword": "Base64 encoded password"
}
],
"images": {
"cloudbuilder": "http://172.17.6.12/images/2.1_images/VMware-Cloud-
Builder-4.4.0.0-19312029_OVF10.ova",
"vro": "http://172.17.6.12/images/2.1_images/
O11N_VA-8.6.2.20205-19108182_OVF10.ova",
"tca": "http://172.17.6.12/images/2.1_images/VMware-Telco-Cloud-
Automation-2.1.0-19714586.ova",
"haproxy": [],
"kube": [
"http://172.17.6.12/images/2.1_images/photon-3-kube-v1.22.8-vmware.1-tkg.1-
d69148b2a4aa7ef6d5380cc365cac8cd-19632105.ova"
],
"vsphere_plugin": "http://172.17.6.12/images/2.1_images/vco-plugin.zip",
"vrli": "http://172.17.6.12/images/2.1_images/VMware-vRealize-Log-
Insight-8.6.2.0-19092412_OVF10.ova"
},
"deleteDomains": []
}
n Configuration
On this tab, you can configure global settings, appliances, and images or virtualization files
(OVF).
n Domains
On this tab, you can configure network and licenses for various sites. For example, you can
configure a central site, a regional site, a compute cluster, or a cell site group. You can also
add hosts.
Note The SDDC deployment starts only when the minimum number of hosts are registered for a
domain and the domain is enabled.
As a part of the SDDC deployment, the following software components are installed according to
the domain type.
Compute Cluster A vCenter cluster. A central site or a regional site manages the compute cluster.
Cell Site Group A set of ESXi hosts where the RAN is deployed. A central or regional site manages the hosts in the Cell Site Group.
Infrastructure Automation deploys all the required applications for the sites and makes the sites ready for network functions. Design and deployment of network services and functions can then start. You can now create and instantiate the network functions.
Roles
You can perform different operations based on your role.
Based on the roles and their associated permissions, a user can perform different operations in Infrastructure Automation.
System Administrator
A system administrator can manage all the sites.
A system administrator manages the existing sites that are configured and deployed. The operations that a system administrator can perform include:
Note The system administrator can perform operations depending on the permissions available
to the system administrator.
Deployment Configurations
You can configure the global settings and appliance settings, and provide links to the images to deploy.
You can configure Service settings and Proxy Config settings on the Global Settings page.
Note You can override the values for each domain when configuring the domains.
Procedure
Field Description
DNS Suffix Address of the DNS suffix for each appliance. For example:
telco.example.com
DNS Server The IP address of the DNS server. You can add multiple DNS server IP addresses, separated by commas.
Based on the network type selected during the TCA deployment, you can
enter one of the following:
n IPv4 network type: Enter IPv4 addresses or FQDNs
n IPv6 network type: Enter only FQDNs
n Dual Stack network type: Enter IPv4 addresses or FQDNs for IPv4
interfaces and FQDNs only for IPv6 interfaces
NTP Server Name of the NTP server. For example: time.vmware.com. You can add multiple NTP server addresses, separated by commas.
Based on the network type selected during the TCA deployment, you can
enter one of the following:
n IPv4 network type: Enter IPv4 addresses or FQDNs
n IPv6 network type: Enter only FQDNs
n Dual Stack network type: Enter IPv4 addresses or FQDNs for IPv4
interfaces and FQDNs only for IPv6 interfaces
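For reference, these settings correspond to the settings section of the cloud_spec.json sample shown earlier in this chapter. The following excerpt uses the sample's illustrative values:
  "settings": {
    "dnsSuffix": "telco.net",
    "dnsServers": [
      "172.17.6.13"
    ],
    "ntpServers": [
      "172.17.6.14"
    ],
    "proxy": {
      "enabled": false
    }
  }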
5 To use the proxy server, enable the Proxy Config. Click the Enabled button.
Field Description
Protocol Proxy protocol. Select the value from the drop-down menu.
Proxy Password Optional. Password corresponding to the user name to access the proxy
server.
Proxy Exclusion Optional. List of IP and URLs to exclude from proxy. You can use special
characters to provide regular expression URLs. For example, *.abx.xyz.com.
Field Description
Region Tag Naming Scheme Tagging scheme for data center. Default value: region-{domainName}.
Cluster Zone Tag Naming Scheme Tagging scheme for compute cluster or hosts. Default value: zone-
{domainName}.
Host Zone Tag Naming Scheme The CSI tag for the hosts. Default value: zone-{hostname}.
n For Region tag, ensure that the naming scheme contains {domainName}. For example,
<text_identifier>-{domainName}.
n For Cluster Zone tag, ensure that the naming scheme contains the {domainName}. For
example, <text_identifier>-{domainName}.
n For Host Zone tag, ensure that the naming scheme contains the {hostname}. For
example, <text_identifier>-{hostname}.
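In the cloud_spec.json sample earlier in this chapter, the same tag naming schemes appear in the settings section with their default values; replace the text identifier portion as needed:
  "settings": {
    "enableCsiZoning": false,
    "csiRegionTagNamingScheme": "region-{domainName}",
    "clusterCsiZoneTagNamingScheme": "zone-{domainName}",
    "hostCsiZoneTagNamingScheme": "zone-{hostname}"
  }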
9 Provide the address of the SaaS server. For example, connect.tec.vmware.com. It is used for both activation and software updates.
Note
n The option is available when you set the Activation Mode to SaaS.
n When using the air-gapped server, set the Activation Mode to Standalone.
n You can provide the air-gapped server details for VMware Telco Cloud Automation
through cloud_spec.json file.
n When you provide the air-gapped server details through cloud_spec.json, remove the
SaaS section. Set the activation mode to Standalone.
n When you provide the air-gapped server details through cloud_spec.json, add the
certificate details only if you have a self-signed CA certificate.
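When you configure the air-gapped server through the cloud_spec.json file, the details appear in the settings section, as in the sample earlier in this chapter. The FQDN below is illustrative, and the caCert value is needed only for a self-signed CA certificate:
  "settings": {
    "airgapServer": {
      "fqdn": "airgap-server.telco.net",
      "caCert": "<Base64 encoded CA certificate>"
    }
  }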
Note The TCA SSO credentials are used by Infrastructure Automation for communicating
with the TCA Manager.
12 Provide the Appliance Naming Scheme. Select the value from the drop-down menu. This
naming scheme is used for all the appliances added to VMware Telco Cloud Automation.
13 To deploy vRealize Log Insight in the management domain and share it with the workload domain, enable Share vRLI with management domain.
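These two options map to the following keys in the settings section of cloud_spec.json. The values shown are a sketch based on the sample earlier in this chapter, with sharing of vRealize Log Insight enabled:
  "settings": {
    "applianceNamingScheme": "{applianceName}",
    "appliancesSharedWithManagementDomain": [
      {
        "type": "VRLI",
        "enabled": true
      }
    ]
  }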
Configure Appliances
Configure the IP index and password of various appliances available under the Appliance
Configuration.
You can configure the IP index and password for all the appliances available in Infrastructure
Automation.
Note The IP index is the index of the IP address in the subnet that is configured in the Networks section under Domain. The IP for each appliance is derived by adding the IP index to the subnet address, so the administrator does not need to provide an IP for each appliance in each domain. VMware Telco Cloud Automation recommends following a common IP addressing scheme for all the domains. However, if required, you can override the IP index for each domain. Ensure that you provide the IP index based on the subnet value.
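For example, in the cloud_spec.json sample earlier in this chapter, the management network uses gateway 172.17.6.253 with prefix length 24 (subnet 172.17.6.0/24), and the TCA appliance is assigned IP index 42. Infrastructure Automation therefore derives the appliance IP address 172.17.6.42:
  "networks": [
    {
      "type": "management",
      "gateway": "172.17.6.253",
      "prefixLength": 24
    }
  ],
  "appliances": [
    {
      "type": "TCA",
      "name": "tb1-tca",
      "ipIndex": 42
    }
  ]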
Note
n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all the
appliances.
n When creating the password for the following appliances, ensure that you follow the password guidelines:
n For Cloudbuilder:
n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.
n Minimum password length for root password is 8 characters and must include at least
one uppercase, one lowercase, one digit, and one special character.
n vCenter
n The admin password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).
n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).
n NSXT password
n Minimum length for the root, admin, and audit passwords is 12 characters, and the password must contain at least one lowercase character, one uppercase character, one digit, and one special character. The password must contain at least five different characters and cannot contain three consecutive identical characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.
Field Description
IP Index The last octet of the IP address. The first three octets of the IP address are computed from the gateway IP address.
Note The IP index depends on the management subnet prefix length. Ensure that you provide IP index values within the IP range dictated by that subnet prefix length.
For example, if you use subnet prefix length of 24, then the subnet has 254 IPs.
Hence, the IP index value cannot exceed 254. If you use prefix length of 27 or 28,
then the subnet has 30 or 14 IPs, respectively. The IP index values must then not
exceed 30 or 14, respectively. Ensure that you check the values before adding the
IP index.
Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.
Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.
Audit Password Password of the audit user. Applicable only for NSX Manager, and NSX Edge
cluster.
Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.
Cluster Password Password for creating the cluster. Applicable only for VMware Telco Cloud
Automation management cluster and bootstrapper cluster.
Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.
NSX Edge Cluster Configuration Applicable only for NSX Edge Cluster.
n Name: Name of the NSX Edge cluster.
n IP: The fourth octet of the IP address applicable to the node.
n Size: Size of the NSX Edge cluster. Select the option from the drop-down menu.
n Tier0Mode: Whether to deploy the NSX Edge cluster in Active-Standby or
Active-Active. Select the option from the drop-down menu.
Node Count Number of vSAN NFS nodes. A minimum of three and a maximum of eight nodes are required. Applicable only for vSAN NFS.
IP Pool List of static IP indexes for vSAN NFS nodes. Each vSAN NFS node requires one IP.
Applicable only for vSAN NFS.
Shares Size of the NFS share. Applicable only for vSAN NFS.
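For reference, the vSAN NFS appliance entry in the cloud_spec.json sample earlier in this chapter shows how the node count, IP index pool, and shares fit together; the IP index pool provides one index for each of the three nodes (the generated id field is omitted here):
  {
    "type": "VSAN_NFS",
    "name": "vsannfs",
    "enabled": true,
    "ipIndexPool": [
      {
        "start": 47,
        "end": 49
      }
    ],
    "nodeCount": 3,
    "shares": [
      {
        "name": "default-share",
        "quotaInMb": 10240
      }
    ],
    "rootPassword": "Base64 encoded password"
  }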
Procedure
Provide the location where Infrastructure Automation can locate the installation images for all appliances. The web server stores all the appliance images. Provide the complete URL of each appliance image.
Procedure
2 Click Images.
3 Click Edit.
Note
n You can add multiple images for VMware Tanzu Kubernetes Grid and VMware Tanzu
Kubernetes Grid - HA Proxy.
n Manual installation of vSAN requires additional files. For details, see vSAN Manual
approach and add the files required for manual approach in the image server.
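The image locations correspond to the images section of cloud_spec.json. The following excerpt from the sample earlier in this chapter shows complete web server URLs for several appliance images; the kube array can contain multiple Kubernetes node images:
  "images": {
    "cloudbuilder": "http://172.17.6.12/images/2.1_images/VMware-Cloud-Builder-4.4.0.0-19312029_OVF10.ova",
    "tca": "http://172.17.6.12/images/2.1_images/VMware-Telco-Cloud-Automation-2.1.0-19714586.ova",
    "haproxy": [],
    "kube": [
      "http://172.17.6.12/images/2.1_images/photon-3-kube-v1.22.8-vmware.1-tkg.1-d69148b2a4aa7ef6d5380cc365cac8cd-19632105.ova"
    ]
  }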
The certificate authority (CA) issues digital certificates. These certificates help create a secure connection between the various appliances of a domain.
Procedure
2 Click Security.
Field Value
Country The two-letter ISO code for the country where the organization is located.
Valid for days The number of days for which the certificate is valid.
Organization The complete legal name of the organization. It can include suffixes such as Inc, Corp, or LLC. Do not use abbreviations.
State The state or region where the organization is located. Do not use abbreviations.
You can create a host profile with a specific set of BIOS settings, firmware versions, PCI devices, and PCI groups. When you create a host and select a specific host profile for a domain, VMware
Telco Cloud Automation applies the configurations of the specified host profile to all the hosts
within that domain.
Note
n To upgrade Supermicro firmware, see Supermicro Firmware Upgrade.
n To obtain the firmware details, see Obtain the Current Firmware Version.
Prerequisites
Procedure
Note To create a new host profile using the configuration file of another host profile, click Load Configuration and select the required JSON file.
n Add the corresponding value of the BIOS key in the Value field.
n Add the identity of the firmware that the vendor provides in the Software field.
n Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.
n Add the location of the firmware upgrade file in the Location field.
Note Ensure that you provide a valid URL. The URL must start with HTTP and end with
extensions .XML or .EXE.
n Add the value of checksum of the firmware upgrade file in the Checksum field.
n Select the value from the drop-down menu. You can select SR-IOV for SR-IOV-based devices, PassThrough for passthrough devices, or Custom for ACC100 devices.
n For SR-IOV devices, configure the value of Number of Virtual Functions.
n For Custom (ACC100) devices, provide the configuration file required for ACC100 in the Configuration File field.
n To add a filter for PCI devices, click Add Filter. Provide the values of Key and Value field.
8 In the PCI Device Groups, to add a device group, click Add Group.
b To add a filter for the device group, click Add Filter and enter the key and value in the
Key and Value field. You can select the value from the drop-down list.
n NUMA ID
n Device ID
n Vendor ID
n Alias
n Index
n Reserved cores per NUMA node - Number of cores reserved for ESXi processes. For ESXi version 7.0U2 and above, the default value is 1. For other ESXi versions, the default value is 2.
n Reserved Memory per NUMA node - Memory reserved for ESXi processes. The default value is 512 MB.
n (Optional) Min. cores for CPU reservation per NUMA node - Number of physical cores reserved for each NUMA node. If you do not configure this parameter, the value from reservedCoresPerNumaNode is applied. The default value is 3. (See the sketch after this list.)
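The following is a minimal, hypothetical sketch of how these tuning values might appear in an exported host profile JSON file. Only reservedCoresPerNumaNode is named in this guide; the other key names are illustrative placeholders rather than the exact host profile schema:
  {
    "reservedCoresPerNumaNode": 1,
    "reservedMemoryPerNumaNodeMb": 512,
    "minCoresForCpuReservationPerNumaNode": 3,
    "_comments": [
      "Key names other than reservedCoresPerNumaNode are illustrative placeholders."
    ]
  }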
What to do next
You can modify a host profile, export the host profile details, or create a copy of the host profile using the clone function. You can also delete the host profile and refresh the host profile details.
Prerequisites
Procedure
3 Select the host profile on which you want to perform the operation.
6 To export the configurations of a host profile in a JSON file, click Export. You can use this
JSON file to create a new host profile.
8 To refresh the details of all the host profiles on the Host Profile page, click Refresh.
Supermicro firmware upgrade involves manual steps. These steps include creating the upgrade package, modifying the script, and validating the integrity of the upgrade package.
Note Ensure that you upload the downloaded firmware upgrade package, the upgrade-script.sh, and firmware-index.xml to the same absolute path.
Prerequisites
n Ensure that Telco Cloud Automation has permission to access the web server location to
obtain the uploaded files and packages.
Procedure
1 Create the upgrade-script.sh. Use the below example to create the upgrade script.
Note
n The example uses the E810 card. To create the script for other cards, change E810 to the name of the card for which you need to create the script.
n Replace the nvmupdaten64e command with the required command based on the card type. You can find the commands in the readme.txt file in the upgrade package.
# Locate the first datastore on the host and change to it.
datastore=$(esxcli storage filesystem list | awk '{ print $1 }' | tail -n +3 | head -n 1)
echo $datastore
cd $datastore/

# check_version returns 1 if the nvmupdaten64e output contains "Update ", and 0 otherwise.
check_version(){
    #Dont forget the space added below
    if echo "X" | ./nvmupdaten64e | grep "Update " ; then
        echo "Inside check_version"
        echo "./nvmupdaten64e"
        return 1
    else
        echo "it's in else"
        return 0
    fi
}
4 Create the firmware-index.xml. Use the below example to create the firmware index file.
<metaList>
    <metadata>
        <url>E810_NVMUpdatePackage_v2_32_ESX.tar.gz</url>
        <checksum>fbbb201dfcc4c900e4fc5d3a6f4264110d4a32cdecec43c55d04164130b8d249</checksum>
    </metadata>
    <metadata>
        <url>upgrade-script.sh</url>
        <checksum>0faa2fb41347377ad1435911abc4eb38246a7fcf5c3cdcea3e21e34778678cac</checksum>
    </metadata>
</metaList>
a url : In the first url tag, enter the name of the upgrade file.
b checksum : In the first checksum tag, enter the checksum generated for the upgrade
package file.
c url : In the second url tag, enter the name of the upgrade script file.
d checksum : In the second checksum tag, enter the checksum generated for the upgrade
script file.
9 On the Host Profile page, to add new host profile, click Add.
10 To add firmware details, click Add Firmware. Enter the following details:
n Add the firmware name in the Name field. For Supermicro, this is a user-defined field.
n Add the identity of the firmware that the vendor provides in the Software field. For Supermicro, this is a user-defined field.
n Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.
n Add the checksum generated for the firmware-index.xml file in the Checksum field.
This task provides details on how to upgrade the Dell firmware. You can add these details in the host profile. For details, see Add a Host Profile.
Prerequisites
n Ensure that you have obtained the details of the firmware. To obtain the current firmware
version, see Obtain the Current Firmware Version.
n Ensure that you have uploaded the firmware file to a web server.
Procedure
2 Add the identity of the firmware that the vendor provides in the Software field. SoftwareID represents the firmware identity. You can obtain the softwareID from the componentID in the package.xml file bundled within the firmware package that you downloaded. To obtain the softwareID, use the following steps:
a Download the firmware upgrade file from the Dell website. The upgrade package is compressed in ZIP format, and the upgrade file uses the .exe extension.
b Extract the upgrade zip package and obtain the package.xml file.
c Search for componentID in package.xml. Get the componentID that matches your device. For example, for the Ethernet 25G 2P XXV710 adapter, the componentID is 105834.
3 Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.
4 Add the location of the firmware upgrade file in the Location field. You can download the
firmware upgrade package from Dell website. Upload the firmware upgrade package to a
web server.
5 Add the checksum value of the firmware upgrade file in the Checksum field. You can obtain the checksum value from the Dell website from which you downloaded the firmware upgrade package. The example below shows all the details for a firmware upgrade.
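For example, for the Ethernet 25G 2P XXV710 adapter, the firmware details might look like the following. The version, location, and checksum values are placeholders that you replace with the values published on the Dell support page and the URL of your own web server:
Name: XXV710 NIC firmware (user-defined)
Software: 105834
Version: <target firmware version>
Location: http://<web-server>/firmware/<firmware-upgrade-file>.EXE
Checksum: <checksum of the firmware upgrade file>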
The process helps you to obtain the current firmware version of the device.
Procedure
a Log in to iDRAC.
b Navigate to System.
2 To obtain the firmware version of network interface cards (NICs), follow these steps:
b Execute the esxcli network nic list command to get a list of all NICs.
c Execute the esxcli network nic get -n vmnic2 | grep Firmware command to obtain the firmware value.
Managing Domains
You can add, delete, and configure various sites to create the infrastructure.
You can add a management domain or a workload domain for a central site or regional sites. You can add compute clusters or cell sites in Infrastructure Automation. You can also add hosts for each site and perform security management for each appliance within the domains.
You can modify the details of an already added site and view the appliances related to each site.
You can resynchronize the site details after modifying the configurations, to ensure that all the
configurations are working correctly.
Note
n Starting from Release 2.3, VMware Telco Cloud Automation terminates the support for
creating a central data center, regional data center, and compute cluster using Infrastructure
Automation. The feature will enter a maintenance mode starting from releases 2.1 and 2.2.
Post termination, users will have the option to add the pre-deployed data centers through
Infrastructure Automation in a VM-based deployment.
n You need to deploy the Telco Cloud Automation Control Plane manually.
n When configuring the Telco Cloud Automation Control Plane through Telco Cloud
Automation Appliance Manager, you must use FQDN for the vCenter.
n You must register the Telco Cloud Automation Control Plane on the Virtual Infrastructure
page of the Telco Cloud Automation Manager.
[Figure: Example deployment topology showing management domains and workload domains (WD01, WD02, WD03) with compute clusters, an aggregation cluster, a cell site, and an edge site, hosting components such as Harbor, Git, SMF, and a Kubernetes management node pool.]
Prerequisites
n Obtain the licenses and network information required for configuration.
n Regenerate the Self-Signed Certificates on ESXi Hosts. For details, see ESXi Host Certificate.
Procedure
5 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot perform this operation on a disabled site.
Note
n For a pre-deployed domain, VMware Telco Cloud Automation shows only the required configurations. Some of these configurations may not appear for non-pre-deployed domains.
n VMware Telco Cloud Automation does not perform any operation on a pre-deployed workload domain. However, you can add a compute cluster and a cell site group to the domain.
n VMware Telco Cloud Automation can auto-detect the resources if only one resource per resource type is available in the vCenter. If multiple resources are available for a resource type, you must fill in the values.
n When you add a pre-deployed domain, always use Appliance Overrides to enter the
vCenter IP, FQDN, and password.
n For a pre-deployed domain, when adding the DVS name and management network in
Appliance Overrides, ensure that the names match the corresponding DVS name and
management network names in the vCenter.
Field Description
Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.
Select Host Profile Select the host profile from the drop-down list. The selected host profile gets associated with each host in the management domain.
Location The location of the site. Click the button corresponding to the location.
Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.
Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.
Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.
Services You can enable the networking and storage operations for the specific site. You can also enable or disable the compression and deduplication of data through the vSAN Deduplication and Compression option.
Note Deduplication and compression work only on all-flash disk groups. When you enable the vSAN Deduplication and Compression option, you cannot create a hybrid storage group.
8 You can add new CSI categories or use the existing categories from the VMware VSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information.
Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and Override Value.
Field Description
Use Existing Whether to use the existing categories set in the underlying VMware vSphere server. Click the corresponding button to enable or disable the option.
Note When using Use Existing, ensure that you provide values for both the region categories and the zone categories as set in the underlying VMware vSphere server.
n When creating Zone category in VMware VSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating Region category in VMware VSphere, choose Datacentre
under Associable Object Types.
CSI Zone Tag The CSI tagging for compute clusters or hosts.
9 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks. An example switch configuration follows the table.
Field Description
Uplinks Select the network interface card (NIC) for the central site under Uplinks.
Note A central site requires a minimum of two NICs to communicate. NIC details must match the actual configuration across all ESXi servers.
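In the cloud_spec.json sample earlier in this chapter, the equivalent switch configuration for a site defines one distributed switch with two physical NICs as uplinks:
  "switches": [
    {
      "name": "rdc-dvs001",
      "uplinks": [
        {
          "pnic": "vmnic0"
        },
        {
          "pnic": "vmnic1"
        }
      ]
    }
  ]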
Note
n For vMotion and vSAN, the IP pool size should equal the total number of ESXi hosts.
n You can click the + sign under Networks to create an additional VLAN or overlay network to connect with additional applications.
n Add the gateway and prefix length when creating the VLAN application network if you enable the networking service and deploy the edge cluster in NDC, RDC, or Compute Cluster.
n Add the gateway and prefix length when creating the overlay network.
n Ensure that you use the same switch for the NSX overlay, host overlay, and uplink networks for each domain. An example network entry follows the table.
Field Description
Segment Type Segment type of the network. Select the value from the list.
Switch The switch details which the sites use for network access.
Prefix Length The subnet prefix length for the network.
Note The prefix length is applicable only for the IPv4 environment.
Note The gateway address is applicable only for the IPv4 environment.
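As a reference, the following vMotion network entry from the cloud_spec.json sample earlier in this chapter shows how the segment type, VLAN, MTU, gateway, prefix length, and IP pool fit together; the IP pool supplies one address per ESXi host:
  {
    "switch": "rdc-dvs001",
    "type": "vMotion",
    "name": "vMotion",
    "segmentType": "vlan",
    "vlan": 3408,
    "mtu": 9000,
    "mac_learning_enabled": false,
    "gateway": "172.17.8.253",
    "prefixLength": 24,
    "ipPool": [
      {
        "start": "172.17.8.21",
        "end": "172.17.8.30"
      }
    ]
  }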
11 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.
Note
n For NSX-Edge cluster configuration:
n To override the Edge form factor, select the Size from the drop-down menu.
n To override the HA, select the Tier0Mode from the drop-down menu.
n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all
the appliances.
n When overriding the password for the following appliances, ensure that you follow the password guidelines:
n For Cloudbuilder:
n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.
n Minimum password length for root password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.
n vCenter
n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).
n NSXT password
n Minimum length for the root, admin, and audit passwords is 12 characters, and the password must contain at least one lowercase character, one uppercase character, one digit, and one special character. The password must contain at least five different characters and cannot contain three consecutive identical characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.
Field Description
Audit Password Password of the audit user. Applicable only for NSX Manager, and NSX Edge
cluster.
Cluster Password Password for creating the cluster. Applicable only for VMware Telco Cloud
Automation management cluster and bootstrapper cluster.
Name Override The new name of the appliance to override the previous name of the appliance.
IP Index The IP index of the appliance. The value is the fourth octet of the IP address. The initial three octets are populated from the network address provided in the domain.
VMware Telco Cloud Automation uses IP index to calculate the IP address of
the appliance. It adds the IP Index to the base address of the management
network to obtain the IP address of the appliance.
Note
n IP index is applicable only for the IPv4 environment.
n The IP index depends on management subnet prefix length. Ensure that
you provide IP index values within the IP range dictated by that subnet
prefix length. For example, if you use subnet prefix length of 24, then
the subnet has 254 IPs. Hence, the IP index value cannot exceed 254.
If you use prefix length of 27 or 28, then the subnet has 30 or 14
IPs, respectively. The IP index values must then not exceed 30 or 14,
respectively. Ensure that you check the values before adding the IP
index.
What to do next
n Certificate Management.
You can modify the configuration of a management domain, add a host, view the list of
appliances applicable to the management site, and perform certificate management operations
such as generate Certificate Signing Request (CSR), download CSR, and retry the download or
generate CSR operations.
Note
n You cannot modify the CSI tagging information.
n You can add the CSI tagging information only for a new domain.
Procedure
4 Click Edit.
What to do next
n Certificate Management.
Prerequisites
n Regenerate the Self-Signed Certificates on ESXi Hosts. For details, see ESXi Host Certificate.
n Ensure that you configure the gateway for vMotion and vSAN network.
Procedure
4 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot
perform operations in a disabled site.
5 To add an existing workload domain, click the button corresponding to Pre-Deployed. When
you enable Pre-Deployed, you must provide Default Resources.
Note
n For a pre-deployed domain, VMware Telco Cloud Automation shows only the required configurations. Some of these configurations may not appear for non-pre-deployed domains.
n VMware Telco Cloud Automation does not perform any operation on a pre-deployed workload domain. However, you can add a compute cluster and a cell site group to the domain.
n VMware Telco Cloud Automation can auto-detect the resources if only one resource per resource type is available in the vCenter. If multiple resources are available for a resource type, you must fill in the values.
n When you add a pre-deployed domain, always use Appliance Overrides to enter the
vCenter IP, FQDN, and password.
n For a pre-deployed domain, when adding the DVS name and management network in
Appliance Overrides, ensure that the names match the corresponding DVS name and
management network names in the vCenter.
Field Description
Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.
Select Host Profile Select the host profile from the drop-down list. The selected host profile gets associated with each host in the workload domain.
Parent site Select the parent site from the drop-down menu.
Location The location of the site. Click to add the location details.
Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.
Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.
Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.
For a pre-deployed site, VMware Telco Cloud Automation shows vSphere SSO Username. Set the value of vSphere SSO Username to a user belonging to the administrator group in the underlying VMware vCenter Server. If you do not provide a value, the system uses administrator as the default value.
Services You can enable the networking and storage operations for the specific site. You can also enable or disable the compression and deduplication of data through the vSAN Deduplication and Compression option.
Note Deduplication and compression work only on all-flash disk groups. When you enable the vSAN Deduplication and Compression option, you cannot create a hybrid storage group.
7 You can add new CSI categories or use the existing categories from the VMware VSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information.
Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and Override Value.
Field Description
Use Existing Whether to use the existing categories set in the underlying VMware vSphere server. Click the corresponding button to enable or disable the option.
Note When using Use Existing, ensure that you provide the values for both the region categories and the zone categories as set in the underlying VMware vSphere server.
n When creating Zone category in VMware VSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating Region category in VMware VSphere, choose Datacentre
under Associable Object Types.
CSI Zone Tag The CSI tagging for the compute clusters or hosts.
8 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.
Field Description
Uplinks Select the network interface card (NIC) for the regional site under Uplinks.
Note
n For vMotion and vSAN, the IP pool should be equal to the total number of ESXi hosts.
n Add the gateway and prefix length when creating the VLAN application network if you
enable the networking service and deploy the edge cluster in NDC, RDC, or Compute
Cluster.
n Add the gateway and prefix length when creating the overlay network.
n Ensure that you use the same switch for the NSX overlay, host overlay, and uplink networks for each domain.
Field Description
Segment Type Segment type of the network. Select the value from the list.
Switch The switch details which the site uses to access network.
Prefix Length The subnet prefix length for the network.
Note The prefix length is applicable only for the IPv4 environment.
Note The gateway address is applicable only for the IPv4 environment.
10 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.
Note
n For NSX-Edge cluster configuration:
n To override the Edge form factor, select the Size from the drop-down menu.
n To override the HA, select the Tier0Mode from the drop-down menu.
n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all
the appliances.
n When creating the password for the following appliances, ensure that you follow the password guidelines:
n For Cloudbuilder:
n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.
n Minimum password length for root password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.
n vCenter
n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).
n NSXT password
n Minimum length for the root, admin, and audit passwords is 12 characters, and the password must contain at least one lowercase character, one uppercase character, one digit, and one special character. The password must contain at least five different characters and cannot contain three consecutive identical characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.
Field Description
Audit Password Password of the audit user. Applicable only for NSX Manager, and NSX Edge
cluster.
Cluster Password Password for creating the cluster. Applicable only for VMware Telco Cloud
Automation management cluster and bootstrapper cluster.
Name Override The new name of the appliance to override the previous name of the appliance.
IP Index IP index of the appliance. The value is the fourth octet of the IP address. The initial three octets are populated from the network address provided in the domain.
VMware Telco Cloud Automation uses IP index to calculate the IP address of
the appliance. It adds the IP Index to the base address of the management
network to obtain the IP address of the appliance.
Note
n IP index is applicable only for the IPv4 environment.
n The IP index depends on management subnet prefix length. Ensure that
you provide IP index values within the IP range dictated by that subnet
prefix length. For example, if you use subnet prefix length of 24, then
the subnet has 254 IPs. Hence, the IP index value cannot exceed 254.
If you use prefix length of 27 or 28, then the subnet has 30 or 14
IPs, respectively. The IP index values must then not exceed 30 or 14,
respectively. Ensure that you check the values before adding the IP
index.
What to do next
n Certificate Management.
You can modify the configuration of a workload domain, add a host, view the list of appliances
applicable to the management site, and perform certificate management operations such as
generate Certificate Signing Request (CSR), download CSR, and retry the download or generate
CSR operations.
Note
n You cannot modify the CSI tagging information.
n You can add the CSI tagging information only for a new domain.
Procedure
4 Click Edit.
What to do next
n Certificate Management.
Procedure
3 Click Add.
5 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot
perform any operation on a disabled site.
Field Description
Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.
Select Host Profile Select the host profile from the drop-down list. The selected Host profile
gets associated with each host in the compute cluster domain.
Parent Site The management or workload domain that manages the cluster. Select from
the drop-down menu.
Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.
Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.
Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.
Licenses Not applicable. The compute cluster uses the licenses of the parent site.
Services n For a compute cluster, you can activate the NSX services. For certain
workloads, if you do not require these services, you can deactivate
these services.
n To use the network services of the parent site, click the Share Transport
Zones With Parent button.
n You can use vSAN or localstore. Select the value from the drop-down menu.
Click the Enabled button to activate or deactivate the Networking or Storage services.
6 You can add new CSI categories or use the existing categories from the VMware vSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information under Settings.
Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and enter the required information in the corresponding Override Value.
n For a vSAN disabled compute cluster, ensure that the CSI Zone tag name contains
{hostname}. For example, <text_identifier>-{hostname}.
Field Description
Use Existing Whether to use the existing categories set in the underlying VMware
vSphere server. Click the corresponding button to activate or deactivate the
option.
Note When using the Use Existing option, ensure that you provide the values
for both the region and the zone categories as set in the underlying VMware
vSphere server.
n When creating a Zone category in VMware vSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating a Region category in VMware vSphere, choose
Datacenter under Associable Object Types.
CSI Region Tag The CSI tagging for the data center.
CSI Zone Tag The CSI tagging for the compute clusters or hosts.
7 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.
Field Description
Uplinks Select the network interface card (NIC) for the compute cluster under
Uplinks.
Note
n For vMotion and vSAN, the IP pool size should be equal to the total number of ESXi hosts.
For example, a compute cluster with 8 ESXi hosts needs 8 IPs in each of the vMotion and
vSAN pools. If you do not provision the appliances, the vSAN, nsxHostOverlay,
nsxEdgeOverlay, and uplink settings are optional.
n You can click the + sign under Networks to create an additional VLAN or overlay network to
connect with additional applications.
Field Description
Segment Type Segment type of the network. Select the value from the list.
Switch The switch details that the sites use for network access.
Prefix Length The prefix length of the network subnet.
9 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.
n You can override the values of vSAN NFS and NSX Edge Cluster for the compute cluster
and deactivate the deployment of vSAN NFS and NSX Edge Cluster for the compute
cluster.
Field Description
Name Override The new name of the appliance to override the previous name of the appliance.
Field Description
IP Index IP index of the appliance. The value is the fourth octet of the IP address. The
initial three octets are populated from the network address provided in the
domain.
VMware Telco Cloud Automation uses IP index to calculate the IP address of
the appliance. It adds the IP Index to the base address of the management
network to obtain the IP address of the appliance.
You can modify the configuration of a compute cluster and add a host.
Note
n You cannot modify the CSI tagging information.
n You can add the CSI tagging information only for a new domain.
Procedure
4 Click Edit.
Prerequisites
Procedure
3 Click Add.
5 Click the button corresponding to Enabled, to enable the provisioning of the site. You cannot
perform any operation on a disabled site.
6 To add an existing cell site group, click the button corresponding to Pre-Deployed. When
you add a Pre-Deployed cell site group, you can override the following values.
To configure the values, enable the Override and enter the required information in the
corresponding Override Value.
Note
n VMware Telco Cloud Automation does not perform any operation on a pre-deployed
domain.
Field Description
Select Host Profile Select the host profile from the drop-down list. The selected Host profile
gets associated with each host in the cell site group.
Parent Domain Select the parent domain from the list. The parent site manages all the sites
within the cell site group.
Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in the Global
Configuration tab on the Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.
8 You can add new CSI categories or use the existing categories from the VMware vSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information under Settings.
Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and enter the required information in the corresponding Override Value.
n For the CSI Zone tag, ensure that the name contains {hostname}. For example,
<text_identifier>-{hostname}.
Field Description
Use Existing Whether to use the existing categories set in the underlying VMware
vSphere server. Click the corresponding button to activate or deactivate the
option.
Note When using the Use Existing option, ensure that you provide the values
for both the region and the zone categories as set in the underlying VMware
vSphere server.
n When creating a Zone category in VMware vSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating a Region category in VMware vSphere, choose Datacenter
under Associable Object Types.
CSI Region Tag The CSI tagging for the data center.
CSI Zone Tag The CSI tagging for the compute clusters or hosts.
9 Activate or deactivate the Enable datastore customizations option. By default, this feature is
activated. However, you can deactivate it by clicking the toggle button.
Note The Enable datastore customizations field is available only for non-pre-deployed cell
site groups.
n The datastores for all the hosts associated with this domain are named based on the disk
capacity and free space. For example, if the hostname is host201-telco.example.com, the
datastore with the highest capacity is named host201_localDS-0 where host201 is the
prefix, which is the substring preceding the first hyphen (-) and 0 is the index representing
the datastore with the highest capacity. The remaining datastores are named as host201-
DO-NOT-USE-0, host201-DO-NOT-USE-1, and so on, where the indexes 0 and 1 represent the
decreasing order of the free space. 0 represents the highest possible free space.
n The customization is applicable to all the hosts in the domain.
To change the delimiter for extracting prefixes from the host FQDNs, do the following:
a SSH to the TCA VM as admin.
n The datastores for all the hosts associated with this domain are named in the order in
which the datastores are fetched from vCenter. For example, if the host name is host201-
telco.example.com, the datastores are named host201-telco-localDS-0, host201-telco-
localDS-1, and so on, where host201-telco is the prefix, which is the first substring
before the dot (.) in the hostname and the indexes 0 and 1 represent the order in which
the datastores are fetched from vCenter.
10 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.
Field Description
Uplinks Select the network interface card (NIC) for the site under Uplinks.
Note
n The system defines the management network for a cell site group. You can create custom
VLAN-based application networks. All cell sites in a cell site group connect to the same
management network.
n For the application network type, you can enable MAC address learning for the port
groups. To enable MAC address learning, enable the Mac Learning option available under
Networks.
Field Description
Segment Type Segment type of the network. Select the value from the list.
Field Description
Switch The switch details that the sites use for network access.
Prefix Length The prefix length of the network subnet.
Note The Prefix length is applicable only for the IPv4 environment.
Note The gateway address is applicable only for the IPv4 environment.
What to do next
Custom mapping is an optional parameter in the domain specification. If you do not provide an
input, the default mapping and policy are created. You can configure the custom uplink-pnic
mapping and teaming policy using the VMware Telco Cloud Automation web interface or APIs.
Note
n This feature is only applicable to a cell site group.
n You can specify the teaming policy and uplink category mapping only when you are creating
a new cell site group domain. Do not override the settings for cell site group domains for
which the distributed virtual switches are already created.
The following sample domain specification shows two distributed virtual switches with named
uplinks and per-network uplink teaming policies.
{
"name": "test-csg-6",
"type": "CELL_SITE_GROUP",
"enabled": true,
"preDeployed": {
"preDeployed": false
},
"parent": "rdc1",
"switches": [
{
"name": "test-csg-6-dvs001",
"uplinks": [
{
"pnic": "vmnic0",
"name": "PortA1"
},
{
"pnic": "vmnic1",
"name": "PortA2"
}
]
},
{
"name": "test-csg-6-dvs002",
"uplinks": [
{
"pnic": "vmnic2",
"name": "PortB1"
},
{
"pnic": "vmnic3",
"name": "PortB2"
}
]
}
],
"networks": [
{
"type": "application",
"name": "dvs1-app-network-1",
"segmentType": "vlan",
"switch": "test-csg-6-dvs001",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortA1"
],
"standby": [
"PortA2"
],
"unused": []
}
}
},
{
"type": "application",
"name": "dvs2-app-network-1",
"segmentType": "vlan",
"switch": "test-csg-6-dvs002",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortB1"
],
"standby": [
"PortB2"
],
"unused": []
}
}
},
{
"type": "management",
"name": "management",
"segmentType": "vlan",
"switch": "test-csg-6-dvs001",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortA1"
],
"standby": [
"PortA2"
],
"unused": []
}
}
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
}
To configure the custom uplink-pnic mapping and teaming policy from the web interface,
perform the following:
Procedure
4 Select the radio button corresponding to the Cell Site Group for which you want to configure
the uplinks.
6 Click the toggle button to configure uplinks and provide a unique name for each of the
uplinks.
7 Expand Networks and click the toggle button to specify a mapping for the uplink
categorization.
8 Click Save.
You can modify the configuration of a cell site group, add a host, and modify the network
configurations related to the cell site group.
Note
n Modifying CSI tagging information is not applicable.
n You can add the CSI tagging information only for newly added hosts to a cell site group after
the resync.
n Once a Cell Site Group domain has provisioned or failed hosts, changing the parent of this
Cell Site Group domain and then resyncing it does not migrate the hosts to the vCenter of the
newly selected parent.
n You can specify the parent of a Cell Site Group only when adding or creating the Cell Site
Group domain. You cannot change the parent of a Cell Site Group once hosts are added to it.
Prerequisites
Procedure
b Click Edit.
Procedure
2 Click the site type under which you want to synchronize the domain data.
4 Click Resync.
The Confirm Resync dialog box appears with the Partial Resync check box selected by
default. The Partial Resync option synchronizes the data of the unprovisioned cell site group
and retries for the failed host under the unprovisioned cell site group.
You can add a host to any site or site cluster. A minimum number of hosts are required for each
site type to function. You can define the minimum number of hosts for each site when adding the
site.
Note If a cell site domain has multiple Distributed Virtual Switches, the switch with which the
management network is associated should also be mapped to use the vmnic that has the vmk0
VMkernel network interface attached.
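A minimal sketch of this mapping, in the domain specification format shown in the teaming policy example earlier (the switch, uplink, and network names are illustrative, and the fragment omits the rest of the specification): the switch that carries the management network includes the physical NIC that has vmk0 attached, vmnic0 in this sketch.
{
  "switches": [
    {
      "name": "csg-dvs001",
      "uplinks": [
        {
          "pnic": "vmnic0",
          "name": "Port1"
        }
      ]
    }
  ],
  "networks": [
    {
      "type": "management",
      "name": "management",
      "segmentType": "vlan",
      "switch": "csg-dvs001",
      "vlan": 0,
      "mtu": 1500
    }
  ]
}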
Prerequisites
n A site type for which you want to add a host is already added in Domains.
n When adding a host to the cell site group, ensure that you have at least either the parent
site or the cell site group provisioned. You cannot add a host to a cell site group that has an
unprovisioned parent site.
n Ensure that the certificate is generated with the server hostname as SAN by performing the
following:
a Log in to the ESXi host using an SSH client such as PuTTY.
b Regenerate the certificates by running the following command:
/sbin/generate-certificates
c Restart the hostd and vpxa services by running the following command:
/etc/init.d/hostd restart && /etc/init.d/vpxa restart
Procedure
2 Select the data center for which you want to add a host.
Fields Description
You can add the IPMI information for the sites that have host profiles configured with BIOS
and firmware details.
n IPMI Username - User name to access the intelligent platform management interface
(IPMI).
n IPMI Address(FQDN) - Address of the IPMI interface. You must provide the fully qualified
domain name.
n Override datastore customization - Click the toggle button to override the datastore
customization configured at the domain level.
If you activate this option, the Enable datastore customizations field is made available.
n Enable datastore customizations - Click the toggle button to activate the datastore
customization on this host.
If you activate the datastore customization, the datastores on this host are named
based on the disk capacity and free space.
If you deactivate the datastore customization, the datastores on this host are named
in the order in which the datastores are fetched from vCenter.
Note
n When adding a host to a pre-deployed cell site group, you must add only the pre-
deployed host.
n A pre-deployed host means a host already added to the vCenter and configured as
required.
n Use Above credentials for all hosts - To use the same user name and password
for each host, select the checkbox.
n Use above IPMI credentials for all hosts - To use the same user name and
password to access IPMI for each host, select the checkbox.
7 Click Save.
Edit a Host
Modify an already created host.
Prerequisites
Procedure
2 Click the site type under which you want to modify the host.
4 Click Edit.
6 Select the host to edit. You can perform the following operations on the selected host:
n To delete a host with errors, click Delete and select Force Delete from the Delete page.
Procedure
3 Select the cell site group under which you want to synchronize the host data.
6 Click Resync.
The Confirm Resync dialog box appears with the Partial Resync check box selected by
default.
7 Determine whether to choose the Partial Resync option based on the following:
n If you choose the Partial Resync option, hosts are processed based on any of the
following conditions:
n Status of the cell site host is FAILED, and the status of the host setting is NOT
CONFIGURED
n Status of the cell site host is PROVISIONED, and the status of the host setting is
FAILED
n If you do not choose the Partial Resync option, hosts with any status except for IN
PROGRESS and DELETING are processed.
Delete a Domain
Starting from VMware Telco Cloud Automation version 2.1, the process for deleting domains has
changed.
Previous Behavior
Previously, to delete a domain, you had to remove the domain definition from the domains list,
and VMware Telco Cloud Automation deleted the domain at the back end.
For example, the following code snippet is a sample cloud spec file with two domains - test1 and
test2.
{
"domains": [
{
"name": "test1",
...
},
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...
}
}
To delete test1, you had to modify this cloud spec file by removing test1 from it.
{
"domains": [
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...
}
}
Current Behavior
Now, to delete a domain you can add a list of strings (names of the domains that you need to
delete) to the deleteDomains field in the cloud spec file. For example, "deleteDomains": ["cdc1",
"rdc1"].
The domain to be deleted can either remain in or be removed from the domains list. The following
code snippet is an example of the new behavior, where test1 is provided in the deleteDomains list
to delete that domain.
{
"domains": [
{
"name": "test1",
...
},
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...
},
"deleteDomains": ["test1"]
}
You can delete an enabled domain that is in a provisioned state and has one or more hosts in the
DELETE_FAILED state.
Prerequisites
n Remove the infrastructure associated with the domain. For example, management appliances
like vCenter, NSX manager, vRLI, vRO, TCA-CP, DVS, Portgroups, Host Folders, Network
Folders, DataCenters, Clusters, and ESXi hosts.
Procedure
1 Stop the tcf-manager docker container with the command docker stop tcf-manager .
2 Navigate to /common/lib/docker/volumes/tcf-manager-config/_data/ .
a Open the cloud_spec.json file, remove the entries of the domain as required.
b Open the cloud_config.json file, remove the entries of the domain as required.
c Open the ip_usage.json file, remove the entries of the domain as required.
3 Navigate to /common/lib/docker/volumes/tcf-manager-specs/_data/ .
a Open the certificates folder, remove the certificates of the domain as required.
b Open the csrs folder, remove the csr entries of the domain as required.
c Open the private folder, remove the entries of the domain as required.
4 Start the tcf-manager docker container with the command docker start tcf-manager.
Certificate Management
You can perform Certificate Signing Request (CSR) operations for a domain.
You can generate the CSR, upload SSL server certificate, and retry to generate the CSR.
Note
n Telco Cloud Automation supports only self-signed certificates.
n In the certificate, add a new line after -----BEGIN CERTIFICATE----- and before -----END
CERTIFICATE-----.
n In the private key, add a new line after -----BEGIN PRIVATE KEY----- and before -----END
PRIVATE KEY-----.
Prerequisites
Certificate Authority (CA) is added. For details on adding CA, see Add Certificate Authority.
Procedure
4 Click Edit.
n To generate the CSR, click Generate CSR. It generates the CSR, signs it, and applies
the certificate to the selected appliances.
Viewing Tasks
You can view the status of the current and the past tasks executed in Infrastructure Automation.
You can view the status of all the tasks. This includes the current task and the older tasks. You
can view the progress, status, and the start and end time of the task.
Procedure
Containerized applications are more lightweight and flexible than virtual machines, and they
share the operating system. In this way, Kubernetes clusters allow for applications to be more
easily developed, moved, and managed. Kubernetes clusters allow containers to run across
multiple machines and environments: virtual, physical, cloud-based, and on-premises. For more
information about Kubernetes clusters and their components, see the Kubernetes documentation at
https://kubernetes.io/docs/concepts/overview/.
There must be a minimum of one controller node and one worker node for a Kubernetes cluster
to be operational. For production and staging, the cluster is distributed across multiple worker
nodes. For testing, the components can all run on the same physical or virtual node.
Network functions require special customizations such as Real-Time Kernel and HugePages on
Kubernetes Worker nodes. The advantage of deploying Kubernetes clusters through VMware
Telco Cloud Automation is that it customizes the Kubernetes clusters according to the network
function requirements before deploying the CNFs.
Note The Node Customization feature is applicable only when you deploy Kubernetes clusters
through VMware Telco Cloud Automation.
Figure: Management cluster deployment workflow - Onboard vSphere VIM, Design Cluster Template for Management Cluster, and Deploy Kubernetes Management Cluster on vSphere VMs.
You can upgrade the Kubernetes cluster through VMware Telco Cloud Automation.
Note When you upgrade a management cluster to the latest version, the certificate renewal of
the cluster is automatically enabled and the number of days defaults to 90.
The following table lists the Kubernetes upgrade compatibility for the Management cluster when
upgrading from VMware Telco Cloud Automation.
Before upgrading Kubernetes to the latest version, consider the following constraints and
prepare for the upgrade plan:
n VMware Telco Cloud Automation preserves the customization performed through previous
CNF instantiate / upgrade on the nodepools of the cluster. Any manual changes performed
directly on the nodes are not preserved.
n Applications may face downtime during the Kubernetes upgrade and may take some time to be
available for operations.
n Check and upgrade the required node pools in the Workload cluster.
n The IP addresses of master nodes and the worker nodes change after upgrade.
n If the upgrade fails, you can correct the configuration and perform the upgrade again.
n Ability to create, upgrade, and modify the workload cluster managed through the
management cluster.
n Ability to upgrade and instantiate the CNF in the workload cluster managed through the
management cluster.
Prerequisites
n Create an upgrade plan for upgrading the cluster instance, considering the impact of
cluster downtime.
n Take a backup of any manual customizations added to the clusters. You must take the backup
manually.
Note Record all the manual customizations added to the clusters.
Procedure
4 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.
6 In the Select Version field, select the Kubernetes version to upgrade from the list.
7 In the Virtual Machine Template, click the option to select the VM template applicable for the
new version of Kubernetes.
8 Click Upgrade.
What to do next
To get the latest IP address details of the node, view the Cluster Instances page.
When you define the Kubernetes cluster template, select whether it is a Management cluster
type or a Workload cluster type.
n Management cluster - A Management cluster is a Kubernetes cluster that performs the role
of the primary management and operational center. You use the Management cluster for
managing multiple Workload clusters.
n Workload cluster - The clusters where the actual application resides. Deploy network
functions on the Workload clusters.
When creating a Kubernetes cluster template for a Management cluster or a Workload cluster,
you must provide two types of configuration information:
n Cluster Configuration - Specify the details about the Container Storage Interfaces (CSI) such
as vSphere-CSI, NFS Client, and Container Network Interface (CNI) such as Antrea, Calico, and
Multus, version of Kubernetes, and tools such as Helm Charts.
n Master Node and Worker Node Configuration - Here, you specify the details about the
master node virtual machine and the worker node virtual machines. Specify details such as
the storage, CPU, memory size, number of networks, labels, number of replicas for the master
nodes, and worker nodes, and so on.
Addon Versions
The add-ons table lists each add-on by Type and Name, along with its VMware Telco Cloud
Automation version, VMware Tanzu Kubernetes Grid version, and Kubernetes version.
Note A few of the add-on versions are controlled by TKG and therefore the versions may change
after the release. However, the version documented in the preceding table is the minimum
version available. For information on the updated versions, refer to the TKG documentation.
Prerequisites
To perform this operation, you require a role with Infrastructure Design privilege.
Procedure
2 Go to Infrastructure > CaaS Infrastructure > Cluster Templates and click Add.
Note The supported Container Network Interface (CNI) for a Management cluster is Antrea.
4 Click Next.
n Memory - Memory in GB
n Replica - Number of controller node VMs to be created. The ideal number of replicas for
production or staging deployment is 3.
n Networks - Enter the labels to group the networks. The minimum number of labels
required to connect to the management network is 1. Network labels are used for
providing networks inputs when deploying a cluster. Meaningful network labels such
as N1, N2, N3, and so on, help the deployment users provide the correct network
preferences. To add more labels, click Add.
Note For the Management network, master node supports only one label.
n Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to
the Kubernetes node. To add more labels, click Add.
6 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
7 Click Next.
8 In the Worker Node Configuration tab, add a node pool. A node pool is a set of nodes
that have similar properties. Pooling is useful when you want to group the VMs based on
the number of CPUs, storage capacity, memory capacity, and so on. You can add one node
pool to a Management cluster and multiple node pools to a Workload cluster, with different
groups of VMs. To add a node pool, enter the following details:
n Memory - Memory in GB
n Networks - Enter the labels to group the networks. Network labels provide networks
inputs when deploying a cluster. To add more labels, click Add.
n Labels - Enter the appropriate labels for this profile. These labels are added to the
Kubernetes node. To add more labels, click Add.
9 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
Results
What to do next
This topic lists the steps to add AKOO using the Edit Cluster Configuration tab.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 In the Cluster Configuration tab, under Networking, click Add and select ako-operator.
Option Description
AVI Controller
Controller Host Enter the AVI controller host name. The format is
[scheme://]address[:port].
n Scheme: HTTP or HTTPS. Defaults to HTTPS if the
scheme is not specified.
n Address: IPv4 address or the host name of the AVI
controller.
n Port: If port is not specified, it defaults to the port of
the AVI controller.
User Name Enter the user name to log in to the AVI controller.
Trusted Certificate (Optional) Paste the trusted certificate in native multiline format for
secure communication with the AVI controller.
Option Description
Cloud Name Enter the cloud name configured in the AVI Controller.
Default Service Engine Group Enter the service engine group name configured in the
AVI Controller.
Default VIP Network Enter the VIP network name in the AVI Controller.
Default VIP Network CIDR Enter the VIP network CIDR in the AVI Controller.
Note If the certificate or password of the Avi Controller expires, you can edit the AKO
Operator configurations with the new certificate or password.
7 Click Add.
Note
n Ensure that the storage size is at least 50 GB.
n Ensure that the network label length does not exceed 15 characters.
n Editing the Kubernetes cluster template does not change the cluster instances that are
already deployed.
To perform this operation, you require a role with Infrastructure Design privileges.
Procedure
2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.
4 Click Edit.
5 In the Edit Kubernetes Template wizard, make the required updates to the template details,
master node configuration, and worker node configuration fields.
Results
Note To make sure that all the features are available, download or upload a Kubernetes cluster
template of the same VMware Telco Cloud Automation version.
Procedure
2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.
4 To upload the JSON file to a different environment, navigate to the environment and log in to
the VMware Telco Cloud Automation web interface.
7 Click Upload.
The cluster template uploads to your environment and is available in the CaaS Infrastructure
> Cluster Templates tab.
Note You cannot delete a Kubernetes template when it is being used for deploying a cluster.
Procedure
2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.
4 Click Delete.
Results
For steps to upgrade the Kubernetes version of a cluster template, see Edit a Kubernetes Cluster
Template.
Prerequisites
n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.
n A network must be present with the DHCP range and the static IP address of the same
subnet.
Procedure
Note Depending on whether the VMware Telco Cloud Automation setup has internet access or is
air-gapped, the options available for the cluster may change.
n If you have saved a validated Management cluster configuration that you want to
replicate on this cluster, click Upload on the top-right corner and upload the JSON file.
The fields are then auto-populated with this configuration information and you can edit
them as required. You can also use the Copy Spec function of VMware Telco Cloud
Automation instead of the JSON file. For details, see Copy Spec and Deploy New.
n If you want to create a Management cluster configuration from the beginning, perform
the next steps.
Under the Advanced Options, you can select the Infrastructure for Management Cluster
LCM. VMware Telco Cloud Automation uses this VIM and the associated control planes for
cluster LCM operations.
5 Click Next.
6 The Select Cluster Template tab displays the available Kubernetes cluster templates. Select
the Management Kubernetes cluster template that you have created.
Note If the template displays as Not Compatible, edit the template and try again.
7 Click Next.
n Name - Enter the cluster name. The cluster name must be compliant with DNS hostname
requirements as outlined in RFC-952 and amended in RFC-1123.
n Password - Create a password to log in to the Master and Worker nodes. The default
user name is capv.
Note Ensure that the password meets the minimum requirements displayed in the UI.
n OS Image With Kubernetes - The pop-up menu displays the OS image templates in your
vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS
image with the selected Kubernetes version. If there are no templates, ensure that you
upload them to your vSphere environment.
n IP Version - Whether to use the IPv4 or IPv6 for cluster deployment. Select the value
from the drop-down list.
n Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that
provides load-balancing services to the cluster API server. This kube-vip pod uses a
static virtual IP address to load-balance API requests across multiple nodes. Assign an
IP address that is not within your DHCP range, but in the same subnet as your DHCP
range. For example (the addressing is illustrative): if the node network is 10.0.10.0/24
with a DHCP range of 10.0.10.100-10.0.10.200, an address such as 10.0.10.50 is suitable
because it is in the same subnet but outside the DHCP range.
n Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure
logs of all the nodes in the cluster.
n vSphere Cluster - Select the default vSphere cluster on which the Master and Worker
nodes are deployed.
n Resource Pool - Select the default resource pool on which the Master and Worker nodes
are deployed.
n VM Folder - Select the virtual machine folder on which the Master and Worker nodes are
placed.
n Datastore - Select the default datastore for the Master and Worker nodes to use.
n MTU (Optional) - Select the maximum transmission unit (MTU) in bytes for management
interfaces of control planes and node pools. If you do not select a value, the default value
is 1500.
n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.
n Airgap & Proxy Settings - Use this option when you need to configure the Airgap or the
Proxy environment for VMware Telco Cloud Automation. If you do not want to use the
Airgap or Proxy, select None.
Note You must use either the airgap or the proxy option in an IPv6 setup. Do not select None
for an IPv6 setup.
n In an air-gapped environment:
n If you have added an air-gapped repository, select the repository using the
Airgap Repository drop-down menu.
n If you have not added an air-gapped repository yet and want to add one now,
select Enter Repository Details:
n In a proxy environment:
n If you have added a proxy, select the proxy using the Proxy Repository drop-
down menu.
n If you have not added proxy yet and want to add one now, select Enter Proxy
Details and provide the following details:
n HTTP Proxy - To route the HTTP requests through proxy, enter the URL or full
domain name of HTTP proxy. You must use the format FQDN:Port or IP:Port.
n HTTPS Proxy - To route the HTTPs requests through proxy, enter the URL
or full domain name of HTTPs proxy. You must use the format FQDN:Port or
IP:Port.
Note You must add the cluster node network CIDR, vCenter FQDN(s), Harbor
FQDN(s), and any other hosts for which you want to bypass the proxy to this list.
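For illustration only (the proxy host, port, and bypass entries are assumptions for an example environment), the proxy fields could be filled in as follows, using the FQDN:Port format described above:
HTTP Proxy: proxy.example.com:3128
HTTPS Proxy: proxy.example.com:3128
Hosts that bypass the proxy (see the preceding note): 10.0.10.0/24, vcenter01.example.com, harbor.example.com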
9 In Harbor, if you have defined a Harbor repository as a part of your Partner system, click Add
> Select Repository. To add a new repository, click Add > Enter Repository Detail.
10 Click Next.
11 In the Control Plane Node Configuration tab, provide the following details:
Note VMware Telco Cloud Automation displays the allocated CPU, Memory, and Storage
details along with number of Replica details of the master node. These configurations depend
on the Cluster template selected for Kubernetes Cluster deployment.
n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the Master
node, select the vSphere cluster from here.
n Resource Pool (Optional) - If you want to use a different resource pool for the master
node, select the resource pool from here.
n Datastore (Optional) - If you want to use a different datastore for the master node, select
the datastore from here.
n Domain Name Servers - You can override the DNS. To add a DNS, click Add.
12 Click Next.
Note VMware Telco Cloud Automation displays the allocated CPU, Memory, and Storage
details along with the number of Replica details of the worker node. These configurations
depend on the Cluster template selected for Kubernetes Cluster deployment.
n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the worker
node, select the vSphere cluster from here.
n Resource Pool (Optional) - If you want to use a different resource pool for the worker
node, select the resource pool from here.
n Datastore (Optional) - If you want to use a different datastore for the worker node, select
the datastore from here.
n Domain Name Servers - You can override the DNS. To add a DNS, click Add.
14 Click Next and review the configuration. You can download the configuration and reuse it for
deploying a cluster with a similar configuration.
15 Click Deploy.
When deploying a management cluster, the certificate renewal of the cluster is automatically
enabled and the number of days defaults to 90.
If the operation is successful, the cluster is created and its status changes to Active. If the
operation fails, the cluster status changes to Not Active. If the cluster fails to create, delete
the cluster, upload the previously downloaded configuration, and recreate it.
Results
The Management cluster is deployed and VMware Telco Cloud Automation automatically pairs it
with the cluster of the site.
Note You can deploy one Management cluster at a time. Parallel deployments are queued and
deployed in sequence.
What to do next
n You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation
from the Kubernetes Cluster tab.
n To view more details of the Kubernetes cluster that you have deployed, change the
password, or to add syslog servers, go to CaaS Infrastructure > Cluster Instances and click
the cluster.
Procedure
3 Click ⋮ corresponding to the management cluster that you want to edit and select Edit
Control Plane Node Configuration.
4 (Optional) Modify the value of Replicas to scale down or scale up the control plane nodes.
5 (Optional) To activate the machine health check, select Configure Machine Health Check. For
more information, see Machine Health Check.
6 Under Advanced Configuration, you can configure the node start-up timeout duration and
set the unhealthy conditions.
a Node Start Up Timeout- (Optional) Enter the time duration for Machine Health Check to
wait for the node to join the cluster. If the node does not join within the specified time,
Machine Health Check considers it unhealthy and starts the remediation process.
b Node Unhealthy Conditions - Set unhealthy conditions for the nodes. If any of these
conditions are met, Machine Health Check considers these nodes as unhealthy and starts
the remediation process.
7 Click SAVE.
Procedure
3 Click ⋮ corresponding to the management cluster that you want to edit and select Edit
Worker Node Configuration.
4 Select the node pool that you want to edit and click Edit.
5 Modify the value of Replicas to scale down or scale up the node pool.
6 To activate the machine health check, select Configure Machine Health Check. For more
information, see Machine Health Check.
7 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.
a Node Start Up Timeout- (Optional) Enter the time duration for Machine Health Check to
wait for the node to join the cluster. If the node does not join within the specified time,
Machine Health Check considers it unhealthy.
b Node Unhealthy Conditions - Set unhealthy conditions for the nodes. If any of these
conditions are met, Machine Health Check considers these nodes as unhealthy and starts
the remediation process.
8 Click UPDATE.
The following table lists the Kubernetes upgrade compatibility for the Workload cluster when
upgrading from VMware Telco Cloud Automation.
The compatibility table lists, for each VMware Telco Cloud Automation version and existing
Kubernetes version, the supported upgrade target versions: v1.22.17, v1.23.16, and v1.24.10.
Note Before upgrading to TCA 2.3, it is mandatory to upgrade all the Kubernetes clusters
deployed as part of TCA 2.1.x in TCA 2.2.
VMware Telco Cloud Automation uses VMware Tanzu Kubernetes Grid to create VMware Tanzu
Kubernetes clusters. VMware Tanzu Kubernetes Grid has concepts such as Management and
Workload clusters. The Management cluster manages the Workload clusters and both these
clusters can be deployed on different vCenter Servers.
For more information about the VMware Tanzu Kubernetes Grid concepts, see Tanzu Kubernetes
Grid Concepts at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html.
Prerequisites
n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.
n You must have created a Management cluster or uploaded a Workload cluster template.
n A network must be present with a DHCP range and a static IP of the same subnet.
n For region: the vSphere Datacenter has tags attached for the selected category.
n For zone: the vSphere Cluster or the hosts under the vSphere cluster have tags attached for
the selected category. Ensure that the vSphere Cluster and the hosts under the vSphere
cluster do not share the same tags.
Procedure
n If you have saved a validated Workload cluster configuration that you want to replicate
on this cluster, click Upload on the top-right corner and upload the JSON file. The fields
are then auto-populated with this configuration information and you can edit them as
required. You can also use the Copy Spec function of VMware Telco Cloud Automation
instead of the JSON file. For details, see Copy Spec and Deploy New.
n If you want to create a Workload cluster configuration from the beginning, perform the
next steps.
4 Click Next.
5 The Select Cluster Template tab displays the available Kubernetes clusters. Select the
Workload Kubernetes cluster template that you have created.
Note If the template displays as Not Compatible, edit the template and try again.
6 Click Next.
n Name - Enter the cluster name. The cluster name must be compliant with DNS hostname
requirements as outlined in RFC-952 and amended in RFC-1123.
n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.
n Password - Create a password to log in to the Master node and the Worker node. The
default user name is capv.
Note Ensure that the password meets the minimum requirements displayed in the UI.
n OS Image With Kubernetes - The pop-up menu displays the OS image templates in your
vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS
image with the selected Kubernetes version. If there are no templates, ensure that you
upload them to your vSphere environment.
n Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that
provides load-balancing services to the cluster API server. This kube-vip pod uses a
static virtual IP address to load-balance API requests across multiple nodes. Assign an
IP address that is not within your DHCP range, but in the same subnet as your DHCP
range.
n Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure
logs of all the nodes in the cluster.
n vSphere Cluster - Select the default vSphere cluster on which the Master and the Worker
nodes are deployed.
n Resource Pool - Select the default resource pool on which the Master and Worker nodes
are deployed.
n VM Folder - Select the virtual machine folder on which the Master and Worker nodes are
placed.
n Datastore - Select the default datastore for the Master and Worker nodes to use.
n MTU (Optional) - Select the maximum transmission unit (MTU) in bytes for management
interfaces of control planes and node pools. If you do not select a value, the default value
is 1500.
n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.
n Airgap & Proxy Settings - Use this option when you need to configure the Airgap or the
Proxy environment for VMware Telco Cloud Automation. If you do not want to use the
Airgap or the Proxy, select None.
n In an air-gapped environment:
n If you have added an air-gapped repository, select the repository using the
Airgap Repository drop-down menu.
n If you have not added an air-gapped repository yet and want to add one now,
select Enter Repository Details:
n In a proxy environment
n If you have added a proxy repository, select the repository using the Proxy
Repository drop-down menu.
n If you have not added a proxy repository yet and want to add one now, select
Enter Repository Details:
n HTTP Proxy - To route the HTTP requests through the proxy, enter the URL or
full domain name of the HTTP proxy.
n HTTPS Proxy - To route the HTTPs requests through the proxy, enter the URL
or full domain name of the HTTPs proxy.
n Harbor - If you have defined a Harbor repository as a part of your Partner system, click
Add > Select Repository. To add a new repository, click Add > Enter Repository Detail.
n NFS Client - Enter the server IP address and the mount path of the NFS client. Ensure that
the NFS server is reachable from the cluster. The mount path must be accessible to read
and write.
n If all the nodes inside the Kubernetes cluster do not have access to the shared datastore,
you can enable multi-zone. To enable multi-zone, provide the following details in the
vSphere CSI:
n Enable Multi-Zone - Click the corresponding button to enable the multi-zone feature.
n Region - Select the region from list of categories. VMware Telco Cloud Automation
obtains the information of categories created in the VMware vSphere server and
displays the list.
Note If you cannot find the region in the list, click Force Refresh to obtain the latest
list of categories from the VMware vSphere server.
n Zone - Select the zone from list of categories. VMware Telco Cloud Automation
obtains the information of zones created in the VMware vSphere server and displays
the list.
Note If you cannot find the zone in the list, click Force Refresh to obtain the latest list
of categories from the VMware vSphere server.
n vSphere CSI Datastore (Optional) - Select the vSphere CSI datastore. This datastore must
be accessible from all the nodes in the cluster. This datastore is provided as parameter
to default Storage Class. When you enable the multi-zone, the vSphere CSI Datastore is
disabled.
8 Click Next.
9 In the Control Plane Node Configuration tab, provide the following details:
n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the master
node, select the vSphere cluster from here.
n Resource Pool (Optional) - If you want to use a different resource pool for the master
node, select the resource pool from here.
n Datastore (Optional) - If you want to use a different datastore for the master node, select
the datastore from here.
n Domain Name Servers - You can override the DNS. To add a DNS, click Add.
10 Click Next.
11 In the Worker Node Configuration tab, provide the following details for each node pool
defined in the template:
n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the worker
node, select the vSphere cluster from here.
n Resource Pool (Optional) - If you want to use a different resource pool for the worker
node, select the resource pool from here.
n Datastore (Optional) - If you want to use a different datastore for the worker node, select
the datastore from here.
12 Click Next and review the configuration. You can download the configuration and reuse it for
deploying a cluster with a similar configuration.
13 Click Deploy.
If the operation is successful, the cluster is created and its status changes to Active. If the
operation fails, the cluster status changes to Not Active. If the cluster fails to create, delete
the cluster, upload the previously downloaded configuration, and recreate it.
Results
The Workload cluster is deployed and VMware Telco Cloud Automation automatically pairs it
with the cluster's site.
What to do next
n You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation
from the Kubernetes Cluster tab.
n To view more details of the Kubernetes cluster that you have deployed, go to CaaS
Infrastructure > Cluster Instances and click the cluster.
Prerequisites
To perform this operation, you require a role with Infrastructure Design privileges.
Procedure
2 Go to Infrastructure > CaaS Infrastructure > Cluster Templates and click Add.
4 Click Next.
n Kubernetes Version - Select the Kubernetes version from the drop-down menu. For the
list of supported versions, see Table 1-1. Supported Features on Different VIM Types.
n CNI - Click Add and select a Container Network Interface (CNI). The supported CNIs are
Multus, Calico, and Antrea. To add additional CNIs, click Add under CNI.
Note
n Either Calico or Antrea must be present, but only one of them. Multus is mandatory
when the network functions require any CNI plug-ins such as SRIOV or Host-Device.
n You can add CNI plug-ins such as SRIOV as a part of Node Customization when
instantiating, upgrading, or updating a CNF.
Note VMware Telco Cloud Automation does not support dhcp in an IPv6
environment.
bandwidth
dhcp
flannel
host-local
loopback
ptp
static
vlan
bridge
firewall
host-device
ipvlan
macvlan
portmap
sbr
tuning
n CSI - Click Add and select a Container Storage Interface (CSI) such as vSphere CSI or
NFS Client. For more information, see https://vsphere-csi-driver.sigs.k8s.io/ and https://
github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
Note You can create a persistent volume using vSphere CSI only if all nodes in the cluster
have access to a shared datastore.
n Timeout (Optional) (For vSphere CSI) - Enter the CSI driver call timeout in seconds.
The default timeout is 300 seconds.
n Storage Class - Enter the storage class name. This storage class is used to provision
Persistent Volumes dynamically. A storage class with this name is created in the
Kubernetes cluster. The storage class name defaults to vsphere-sc for the vSphere
CSI type and nfs-client for the NFS Client type. A usage sketch follows this list.
n Default Storage Class - To set this storage class as default, enable the Default
Storage Class option. The storage class defaults to True for the vSphere CSI type.
It defaults to False for the NFS Client type. Only one of these types can be the default
storage class.
Note Only one vSphere CSI type and one NFS Client type storage class can be
present. You cannot add more than one storage class of the same type.
n Tools - The current supported tool is Helm. Helm helps in troubleshooting the deployment
or upgrade of a network function.
n Helm 3.x is pre-installed in the cluster and the option to select the Helm 3.x is removed
from cluster template.
n Helm version 2 is mandatory when the network functions deployed on this cluster
depend on Helm v2. The supported Helm 2 version is 2.17.0.
n If you provide Helm version 2, VMware Telco Cloud Automation automatically deploys
Tiller pods in the Kubernetes cluster. If you require Helm CLI to interact with your
Kubernetes cluster for debugging purposes, install Helm CLI manually.
Note If you require any other version of Helm, apart from the installed versions, you
must install the required versions manually.
Click Add and select Helm from the drop-down menu. Enter the Helm version.
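As a usage sketch for the storage classes described above (not part of the product configuration; the claim name and requested size are illustrative assumptions, and vsphere-sc is the default storage class name noted earlier), a PersistentVolumeClaim that provisions a volume through the vSphere CSI storage class could look like the following Kubernetes manifest, expressed in JSON:
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "cnf-data-claim"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "storageClassName": "vsphere-sc",
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
If the vSphere CSI storage class is marked as the default storage class, a claim that omits storageClassName is served by the same class.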
6 Click Next.
n Name - Name of the pool. The node pool name cannot be greater than 36 characters.
n Memory - Memory in GB
n Replica - Number of controller node VMs to be created. The ideal number of replicas for
production or staging deployment is 3.
n Networks - Enter the labels to group the networks. The minimum number of labels
required to connect to the management network is 1. Network labels are used for
providing networks inputs when deploying a cluster. Meaningful network labels such
as N1, N2, N3, and so on, help the deployment users provide the correct network
preferences. To add more labels, click Add.
n Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to
the Kubernetes node. To add more labels, click Add.
Note For the Management network, master node supports only one label.
8 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
9 In the Worker Node Configuration tab, add a node pool. A node pool is a set of nodes that
have similar properties. Pooling is useful when you want to group the VMs based on the number
of CPUs, storage capacity, memory capacity, and so on. You can add multiple node pools with
different groups of VMs. Each node pool can be deployed on a different cluster or a resource
pool.
Note All Worker nodes in a node pool contain the same Kubelet and operating system
configuration. Deploy one network function with infrastructure requirements on one node
pool.
You can create multiple node pools for the following scenarios:
n When you require the Kubernetes cluster to be spanned across multiple vSphere clusters.
n When the cluster is used for multiple network functions that require node customizations.
To add a node pool, enter the following details:
n Name - Name of the node pool. The node pool name cannot be greater than 36
characters.
n Memory - Memory in MB
n Networks - Enter the labels to group the networks. Networks use these labels to provide
network inputs during a cluster deployment. Add additional labels for network types such
as IPvlan, MacVLAN, and Host-Device. Meaningful network labels such as N1, N2, N3,
and so on, help users provide the correct network preferences during deployment. It is
mandatory to include a management interface label. SR-IOV interfaces are added to the
Worker nodes when deploying the network functions.
Apart from the management network, which is always the first network, the other labels
are used as interface names inside the Worker nodes. For example, when you deploy a
cluster using the template with the labels MANAGEMENT, N1, and N2, the Worker nodes
interface names are eth0, N1, N2. To add more labels, click Add.
n Labels - Enter the appropriate labels for this profile. These labels are applied to the
Kubernetes node and you can use them as node selectors when instantiating a network
function; see the sketch following the CPU Manager Policy step below. To add more labels,
click Add.
10 Under CPU Manager Policy, set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes, see
the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/cpu-
management-policies/.
Note For CPU-intensive workloads, use Static as the CPU Manager Policy.
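The following is a minimal sketch, not a VMware-supplied manifest, of how node pool labels and the Static CPU Manager Policy come together (the pod name, image, label key and value, and resource sizes are illustrative assumptions). A container whose CPU and memory requests equal its limits, with a whole number of CPUs, falls into the Guaranteed QoS class and is granted exclusive cores under the Static policy, and the nodeSelector schedules the pod onto nodes that carry the node pool label. The manifest is expressed in JSON:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "cnf-example-pod"
  },
  "spec": {
    "nodeSelector": {
      "workload-type": "du"
    },
    "containers": [
      {
        "name": "cnf-container",
        "image": "registry.example.com/cnf-image:1.0",
        "resources": {
          "requests": {
            "cpu": "4",
            "memory": "8Gi"
          },
          "limits": {
            "cpu": "4",
            "memory": "8Gi"
          }
        }
      }
    ]
  }
}
With the Default policy, the same pod runs on the shared pool of CPUs instead of pinned cores.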
11 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.
12 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.
a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.
b Set unhealthy conditions for the nodes. If any of these conditions are met, Machine Health
Check considers these nodes as unhealthy and starts the remediation process.
13 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
Results
What to do next
Note When you transform a v1 workload cluster to a v2 workload cluster, the certificate renewal
of the cluster is automatically enabled and the number of days defaults to 90.
Prerequisites
Note
n For a successful transformation of node pools, you must perform the sync esx info API call
before initiating a transform of imported v1 clusters.
n VMware Telco Cloud Automation changes the node pool name from <existing nodepool
name> to <cluster name>-<existing nodepool name>. For example, if a Workload Cluster
w1 has a node pool named np1, then after transformation the node pool name becomes
w1-np1.
n You cannot use the v1 API on the transformed Workload Cluster. You can use the v2 API
to manage all the life-cycle management operations on this Workload Cluster.
Procedure
The CaaS Infrastructure dashboard lists the v1 and v2 clusters and their status.
2 Click the ⋮ menu against a v1 cluster that you want to transform, and click Transform Cluster.
Note After you transform a v1 Workload cluster to v2 Workload cluster, you cannot perform
any v1 operations on it.
3 Click Transform.
Results
VMware Telco Cloud Automation converts the v1 Workload Cluster to v2 Workload cluster. You
can now perform v2 life-cycle management operations on it.
Prerequisites
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster creation operation that you want
to stop.
4 Click Abort.
Results
VMware Telco Cloud Automation rolls back the progress of Kubernetes Cluster creation
operation and deletes all the deployed nodes. After VMware Telco Cloud Automation stops the
cluster creation operation, you cannot deploy the same cluster again.
Prerequisites
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 In the Cluster Configuration tab, add a CNI, CSI, tool, AVI Kubernetes Operator (AKO), or
syslog server, and click Save.
n You cannot edit the Storage Class name in the vSphere-CSI (NFS Client is also not
supported).
n You can add CNI, CSI, or Tools, but cannot remove them.
n You cannot enable Multi-Zone on an existing Kubernetes cluster that is upgraded from
previous VMware Telco Cloud Automation versions. It is also not supported on a newly
created Workload cluster from a Management cluster that is upgraded from a previous
VMware Telco Cloud Automation version.
n You cannot enable or disable multi-zone if any persistent volumes (PV) provisioned
through vSphere CSI are present inside Kubernetes cluster.
Results
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 In the Master Nodes tab, scale down or scale up the Master nodes, add or remove labels,
and click Save.
Results
You have successfully edited the Master node configuration of your Kubernetes cluster instance.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 To edit a node pool, select the node pool from the Worker Nodes tab, and click Edit.
6 Modify the value of Replicas to scale down or scale up the Worker nodes. You can also add
labels and use these labels as node selectors when instantiating the Network Functions.
7 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.
8 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.
a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.
b Set unhealthy conditions for the nodes. If any of these conditions are met, Machine Health
Check considers these nodes as unhealthy and starts the remediation process.
9 Click Update.
Results
You have successfully edited the Worker node configuration of a Kubernetes cluster instance in
your node pool.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster where you want to add the node
pool.
n Name - Enter the name of the node pool. The node pool name cannot be greater than 36
characters.
n Storage - Select the storage size. Minimum disk size required is 50 GB.
n vSphere Cluster (Optional) - To use a different vSphere Cluster, select the vSphere
cluster from here.
n Resource Pool (Optional) - To use a different resource pool, select the resource pool from
here.
n Datastore (Optional) - To use a different datastore, select the datastore from here.
n Labels - Add key-value pair labels to your nodes. You can use these labels as node
selectors when instantiating a network function.
n Label - Enter the labels to group the networks. Networks use these labels to
provide network inputs during a cluster deployment. Add additional labels for network
types such as IPvlan, MacVLAN, and Host-Device. Meaningful network labels such
as N1, N2, N3, and so on, help users provide the correct network preferences
during deployment. It is mandatory to include a management interface label. SR-IOV
interfaces are added to the Worker nodes when deploying the network functions.
Apart from the management network, which is always the first network, the other
labels are used as interface names inside the Worker nodes. For example, when you
deploy a cluster using the template with the labels MANAGEMENT, N1, and N2, the
Worker nodes interface names are eth0, N1, N2. To add more labels, click Add.
n Network - Select the network that you want to associate with the label.
n (Optional) MTU - Provide the MTU value for the network. The minimum MTU value is
1500. The maximum MTU value depends on the configuration of the network switch.
n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.
n CPU Manager Policy - Set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes, see
the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/
cpu-management-policies/.
Note For CPU-intensive workloads, use Static as the CPU Manager Policy. A sample pod
specification that benefits from the Static policy and the node pool labels is shown after this
procedure.
n Configure Machine Health Check - Click the corresponding button to enable the machine
health check. When you enable the Configure Machine Health Check, you can configure
the health check related options under Advanced Configuration. For details on Machine
Health Check, see Machine Health Check
n Under Advanced Configuration, you can configure the Node Start Up Timeout duration
and set the unhealthy conditions.
Note Node Start Up Timeout is applicable when the Machine Health Check is enabled.
1 (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check
to wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.
2 Set unhealthy conditions for the nodes. If the nodes meet any of these conditions,
Machine Health Check considers them as unhealthy and starts the remediation
process.
n To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes
nodes, select Use Linked Cloning for Cloning the VMs.
5 Click Add.
Results
You have successfully added the node pool to your Workload cluster.
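The node pool labels and the Static CPU Manager Policy come together at the workload level. The
following is a minimal sketch of a pod that requests exclusive CPU cores (Guaranteed QoS with an
integer CPU request) and is scheduled onto a specific node pool through a node selector. The label
value, image, and resource sizes are placeholders, not values mandated by VMware Telco Cloud
Automation.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned-app
spec:
  nodeSelector:
    "telco.vmware.com/nodepool": "np1"   # placeholder node pool label
  containers:
  - name: app
    image: registry.example.com/app:1.0  # placeholder image
    resources:
      requests:
        cpu: "4"          # integer CPU request equal to the limit gives Guaranteed QoS
        memory: 8Gi
      limits:
        cpu: "4"          # with the Static policy, these cores are dedicated to the container
        memory: 8Gi
When the node pool uses the Static CPU Manager Policy, the kubelet assigns this container exclusive
CPU cores because the pod is in the Guaranteed QoS class and requests a whole number of CPUs.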
Prerequisites
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster and select Edit Worker Node
Configuration.
4 Select the node pool, click Delete, and confirm the operation.
Results
Prerequisites
Ensure that your password meets the minimum security requirements listed in the interface.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster for which you want to change
the password.
5 In the Change Password pop-up window, enter your new password and confirm it.
Results
Note It might take some time for the system to reflect the changed password.
This option is helpful when you are deploying multiple Kubernetes clusters with similar
configurations.
Procedure
2 Navigate to Infrastructure > Caas Infrastructure and select the Kubernetes cluster.
3 Click the Options (⋮) symbol against the Kubernetes cluster and select Copy Spec and
Deploy New.
Results
VMware Telco Cloud Automation automatically generates a new cluster template with the
configuration of the Kubernetes cluster that you copied.
Note This operation is available from VMware Telco Cloud Automation version 1.9 onwards.
Note The Retry option retries the cluster creation operation from the point of failure. This
means that you cannot edit the properties and recreate the cluster from the beginning using Retry.
Procedure
3 To retry a failed Kubernetes Cluster creation operation, click the Options symbol against the
failed Kubernetes cluster and click Retry.
Results
What to do next
To edit a cluster and then retry the cluster creation operation, you must delete the cluster and
recreate it.
Cluster Details
You can view the details of the cluster and the health of the various components associated
with the cluster.
n The Components section provides details of the various components and their health:
Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays that health status on the Telco Cloud Automation user interface.
n Unhealthy: The component is not working correctly and has faults.
Note
n When the cluster is under upgrade or under creation, the status may show
Unknown.
You can click a component to view the details of the Pods associated with it.
Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays that health status on Telco Cloud Automation user interface. Kubernetes maintains
the conditions and details.
n Details - Shows the Namespace, Node name, Creation timestamp, and IP associated with
the Pod.
n Conditions - Shows the status of Initialized, Ready, Containers Ready, and POD
Scheduled conditions.
n Containers - Shows the Name, State, and Started At time of the container.
Cluster Configuration
The Cluster Configuration tab displays information about the Kubernetes version of the cluster,
upgrade history, its CNI and CSI configurations, any tools such as Helm associated with the
cluster, syslog server details, and Harbor repository details. To edit any of the configuration
information, click Edit.
For the Management Cluster, you can view the name, version, and status of the nodeconfig-
operator and vmconfig-operator under Tools on the Cluster Configuration tab. To view more
details of a tool, click the name of the operator.
n nodeconfig-operator
n Details - Shows the version and the health status of the operator.
n Pods - Shows the Name, Created, Ready Container, and Phase of the Pod. To
view more details of a pod, click the name of the pod.
n Containers - Shows the Name, State, and Started At time of the container.
n vmconfig-operator
n K8s Resources - Shows the Namespace, Created, Replica, and Ready Replicas of the
Deployment.
n Pods - Shows the Name, Created, Ready Container, and Phase of the Pod. To
view more details of a pod, click the name of the pod.
n Containers - Shows the Name, State, and Started At time of the container.
n Details - You can view the hardware details like CPU, Storage, Memory, and Replicas, along
with the name of the node. You can also view the status of the node, that is, whether the node is
active or inactive.
n VMs - You can view various details of the VMs like Memory pressure, Disk pressure, PID
pressure, and Ready State. You can also click the VMs to view more details of that VM.
n Node Details - Shows the hardware and operating system related details of the VM. This
includes:
n Architecture
n Kernel Version
n Kubelet Version
n OS Image
n Operating System
n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure, PID
Pressure, and Ready State of the node pool.
n Addresses - Shows the Hostname, InternalIP and ExternalIP associated with the VM.
Worker Nodes
The Worker Nodes tab displays the existing node pools of a Kubernetes cluster. To view more
details of the node pool such as its name, CPU size, memory size, storage size, number of
replicas, node customization details, and its status, click the name of the node pool. When you
click the name of the node pool, you can view the following details:
n Details - You can view the hardware details like CPU, Storage, Memory, and Replicas. You
can also view the status of the node pool, that is, whether the node pool is active or inactive.
n Labels - You can view the various labels associated with the node pool.
n Network - You can view the network details of the node pool.
n CPU Manager Policy - You can view the type of CPU manager policy associated with the
node pool.
n VMs - You can view various details of the VMs like Memory pressure, Disk pressure, PID
pressure, and Ready State. You can also click the VMs to view the details of that VM.
n NodePool Details
n Node Details - Shows the hardware and the operating system related details of the
VM. This includes:
n Architecture
n Kernel Version
n Kubelet Version
n OS Image
n Operating System
n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.
n Addresses - Shows the Hostname, InternalIP, and ExternalIP associated with the VM.
n Node Customizations - Shows the Kernel and Network details after node customization.
You can also add a node pool to the cluster, edit the number of replicas on a node pool, and
delete a node pool from here.
Tasks
The Tasks tab displays the progress of the cluster-level tasks and their status.
n Management Cluster - Displays the progress of Management cluster tasks along with the
progress of all the Workload cluster tasks that the cluster manages. It also displays the node
pool tasks of all the Workload clusters.
n Workload Cluster - Displays the progress of the Workload cluster tasks along with the
progress of its node pool tasks.
You can apply filters to view the progress of specific operations and specific clusters.
You can enable Machine Health Check and define the unhealthy conditions for the controller to
monitor when creating the node pool cluster template. You can also edit the Machine Health
Check conditions on an existing node pool under a Workload cluster. Machine Health Check
monitors the node pools for any unhealthy nodes and tries to remediate by recreating them.
For example, set the maximum duration a node can remain in the not ready state to 15 minutes
after which, the Machine Health Check controller triggers a remediation. For more details on
machine health check, see https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/
healthchecking.html.
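VMware Telco Cloud Automation creates and manages the health check for you from the UI, so you
do not have to author the object yourself. For reference only, the following is a minimal sketch of
the equivalent Cluster API MachineHealthCheck resource; the name, namespace, cluster name,
selector label, and the 20-minute startup timeout are placeholders, and the API version depends on
the Cluster API release in use.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: np1-health-check            # placeholder name
  namespace: workload-cluster-ns    # placeholder namespace
spec:
  clusterName: w1                   # placeholder cluster name
  nodeStartupTimeout: 20m           # corresponds to the Node Start Up Timeout field
  selector:
    matchLabels:
      nodepool: np1                 # placeholder label selecting the node pool machines
  unhealthyConditions:              # correspond to the unhealthy conditions set in the UI
  - type: Ready
    status: Unknown
    timeout: 15m
  - type: Ready
    status: "False"
    timeout: 15m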
For steps to enable and configure Machine Health Check when creating a Workload cluster
template, see Create a v1 Workload Cluster Template.
For steps to configure Machine Health Check on an existing node pool, see Edit a Kubernetes
Cluster Node Pool.
There may be instances when you want to power down a virtual machine to perform certain
maintenance activities. To avoid Machine Health Check remediating during the down time,
you can place the node pools in Maintenance Mode. For steps to place the Worker node in
Maintenance Mode, see Place Nodes in Maintenance Mode.
Procedure
3 Click the Kubernetes cluster that requires the Worker nodes to be placed in Maintenance
Mode.
5 Click the Options (⋮) symbol against the node and select Enter Maintenance Mode.
Results
The node is placed in Maintenance Mode and the Machine Health Check controller does not
remediate if there is a system down time.
Example
When you place a Worker node in Maintenance Mode and power it off, it does not power on until
you remove it from Maintenance Mode.
What to do next
To remove the Worker node from Maintenance Mode, click the Options (⋮) symbol against the
node and select Exit Maintenance Mode.
Upgrading Add-Ons
You can now upgrade the add-on operators to a later version from VMware Telco Cloud
Automation.
In a scenario where you upgrade VMware Telco Cloud Automation to a newer patch release
but the underlying VMware Tanzu Kubernetes Grid cluster remains the same, you can upgrade
only the add-ons. Upgrade Management cluster operators such as nodeconfig-operator and
vmconfig-operator, and Workload cluster operators such as CNIs and CSIs to their later versions.
n You cannot perform operations on a Workload cluster that uses a Management cluster with
earlier add-ons.
Implications of Not Upgrading Add-ons in the Workload Cluster
If the operators are not the latest versions, the corresponding workload clusters display a
warning for upgrading the add-ons. To upgrade workload cluster add-ons individually, use the
Upgrade Add-Ons option.
Upgrade Add-Ons
Upgrade the add-ons in a Management cluster or a Workload cluster.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.
Results
Add-On Failures
When you provide a wrong input for an add-on when creating a cluster, the cluster creation
operation does not fail immediately.
For example, if you provide a wrong server IP address for a CSI add-on, the cluster creation does
not fail. After the cluster creation operation is completed, a warning message is displayed against
the Kubernetes cluster listing the add-ons that failed during the operation. You can then edit the
Kubernetes cluster and update the CSI add-on details.
vSphere-CSI
Option Description
Storage Class Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class
with this name is created in the Kubernetes cluster.
NFS-Client
Option Description
Storage Class Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class
with this name is created in the Kubernetes cluster.
NFS Server Address For an IPv4 cluster, enter the IPv4 address or FQDN of the
NFS Server. For an IPv6 cluster, enter the FQDN.
Path Enter the server IP address and the mount path of the NFS
client. Ensure that the NFS server is reachable from the
cluster. The mount path must also be accessible to read
and write.
Harbor
If you have already registered a Harbor, click Select Registered Harbor and select the
Harbor from the list. Otherwise, click Add New Harbor and provide the following details:
Option Description
Helm
This add-on has no configuration.
Multus
Option Description
Log File Path Path where you want to store the log files.
System Settings
Option Description
With this upgrade, users now have better control over cluster failures. You can now view the
status of all the components at a granular level and act on a failure while the cluster creation is
in progress. Also, during the cluster creation process, you can edit or delete a node pool or an
add-on at any point if there is an error.
For example, if an add-on IP address is incorrect, you can view the error immediately, edit the
IP address, and provide the correct one while the cluster creation is in progress. Even though
VMware Telco Cloud Automation is deprecating v1 clusters, you can still perform cluster life-cycle
operations on v1 clusters using the new user interface. However, to access the new features of
the v2 user interface such as granular updates, new add-ons, stretched clusters, and so on, you
must transform your clusters to V2 APIs. You can transform v1 Workload clusters to v2 Workload
clusters using the Transform Cluster option in the VMware Telco Cloud Automation UI. For more
information about transforming a v1 Workload cluster to v2 Workload cluster, see Transform v1
Workload Cluster to v2 Workload Cluster.
Note For this release, you cannot deploy a v2 Management cluster or perform any v2 life-cycle
management operations on Management clusters.
Anti-affinity Rules
Anti-affinity rules for Kubernetes worker nodes are enabled in VMware Telco Cloud Automation by default.
Anti-affinity is specific to workload clusters and ensures that the deployed nodes are spread
across different hosts. The following sample Deployment additionally spreads its pod replicas across
worker nodes on different ESXi hosts by using a topology spread constraint:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: node.cluster.x-k8s.io/esxi-host
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx
      nodeSelector:
        "telco.vmware.com/nodepool": "npg-1"
      containers:
      - name: nginx-server
        image: harbor-repo.vmware.com/ecp_snc/nginx:1.23.1
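After applying a Deployment like the preceding sample, you can verify the placement with generic
kubectl commands; this assumes the esxi-host label used as the topology key is present on the nodes.
# Show which ESXi host each Kubernetes node runs on (the label used as the topology key)
kubectl get nodes -L node.cluster.x-k8s.io/esxi-host

# Show which node each application pod landed on
kubectl get pods -l app=nginx -o wide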
By default, workload clusters on vSphere and standalone management clusters follow anti-
affinity rules to deploy node pool workers and control plane nodes on different ESXi hosts.
The following diagram illustrates the node placements when the anti-affinity rules are enabled.
[Figure: A workload cluster with Control Plane nodes and Node Pool1 Worker nodes placed across different ESXi hosts.]
Note When you upgrade a v2 workload cluster to the latest version, the certificate renewal of
the cluster is automatically enabled and the number of days defaults to 90.
The following table lists the Kubernetes upgrade compatibility for the Workload cluster when
upgrading from VMware Telco Cloud Automation.
[Table: Kubernetes upgrade compatibility by VMware Telco Cloud Automation version, for existing Kubernetes versions v1.22.17, v1.23.16, and v1.24.10.]
n For information about backward compatibility, see CaaS Upgrade Backward Compatibility.
Prerequisites
n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.
n You must have created a Management cluster or uploaded a Workload cluster template.
n A network must be present with a DHCP range and a static IP of the same subnet.
n For region: the vSphere data center must have tags attached for the selected category.
n For zone: the vSphere cluster or the hosts under the vSphere cluster must have tags attached for the
selected category. Ensure that the vSphere cluster and the hosts under the vSphere cluster do not
share the same tags.
Procedure
4 In the Workload Cluster Deployment wizard, enter information for each of the sub-categories:
5 1. Destination Info
n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.
n Destination Cloud - Select a cloud on which you want to deploy the Kubernetes cluster.
Advanced Options - Provide the secondary cloud information here. These options are
applicable when creating stretch clusters.
n (Optional) Secondary Cloud - Select the secondary cloud. It is required for stretched
cluster creation.
n (Optional) NF Orchestration VIM - Provide the details of the VIM. VMware Telco Cloud
Automation uses this VIM and associated Control Planes for NF life cycle management.
6 Click Next.
7 2. Cluster Info
n Name - Enter a name for the Workload cluster. The cluster name must be compliant with
DNS hostname requirements as outlined in RFC-952 and amended in RFC-1123.
n TCA BOM Release - The TCA BOM Release file contains information about the Kubernetes
version and add-on versions. You can select multiple BOM release files.
Note After you select the BOM release file, the Security Options section is made
available.
n Proxy Repository Access - Available only when the selected management cluster uses a
proxy repository. Select the proxy repository from the drop-down list.
n Airgap Repository Access - Available only when the selected management cluster uses an
airgap repository. Select the airgap repository from the drop-down list.
n Cluster (pods) CIDR - Enter the CIDR range for the cluster pods. VMware Telco Cloud Automation uses this
CIDR pool to assign IP addresses to pods in the cluster.
n Service CIDR - Enter the CIDR range for the cluster services. VMware Telco Cloud Automation uses this CIDR
pool to assign IP addresses to the services in the cluster.
n Enable Autoscaler - Click the toggle button to activate the autoscaler feature.
The autoscaler feature automatically controls the replica count on the node pool by
increasing or decreasing the replica counts based on the workload. If you activate this
feature for a particular cluster, you cannot deactivate it after the deployment. When you
activate the autoscaler feature, the following fields are displayed:
Note The values in these fields are automatically populated from the cluster. However,
you can edit the values.
n Min Size - Sets a minimum limit to the number of worker nodes that autoscaler should
decrease.
n Max Size - Sets a maximum limit to the number of worker nodes that autoscaler
should increase.
n Max Node - Sets a maximum limit to the number of worker and control plane nodes
that autoscaler should increase. The default value is 0.
n Max Node Provision Time - Sets the maximum time that autoscaler should wait for
the nodes to be provisioned. The default value is 15 minutes.
n Delay After Add - Sets the time limit for the autoscaler to start the scale-down
operation after a scale-up operation. For example, if you specify the time as 10
minutes, autoscaler resumes the scale-down scan after 10 minutes of adding a node.
n Delay After Failure - Sets the time limit for the autoscaler to restart the scale-down
operation after a scale-down operation fails. For example, if you specify the time as 3
minutes and there is a scale-down failure, the next scale-down operation starts after 3
minutes.
n Delay After Delete - Sets the time limit for the autoscaler to start the scale-down
operation after deleting a node. For example, if you specify the time as 10 minutes,
autoscaler resumes the scale-down scan after 10 minutes of deleting a node.
n Unneeded Time - Sets the time limit for the autoscaler to scale-down an unused node.
For example, if you specify the time as 10 minutes, any unused node is scaled down
only after 10 minutes.
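For reference, these UI fields broadly correspond to options of the upstream Kubernetes Cluster
Autoscaler. The exact wiring inside VMware Telco Cloud Automation is not exposed, so treat this
mapping as an assumption; the example values are the ones used in the field descriptions above.
# Assumed mapping of the UI fields to upstream Cluster Autoscaler options:
#   Min Size / Max Size       -> per node group bounds (for example, --nodes=1:5:<node-pool>)
#   Max Node                  -> --max-nodes-total
#   Max Node Provision Time   -> --max-node-provision-time=15m
#   Delay After Add           -> --scale-down-delay-after-add=10m
#   Delay After Failure       -> --scale-down-delay-after-failure=3m
#   Delay After Delete        -> --scale-down-delay-after-delete=10m
#   Unneeded Time             -> --scale-down-unneeded-time=10m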
8 Click Next.
9 Security Options
n Click the Enable toggle button to apply the customized audit configuration. Otherwise,
the default audit configuration is applied to the workload cluster.
n Click the POD Security Default Policy toggle button to apply the POD security policies to
the workload cluster.
n POD Security Standard Audit: Policy violation adds an audit annotation to the event
recorded in the audit log, but does not reject the POD.
n POD Security Standard Warn: Policy violation displays an error message on the UI,
but does not reject the POD.
Select one of the following options from the preceding drop-down lists:
n Restricted: A fully restrictive policy that follows the current POD security
hardening best practices for providing permissions.
n To configure Control Plane node placement, click the Settings icon in the Control Plane
Node Placement table.
VM Placement
n Resource Pool - Select the default resource pool on which the Control Plane node is
deployed.
n VM Folder - Select the virtual machine folder on which the Control Plane node is
placed.
n Datastore - Select the default datastore for the Control Plane node.
VM Size
n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.
n Cores Per Socket (Optional) - Enter the number of cores per socket if you require
more than 64 cores.
Network
Labels
n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.
Advanced Options
n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.
n Certificate Expiry Days - Specify the number of days for automatic certificate renewal
by TKG before its expiry. By default, the certificate expires after 365 days. If you
specify a value in this field, the certificate is automatically renewed before the set
number of days. For example, if you specify the number of days as 50, the certificate
is renewed 50 days before its expiry, which is after 315 days.
The default value is 90 days. The minimum number of days you can specify is 7 and
the maximum is 180.
Note You cannot edit the number of days after you deploy the cluster.
n Click Apply.
11 Add-Ons
a From the Select Add-On wizard, select the add-on and click Next.
12 Click Next.
13 Node Pools
n A node pool is a set of nodes that have similar properties. Pooling is useful when
you want to group the VMs based on the number of CPUs, storage capacity, memory
capacity, and so on. You can add one node pool to a Management cluster and multiple
node pools to a Workload cluster, with different groups of VMs. To add a Worker node
pool, click Add Worker Node Pool.
VM Placement
n Resource Pool - Select the default resource pool on which the node pool is deployed.
n VM Folder - Select the virtual machine folder on which the node pool is placed.
n Enable Autoscaler - This field is available only if autoscaler is enabled for the
associated cluster. At the node level, you can activate or deactivate autoscaler based
on your requirement.
The following field values are automatically populated from the cluster.
n Min Size (Optional) - Sets a minimum limit to the number of worker nodes that
autoscaler should scale down. Edit the value, as required.
n Max Size (Optional) - Sets a maximum limit to the number of worker nodes that
autoscaler should scale up. Edit the value, as required.
Note
n Using autoscaler on a cluster does not automatically change its node group
size. Therefore, changing the maximum or minimum size does not scale up or
scale down the cluster size. When you are editing the autoscaler-configured
maximum size of the node pool, ensure that the maximum size limit of the
node pool is less than or equal to the current replica count.
n You can view the scale-up and scale-down events under the Events tab of the
Telco Cloud Automation portal.
VM Size
n Number of Replicas - Number of node pool VMs to be created. The ideal number of
replicas for production or staging deployment is 3.
Note The Number of Replicas field is unavailable if autoscaler is enabled for the
node.
n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.
n Cores Per Socket (Optional) - Enter the number of cores per socket if you require
more than 64 cores.
Network
n ADD NETWORK DEVICE - Click this button to add a dedicated NFS interface to the
node pool, select the interface, and then enter the following:
n Interface Name - Enter the interface name as tkg-nfs to reach the NFS server.
Labels
n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.
Advanced Options
n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.
n Click Apply.
Results
The cluster details page displays the status of the overall deployment and the deployment status
of each component.
Prerequisites
Procedure
4 In the Create Workload Cluster Template wizard, enter information for each of the sub-
categories:
5 Template Info
6 1. Destination Info
n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.
n Destination Cloud - Select a cloud on which you want to deploy the Kubernetes cluster.
Advanced Options - Provide the secondary cloud information here. These options are
applicable when creating stretch clusters.
n (Optional) Secondary Cloud - Select the secondary cloud. It is required for stretched
cluster creation.
n (Optional) NF Orchestration VIM - Provide the details of the VIM. VMware Telco Cloud
Automation uses this VIM and associated Control Planes for NF life cycle management.
7 Click Next.
8 2. Cluster Info
n TCA BOM Release - The TCA BOM Release file contains information about the Kubernetes
version and add-on versions. You can select multiple BOM release files.
n Proxy Repository Access - Available only when the selected management cluster uses a
proxy repository. Select the proxy repository from the drop-down list.
n Airgap Repository Access - Available only when the selected management cluster uses an
airgap repository. Select the airgap repository from the drop-down list.
n Cluster (pods) CIDR - Enter the CIDR range for the cluster pods. VMware Telco Cloud Automation uses this
CIDR pool to assign IP addresses to pods in the cluster.
n Service CIDR - Enter the CIDR range for the cluster services. VMware Telco Cloud Automation uses this CIDR
pool to assign IP addresses to the services in the cluster.
n Enable Autoscaler - Click the toggle button to activate the autoscaler feature.
The autoscaler feature automatically controls the replica count on the node pool by
increasing or decreasing the replica counts based on the workload. If you activate this
feature for a particular cluster, you cannot deactivate it after the deployment. When you
activate the autoscaler feature, the following fields are displayed:
Note The values in these fields are automatically populated from the cluster. However,
you can edit the values.
n Min Size - Sets a minimum limit to the number of worker nodes that autoscaler should
decrease.
n Max Size - Sets a maximum limit to the number of worker nodes that autoscaler
should increase.
n Max Node - Sets a maximum limit to the number of worker and control plane nodes
that autoscaler should increase. The default value is 0.
n Max Node Provision Time - Sets the maximum time that autoscaler should wait for
the nodes to be provisioned. The default value is 15 minutes.
n Delay After Add - Sets the time limit for the autoscaler to start the scale-down
operation after a scale-up operation. For example, if you specify the time as 10
minutes, autoscaler resumes the scale-down scan after 10 minutes of adding a node.
n Delay After Failure - Sets the time limit for the autoscaler to restart the scale-down
operation after a scale-down operation fails. For example, if you specify the time as 3
minutes and there is a scale-down failure, the next scale-down operation starts after 3
minutes.
n Delay After Delete - Sets the time limit for the autoscaler to start the scale-down
operation after deleting a node. For example, if you specify the time as 10 minutes,
autoscaler resumes the scale-down scan after 10 minutes of deleting a node.
n Unneeded Time - Sets the time limit for the autoscaler to scale-down an unused node.
For example, if you specify the time as 10 minutes, any unused node is scaled down
only after 10 minutes.
9 Click Next.
n To configure Control Plane node placement, click the Settings icon in the Control Plane
Node Placement table.
VM Placement
n Resource Pool - Select the default resource pool on which the Control Plane node is
deployed.
n VM Folder - Select the virtual machine folder on which the Control Plane node is
placed.
n Datastore - Select the default datastore for the Control Plane node.
VM Size
n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.
n Cores Per Socket (Optional) - Enter the number of cores per socket if you require
more than 64 cores.
Network
Labels
n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.
Advanced Options
n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.
n Certificate Expiry Days - Specify the number of days for automatic certificate renewal
by TKG before its expiry. By default, the certificate expires after 365 days. If you
specify a value in this field, the certificate is automatically renewed before the set
number of days. For example, if you specify the number of days as 50, the certificate
is renewed 50 days before its expiry, which is after 315 days.
The default value is 90 days. The minimum number of days you can specify is 7 and
the maximum is 180.
Note You cannot edit the number of days after you deploy the cluster.
n Click Apply.
11 Add-Ons
a From the Select Add-On wizard, select the add-on and click Next.
12 Click Next.
13 Node Pools
n A node pool is a set of nodes that have similar properties. Pooling is useful when
you want to group the VMs based on the number of CPUs, storage capacity, memory
capacity, and so on. You can add one node pool to a Management cluster and multiple
node pools to a Workload cluster, with different groups of VMs. To add a Worker node
pool, click Add Worker Node Pool.
VM Placement
n Resource Pool - Select the default resource pool on which the node pool is deployed.
n VM Folder - Select the virtual machine folder on which the node pool is placed.
n Enable Autoscaler - This field is available only if autoscaler is enabled for the
associated cluster. At the node level, you can activate or deactivate autoscaler based
on your requirement.
The following field values are automatically populated from the cluster.
n Min Size (Optional) - Sets a minimum limit to the number of worker nodes that
autoscaler should scale down. Edit the value, as required.
n Max Size (Optional) - Sets a maximum limit to the number of worker nodes that
autoscaler should scale up. Edit the value, as required.
Note
n Using autoscaler on a cluster does not automatically change its node group
size. Therefore, changing the maximum or minimum size does not scale up or
scale down the cluster size. When you are editing the autoscaler-configured
maximum size of the node pool, ensure that the maximum size limit of the
node pool is less than or equal to the current replica count.
n You can view the scale-up and scale-down events under the Events tab of the
Telco Cloud Automation portal.
VM Size
n Number of Replicas - Number of node pool VMs to be created. The ideal number of
replicas for production or staging deployment is 3.
Note The Number of Replicas field is unavailable if autoscaler is enabled for the
node.
n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.
n Cores Per Socket (Optional) - Enter the number of cores per socket if you require
more than 64 cores.
Network
n ADD NETWORK DEVICE - Click this button to add a dedicated NFS interface to the
node pool, select the interface, and then enter the following:
n Interface Name - Enter the interface name as tkg-nfs to reach the NFS server.
Labels
n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.
Advanced Options
n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.
n Click Apply.
Overview
You can view the details of the cluster and the health of the various components associated
with the cluster.
n Management Cluster URL - The URL of the management cluster API server.
n The Configuration and Control Plane section provides details of various components and
their health:
Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays that health status on Telco Cloud Automation user interface.
n Unhealthy: The component is not working correctly and has faults.
Note
n When the cluster is under upgrade or under creation, the status may show
Unknown.
n The Pods information, which contains the pod Name, Created, Ready Containers, and
Phase.
Note Telco Cloud Automation obtains the health status directly from Kubernetes
and displays that health status on Telco Cloud Automation user interface. Kubernetes
maintains the conditions and details.
n Details - Shows the Namespace, Node name, Creation Timestamp, and IP associated
with the Pod.
n Conditions - Shows the status of Initialized, Ready, Containers Ready, and POD
Scheduled conditions.
n Containers - Shows the Name, State, and Started At time of the container.
n The Node Pools section provides details of Node Pools and their health:
n It shows the K8s resource Name, Kind, Namespace, Created, Desired, Ready,
Replica, Ready Replicas, etc.
n The Conditions section provides the condition details like Type, Status, Reason, Severity,
Message, and Last Transition Time. You can click Show More to view the CRs of
TcaKubernetesCluster and TcaKubeControlPlane.
n The Cluster Global Configuration section provides the global configuration of the cluster:
n Details - Shows the cluster details like CNI Type, Endpoint IP, Pods, Services, TCA Bom
Release Reference, NF Orchestration VIM.
n Cloud Providers - Shows the cloud providers details like VIM name, Datacenter, Type.
n The Control Plane Configuration section provides the details of control plane nodes:
n Details - Shows the Control Plane hardware details like:Name, CPU, Memory, Storage,
Replicas, Folder, Resource Pool, Cloud Name, Datacenter, Datastore, TCA Bom Release
Reference, Clone Mode, Template.
n Network - Shows the network details like Network Name, MTU, DHCP4.
n Nodes - Shows the various details of the VMs like Memory pressure, Disk pressure, PID
pressure, Ready State and K8S version.
n Node Details - Shows the hardware and operating system related details of the VM,
which contain the Architecture, Kernel Version, Kubelet Version, OS Image, Container
Runtime Version, Kube Proxy Version, and Operating System.
n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.
n Addresses - Shows the Hostname, InternalIP and ExternalIP associated with the VM.
Node Pools
The Node Pools tab displays the existing node pools of a Kubernetes cluster. To view more
details of the node pool such as its name, CPU size, memory size, storage size, number of
replicas, node customization details, and its status, click the name of the node pool. You can then
view the following details:
n The Conditions section provides the condition details like Type, Status, Reason, Severity,
Message, and Last Transition Time. You can click Show More to view the CRs of
TcaNodePool, NodePolicy, and NodePolicyMachineStatus.
n The Details section provides the hardware details of the node pool. This contains Name,
Replicas, CPU, Memory, Storage, Clone Mode, Cloud, Datacenter, Resource Pool, VM
Folder, Datastore, VM Template, Manage Network, CPU Manager Policy, Reservation for
Kubernetes Processes, Reservation for System Processes, TCA Bom Release Reference,
Domain Name Servers.
n The Labels section provides the various labels associated with the node pool.
n The Machine Health Check section provides the details of the Machine Health Check.
n The Nodes section provides various details of the VMs like VM Name, IP, Memory pressure,
Disk pressure, PID pressure, Ready State and K8S version.
n The Node Pool Details tab shows the Node Details, Conditions, Addresses, Labels, and
Allocatable/Capacity.
n Node Details - Shows the hardware and operating system related details of
the VM, which contain the Architecture, Kernel Version, Kubelet Version, OS Image,
Container Runtime Version, Kube Proxy Version, and Operating System.
n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.
n Addresses - Shows the Hostname, InternalIP, and ExternalIP associated with the VM.
n The Node Customizations tab shows the details of node customizations, which
contain the Status, NUMA Alignment, Kernel, Network, Tuned Profile, File Injection, and so on.
n The Events tab shows the list of the events performed, which contains the Message,
Type, Owner, Resource Name, Resource Type, Reason, Count, First Occurrence, and Last
Occurrence.
Note You can apply filters to view the details of specific Node Pool.
Add-Ons
The Add-Ons tab displays the existing Add-Ons of a Kubernetes cluster.
n All - This table lists all add-ons and their details, such as the Name, Type, Status, Revision, and
Created.
n Add-On Categories - Add-Ons are also divided into several categories, and you can see the
corresponding add-on list under each category table. Categories include:
n Single Add-On details - Click the add-on name to view the K8s resources details. It shows the
K8s resource Name, Kind, Namespace, Created, Desired, Ready, Replica, Ready Replicas,
etc.
Note You can apply filters to view the details of specific Add-On.
Events
The Events tab displays the progress of the cluster-level events and their status.
n The Events table shows the list of the events performed, which includes the Message,
Type, Owner, Resource Name, Resource Type, Reason, Count, First Occurrence, and Last
Occurrence.
Note You can apply filters to view the details of specific Event.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 In the Destination Info step of the Configuration tab, you can select a new cloud under the Advanced
Options.
6 In the Control Plane Info step, click the configuration icon to the right of the Control Plane
row.
7 Click the configure icon to open the Control Plane Node Info dialog.
8 Edit the Number of replicas to scale down or scale up the Control Plane nodes.
11 After the Control Plane dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
Results
You have successfully edited the Kubernetes Cluster and its Control Plane configuration.
Prerequisites
This option is helpful when you are deploying multiple Kubernetes clusters with similar
configurations.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.
5 In the Configuration tab, all inputs default to the values of the current workload cluster. Edit the
Destination Info, Cluster Info, Control Plane Info, Add-Ons, and Node Pools configuration, click
Next until you reach Ready to Deploy, and then click Deploy.
Results
Note This operation is available from VMware Telco Cloud Automation version 2.1 onwards.
Note The Retry option retries the cluster creation operation from the point of failure. If you want
to change the configuration of the cluster and retry the creation, you must delete the old cluster
and recreate it.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to retry.
Results
Prerequisites
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster creation operation that you want
to stop.
4 Click Abort.
Results
VMware Telco Cloud Automation rolls back the Kubernetes cluster creation operation and
deletes all the deployed nodes. After VMware Telco Cloud Automation stops the cluster
creation operation, you cannot deploy the same cluster again.
Delete a Cluster
You can delete the Kubernetes Workload cluster.
Prerequisites
Procedure
Results
Procedure
5 The Node Pool Details dialog appears. Edit the node pool configuration and click Add.
n Name - Enter the name of the node pool. The node pool name cannot be greater than 36
characters.
n Resource Pool - Select the resource pool for the node pool.
n Cores per Socket (Optional) - Select the number of cores per socket in the node pool.
n Disk Size - Select the disk size. Minimum disk size required is 50 GB.
n Labels - Add key-value pair labels to your nodes. These labels can be used as node selectors
when instantiating a network function.
n Network - Select the network that you want to associate with the label.
n (Optional) MTU - Provide the MTU value for the network. The minimum MTU value is
1500. The maximum MTU value depends on the configuration of the network switch.
n (Optional) DNS - Enter a valid DNS IP address as Domain Name Servers. These DNS
servers are configured in the guest operating system of each node in the cluster. You
can override this option on the Master node and each node pool of the Worker node.
Multiple DNS servers can be separated by commas.
n Labels - Add key-value pair labels to your nodes. These labels can be used as node selectors
when instantiating a network function.
n Select the Clone Mode. Enabling linked clone mode uses the vSphere Linked Clone feature
to create the machines for the Kubernetes nodes.
n To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.
n (Optional) Enter the Node Start Up Timeout time duration for Machine Health
Check to wait for a node to join the cluster. If a node does not join during the
specified time, Machine Health Check considers it unhealthy.
n Kubeadmin Config Template - Set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes,
see the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-
cluster/cpu-management-policies/.
Note For CPU-intensive workloads, use Static as the CPU Manager Policy.
6 Click Apply.
7 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
Results
You have successfully added the node pool of a Kubernetes cluster instance.
Procedure
4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want
to edit.
5 Click Edit. The Node Pool Details dialog appears.
6 Edit the Number of replicas to scale down or scale up the node pool nodes.
7 Click Add Label or Remove to edit the node labels, which can be used as node selectors
when instantiating the network functions.
9 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.
a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.
10 Click Apply.
11 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
Results
You have successfully edited the Node Pool configuration of a Kubernetes cluster instance.
Prerequisites
Procedure
4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want
to edit.
Results
You have successfully deleted the Node Pool of a Kubernetes cluster instance.
This option is helpful when you are deploying multiple node pools with similar configurations.
Procedure
2 Navigate to Infrastructure > Caas Infrastructure and select the Kubernetes cluster.
4 Select the Node Pools tab, click the Options (⋮) symbol against the Kubernetes cluster that
you want to edit.
6 In the Node Pools step of the Configuration tab, a new node pool with the -copy suffix is displayed. In
the last column of this node pool, click Configure.
7 In the Configuration tab, all inputs default to the values of the current node pool. After confirming
the node pool configuration, click Apply.
Results
VMware Telco Cloud Automation automatically generates a new Node Pool with the
configuration of the Node Pool that you copied.
Deploy Add-Ons
You can deploy Add-Ons to your Kubernetes Workload cluster.
Procedure
6 Select an add-on and configure it, and then click OK. For information about the add-on configuration,
see Managing Add-ons for v2 Workload Clusters.
7 After the Configure Add-On dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
Results
Edit an Add-On
You can reconfigure an add-on of your Kubernetes cluster.
Procedure
4 Select the Add-Ons tab, click the Options (⋮) symbol against the add-on that you want to
edit.
8 Configure the add-on as described in Managing Add-ons for v2 Workload Clusters, and click OK.
9 After the Add-On Configuration dialog closes, click Next until you reach Ready to Deploy, and then click
Deploy.
Results
You have successfully edited the Add-On configuration of a Kubernetes cluster instance.
Delete an Add-On
You can delete an add-on of your Kubernetes cluster.
Procedure
4 Select the Add-Ons tab, click the Options (⋮) symbol against the add-on that you want to
delete.
Results
Note Refer to Upgrade v2 Workload Kubernetes Cluster Version and select a supported
Kubernetes version to upgrade the Control Plane first. Add-Ons are upgraded along with the
Control Plane. After the upgrade is complete, upgrade the node pools as described in Upgrade Cluster Node
Pool.
Prerequisites
Before upgrading the Workload cluster, make sure that the Management cluster is upgraded to
VMware Telco Cloud Automation 2.2 and that the Workload cluster is in the Provisioned status.
Procedure
3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.
5 In the Cluster Info step of the Configuration tab, select the new TCA BOM Release that you want to
upgrade to, and then click Next.
6 In the Control Plane Info step, click the configuration icon to the right of the Control Plane
row.
7 Select the VM Template from the available templates that suit the TCA BOM Release you
selected, and then click Apply.
8 After the Control Plane dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
9 You can monitor the upgrade process below the cluster row. The upgrade events
are updated in the Events tab.
10 If the upgrade fails, click the Retry option to continue upgrading the Control Plane.
However, configuration modification is not allowed before Retry.
Results
You have successfully upgraded the Kubernetes Cluster instance and its Control Plane.
Note Starting with VMware Telco Cloud Automation 2.2, the Control Plane upgrade is separated
from the Node Pool upgrade. There are several options for upgrading a Node Pool:
n Keep the Node Pools at their current version
n Keep the Node Pool one version lower than, or equal to, the Control Plane version
Prerequisites
Before upgrading the Workload cluster Node Pool, make sure that the Workload cluster Control Plane is
upgraded and that the Workload cluster is in the Provisioned status.
Procedure
4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want
to upgrade.
5 Click Edit. The Node Pool Details dialog appears.
6 Select the new TCA BOM Release that you want to upgrade to.
7 Select the VM Template from the available templates that suit the TCA BOM Release
you selected.
9 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.
10 You can monitor the upgrade process below the Node Pool row. The upgrade
events are updated in the Events tab.
11 If the upgrade fails, click the Retry option to continue upgrading the Node Pool. However,
configuration modification is not allowed before Retry.
Results
Add-Ons Configurations
Use the following reference while configuring Add-Ons on your v2 Workload cluster.
vsphere-csi
Option Description
Storage Class Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class
with this name is created in the Kubernetes cluster.
Option Description
ADD NEW STORAGECLASS Click this button to add one or more storage classes.
nfs-client
Option Description
Storage Class Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class
with this name is created in the Kubernetes cluster.
NFS Server Address For an IPv4 cluster, enter the IPv4 address or FQDN of the
NFS Server. For an IPv6 cluster, enter the FQDN.
Path Enter server IP address and mount path of the NFS client.
Ensure that the NFS server is reachable from the cluster.
The mount path must also be accessible to read and write.
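After the vsphere-csi or nfs-client add-on is deployed, workloads consume the configured storage
class through a standard PersistentVolumeClaim. The following is a minimal sketch; the claim name,
storage class name, and requested size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                       # placeholder claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-storage-class   # the Storage Class name entered in the add-on configuration
  resources:
    requests:
      storage: 10Gi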
harbor
If a Harbor has already been registered, click Select Registered Harbor and select the
appropriate Harbor from the list. Otherwise, click Add New Harbor and provide the following
details:
Option Description
helm
This add-on has no configuration.
multus
Caution Do NOT delete multus add-on once it is provisioned, as this might prevent creating or
deleting pods on the workload cluster. See multus-cni known issue #461.
Option Description
Log File Path Path where you want to store the log files.
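With the multus add-on deployed, secondary pod interfaces are defined through
NetworkAttachmentDefinition objects and attached to pods with an annotation. The following is a
minimal sketch using a macvlan attachment with whereabouts IPAM (the whereabouts add-on is
described later in this section); the attachment name, master interface, IP range, pod name, and
image are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-n1        # placeholder attachment name
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "N1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.10.0/24"
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-n1   # attaches the secondary network to the pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0       # placeholder image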
systemsettings
Option Description
load-balancer-and-ingress-service (aka AKO)
The load-balancer-and-ingress-service add-on is also known as the AKO (AVI Kubernetes Operator)
add-on.
Note
1 To install the load-balancer-and-ingress-service (AKO) add-on for a Workload cluster, you must
add AKOO (AVI Kubernetes Operator - Operator) on the Management cluster. For information
about adding AKOO, see Add AVI Kubernetes Operator - Operator.
2 A service engine group cannot be shared by more than one TCA cluster, even if the load-
balancer-and-ingress-service (AKO) add-on is deleted from the original cluster or the original
cluster is already deleted. To use a service engine group that was used by another cluster,
delete the service engine group from the Avi Controller UI and recreate it.
Option Description
Cloud Name Enter the cloud name configured in the AVI Controller.
Default Service Engine Group Enter the service engine group name configured in the AVI
Controller.
Default VIP Network Enter the VIP network name in the AVI Controller.
Default VIP Network CIDR Enter the VIP network CIDR in the AVI Controller.
Option Description
Service Type Enter the ingress method for the service. Choose from the
following options:
n Node Port
n Cluster IP
n Node Port Local - Available only for Antrea CNI.
Network Name Enter the cluster node network name. To add a network,
click Add Network.
Prometheus
Prometheus provides Kubernetes-native deployment and management of Prometheus and
related monitoring components.
Note
1 To customize additional Prometheus configurable fields through the Custom Resources (CRs) tab,
see Advanced Configuration for Prometheus Add-On.
2 Some parameters (for example, PVC parameters, service type, and port) are immutable after the Prometheus
add-on is provisioned. See Configurable parameters.
Option Description
Use Reference Configs Click the toggle button to use the reference configurations.
Storage Class Name The name of the Storage Class. Default Storage Class will
be used if not set.
Storage Enter the size of the Persistent Volume Claim (PVC). The
default value is 150 GB.
fluent-bit
Note
1 Do not set the CPU manager policy to Static for node pools, as this may cause the
fluent-bit DaemonSet pods to crash.
2 To customize additional fluent-bit configurable fields (inputs, outputs, filters, parsers) through the
Custom Resources (CRs) tab, see Advanced Configuration for Fluent-bit Add-On.
3 After updating the provisioned fluent-bit configuration, manually restart all fluent-bit pods to
make the new configuration take effect.
Option Description
Use Reference Configs Click the toggle button to use the reference configurations.
[Service]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
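The reference [Service] section above controls the fluent-bit engine itself; log destinations are
defined in output sections. The following is a hypothetical example of an additional output stanza
that forwards all records to an external Fluentd endpoint; the host name and port are placeholders
and would be supplied through the add-on's configurable fields or the Custom Resources (CRs) tab.
[OUTPUT]
    Name          forward
    Match         *
    Host          fluentd.example.internal
    Port          24224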
whereabouts
This add-on has no configuration.
cert-manager
This add-on has no configuration.
Note In certain scenarios, the cainjector pod or webhook pod of the cert-manager add-on can be in
the CrashLoopBackOff status while the cert-manager add-on status on the UI is Unhealthy. In such a
case, restart the CrashLoopBackOff pod with the command kubectl delete pod -n cert-manager
<crash-pod-name> to recover.
velero
Velero is used to back up and restore a workload cluster.
Note After changing the "Backup Storage" configuration (such as the Storage URL and Storage
Bucket Name), the existing ResticRepositories CRs must be deleted manually in order to continue
using Restic to back up Persistent Volume data.
Option Description
Credential
Backup Storage
Option Description
Note For example, enter minio if you are using the MinIO
service.
Storage Bucket Name Enter the name of the storage bucket where the backup
should be stored.
Note
n This field appears only if the storage URL is in HTTPS
format.
n Also append the https-proxy certificate if Velero is behind
an https-proxy.
Note
n You must install cert-manager before installing any of the TKG standard extensions.
n The following TKG standard extensions, which are supported as VMware Telco Cloud
Automation add-ons, cannot be installed through the TKG standard extension mechanism:
cert-manager, multus-cni, whereabouts, fluent-bit, prometheus.
n For TKG standard extension configurations and other information, see Installing and
Managing Packages with the Tanzu CLI.
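As a hedged illustration only (package names, available versions, and flags differ across TKG releases and Tanzu CLI versions, so check them in your environment), installing a standard package such as Contour with the Tanzu CLI might look along these lines:
# tanzu package available list contour.tanzu.vmware.com -A
# tanzu package install contour --package-name contour.tanzu.vmware.com --version <available-version> --values-file contour-values.yaml --namespace <installation-namespace>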
Option Description
Configurable parameters
Note Some parameters are only applicable for certain topologies (for example, an NSX-T
environment) or certain features (for example, providing cluster control plane HA with Avi).
Customize these parameters carefully based on your actual environment.
metadata:
name: load-balancer-and-ingress-service
clusterName: wc0
spec:
name: load-balancer-and-ingress-service
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
cloudName: vcenter-cloud0
defaultServiceEngineGroup: wc0-se-group
defaultVipNetwork: oam-vip-dvpg
defaultVipNetworkCidr: 172.16.73.0/24
extraConfigs:
ingress:
serviceType: ClusterIP
nodeNetworkList:
- networkName: cluster-mgmt-dvpg
cidrs:
- 172.16.68.0/22
metadata:
name: load-balancer-and-ingress-service
clusterName: wc0
spec:
name: load-balancer-and-ingress-service
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
cloudName: vcenter-cloud0
defaultServiceEngineGroup: wc0-se-group
defaultVipNetwork: oam-vip-dvpg
defaultVipNetworkCidr: 172.16.73.0/24
extraConfigs:
ingress:
serviceType: ClusterIP
nodeNetworkList:
- networkName: cluster-mgmt-dvpg
cidrs:
- 172.16.68.0/22
aviObjects:
aviinfrasettings:
- metadata:
name: ais0
spec:
seGroup:
name: wc0-se-group
network:
vipNetworks:
- networkName: oam-vip-dvpg
l7Settings:
shardSize: MEDIUM
- metadata:
name: ais1
spec:
seGroup:
name: wc0-se-group
network:
vipNetworks:
- networkName: sig-vip-dvpg
l7Settings:
shardSize: MEDIUM
gatewayclasses:
- metadata:
name: gwc0
spec:
controller: ako.vmware.com/avi-lb
parametersRef:
group: ako.vmware.com
kind: AviInfraSetting
name: ais0
gateways:
- metadata:
name: gw0
namespace: gw0
spec:
gatewayClassName: gwc0
listeners:
- protocol: TCP
port: 80
routes:
selector:
matchLabels:
ako.vmware.com/gateway-namespace: gw0
ako.vmware.com/gateway-name: gw0
group: v1
kind: Service
- protocol: TCP
port: 8081
routes:
selector:
matchLabels:
ako.vmware.com/gateway-namespace: gw0
ako.vmware.com/gateway-name: gw0
group: v1
kind: Service
n In this sample CR, two aviinfrasetting objects (ais0 and ais1), one gatewayclass object (gwc0), and
one gateway object (gw0) are created, or updated if they already exist.
n Aviinfrasetting objects can be created with enableRhi: true and bgpPeerLabels as needed.
n TCA creates the namespace (if it does not exist) for gateway objects but does not delete the
namespace when deleting the gateway objects.
Configurable parameters
ingress.tlsCertificate.tls.crt Optional certificate string for ingress if you want to use your
own TLS certificate. A self-signed certificate is generated by default. Default: generated
cert. (tls.crt is a key and not nested.)
ingress.tlsCertificate.tls.key Optional certificate private key for ingress if you want to use
your own TLS certificate. Default: generated cert key. (tls.key is a key and not nested.)
metadata:
name: prometheus
spec:
clusterRef:
name: wc0
namespace: wc0
name: prometheus
namespace: wc0
config:
stringData:
values.yaml: |
prometheus:
deployment:
replicas: 1
containers:
args:
- --storage.tsdb.retention.time=5d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries2
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
service:
type: NodePort
port: 80
targetPort: 9090
pvc:
accessMode: ReadWriteOnce
storage: 150Gi
config:
prometheus_yml: |
global:
evaluation_interval: 1m
scrape_interval: 1m
scrape_timeout: 10s
rule_files:
- /etc/config/alerting_rules.yml
- /etc/config/recording_rules.yml
- /etc/config/alerts
- /etc/config/rules
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
- job_name: 'kube-state-metrics'
static_configs:
- targets: ['prometheus-kube-state-metrics.tanzu-system-
monitoring.svc.cluster.local:8080']
- job_name: 'node-exporter'
static_configs:
- targets: ['prometheus-node-exporter.tanzu-system-
monitoring.svc.cluster.local:9100']
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__,
__meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: node
- job_name: kubernetes-nodes-cadvisor
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
alerting:
alertmanagers:
- scheme: http
static_configs:
- targets:
- alertmanager.tanzu-system-monitoring.svc:80
- kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
regex: default
action: keep
- source_labels: [__meta_kubernetes_pod_label_app]
regex: prometheus
action: keep
- source_labels: [__meta_kubernetes_pod_label_component]
regex: alertmanager
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_probe]
regex: .*
action: keep
- source_labels: [__meta_kubernetes_pod_container_port_number]
regex:
action: drop
alerting_rules_yml: |
{}
recording_rules_yml: |
groups:
- name: vmw-telco-namespace-cpu-rules
interval: 1m
rules:
- record: tkg_namespace_cpu_usage_seconds
expr: sum by (namespace) (rate
(container_cpu_usage_seconds_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_throttled_seconds
expr: sum by (namespace)
(((rate(container_cpu_cfs_throttled_seconds_total[5m])) ) > 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_request_core
expr: sum by (namespace) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_limits_core
expr: sum by (namespace) (kube_pod_container_resource_limits_cpu_cores >
0.0 or kube_pod_info < bool 0.1)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-namespace-mem-rules
interval: 1m
rules:
- record: tkg_namespace_mem_usage_mb
expr: sum by (namespace) (container_memory_usage_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_rss_mb
expr: sum by (namespace) (container_memory_rss{container!~"POD",container!
=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_workingset_mb
expr: sum by (namespace) (container_memory_working_set_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_request_mb
expr: sum by (namespace)
(kube_pod_container_resource_requests_memory_bytes) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_limit_mb
- record: tkg_pod_cpu_request_core
expr: sum by (pod) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_cpu_limit_core
expr: sum by (pod) (kube_pod_container_resource_limits_cpu_cores > 0.0 or
kube_pod_info < bool 0.1)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_cpu_throttled_seconds
expr: sum by (pod)
(((rate(container_cpu_cfs_throttled_seconds_total[5m])) ) > 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-mem-rules
interval: 1m
rules:
- record: tkg_pod_mem_usage_mb
expr: sum by (pod) (container_memory_usage_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_rss_mb
expr: sum by (pod) (container_memory_rss{container!~"POD",container!
=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_workingset_mb
expr: sum by (pod) (container_memory_working_set_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_request_mb
expr: sum by (pod) (kube_pod_container_resource_requests_memory_bytes) /
(1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_limit_mb
expr: sum by (pod) ((kube_pod_container_resource_limits_memory_bytes /
(1024*1024) )> 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-network-rules
interval: 1m
rules:
- record: tkg_pod_network_tx_bytes
expr: sum by (pod) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_bytes
expr: sum by (pod) (rate (container_network_receive_bytes_total{container!
~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_tx_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_packets
expr: sum by (pod) (rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_tx_dropped_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_dropped_packets
expr: sum by (pod) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_tx_errors
expr: sum by (pod) (rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_errors
expr: sum by (pod) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_bytes
expr: sum by (pod) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_drop_packets
expr: sum by (pod) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m])
+ rate (container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}
[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_errors
expr: sum by (pod) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-other-rules
interval: 1m
rules:
- record: tkg_pod_health_container_restarts_1hr_count
expr: sum by (pod)
(increase(kube_pod_container_status_restarts_total[1h]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_health_unhealthy_count
expr: min_over_time(sum by (pod) (kube_pod_status_phase{phase=~"Pending|
Unknown|Failed"})[15m:1m])
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-node-cpu-rules
interval: 1m
rules:
- record: tkg_node_cpu_capacity_core
expr: sum by (node) (kube_node_status_capacity_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_allocate_core
expr: sum by (node) (kube_node_status_allocatable_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_usage_seconds
expr: (label_replace(sum by (instance)
(rate(container_cpu_usage_seconds_total[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_throttled_seconds
expr: sum by (instance)
(rate(container_cpu_cfs_throttled_seconds_total[5m]))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_request_core
expr: sum by (node) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_limits_core
expr: sum by (node) (kube_pod_container_resource_limits_cpu_cores)
labels:
job: kubernetes-service-endpoints
- name: vmw-telco-node-mem-rules
interval: 1m
rules:
- record: tkg_node_mem_capacity_mb
expr: sum by (node) (kube_node_status_capacity_memory_bytes / (1024*1024))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_allocate_mb
expr: sum by (node) (kube_node_status_allocatable_memory_bytes /
(1024*1024))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_request_mb
(rate(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_tx_dropped_packets
expr: (label_replace(sum by
(instance) (rate(container_network_transmit_packets_dropped_total{container!~"POD",pod!
="",image!=""}[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_dropped_packets
expr: (label_replace(sum by
(instance) (rate(container_network_receive_packets_dropped_total{container!~"POD",pod!
="",image!=""}[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_tx_errors
expr: (label_replace(sum by (instance)
(rate(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_errors
expr: (label_replace(sum by (instance)
(rate(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_bytes
expr: label_replace((sum by (instance) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_packets
expr: label_replace((sum by (instance) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_drop_packets
expr: label_replace((sum by (instance) (rate
(container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m])
+ rate (container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}
[5m]))), "node", "$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_errors
expr: label_replace((sum by (instance) (rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- name: vmw-telco-node-other-rules
interval: 1m
rules:
- record: tkg_node_status_mempressure_count
expr: sum by (node)
(kube_node_status_condition{condition="MemoryPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_diskpressure_count
expr: sum by (node)
(kube_node_status_condition{condition="DiskPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_pidpressure_count
expr: sum by (node)
(kube_node_status_condition{condition="PIDPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_networkunavailable_count
expr: sum by (node)
(kube_node_status_condition{condition="NetworkUnavailable",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_etcdb_bytes
expr: (label_replace(etcd_db_total_size_in_bytes, "instance", "$1",
"instance", "(.+):(\\d+)")) * on (instance) group_left (node) (avg by (instance, node)
(label_replace ((kube_pod_info), "instance", "$1", "host_ip", "(.*)")) )
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_apiserver_request_total
expr: sum((label_replace(apiserver_request_total, "instance", "$1",
"instance", "(.+):(\\d+)")) * on (instance) group_left (node) (avg by (instance, node)
(label_replace ((kube_pod_info), "instance", "$1", "host_ip", "(.*)")) )) by (node)
labels:
job: kubernetes-service-endpoints
ingress:
enabled: false
virtual_host_fqdn: prometheus.system.tanzu
prometheus_prefix: /
alertmanager_prefix: /alertmanager/
prometheusServicePort: 80
alertmanagerServicePort: 80
alertmanager:
deployment:
replicas: 1
service:
type: ClusterIP
port: 80
targetPort: 9093
pvc:
accessMode: ReadWriteOnce
storage: 2Gi
config:
alertmanager_yml: |
global: {}
receivers:
- name: default-receiver
templates:
- '/etc/alertmanager/templates/*.tmpl'
route:
group_interval: 5m
group_wait: 10s
receiver: default-receiver
repeat_interval: 3h
kube_state_metrics:
deployment:
replicas: 1
service:
type: ClusterIP
port: 80
targetPort: 8080
telemetryPort: 81
telemetryTargetPort: 8081
node_exporter:
daemonset:
hostNetwork: false
updatestrategy: RollingUpdate
service:
type: ClusterIP
port: 9100
targetPort: 9100
pushgateway:
deployment:
replicas: 1
service:
type: ClusterIP
port: 9091
targetPort: 9091
cadvisor:
daemonset:
updatestrategy: RollingUpdate
n ClusterIP – With the default configuration, the Prometheus service can be accessed only within
the workload cluster. The service can also be exposed via ingress; however, this depends on the
ingress controller and some additional manual configuration.
n LoadBalancer – Leverages the Avi load balancer to expose the service. This deployment method
depends on the load-balancer-and-ingress-service add-on. TCA does not support specifying a
static VIP for the Prometheus service; Avi allocates a VIP from the default VIP pool, and other
external components can then integrate with Prometheus through the URL http://
<prometheus-VIP>.
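For example, with the ClusterIP service type you can reach the Prometheus UI temporarily through a port-forward. The service name prometheus-server and the namespace tanzu-system-monitoring are assumptions here, so verify them in your cluster before running the commands:
# kubectl -n tanzu-system-monitoring get svc
# kubectl -n tanzu-system-monitoring port-forward svc/prometheus-server 9090:80
You can then browse http://localhost:9090 from the machine running kubectl.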
Configurable parameters
metadata:
name: fluent-bit
clusterName: wc0
spec:
name: fluent-bit
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
fluent_bit:
config:
service: |
[Service]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
inputs: |
[INPUT]
Name tail
Path /var/log/containers/*.log
Parser cri
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
[INPUT]
Name systemd
Tag host.*
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
Systemd_Filter _SYSTEMD_UNIT=containerd.service
Read_From_Tail On
outputs: |
[OUTPUT]
Name syslog
Match *
Host 1.2.3.4
Port 514
Mode udp
Syslog_Format rfc5424
Syslog_Hostname_key tca_cluster_name
Syslog_Appname_key pod_name
Syslog_Procid_key container_name
Syslog_Message_key message
filters: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[FILTER]
Name nest
Match kube.*
Operation lift
Nested_Under kubernetes
[FILTER]
Name record_modifier
Match *
Record tca_cluster_name wc0
parsers: |
[PARSER]
Name cri
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?
<logtag>[^ ]*) (?<message>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
2 Collect all Kubernetes container logs and the systemd logs for kubelet.service and
containerd.service in the fluent_bit.config.inputs value.
3 Use an output of type syslog to integrate fluent-bit with VMware vRealize Log Insight. Replace
the host IP address 1.2.3.4 with your vRealize Log Insight IP address.
4 Use the default filter of type kubernetes in the fluent_bit.config.filters value, and add a filter of
type nest and a filter of type record_modifier to process the native logs so that they can
be easily filtered and displayed clearly in vRealize Log Insight. Remember to replace
tca_cluster_name wc0 with your cluster name in the record_modifier filter.
VMware Telco Cloud Automation supports backing up and restoring the entire set of TKG
management cluster nodes (VMs) on the same infrastructure.
Note
n Partial backup or restore of TKG management cluster nodes is not supported. You must
back up all cluster nodes and restore them all together.
n The restored management cluster must be associated with the same TCA-CP appliance.
n The infrastructure, including vCenter, the networking configuration, and the datastore, must be
the same for the source and restored cluster node VMs, and the infrastructure must be available
to restore the cluster node VMs on it.
n Backup and restore of Kubernetes Persistent Volumes in the TKG management cluster is not
supported.
n Before you start to restore all cluster nodes from backups, power off the old node VMs, and
then remove them from the vCenter inventory or delete them from disk in vCenter. Otherwise, the
old node VMs may be powered on and join the restored cluster.
n During restoration, keep all node VMs powered off until they are all restored, and then power
them on.
Note
n Only v2 workload clusters are supported.
n The workload cluster must have at least one node pool; otherwise, the Velero pod remains in
the Pending state after the add-on installation.
The following procedure describes the steps to install the Velero add-on by editing a workload
cluster configuration.
Prerequisites
An S3-compatible object storage with sufficient disk space must be available for Velero to store
backups and associated artifacts, for example, MinIO. For more information about installing MinIO,
see Install and Configure an S3-Compatible Object Storage.
Procedure
5 Select the Options (three dots) icon corresponding to the Velero add-on and click on Edit.
6 In the Add-on Configuration window, enter the configuration details. See Add-On
Configuration Reference for v2 Workload Clusters.
7 Click Ok.
Note
n Only v2 workload clusters are supported.
n Source and target workload clusters must be under the same vCenter and managed by the
same management cluster.
n Some Kubernetes resources in the cluster are not backed up. If an attempt is made to
back up or restore these Restricted Resources, the backup or restore is marked as
"Partially Failed".
After you install the Velero add-on on a workload cluster, you can run the Velero commands on
the web terminal connected with the cluster using the Embedded SSH Client.
Alternatively, you can run the Velero commands on the standalone Velero client. See Install
Standalone Velero Client.
Prerequisites
Procedure
3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to backup and then selecting Open Terminal.
4 On the Web terminal, check the service health of Velero by running the following command:
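For example, a minimal health check sketch, assuming the Velero server runs in the velero namespace and the velero CLI is available on the web terminal:
# velero version
# kubectl -n velero get pods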
Alternatively, you can check the service health of Velero by performing the following:
5 Set an environment variable to exclude the cluster resources from the backup.
# export TCA_VELERO_EXCLUDE_RESOURCES="issuers.cert-manager.io,certificates.cert-
manager.io,certificaterequests.cert-manager.io,gateways.networking.x-
k8s.io,gatewayclasses.networking.x-k8s.io"
# export TCA_VELERO_EXCLUDE_NAMESPACES="velero,tkg-system,tca-system,tanzu-
system,kube-system,tanzu-system-monitoring,tanzu-system-logging,cert-manager,avi-system"
Option 1: Annotate the pod that mounts volumes on Persistent Volumes created with the nfs-
client storage class so that they are backed up using Restic.
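A minimal sketch of the Restic opt-in annotation; the namespace, pod, and volume names are placeholders:
# kubectl -n <cnf-namespace> annotate pod <pod-name> backup.velero.io/backup-volumes=<volume-name-1>,<volume-name-2>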
You can choose to add the above annotation to the template metadata in the deployment
controller to avoid re-annotating in case the annotated pods restart.
Option 2: Change the default PV backup plugin to Restic. This allows Restic to back up all
types of Persistent Volumes, including the ones created with the vSphere CSI plugin.
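For example, a hedged sketch that makes Restic the default for all volumes of a single backup; the backup and namespace names are placeholders:
# velero backup create <backup-name> --include-namespaces <cnf-namespace> --default-volumes-to-restic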
7 Check the backup status and the related CRs, and wait until the processes are "Completed".
If you annotate pods and use Restic to back up PV data, check the status of the
podvolumebackups CRs.
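For example:
# velero backup describe <backup-name> --details
# kubectl -n velero get podvolumebackups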
What to do next
Prerequisites
n Source and target clusters must be associated with the same Management Cluster and must
be under the same vCenter server.
n Source and target clusters must be associated with the same Kubernetes version.
Procedure
Note
n Add node pools manually, as they are not copied from the source cluster spec.
n If the TCA cert-manager add-on is enabled in the source cluster and a CNF is configured to use
this add-on, the cert-manager service does not renew certificates requested by this CNF after
restoration. Remedy and reconfigure the CNF from TCA to generate the missing resources
after the restoration process.
c Open the web terminal by clicking on the Options (three dots) corresponding to the
workload cluster you want to restore and then selecting Open Terminal.
d On the Web terminal, check the service health of Velero by running the following
command:
Alternatively, you can check the Velero add-on health status from the TCA UI.
g Check the restoration status and related CRs. Wait until the processes are "Completed".
Note If the Network Function pod requires late binding for node pool VMs, the restored
pods might be in the Pending status. Follow Remediate Network Functions to heal them.
What to do next
Procedure
3 Click on the Options (three dots) corresponding to the network function that you want to
remediate and select Remedy.
Note The Remedy option is available only if Cloud-Native Network Function and Network
Function are instantiated.
4 Click Continue if you have already restored the cluster via Velero successfully.
5 In the Create Network Function Instance window, enter the following details under Inventory
Detail:
n Name - Enter a different name for the new network function instance.
n Select Cloud - Select a cloud from your network on which you can instantiate the network
function.
Note You can select the node pool only if the network function instantiation requires
infrastructure customization.
n Tags (Optional) - Select the key and value pairs from the drop-down menus.
7 (Optional) To track and monitor the progress of the remediation process, select Inventory >
Network Services and verify that Instantiated is displayed in the State column.
Note In the State column, if Instantiated is displayed, it indicates that the remediation
process is completed successfully and the network function is recovered and ready for use.
Note
n Only v2 workload clusters are supported.
n Only persistent volumes created through vSphere CSI can be backed up.
n Some Kubernetes resources in the cluster are not backed up. If an attempt is made to
back up or restore these Restricted Resources, the backup or restore is marked as
"Partially Failed".
Prerequisites
Procedure
3 Open the web terminal by clicking the Options (three dots) corresponding to the workload
cluster you want to backup and then selecting Open Terminal.
4 On the Web terminal, check the service health of Velero by running the following command:
5 Set an environment variable to exclude the cluster resources from the backup.
# export TCA_VELERO_EXCLUDE_RESOURCES="issuers.cert-manager.io,certificates.cert-
manager.io,certificaterequests.cert-manager.io,gateways.networking.x-
k8s.io,gatewayclasses.networking.x-k8s.io"
Option 1: Annotate the pod that mounts volumes on Persistent Volumes created with the nfs-
client storage class so that they are backed up using Restic.
This annotation can also be provided in a pod template spec if you use a
controller to manage your pods. To quickly set the annotation on a pod template
(.spec.template.metadata.annotations) without modifying the full manifest, use the kubectl patch
command. For example:
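A minimal sketch, assuming the pods are managed by a Deployment; the namespace, deployment, and volume names are placeholders:
# kubectl -n <cnf-namespace> patch deployment <deployment-name> -p '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"<volume-name>"}}}}}'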
Option 2: Change the default PV backup plugin to Restic. This allows Restic to back up all
types of Persistent Volumes, including the ones created with the vSphere CSI plugin.
7 Check the backup status and related CRs and wait until the processes are "Completed".
If you annotate pods and use Restic to back up PV data, check the status of
podvolumebackups.
What to do next
Note
n If the TCA add-on load-balancer-and-ingress-service is enabled in the source cluster
and a CNF is defined to create the Kubernetes resources gatewayclasses.networking.x-
k8s.io or gateways.networking.x-k8s.io in the Helm chart, CNF pods in the restored
namespaces are in the "Pending" state after the restoration is complete. Recreate the resources in
the restored cluster with the new service engine group setting. It is recommended to define these
resources in the TCA add-on instead.
n If the TCA cert-manager add-on is enabled in the cluster and a CNF is configured to use this
add-on, the cert-manager service can no longer renew certificates requested by this CNF
after the namespace where the CNF resides is restored. In such cases, reconfigure the CNF from
TCA to generate the missing resources after the restore process.
Prerequisites
Procedure
3 Open the web terminal by clicking the Options (three dots) corresponding to the workload
cluster you want to backup and then selecting Open Terminal.
4 On the Web terminal, check the service health of Velero by running the following command:
8 Check backup status and related CR. Wait until the processes are "Completed".
Prerequisites
Procedure
3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to back up and then selecting Open Terminal.
4 On the web terminal, check the service health of Velero by running the following command:
Also you can use "--include-resources" flag to back up all persistent volumes under some
namespaces:
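For example, a hedged sketch; the backup and namespace names are placeholders:
# velero backup create <pv-backup> --include-namespaces <cnf-namespace> --include-resources persistentvolumeclaims,persistentvolumes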
For more resource filtering methods, see the Velero resource filtering documentation.
1. First, label the PVC and PV. For an NFS PV, also label the pod that mounts this
PV.
# kubectl -n <cnf-namespaces> label pvc <example-pvc> <key>=<value>
# kubectl label pv <example-pv> <key>=<value>
# kubectl -n <cnf-namespaces> label pod <example-pod> <key>=<value>
2. Back up the Kubernetes resources matching the label selector.
# velero backup create <example-pv-backup> --selector <key>=<value> --default-volumes-to-restic
You can also choose to back up all the persistent volumes using Restic plugin under some
namespaces:
6 Check backup status and related CRs and wait until the processes are "Completed".
If you annotate pods and use restic to back up PV, check the status of podvolumebackups
CR.
What to do next
Prerequisites
Procedure
3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to back up and then selecting Open Terminal.
4 On the web terminal, check the service health of Velero by running the following command:
6 Delete the CNF application Kubernetes resources and the old PVs that will be restored from a
backup.
Note: Refer to the CNF's inventory page in the TCA UI to determine which kind of controller the
CNF is using. For example, when the CNF's controller is a Deployment, delete the Deployment
CR:
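A hedged sketch of the deletion and restore steps; the namespace, deployment, PVC, backup, and restore names are placeholders:
# kubectl -n <cnf-namespace> delete deployment <cnf-deployment>
# kubectl -n <cnf-namespace> delete pvc <old-pvc>
# velero restore create <restore-name> --from-backup <backup-name> --include-namespaces <cnf-namespace>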
8 Check the restore status and related CRs, and wait until the processes are "Completed".
If the restoration contains PV data backed up using Restic, check the status of the
podvolumerestores CR.
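For example:
# velero restore describe <restore-name> --details
# kubectl -n velero get podvolumerestores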
9 Reconfigure the CNF from the TCA UI with an empty JSON file to override values. This creates
a new deployment using the restored PVs. If new pods are created, delete the legacy
ones manually. Note: During the "Override values" step, upload an empty JSON file with
the content "{}" or an empty YAML file with the content "---".
Schedule a Backup
You can set up a backup schedule to run at a specific time. The schedule time format is defined by a Cron
expression. For example, the following command creates a backup that runs at 3:00 AM every day.
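A minimal sketch; the schedule name and included namespaces are placeholders:
# velero schedule create daily-backup --schedule="0 3 * * *" --include-namespaces <cnf-namespace>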
Delete a Schedule
Use the following command to delete schedules.
Note Deleting the backup schedule does not delete the backups created by the schedule.
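For example:
# velero schedule get
# velero schedule delete <schedule-name>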
Delete a Backup
You can delete a backup resource including all the data in the object storage by running the
following command:
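For example:
# velero backup delete <backup-name>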
Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in
a pod while that pod is being backed up. There are two ways to specify hooks: annotations on
the pod itself, and in the Backup spec. For more information, see the official Velero documentation.
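As a hedged illustration of the annotation-based approach, using the standard Velero pre-backup hook annotations; the pod, container, and command are placeholders:
# kubectl -n <cnf-namespace> annotate pod <pod-name> \
    pre.hook.backup.velero.io/container=<container-name> \
    pre.hook.backup.velero.io/command='["/bin/sh", "-c", "sync"]'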
2. Change the limits and requests memory settings from the defaults of 512Mi and 128Mi to 512Mi
and 256Mi.
ports:
- containerPort: 8085
name: metrics
protocol: TCP
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: 500m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
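A hedged sketch for applying this change, assuming the Velero server runs as the velero deployment in the velero namespace; adjust the resources.limits.memory and resources.requests.memory values of the velero container as shown above:
# kubectl -n velero edit deployment velero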
For information on the S3-compatible object storage services that Velero supports, see S3-
Compatible object store providers.
Prerequisites
n Your environment has a Linux VM with sufficient storage to install MinIO and store backups.
The MinIO service does not operate if the disk has less than 1 GB of free space.
Procedure
# curl -O https://dl.minio.io/server/minio/release/linux-amd64/minio
# chmod +x minio
# mv minio /usr/local/bin
# mkdir -p /usr/local/share/minio
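If the minio-user account referenced in the next steps does not exist yet, a minimal sketch to create it (assuming a systemd-based Linux distribution; adjust to your environment):
# groupadd -r minio-user
# useradd -r -g minio-user -s /sbin/nologin minio-user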
5 Create a new folder to store MinIO data files and grant ownership of the folder to the minio-user user.
# mkdir -p /usr/local/share/minio
# chown minio-user:minio-user /usr/local/share/minio
6 Create a new folder for MinIO configuration files and grant ownership of the folder to the minio-user
user.
# mkdir -p /etc/minio
# chown minio-user:minio-user /etc/minio
7 Create a new file for default configurations of MinIO service and enter the details.
# vim /etc/default/minio
Option Description
Note If the IP address is not specified, MinIO binds to every address
configured on the server. Therefore, it is recommended to specify an IP
address that the Velero add-on can connect to. The default port is 9000.
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address 10.196.46.27:9000"
MINIO_ACCESS_KEY="minio"
MINIO_SECRET_KEY="minio123"
# curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/
minio.service
# mv minio.service /etc/systemd/system
# systemctl daemon-reload
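After the unit file is in place, a minimal sketch to start MinIO and verify that it is running:
# systemctl enable minio
# systemctl start minio
# systemctl status minio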
For more information on installing and configuring MinIO storage service, see MinIO
Documentation.
13 If necessary, enable Transport Layer Security (TLS) encryption of incoming and outgoing
traffic on MinIO. For more information, see Enabling TLS.
Prerequisites
Before using the standalone Velero client, download the kubeconfig file of the workload cluster. See
Access Kubernetes Clusters Using kubeconfig.
Procedure
1 Download the supported version of the signed Velero binary for vSphere with Tanzu from the
VMware Product Downloads Page.
Note Ensure that you are using Velero binary signed by VMware so that you are eligible for
support from VMware.
2 Open a command line and change the directory to the Velero CLI download.
# chmod +x velero-linux-vX.X.X_vmware.1.
5 Move the Velero CLI to the following system path for global availability.
# cp velero-linux-vX.X.X_vmware.1 /usr/local/bin/velero
# velero version
7 Append the --kubeconfig option to every velero command, for example:
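A minimal sketch; the kubeconfig path is a placeholder:
# velero backup get --kubeconfig /path/to/workload-cluster-kubeconfig.yaml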
Earlier, to access a cluster, a user logged in as a Cluster API Provider vSphere (CAPV) user.
The downside to this method was that it provided the user with unrestricted access across all
clusters.
Now, a user can remotely access Kubernetes clusters from VMware Telco Cloud Automation
using one of the following methods:
n Access using an external SSH terminal with a one time generated token from VMware Telco
Cloud Automation.
n Download and use the kubeconfig file provided by VMware Telco Cloud Automation. This
file contains the external address of VMware Telco Cloud Automation as the endpoint, and the
token for accessing the Kubernetes cluster.
This way, only those users who have the required permissions can access the cluster and
perform only those operations that are allowed based on their privileges.
Prerequisites
1 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.
Procedure
4 Use kubectl to interact with the cluster using the downloaded kubeconfig.yaml file.
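For example, a minimal sketch; the file name depends on where you saved the downloaded kubeconfig:
# kubectl --kubeconfig ./kubeconfig.yaml get nodes
# kubectl --kubeconfig ./kubeconfig.yaml get pods -A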
Results
You can now perform cluster operations based on your user privileges.
Prerequisites
Ensure that you have installed an external SSH client on your local system.
Procedure
VMware Telco Cloud Automation generates a one-time token, user name, and password.
4 Using these login credentials, you can SSH into a Kubernetes cluster and perform cluster
operations based on your user privileges. The SSH connection is established to the endpoint
of VMware Telco Cloud Automation on port 8501.
For example:
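A minimal sketch, with the generated user name and the VMware Telco Cloud Automation endpoint (FQDN or IP address) as placeholders; enter the generated credentials when prompted:
# ssh -p 8501 <generated-user-name>@<tca-endpoint>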
Procedure
Results
A terminal opens and VMware Telco Cloud Automation connects with the Kubernetes cluster.
You can now perform Kubernetes cluster operations based on your user privileges.
Prerequisites
1 Ensure that you have installed an external SSH client on your local system.
2 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.
Procedure
Results
The recovery kubeconfig file establishes a remote connection and lists the pods that are
running on the cluster. You can now perform cluster operations on them.
VMware Telco Cloud Automation is integrated with VMware Tanzu Kubernetes Grid (TKG).
A new version of TKG requires you to upgrade the management cluster. After upgrading the
VMware Telco Cloud Automation version, perform the following steps in the sequence provided.
1 Upgrade the management cluster. For details, see Upgrade Management Kubernetes Cluster
Version. For implications of not upgrading the management cluster, see Implications of Not
Upgrading Management Cluster.
2 Upgrade the workload cluster. For details, see Upgrade Management Kubernetes Cluster
Version. For implications of not upgrading the workload cluster, see Implications of Not
Upgrading Workload Cluster.
Note
n For details on supported versions, see Supported Features on Different VIM Types.
n Upgrade Validations
Upgrade Validations
From version 2.0, VMware Telco Cloud Automation automates the upgrade validations. VMware
Telco Cloud Automation performs the following validations when upgrading Kubernetes
clusters to a newer version.
n Whether the Control Plane and Worker nodes are in a healthy state; the upgrade fails if
they are in the Not Ready state.
n Whether key deployment parameters, such as folder names and network paths, have
not been renamed or moved.
n Whether the operators are running; a warning is displayed if they are not.
n The Management cluster upgrade is successful even when the underlying workload clusters
are down.
n If the operators are not the latest versions, the corresponding workload clusters display a
warning for upgrading the add-ons. To upgrade Workload cluster add-ons individually, use
the Upgrade Add-Ons option.
n Whether the control planes are in a healthy state; the upgrade fails if they are in the Not
Ready state.
n Whether the worker nodes are in a healthy state; a warning is shown if they are in the Not
Ready state.
n Whether key deployment parameters, such as folder names and network paths, have
not been renamed or moved.
n Whether the operators are running; a warning is displayed if they are not.
n Whether the Management cluster is reachable and at least one Worker node is in the Ready
state.
n Whether the vmconfig-operator on the Management cluster is running. Cluster upgrade fails
if vmconfig-operator is not running.
n After the VMware ESXi connects, VMware Telco Cloud Automation upgrades the failed
nodes.
n After you remediate any environment issue, click Retry to complete the upgrade.
Attention The VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation
Control Plane must be upgraded from 2.2.x to 2.3. The management cluster supports only
the V1 version.
TCA 2.2.x
Management TCA 2.3.0
cluster (v1.24.10) Management
Cluster Operations in UI [Before upgrade] Cluster (v1.24.10) Comments
Attention The Management cluster is mandatorily upgraded from 1.23.10 to 1.24.10. The
following table lists the workload cluster compatibility after management cluster upgrade to
v1.24.10.
Edit Worker Node Add New No Yes Yes Yes Yes Yes
Configuration NodePool
Run Diagnosis
Attention The Management cluster is mandatorily upgraded from 1.23.10 to 1.24.10. The
following table lists the workload cluster compatibility after management cluster upgrade to
v1.24.10.
Edit Workload Edit Control Plane No Yes Yes Yes Yes Yes
Cluster Node
Configuration
n Container Network Interface (CNI) and Container Storage Interface (CSI) diagnosis
n Advanced diagnosis
n Node diagnosis
Note The diagnosis takes approximately 30 minutes to complete. If another diagnosis is
submitted within that time, the data is overridden with the result of the last submitted diagnosis.
Procedure
3 Click the Options (⋮) symbol corresponding to the Kubernetes cluster on which you want to
run the diagnosis.
5 (Optional) Select one or more test cases that you want to apply for the cluster diagnosis.
Note If you don't select any test case, the system runs a default diagnosis on the cluster.
7 Click the cluster name and navigate to the Diagnosis tab to monitor the progress of the
diagnosis.
After the diagnosis is complete, the DOWNLOAD option is available to download the
diagnosis report and perform a detailed analysis.
Prerequisites
n Must comply with TOSCA Simple Profile in YAML version 1.2 or TOSCA Simple Profile for
NFV version 1.0.
Procedure
5 (Optional) To add a tag, select the key and value pairs from the drop-down menus. You can
add more than one tag.
6 Click Browse and select the network function descriptor (CSAR) file.
7 Click Upload.
Results
The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.
What to do next
n To create a network service that includes the network function, see Design a Network Service
Descriptor.
n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.
n To add or remove tags for your network function, select the desired network function and
click the Edit Tags icon.
n To remove a network function from the catalog, stop and delete all instances using the
network function. Delete all the Network Service catalogs that are using the network function.
Then select the function in the catalog and click Delete.
Prerequisites
Procedure
3 Click Onboard.
6 Tags (Optional) - Enter the tags to associate with your VNF descriptor.
8 Click Design.
n Version - The version of the network function TOSCA file. This text box is not editable.
a Under Network Function Properties, enter information for the following fields:
n Heal
n Scale
n Scale To Level
n Workflow
n Operate
n Upgrade Package
c The Draft Versions pane displays the available versions of the Network Function catalog
that you can edit. Click the Options (⋮) icon and select the draft that you want to view or
edit.
a Add internal networks (Virtual Links) to your VNF by dragging the icon from the toolbar
into the design area. During Instantiation, VMware Telco Cloud Automation creates
networks for these virtual links. You can override them and select the existing networks if
necessary.
n Network Name
n Description
n CIDR
n DHCP
n (Optional) Gateway IP
n Start IP Address
n End IP Address
To configure additional settings for your network, click the pencil icon against the
network.
b Add virtual machines (VDU) by dragging the icon from the Toolbar into the design area.
In the Configure VDU pane, specify the following settings for each VDU:
n Image Name - The name of the VM template that is on the backing vCenter Server of
your cloud.
Note
n The image name you enter must match the virtual machine template name on the
vCenter Server.
Note This option is applicable when you configure VMware Integrated OpenStack as
your VIM.
n OVF Properties (Optional) - OVF properties are the OVF inputs to provide to the VM
template. Enter the property, description, type such as string, boolean, or number,
and default value. To make this information mandatory, select the Required option.
n Connection Points - Select an internal or external connection point from the Add
Connection Point drop-down menu:
n Internal Connection Point - Links the VDU to an existing virtual link that is added
to the VNF. At least one virtual link is required for internal connection points.
Note To enable the Depends On option, you must configure more than one VDU.
Note You must add at least one virtual link before configuring the internal connection
points for your VDUs.
You can modify VDU settings at a later stage by clicking the pencil icon on the desired
VDU.
11 In the Rules tab, add an affinity or anti-affinity rule. For more information, see Working with
Affinity Rules.
12 In the Scaling Policies tab, add scaling aspects and instantiation levels. For more information,
see Scaling Policies.
14 To save your descriptor as a draft and work on it later, click Save Draft. For information about
working with different draft versions, see Edit Network Function Descriptor Drafts.
Results
The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.
What to do next
n To create a network service that includes the network function, see Design a Network Service
Descriptor.
n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.
n To add or remove tags, go to Catalog > Network Function and click the desired network
function. Then click Edit.
n To remove a network function from the catalog, stop and delete all instances using the
network function. Then select the function in the catalog and click Delete.
Prerequisites
Procedure
5 Tags (Optional) - Enter the tags to associate with your VNF descriptor.
6 Type - Select the network function type as Cloud Native Network Function.
7 Click Design.
n Version - The version of the network function TOSCA file. This field is not editable.
a Under Network Function Properties, enter information for the following fields:
n Heal
n Scale
n Scale To Level
n Workflow
n Operate
n Upgrade Package
c The Draft Versions pane displays the available versions of the Network Function catalog
that you can edit. Click the Options (⋮) icon and select the draft that you want to view or
edit.
a From the Components toolbar, drag a Helm Chart into the design area. Helm is
a Kubernetes application manager used for deploying CNFs. Helm Charts contain a
collection of files that describe a set of Kubernetes resources. Helm uses the resources
from Helm Charts for orchestrating the deployment of CNFs on a Kubernetes cluster.
n Chart Version - Version number of the chart from the Helm repository.
n Helm Version - Select the version of the Helm from the drop-down menu.
n (Optional) Helm Scale Properties - You can add the helm properties required for
scale. You can also specify if the property is mandatory or optional for scale.
n (Optional) Depends On - Specify the Helm to be deployed before deploying this Helm.
In a scenario where you deploy many Helms, there can be dependencies between the
Helms regarding the order in which they are deployed. This option enables you to
specify their deployment order.
12 To save your descriptor as a draft and work on it later, click Save Draft. For information about
working with different draft versions, see Edit Network Function Descriptor Drafts.
Results
The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.
What to do next
n To create a network service that includes the network function, see Design a Network Service
Descriptor.
n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.
n To add or remove tags, go to Catalog > Network Package and click the desired network
function. Then click Edit.
n To remove a network function from the catalog, stop and delete all instances using the
network function. Then select the function in the catalog and click Delete.
You can use VMware Telco Cloud Automation to customize the infrastructure requirements
of the node pools. You can define these customizations through the user interface, and the system
adds them to the corresponding TOSCA file. For more details on the TOSCA
components, see TOSCA Components.
Prerequisites
Procedure
4 Select Design Network Function Descriptor on the Onboard Network Function page. Add
the following details:
n Tags: Associated tags for the network package. Select the key and value from the drop-
down menu.
n Network Function: Select the type of network function. For infrastructure designer, select
Cloud Native Network Function.
5 Click Design.
n Network Adapter - Click Add to add a new network adapter. Enter the following details:
n (Optional) Target Driver - Select the value from the drop-down menu.
n Interface Name - Name of the interface for the vmxnet3 device. This property is
displayed when you select vmxnet3 in Device Type.
n PF Group - Enter the name of the PF group for which you want to add the network
adapter.
n Shared Across NUMA - Select the button to enable or disable sharing of the devices
across NUMA.
Note Shared Across NUMA is applicable only when NUMA Alignments is enabled.
n Additional Properties - This property is displayed when you select vmxnet3 in Device
Type.
n CTX Per Dev - To configure the Multiple Context functionality for vNIC traffic
managed through Enhanced Datapath mode, select the value from the drop-down
menu. For more details, see CTX Per Dev. For more details on Enhanced Datapath
settings, see Configuration to Support Enhanced Data Path Support.
Note When you select Target Driver, the system automatically adds the required DPDK
in Kernel Modules and dependent custom packages in the Custom Packages.
n PCI Pass Through - Click Add to enter the PTP or PCI Devices.
Note When you add a PCI Pass Through device, the system automatically adds
the required Linux-rt in Kernel Type, DPDK in Kernel Modules, and dependent custom
packages in the Custom Packages.
Note
n To use the PTP PHC services, enable PCI passthrough on PF0 on ESXi server
when the E810 card is configured with multiple PF groups.
n To use the PTP VF services, disable the PCI passthrough on PF0 and enable the
SRIOV on both the PFs. E810 card supports 1 VF as PTP and the other VF serves
as SRIOV VF NICs for network traffic.
a Device Type - You can select to add a PTP device or a NIC device. To use
a physical device, select NIC. To use a virtual device, select PTP from the drop-
down menu.
Note To upgrade the device type from PTP PF to PTP VF, delete the existing
PTP PF device and add the new PTP VF device. Do not change the device type
from NIC to PTP directly in the CSAR file.
b Shared Across NUMA - Select the button to enable or disable sharing of the
devices across NUMA.
c PF Group - Enter the name of the PF group for which you want to add the PCI Pass
Through device.
n Source - To provide input through file, select File from the drop-down menu.
To provide input during network function instantiation, select Input from the
drop-down menu.
Note To select File from the Source menu, you must first upload the required
file in Artifacts folder available under the Resources tab.
n Content - Name of the file. The value is automatically displayed based on the
Source value.
Note
n Before adding the ACC100 Adapter PCI device, ensure the ACC100 Adapter is
enabled in the VMware ESXi server. For details, see Configuring the ESXi Driver
for the Intel vRAN Accelerator ACC100 Adapter.
n You can add the ACC100 Adapter on the workload clusters with kubernetes
version 1.20.5, 1.19.9, or 1.18.17. For workload cluster upgrade, see Upgrade
Management Kubernetes Cluster Version.
a Shared Across NUMA - Select the button to enable or disable sharing of the
devices across NUMA.
Note Based on the Target driver, the system automatically adds the required
Linux in Kernel Type, pciutils and DPDK modules.
d PF Group - Enter the name of the PF group for which you want to add the PCI device.
n Kernel Type - Select the Name and Version from the drop-down menu.
n Kernel Arguments - Click Add to add a new kernel argument. Add the Key and Value
in respective text box.
Note For hugepagesz and default_hugepagez, you can select the value from
the drop-down menu. For other arguments, you can specify the key and value in
respective text box.
n Custom Packages - Click Add to add a new custom kernel package. Add the Name
and Version in the respective text box.
n Files - You can add a file for injection. Click Add, select the file from the drop-down
menu in Content, and provide the file path on the target system where the file will be
uploaded in the Path text box.
Note To view the file in the drop-down menu, you must upload the file in the scripts
folder. You can upload only .JSON, .XML, and .conf files.
n To add the stalld service, select the stalld from the drop-down menu.
n To add the syslog-ng service, select the syslog-ng from the drop-down menu. When
you select syslog-ng, the Add Service Config Files pop-up appears. Select the
required configuration files for syslog-ng service.
n (Optional) Tuned Profiles - Enter the name of the tuned profile. You can add multiple
tuned profiles separated by commas.
Note When you add a tuned profile, system adds the tuned package in the Custom
Packages.
n NUMA Alignments - Click the corresponding button to enable or disable the support for
NUMA alignments.
n Latency Sensitivity - You can set the latency value for high performance profiles. Select
the value from the drop-down menu. You can select both high and low. Default value is
normal.
n I/O MMU Enabled - Click the corresponding button to enable or disable the I/O MMU.
Configuring the ESXi Driver for the Intel vRAN Accelerator ACC100 Adapter
Manual process to add the ESXi driver for Intel vRAN Accelerator ACC100 adapter.
You must enable support for the ACC100 adapter in the VMware ESXi server before you can
configure the ACC100 adapter in VMware Telco Cloud Automation.
Procedure
c Copy the downloaded .VIB file to the ESXi host where the devices are present.
a Switch the ESXi host into the maintenance mode. To switch the ESXi server into
maintenance mode, power down the virtual machines running on the host. You can
also migrate the virtual machines using vMotion. For details on maintenance mode, see
maintenance mode.
b Log in to the ESXi host shell, and run the following command:
# esxcli software vib install -d <full-path-to-vib> --no-sig-check
For example,
After the driver is successfully installed and loaded, check the device description in the output
of the lspci command:
c Find the accelerator manually. You can also search for the term FEC to find the accelerators.
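For example, a hedged check on the ESXi host shell; the exact device description string depends on the installed driver:
# lspci | grep -i acc100
# lspci | grep -i fec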
g Click Save.
n You can download the bbdev-cli tool and the user guide from https://www.intel.com/
content/www/us/en/download/19758.
2 Copy the .zip file to the ESXi host and install it with the esxcli software component apply command. For example, esxcli software component apply -d /tmp/Intel-ibbd-tools_1.0.7-1OEM.700.1.0.15843807_17865363.zip.
[root@plab-ran-esx3:~] /opt/intel/bbdev-cli -l
{
"Devices": [
"{Name:/devices/ifec/dev1, Type:ACC100, Address:0000:d8:00.0}",
"{Name:/devices/ifec/dev0, Type:ACC100, Address:0000:3b:00.0}"
]
}
[root@plab-ran-esx3:~]
The acc100_config_vf_5g.cfg file contains the default configuration for the 5GNR FlexRAN l1app. This configuration is applied by default.
n To apply a new configuration file, use the example.
You can get the device name through the bbdev-cli -l command.
Enhanced Data Path (ENS) provides superior network performance. It targets NFV workloads and uses DPDK capabilities to enhance network performance. To support ENS on VMware Telco Cloud Automation, make the following changes in the Infrastructure Requirements Designer.
Prerequisites
n Install the latest ENS NIC drivers compatible with the NIC model and remove the older drivers. For details, see Enhanced Data Path.
n Ensure that you create a new DVS for ENS workloads and prepare the DVS with NSX-T ENS.
n If you want to use multiple NICs as Uplinks in NSX-T, VMware Telco Cloud Automation recommends setting the NIC Teaming policy to Load Balance Source.
Note
n When you create a DVS for ENS, all connected Portgroups or NSX-T segments start leveraging Enhanced Datapath.
n VMware Telco Cloud Automation does not recommend using the ENS DVS for vMotion and vSAN traffic.
Procedure
1 Do not set isNumaConfigNeeded in the CSAR, as NSX-T automatically aligns the VNIC of the Node Pool with NUMA, PNIC, and ENS L-cores. However, if you set isNumaConfigNeeded = True in the CSAR, VMware Telco Cloud Automation tries to align the Node Pool with NUMA and ENS L-cores.
infra_requirements:
node_components:
latency_sensitivity: high
isNumaConfigNeeded: [true | false] <---- if True, TCA will send only memory pinning
to VMConfig operator if underlying network ENS enabled.
infra_requirements:
node_components:
network:
devices:
- deviceType: vmxnet3 <-- vmxnet3
networkName: F1U
resourceName: vmxnet3res <-- resourceName
targetDriver: [igb_uio | vfio-pci] <-- driver name
additionalProperties:         <-- ENS-related config, i.e., setting multi context for the device
ctxPerDev: [1|2|3]
Note:
i. If ‘targetDriver’ is not provided, then providing interfaceName for the device is
mandatory.
ii. If ctxPerDev is set, map the device to ENS capable network while instantiating CNF.
iii. ctxPerDev property is applied per interface level to the Node pool and can be
seen in VM vmx file as ethernetX.ctxPerDev (where X is the interface number).
A VM-VM affinity rule specifies whether selected individual virtual machines run on the same host or are kept on separate hosts. This type of rule is used to create affinity or anti-affinity between individual virtual machines that you select.
An affinity rule ensures that the specified virtual machines are placed together on the same
host. An anti-affinity rule ensures that the specified virtual machines do not share the host. You
can create an anti-affinity rule to guarantee that certain virtual machines are always on different
physical hosts. This way, not all virtual machines are at risk when one of the hosts encounters an
issue.
Prerequisites
Procedure
3 Click the network function on which you want to create affinity rules and click Edit.
a Add the name of the affinity rule in the Rule Name text box.
b To create affinity among the VDUs, select the VDU from the list.
a Add the name of the anti-affinity rule in the Rule Name text box.
b To create an anti-affinity rule among the VDUs, select the VDU from the list.
Results
Example
VDU 1, VDU 2
Affinity Rule: The deployed VDUs are always kept together on the same ESXi host, even for scaled-out instances.
Anti-Affinity Rule: The deployed VDUs are always kept apart on different ESXi hosts. For scaled-out instances, an anti-affinity rule is created for every permutation and combination.
VDU 1
Affinity Rule: All the scaled VDU instances of VDU 1 are kept together on the same ESXi host.
Anti-Affinity Rule: All the scaled VDU instances of VDU 1 are kept apart on different ESXi hosts, and only one anti-affinity rule is created.
Scaling Policies
The Scaling Policies tab provides an interface to configure scaling aspects and instantiation
levels for the VDU instances in a VNF.
Using Scaling Policies, you can adjust to changing VNF workload demands by increasing or
decreasing the VDU instances. For example, you can scale up the number of VDU instances in a
VNF in anticipation of heavy usage over the weekend.
What is an Aspect?
Aspects are the logical grouping of one or more VDU instances in a VNF. Scaling aspects define the VDU instances to scale in discrete steps. Each scale level of a scaling aspect defines a valid size of the VNF.
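In the network function descriptor, scaling aspects and instantiation levels map to the standard ETSI SOL001 policy types. The following fragment is an illustrative sketch only; the aspect name, level name, and values are examples, not output generated by VMware Telco Cloud Automation.
Scaling policy example
policies:
  - scaling_aspects:
      type: tosca.policies.nfv.ScalingAspects
      properties:
        aspects:
          worker_aspect:                  # example aspect name
            name: worker_aspect
            description: Scales the worker VDU
            max_scale_level: 3            # total number of scale steps
            step_deltas:
              - delta_1
  - instantiation_levels:
      type: tosca.policies.nfv.InstantiationLevels
      properties:
        levels:
          level_1:                        # example instantiation level
            description: Default instantiation level
            scale_info:
              worker_aspect:
                scale_level: 0
        default_level: level_1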
Procedure
2 Design a Virtual Network Function Descriptor. For more information, see Design a Virtual
Network Function Descriptor.
3 In the Network Function Designer page, click the Scaling Policies tab.
4 To add scaling aspects, click Add under Scaling Aspects. The Add Aspect wizard is displayed.
b Max Scale Level - Use the slide bar to select the total number of scale steps to apply for
the aspect.
c Under Available Scaling Steps, select the scaling steps to assign to your aspect. To
create a scaling step, click Create Scaling Step.
d The Scaling Steps table displays details of the scaling steps that are assigned to the
aspect.
5 To add an instantiation level, click Add. The Add Instantiation Level wizard is displayed.
b To make this instantiation level as the default level, select Default Level.
c Assign a scaling aspect to the instantiation level. To add a scaling aspect, click Add
Aspect.
6 To save this scaling policy as a draft and edit it at a later time, click Save As Draft.
What to do next
1 Go to Catalog > Network Function and click the network function on which you want to
update the scaling policy.
2 Click Edit.
3 In the Network Function Designer page, add scaling aspects or add instantiation levels.
5 To create a duplicate of the Network Function that contains the scaling updates, click Save as
New.
Designing Workflows
Starting from release 2.0, VMware Telco Cloud Automation provides a Workflow designer in the
user interface for defining life-cycle management workflows.
The Workflow designer in VMware Telco Cloud Automation is available for Network Functions
(VNFs and CNFs) and Network Services. Using the Workflow designer, you can now create a
workflow, upload an existing workflow specification in JSON format from your local system, or
select a workflow from the Resources folder in VMware Telco Cloud Automation.
You can design workflows for the following life-cycle events, or add a custom workflow. The list
can differ for VNFs, CNFs, and Network Services:
n Instantiate Start
n Instantiate End
n Heal Start
n Heal End
n Scale Start
n Scale End
n Terminate Start
n Terminate End
n Upgrade Start
n Upgrade End
You can also deactivate a life-cycle event if your network function does not support it.
The workflow attributes and parameters are the variables that workflows use to transfer data.
Orchestrator saves a workflow token every time a workflow runs, recording the details of that
specific run of the workflow.
Workflow Parameters
Workflows receive input parameters and generate output parameters when they run.
Input Parameters
Input parameters are read-only variables. Most workflows require a certain set of input
parameters to run. An input parameter is an argument that the workflow processes when it
starts. The user, an application, another workflow, or an action passes input parameters to a
workflow for the workflow to process when it starts.
For example, if a workflow resets a virtual machine, the workflow requires as an input parameter
the name of the virtual machine.
To modify the value supplied by the workflow caller, or to read the information using an input
parameter, copy the input parameter to an attribute.
Output Parameters
Output parameters are write-only variables. A workflow's output parameters represent the result
from the workflow run. Output parameters can change when a workflow or a workflow element
runs.
For example, if a workflow creates a snapshot of a virtual machine, the output parameter for the
workflow is the resulting snapshot.
To read the value of a variable, use an attribute within the workflow. To pass the value of that
attribute to the workflow caller, copy the attribute to an output parameter.
Workflow Attributes and Variables
Use attributes to pass information between the schema elements inside a workflow.
Attributes are read-and-write variables. It is a common design pattern to copy input parameters to attributes at the beginning of a workflow so that you can modify the value, if necessary, within the workflow. Similarly, it is common to copy attributes to output parameters at the end of a workflow so that the workflow caller can read the resulting value.
Workflow Bindings
Bindings populate elements with data from other elements by binding input and output
parameters to workflow attributes.
With parameter bindings, you can explicitly state whether you want each of your workflow
variables to be accessible.
Inward Binding
You can read the value stored by a variable, but you cannot change it.
Outward Binding
You can change the value stored by a variable. That is, you can write out to the variable.
Create a Workflow
You can create a new life-cycle event workflow for your network function or network service
using the Workflow designer.
You can create a workflow when designing a network function or network service descriptor, or
add workflows at a later stage. In this example, we look at designing a workflow when designing
a network function descriptor.
Procedure
1 Follow the steps for designing a network function descriptor. See Designing a Network
Function Descriptor.
3 Under Life Cycle Events, select an event for designing the workflow.
5 From the drop-down menu, you can select a workflow type for each step. The commonly
used workflows are:
6 Inbound variables: You can assign valid default values to inbound variables. You can also map the inbound variable to the variables in the Workflow Interface pane on the right.
a To assign a default value for an inbound variable, click the edit icon against the inbound
variable.
b To add an inbound variable, click the + icon. To delete an inbound variable, click the -
icon.
c You can add valid input values from the Workflow Interface pane on the right and map them to the inbound variables.
7 Outbound variables: You can map an outbound variable to a valid output value from the
Workflow Interface pane on the right.
8 To insert a new step, click the + icon on the left of the step.
9 To change the order of each step, click the step number and select a new step number from
the drop-down menu.
10 To save the workflow as draft, click Save on the top-right section of the page.
What to do next
Prerequisites
Use the Network Function Designer to create a network function descriptor and save the design
as a draft.
Procedure
4 From the table, locate the desired network descriptor draft and click the Edit icon.
5 To select a draft version for editing, from the Draft Versions table, click the Options symbol ⋮
against the draft and select View. You can restore a previous version from here.
Prerequisites
Note The following steps are valid only on macOS and Linux operating systems. On a Windows
operating system, use the relevant commands to edit the CSAR file.
Procedure
1 Download the CSAR file that you want to edit. For more information, see Download a
Network Function Package.
4 Update the descriptor_id field with the new descriptor ID. You can also update the NFD.yaml
with any other changes, as appropriate.
6 You can also add any other supporting files to their respective folders or edit the existing
files.
For example, you can add a script to the Artifacts > scripts folder.
8 Upload the CSAR file to VMware Telco Cloud Automation. For more information, see Upload a Network Function Package.
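Because a CSAR package is a ZIP archive, the edit-and-repackage steps above can also be scripted from a shell. The following commands are an illustrative sketch only; the package, folder, and script names are examples and depend on your own CSAR contents.
# Illustrative only; actual file names depend on your package
unzip my-nf.csar -d my-nf                    # extract the downloaded package
vi my-nf/Definitions/NFD.yaml                # update descriptor_id and other fields
cp my-script.sh my-nf/Artifacts/scripts/     # optionally add supporting files
(cd my-nf && zip -r ../my-nf-edited.csar .)  # repackage before uploading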
Procedure
Results
Network functions from different vendors have their own unique sets of infrastructure requirements. Defining these requirements in the network functions ensures that they are instantiated and deployed in a cluster without you having to log in to their master or worker nodes.
To customize the cluster according to network function requirements, you must add the
requirements in the network function catalog. Go to Catalog > Network Function tab. Click the
network function that requires a customization and select the Infrastructure Requirements tab.
VMware Telco Cloud Automation adds a custom extension called infra_requirements to the TOSCA descriptor. In this extension, you can define the node, Containers as a Service (CaaS), and Platform as a Service (PaaS) components:
1 Under node_components, you can define the requirements for the node. These requirements include the kernel type, kernel version, kernel arguments, required packages, and tuned configuration. You can also define networks to be configured for the worker nodes. All the changes are applied on the worker nodes of the node pool.
2 Under caas_components, define the CaaS components, such as CNIs, to be installed on each worker node. At present, only SR-IOV is supported.
After you define the components of infra_requirements in the CNF catalog, the nodepool is
customized according to the differences detected between the CNF catalog and the actual
configuration present in the nodepool during instantiation.
Node Customization
You can customize nodepools of the clusters using network function catalog defined in a TOSCA
(Topology and Orchestration Specification for Cloud Applications) file.
VMware Telco Cloud Automation uses Network Function TOSCA (Topology and Orchestration
Specification for Cloud Applications) extensions to determine the requirements for different VIMs.
n Latency sensitivity
n Tuned profile
n Kernel Update
n Kernel Modules
n GRUB config (all configurations used for the CPU isolation, hugepages config.)
Note The maximum CPU or memory resource allocated to worker nodes within node pools
cannot exceed the CPU or memory resource available at the underlying ESXi host level.
TOSCA Components
You can modify the node components and CaaS components in TOSCA for different Kubernetes
VIMs.
To support various network functions, the Worker nodes may require a customization in the
TOSCA. These customizations include the kernel-related changes, custom packages installations,
network adapter, SRIOV, DPDK configurations, and CPU Pinning of the Worker nodes on which
you deploy the network functions.
Node Components
n Kernel: The Kernel definition uses multiple arguments that require a customization.
n kernel_type: Kernel type for the worker nodes. The kernel type depends on the network function workload requirement. The required Linux version is downloaded from the TDNF repository (VMware Photon Linux) during customization.
kernel type
infra_requirements:
node_components:
kernel:
kernel_type:
name: linux-rt
version: 4.19.132-1.ph3
n kernel_args: Kernel boot parameters for tuning values that you can adjust when the
system is running. These parameters configure the behavior of the kernel such as
isolating CPUs. These parameters are free form strings. They are defined as 'key' → name
of the parameter and optionally 'value' → if any arguments are provided.
kernel_args
infra_requirements:
node_components:
kernel:
kernel_args:
- key: nosoftlockup
- key: noswap
- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1
- key: mce
value: ignore_ce
- key: fsck.mode
value: force
Huge Pages
infra_requirements:
node_components:
kernel:
kernel_args:
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 17
Note:
i. This order should be maintained.
ii. Nodes will be restarted to set these values.
iii. The supported hugepagesz values are 2M and 1G.
isolcpus
infra_requirements:
node_components:
kernel:
kernel_args:
- key: isolcpus
value: 2-{{tca.node.vmNumCPUs}}
Note: TCA replaces {{tca.node.vmNumCPUs}} with the number of vCPUs configured on the worker node.
n kernel_modules: To install any kernel modules on Worker nodes. For example, dpdk, sctp,
and vrf.
Note When configuring dpdk, ensure that the corresponding pciutils package is
specified under custom_packages.
dpdk
infra_requirements:
node_components:
kernel:
kernel_modules:
- name: dpdk
version: 19.11.1
For details on supported DPDK versions, see Supported DPDK and Kernel Versions.
n custom_packages: Custom packages include lxcfs, tuned, pciutils, and linuxptp. The required packages are downloaded from the TDNF repository (VMware Photon Linux) during customization.
custom_packages
infra_requirements:
node_components:
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-3.ph3
- name: linuxptp
version: 2.0-1.ph3
Note: Make sure these packages are available in the VMware TDNF repository.
Note While configuring tuned, ensure that the corresponding tuned package is specified under custom_packages.
tuned
infra_requirements:
node_components:
additional_config:
- name: tuned # <--- for setting tuned
value: '[{"name":"custom-profile"}]' # <--- list of profile names to activate.
infra_requirements:
node_components:
file_injection:
- source: file
content: ../Artifacts/scripts/custom-tuned-profile.conf
#<-- File path location which is embedded in CSAR
path: /etc/tuned/custom-profile/tuned.conf #<-- Target location of the
configuration file. Location should align with name of the profile.
- source: file
n isNumaConfigNeeded: This feature tries to find a host and a NUMA node that can fit the VM with the given requirements and assigns the VM to it. It is useful for high-performance profile network functions, such as the DU, which require high throughput. It sets the CPU and memory reservations to the maximum on the worker node and sets the affinity of the worker node CPUs to the ESXi CPUs.
isNumaConfigNeeded
infra_requirements:
node_components:
isNumaConfigNeeded: [true | false]
latency_sensitivity
infra_requirements:
node_components:
latency_sensitivity:
[high | low]
n ptp: It is used for customizing PTP services. You can use the configuration files for ptp4l and
phc2sys services customization.
Note
n You must add PTP4L_CONFIG_FILE in User Input section of the catalog.
n Destination path (worker node path) is abstracted out from these services. VMware
Telco Cloud Automation copies content of phc2sys and ptp4l configuration files to /etc/
sysconfig/phc2sys and /etc/ptp4l.conf respectively on the worker node.
PTP
infra_requirements:
node_components:
ptp:
phc2sys:
source: file # <-- Content will come from file
embedded in CSAR
content: ../Artifacts/scripts/phc2sys # <-- Source path location relative to
Definitions folder
ptp4l:
source: input # <-- Content will come from user input
while NF instantiation
content: PTP4L_CONFIG_FILE # <-- Variable name to hold user input
file content while NF instantiation
Note While specifying passthrough device configurations, ensure that the corresponding
linuxptp package is specified under custom_packages.
passthrough_devices
infra_requirements:
node_components:
passthrough_devices:
- device_type: NIC
pf_group: ptp
isSharedAcrossNuma: [true|false] # <-- This sets the passthrough device to be sharable across NUMAs. If not present, defaults to false
Note:
1. For now the values are hardcoded
2. If 'isSharedAcrossNuma' is set to true, make sure to set
'infra_requirements.node_components.isNumaConfigNeeded' to true.
n network: Creates network adapters on the nodes. For SR-IOV, the given resource name becomes an allocatable resource on the node.
Network
infra_requirements:
node_components:
network:
devices:
- deviceType: # <-- Network Adapter type [sriov]
networkName: # <-- Input label for User Input to provide Network while
NF Instantiation. Refer below section how to define these input
resourceName: # <-- This is the label the device will be exposed in K8s
node.
dpdkBinding:          # <-- The driver this device should use. If not mentioned, the default OS driver is used.
count: 3 # <- Number of adapters required.
interfaceName: # <- Sets the interface name inside GUEST OS for this
adapter. Valid only if the dpdkBinding is not "vfio-pci" and "igb_uio"
isSharedAcrossNuma: [true|false] # <-- This sets the network device to
be sharable across NUMAs. If not present, defaults to false
additionalProperties:
mtu: # <-- Optional Input label for user input to provide
Network MTU while NF Instantiation. Refer below section how to define these input
Note:
1. for 'networkName' refer below section
2. dpdkBinding
- igb_uio
- vfio-pci
3. Make sure to have 'pciutils' custom packages and 'dpdk' kernel modules.
4. If 'isSharedAcrossNuma' is set to true, make sure to set
'infra_requirements.node_components.isNumaConfigNeeded' to true.
5. MTU is not allowed if a dpdk driver is set on the interface. TCA throws a validation error during NF catalog onboarding.
6. If the MTU value is not provided during NF instantiation, the default value 1500 is set on the interface.
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
F1U: # <--- label that is provided
infra_requirements.node_components.network.devices.networkName
required: true
propertyName: F1U # <--- label that is provided
infra_requirements.node_components.network.devices.networkName
description: ''
default: ''
type: string
format: network # <- to show the network drop down
PTP4L_CONFIG_FILE: # <-- label that is provided in
infra_requirements.node_components.ptp.ptp4l.content
required: true
propertyName: PTP4L_CONFIG_FILE # <-- label that is provided in
infra_requirements.node_components.ptp.ptp4l.content
description: ''
default: ''
type: string
format: file # <-- to show drop down to select file
helm-abc:
type: tosca.nodes.nfv.Vdu.Compute.Helm.helm-abc
properties:
:
configurable_properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn
:
:
F1U: '' # <-- Same label provided above
PTP4L_CONFIG_FILE: '' # <-- Same label provided above
n Services: Defines the systemd service configurations. You can define the stalld and syslog-ng services.
stalld
infra_requirements:
node_components:
services:
- name: stalld #<------ Only stalld, syslog-ng are supported
Note
n Ensure that you specify the stalld custom package in custom_packages.
n Use the file injection method to upload the modified configuration file for the stalld service to /etc/sysconfig/stalld.
syslog-ng
infra_requirements:
node_components:
services:
- name: syslog-ng
serviceConfigFiles:
- name: /etc/syslog-ng/conf.d/serv.conf #<------ Config file of the syslog-ng systemd service. NodeConfig monitors this file and restarts syslog-ng when its content changes
Note Use the file injection method to upload the required service configuration files for the syslog-ng service and provide the path of the uploaded files in serviceConfigFiles.
caas_components
You can configure CaaS components, such as CNI, CSI, and Helm, for Kubernetes. You can install CNI plugins on worker nodes during CNF instantiation. Provide CNIs such as SR-IOV in the Cluster Configuration in the CaaS Infrastructure.
infra_requirements:
caas_components:
- name: sriov
type: cni
The root node tosca.nodes.nfv.VMware.VNF defines the VNF definition, including the CaaS and NodeConfig related requirements, in the TOSCA.
The infra_requirements property at the root node defines these infrastructure requirements for
the Network Function.
The following sample shows a customized TOSCA descriptor with the infrastructure requirements definition.
TOSCA Sample
tosca_definitions_version: tosca_simple_yaml_1_2
description: Network Function description
imports:
- etsi_nfv_sol001_common_2_7_1_types.yaml
- etsi_nfv_sol001_vnfd_2_7_1_types.yaml
- vmware_nfv_custom_vnfd_2_7_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.testnf:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:
type: tosca.interfaces.nfv.VMware.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.testnf:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.testnf
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.testnf:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.testnf
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.testnf:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
values:
required: false
propertyName: values
description: ''
default: ''
type: string
format: file
vlan3:
required: true
propertyName: vlan3
description: Network interface providing PF config for sriov with ipam
default: ''
type: string
format: network
vlan4:
required: true
propertyName: vlan4
description: Network interface providing PF config for sriov with igb_uio
default: ''
type: string
format: network
vlan5:
required: true
propertyName: vlan5
description: Network interface providing PF config for sriov with vfio-pci
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for testnf
required: true
default: testnf
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: The NAD Config file
required: true
default: ''
format: file
NODE_POOL_FOR_CAT:
name: NODE_POOL_FOR_CAT
type: string
description: Node pool to enable CAT (Cache Allocation Technology), leave it empty if
CAT is not required.
required: false
default: ''
format: string
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
copyNADResult:
name: copyNADResult
type: string
description: ''
nadCreateResult:
name: nadCreateResult
type: string
description: ''
tosca.datatypes.nfv.VMware.Interface.InstantiateEndInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for testnf
required: true
format: string
NODE_POOL_FOR_CAT:
name: NODE_POOL_FOR_CAT
type: string
description: Node pool to verify CAT (Cache Allocation Technology), leave it empty if
CAT is not enabled.
required: false
default: ''
format: string
tosca.datatypes.nfv.VMware.Interface.InstantiateEndOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCheckResult:
name: nsCheckResult
type: string
description: ''
nadCheckResult:
name: nadCheckResult
type: string
description: ''
copyResult:
name: copyResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.testnf
node_templates:
testnf:
properties:
descriptor_id: vnfd_4501ecbe-4414-11eb-bf08-9b885
provider: VMware
vendor: VMware
product_name: testnf
version: 2.0.0
id: testnf
software_version: 2.0.0
descriptor_version: 2.0.0
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: true
kernel:
kernel_type:
name: linux-rt
version: 4.19.198-5.ph3
kernel_args:
- key: nosoftlockup
- key: noswap
- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1
- key: mce
value: ignore_ce
- key: fsck.mode
value: force
- key: fsck.repair
value: yes
- key: nowatchdog
- key: cpuidle.off
value: 1
- key: nmi_watchdog
value: 0
- key: audit
value: 0
- key: processor.max_cstate
value: 1
- key: intel_pstate
value: disable
- key: isolcpus
value: 4-{{tca.node.vmNumCPUs}}
- key: skew_tick
value: 1
- key: irqaffinity
value: 0-3
- key: selinux
value: 0
- key: enforcing
value: 0
- key: nohz
value: 'on'
- key: nohz_full
value: 4-{{tca.node.vmNumCPUs}}
- key: rcu_nocb_poll
value: 1
- key: rcu_nocbs
value: 4-{{tca.node.vmNumCPUs}}
- key: idle
value: poll
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 8
- key: intel_iommu
value: 'on'
- key: iommu
value: pt
- key: clock
value: tsc
- key: clocksource
value: tsc
- key: tsc
value: reliable
kernel_modules:
- name: dpdk
version: '20.11'
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-4.ph3
- name: linuxptp
version: 3.1-1.ph3
- name: stalld
version: 1.3.0-8.ph3
file_injection:
- source: file
content: ../Artifacts/scripts/realtime-variables.conf
path: /etc/tuned/realtime-variables.conf
- source: file
content: ../Artifacts/scripts/testnf-stalld.conf
path: /etc/sysconfig/stalld
additional_config:
- name: tuned
value: '[{"name":"realtime"}]'
network:
devices:
- deviceType: sriov
networkName: vlan3
resourceName: sriovpass
- deviceType: sriov
networkName: vlan4
resourceName: sriovigbuio
dpdkBinding: igb_uio
count: 2
- deviceType: sriov
networkName: vlan5
resourceName: sriovvfio
dpdkBinding: vfio-pci
caas_components:
- name: sriov
type: cni
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/PreInstantiation_WF.json
description: Configure testnf using a configmap
inputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME:
PASSWORD:
IP:
K8S_NAMESPACE: testnf
NAD_FILE: ''
NODE_POOL_FOR_CAT: ''
outputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
copyNADResult: ''
nadCreateResult: ''
instantiate_end:
implementation: ../Artifacts/workflows/PostInstantiation_WF.json
description: Configure testnf using a configmap
inputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateEndInputParameters
USERNAME:
PASSWORD:
IP:
K8S_NAMESPACE: ''
NODE_POOL_FOR_CAT: ''
outputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateEndOutputParameters
nsCheckResult: ''
nadCheckResult: ''
type: tosca.nodes.nfv.VMware.CNF.testnf
testnf1:
type: tosca.nodes.nfv.Vdu.Compute.Helm.testnf
properties:
name: testnf
description: Chart for testnf
chartName: testnf-du
chartVersion: 2.0.0
helmVersion: v3
configurable_properties:
additional_vnfc_configurable_properties:
values: ''
vlan3: ''
vlan4: ''
vlan5: ''
interface_types:
tosca.interfaces.nfv.VMware.Vnflcm:
derived_from: tosca.interfaces.nfv.Vnflcm
instantiate_start:
description: interface description
instantiate_end:
description: interface description
The following Photon OS kernel versions are supported with compatible DPDK versions.
For upgrading kernel versions when running Telco Cloud Automation, see How to upgrade
Photon Kernel when running Telco Cloud Automation.
For adding new versions of kernels that are later than the supported versions, see Enabling
Additional Photon-RT Kernel Versions in Telco Cloud Automation.
n Linux-4.19.104-3.ph3
n 4.19.98-rt40-4.ph3-rt
n Linux-rt-4.19.98-rt40-4.ph3
n Linux-4.19.97-2.ph3
n Linux-4.19.124-1.ph3
n Linux-rt-4.19.132-1.ph3
n Linux-4.19.132-1.ph3
n Linux-4.19.115-3.ph3
n Linux-4.19.145-2.ph3
n Linux-4.19.154-1.ph3
n Linux-rt-4.19.154-1.ph3
n Linux-4.19.154-11.ph3
n linux-4.19.174-5.ph3
n linux-rt-4.19.174-4.ph3
n linux-4.19.177-2.ph3
n linux-rt-4.19.177-2.ph3
n linux-4.19.189-5.ph3
n linux-rt-4.19.177-4.ph3
n linux-4.19.177-4.ph3
n linux-rt-4.19.177-5.ph3
n linux-rt-4.19.177-7.ph3
n linux-4.19.191-2.ph3
n linux-rt-4.19.191-2.ph3
n linux-4.19.198-4.ph3
n linux-rt-4.19.198-4.ph3
n linux-rt-4.19.198-5.ph3
n linux-rt-4.19.198-6.ph3
n linux-rt-4.19.198-9.ph3
n linux-rt-4.19.198-10.ph3
n linux-rt-4.19.198-11.ph3
n linux-rt-4.19.245-2.ph3
n linux-rt-4.19.232-2.ph3
n linux-rt-4.19.198-13.ph3
n linux-rt-4.19.247-6.ph3
n linux-rt-4.19.198-14.ph3
n linux-rt-4.19.198-15.ph3
n linux-rt-4.19.256-2.ph3
n linux-rt-4.19.198-18.ph3
n linux-rt-4.19.198-21.ph3
n linux-rt-4.19.198-22.ph3
n linux-rt-4.19.264-7.ph3
n linux-rt-4.19.264-6.ph3
n linux-4.19.264-6.ph3
n linux-rt-4.19.272-4.ph3
n linux-4.19.272-4.ph3
Note VMware Telco Cloud Automation 2.0 supports linux-rt-4.19.198-5 and above only on
a new workload cluster or an upgraded workload cluster.
Example 1
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Network Function description
imports:
- vmware_etsi_nfv_sol001_vnfd_2_5_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.cu-up-1.8:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:
type: tosca.interfaces.nfv.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.cuup-helm-chart:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.cuup-helm-chart
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.cuup-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
values:
required: true
propertyName: values
description: Overrides for chart values
default: ''
type: string
format: file
BHU:
required: true
propertyName: BHU
description: ''
default: ''
type: string
format: network
F1U:
required: true
propertyName: F1U
description: ''
default: ''
type: string
format: network
E1C:
required: true
propertyName: E1C
description: ''
default: ''
type: string
format: network
MGMT:
required: true
propertyName: MGMT
description: ''
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: capv
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default:
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default:
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for CU-UP
required: true
default:
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: NAD Config File
required: true
default: ''
format: file
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.cu-up-1.8
node_templates:
cu-up-1.8:
node_type: tosca.nodes.nfv.VMware.CNF.cu-up-1.8
properties:
descriptor_id: nfd_4e7599b5-9a44-4000-850c-7ec65d2f2423
provider: Vendor01
product_name: CU-UP
version: '1.0'
id: id
software_version: '1.3.4761'
descriptor_version: '1.8'
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: false
kernel:
kernel_type:
name: linux
version: 4.19.132-1.ph3
kernel_modules:
- name: dpdk
version: 19.11.1
kernel_args:
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 10
- key: transparent_hugepage
value: never
- key: intel_idle.max_cstate
value: 1
- key: iommu
value: pt
- key: intel_iommu
value: 'on'
- key: tsc
value: reliable
- key: idle
value: poll
- key: intel_pstate
value: disable
- key: rcu_nocb_poll
value: 1
- key: clocksource
value: tsc
- key: pcie_aspm.policy
value: performance
- key: skew_tick
value: 1
- key: isolcpus
value: 11-17
- key: nosoftlockup
- key: nohz
value: 'on'
- key: nohz_full
value: 11-17
- key: rcu_nocbs
value: 11-17
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-1.ph3
network:
devices:
- deviceType: sriov
networkName: F1U
resourceName: ani_netdevice_cuup_f1u
dpdkBinding: igb_uio
- deviceType: sriov
networkName: BHU
resourceName: ani_netdevice_cuup_bhu
dpdkBinding: igb_uio
- deviceType: sriov
networkName: E1C
resourceName: ani_netdevice_cuup_e1c
- deviceType: sriov
networkName: MGMT
resourceName: ani_netdevice_cuup_mgmt
count: 5
additional_config:
- name: tuned
value: '[{"name":"vendor01-cu"}]'
file_injection:
- source: file
content: ../Artifacts/scripts/tuned.conf
path: /etc/tuned/cu/tuned.conf
- source: file
content: ../Artifacts/scripts/cpu-partitioning-variables.conf
path: /etc/tuned/cpu-partitioning-variables.conf
caas_components:
- name: sriov
type: cni
description: Network Function description
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/CUUP_PreInstantiation_Steps.json
description: Configure Vendor01 CU-UP
inputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME: capv
PASSWORD:
IP:
K8S_NAMESPACE:
NAD_FILE: ''
outputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
cuup-helm-chart:
type: tosca.nodes.nfv.Vdu.Compute.Helm.cuup-helm-chart
properties:
name: cuup-helm-chart
description: cu-up
chartName: cuup-helm-chart
chartVersion: 1.3.4760
helmVersion: v3
id: cuup-helm-chart
configurable_properties:
additional_vnfc_configurable_properties:
type: >-
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart
values: ''
BHU: ''
F1U: ''
E1C: ''
MGMT: ''
policies:
- policy_scale:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: scale
interface_type: operation
isEnabled: true
- policy_workflow:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: workflow
interface_type: operation
isEnabled: true
- policy_reconfigure:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: reconfigure
interface_type: operation
isEnabled: true
- policy_update:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: update
interface_type: operation
isEnabled: true
- policy_upgrade:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade
interface_type: operation
isEnabled: true
- policy_upgrade_package:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade_package
interface_type: operation
isEnabled: true
- policy_instantiate_start:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: instantiate_start
interface_type: workflow
isEnabled: true
Example 2
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Network Function description
imports:
- vmware_etsi_nfv_sol001_vnfd_2_5_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.du-1.8:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:
type: tosca.interfaces.nfv.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.du-helm-chart:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.du-helm-chart
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.du-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
Input Yaml:
required: true
propertyName: Input Yaml
description: ''
default: ''
type: string
format: file
F1U:
required: true
propertyName: F1U
description: ''
default: ''
type: string
format: network
F1C:
required: true
propertyName: F1C
description: ''
default: ''
type: string
format: network
MGMT:
required: true
propertyName: MGMT
description: ''
default: ''
type: string
format: network
FH:
required: true
propertyName: FH
description: ''
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: The NAD Config file
required: true
default: ''
format: file
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for DU
required: true
default: ''
format: string
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
copyNADResult:
name: copyNADResult
type: string
description: ''
nadCreateResult:
name: nadCreateResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.du-1.8
node_templates:
du-1.8:
node_type: tosca.nodes.nfv.VMware.CNF.du-1.8
properties:
descriptor_id: nfd_4e7599b5-9a44-4000-850c-7ec65d2f2422
provider: Vendor01
product_name: DU
version: '1.0'
id: id
software_version: '1.3.4761'
descriptor_version: '1.8'
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: true
kernel:
kernel_type:
name: linux-rt
version: 4.19.132-1.ph3
kernel_modules:
- name: dpdk
version: 19.11.1
kernel_args:
- key: nosoftlockup
- key: noswap
- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1
- key: mce
value: ignore_ce
- key: fsck.mode
value: force
- key: fsck.repair
value: yes
- key: nowatchdog
- key: cpuidle.off
value: 1
- key: nmi_watchdog
value: 0
- key: audit
value: 0
- key: processor.max_cstate
value: 1
- key: intel_pstate
value: disable
- key: isolcpus
value: 8-{{tca.node.vmNumCPUs}}
- key: skew_tick
value: 1
- key: irqaffinity
value: 0-7
- key: selinux
value: 0
- key: enforcing
value: 0
- key: nohz
value: 'on'
- key: nohz_full
value: 8-{{tca.node.vmNumCPUs}}
- key: rcu_nocb_poll
value: 1
- key: rcu_nocbs
value: 8-{{tca.node.vmNumCPUs}}
- key: idle
value: poll
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 17
- key: intel_iommu
value: 'on'
- key: iommu
value: pt
- key: kthreads_cpu
value: 0-7
- key: clock
value: tsc
- key: clocksource
value: tsc
- key: tsc
value: reliable
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-3.ph3
- name: linuxptp
version: 2.0-1.ph3
additional_config:
- name: tuned
value: '[{"name":"vendor01-du"}]'
file_injection:
- source: file
content: ../Artifacts/scripts/tuned.conf
path: /etc/tuned/du/tuned.conf
- source: file
content: ../Artifacts/scripts/cpu-partitioning-variables.conf
path: /etc/tuned/cpu-partitioning-variables.conf
- source: file
content: ../Artifacts/scripts/realtime-variables.conf
path: /etc/tuned/realtime-variables.conf
network:
devices:
- deviceType: sriov
networkName: F1U
resourceName: ani_netdevice_du_f1u
dpdkBinding: igb_uio
- deviceType: sriov
networkName: F1C
resourceName: ani_netdevice_du_f1c
- deviceType: sriov
networkName: FH
resourceName: ani_netdevice_du_fh
dpdkBinding: vfio-pci
- deviceType: sriov
networkName: MGMT
resourceName: ani_netdevice_du_mgmt
count: 6
passthrough_devices:
- device_type: NIC
pf_group: ptp
caas_components:
- name: sriov
type: cni
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/DU-Preinstantion-WF.json
description: Configure DU using a configmap
inputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME: ''
PASSWORD: ''
IP: ''
NAD_FILE: ''
K8S_NAMESPACE: ''
outputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
copyNADResult: ''
nadCreateResult: ''
du-helm-chart:
type: tosca.nodes.nfv.Vdu.Compute.Helm.du-helm-chart
properties:
name: du-helm-chart
description: Chart for DU
chartName: du-helm-chart
chartVersion: 1.3.4761
helmVersion: v3
id: du-helm-chart-1.0
configurable_properties:
additional_vnfc_configurable_properties:
type: >-
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart
Input Yaml: ''
F1U: 'cellsite-F1U'
F1C: 'cellsite-F1C'
MGMT: 'cellsite-mgmt'
FH: 'cellsite-FH'
policies:
- policy_scale:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: scale
interface_type: operation
isEnabled: true
- policy_workflow:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: workflow
interface_type: operation
isEnabled: true
- policy_reconfigure:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: reconfigure
interface_type: operation
isEnabled: true
- policy_update:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: update
interface_type: operation
isEnabled: true
- policy_upgrade:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade
interface_type: operation
isEnabled: true
- policy_upgrade_package:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade_package
interface_type: operation
isEnabled: true
- policy_instantiate_start:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: instantiate_start
interface_type: workflow
isEnabled: true
Procedure
4 Select a location on your local drive and save the CSAR package.
When you edit and update a network function package, the CSAR is upgraded to comply with the latest SOL001 standards.
Procedure
n Click the desired Network Function catalog and select the General Properties tab.
b Click Edit.
4 To save the changes and work on the general properties later, click Save.
Results
Procedure
n Click the desired Network Function catalog and select the Topology tab.
4 For more information about designing Network Function descriptors, see Designing a
Network Function Descriptor.
5 To save the changes and work on the topology later, click Save.
Results
Procedure
n Click the desired Network Function catalog and select the Infrastructure Requirements
tab.
a Click Edit.
4 For more information about using the Infrastructure Requirements Designer, see
Infrastructure Requirements Designer.
5 To save the changes and work on the infrastructure requirements later, click Save.
Results
Procedure
n Click the desired Network Function catalog and select the Scaling Policies tab.
a Click Edit.
4 For more information about using the scaling policies, see Scaling Policies.
5 To save the changes and work on the scaling policies later, click Save.
Results
Procedure
n Click the desired Network Function catalog and select the Rules tab.
n Click the Options menu (⋮) against the Network Function, click Edit, and select the Rules
tab.
4 For information about adding Affinity rules, see Create Affinity Rules.
5 To save the changes and work on the rules later, click Save.
Results
Edit Workflows
Edit the life cycle event workflows of your Network Function.
Procedure
n Click the desired Network Function catalog and select the Workflows tab.
a Click Edit.
5 To save the changes and work on the workflows later, click Save.
Results
Procedure
n Click the desired Network Function catalog and select the Resources tab.
b Click Edit.
4 To save the changes and work on the source files later, click Save.
Results
If you have configured VMware Integrated OpenStack as your VIM, you can define certain EPA
attributes for increasing the performance capabilities of your VNFs. You can provide attribute
values that are higher than the default value.
n CPU Pinning
n SR-IOV
VMware Integrated OpenStack supports NUMA aware placement on the underlying vSphere
platform. This feature provides low latency and high throughput to Virtual Network Functions
(VNFs) that run on telecommunications environments. To achieve low latency and high
throughput, it is important that vCPUs, memory, and physical NICs that are used for VM traffic
are aligned on the same NUMA node.
You can enable the EPA capabilities on a VDU using the Network Function Descriptor on
VMware Telco Cloud Automation. For more information, see Design a Virtual Network Function
Descriptor.
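At the descriptor level, these EPA settings are expressed through the VDU's virtual compute definition. The following fragment is a minimal sketch that follows the ETSI SOL001 data model; the VDU name and the values are illustrative only, and the Network Function Designer may expose these settings through its UI rather than requiring you to author them by hand.
EPA example
vdu_example:                                     # hypothetical VDU name
  type: tosca.nodes.nfv.Vdu.Compute
  capabilities:
    virtual_compute:
      properties:
        virtual_cpu:
          num_virtual_cpu: 8
          virtual_cpu_pinning:
            virtual_cpu_pinning_policy: static   # CPU pinning for predictable latency
        virtual_memory:
          virtual_mem_size: 16 GiB
          numa_enabled: true                     # request NUMA-aware placement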
Add the Cloud-init script and the key by editing the NFD.yaml file. Perform the following steps:
Procedure
3 Select the desired Network Function, click the ⋮ menu, and click Edit.
4 Click the Resources tab and click Edit (pencil icon) against NFD.yaml.
5 Under the VDU properties, update the values of key_name and user-data under the boot_data
property.
size: 4 GiB
boot_data:
content_or_file_data:
data:
key_name: xkey
user-data: |
#!/bin/bash
#this is a test
touch /tmp/abc.log
Users are assigned roles and each role has specific permissions. With role-based access control,
you can restrict access to authorized users that have the required permissions to perform
operations on the CNF.
For example, a user having the Network Function Deployer role can view all CNFs but can
perform life cycle management operations only on permitted CNFs.
Prerequisites
1 Ensure that you have installed an external SSH client on your local system.
2 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.
Procedure
4 Use kubectl with the downloaded kubeconfig.yaml file for establishing a remote
connection with the CNF.
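For example, after downloading the kubeconfig.yaml file, a typical kubectl invocation looks like the following; the namespace name is illustrative only:
# List the CNF pods using the downloaded kubeconfig file
kubectl --kubeconfig ./kubeconfig.yaml get pods -n my-cnf-namespace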
Prerequisites
Ensure that you have installed an external SSH client on your local system.
Procedure
VMware Telco Cloud Automation generates a one-time token, user name, and password.
4 Use these login credentials to access the CNF and perform operations based on your user
privileges.
Procedure
Results
A terminal opens and VMware Telco Cloud Automation connects with the CNF. You can now
perform operations based on your user privileges.
Overriding Tags
You can override tags when objects are not compatible with each other. For example, if you have
a cloud with a CNF tag and you want to instantiate a network function catalog with the VNF
tag, you can override the tag. On the Select Cloud pop-up window, expand Advanced Filters,
deselect the CNF tag, and click Apply.
Note When you override a tag, you are explicitly bypassing the system validations and verifying
the success yourself.
Prerequisites
n Upload all required images and templates to your vCenter Server instance.
n To use a vApp template, upload the required vApp template to its corresponding catalog in
VMware Cloud Director.
Procedure
n Select Cloud - Select a cloud from your network on which to instantiate the network
function.
Note You can select the node pool only if the network function instantiation requires
infrastructure customization and the required CSAR file is already included.
2 On the Select Node Pool page, select the node pool and click Next.
3 On the Customization Required page, review the node configuration. If you do not require node customization, then under Advanced Settings, select Skip Node Customization.
For a VMware Cloud Director based cloud, you can use either a vApp template or a template
from the vSphere Client. For a vSphere based cloud, you can only select a vSphere Client
template.
n Select Compute Profile - Select a compute profile from the drop-down menu. For
allocating compute profiles for each VDC, click Advanced Configuration.
n Select Storage Profile (Optional) - Select a specific storage profile from the list of storage
profiles that are defined in the compute profile.
n Prefix (Optional) - Enter a prefix. All entities that are created for this VNF are prefixed
with this text. Prefixes help in personalizing and identifying the entities of a VNF.
n Instantiation Level - Select the level of instances to create. The default level is 0.
n Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.
n Templates - You can select the templates for the network function instantiation from the
following options:
n vApp - To use vApp templates, select this option and select the appropriate catalog
from VMware Cloud Director in Select Catalog.
n VNF - To use a single vApp template, select this option and select the appropriate
catalog from VMware Cloud Director in Select Catalog.
n vCenter - To use the existing VM template available in vCenter, select this option.
n Select Catalog - The option appears when you select vApp or VNF in Templates. You can
use this option to select the appropriate catalog from VMware Cloud Director.
n Grant Validation - When you deploy a VNF, Grant validates whether the required images
are available on the target system. It also validates whether the required resources such
as CPU, memory, and storage capacity are available for a successful deployment. To
configure Grant, go to Advanced Settings > Grant Validation and select one of the
options:
n Enable: Run validation and feasibility checks for the target cloud. Fail fast if the
validation fails.
n Enable and Ignore: Run validation and feasibility checks for the target Kubernetes
cluster. Ignore failures.
n Disable: Do not run validation or feasibility checks for the target cloud.
Note
1 When selecting VNF as the template, Grant Validation fails if any of the following conditions is true:
n The number of VMs inside the vApp template and the number of VDUs inside the VNF do not match.
n The names of the VMs inside the vApp template and the names of the VDUs inside the VNF do not match.
3 vApp networks in the vApp template are retained if the vApp network name matches
with the virtual internal network name defined in the VNF.
4 If there is no match for the vApp network name, the vApp network is deleted.
n Auto Rollback - The Auto Rollback option controls whether the Helm release and Kubernetes resources are retained when a failure occurs. To configure Auto Rollback, go to Advanced Settings > Auto Rollback and select one of the options:
n Enable: During a failure, do not retain the Helm release and Kubernetes resources.
n Disable: During a failure, retain the Helm release and Kubernetes resources for debugging.
5 Click Next.
6 In the Network Function Properties tab, the Connection Point Network Mapping table lists
the details of all the VDUs and connection points that are available:
a To map a network to the VDU, click the Options (…) button against the VDU and select
one of the following options:
n Auto Create Network (For internal connection points only): By default, VMware Telco
Cloud Automation creates an internal network.
You can provide the mapping between connection points and an existing network.
VMware Telco Cloud Automation creates and manages the network.
n Refer From Workflow: This option is available only for pre-instantiated workflows. Use the Refer From Workflow option to refer to a network that is not created or managed through VMware Telco Cloud Automation. It uses the network details obtained from the pre-instantiated workflow to create the VM. For details on external network referencing, see External Network Referencing.
n Map Network to All Connection Points: Map the network to all the external
connection points.
b Click OK.
7 Click Next.
n The required OVF properties for each VDU within the VNF. Depending on the
instantiation level that you have selected, there can be multiple instances deployed for
each VDU. Ensure that you enter the correct information for each VDU.
n Any pre-workflows or post-workflows that are defined as a part of the Network Function.
10 Click Instantiate.
Results
VMware Telco Cloud Automation creates the virtual machines and networks required by your
network function on the cloud that you specified. To view a list of all instantiated functions,
select Inventory > Network Function. To track and monitor the progress of the instantiation
process, click the Expand icon on the network function and navigate further. When Instantiated
is displayed in the State column for a network function, it indicates that the instantiation process
is completed successfully and the function is ready to use.
To view more details about an instantiated VNF, go to Inventory > Network Function and click
the VNF. The General Info tab displays all the details about the instantiated VNF.
If you no longer want to use an instantiated network function, click the Options (three dots) icon
and select Terminate. Then select the network function and click Delete.
Prerequisites
n Upload all required images and templates to your vCenter Server instance.
Note
n Ensure that all Harbor repository URLs contain the appropriate port numbers such as 80, 443,
8080, and so on.
n Ensure that all the image repository URLs within the values.yaml file contain the appropriate
Harbor port numbers.
Procedure
n Select Cloud - Select a cloud from your network on which to instantiate the network
function. If you have created the Kubernetes cluster instance using VMware Telco Cloud
Automation, select the node pool.
Note You can select the node pool only if the network function instantiation requires
infrastructure customization and the required CSAR file is already included.
2 On the Select Node Pool page, select the node pool and click Next.
3 On the Customization Required page, review the node configuration. If you do not require
node customization, then in Advanced Settings, select Skip Node Customization.
n Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.
n Grant Validation - When you deploy a CNF, Grant validates whether the required images
are available on the target system. It also validates whether the required resources such
as CPU, memory, and storage capacity are available for a successful deployment. Specific
to CNFs, it downloads the Helm chart and performs a dry run of the operations on
the cluster. If Grant encounters errors, it provides detailed error messages. To configure
Grant, go to Advanced Settings > Grant Validation and select one of the options:
n Enable: Run validation and feasibility checks for the target Kubernetes cluster. Fail
fast if the validation fails.
n Enable and Ignore: Run validation and feasibility checks for the target Kubernetes
cluster. Ignore failures.
n Disable: Do not run validation or feasibility checks for the target Kubernetes cluster.
n Auto Rollback - The Auto Rollback option controls whether the Helm release and
Kubernetes resources are retained after a failure. To configure Auto Rollback, go to
Advanced Settings > Auto Rollback and select one of the options:
n Enable: During a failure, do not retain the Helm release and Kubernetes resources.
n Disable: During a failure, retain the Helm release and Kubernetes resources for
debugging.
5 Click Next.
n Repository URL
n Select Repository URL - If you have added Harbor as the third-party repository
provider, select the Harbor repository URL from the drop-down menu.
n Specify Repository URL - Specify the repository URL. Optionally, enter the user name
and password to access the repository.
7 Click Next.
9 The Inputs tab displays any instantiation properties. Provide the appropriate inputs and click
Next.
11 Click Instantiate.
Results
VMware Telco Cloud Automation creates the virtual machines and networks required by your
network function on the cloud that you specified. To view a list of all instantiated functions,
select Network Functions > Inventory. To track and monitor the progress of the instantiation
process, click the Expand icon on the network function and navigate further. When Instantiated
is displayed in the State column for a network function, it indicates that the instantiation process
is completed successfully and the function is ready to use.
To view more details about an instantiated CNF, go to Network Functions > Inventory and click
the CNF. The General Info tab displays all the details about the instantiated CNF.
If you no longer want to use an instantiated network function, click the Options (three dots) icon
and select Terminate. Then select the network function and click Delete.
You can reference externally created networks when creating network functions. When
instantiating a network function, use preinstantiated network workflows to map between
the connection points and external network IDs. When network instantiation starts, the
preinstantiated network workflow obtains the network information, which VMware Telco Cloud
Automation uses for creating the virtual machines.
Ensure that the pre-instantiation workflow returns the correct Network ID. Every unique
network that is used as part of the VNF must have a unique output at the pre-instantiation
workflow.
The value of the Network ID (output field) must map to any of the following:
n MoRef (Managed Object Reference ID) of a Distributed Virtual Portgroup. For example:
dvportgroup-39.
n MoRef (Managed Object Reference ID) of an NSX-T segment within vCenter. For example:
network-o45554.
n UUID of Provider / Tenant network to which VMs will connect. For example: 46947191-
e484-4dcc-adea-3b31a850a7d1.
For the detailed procedure on network referencing and instantiation, see Instantiate a Virtual
Network Function.
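As a sketch only, following the workflow output format described later in this guide, a
pre-instantiation workflow could expose such a reference as a string output. The output name
and description below are illustrative assumptions; the example value reuses the dvportgroup
MoRef shown above:
{
"outputs": {
"internalNetwork1Id": {
"type": "string",
"description": "Network ID consumed by one unique virtual internal network of the VNF",
"defaultValue": "dvportgroup-39"
}
},
…
}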
Prerequisites
Note This action is not supported on a Cloud Native Network Function (CNF).
Procedure
3 Click the Options (three dots) icon for the desired network function and select Heal.
4 In the Heal page, enter a reason for healing the network function.
5 Select whether to restart or recreate the network function and click Next.
6 In the Inputs tab, enter the input variables required for starting and stopping the heal
function. Provide any required inputs appropriately. Click Next.
Results
To view relevant information and recent tasks, click the Expand (>) icon on the network function.
Prerequisites
Note
n Scale aspects and minimum and maximum values cannot be identified for network functions
that are imported from a partner system. For these network functions, you must enter the
valid values manually.
n The scale to level feature is not supported for network functions that are imported from a
partner system.
n You can set the instantiation scale when instantiating a Virtual Network Function (VNF).
Verify that the network function descriptor for the instantiated network function includes scaling
aspects. Network functions without scaling aspects cannot be scaled.
Procedure
a Click the Options (three dots) icon for the desired network function and select Scale.
c Drag the scroll bar to select the number of scaling steps to perform. The default
number of steps is 0.
d Click Next.
e In the Inputs tab, enter the input variables required for starting and ending the scale.
These credentials are required for running a workflow.
f Click Next.
a Click the Options (three dots) icon for the desired network function and select Scale To
Level.
b Select whether to scale the entire network function or only certain aspects.
d In the Inputs tab, enter the input variables required for starting and ending the scale to
level. Provide any required inputs appropriately.
e Click Next.
What to do next
To view relevant information and recent tasks, click the Expand (>) icon on the network function.
Procedure
3 Click the ⋮ icon against the CNF you want to scale, and select Scale.
4 In the Scale tab, click Browse and upload the YAML file that contains the Helm Chart values.
5 Click Next.
7 Click Next.
8 In the Review tab, review the YAML file and click Finish.
Results
The CNF uses the new Helm values from the YAML file to scale accordingly.
Prerequisites
Procedure
3 Click the Options (three dots) icon for the desired network function and select Operate. You
can also click the network function and select Actions > Operate.
4 In the Operate dialog box, change the power state to Started or Stopped.
n Graceful Stop - Shuts down the guest operating systems of the VDUs. Optionally, enter
the Graceful Stop Timeout time in seconds.
6 Click OK.
Results
The VDUs in the instantiated network function power on or power off according to your
selection.
Prerequisites
n To run a vRealize Orchestrator workflow, you must register vRealize Orchestrator with
VMware Telco Cloud Automation Control Plane (TCA-CP). For more information, see the
VMware Telco Cloud Automation Deployment Guide.
Procedure
3 Click the Options (three dots) icon for the desired network function and select Run a
Workflow.
What to do next
To view relevant information and recent tasks of a network function, click the Expand (>) icon on
the network function.
Prerequisites
Procedure
3 Click the Options (three dots) icon for the desired network function and select Terminate.
VMware Telco Cloud Automation checks for inputs based on the workflows that you added
for the catalog. If there are any inputs, you can update them here.
Results
To view relevant information and recent tasks, click the Expand (>) icon on the network function.
You can customize the layout of the network function inventory by showing or hiding the
columns displayed for the network function inventory.
Procedure
4 Click the checkbox corresponding to the column names in the Show Columns pop-up menu.
n Retry - This option retries the network function instantiation operation from its current failed
state. If the Retry operation does not succeed, the network function instance goes back to
the Not Instantiated - Error state.
n Rollback - This option rolls the instantiated network function instance back to its
uninstantiated state. VMware Telco Cloud Automation cleans up any deployed resources and
the network function instance changes to Not Instantiated - Rolled Back state.
n Reset State - This option resets the network function instance to its last known successful
state. The network function instance goes back to Not Instantiated - Completed state and
does not work as expected if you re-instantiate it. Ensure that you delete this instance and
clean up any deployed resources.
Note The Retry, Rollback, and Reset State options are not available for CNF upgrade
operations.
Procedure
3 Click the Options (three dots) icon for the desired network function:
n Retry
n Rollback
n Reset State
Procedure
3 Click the Options (three dots) icon corresponding to the CNF you want to reconfigure and
select Reconfigure.
n Helm Override: Select this option to override Helm parameters by providing a YAML file.
n Helm Repository: Select this option to update the repository URL of one or more Helm
charts of a CNF instance.
n Repository URL: Select or type the repository URL from where you want to fetch the
chart.
n Both: Select to update both Helm properties and repository URLs of one or more Helms
of the CNF instance.
For more information on updating the repository, see Updating CNF Repository from
Chartmuseum to OCI.
5 In the Inputs tab, enter the appropriate properties and click Next.
6 In the Review tab, review the YAML file and/or the repository URL and click Finish.
Note
n If your CNF is using ChartMuseum, and Harbor is upgraded to a version that does not support
ChartMuseum, then the CNF LCM operations fail. In such a scenario, you are alerted with
the message, “CNF is not upgraded to OCI-based helm charts, all consecutive CNF LCM
operations may fail.”
n After the ChartMuseum Helm charts are migrated to OCI, you can update the CNF Helm
repository.
Before updating the CNF repository from Chartmuseum to OCI, you must perform the following:
To update the CNF repository from Chartmuseum to OCI, perform one of the following:
n Reconfigure the CNF to point to the OCI repository instead of ChartMuseum. See Reconfigure
a Container Network Function.
n Upgrade the CNF to point to the OCI registry. See Upgrade a CNF.
Note Harbor with OCI repositories is listed with an oci:// URI, and Harbor with ChartMuseum
repositories is listed with an https:// URI.
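For illustration only, with a hypothetical Harbor host name and project, the two URI forms might
look like the following:
https://harbor.example.com/chartrepo/my-project (ChartMuseum)
oci://harbor.example.com/my-project (OCI)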
After updating the CNF repository from Chartmuseum to OCI, ensure the following:
n The alarm is cleared on the TCA instance, which indicates that all the CNF charts are pointing
to the OCI repositories.
Recommendations
n If Harbor is upgraded in place, reconfigure or upgrade the CNF while ChartMuseum is still
supported by Harbor. This ensures that the charts are available in both the ChartMuseum and
OCI repositories so that auto rollback can be executed in case of CNF upgrade or reconfigure
failures.
n If Harbor is upgraded by creating a new Harbor instance, retain the existing Harbor version
until the CNF migration is completed.
Prerequisites
n Verify that your network service descriptor complies with the following standards:
n Must comply with TOSCA Simple Profile in YAML version 1.2 or TOSCA Simple Profile for
NFV version 1.0.
Procedure
5 Click Browse and select the network service descriptor (CSAR) file.
6 Click Upload.
Results
The specified network service is added to the catalog. You can now instantiate the network
service.
What to do next
n To obtain the CSAR file corresponding to a network service, select the function in the catalog
and click Download.
n To remove a network service from the catalog, first terminate and delete all instances using
the network service. Then select the service in the catalog and click Delete.
Prerequisites
Procedure
4 Enter a unique name for your network function and click Design.
5 In the Network Service Catalog Properties pane, enter the following information:
You can add custom workflows using vRealize Orchestrator. For information about adding
custom workflows, see #unique_242.
a Click Add Workflow and select the desired workflow from the drop-down menu:
n Instantiate Start
n Instantiate End
n Heal Start
n Heal End
n Scale Start
n Scale End
n Terminate Start
n Terminate End
n Custom
c Enter any input and output variables specified in your script and select whether they are
required.
7 Click Update.
You can modify these settings later by clicking Edit Network Service Catalog Properties in
the Network Service Designer.
8 You can drag Virtual Network Functions (VNFs), Cloud-Native Network Functions (CNFs),
VNFs that are part of a Specialized Virtual Network Function Manager (SVNFM), and
networks (NS Virtual Link) to the design area. You can also drag other Network Service
catalogs to your Network Service to create a Nested Network Service.
9 On each network function and virtual link, click the Edit (pencil) icon to configure additional
settings.
VNF
n External Connection Points - Virtual link for each external connection point.
n Depends On (Optional) - Specify the VNF or CNF to be deployed before deploying this
VNF. In a scenario where you deploy many VNFs and CNFs, there can be dependencies
between them on the order in which they are deployed. This option enables you to
specify their deployment order.
CNF
n Depends On (Optional) - Specify the VNF or CNF to be deployed before deploying this
CNF. In a scenario where you deploy many VNFs and CNFs, there can be dependencies
between them on the order in which they are deployed. This option enables you to
specify their deployment order.
VMware Telco Cloud Automation auto-discovers VNFs that are part of an SVNFM registered
as a partner system, and lists them in the catalog. You can use these VNFs for creating a
Network Service Catalog.
Virtual Links
n Network name
n Description
n Protocol
When you have finished modifying the settings of an item, click Update.
10 After adding and configuring all the necessary items, click Upload.
If you want to save your work and continue later, click Save as Draft.
Results
The specified network service is added to the catalog. You can now instantiate the service.
What to do next
n To obtain the CSAR file corresponding to a network service, select the service in the catalog
and click Download.
n To remove a network service from the catalog, select the service in the catalog and click
Delete.
Prerequisites
You must have created and saved a network service descriptor using the Network Service
Designer.
Procedure
5 To modify the draft, click the Edit (pencil) icon. To remove the draft, click the Delete icon.
Procedure
Results
Procedure
4 Select a location in your local drive and save the CSAR package.
When you edit and update a Network Service package, the CSAR upgrades to comply with the
latest SOL001 standards.
Procedure
n Click the desired Network Service catalog and select the General Properties tab.
b Click Edit.
4 To save the changes and work on the general properties later, click Save.
Results
Procedure
n Click the desired Network Service catalog and select the Topology tab.
4 For more information about designing Network Service descriptors, see Design a Network
Service Descriptor.
5 To save the changes and work on the topology later, click Save.
Results
Procedure
n Click the desired Network Service catalog and select the Workflows tab.
a Click Edit.
5 To save the changes and work on the workflows later, click Save.
Results
Procedure
n Click the desired Network Service catalog and select the Resources tab.
b Click Edit.
4 To save the changes and work on the source files later, click Save.
Results
Prerequisites
Procedure
n If you have saved a validated configuration that you want to replicate on this network
service, click Upload on the top-right corner and upload the JSON file. The fields are then
auto-populated with this configuration information and you can edit them as required.
n If you want to create a network service configuration from the beginning, perform the
next steps.
c Prefix (Optional) - Enter a prefix. All entities that are created for this Network Service
are prefixed with this text. Prefixes help in personalizing and identifying the entities of a
Network Service.
d Use vApp Template(s) - To use a vApp template, select this option and select the
appropriate catalog from VMware Cloud Director.
e Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.
5 In the Preview Network Service tab, enter a name for the service, an optional description,
review its design, and click Next.
6 In the Deploy Network Function tab, select a cloud on which to include each network
function in the network service.
Note For a VMware Cloud Director based cloud, you can use either a vApp template or a
template from the vSphere Client. For a vSphere based cloud, you can only select a vSphere
Client template.
7 Click Next.
8 In the Configure Network Functions tab, click the Edit (pencil) icon on each of the network
functions or Nested Network Service catalogs.
a For a Nested Network Service, select a pre-deployed Network Service from the existing
list of Network Services. This list is automatically curated based on the deployed
instances of the Nested Network Service catalog.
Note You can only select pre-instantiated Network Service instances for a Nested
Network Service.
n These Network Functions are curated automatically based on the deployed instances
and the selected Cloud.
n Instantiated Network Functions that are connected to other network services are not
displayed in this list.
c In the Inventory Detail tab, select the desired compute profile, select the instantiation
level, and click Next.
d In the Network Function Properties tab, select or edit an internal or external network,
and click Next.
e In the Inputs tab, provide the required inputs appropriately and click Next.
Note You cannot add a deployment profile or select an internal or an external link on a CNF.
9 In the Instantiate Properties tab, enter the values for any required properties and click Next.
10 In the Review tab, review your configuration. You can download this configuration and reuse
it for instantiating a network service catalog with a similar configuration. Click Instantiate.
Results
VMware Telco Cloud Automation creates the network functions required by your network service
on the clouds that you specified. To view a list of all instantiated functions, select Network
Services > Inventory. To track and monitor the progress of the instantiation process, click the
Expand icon on the network service and navigate further. When instantiated is displayed in
the State column for a network service, it indicates that the instantiation process is completed
successfully and the service is ready for use.
What to do next
To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.
If you no longer want an instantiated network service, click the Options (three dots) icon and
select Terminate. Then select the network service and click Delete.
Prerequisites
n To run a vRealize Orchestrator workflow, you must register vRealize Orchestrator with
VMware Telco Cloud Automation Control Plane (TCA-CP). For more information, see the
VMware Telco Cloud Automation Deployment Guide.
Procedure
3 Click the Options (three dots) icon for the desired network service and select Run a
Workflow.
4 Select the desired network service or network function workflow and click Next.
What to do next
To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.
Prerequisites
Procedure
3 Click the ⋮ (vertical ellipsis) icon against the Network Service that you want to heal and select
Heal.
In the Heal page, you can either select the Network Service radio button or the Network
Function radio button. Selecting Network Function displays the associated Network
Functions in the Network Service. Select the relevant Network Functions to heal. In this
example, we heal a Network Service.
5 In the Select a Workflow tab, select one of the pre-defined types of healing from the Degree
Healing drop-down menu. This option is required for auditing purposes.
6 Select the pre-packaged workflow that is used for healing the Network Service and click
Next.
7 In the Inputs tab, enter the properties of the workflow such as user name, password, host
name, Network Service command, and VIM location.
8 Click Next.
Results
The Network Service begins to heal. To view its progress, go to Inventory > Network Service
and expand the Network Service.
Prerequisites
Procedure
3 Click the Options (three dots) icon for the desired network service and select Terminate.
VMware Telco Cloud Automation checks for inputs based on the workflows that you added
for the catalog. The Finish button is then displayed.
4 Click Finish.
Results
To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.
You can perform software updates only on CNFs. When you perform a software update, it
changes the reference of the CSAR from the instance to point to a newer version of the
CSAR. Consider an example where you find a bug in the Helm chart of a deployed CNF
instance. When you perform a software update, you patch the updated Helm chart image to
the deployed CNF instance and the version of the CNF can remain the same or can change.
From a CSAR perspective, you can perform a software update across CNF versions that do
not have any model related updates.
Upgrade applies only to CNFs. It changes the reference of the CSAR from the instance to
point to a newer version of the CSAR. When you perform an upgrade, it provides a detailed
view of the updates.
Consider an example where your original CSAR file consisted of two Helm charts:
n AMF 1.1.0
n SMF 1.1.0
The newer version of the CSAR contains the following Helm charts:
n AMF 2.5
n UPF 2.3
n NRF 2.4
When you perform a component upgrade, VMware Telco Cloud Automation performs the
following tasks on the CNF instance that is running:
Note It is recommended to keep the component (VDU) names consistent across all CSAR
versions that are subject to CNF update or upgrade.
Upgrade Package
Upgrade Package applies to VNFs, CNFs, and Network Services. Performing a package
upgrade changes the reference of the CSAR from the instance to point to a newer version
of the CSAR. It does not impact the current running instance in any way and no software or
model is updated. However, the workflows available for running can change.
The following table lists the type of upgrades and updates you can perform for VNFs, CNFs, and
Network Services.
VNF: Upgrade Package - Yes; Software Update - No; Upgrade - No
n Upgrade a CNF
Prerequisites
You must be a System Administrator or a Network Function Deployer to perform this task.
Procedure
2 Select Inventory > Network Function and select the VNF to upgrade.
3 Click the ⋮ symbol against the VNF and select Upgrade Package.
4 In the Upgrade Package screen, select the new VNF catalog to upgrade your VNF. The
descriptor version changes accordingly to the selected catalog.
Note Only those VNF catalogs that have the same software provider and product name are
displayed.
5 Click Upgrade.
Results
Your VNF is upgraded to the selected catalog version. The VNF instance now displays the
upgraded catalog name in the Network Functions > Inventory tab.
Prerequisites
You must be a System Administrator or a Network Function Deployer to perform this task.
Procedure
2 Select Inventory > Network Function and select the CNF to upgrade.
3 Click the ⋮ symbol against the CNF and select Upgrade Package.
4 In the Upgrade Package screen, select the new CNF catalog to upgrade to. The descriptor
version changes accordingly to the selected catalog.
Note Only those CNF catalogs that have the same software provider and product name are
displayed.
5 Click Upgrade.
Results
Your CNF is upgraded to the selected catalog version. The CNF instance now displays the
upgraded catalog name in the Network Functions > Inventory tab.
Upgrade a CNF
Upgrade the software version, descriptor version, components, repository details, instantiation
properties, and Network Function properties of your CNF and map them to the newer version in
the Catalog.
If the existing Helm Chart requires a software upgrade, the system upgrades the software
version of the CNF instance. If the existing CNF instance is not present in the new catalog, you
can map the current CNF instance to a new Helm Chart. If you do not make a selection, then the
existing CNF instance is removed from the Workload Cluster.
Note If there is an issue during the CNF instance update or upgrade operations, you can resolve
the issue based on the error message and trigger the update or upgrade operation again. For
example, during the upgrade operation, if an image is missing in the Harbor repository and the
operation fails due to this, you can upload the missing image and retry the upgrade operation.
Prerequisites
You must be a System Administrator or a Network Function Deployer to perform this task.
Procedure
2 Select Inventory > Network Function and select the CNF to upgrade.
4 In the Upgrade Revision tab, select the software version and Descriptor version to upgrade
to.
5 In the Components tab, select the upgraded components to be included in your CNF.
6 In the Inventory tab, select the repository URL from the drop-down menu, or specify the
repository.
For more information on updating the repository, see Updating CNF Repository from
Chartmuseum to OCI
8 In the Network Function Properties tab, review the updated model. You can download or
delete Helm Charts from the updated model.
Results
Prerequisites
You must be a System Administrator or a Network Service Deployer to perform this task.
Procedure
2 Select Inventory > Network Service and select the Network Service to upgrade.
3 Click the ⋮ symbol against the Network Service and select Upgrade Package.
4 In the Upgrade Package screen, select the new Network Service catalog and descriptor
version to upgrade your Network Service.
5 Click Upgrade.
Results
Procedure
2 Click the ⋮ symbol corresponding to the CNF which failed to upgrade and select one of the
following:
a Retry: Select this option to continue the upgrade operation from the failed step.
b Rollback: Select this option to roll back the CNF to its previous state. All the charts, VDUs,
and values of the CNF are restored.
Note The Retry and Rollback options are available for selection only when the CNF upgrade
fails and the Auto Rollback option is deactivated.
These custom overlay networks are associated with specific business purposes following a set
of predefined Service Level Agreements (SLAs), Quality of Service indicators (QoS), security, and
regulatory requirements. 5G Network Slicing provides a standard way of managing and exposing
network resources to the end-user (the UE) while assuring the delivered slice's purpose and
characteristics (for example, throughput, latency, geographical location, isolation level, and so
on).
Note On a VM-based environment, the network slicing feature is disabled by default. To use the
feature:
n Enable the network slicing service on the VMware Telco Cloud Automation Manager user
interface.
Technical Overview
VMware Telco Cloud Automation follows the 3GPP Network Slicing Management architecture,
which comprises the following components:
n Communication Service Management Function (CSMF), which acts as the interface towards
service order management and Operations Support Systems (OSS).
n Network Slice Management Function (NSMF), which manages the life cycle of the end-to-end
slice across the network domains: Radio Access Network (RAN), 5G Core network, and the
transport network.
n Network Slice Subnet Management Function (NSSMF), which manages the lifecycle of
the Network Slice subnets within a network domain and applies the NSMF’s life cycle
management commands (For example, instantiate, scale, heal, terminate). There can be more
than one NSSMF in a network.
Use Cases
Network Slicing allows service providers to create the new breed of services that their customers
expect: services that use network resources more efficiently through fine-grained differentiation,
and that are delivered on demand and secured.
n Massive Machine Type Communication (mMTC) for large-scale, low-bandwidth connected
devices.
n Ultra-Reliable Low Latency Communication (uRLLC) for critical and latency-sensitive device
connectivity.
Design, create, and manage the life cycle of on-demand network slices that are defined and
fulfilled under a user-created SLA.
Streamline and standardize the way network resources are exposed to the OSS layer and to the
consumers of the network slices.
Network Slicing provides a standard framework to design, create, and manage network
resources that can be packaged and exposed directly to the end users.
Prerequisites
Contact VMware customer care to activate the license for Network Slicing before you enable
Network Slicing on your setup.
Note You must repeat the steps for enabling the Network Slicing every time you upgrade
VMware Telco Cloud Automation to a newer build of the same release.
Procedure
1 To enable the Network Slicing, log in to VMware Telco Cloud Automation Manager as root
using SSH and enable network-slicing.service.
Note After enabling, the network-slicing.service starts as part of the VM boot process and
restarts every time the VM reboots.
3 (Optional) You can also log in to the appliance management user interface (9443 port) and
start the Network Slicing service.
Procedure
1 To stop the Network Slicing, log in to VMware Telco Cloud Automation Manager as root using
SSH and run the command.
Disabling the service prevents it from starting automatically during the boot process, so the
service does not start after the appliance reboots.
What to do next
After deactivating the Network Slicing feature, contact VMware Support to deactivate its
license.
You can onboard a network slice template. Once you have onboarded a network slice template,
you can then instantiate the template and use the network slice function.
Prerequisites
Ensure that the network functions and the network services that you want to add in the network
slice template are available.
Procedure
2 Select Catalog > Network Slicing and select the network slicing function.
6 Add the profile details. For more details on the profile parameters, see Edit a Network Slice
Template.
7 Add the topology details. For more details on the topology, see Edit a Network Slice
Template.
You can view the network slice template. You can edit or delete the existing network slice
template.
Prerequisites
Ensure that you have permission to edit the network slice template.
Procedure
2 Select Catalog > Network Slicing and select the network slicing function.
4 Click the ⋮ symbol against the network slicing template and select the operation from the list.
6 To view the general properties, click the General Properties tab. It shows the following details:
7 To view the profile details, click the Profile tab. The Profile tab shows the following parameters:
n General settings - Shows the general parameters related to the network slice.
n Slice/Service Type - Defines the service type related to a network slice. Select the
value from the drop-down menu.
n Latency - Packet transmission latency across the network slice. The value is in
milliseconds.
n Maximum Device Speed - Maximum transmission speed that the network slice can
support.
n Network Slice Sharing Indicator - Whether the service can share the network slice.
n Reliability - Reliability of the network slice. The value is in percentage. Value range is
0 to 100.
n Survival Time - The time interval for which an application can survive without
receiving an anticipated message. The value is in milliseconds. You can also provide
multiple comma-separated time intervals.
n UE Mobility Level - The mobility level of the user equipment that the network slice
supports.
n Nomadic - Network slice supports only user equipment which has intermittent
availability.
n Full Mobility - The network slice supports user equipment with complete mobility.
n PLMN Information(public land mobile networks) - Shows the parameters related to the
public land mobile networks.
n Slice/Service Type - The service type associated with the network slice.
n Slice Differentiator - The value used to differentiate between network slices. If the
parameter is not needed, set the value to FFFFFF.
n Delay Tolerance - Shows the parameters related to delay requirements in a network slice.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Downlink Throughput per Network Slice - This attribute relates to the aggregated data
rate in downlink for all UEs together in the network slice.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Maximum Downlink Throughput per Network Slice - Maximum download speed that
the network slice provides.
n Downlink Throughput per Network Slice (per UE) - This attribute describes the maximum
data rate supported by the network slice per UE in downlink.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Maximum Downlink Throughput Per UE per Slice - Maximum download speed for
each active user equipment that the network slice provides.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n List of KQIs and KPIs - Name of the list of KPIs and KQIs to monitor the performance
of the network.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Maximum Supported Packet Size - This attribute describes the maximum packet size
supported by the network slice.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Maximum Packet Size - Maximum packet size allowed in the network slice.
n Overall User Density - This attribute describes the maximum number of connected and/or
accessible devices per unit area (per km2) supported by the network slice.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Overall User Density - The user device density that the network slice can handle.
The value is in number of users per square kilometer.
n Uplink Throughput per Network Slice - This attribute relates to the aggregated data rate
in uplink for all UEs together in the network slice (this is not per UE).
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Guaranteed Uplink Throughput per Slice - Minimum required upload speed that the
network slice provides.
n Maximum Uplink Throughput per Slice - Maximum upload speed that the network
slice provides.
n Uplink Throughput per Network Slice per UE - This attribute describes the maximum
data rate supported by the network slice per UE in uplink.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n Guaranteed Uplink Throughput Per UE per Slice - Minimum required upload speed
for each active user equipment that the network slice provides.
n Maximum Uplink Throughput Per UE per Slice - Maximum upload speed for each
active user equipment that the network slice provides.
n User Management Openness - This attribute describes the capability for the NSC
to manage their users or groups of users’ network services and corresponding
requirements.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n User Management Openness Support - Whether the network slice allows the NSC to
manage the users or groups of users.
n Scalability - Provide information about the scalability of the slice. For example
number of UEs.
n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:
n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.
n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:
n API - These attributes provide an API in order to get access to the slice
capabilities.
n V2X Communication Mode - Whether the network slice supports V2X mode.
The Topology tab shows the following details of the network function associated with the
network slicing function:
n General Settings - You can modify the parameters related to the general settings of
the network slice template.
n PLMN Information - You can modify the parameters related to PLMN parameters of
the network slice template.
n S-NSSAI - You can modify the parameters related to the Single – Network Slice
Selection Assistance Information of the network slice template.
n Network Slice Structure - It represents the network functions and network services
associated with the network slice.
n Add Subnet - You can create a subnet within a network slice. You can add separate
network functions and network services to each subnet.
a To create a subnet in a network slice, click Add Subnet. Add the following details
on the Create Network Slice Subnet Template page.
n Add Descriptor - It represents the network functions and network services associated
with the network slice.
a To add a network function or a network service, click on the Add Descriptor and
select Add Network Service or Add Network Function.
When you select Add Network Service, the Select Network Service For Network
Slice Template page appears. When you select Add Network Function, the Select
Network Function For Network Slice Template page appears.
b Select the network service or the network function that you want to add and click
Select.
Prerequisites
Procedure
3 Click the ⋮ symbol against the network slice template and select the operation from the list.
n Customer Name - Name of the customer for which you want to instantiate the network
slice.
What to do next
Prerequisites
Ensure that you have permission to edit the network slice function.
Procedure
2 Select Inventory > Network Slicing and select the network slice function.
3 Click the ⋮ symbol against the network slicing function and select the operation from the list.
What to do next
You can view the network slice service order. You can edit or delete the existing network slice
service order.
Prerequisites
Ensure that you have permission to edit the network slice service order.
Procedure
2 Select Inventory > Network Slicing and select the network slice function.
4 Click the ⋮ symbol against the network slice service order and select the operation from the
list.
6 To view the general properties, click the General Properties tab. It shows the following details:
n Description
n Customer Name
8 To view the profile details, click the Profile tab. The Profile tab shows the following parameters:
n General settings
n Slice/Service Type
n Slice Differentiator
n Delay Tolerance
n Category
n Tagging
n Exposure
n Support
n Deterministic Communication
n Category
n Tagging
n Exposure
n Availability
n Periodicity List
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
n Category
n Tagging
n Exposure
The Deployment Configuration tab shows the following details of the network function
associated with the network slice function:
n General Settings
n PLMN Information
n Performance Requirements
n S-NSSAI
For details of all the parameters, see Edit a Network Slice Template.
n Tasks during Network Function (NF) and Network Service (NS) life cycle management (LCM)
operations - If the inputs of the LCM operation need to be dynamically computed, or if the
task needs to be carried out by initializing scripts on the virtual machines, Helm, Kubernetes
pods, jobs, and operators, a workflow supplements the end-to-end configuration of the NS
and NF. This type of workflow is automatically executed as part of the LCM pre-operation or
post-operation (before or after the resources of the NF / NS are manipulated). For example, a
pre-operation is executed to determine the HTTP proxy used by the NF, and a post-operation
is executed to register the NF.
n Tasks that are operator specific - Allows the operator to automate tasks that are not part of
LCM operation or are fully operator designed. For example, draining traffic from a selected
NF.
Workflow Architecture
The following diagram illustrates the workflow architecture.
You can create workflows by using the TCA user interface. The embedded workflow designs are
available in the Network Function and Network Service packages as raw files until the package
is onboarded, and the workflow catalog entries are created from these raw files at the time of
onboarding.
You can execute a workflow as part of the NF/NS Lifecycle Management operation or through
the VMware Telco Cloud Automation user interface. After the workflow execution intent is
created, the workflow executor evaluates the intent and executes it. To carry out the execution,
the executor on TCA-M needs to contact the executors distributed on the TCA-CP instances.
The executor on TCA-M either contacts the external systems directly or through vRealize
Orchestrator (vRO). The choice between the two alternatives depends on the step to be
executed. Distributed execution is necessary not only from the scaling perspective but also
because of network connectivity constraints.
Networking Connectivity
Network connectivity is required so that the workflow executor can carry out the various tasks.
The following diagram illustrates the network connectivity border conditions.
[Figure: network connectivity border conditions among the system administrator, normal user,
TCA, TCA-CP, vRO, Kubernetes, and the management VM shell.]
The system administrator can reach any system (TCA, TCA-CP, Kubernetes, vRO) directly. The
other users have access to TCA-M only. For security reasons, only the unidirectional network
model is supported; that is, traffic between two entities can be initiated from only one direction,
which minimizes the possible attack surface. Workflow designs take the possible network
connectivity into account.
n Aspects of a Workflow
Aspects of a Workflow
A workflow consists of multiple elements, and each element describes one aspect of the
workflow. The workflow contains multiple steps that can be interlinked to define more complex
behavior. The workflow also contains variables, and you can provide their values at the time of
execution.
The following diagram illustrates the relationship between the various elements of a workflow.
[Figure: relationship between the workflow, its steps, its inputs, and its variables.]
The top-level element is the workflow itself which contains a few mandatory elements.
"id": "testCrud1"
…
}
Following is the sample code snippet for the workflow elements, Name, Version, and
SchemaVersion:
…
}
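As orientation only, a minimal sketch of these top-level elements could look like the following.
The name, version, and schemaVersion values (and the exact casing of those keys) are
illustrative assumptions; id, inputs, variables, outputs, startStepId, and steps follow the examples
shown in this section:
{
"id": "testCrud1",
"name": "My test workflow",
"version": "1.0",
"schemaVersion": "1.0",
"inputs": { … },
"variables": { … },
"outputs": { … },
"startStepId": "stepId1",
"steps": { … }
}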
Data Types
Data types are used in various parts of the workflow to define the type of a particular element.
Values that are not conforming to a specific data type cause design or runtime validation errors.
Input
The workflow may optionally contain inputs that pass on the read-only information to a workflow
at the time of execution. Inputs define the possible inputs (mandatory and default values)
and their characteristics. Most workflows require a certain set of input parameters to run. For
example, if a workflow resets a virtual machine, then the workflow needs to define an input with
a virtual machine data type to allow the caller to control which virtual machine to restart.
{
"inputs": {
"in1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special input 1"
},
"in2": {
"type": "string",
"defaultValue": "myInput"
},
"in3": {
"type": "string",
"required" : true
},
"in4": {
"type": "string",
"format" : "text"
},
"in5": {
"type": "boolean",
"defaultValue" : true
},
"in6": {
"type": "number",
"defaultValue" : 123
},
"in7": {
"type": "number",
"defaultValue" : 123.4
},
"in8": {
"type": "password",
"description" : "do not use deprecated",
"defaultValue" : "bXlTZWNyZXQ="
},
"in9": {
"type": "boolean",
"defaultValue" : true
},
"in10": {
"type": "file",
"defaultValue" : "myAttachmentName.txt"
},
"in4": {
"type": "vimLocation",
"defaultValue" : "25a1a262-715b-11ed-a1eb-0242ac120002"
},
"in4": {
"type": "virtualMachine",
"defaultValue" : "myVduName"
}
},
…
}
Output
The output of a workflow is the result of workflow execution. Output parameters are set during
the execution of the workflow. Output can be used by various LCM processes. For example, a
pre-workflow that determines the location of the Network Repository Function (NRF) passes it
on as an instantiation parameter (HELM input value) to the Access and Mobility Management
Function (AMF).
The format of the outputs is identical to the inputs. However, in outputs, the attributes are not
available, and only string, number, boolean, and virtual machine data types are used.
{
"outputs": {
"output1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special output 1"
},
"output2": {
"type": "boolean",
"defaultValue" : true
},
"output3": {
"type": "number",
"defaultValue" : 123
},
"output4": {
"type": "virtualMachine",
"defaultValue" : "myVduName"
}
},
…
}
Variables
The workflow may also contain variables. Variables are very similar to inputs. However, variables
are not read-only and may change at the time of workflow execution. The variables may
have user-defined input values or may be automatically assigned based on the context of the
workflow execution. The vnfId variable is automatically assigned if the workflow runs on Network
Function, and if the workflow runs on Network Service, the nsId variable is populated. If these
two variables are defined, they have the string data type.
The following table lists the possible fields of the workflow variables.
{
"variables": {
"variable1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special variable 1"
},
"vnfId": {
"type": "string",
"description": "The identifier of the NF"
},
"nsId": {
"type": "string",
"description": "The identifier of the NS"
}
}
…
}
Attachments
You can attach binary files or text files in the workflow definition. Attachments are stored in
the workflow catalog, and the maximum file size is 5 MB. Attachments can be added to the
standalone workflow through the UI or automatically assigned to the workflow if the workflow
is embedded in an NF / NS package. If the workflow is embedded in an NS package, the files
located in the Artifacts/scripts directory in CSAR are automatically attached to each workflow
definition.
Therefore, if the CSAR contains the following files, the files are referred to by “script1.sh”,
“script2.sh” or “subDir/script1.sh” in default values:
n /Artifacts/scripts/script1.sh
n /Artifacts/scripts/script2.sh
n /Artifacts/scripts/subDir/script1.sh
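For example, assuming the CSAR layout above, a workflow or step input of type file could
reference one of these attachments through its default value. This is a minimal sketch, and the
input name is illustrative:
{
"inputs": {
"installScript": {
"type": "file",
"defaultValue": "script1.sh"
}
},
…
}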
When the workflow is created in the workflow catalog, the administrative elements are assigned
to the workflow. These administrative elements control various operability aspects of the
workflow; for example, RBAC, editability, and so on. You cannot define these attributes in the
workflow template, but the system computes these attributes after the workflow is created.
A combination of multiple steps defines the complex behavior of the workflow. The type of the
step is defined through the type attribute. The ID of the step is the key in the steps structure.
You can describe the step in the description field.
{
"steps": {
"stepId": {
"type": "…",
"description": "…",
…
}
},
…
}
The structure of the step and its relation to the workflow is illustrated in the following diagram.
[Figure: a workflow step with inBindings and outBindings mapped to the workflow's inputs and
variables.]
Initial step
The initial step of the workflow is defined by the startStepId field of the workflow, and the steps
that follow are defined by the nextStepId field. The workflow ends when the executed step does
not have a next step; in this scenario, the nextStepId field is not present in the step definition.
The link between the steps is illustrated in the following template fragment:
{
"startStepId": "stepId1",
"steps": {
"stepId1": {
nextStepId": "stepId2",
...
},
"stepId2": {
"nextStepId": "stepId3",
...
},
"stepId3": {
...
}
},
...
}
n defaultValue: Default value of the input binding. This value is used if the exportName is not
available or is referring to an element with no value.
n exportName: The name of the input or variable from which the step input takes its value.
Inputs and variables of a step are illustrated in the following template fragment.
{
"inputs": {
"input1" : { … },
"input2" : { … }
},
"variables": {
"variable1" : { … },
"variable2" : { … }
},
"steps": {
"stepId1": {
"inBindings": {
"stepInput1" : {
"description" : "my input value for this step",
"type": "string",
"defaultValue" : "foo",
"exportName" : "input1"
},
"stepInput2" : {
"type": "string",
"defaultValue" : "bar"
},
"stepInput3" : {
"type": "string",
"exportName" : "variable1"
}
}
}
},
…
}
The usage of double curly brackets is illustrated in the following sample code snippet:
{
"inputs": {
"input1" : { ... }
},
"variables": {
"variable1" : { ... }
},
"steps": {
"stepId1": {
"inBindings": {
"stepInput1" : {
"type": "string",
"name" : "variable1",
"defaultValue" : "fix{{input1}}_{{variable1}}"
}
}
}
},
...
}
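With the fragment above, if input1 resolves to abc and variable1 resolves to xyz at execution time, the bound value of stepInput1 would resolve to fixabc_xyz.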
n name: A mandatory name of the variable or output to which the step output is saved.
Note The type of the step determines the available step outputs.
{
"outputs": {
"output1" : { … },
"output2" : { … }
},
"variables": {
"variable1" : { … },
"variable2" : { … }
},
"steps": {
"stepId1": {
"outBindings": {
"stepOutput1" : {
"type": "string",
"name" : "variable1"
},
"stepOutput1" : {
"type": "string",
"name" : "output1"
},
"stepOutput2" : {
"type": "string",
"name" : "output2"
}
}
},
…
},
…
}
n name: The name of the workflow input, output, or variable against which the condition is
evaluated.
n nextStepId: The mandatory identifier of the next step to execute if the evaluation of the
condition is true.
{
"input": {
"input1" : { ... },
"input2" : { ... }
},
"variables": {
"variable1" : { ... },
"variable2" : { ... }
},
"steps": {
"stepId1": {
"conditions": [
{
"name": "input1",
"comparator": "smaller",
"value": 5,
"nextStepId": "stepId1"
},
{
"name": "variable1",
"comparator": "greaterOrEquals",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "isDefined",
"nextStepId": "stepId2"
},
{
"name": "variable2",
"comparator": "equals",
"value": "foo",
"nextStepId": "stepId2"
},
{
"name": "variable2",
"comparator": "match",
"value": "myRegularExpr.*",
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "greater",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "smaller",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "smallerOrEquals",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "greaterOrEquals",
"value": 5,
"nextStepId": "stepId2"
}
]
},
"stepId2" : { ... }
},
...
}
Input and output bindings are illustrated in the following template fragment:
{
"inputs": {
"inputDelay" : { ... }
},
"steps": {
"stepId1": {
"inBindings": {
"timeout": {
"type": "number",
"defaultValue": 123
},
"initialDelay": {
"type": "number",
"exportName": "inputDelay"
}
}
}
},
...
}
Steps that require a vRO instance to interact with a VIM instance define an input binding that
specifies which vRO or VIM instance is to be used. The type of this input binding is vimLocation,
and its name can be anything that is not already used by the step. This binding is referred to as
the location binding. The VIM location is mandatory if the system cannot unambiguously identify
it: if the workflow is executed on an NF, the VIM is deduced from the location of the NF;
otherwise, you must specify the VIM location.
{
"inputs": {
"vimInput": {
"type": "vimLocation",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SCP",
"inBindings": {
...
"vim": {
"type": "vimLocation",
"exportName": "vimInput"
}
},
...
},
},
…
}
Step Input
A step can be of any of the following types, each with its own set of inputs:
NOOP
This step type is used to create a decision point in a workflow without executing a step that has an
external side effect, for example, connecting to a VM through SSH. Regardless of the number of
inputs provided, the inputs are discarded, and no outputs are produced.
{
"inputs": {
"inputMode" : { ... }
},
"startStepId": "stepId1",
"steps": {
"stepId1": {
"type" : "NOOP",
"conditions": [
{
"name": "inputMode",
"comparator": "equals",
"value": "active",
"nextStepId": "stepIdActive"
},
{
"name": "inputMode",
"comparator": "passive",
"value": "active",
"nextStepId": "stepIdPassive"
}
]
},
"stepIdActive": { ... },
"stepIdPassive": { ... }
},
...
}
VRO_SSH
You can use the vRO SSH step to execute SSH commands on external entities, such as NFs
and routers. From the TCP perspective, the SSH connection originates from the vRO instance
to the target, so connectivity is required between vRO and the external system. The vRO step
that implements the SSH command execution is called SSH Command and can be inspected by
logging in to vRO.
The following table lists the vRO SSH step input values.
If the script to be executed is very long, you can use the vRO SCP action to transfer the script to
be executed to the target.
The usage of the SSH step is illustrated with the following template fragment:
{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"foramt" : "password",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SSH",
"inBindings": {
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"port": {
"type": "number",
"defaultValue": "22"
},
"cmd": {
"type": "string",
"defaultValue": "uptime"
},
"passwordAuthentication": {
"type": "boolean",
"defaultValue": true
},
"hostNameOrIP": {
"type": "string",
"exportName": "target"
},
"encoding": {
"type": "string",
"defaultValue": "utf-8"
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}
},
…
}
VRO_SCP
You can use the vRO SCP step to transfer files to external systems, such as NFs and routers,
using the SCP protocol. The step is executed through vRO. From the TCP perspective, the SSH
connection originates from the vRO instance to the target, so connectivity is required between
vRO and the target. The vRO workflow that implements the SCP is called File Upload and can be
inspected by logging in to vRO.
The following table lists the step and the corresponding input values.
The following workflow template fragment illustrates the usage of the vRO SCP step:
{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"format": "password",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SCP",
"inBindings": {
"inFile": {
"type": "file",
"defaultValue": "attachmentName.txt"
},
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"destinationFileName": {
"type": "string",
"defaultValue": "foo.txt"
},
"workingDirectory": {
"type": "string",
"defaultValue": "/tmp"
},
"ip": {
"type": "string",
"exportName": "target"
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}
},
…
}
VRO_CUSTOM
The purpose of the vRealize Orchestrator (vRO) custom workflow step is to run any vRO
workflow from within a Telco Cloud Automation (TCA) workflow. The custom step has only one
mandatory input binding, called vroWorkflowName, which defines the name of the vRO workflow
to be executed. Additional input bindings may be specified to provide input for the workflow
execution in vRO.
{
"name": "testCustomVro",
"version": "v1",
"schemaVersion": "3.0",
"readOnly": false,
"startStepId": "stepId1",
"inputs": {
"vimInput": {
"type": "vimLocation",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_CUSTOM",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "vimInput"
},
"vroWorkflowName": {
"type": "string",
"defaultValue": "REPLACE_NAME"
},
"vro_in_string": {
"type": "string",
"defaultValue": "in1"
},
"vro_in_integer": {
"type": "number",
"defaultValue": 123
},
"vro_in_double": {
"type": "number",
"defaultValue": 123.4
},
"vro_in_boolean": {
"type": "string",
"defaultValue": true
},
"vro_in_file": {
"type": "file",
"defaultValue": "fileInWorkflow.bin"
}
},
"outBindings": {
"out_string": {
"name": "vro_out_string",
"type": "string"
},
"out_integer": {
"name": "vro_out_integer",
"type": "number"
},
"out_double": {
"name": "vro_out_double",
"type": "number"
},
"out_boolean": {
"name": "vro_out_boolean",
"type": "boolean"
},
"out_file": {
"name": "vro_out_file",
"type": "string"
}
}
}
},
"outputs": {
"out_string": {
"type": "string"
},
"out_integer": {
"type": "number"
},
"out_double": {
"type": "number"
},
"out_boolean": {
"type": "boolean"
},
"out_file": {
"type": "string"
}
}
}
Note The vRO custom step allows you to use a vRO workflow in a TCA workflow. However, the
workflow used must exist in vRO.
VRO_EXEC
VRO_EXEC allows you to execute scripts on virtual machines without SSH connectivity. You must
fulfill the following prerequisites before using the step through vRO:
n vRO should be integrated with vCenter as the workflow that resides in vRO interacts with the
vCenter API.
If these prerequisites are fulfilled, you can execute the step on virtual machines in vCenter or
vCD.
3 Expand the workflow for which you want to view the vRO instances.
8 Click on the right side of the workflow beside the filter box.
9 Click Library > vCenter > Configuration > Add a vCenter Server Instance.
10 Click Run.
11 In the Set the vCenter Server instance properties tab, enter the IP / FQDN of vCenter as it is
registered in TCA-CP without HTTPS.
Note Leave the port, SDK URL, and ignore certificate fields as default. Alternatively, for
newer versions, ensure that ignore certificate is selected.
12 In the Set the connection properties tab, deselect the first option and enter the vCenter
administrator credentials.
13 In the Additional Endpoints tab, retain the default values and click Run.
This integrates the vCenter instance in vRO. You must verify that it is successful.
3 From the list of vSphere vCenter Plugins, click the vSphere vCenter Plugin with the IP / FQDN
that you provided for vRO integration with vCenter and verify if your vCenter is listed, and
you can browse the inventory.
Note If your vCenter is listed and you can browse the inventory, it indicates that
your vCenter instance is successfully integrated with vRO. If the integration fails, see vRO
documentation for detailed information on vRO integration with vCenter.
VRO_EXEC does not require connectivity between vRO and the virtual machine but requires
connectivity from vRO to vCenter. The vRO workflow that implements the step is called Run
Script In Guest and can be inspected by logging in to vRO. The step has the following input
values:
{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"format": "password",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_EXEC",
"inBindings": {
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"vduName": {
"type": "virtualMachine",
"defaultValue": "myVduName"
},
"scriptType": {
"type": "string",
"defaultValue": "bash"
},
"script": {
"type": "string",
"defaultValue": "uptime"
},
"scriptTimeout": {
"type": "number",
"defaultValue": 12
},
"scriptRefreshTime": {
"type": "number",
"defaultValue": 3
},
"scriptWorkingDirectory": {
"type": "string",
"defaultValue": "/bin"
},
"interactiveSession": {
"type": "boolean",
"defaultValue": false
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}
}
}
JavaScript
The JavaScript (JS) step is used to process workflow inputs or variables. The JS step has
one mandatory input binding that specifies the script to be executed. This input binding is of type
string and format text; the text format allows you to enter multiline strings as values. The JS step
can have any number of additional input bindings that pass values to the script for processing.
The output bindings specify how the results of the script execution are interpreted.
Input binding
{
"inputs": {
"input1" : { ... }
},
"variables": {
"variable1" : { ... },
"variable2" : { ... }
},
"steps": {
"stepId1": {
"type": "JS",
"inBindings": {
"script": {
"type": "string",
"format": "text",
"defaultValue": "…"
},
"myJsInput1": {
"type": "number",
"exportName": "input1"
},
"myJsInput2": {
"type": "number",
"exportName": "variable1"
}
},
"outBindings": {
"jsOutput1": {
"name": "variable1",
"type": "number"
}
}
},
...
}
The script input binding contains the JavaScript to be executed. The script contains plain
JavaScript code, and only those features of JavaScript required for data manipulation are
available. Therefore, you cannot access external resources such as HTTP connections or files. At
the time of executing the JavaScript, the engine searches for the function with the following signature:
The engine executes the function and populates the input parameters of the function with the
following values:
n Output: the output values of the step where the key to the map is the name of the value and
the value is the computed value.
n Logs: An array of log messages that belong to the step execution. A log entry contains the
following mandatory fields:
n Level: The level of the log message that contains one of the following values:
n ERROR
n INFO
n DEBUG
"level": "INFO",
"time": startTime + 2000
}
],
}
}
K8S
The purpose of the K8S action is to interact with the Kubernetes API securely. The K8S action
makes it possible to execute scripts in a POD. These scripts can have UNIX commands such as
awk, bash, jq, nc, and sed or commands that interact with the Kubernetes API such as kubectl and
helm. The commands that interact with the Kubernetes API are prepopulated with the Kubernetes
environment, and they work without credentials.
The system automatically scopes access to the API to apply the principle of least privilege: the
service account is given access only to the relevant network function or VIM.
Note The helm version used during CNF LCM operations may differ from the helm version
available during the execution of the K8S step.
n NF_RO: Read-only access to the network function on which the workflow is executed. Read-
only indicates that only REST requests that require get, watch, and list Kubernetes verbs are
allowed, and only the resources that belong to the network function are visible.
Note The associated resources depend on the configuration mode of the policy service.
n NF_RW: Read-write access to the network function on which the workflow is executed. Only
the resources that are associated with the network function are visible.
Note The associated resources depend on the configuration mode of the policy service.
n VIM_RO: Read-only access to every resource on the Kubernetes cluster in which the network
function is hosted or to the VIM, which is selected by the vimLocation input binding of the
step. Read-only indicates that only REST requests that require get, watch, and list Kubernetes
verbs are allowed.
n VIM_RW: Unrestricted access to every resource on the Kubernetes cluster in which the
network function is hosted or to the VIM, which is selected by the vimLocation input binding
of the step.
From the RBAC perspective, the user who initiated the workflow must have sufficient privileges to
use the resource with the selected privilege level (read-only/read-write).
You can specify the cluster in which you have created the POD via the target input binding. The
target can have the following values:
n WORKLOAD: The POD starts on the same cluster as the network function or VIM selected
by the vimLocation optional input binding. The workload cluster provides a good distribution
of the used resources as the POD always consumes resources from the network function or
selected VIM.
n MANAGEMENT: The POD starts on the management cluster that manages the VIM of the
network function or the selected VIM. The management cluster should only be used if the
workload cluster has no free capacity to run additional temporary PODs, since this solution is not
fully scalable and is limited by the resources of the workload cluster. Even if the POD runs on the
management cluster, it cannot access the management cluster but can access the workload
cluster.
Optionally, you can specify a node selector to further constrain the location of the PODs. In
this case, you can specify the nodeSelector input binding that sets the kubernetes.io/hostname:
<value> specified as the POD node selector.
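As a minimal sketch, a nodeSelector binding with a fixed value could look like the following; the node name np1-worker-0 is a hypothetical example, and its value is applied as the kubernetes.io/hostname node selector of the POD:
{
  "steps": {
    "step0": {
      "inBindings": {
        "nodeSelector": {
          "type": "string",
          "defaultValue": "np1-worker-0"
        }
      }
    }
  },
  ...
}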
Besides these fixed input bindings, you can specify any additional input binding. These additional
input bindings are available as environment variables or files during the step execution.
The runtime environment of the POD where you execute the script has the following properties.
n Available binaries: awk, bash, jq, head, helm, kubectl, nc, sed, tail.
n Each additional input is available as an environment variable. The name of the environment
variable is the TCA_INPUT_ concatenated with the name of the input binding. For a file, the
location of the file is specified as a value.
n The CNF environment variable is set with the identifier of the network function if the step is
executed within the context of a network function.
n The network service environment variable contains the name of the network function if the
step is executed within the context of a network function.
n CLUSTER_NAME environment variable contains the name of the workload cluster if the script
is executed within the WORKLOAD target.
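As an illustrative sketch only, a script input binding for a K8S step could use these environment variables as follows; the commands and the inputNumber binding name are examples, not prescribed values:
{
  "script": {
    "type": "string",
    "format": "text",
    "defaultValue": "echo \"Cluster: $CLUSTER_NAME, CNF: $CNF\"; kubectl get pods | head -n \"$TCA_INPUT_inputNumber\""
  }
}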
The following sample code snippet provides an example workflow template fragment for this
step.
{
"inputs": {
"script": {
"type": "string",
"format" : "text",
"required": true
},
"target": {
"type": "string",
"required": true
},
"scope": {
"type": "string",
"required": true
},
"nodeSelector": {
"type": "string",
"required": false
}
},
"outputs" : {
"FINAL_OUTPUT" : {
"type" : "string",
"description" : "Final Output"
}
},
"steps" : {
"step0" : {
"type" : "K8S",
"inBindings" : {
"timeout": {
"type": "number",
"defaultValue" : 60
},
"script" : {
"type" : "string",
"format" : "text",
"exportName" : "script"
},
"inputNumber" : {
"type" : "number",
"defaultValue" : 22
},
"target" : {
"type" : "string",
"exportName" : "target"
},
"scope" : {
"type" : "string",
"exportName" : "scope"
},
"nodeSelector": {
"type" : "string",
"exportName" : "nodeSelector"
},
"file1": {
"type": "file",
"defaultValue" : "file1.txt"
},
"file2": {
"type": "file",
"defaultValue" : "file2.txt"
}
},
"outBindings" : {
"FINAL_OUTPUT" : {
"name" : "result",
"type" : "string"
}
}
}
},
…
}
The following sample code snippet has a workflow template fragment for running Kubernetes
workflows on a VIM without a network function context.
{
"inputs" : {
"script": {
"type": "string",
"format" : "text",
"required": true
},
"target": {
"type": "string",
"required": true
},
"scope": {
"type": "string",
"required": true
},
"vimId": {
"type": "vimLocation",
"required": true
}
},
"outputs" : {
"FINAL_OUTPUT" : {
"type" : "string",
"description" : "Final Output"
}
},
"steps" : {
"step0" : {
"type" : "K8S",
"inBindings" : {
"script" : {
"type" : "string",
"exportName" : "script",
"format" : "text"
},
"inputNumber" : {
"type" : "number",
"defaultValue" : 22
},
"target" : {
"type" : "string",
"exportName" : "target"
},
"scope" : {
"type" : "string",
"exportName" : "scope"
},
"myVimId": {
"type": "vimLocation",
"exportName" : "vimId"
},
"file1": {
"type": "file",
"defaultValue" : "file1.txt"
},
"file2": {
"type": "file",
"defaultValue" : "file2.txt"
}
},
"outBindings" : {
"FINAL_OUTPUT" : {
"name" : "result",
"type" : "string"
}
}
}
},
…
}
API Netconf
The purpose of the Netconf step is to interact with a service that has a Netconf interface.
Network elements provide a Netconf interface as a configuration interface. It is used to set or
retrieve configuration data from a Netconf-capable device.
n inFile (file): The configuration file to use. Required if config is empty and the action is "merge"
or "replace".
The following workflow template fragment illustrates the usage of the netconf step.
{
"inputs": {
"hostname": {
"type": "string"
},
"password": {
"type": "string",
"format": "password"
}
},
"steps": {
"stepId1": {
"nextStepId": "stepId2",
"type": "NETCONF",
"inBindings": {
"action": {
"type": "string",
"defaultValue": "merge"
},
"inFile": {
"type": "file",
"defaultValue": "netconf.content.1.xml"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step1": {
"name": "result",
"type": "string"
}
}
},
"stepId2": {
"nextStepId": "stepId3",
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {
"type": "string",
"defaultValue": "replace"
},
"inFile": {
"type": "file",
"defaultValue": "netconf.content.2.xml"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step2": {
"name": "result",
"type": "string"
}
}
},
"stepId3": {
"nextStepId": "stepId4",
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {
"type": "string",
"defaultValue": "get"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step3": {
"name": "result",
"type": "string"
}
}
},
"stepId4": {
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {
"type": "string",
"defaultValue": "getconfig"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step4": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_step1": {
"type": "string"
},
"out_step2": {
"type": "string"
},
"out_step3": {
"type": "string"
},
"out_step4": {
"type": "string"
}
},
…
}
You must specify the following mandatory parameters when executing a workflow:
n Pause at Failure: Indicates if the workflow execution should pause in case of a failed step
execution.
n Retained Until: The time until which the workflow execution is retained.
n Workflow snapshot: A snapshot of the workflow captured when the workflow execution is
created. The snapshot is created from the complete workflow and its related objects, and its
lifecycle is tied to the existence of the workflow execution. This ensures that even if the
workflow is modified or deleted, the original workflow is available throughout the execution of
the workflow.
n Context: The context in which the workflow is executed. One of the following values:
n NONE
n NF
n NS
The behavior of the steps varies based on the selected context. The following table provides the
differences between the three contexts.
After the initial parameters of the workflow execution are set, VMware Telco Cloud Automation
creates the initial step execution. The following diagram illustrates the structure of the step
execution.
(Diagram: a workflow execution holds a snapshot of the workflow; each step execution holds a
snapshot of its step and its associated log entries.)
n Step Execution ID: Identifier of the step execution, which is a continuous list of non-negative
numbers starting from zero. Each step execution is followed by a step execution identifier.
n Step snapshot: Snapshot of the step captured from the workflow snapshot when creating
the step execution. This snapshot ensures that the changes that may occur to steps in the
workflow snapshot do not affect the created step executions.
n Start time: The time at which the step execution is created. However, this may not be the
time at which the actual execution started.
n State: Current state of the step execution, which comprises the following values:
n Waiting Before Execution: Telco Cloud Automation awaits user input to execute the step.
n Waiting After Execution: Telco Cloud Automation awaits user input after the step
execution.
n Ready To Compute Next Step: The step execution is successful, and the next step to be
executed is calculated.
n Finished: All administrative actions are completed with the step execution, and the
execution becomes immutable.
n End time: Optional time at which the step execution is completed. The end time of the step
execution is set when the step reaches the executed state.
n Variables: Snapshot of the variables taken before the execution of the current step.
n Outputs: The values that the step produces. Not all step output values are used to set variables
or workflow outputs.
The current state of the step execution is illustrated in the following diagram.
(Diagram: step execution state transitions - waiting before or after execution when a pause is
configured, abort, scheduling of a new step execution, and the Finished state.)
During the execution of the workflow, log messages are created. These log messages can be
associated with the workflow execution or the step execution. The association depends on the
origin of the log. The log messages are always bound to the step execution. These log messages
consist of the following entries: message, time, and level.
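As an illustrative sketch, a single log entry could look like the following; the timestamp representation shown here is an assumption:
{
  "message": "Step execution completed",
  "level": "INFO",
  "time": 1695000000000
}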
The workflow execution is retained until the retention time, which is 14 days. After the retention
time has passed, the workflow execution and all its related objects, such as logs, attachments,
and step executions, are purged. The retention time can be extended up to 13 days.
Prerequisites
You must create a standalone workflow and select the context to run the workflow from the
below options.
n None
Procedure
3 Click the vertical ellipse on a workflow that you want to run and click Execute.
4 From the Context Type drop-down, select the context to run the workflow.
Note Select None as the context type if you want to execute a workflow without any
context.
5 Click the browse icon and select an instance based on the context type selected.
6 Click OK.
8 (Optional) From the Steps to Pause drop-down, select the step in which you want to pause
the workflow. If you want to pause the workflow when it fails, select Pause on failure.
9 Click EXECUTE.
Prerequisites
You must enter the inputs for parameterization of the LCM operation (instantiate, scale, heal,
upgrade, and terminate).
Procedure
3 Click the vertical ellipse of a network function for which you want to run a workflow through a
network function LCM operation and click Instantiate.
4 Enter the inputs in the Inventory Detail tab and click NEXT.
5 In the Network Function Properties tab, click the horizontal ellipse of a connection point from
the Connection Point Network Mappings section.
7 Click NEXT.
9 Click Post-Installation Properties, enter the post-installation properties, and click NEXT.
10 In the Review tab, review all the information entered for the network function instance and
click INSTANTIATE.
Procedure
3 Click the vertical ellipse of a network function for which you want to run an embedded
network function workflow and click Run Workflow.
6 (Optional) From the Steps to Pause drop-down, select the step in which you want to pause
the workflow. If you want to pause the workflow when it fails, select Pause on failure.
7 Click EXECUTE.
Procedure
3 Click the expand (>) icon of a workflow for which you want to view the details.
Procedure
3 Click the expand (>) icon of a workflow for which you want to view the details.
4 Click the expand (>) icon of a workflow step in Workflow Steps to view the workflow step
execution details of that step.
Procedure
3 Click a network function for which you want to view the workflow execution details.
4 Click the Workflows tab to see all the workflows run on that network function.
Procedure
3 Click a network service for which you want to view the workflow execution details.
4 Click the Workflows tab to see all the workflows run on that network service.
Procedure
3 Click the expand (>) icon of a workflow for which you want to view the vRO instances.
Note You must have system administrator privileges to execute the above procedure; otherwise,
you must log in to vRO manually.
Procedure
8 Click the vertical ellipse on the workflow, which you have debugged, and click Resume.
9 Click RESUME.
Note You can update the original workflow for standalone workflows only. The network function
and network service workflows cannot be updated.
Procedure
3 Click the vertical ellipse of a workflow for which you want to update the changes.
Procedure
4 Click Delete.
Procedure
3 Click the vertical ellipse of a workflow execution that you want to end.
4 Click Abort.
Note The initial retention is fixed at 14 days and is not configurable. However, you can extend
the retention duration for a workflow execution instance by using the VMware Telco Cloud
Automation portal.
Procedure
3 Click the vertical ellipse of a workflow for which you want to extend the retention time.
5 Select the retention time from the Retention Time (days) drop-down.
6 Click UPDATE.
n Workflow Read: The user can view the workflow instances using this privilege.
n Workflow Design: The user can design a workflow using this privilege.
n Workflow Execute: The user can execute a workflow using this privilege.
Note If the context of the workflow execution is not "none" then the user needs NF/NS
LCM permission to run workflows on the concrete NF/NS instance. The user may need more
permissions based on the step input values of certain steps.
Telco Cloud Automation has two built-in default roles. They are:
n Workflow Designer: The user can read and design the workflows using this role.
n Workflow Executor: The user can read and execute the workflows using this role.
n The users can design or execute any workflow using the built-in roles.
n You can use the name of the workflow to limit the permission to only access a specified set of
workflows.
n If you want to prevent a user from accessing all workflow instances, then you can define an
advanced filter with a random name as a designator.
n You can grant access to a user to an embedded workflow instance with the workflow read
privilege, or the user can inherit it from having access to the catalog entry. This means that
if the user can access a catalog entry, it gives the user implicit access to all the workflows
embedded in a catalog.
n The workflow execution creator and system administrator can access the workflows.
The NETCONF step supports the following actions:
n get
n getconfig
n edit-config
n merge
n replace
Prerequisites
Unlike other workflows in VMware Telco Cloud Automation that use VMware vRealize
Orchestrator, the NETCONF workflow runs from the NETCONF client that is located within the
Telco Cloud Automation Control Plane (TCA-CP) appliance. From a connectivity and firewall
perspective, ensure that TCA-CP has access to the NETCONF server IP address and port before
running the workflow.
n If you change the action to get, VMware Telco Cloud Automation runs the get command on
the NETCONF server.
{
"id":"netconf_getconfig_workflow",
"name": "Netconf Get-Config Workflow",
"description":"Netconf Get-Config Workflow",
"version":"1.0",
"startStep":"step0",
"variables": [
{"name":"vnfId", "type": "string"}
],
"input": [
{"name": "USER", "description": "Username", "type": "string"},
{"name": "PWD", "description": "Password", "type": "password"},
{"name": "HOSTNAME", "description": "Hostname", "type": "string"}
],
"output": [
{"name":"result", "description": "Output Result", "type": "string"}
],
"steps":[
{
"stepId":"step0",
"workflow":"NETCONF_WORKFLOW",
"namespace": "nfv",
"type":"task",
"description": "Netconf Get-Config Workflow",
"inBinding":[
{"name":"action", "type":"string", "default" : "getconfig"},
{"name": "username", "type": "string", "exportName": "USER"},
{"name": "password", "type": "password", "exportName": "PWD"},
{"name": "port", "type": "number", "default": "17830"},
{"name": "hostname", "type": "string", "exportName": "HOSTNAME"}
],
"outBinding": [
{"name": "result", "type": "string", "exportName": "result"}
],
"nextStep":"END"
}
]
}
n If you change the action to replace, VMware Telco Cloud Automation runs the edit-config
command with the replace option.
{
"id":"netconf_merge_workflow",
"name": "Netconf Merge Workflow",
"description":"Netconf Merge Workflow",
"version":"1.0",
"startStep":"step0",
"variables": [
{"name":"vnfId", "type": "string"}
],
"input": [
{"name": "USER", "description": "Username", "type": "string"},
{"name": "FILENAME", "description": "Filename", "type": "file"},
{"name": "PWD", "description": "Password", "type": "password"},
{"name": "HOSTNAME", "description": "Hostname", "type": "string"}
],
"output": [
{"name":"result", "description": "Output Result", "type": "string"}
],
"steps":[
{
"stepId":"step0",
"workflow":"NETCONF_WORKFLOW",
"namespace": "nfv",
"type":"task",
"description": "Netconf Merge Workflow",
"inBinding":[
{"name":"action", "type":"string", "default" : "merge"},
{"name": "inFile", "type": "file", "exportName": "FILENAME"},
{"name": "username", "type": "string", "exportName": "USER"},
{"name": "password", "type": "password", "exportName": "PWD"},
{"name": "port", "type": "number", "default": "17830"},
{"name": "hostname", "type": "string", "exportName": "HOSTNAME"}
],
"outBinding": [
{"name": "result", "type": "string", "exportName": "result"}
],
"nextStep":"END"
}
]
}
n Managing Alarms
Managing Alarms
The Dashboard tab displays the total number of alarms triggered. It also displays the number of
alarms according to their severity.
VNF Alarms
VNF alarms are triggered when VMware Telco Cloud Automation identifies anomalies in the
network connection status or when the power state changes. VMware Telco Cloud Automation
also triggers VNF alarms that are predefined and user-defined in VMware vSphere.
CNF Alarms
CNF alarms are triggered for system-level and service-level anomalies. For example, system-level
alarms are triggered when an image or resource is not available, or when a pod becomes
unavailable. Service-level alarms are triggered when the number of replicas that you have
specified is not identical to the number of nodes that get created, and so on. Here are some
possible anomalies for which VMware Telco Cloud Automation displays an error message and
triggers an alarm. These alarms are in the Critical state:
n Image pull error - The URL to the Helm Chart image is incorrect or the image cannot be
accessed due to network issues.
VIM Alarms
VIM alarms are triggered at the VIM level for CNF infrastructure anomalies. For example, when
a Kubernetes cluster reaches its memory or CPU resource limit, its corresponding VIM triggers
an alarm. Here are some possible CNF infrastructure anomalies for which alarms are triggered.
These alarms are in the Warning state:
n CNF/VNF level - To view the alarms of individual CNFs and VNF instances, go to the
Inventory tab, click a VNF or CNF instance, and click Alarms.
n Network Service level - VNF and CNF alarms are listed at the corresponding Network Service
level.
n VDU level - For a VNF, the alarms are also listed at the corresponding VDU level.
n Global level - You can view the global alarms for all entities and users from the
Administration > Alarms tab.
1 Go to Administration > Alarms. Details of the alarm such as the alarm name, its associated
entity, its associated managed object, alarm severity, alarm triggered time, description, and
state are displayed.
2 To acknowledge a triggered alarm, select the alarm and click Acknowledge. When the
acknowledgment is successful, the state of the alarm changes to Acknowledged. To
acknowledge multiple alarms together, select the alarms that you want to acknowledge and
click Acknowledge.
By default, the list refreshes every 120 seconds. To get the current state of the alarms, click
Refresh.
VNF Reports
You can generate reports for performance metrics such as Mean CPU Usage and Mean Memory
Usage for each VNF. Set the frequency of report collection, end date and time, and the
performance metrics that you want to generate reports for.
n Disk Read
n Disk Write
The performance management report includes stats collected at the VNF and VDU levels for a
VNF instance.
CNF Reports
For this release, you can generate only the Mean CPU Usage and Mean Memory Usage
performance metrics reports.
Note To generate performance management reports for CNFs, you must install Prometheus
Operator on the namespace vmware-paas and set the default port to 9090.
Procedure
3 Click the desired CNF or VNF, and from the details page click the PM Reports tab.
5 In the Create Performance Management Job Report window, enter the following details:
n Select the collection period time, reporting frequency in hours and minutes, reporting end
date and time.
The report is scheduled and is available under PM Reports in the details page. It stays active
from the current time stamp until the provided end time.
7 To download the generated report, click the More (>) icon against your report name and click
Download.
Note You can only download those reports that are in the Available state. The generated
reports are available for download for 7 days.
Prerequisites
Note This procedure is not supported for network functions that are imported from partner
systems.
Procedure
n To view more details of a Virtual Deployment Unit (VDU) such as alerts, status, name,
memory, vCPU, and storage, click the i icon.
n To view more information about the virtual link, point at the blue square icon on the VDU.
n To view detailed information about the VDU and the VNFs, their performance data,
alarms, and reports, click the ⋮ icon on the desired VDU and click Summary. The details
page provides the following tabs:
n Alarms - Lists the alarms generated for the VDUs of the selected VNFs. You can
acknowledge alarms from here.
n To view historical tasks for a desired network function, go to Network Functions >
Inventory and click the desired network function. The Tasks tab displays the historical
tasks and their status.
Prerequisites
Note This procedure is not supported for network functions that are imported from partner
systems.
Procedure
n Inventory - Displays the summary of the status of the pods, deployments, and services.
To display the tree view, click the tree icon below the Inventory tab.
n Alarms - Lists the alarms generated for the selected CNFs. You can acknowledge alarms
from here.
n PM Reports - Displays the list of performance reports that are being collected. To
set parameters for generating performance reports, click Generate Reports. You can
generate reports for a metric group, set the collection period, reporting period, and the
reporting end date.
Prerequisites
Procedure
n To view and acknowledge the consolidated alarms of all the VNFs and CNFs that belong
to the network service, click the Alarms tab. You can also view the alarms from the
Topology tab. Click the More (...) icon on the desired network service and select Alarms.
n View historical tasks for the selected network service from the Tasks tab.
n Administrator Configurations
n License Consumption
Create a Tag
Create a tag and associate a key-value pair and objects to it.
Prerequisites
Procedure
n Virtual Infrastructure
4 Click Add.
Results
Edit a Tag
You can edit a tag to add or delete key values and select or deselect associable objects.
Prerequisites
Procedure
5 Click Update.
Results
Delete a Tag
You can delete a tag and remove it from the Tags list.
Prerequisites
Procedure
Results
Prerequisites
Procedure
4 Click Transform.
5 Click Update.
Results
Log entries from the specified time period are displayed in the table. You can click Download
Audit Logs to download a copy of the displayed logs to your local machine.
Go to Administration > Troubleshooting, select the logs and click Request to generate a support
bundle.
If you intend to contact VMware support, go to Administration > Support and copy the support
information to your clipboard. This information is required in addition to the support bundle.
Note To enable quick and hassle-free troubleshooting, the Auto Approval option for collecting
VMware Telco Cloud Automation logs is enabled by default. To provide the logs manually,
deactivate this option on each appliance.
Administrator Configurations
You can do the following:
Prerequisites
Procedure
3 Click Edit.
a Status - To enable or disable the login banner, click the button corresponding to Status.
When you enable the Status, the Telco Cloud Automation displays the login banner at the
login screen.
b Checkbox Consent - To enable or disable the consent check box, click the button
corresponding to Checkbox Consent. When you enable the Checkbox Consent, the user
must agree to the message before logging in to Telco Cloud Automation.
c Title - Title of the consent message. The maximum allowed length of the title message is
48 characters.
d Message - The detailed message for the consent. To view the complete message on the
login screen, you can click the title. The maximum allowed length of the message is 2048
characters.
5 Click Save.
Results
n Global default isolation mode: Sets the default isolation mode for Kubernetes clusters.
n VIM level default isolation: Inherits global isolation mode, and this is the isolation mode
of CNFs deployed into the cluster. However, you can edit the settings by navigating
to Infrastructure > Virtual Infrastructure and then clicking the Options (three dots)
corresponding to the cloud instance.
n CNF level isolation mode: Inherits VIM isolation mode and you can edit the settings. The
settings that you modify apply to the next CNF operation.
Prerequisites
Procedure
3 From the Default Isolation Mode drop-down list, select one of the isolation modes to be
applied to the Kubernetes clusters:
n Restricted: Each Network Function has access to its namespace, and no access is granted
to any other namespace or cluster-level resources.
Note By default, the Kubernetes VIMs are in permissive mode, and no cluster-level
privilege separation is enforced. To enable restricted policies, you must set the isolation
mode to Restricted.
4 Click Update.
Prerequisites
Procedure
4 To add the kernel version, click ADD KERNEL VERSION and provide the following
information:
5 Click Add.
6 To add the DPDK version, click ADD SUPPORTED DPDK and provide the following
information:
n Repository FQDN: Enter the FQDN of the repository where the DPDK resides.
7 Click Add.
License Consumption
VMware Telco Cloud Automation sends out an alert when license consumption crosses 90%.
However, user operations on Network Functions and Network Services are not blocked.
CPU license usage is calculated based on the number of managed vCPUs per VIM. The
transformation factor used for calculating CPU license usage is 12 vCPUs = 1 CPU Package
License.
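For example, under this transformation factor, a VIM with 96 managed vCPUs would consume 8 CPU package licenses (96 / 12 = 8).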
To view the details about the number of available and used licenses, perform the following steps:
Procedure
Results
The Licensing page displays the CPUs available and utilized per VIM.
Procedure
1 Log in to the VMware Telco Cloud Automation Appliance Manager at https://<tca-m or
tca-cp>:9443.
3 Select the trusted certificate type that you want to import and do one of the following:
4 Click Apply.
Procedure
For more information on obtaining the thumbprint, see the section Obtain vSphere
Certificate Thumbprints in Prepare to Deploy Management Clusters to vSphere.
2 Log in to the Telco Cloud Automation Control Plane appliance attached to vCenter as an
admin user through the SSH client.
3 Run the following command and note the CR name and namespace:
To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.
a SSH into the management cluster control plane virtual IP with the user name capv.
b Run the following command and note the CR name and namespace:
To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.
Prerequisites
Procedure
1 Run the following command and note the CR name and namespace.
To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.
Ensure that you have updated the correct vCenter IP and thumbprint by performing the
following:
n Verify the server address field and ensure that the vCenter IP is correct.
n Verify the thumbprint field and edit the value with the latest thumbprint that you have
already obtained.
apiVersion: telco.vmware.com/v1alpha1
kind: VCenterPrime
metadata:
name: vcprime-mgmt-cluster07
namespace: tca-system
spec:
server:
address: 10.10.10.10
credentialRef:
kind: Secret
name: vcprime-mgmt-cluster07-secret
namespace: tca-system
subConfig:
datacenter: tcpscale-VMCCloudDC
thumbprint: FA:3A:8E:E1:B3:23:DR:FE:F3:6E:19:BB:FE:01:D1:18:E8:24:88:F
Update the TLS Thumbprint for TCA and TKG Management Clusters
If the vCenter certificate of a secondary cloud changes, perform the following steps to update the
TLS thumbprint.
Procedure
1 SSH to the management cluster control plane virtual IP with the user name capv and update
{mgmt-cluster-name}-vsphere-cpi-addon secret.
3 Update the CPI vSphere configuration with the thumbprint of the temporary file.
[Workspace]
server = "10.10.10.99"
datacenter = "test-dc"
thumbprint = "13:C1:98:D9:E2:DF:A9:6A:95:4C:6A:96:EA:8D:FE:CF:56:6C:D3:1C"
ip-family = "ipv4"
Procedure
1 SSH to the management cluster control plane virtual IP with the user name capv.
3 Edit each of the vSphere clusters using the following command and update the
spec.thumbprint field with the correct thumbprint.
For management clusters, add the tkg-system namespace to the kubectl commands:
Note The option to upgrade VMware Telco Cloud Automation using the upgrade bundle is only
available for VM-based VMware Telco Cloud Automation.
Procedure
1 Download the VMware Telco Cloud Automation upgrade bundle from VMware Customer
Connect.
2 Save the upgrade bundle in a jump host and ensure that the jump host can access the
appliance to be upgraded.
3 Log in to the Appliance Management interface through FQDN. For example, https://tca-cp-
ip-or-fqdn:9443.
On the Upgrade page, details about the current installed version, upgrade date, and upgrade
state are displayed.
5 Click Upgrade.
7 Click Continue.
Results
You can upgrade the cloud-native deployment of VMware Telco Cloud Automation in both the
internet enabled and airgapped environments. You can also perform the basic troubleshooting
related to the upgrade.
Prerequisites
n Ensure that VMware Telco Cloud Automation is in a healthy condition by making sure that all
services are running in the appliance summary of the platform manager.
Note
n You cannot schedule VMware Telco Cloud Automation Control Plane upgrade from a
VMware Telco Cloud Automation Manager appliance that is in HA mode.
n During the upgrade process, you cannot use VMware Telco Cloud Automation for any
operations.
n Upgrade the VMware Telco Cloud Automation Manager before upgrading the VMware Telco
Cloud Automation Control Plane.
Note Upgrading cloud native TCA from an older version to TCA version 2.2 is not supported.
Procedure
Results
The upgrade begins. To monitor the upgrade progress, view the Status column.
What to do next
After the successful upgrade, modify the VMware Tanzu Kubernetes Grid image using
Infrastructure Automation. For details, see Add Images or OVF.
Prerequisites
n Ensure that VMware Telco Cloud Automation is in a healthy condition by making sure that all
services are running in the appliance summary of the platform manager.
n To upgrade VMware Telco Cloud Automation in an airgapped environment, it must have been
deployed using an Airgap Server and the FQDN of the Airgap server must not change after
deployment.
n Before starting the upgrade, ensure that the Airgap server is running the same version
to which you want to upgrade VMware Telco Cloud Automation. For example, if you are
upgrading VMware Telco Cloud Automation to version 2.1.0, then the Airgap server must also
be running version 2.1.0.
n If the Airgap server is using a self-signed certificate or private CA-signed certificate, then
the certificate must be the same as the one used while deploying VMware Telco Cloud
Automation.
Note Upgrading cloud native TCA from an older version to TCA version 2.2 is not supported.
Procedure
5 Browse and select the corresponding upgrade BOM file of the newer release to which you
want to upgrade.
6 Provide the Airgap Server FQDN and then click Upgrade to initiate the upgrade.
Results
The upgrade begins. To monitor the upgrade progress, view the Status column.
Note
n The upgrade process can take up to 45 mins.
n During the upgrade process, you cannot use VMware Telco Cloud Automation for any
operations.
n Upgrade the VMware Telco Cloud Automation Manager before upgrading the VMware Telco
Cloud Automation Control Plane.
Procedure
2 Open both VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation
Control Plane upgrade BOMS using any text editor.
9 Browse and select the corresponding upgrade BOM file of the newer release to which you
want to upgrade.
10 Click Upgrade.
Results
The upgrade begins. To monitor the upgrade progress, view the Status column.
The API returns a JSON response; use clusterName to get the name of the VMware Telco Cloud
Automation cluster. Use the appliance manager REST API to get the kubeconfig.
The API returns a JSON response; use kubeconfig to get the base64-encoded kubeconfig. Perform
a base64 decode of the kubeconfig and use the decoded value for the kubectl and helm commands.
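A minimal sketch of this decode step, assuming the encoded value has already been extracted from the API response (file paths are illustrative):
# Decode the base64-encoded kubeconfig returned by the appliance manager API
echo "<base64 encoded kubeconfig>" | base64 -d > /tmp/tca-kubeconfig

# Use the decoded kubeconfig with kubectl and helm
kubectl --kubeconfig /tmp/tca-kubeconfig get pods -A
helm --kubeconfig /tmp/tca-kubeconfig list -A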
1 Obtain the names of VMware Telco Cloud Automation upgrade pods using the following
command:
Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-
system is the namespace for VMware Telco Cloud Automation Control Plane.
Example
2 To view the logs of the VMware Telco Cloud Automation upgrade pods, use the following
command for each pod:
Follow these steps to store the VMware Telco Cloud Automation cluster kubeconfig.
1 Save the base64-encoded kubeconfig and the VMware Telco Cloud Automation cluster name in
a JSON file in the following format:
$ cat tca_kubeconfig.json
{
"data":{
"items":[
{
"config":{
"url":"https://<TCA cluster controlPlaneEndpointIP>:6443",
"clusterName":"<TCA Cluster name>",
"kubeconfig":"<base64 encoded kubeconfig>"
}
}
]
}
}
Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-system is
the namespace for VMware Telco Cloud Automation Control Plane.
After the restart, you can retry the VMware Telco Cloud Automation upgrade.
1 Restart the VMware Telco Cloud Automation helm service pod using the following command:
Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-
system is the namespace for VMware Telco Cloud Automation Control Plane.
However, if the upgrade fails after multiple retries, you can follow the steps to uninstall and
reinstall the services.
3 Add the VMware Telco Cloud Automation helm repo to be able to fetch helm charts.
Note VMware Telco Cloud Automation services are deployed in the namespaces: tca-mgr,
tca-system, istio-system, metallb-system, tca-services, postgres-operator-system, and
fluent-system.
Note You can use the helm search repo option to search for VMware Telco Cloud Automation
helm charts.
n System Management APIs: Used for configuring TCA-Manager and TCA-CPs. These APIs are
mainly used for configuring and troubleshooting the TCA-CP appliances.
n NFV Orchestration APIs: Used to manage VNF Lifecycle, VNF Packages, Network Services,
Event Subscriptions, and so on with NFV SOL APIs (SOL005 and SOL003), CaaS and Virtual
Infrastructure Management APIs, Partner Systems, and Extension APIs.
For information on the supported VMware Telco Cloud Automation versions, see VMware
Product Interoperability Matrix.
To download the VMware Telco Cloud Automation SDKs, go to VMware Telco Cloud Automation
SDK.
For more information on the VMware Telco Cloud Automation SDKs, see Telco Cloud Automation
SDK Programming Guide.
API
PUT https://<TCA_CP>/admin/hybridity/api/global/settings/<namespace>/<option>
Options
Namespace Option Description Sample Request
You can configure the behavior for virtual machine placement, update the supported hardware
version of vfio-pci device drivers, and update the wait time and poll intervals for customization
tasks. Configure these settings to change the default behavior only when there is an issue with
your existing environment.
API
PUT: /admin/hybridity/api/global/settings/<namespace>/<property>
{
"value": <value>
}
Note The authentication is the same as the other VMware Telco Cloud Automation APIs.
Note The authentication is the same as the other VMware Telco Cloud Automation APIs.
API for NS
PUT /admin/hybridity/api/global/settings/global/NetworkServiceSchemaValidation
{
"value": false
}
Note The authentication is the same as the other VMware Telco Cloud Automation APIs.
the setup is Intel-based. If you are using an AMD-based setup, use the following API to update
the global settings to send the hardware version as 18.
Prerequisites
Run this API on VMware Telco Cloud Automation Manager.
API
PUT: /admin/hybridity/api/global/settings/InfraAutomation/vfioPciHardwareVersion
{
"value": "18"
}
Note
n Update the appropriate value based on the firmware.
Prerequisites
Run this API on VMware Telco Cloud Automation Manager.
API
PUT: /admin/hybridity/api/global/settings/InfraAutomation/<nodePoolId>_enableDRS
{
"value": true
}
Note
n Ensure that you replace all the hyphens (-) in <nodePoolId> with underscores (_).
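For example, a hypothetical node pool ID of np-1234-abcd becomes np_1234_abcd, so the property name is np_1234_abcd_enableDRS.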
Prerequisites
Run this API on VMware Telco Cloud Automation Manager.
PUT: /admin/hybridity/api/global/settings/InfraAutomation/reservedCoresPerNumaNode
{
"value": 3
}
PUT: /admin/hybridity/api/global/settings/InfraAutomation/reservedMemoryPerNumaNode
{
"value": 1024
}
PUT: /hybridity/api/infra/k8s/clusters/<workloadclusterId>/esxinfo
{
}
Prerequisites
Run this API on VMware Telco Cloud Automation Manager.
Number of Polls
By default, VMware Telco Cloud Automation polls 60 times. If your customization requires a
longer time to complete, you can increase the poll count.
PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusRetryCount
{
"value": 120
}
PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusWaitTime
{
"value": 120
}
PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusFailureRetryCount
{
"value": 30
}
API
PUT:https://<>/admin/hybridity/api/global/settings/{service}/{option}
Options
Namespace Option Description Sample Request
PUT:https://<>/admin/hybridity/api/global/settings/ClusterAutomation/intentObserverTaskDelay
The following table lists the supported partner systems and their versions:
Prerequisites
Note You must add at least one VMware Cloud Director-based cloud to your VMware Telco
Cloud Automation environment before adding a partner system.
You must have the Partner System Admin privileges to perform this task.
Procedure
3 In the Register Partner System page, select the partner system and enter the appropriate
information for registering the partner system.
4 Click Next.
6 Click Finish.
Results
The partner system is added to VMware Telco Cloud Automation and is displayed on the Partner
Systems page.
Example
In this example, we list the steps to add Nokia CBAM as a partner system:
3 In the Register Partner System page, Nokia CBAM is preselected. Enter the following
information:
n Name - Enter a unique name to identify the partner system in VMware Telco Cloud
Automation.
n Version - Select the version of the partner system from the drop-down menu.
Note You can get the Client ID and Secret from Nokia CBAM.
4 Click Next.
6 Click Finish.
Nokia CBAM is added to VMware Telco Cloud Automation and is displayed in the Partner
Systems page.
What to do next
n You can select the partner system and click Modify Registration or Delete Registration to
edit the configuration or remove the system from VMware Telco Cloud Automation.
n You can add a network function catalog from the partner system to VMware Telco Cloud
Automation.
Prerequisites
You must have the Partner System Admin privileges to perform this task.
Procedure
2 Navigate to Infrastructure > Partner Systems and select the partner system that you want to
edit.
5 Click Next.
6 Select additional VIMs or deselect the VIMs that you do not want to associate your partner
system with.
Note You can dissociate a VIM only if the CNFs instantiated on the VIM are deleted.
7 Click Finish.
Results
What to do next
To view the updated details of your partner system, go to Infrastructure > Partner Systems,
select your partner system, and click the > icon.
Prerequisites
You must have the Partner System Admin privileges to perform this task.
Procedure
2 Navigate to Infrastructure > Partner Systems and select the partner system.
4 In the Add Network Function Catalog page, enter the following details:
n Product Name - The name of the product associated with the network function catalog.
5 Click Add.
Results
The network function catalog is added to the Network Functions > Catalogs page.
Note You cannot edit the Network Function Description of a network function catalog that is
added from a partner system.
Prerequisites
To perform this task, you must have the Partner System Administrator privileges.
Note
n Ensure that all Harbor repository URLs contain the appropriate port numbers such as 80, 443,
8080, and so on.
Procedure
3 Select Harbor.
n URL - Enter the URL of your repository. If you use a Harbor repository from a third-party
application, ensure that you provide this URL in VMware Telco Cloud Automation Control
Plane (TCA-CP).
5 Click Next.
7 Click Finish.
Results
You have successfully registered your Harbor repository. You can now select this repository for
resources when instantiating a CNF.
What to do next
The Harbor inventory synchronizes every 5 minutes. After adding, modifying, or deleting a
Harbor repository, you cannot view the changes in your inventory until the next synchronization
happens. To refresh the inventory manually, go to Infrastructure > Partner Systems and click
Refresh Harbor Inventory.
Prerequisites
To perform this task, you must have the Partner System Administrator privileges.
Procedure
5 Click Finish.
Results
You have successfully added your air-gapped repository. The repository is now listed in the list
of repositories under Partner Systems.
What to do next
You can now use the air-gapped repository when deploying a management or workload cluster
in your air-gapped environment.
Prerequisites
To perform this task, you must have the Partner System Administrator privileges.
Procedure
n No Proxy: If you do not wish to route Internet traffic through a proxy server, enter the
IP address of a server where VMware Telco Cloud Automation can fetch the required
information. You can add multiple server IP addresses.
n CA Certificate: If the proxy server uses a self-signed certificate, paste the CA certificate
used for signing the Proxy server certificate.
4 Click Finish.
Results
You have successfully registered the proxy server. You can now use it when deploying a VMware
Tanzu Kubernetes Grid cluster.
Prerequisites
To perform this task, you must have the Partner System Administrator privileges.
Note In TCA 2.2, you cannot register ECR through partner systems in an air-gapped environment.
Procedure
n FQDN - Enter the fully qualified domain name (FQDN). For example,
example.amazonaws.com.
n ECR Access Key - Enter the ECR access key that is used to perform actions.
n Role ARN (Optional) - Enter the Amazon Resource Name (ARN) specifying the role.
5 Click Next.
7 Click Finish.
n PTP Notifications
Prerequisites
Product Version
Procedure
1 Ensure that you are using the supported VMware Telco Cloud Automation/Control Plane,
ESXi, and vCenter Server versions.
3 Create a CSAR file or edit an existing file, for example, /Definitions/VNFD.yaml, and add
the following parameter under node_components.
enableSMT: true
For example:
5 After instantiating the Network Function, log in to the Worker node and verify that the
following values are set, as expected.
% ssh [email protected]
([email protected]) Password:
Last login: Wed Feb 9 23:42:28 2022 from 172.31.251.4
00:00:02 up 1 day, 4:11, 1 user, load average: 2.06, 2.28, 2.06
capv@wc-smc-np1-759ffb5759-fpfmx [ ~ ]$ sudo su
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/active
1
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/control
on
root [ /home/capv ]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6312U CPU @ 2.40GHz
Stepping: 6
CPU MHz: 2399.999
BogoMIPS: 4799.99
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx
fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl
xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm
abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase
tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma
clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat
avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg
avx512_vpopcntdq rdpid md_clear flush_l1d arch_capabilities
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0-1
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
0-1
capv@wc-h1314-np1314-594dc47cb9-kxr5f [ ~ ]$ sudo su
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/active
0
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/control
notsupported
root [ /home/capv ]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0,2-63
Off-line CPU(s) list: 1
Thread(s) per core: 1
Core(s) per socket: 63
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6338N CPU @ 2.20GHz
Stepping: 6
You can configure the PTP in VMware Telco Cloud Automation in two modes:
The diagram shows how the PTP works in the passthrough mode.
Note
n For XXV710 card, you can use any Physical Function (PF) port for PTP.
n For E810 card, you can use only Physical Function 0 (PF0) port for PTP.
You can create a host profile to use the PTP over VF. For details, see Add a Host Profile.
Note Before you create the host profile for PTP over VF, ensure that you use a PTP-enabled
port connected to PTP-enabled switch.
Creating a host profile enables you to control VF assignment from PF for a PTP.
For PTP over VF, the default VF assignment can happen from any of the SRIOV-enabled PFs.
Intel E810 - Vendor ID 8086; Device IDs 0x1591, 0x1592, 0x1593, 0x1599, 0x159A, 0x159B; firmware 3.0; icen driver 1.6.5; iavf driver 4.2.7
Note If you had configured PTP and ACC100 in passthrough mode in VMware Telco Cloud
Automation 1.9.1 or 1.9.5 and want to upgrade to VMware Telco Cloud Automation 2.0, you do
not need to create a host profile for PTP.
For the PTP and ACC100, you need to configure the following sections in the Host Profile.
To configure the device in Passthrough, SRIOV, or Custom mode, use the PCI Device settings. For
example, if the host has an E810 card with four ports, and you want to put PF0 in Passthrough
Active and PF[1-3] in SRIOV mode, you can use PCI Device settings in Host Profile to implement
these configurations.
PCI Device Groups defines the filters for selecting a particular PF for the PTP. For example, if in a
card, you have PF0 and PF1 in Passthrough Active mode and have connected the PTP switch to
PF0, you can use the filters in PCI Device Group to select PF0 for PTP.
Note
n XXV710 and E810 cards support PTP in passthrough mode.
n For PTP in Passthrough mode, configure the PTP port in Passthrough Active and SRIOV
disabled mode. You can perform these configurations using the Host Profile function of the
VMware Telco Cloud Automation.
Note
n You need to apply the Host Profile on the Network Function, which uses ACC100, before you
can instantiate that Network Function.
n If you have already created a worker node cluster on the host, you can either delete that
worker node cluster and recreate it after applying the host profile, or set the worker node to
Enter Maintenance Mode in VMware Telco Cloud Automation and power off the worker node in
VMware vCenter.
The custom property of the host profile requires the .cfg file to be available on the VMware ESXi host.
Procedure
; SPDX-License-Identifier: Apache-2.0
; Copyright(c) 2020 Intel Corporation
[MODE]
pf_mode_en = 0
[VFBUNDLES]
num_vf_bundles = 16
[MAXQSIZE]
max_queue_size = 1024
[QUL4G]
num_qgroups = 0
num_aqs_per_groups = 16
aq_depth_log2 = 4
[QDL4G]
num_qgroups = 0
num_aqs_per_groups = 16
aq_depth_log2 = 4
[QUL5G]
num_qgroups = 4
num_aqs_per_groups = 16
aq_depth_log2 = 4
[QDL5G]
num_qgroups = 4
num_aqs_per_groups = 16
aq_depth_log2 = 4
Follow this procedure to create the host profile for PTP in Passthrough mode and the ACC100 device.
1 To add the passthrough device, select Passthrough from the Type drop-down menu.
2 To enable the passthrough device, click the toggle button corresponding to Enable Passthrough.
n To add the items, select the value from the key drop-down menu.
n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendor ID of
Intel.
n Device ID: The device identification for the port used for PTP. For example, 0x1593.
n Index: Index of the PTP port in Passthrough active devices. For example, 0.
1 To add the ACC100 device, select SR-IOV from the Type drop-down menu.
2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 16.
c To add the action item for custom properties for ACC100 device, click Add Action under
Device Details.
3 To add the Configuration File value, click Browse, then navigate to and select the
acc100_config_vf_5g.cfg file. For details on acc100_config_vf_5g.cfg, see
Obtaining the Custom File for ACC100.
n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendor ID of
Intel.
n Device ID: The device identification for the port used for PTP. For example, 0xd5c.
To add the key-value, select the key from the key drop-down menu and type the
corresponding value in the value field.
n To add the items, select the value from the key drop-down menu.
6 To add the PCI device group, click ADD GROUP under PCI Device Groups.
n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendor ID of Intel.
n Device ID: The device identification for the port used for PTP. For example, 0x1593.
n Index: Index of the PTP port in Passthrough active devices. For example, 0.
Note
n To add a filter item, click the + icon available in the filter.
n Ensure that you add filter items, not additional filters. Clicking Add Filter adds another
filter rather than a filter item.
10 Enter a value for the following fields in Reserved cores per NUMA node.
n Min core for CPU reservation per NUMA node. For example, 3.
What to do next
Follow this procedure to create the host profile for PTP over VF and the ACC100 device.
1 To add the SRIOV device, select SR-IOV from the Type drop-down menu.
2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 8.
c To add the filter, click Add Filter. Add the key as alias and value as vmnic2.
1 To add the ACC100 device, select SR-IOV from the Type drop-down menu.
2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 16.
a To add the action item for custom properties for ACC100 device, click Add Action under
Device Details.
3 To add the Configuration File value, click Browse, then navigate to and select the
acc100_config_vf_5g.cfg file. For details on acc100_config_vf_5g.cfg, see
Obtaining the Custom File for ACC100.
To add the key-value, select the key from the key drop-down menu and type the
corresponding value in the value field.
n To add the items, select the value from the key drop-down menu.
n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendor ID of
Intel.
n Device ID: The device identification for the port used for PTP. For example, 0xd5c.
6 To add the PCI device group, click ADD GROUP under PCI Device Groups.
Note
n To add a filter item, click the + icon available in the filter.
n Ensure that you add filter items, not additional filters. Clicking Add Filter adds another
filter rather than a filter item.
10 In the first filter item, select the key as sriovEnabled and enable it from the radio button.
11 In the second filter item, select the key as alias and the value as name of the physical
interface which you want to use for PTP. For example vmnic2.
Note After you add the second filter item, ensure that you can see both alias and
sriovEnabled under a single filter.
12 Enter a value for the following fields in Reserved cores per NUMA node.
n Min core for CPU reservation per NUMA node. For example, 3.
Prerequisites
At present, only Intel E810 NICs support PTP over VF. You can use any port of the E810 card for
PTP over VF.
What to do next
Perform this procedure to apply the host profile setting on the cell site group.
Prerequisites
Procedure
4 Click the radio button corresponding to the Cell Site Group on which you need to apply the
host profile.
5 Click Edit.
6 From the Select Host Profile drop-down menu, select the host profile.
7 Click Save.
8 Click the radio button corresponding to the Cell Site Group on which you need to apply the
host profile.
9 Click the Resync button to apply the host profile. Ensure that the Status of the host displays
Provisioned.
What to do next
n If you deleted an already created worker node cluster, recreate that worker node cluster.
n If you had set the worker node to Enter Maintenance Mode in VMware Telco Cloud
Automation, then set that worker node to Exit Maintenance Mode in VMware Telco Cloud
Automation and power on the worker node in the VMware vCenter.
n Instantiate NF that uses ACC100. For details, see Instantiating a Network Function.
Note For PTP in Passthrough mode, specify Device Type as NIC. For details on CSAR
modification, see Infrastructure Requirements Designer.
ptp:
required: true
propertyName: ptp
default: 'ptp'
type: string
format: pf_group
acc100:
required: true
propertyName: acc100
default: 'acc100'
type: string
format: pf_group
....
....
passthrough_devices:
- device_type: NIC
pf_group: ptp
isSharedAcrossNuma: true
- device_type: ACC100
pf_group: acc100
resourceName: sriovacc100vfio
dpdkBinding: vfio-pci
isSharedAcrossNuma: true
Note For PTP over VF, specify Device Type as PTP. For details on CSAR modification, see
Infrastructure Requirements Designer.
ptp:
required: true
propertyName: ptp
default: 'ptp'
type: string
format: pf_group
acc100:
required: true
propertyName: acc100
default: 'acc100'
type: string
format: pf_group
....
....
passthrough_devices:
- device_type: PTP
pf_group: ptp
isSharedAcrossNuma: true
- device_type: ACC100
pf_group: acc100
resourceName: sriovacc100vfio
dpdkBinding: vfio-pci
isSharedAcrossNuma: true
Architecture Diagram
Best Practices
1. It is advisable to have a symmetric layout on both NUMA nodes. For example:
2. To configure PTP, create PCI groups for each NUMA. Each NIC can provide PTP to only one
Worker node. For example:
n Create pci-group-ptp-numa-0 that includes vmnic0. This is used for PTP in NUMA 0 while
instantiating a Network Function.
n Create pci-group-ptp-numa-1 that includes vmnic4. This is used for PTP in NUMA 1 while
instantiating a Network Function.
Note You need not use the isSharedAcrossNuma flag. Both NUMA nodes have E810 cards and
there is no need for cross-NUMA sharing.
3. Create PCI groups for ACC 100. For example, one ACC 100 on each NUMA node.
Note You need not use the isSharedAcrossNuma flag. Both NUMA nodes have ACC 100 cards
and there is no need for cross-NUMA sharing.
Hyper-threading
If hyper-threading is enabled, then each core is logically divided into two hyper-threads or
physical CPUs (pCPU):
Core 0 - pCPU 0, pCPU 1
Core 1 - pCPU 2, pCPU 3
And so on.
Note After a DU worker node is pinned, it does not move across pCPUs. You can configure
pinning by using the isNumaConfigNeeded flag in the CSAR file. This flag must be set to true.
NICs in NUMA
When the DU worker node requests I/O devices through the CSAR, it can either use I/O devices
connected to the same NUMA node or share I/O devices with a different NUMA node. This is
configured using the isSharedAcrossNuma flag in the CSAR file. If this flag is set to true, the
worker node can source I/O devices from a different NUMA node. If this flag is set to false or is
not present, it sources I/O devices connected to the same NUMA node to which the DU worker
node is pinned.
To identify the hyper-thread siblings of a pCPU inside the worker node, run the following command, replacing 'x' with the CPU number:
cat /sys/devices/system/cpu/cpu'x'/topology/thread_siblings_list
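Similarly, to check which NUMA node a PCI device inside the worker node is attached to (for example, a NIC VF passed through to the node), you can read the device's sysfs entry. The PCI address below is a hypothetical example:
# List PCI devices with lspci to find the address, then read its NUMA node (-1 means no affinity reported).
cat /sys/bus/pci/devices/0000:0b:00.0/numa_node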
Prerequisites
Upgrade the TCA appliance to 2.3 before upgrading the ESXi host to 8.0.
Procedure
1 Uninstall the ibbd-tools driver applicable for ESXi 7.x by using the following command:
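The exact uninstall command depends on how the driver bundle is named on your host. A hedged sketch using the standard esxcli VIB workflow, assuming the installed VIB name contains "ibbd" (verify the name before removing it):
# Run on the ESXi host: confirm the installed VIB name, then remove it.
esxcli software vib list | grep -i ibbd
esxcli software vib remove -n <vib-name-from-previous-command>
# Reboot the host if the removal output indicates that a reboot is required.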
3 Install the ibbd-tools driver released by Intel for ESXi 8.0 from the Intel® vRAN Baseband Driver
and Tools for VMware ESXi.
4 Perform a full-resync on the cell site host configured with ACC100 using ZTP UI/API.
5 Ensure that the custom configuration for ACC100 is updated on the device by using the following
ESXi command:
/opt/ibbdtools/bin/bbdevcli.py -d -t /devices/ifec/dev0
PTP Notifications
VMware Telco Cloud Automation deployments have PTP time synchronization for both Radio
Unit (RU) and Distributed Unit (DU). When there is a loss of time synchronization, the DU
application disables transmission until the time synchronization is reacquired.
n Synchronization State
PTP notifications are managed by exposing a REST API through which vDU applications register for
PTP synchronization events. The PTP notification framework monitors the PTP status and delivers
PTP event notifications to the vDU application.
The following are the components required to manage the PTP notifications:
n Sidecar Container
n The DU application communicates with the Sidecar using the localhost address and port,
which are exposed to the DU application by the Kubernetes Downward API.
n PTP event notifications are sent to the DU application through REST APIs exposed by
Sidecar.
Procedure
1 Add the following label to the node pools where the PTP O-Cloud DaemonSet pods run:
n telco.vmware.com.node-restriction.kubernetes.io/ptp-notifications: true
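A kubectl sketch to verify that the label is present on the nodes of the pool, assuming kubectl access to the workload cluster:
# Lists only the nodes that carry the PTP notifications label.
kubectl get nodes -l telco.vmware.com.node-restriction.kubernetes.io/ptp-notifications=true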
wget https://vmwaresaas.jfrog.io/artifactory/generic-registry/ptp-ocloud-
notifications-daemonset-1.0.0.csar
n Namespace: tca-system.
n Use the values.yaml file to override the default values and specify the nodeSelector.
It must match the label added to the node pools for the PTP O-Cloud DaemonSet.
container:
monitor:
image:
repository: vmwaresaas.jfrog.io/registry/ptp-ocloud-notifications-monitor
tag: 1.0.0
holdoverPeriod: 120
pollFrequency: 1
ptpSimulated: False
nodeSelector:
telco.vmware.com.node-restriction.kubernetes.io/ptp-notifications: true
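After the CSAR is instantiated, a quick way to confirm that the DaemonSet pods are running on the labeled nodes (a sketch, assuming kubectl access; the pod name prefix may differ in your deployment):
# -o wide shows which node each pod is scheduled on.
kubectl get pods -n tca-system -o wide | grep -i ptp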
Procedure
3 To run sidecar container, specify the following in the values.yaml file of the DU pod helm
charts:
sidecarContainers:
- name: sidecar
image: vmwaresaas.jfrog.io/registry/ptp-ocloud-notifications-sidecar:1.0.0
imagePullPolicy: Always
command: ["python3"]
args: ["run-ptpclientfunction.py"]
tty: true
env:
- name: THIS_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: TRANSPORT_USER
value: "admin"
- name: TRANSPORT_PASS
value: "admin"
- name: TRANSPORT_PORT
value: "5672"
volumeMounts:
- name: sidecardatastore
mountPath: /opt/datastore
readOnly: false
sidecarVolumes:
- name: sidecardatastore
hostPath:
path: /home/capv
type: Directory
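Once the DU pod is deployed with these values, one way to confirm that the sidecar container was added (a sketch; replace the pod name and namespace with your own):
# Prints the container names of the DU pod; "sidecar" should appear among them.
kubectl get pod <du-pod-name> -n <du-namespace> -o jsonpath='{.spec.containers[*].name}'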
4 In the pod spec yaml file, specify the following under containers:
6 Enter a group name in the Group Name field. In the Add Members field, search for the username
that you just added, add the user to the group, and click ADD.
8 Enter a name in the Role name field and select privileges from the privileges list. The following
privileges are listed for reference only; select the privileges that are appropriate for the
specified role. Click CREATE after selecting the privileges.
Category Privileges
10 Search for the created user in the User/Group field and select the created role from the role list in
the Role field. Click OK.
You can follow the steps in the Set Up CNS and Create a Storage Policy (vSphere) section of the
Tanzu Kubernetes Grid documentation to create storage policies for a vSAN or local VMFS datastore.