
VMware Telco Cloud Automation User Guide

VMware Telco Cloud Automation 2.3
VMware Telco Cloud Automation Manager 2.3
VMware Telco Cloud Automation Control Plane 2.3

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2023 VMware, Inc. All rights reserved. Copyright and trademark information.

Contents

1 Introduction 11
Common Abbreviations 12
API Documentation 15
Deployment Architecture 15
Supported Features on Different VIM Types 17

2 Getting Started 18
Viewing the Dashboard 18

3 Add an Active Directory 20


Add an Active Directory for New Deployment 21
Add an Active Directory for Existing Deployment 21

4 Managing Roles and Permissions 23


Enabling Users and User Groups to Access VMware Telco Cloud Automation 23
Object Level Access Permissions 24
Privileges and Roles 25
Creating Roles and Permissions 32
Create a Role 32
Create Permission 33
Tokens 34

5 Kubernetes Policies 36
Overview of Kubernetes Policies 37
CNF Global Permission Enforcement 39
Types of Kubernetes Policies 40
Lifecycle of an RBAC Policy 40
Create a Policy Manually 42
Edit a Policy 44
Clone a Policy 44
Download a Policy 45
Delete a Policy 45
Finalize a Policy 45
Grant a Policy 45
Edit a Policy Grant 47
Delete a Policy Grant 48
View VIM Policy Grants 49
View Policy Grants For The CNF Package 49


Generate a Kubernetes Policy Automatically From CNF Package 49


Import Policies from CNF Package 51

6 Working with Tags 53

7 Configuring Your Virtual Infrastructure 55


Add a Cloud to VMware Telco Cloud Automation 55
Configure the Compute Profile 58
Edit a Virtual Infrastructure Account 59
Force Sync Inventory 60

8 Viewing Your Cloud Topology 61

9 Working with Infrastructure Automation 62


Introduction to Infrastructure Automation 62
Prerequisites 62
Software Versions Interoperable with Infrastructure Automation 65
Managing Specification File 66
Specification File for Cloud Native 66
Configuration and Bootstrapping 86
Automated SDDC Deployment 86
Ready for Network Function 87
Roles 87
Deployment Configurations 88
Configure Global Settings 88
Configure Appliances 91
Add Images or OVF 93
Add Certificate Authority 93
Add a Host Profile 94
Edit, Clone, and Export a Host Profile 96
Supermicro Firmware Upgrade 96
Dell Firmware Upgrade 99
Obtain the Current Firmware Version 100
Managing Domains 101
Add Management Domain 103
Edit a Management Domain 108
Add Workload Domain 109
Edit a Workload Domain 115
Add Compute Cluster 116
Edit a Compute Cluster 120
Add a Cell Site Group 120


Edit a Cell Site Group 127


Synchronize Cell Site Domain Data 127
Add Host to a Site 128
Edit a Host 130
Synchronize Cell Site Host Data 130
Delete a Domain 131
Manually Removing Domain Information 133
Certificate Management 133
Viewing Tasks 134

10 Working with Kubernetes Clusters 136


Working with Management Clusters 139
Upgrade Management Kubernetes Cluster Version 139
Working with Kubernetes Cluster Templates 141
Deploy a Management Cluster 147
Edit a Management Cluster Control Plane 151
Edit a Management Cluster Node Pool 152
Working with V1 Workload Clusters 152
Upgrade Workload Kubernetes Cluster Version 152
Deploying a Workload Kubernetes Cluster 153
Create a v1 Workload Cluster Template 158
Managing Workload Clusters after Deployment 162
Machine Health Check 174
Place Nodes in Maintenance Mode 174
Managing Add-ons for v1 Workload Clusters 175
Working with v2 Workload Clusters 178
Anti-affinity Rules 179
Upgrade v2 Workload Kubernetes Cluster Version 180
Deploy a v2 Workload Cluster 181
Create a v2 Workload Cluster Template 187
Managing v2 Workload Clusters after Deployment 192
Add-ons Reference for v2 Workload Clusters 206
Backing Up and Restoring Kubernetes Clusters 241
Backing Up and Restoring Management Clusters 241
Backing Up and Restoring Workload Clusters 241
Remotely Accessing Clusters From VMware Telco Cloud Automation 257
Access Kubernetes Clusters Using kubeconfig 257
Access a Remote Kubernetes Cluster Using an External SSH Client 258
Access a Remote Kubernetes Cluster Using the Embedded SSH Client 259
Access Kubernetes Cluster when VMware Telco Cloud Automation is Down 259


11 Kubernetes Cluster Upgrade Flow 261


Upgrade Validations 261
CaaS Upgrade Backward Compatibility 263

12 Running Cluster Diagnosis 267

13 Managing Network Function Catalogs 269


Onboarding a Network Function 269
Upload a Network Function Package 269
Designing a Network Function Descriptor 270
Edit Network Function Descriptor Drafts 291
Edit a CSAR File Manually 292
Delete a Network Function 292
Customizing Network Function Infrastructure Requirements 293
Node Customization 293
CNF with Customizations Example 311
Download a Network Function Package 323
Edit Network Function Catalog 323
Edit Network Function Catalog General Properties 323
Edit Network Function Topology 324
Edit Infrastructure Requirements 324
Edit Scaling Policies 325
Edit Network Function Rules 326
Edit Workflows 326
Edit the Network Function Catalog Source Files 327
Enhanced Platform Awareness 327
Add Cloud-init Script and Key to a VDU 328
Role-based Access Control to CNFs 329
Remotely Access CNFs Using kubeconfig 329
Access a Remote CNF Using an External SSH Client 329
Access a Remote CNF Using the Embedded SSH Client 330

14 Managing Network Function Lifecycle Operations 331


Instantiating a Network Function 331
Instantiate a Virtual Network Function 332
Instantiate a Cloud Native Network Function 335
External Network Referencing 337
Heal an Instantiated Network Function 338
Scale an Instantiated VNF 339
Scale an Instantiated CNF 340
Operate an Instantiated Network Function 340


Run a Workflow on an Instantiated Network Function 341


Terminate a Network Function 341
Hiding Columns in Network Function Inventory 342
Retry, Rollback, and Reset State 342
Reconfigure a Container Network Function 343
Updating CNF Repository from Chartmuseum to OCI 344

15 Managing Network Service Catalogs 346


Onboarding a Network Service 346
Upload a Network Service Package 346
Design a Network Service Descriptor 347
Edit Network Service Descriptor Drafts 350
Delete a Network Service 350
Download a Network Service Package 351
Edit Network Service Catalog 351
Edit Network Service Catalog General Properties 351
Edit Network Service Topology 352
Edit Network Service Workflows 352
Edit the Network Service Catalog Source Files 353

16 Managing Network Service Lifecycle Operations 354


Instantiate a Network Service 354
Run a Workflow on a Network Service 356
Heal a Network Service 357
Terminate a Network Service 358

17 Upgrading Network Functions and Network Services 359


Upgrade a VNF Package 360
Upgrade a CNF Package 361
Upgrade a CNF 361
Upgrade Network Service Package 362

18 Retry or Rollback Cloud Network Function Upgrades 364

19 5G Network Slicing Concepts 365


Enable Network Slicing 367
Deactivate Network Slicing 368

20 Managing Network Slice Catalog 369


Onboarding a Network Slice Template 369
Edit a Network Slice Template 370


Instantiating a Network Slice Template 381

21 Managing Network Slicing Lifecycle Operations 383


Network Slice Function Operations 383
Edit a Network Slice Service Order 384

22 Telco Cloud Automation Workflows 388


Aspects of a Workflow 390
Defining Steps in Workflows 396
Workflow Step Types 396
Workflow Step Input 404
Managing Workflow Execution 425
Run a Standalone Workflow 428
Run a Workflow Through a Network Function LCM Operation 429
Run an Embedded Network Function Workflow 429
View all Workflow Executions 430
View Workflow Step Execution Details 430
View Workflow Executions Running on a Network Function 430
View Workflow Executions Running on a Network Service 431
View Workflow Executions Running on vRO 431
Debug Workflow Executions 431
Update Original Workflow Based on Workflow Execution Changes 432
Delete a Workflow Execution 432
End a Workflow Execution 432
Extend Retention Time for Workflow 433
Role-based access control for workflows 433

23 Updating NETCONF Protocol Using VMware Telco Cloud Automation 435

24 Monitoring Performance and Managing Faults 438


Managing Alarms 438
Performance Management Reports 440
Scheduling Performance Management Reports 440
Monitor Instantiated Virtual Network Functions and Virtual Deployment Units 442
Monitor Instantiated CNF 443
Monitor Instantiated Network Services 445

25 Administrating VMware Telco Cloud Automation 446


Managing RBAC Tags 446
Create a Tag 446
Edit a Tag 447


Delete a Tag 447


Transform Object Tags 448
Viewing Audit Logs 448
Troubleshooting and Support 448
Administrator Configurations 449
Create Login Banners 449
Kubernetes Policy Configurations 450
Add Kernel Versions 451
License Consumption 451
Managing vCenter Certificate Changes 452
Re-import the vCenter certificate for TCA-M/TCA-CP 452
Update the Thumbprint of vCenter 452

26 Upgrading VMware Telco Cloud Automation 456


Upgrade VMware Telco Cloud Automation Using the Upgrade Bundle 456

27 Upgrading Cloud-Native VMware Telco Cloud Automation 458


Upgrade Cloud-Native VMware Telco Cloud Automation with Internet Access 458
Upgrade Cloud-Native VMware Telco Cloud Automation in an Airgapped Environment 459
Upgrade Procedure for 2.1 and Later Versions 460
Upgrade Procedure for 2.0.0 and 2.0.1 461
Cloud-Native VMware Telco Cloud Automation Upgrade Troubleshooting 462

28 Python Software Development Kits 466

29 Global Settings APIs 467


API for CNF Debug Options 467
Global Settings for Cluster Automation 467
API for Cluster Automation Global Settings 468
API to Disable CSR Validation 468
Configure Cluster Automation Settings 468
Global Settings for Concurrency Limit 471

30 Registering Partner Systems 473


Add a Partner System to VMware Telco Cloud Automation 473
Edit a Registered Partner System 475
Associate a Partner System Network Function Catalog 475
Add a Harbor Repository 476
Add an Air Gap Repository 477
Add a Proxy Repository 478
Add Amazon ECR 478


31 Appendix 480
Enable Virtual Hyper-Threading 480
A1: PTP Overview 483
A2: Host Profile for PTP and ACC100 486
Prerequisites for ACC100 and PTP 487
Obtaining the Custom File for ACC100 487
Host Profile for PTP in Passthrough mode and ACC100 488
Host Profile for PTP over VF and ACC100 491
Applying Host Profile to Cell Site Group 494
CSAR Configuration for PTP and ACC100 495
Symmetric Layout - Dual Socket Two NUMA System 497
Hyper-threading and NUMA 498
ACC 100 Support for ESXi 8.0 Upgrade 500
PTP Notifications 500
Install O-Cloud DaemonSet 501
Integrate Sidecar with DU Pod 502
Setup User/Group/Storage Policy in vCenter Server for vSphere CSI 503

1 Introduction
The VMware Telco Cloud Automation User Guide provides information about how to use
VMware Telco Cloud Automation™. Steps to add your virtual and container infrastructure, and to
create and manage network functions and services, are covered in this guide.

Intended Audience
This information is intended for Telco service providers and users who want to use VMware
Telco Cloud Automation for designing and onboarding network functions and services. It is also
intended for users who want to transition to the cloud-native architecture with Container-as-a-
Service (CaaS) automation, and manage the Kubernetes clusters from a centralized system. To
deploy and activate the VMware Telco Cloud Automation Manager and TCA-CP services, see the
VMware Telco Cloud Automation Deployment Guide.

VMware Technical Publications Glossary


VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions of terms used in the VMware technical documentation, go to http://www.vmware.com/support/pubs.

About VMware Telco Cloud Automation


VMware Telco Cloud Automation is a cloud orchestration solution that accelerates the time-to-
market of modern network functions and services. It provides a simplified life cycle management
automation solution, across any network and any cloud. Some of the features of VMware Telco
Cloud Automation are:

n A native integration for Virtualized Infrastructure Managers (VIMs) and cloud products such
as VMware vCloud NFV, vSphere-based clouds, VMware on mega-cloud providers, and
Kubernetes clouds. These integrations streamline your CSP orchestrations and optimize your
NFV Infrastructure (NFVI) resource use.

n Standard-driven generic VNF manager (G-VNFM) and NFV Orchestrator (NFVO) modular
components that integrate with any multi-vendor Management and Orchestration (MANO)
architecture.


VMware Telco Cloud Automation consists of two components:

n VMware Telco Cloud Manager™ - Provides Telcos with NFV-MANO capabilities and enables
the automation of deployment and configuration of Network Functions and Network
Services.

n VMware Telco Cloud Automation Control Plane (TCA-CP) - Provides the infrastructure for
placing workloads across clouds using VMware Telco Cloud Automation.

[Diagram: VMware Telco Cloud Automation. VMware Telco Cloud Manager connects through multiple TCA-CP instances to vSphere, Cloud Director, Tanzu, VIO, Kubernetes, and VMware Cloud endpoints. The endpoints run on VMware Telco Cloud Infrastructure (vSphere, vSAN, NSX-T) and use a shared VMware vRealize Orchestrator (vRO) cluster.]

This chapter includes the following topics:

n Common Abbreviations

n API Documentation

n Deployment Architecture

n Supported Features on Different VIM Types

Common Abbreviations
Some of the frequently used abbreviations in this guide are listed here with their descriptions.

NFV
Network Functions Virtualization - The process of decoupling a network function from its
proprietary hardware appliance and running it as a software application in a virtual machine.


VNF
A Virtual Network Function (VNF) is a collection of virtual machines interconnected with virtual
links. A VNF exposes its functionality through external connection points. It is managed by a
Virtual Network Function Manager (VNFM), and it can be composed into a higher-level Network
Service (NS) by a Network Function Virtualization Orchestrator (NFVO).

Network Service
A Network Service is a collection of network functions: Virtual (VNF), Cloud-Native (CNF), or
Physical (PNF); interconnected with virtual or physical links. It is managed by an NFVO. A
network function exposes its functionality through external connection points.

CNF
A Cloud-Native Network Function (CNF) is a containerized network function that uses cloud-
native principles. CNFs are designed to run inside containers. Containerization makes it possible
to run services and onboard applications on the same cluster, while directing network traffic to
correct pods.

NFVI
Network Functions Virtualization Infrastructure - Is the foundation of the overall NFV architecture.
It provides the physical compute, storage, and networking hardware that hosts the VNFs. Each
NFVI block can be thought of as an NFVI node and many nodes can be deployed and controlled
geographically.

MANO
Management and Orchestration - Manages the resources in the infrastructure, orchestration, and
life cycle operations of VNFs, CNFs, and Network Services.

VIM
Virtualized Infrastructure Manager - Is a functional block of the MANO and is responsible
for controlling, managing, and monitoring the NFVI compute, storage, and network hardware,
the software for the virtualization layer, and the virtualized resources. The VIM manages the
allocation and release of virtual resources, and the association of virtual to physical resources,
including the optimization of resources.

NFVO
NFV Orchestrator - Is a central component of an NFV-based solution. It brings together different
functions to make a single orchestration service that encompasses the whole framework and has
a well-organized resource use.


VNFM
A VNF Manager (VNFM) is responsible for the lifecycle management of Virtual Network
Functions (VNF). It interacts with VIM, NFVO, and Network Function Catalog during lifecycle
management operations.

Note VNFM works with both VNFs and CNFs.

NFD
Network Function Descriptor - Is a deployment template that describes a network function
deployment and operational requirement. It is used to create a network function where life-cycle
management operations are performed.

Network Function Catalog


Is a functional building block within a network infrastructure. It has well-defined external
interfaces and a well-defined functional behavior.

Network Services Catalog


A Network Service Catalog stores the required artifacts to create Network Services and to
manage their life cycle operations. It has well-defined external interfaces and a well-defined
functional behavior.

SVNFM
Specific VNFM. SVNFMs are tightly coupled with the VNFs they manage.

GVNFM
Generic VNFM.

Kubernetes Pods
Kubernetes Pods are inspired by pods found in nature (pea pods or whale pods). The Pods are
groups of containers that share networking and storage resources from the same node. They are
created with an API server and placed by a controller. Each Pod is assigned an IP address, and all
the containers in the Pod share storage, IP address, and port space (network namespace).
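
For illustration, the following is a minimal Pod manifest; the names and image are hypothetical examples, not values taken from this guide:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: example-ns
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical container image
    ports:
    - containerPort: 8080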

CSI
Container Storage Interface. A specification designed to enable persistent storage volume
management on Container Orchestrators (COs) such as Kubernetes. The specification allows
storage systems to integrate with containerized workloads running on Kubernetes. Using CSI,
storage providers, such as VMware, can write and deploy plug-ins for storage systems in
Kubernetes without a need to modify any core Kubernetes code.
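
As a sketch of how CSI-backed storage is typically consumed, a workload claims a volume through a PersistentVolumeClaim that references a StorageClass; the StorageClass name below is a hypothetical example assumed to be backed by the vSphere CSI driver (csi.vsphere.vmware.com):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: example-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsphere-csi-sc   # hypothetical StorageClass provisioned by the vSphere CSI driver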


CNI
Container Network Interface. The CNI connects Pods across nodes, acting as an interface
between a network namespace and a network plug-in or a network provider and a Kubernetes
network.

TCA-CP
VMware Telco Cloud Automation Control Plane. Previously known as VMware HCX for Telco
Cloud.

API Documentation
You can also operate VMware Telco Cloud Automation using APIs.

To view the VMware Telco Cloud Automation API Explorer, click the Help (?) icon in the top-right corner of the VMware Telco Cloud Automation user interface and select API Documentation.

You can also access the APIs from https://developer.vmware.com/apis.

Deployment Architecture
VMware Telco Cloud Automation implements an architecture that is outlined and defined at a high level through logical building blocks and core components.


[Diagram: VMware Telco Cloud Automation Manager, with vCenter Server authentication and SVNFM integration, connects to multiple VMware Telco Cloud Automation Control Plane instances. Each TCA-CP instance is paired with a cloud: a VMware Cloud Director-based cloud, a vSphere-based cloud, a Kubernetes-based cloud, or a VMware Integrated OpenStack-based cloud. Each cloud includes vCenter Server, NSX Manager, and vRealize Orchestrator; the VMware Cloud Director-based and OpenStack-based clouds also include RabbitMQ.]

n vCenter Server is used for authenticating and signing in to VMware Telco Cloud Automation.

n VMware Telco Cloud Automation supports registration of supported SOL 003 based
SVNFMs.

n VMware Telco Cloud Automation Control Plane (TCA-CP) is deployed on the VIM and paired
with VMware Telco Cloud Automation Manager.

n VMware Telco Cloud Automation Manager connects with TCA-CP to communicate with the
VIMs. The VIMs are cloud platforms such as vCloud Director, vSphere, Kubernetes Cluster, or
VMware Integrated OpenStack.

n vRealize Orchestrator is registered with TCA-CP and is used to run NFV workflows. You can
register for each VIM or for the entire network of VIMs. For information about registering
vRealize Orchestrator with TCA-CP, see VMware Telco Cloud Automation Deployment Guide.

n RabbitMQ is used to track VMware Cloud Director and VMware Integrated OpenStack
notifications and is required only for deployments on these clouds.


Supported Features on Different VIM Types


The following table lists the feature sets that are supported on different VIM types.

Table 1-1. Supported Features on Different VIM Types
(Columns: Product, Versions, and the VMware Telco Cloud Automation features Infrastructure Automation, CaaS Automation, Generic VNF Manager, and NFV Orchestrator)

vSphere | 7.0, 7.0 U1, 7.0 U3, 7.0 U3g, 8.0, 8.0b, 8.0u1 | ✓ ✓ ✓
vSphere | 7.0 U2, 7.0 U3, 7.0 U3d, 7.0 U3f, 7.0 U3h | ✓ ✓ ✓ ✓
VMware Cloud Director | 10.3, 10.3.1, 10.3.2, 10.3.3, 10.4, 10.4.1 | ✓ ✓
vRealize Orchestrator | 8.8, 8.8.1, 8.8.2, 8.9, 8.9.1, 8.10, 8.10.1, 8.10.2, 8.11, 8.11.1 | ✓ ✓ ✓
VMware NSX | 3.0.2, 3.0.3, 3.1, 3.1.2, 3.1.2.1, 3.1.3, 3.1.3.1, 3.2, 3.2.1, 3.2.2, 4.0.1.1, 4.1.0.2 | ✓ ✓ ✓
VMware Tanzu Kubernetes Grid | 2.1.1 | ✓ ✓ ✓ ✓
Kubernetes | 1.22.9, 1.22.13, 1.22.17, 1.23.10, 1.23.16, 1.24.10 | ✓ ✓ ✓
VMware Integrated OpenStack | 7.2, 7.2.1 | ✓ ✓
vRealize Log Insight | 8.10, 8.10.2 | ✓
VMware Cloud on AWS | 1.20, 1.22 | ✓ ✓ ✓
2 Getting Started
Complete these high-level tasks to start using VMware Telco Cloud Automation.

1 Install and set up:

n VMware Telco Cloud Automation Control Plane (TCA-CP)

n VMware Telco Cloud Automation

For steps to install and set up these components, see the VMware Telco Cloud Automation
Deployment Guide.
2 Create roles and assign permissions. See Chapter 4 Managing Roles and Permissions.

3 Configure your VIMs. See Chapter 7 Configuring Your Virtual Infrastructure.

This chapter includes the following topics:

n Viewing the Dashboard

Viewing the Dashboard


The Dashboard is the first page that is displayed when you log in to VMware Telco Cloud
Automation.

The following tiles are displayed:

Clouds

Displays the number of clouds in your network and their status.

Alarms
Displays alarms that are in the Critical and Warning states.

Network Functions

Displays the number of instantiated and not instantiated network functions and catalogs. To
go to the Network Function Catalog page, click the Catalog icon.

Network Services

Displays the number of instantiated and not instantiated network services. To go to the
Network Service Catalog page, click the Catalog icon.


Network Function Status

Displays a detailed inventory view of the network functions.

Total Resource Allocation Across Clouds

Displays the percentage of CPU, memory, and storage allocated across the clouds.

Resource Utilization

Displays the percentage of CPU, memory, and storage resources used across the clouds.

3 Add an Active Directory
You can add an Active Directory to VMware Telco Cloud Automation for user authentication.

VMware Telco Cloud Automation now supports authentication through vCenter and Active
Directory. You can configure Active Directory for a new deployment or you can upgrade the
already deployed VMware Telco Cloud Automation to the latest version and configure the Active
Directory settings in the upgraded VMware Telco Cloud Automation.

You can log in to the VMware Telco Cloud Automation Appliance Manager and configure
the Active Directory settings to integrate VMware Telco Cloud Automation with your Active
Directory server.

Note Ensure that the logon user name is 20 characters or fewer. If the logon user name is longer than 20 characters, authentication succeeds but the group retrieval for the user fails, causing the login to VMware Telco Cloud Automation to fail.

Prerequisites

Note
n When using the Active Directory server, ensure that the Active Directory server is reachable from VMware Telco Cloud Automation Manager.

n Active Directory is available only for Telco Cloud Automation Manager and not for Telco Cloud Automation Control Plane.

n You must use the format <username>@ad for user login.

n Only the users associated with adminGroupName can inherit the system administrator privileges
on VMware Telco Cloud Automation.

n Ensure that you have access to VMware Telco Cloud Automation Appliance Manager.

n Ensure that you have details of Active Directory server.

n Ensure that you have users and groups created in Active Directory server.

n To add an Active Directory for a new deployment of VMware Telco Cloud Automation, see Add an Active Directory for New Deployment.

n To add an Active Directory for an existing deployment of VMware Telco Cloud Automation, see Add an Active Directory for Existing Deployment.


Add an Active Directory for New Deployment


Procedure to add the Active Directory in a newly deployed VMware Telco Cloud Automation
Manager.

Follow the procedure to add the Active Directory support in a newly deployed VMware Telco
Cloud Automation Manager.

Procedure

1 Log in to the VMware Telco Cloud Automation Appliance Manager.

2 Enter the required details for Activation, Datacenter Location, and System Name.

3 Click Continue to save the changes and continue with the deployment.

4 To add the authentication details, select the Active Directory option on the Select
Authentication Provider page.

5 Add the following details on the Connect Your Active Directory for TCA page:

Note You can add the Active Directory configuration for both VMware Telco Cloud
Automation Manager and the VMware Telco Cloud Automation Appliance Manager.

n URL - URL of the Active Directory server.

n Base Distinguished Name for Users - The base distinguished name for the users of the
LDAP directory.

n Base Distinguished Name for Groups - The base distinguished name for the groups of
the LDAP directory.

n Admin User Distinguished Name - The distinguished name of the administrator user of the LDAP directory.

n Password - Password of the administrator.

n Admin Group Name - Name of the administrator group of the LDAP directory.

6 Click Save to save the changes and continue with the deployment.
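
For reference, a sketch of typical values for the fields in step 5, assuming a hypothetical directory at ad.example.net (none of these values come from this guide):

URL: ldaps://ad.example.net:636
Base Distinguished Name for Users: ou=users,dc=example,dc=net
Base Distinguished Name for Groups: ou=groups,dc=example,dc=net
Admin User Distinguished Name: cn=tca-admin,ou=users,dc=example,dc=net
Password: (password of the administrator)
Admin Group Name: admingroup

With this configuration, a user such as jdoe logs in as jdoe@ad, and members of admingroup inherit the system administrator privileges described in the prerequisites.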

Add an Active Directory for Existing Deployment


Procedure to add the Active Directory in an existing VMware Telco Cloud Automation Manager
deployment.

Follow the procedure to add the Active Directory support in an existing VMware Telco Cloud
Automation Manager.

Procedure

1 Log in to the VMware Telco Cloud Automation Appliance Manager.

2 To add the authentication configurations, click the Configurations tab.


3 To add the Active Directory, click Active Directory.

4 Add the following details:

Note
n You can add the Active Directory configuration for both VMware Telco Cloud Automation
Manager and the VMware Telco Cloud Automation Appliance Manager.

n Switching the authentication provider from the existing vCenter to Active Directory adds
Active Directory and deletes vCenter and SSO configurations. It also removes the access
to VMware Telco Cloud Automation for the existing users configured in vCenter and
permissions set in VMware Telco Cloud Automation.

n URL - URL of the Active Directory server.

n Base Distinguished Name for Users - The base distinguished name for the users of the
LDAP directory.

n Base Distinguished Name for Groups - The base distinguished name for the groups of
the LDAP directory.

n Admin User Distinguished Name - The distinguished name of the administrator user of the LDAP directory.

n Password - Password of the administrator.

n Admin Group Name - Name of the administrator group of the LDAP directory.

5 Click Save to save the changes.

What to do next

Modify the user group for each permission and set it to the Active Directory group. For example, for a system admin user, you can change the user group from vsphere/sysadmin to cn=admingroup,ou=groups,dc=server,dc=net. For details, see Create Permission.

4 Managing Roles and Permissions
A role is a predefined set of privileges. Privileges define the rights to perform actions and read
properties. For example, the Virtual Infrastructure Administrator role allows a user to read, add,
edit, and delete VIMs. This role also allows the user to perform all the life-cycle management
operations on a Kubernetes cluster template and a Kubernetes cluster instance.

As a vCenter Server user, when you configure vCenter Server in the VMware Telco Cloud
Automation appliance, you are assigned the System Administrator role to access VMware Telco
Cloud Automation. Use this role to create roles and permissions for your users.

A System Administrator or a Role Administrator of VMware Telco Cloud Automation manages


the roles and permissions of users.

This chapter includes the following topics:

n Enabling Users and User Groups to Access VMware Telco Cloud Automation

n Object Level Access Permissions

n Privileges and Roles

n Creating Roles and Permissions

n Tokens

Enabling Users and User Groups to Access VMware Telco


Cloud Automation
VMware Telco Cloud Automation uses the vCenter Server authentication and authorization. Users
and user groups defined in vCenter Server or its identity provider (IDP) can sign in to VMware
Telco Cloud Automation.

Note Ensure that the logon user name is 20 characters or fewer. If the logon user name is longer than 20 characters, authentication succeeds but the group retrieval for the user fails, causing the login to VMware Telco Cloud Automation to fail.

To enable a specific vCenter Server user or a user group to access and use VMware Telco Cloud
Automation, you must perform the following steps:

1 Log in to VMware Telco Cloud Automation with System Administrator credentials.


2 From the left navigation pane, click Authorization > Permissions.

3 Assign the appropriate Roles to the user or user group. A Role determines the privileges that
the user or user group receives for accessing VMware Telco Cloud Automation.

4 To restrict access for your user or user group to specific objects, you can define the
restrictions in the Advance Filter criteria.

5 Save the permissions.

Users or user groups with the assigned Role can access and use VMware Telco Cloud Automation, and perform tasks according to the specified permissions.

Object Level Access Permissions


You can assign permissions at the object level and associate them to a specific Role.

As a System Administrator, you can restrict a user to access only specific objects. For example,
you can assign permissions to VNF Administrators to access only specific VNFs. The Advance
Filter option allows you to provide object-level permissions to roles.

What are Accessible Objects


Accessible objects are the objects of VMware Telco Cloud Automation that you can access.
Virtual Infrastructure Managers (VIM), Network Function catalogs, Network Function instances,
Network Service catalogs, Network Service instances, Kubernetes cluster templates, and
Kubernetes cluster instances are all accessible objects.

What is the Parent-Child Relationship of an Object


When you define a permission for an object, that permission is implicitly assigned to all instances
created within that object. For example, when you define permissions for a user to access a
certain catalog, the user implicitly has the permissions to access all instances created in that
catalog.

The two major object groups that have an implicit parent-child relationship are:

n Network Function catalogs and Network Function instances.

n Network Service catalogs and Network Service instances.

About Advance Filters


n If a user or a user group has multiple permissions, the list of objects that they can access is a
union of all the objects that can be viewed through each permission.

n Filters that are applied to objects at the parent level are also applied to child objects. For
example, you create permissions for your VNF Administrator with filters to view the VNF
Catalogs of a vendor. When the VNF Administrator logs in, they can view the VNF Catalogs
and the VNFs that belong to the vendor. Here, the parent object is the VNF Catalog and the
child object is the VNF.


You can enable Advance Filter and assign object-level permissions when you create or edit
permissions. For steps to create permissions, see Create Permission.

Privileges and Roles


To perform specific operations, you require privileges associated with the specific role. VMware
Telco Cloud Automation includes a set of system-defined roles and associated privileges. You
cannot edit or delete them.

System-defined Privileges
The following tables list the system-defined privileges:

Table 4-1. System Wide Privileges

System Admin - Administration privileges for all operations.
  Included Privileges: Role Admin, Role Audit, System Audit, Virtual Infrastructure Audit, Partner System Read, Network Service Instance Read, Network Service Catalog Read, Network Function Catalog Read, Network Function Instance Read, Virtual Infrastructure Admin, Virtual Infrastructure Consume, Network Function Catalog Design, Network Function Catalog Instantiate, Network Function Instance Lifecycle Management, Network Service Catalog Design, Network Service Catalog Instantiate, Network Service Instance Lifecycle Management, Partner System Admin, Infrastructure Lifecycle Management, Infrastructure Design, Tag Admin, Workflow Read, Workflow Design, Workflow Execute, System Admin
  Accessible Objects: All

System Audit - Read privileges for all operations.
  Included Privileges: Virtual Infrastructure Audit, Partner System Read, Network Service Instance Read, Network Service Catalog Read, Network Function Catalog Read, Network Function Instance Read, Role Audit, Workflow Read, System Audit
  Accessible Objects: All

Role Admin - Administration privileges for all Roles operations.
  Included Privileges: Role Audit, Role Admin
  Accessible Objects: Roles and Permissions

Role Audit - Read privileges for all Roles operations.
  Included Privileges: Role Audit
  Accessible Objects: Roles and Permissions

Tag Admin - Administration privileges for tag operations.
  Included Privileges: Tag Admin
  Accessible Objects: Tags

Table 4-2. Virtual Infrastructure Privileges

Virtual Infrastructure Admin - Administration privileges for Infrastructure.
  Included Privileges: Virtual Infrastructure Audit, Virtual Infrastructure Admin
  Accessible Objects: Virtual Infrastructure

Virtual Infrastructure Audit - Read privileges for Infrastructure.
  Included Privileges: Virtual Infrastructure Audit
  Accessible Objects: Virtual Infrastructure, Kubernetes Cluster Instance, Kubernetes Cluster Template

Virtual Infrastructure Consume - Deploy privileges for VIM.
  Included Privileges: Virtual Infrastructure Consume
  Accessible Objects: Virtual Infrastructure

Infrastructure Design - Design privileges for CaaS cluster templates.
  Included Privileges: Workflow Read, Workflow Design, Infrastructure Design
  Accessible Objects: Kubernetes Cluster Template, Workflow Catalogs

Infrastructure Lifecycle Management - Lifecycle management privileges for CaaS cluster instances.
  Included Privileges: Virtual Infrastructure Consume, Infrastructure Design, Workflow Read, Workflow Design, Workflow Execute, Infrastructure Lifecycle Management
  Accessible Objects: Kubernetes Cluster Instance, Workflow Catalog, Workflow Instances

Table 4-3. Partner System Privileges

Partner System Read - Read privileges for Partner Systems.
  Included Privileges: Partner System Read
  Accessible Objects: Virtual Infrastructure

Partner System Admin - Administration privileges for Partner Systems.
  Included Privileges: Partner System Read, Network Function Catalog Read, Virtual Infrastructure Consume, Workflow Read, Partner System Admin
  Accessible Objects: Virtual Infrastructure, Workflow Catalog

Table 4-4. Network Function Catalog Privileges

Network Function Catalog Design - Design privileges for Network Function Catalog.
  Included Privileges: Network Function Catalog Read, Workflow Read, Workflow Design, Network Function Catalog Design
  Accessible Objects: Network Function Catalog, Workflow Catalog

Network Function Catalog Read - Read privileges for Network Function Catalog.
  Included Privileges: Workflow Read, Network Function Catalog Read
  Accessible Objects: Network Function Catalog, Workflow Catalog

Network Function Catalog Instantiate - Instantiation privileges for Network Function Catalog.
  Included Privileges: Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Workflow Read, Network Function Catalog Instantiate
  Accessible Objects: Network Function Catalog, Workflow Catalog

Table 4-5. Network Function Instance Privileges

Network Function Instance Read - Read privileges for Network Function Instance.
  Included Privileges: Network Function Instance Read
  Accessible Objects: Network Function Instance, Network Function Catalog

Network Function Instance Lifecycle Management - Lifecycle management privileges for Network Function Instance.
  Included Privileges: Network Function Instance Read, Network Function Catalog Instantiate, Network Function Catalog Read, Virtual Infrastructure Consume, Workflow Read, Workflow Execute, Network Function Instance Lifecycle Management
  Accessible Objects: Network Function Instance, Workflow Catalog, Workflow Instance

Table 4-6. Network Service Catalog Privileges

Network Service Catalog Design - Design privileges for Network Service Catalog.
  Included Privileges: Network Service Catalog Read, Network Function Catalog Read, Workflow Read, Workflow Design, Network Service Catalog Design
  Accessible Objects: Network Service Catalog, Workflow Catalog

Network Service Catalog Read - Read privileges for Network Service Catalog.
  Included Privileges: Network Function Catalog Read, Workflow Read, Network Service Catalog Read
  Accessible Objects: Network Service Catalog, Workflow Catalog

Network Service Catalog Instantiate - Instantiation privileges for Network Service Catalog.
  Included Privileges: Network Service Catalog Read, Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Network Service Instance Read, Workflow Read, Network Service Catalog Instantiate
  Accessible Objects: Network Service Catalog, Workflow Catalog

Table 4-7. Network Service Instance Privileges

Network Service Instance Lifecycle Management - Lifecycle Management privileges for Network Service Instance.
  Included Privileges: Network Service Catalog Instantiate, Network Service Catalog Read, Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Read, Network Function Catalog Instantiate, Workflow Read, Workflow Execute, Network Service Instance Lifecycle Management
  Accessible Objects: Network Service Instance, Workflow Catalog, Workflow Instance

Network Service Instance Read - Read privileges for Network Service Instance.
  Included Privileges: Network Service Instance Read
  Accessible Objects: Network Service Instance, Network Service Catalog

Table 4-8. Workflow Privileges

Workflow Read
  Included Privileges: Workflow Read
  Accessible Objects: Workflow Catalog

Workflow Design
  Included Privileges: Workflow Read, Workflow Design
  Accessible Objects: Workflow Catalog

Workflow Execute
  Included Privileges: Workflow Execute, Workflow Read
  Accessible Objects: Workflow Catalog, Workflow Instance

System-defined Roles
The following table lists the system-defined roles.

System Administrator - The users assigned to this role can perform all the available actions in VMware Telco Cloud Automation.
  Privileges: Role Admin, System Audit, Virtual Infrastructure Audit, Virtual Infrastructure Admin, Virtual Infrastructure Consume, Infrastructure Design, Infrastructure Lifecycle Management, Network Function Catalog Design, Network Function Catalog Read, Network Function Catalog Instantiate, Network Function Instance Read, Network Function Instance Lifecycle Management, Network Service Catalog Design, Network Service Catalog Read, Network Service Catalog Instantiate, Network Service Instance Read, Network Service Instance Lifecycle Management, Partner System Read, Partner System Admin, Role Audit, System Admin, Tag Admin, Workflow Read, Workflow Design, Workflow Execute

Network Function Designer - The users assigned to this role can perform all the network function actions such as designing, uploading, and managing the Network Function Catalogs.
  Privileges: Network Function Catalog Read, Network Function Catalog Design, Workflow Read, Workflow Design

Network Function Deployer - The users assigned to this role can perform all the network function actions related to the life-cycle management operations such as Instantiate, Scale, Heal, and other actions available on a Network Function instance.
  Privileges: Network Function Instance Read, Network Function Catalog Instantiate, Network Function Catalog Read, Virtual Infrastructure Consume, Network Function Instance Lifecycle Management, Workflow Read, Workflow Execute

Virtual Infrastructure Administrator - The users assigned to this role can perform all the virtual infrastructure-related actions in VMware Telco Cloud Automation.
  Privileges: Virtual Infrastructure Audit, Virtual Infrastructure Admin, Virtual Infrastructure Consume, Infrastructure Design, Infrastructure Lifecycle Management

Virtual Infrastructure Auditor - The users assigned to this role can view all the virtual infrastructure entities in VMware Telco Cloud Automation.
  Privileges: Virtual Infrastructure Audit

Network Service Designer - The users assigned to this role can perform all the network service actions such as designing, uploading, and managing the Network Service Catalogs.
  Privileges: Network Service Catalog Design, Network Service Catalog Read, Network Function Catalog Read, Workflow Read, Workflow Design

Network Service Deployer - The users assigned to this role can perform all the network service actions related to the life-cycle management operations such as Instantiate, Scale, Heal, and other actions available on a Network Service instance.
  Privileges: Network Service Instance Read, Network Service Catalog Instantiate, Network Service Catalog Read, Network Function Instance Read, Virtual Infrastructure Consume, Network Function Catalog Read, Network Function Catalog Instantiate, Network Service Instance Lifecycle Management, Network Function Instance Lifecycle Management, Workflow Read, Workflow Execute

System Auditor - The users assigned to this role can view all the entities in VMware Telco Cloud Automation.
  Privileges: System Audit, Virtual Infrastructure Audit, Network Service Instance Read, Network Service Catalog Read, Network Function Catalog Read, Network Function Instance Read, Partner System Read, Role Audit, Workflow Read

Role Administrator - The users assigned to this role can perform all the object access control related actions in VMware Telco Cloud Automation.
  Privileges: Role Admin, Role Audit, Tag Admin

Partner System Administrator - The users assigned to this role can perform all the partner system-related actions in VMware Telco Cloud Automation.
  Privileges: Partner System Read, Partner System Admin, Network Function Catalog Read, Virtual Infrastructure Consume

Partner System Read Only - The users assigned to this role can view all the partner system entities in VMware Telco Cloud Automation.
  Privileges: Partner System Read

Role Auditor - The users assigned to this role can view all the object access control related roles and permissions in VMware Telco Cloud Automation.
  Privileges: Role Audit

Vendor Admin
  Privileges: Virtual Infrastructure Consume, Partner System Read, Network Function Catalog Design, Network Function Catalog Read, Network Function Catalog Instantiate, Network Function Instance Read, Network Function Instance Lifecycle Management, Network Service Catalog Design, Network Service Catalog Read, Network Service Catalog Instantiate, Network Service Instance Read, Network Service Instance Lifecycle Management, Workflow Read, Workflow Design, Workflow Execute

Workflow Designer
  Privileges: Workflow Read, Workflow Design

Workflow Executor
  Privileges: Workflow Read, Workflow Execute

Creating Roles and Permissions


Apart from the predefined roles and privileges that are available in VMware Telco Cloud
Automation, you can create custom roles and assign specific privileges to them. You can also
assign specific access permissions to users and user groups.

Create a Role
Create a role and assign specific permissions.


Prerequisites

You must be a System Administrator or a Role Administrator to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 From the top-right corner, click the drop-down menu next to the User icon. Go to
Authorization > Roles.

3 Click Create Role.

4 Enter the role name, an optional description, and select the privileges to be associated with
that role.

5 Click Save.

Results

Your role is created successfully and is displayed under the list of roles.

What to do next

n To edit your role, click Edit.

n To delete a role, click Delete. Before you delete a role, you must delete all its associated permissions.

You can now create permissions for your role.

Create Permission
Create permissions that are applicable only to specific users and user groups.

Prerequisites

You must be a System Administrator or a Role Administrator to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 From the top-right corner, click the drop-down menu next to the User icon. Go to
Authorization > Permissions.

The existing permissions are displayed.

3 Click Create Permission.

4 In the Create Permission page, enter the following information:

n Role - Select the role to associate the permission with.

n Name - Enter a unique name for the permission.

n Description - Enter an optional description about the permission.


n User(s) / User Group(s) - Enter the user name or the group name to associate the
permission with. To validate the user and user group name and to associate the
permissions, click Validate.

Note
n When using Active Directory by group, you can provide the group in the following
format cn=admingroup,ou=groups,dc=server,dc=net.

n When using Active Directory by username, you can provide the user name in the
following format userName@ad.

n When using the vCenter, the format to enter the group name is domain\groupName.

n Configure Advanced Filters - Select this option if you want to add advanced filters such
as specific object type, attribute, metric, and their values. For example, you can associate
the permissions that you create for a Network Function Deployer to access a specific
Network Function Catalog, a Network Function Instance, Network Service Catalog,
Network Service Instance, or a Virtual Infrastructure. Click Add. You can also filter objects
in the catalog based on tags by adding specific tags and values to permissions.

5 Click Save.

Results

Your permission is created successfully and is displayed under the list of permissions.

Tokens
VMware Telco Cloud Automation generates a token each time a user remotely accesses a
Kubernetes cluster or a VMware Telco Cloud Automation Control Plane (TCA-CP).

These tokens are available under Authorization > Tokens.

There are different types of tokens:

n Virtual Infrastructure SSH Token: This token is generated when you use login credentials or
the embedded terminal session for accessing a Kubernetes cluster.

n Virtual Infrastructure REST: This token is generated when you use the Download Kube
Config option for accessing a Kubernetes cluster.

n Network Function SSH: This token is generated when you use login credentials or the
embedded terminal session for accessing a Network Function.

n Network Function REST: This token is generated when you use the Download Kube Config
option for accessing a Network Function.

n TCA-M: This token is generated when you use the Show Login Credentials or Open Terminal
options for accessing the TCA-CP.


To view more information about a token, click the drop-down arrow against the token. A token
that is not utilized expires after eight hours. A system administrator can revoke a token at any
time.
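
The exact file that the Download Kube Config option produces can differ; the following is a generic kubeconfig sketch (the server address and names are hypothetical) that shows where such a token appears:

apiVersion: v1
kind: Config
clusters:
- name: workload-cluster
  cluster:
    server: https://10.0.0.10:6443
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: workload-cluster-context
  context:
    cluster: workload-cluster
    user: tca-user
current-context: workload-cluster-context
users:
- name: tca-user
  user:
    token: <token issued by VMware Telco Cloud Automation>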

5 Kubernetes Policies
You can control the access to computational resources at various levels, such as:

n Cluster-level access control (binary): Defines whether the user can access the cluster or not.

n Namespace-level access control (binary): Defines whether a user can access the namespace
or not.

n Kubernetes-level access control: Defines what resources a user can access.

Telco Cloud Automation allows you to create different security domains within a single
Kubernetes cluster. These security domains are associated with users, network function
packages, or instances.

This chapter includes the following topics:

n Overview of Kubernetes Policies

n CNF Global Permission Enforcement

n Types of Kubernetes Policies

n Lifecycle of an RBAC Policy

n Create a Policy Manually

n Edit a Policy

n Clone a Policy

n Download a Policy

n Delete a Policy

n Finalize a Policy

n Grant a Policy

n Edit a Policy Grant

n Delete a Policy Grant

n View VIM Policy Grants

n View Policy Grants For The CNF Package

n Generate a Kubernetes Policy Automatically From CNF Package


n Import Policies from CNF Package

Overview of Kubernetes Policies


Kubernetes policies allow you to have restricted access to the Kubernetes clusters in addition to
the following:

n Determine the privileges required for a CNF instantiation and LCM operation.

n Assign global Kubernetes privileges to CNF templates or CNF instances.

n Run HELM operations with a limited service account.

A service account provides non-interactive and non-human access to services within the
Kubernetes cluster. Application Pods, system components, and entities, whether internal or
external to the cluster, use specific service account credentials. TCA uses service accounts to
communicate with Kubernetes.

The service accounts are generated in TCA during the Kubernetes VIM registration process or
during the workload cluster creation.

The following diagram illustrates the usage of service accounts for accessing the Kubernetes API.

Figure 5-1. Diagram 1

[Diagram: TCA initiates HELM using a service account known to TCA. HELM creates Pods, operators, and custom resource instances; the Pods and operators in turn use service accounts, either known to TCA or managed by the CNF, to access the Kubernetes API.]

TCA uses the service account provided during VIM registration for the following purposes:

n Interact with Kubernetes APIs

n Create resources such as PODs, operators, and custom resource instances

The PODs might use the service account to access the APIs.

Operators are software extensions to Kubernetes that use custom resources to manage
applications and their components. Operators follow Kubernetes principles, mainly the
control loop principle. A custom resource models the Kubernetes application, which has a
desired state and an actual state. The operator implements a custom controller to ensure
that the desired state is equal to the actual state.


The controller resides in a pod and interacts with Kubernetes API in a control loop to move
the actual state to the desired state. The operator may perform (based on the designed CNF)
scheduled jobs on the application. For example, it creates consistent backups. Operators
are shipped in helm charts, including the custom resource definitions and the associated
controllers.

After the HELM resource construction phase is completed, TCA is not aware of what the
operator does on the Kubernetes cluster, similar to a POD with access to the Kubernetes API.

The purpose of the Kubernetes policy in TCA is to control the access level of each of these
entities to Kubernetes. Kubernetes prevents privilege escalation for its clients, which means
that a service account cannot create another service account with a higher level of privilege
than it already has. TCA builds on this principle by providing these entities with a restricted
service account instead of the unrestricted service account. Kubernetes policy controls the level
of restriction.

The level of restriction is defined through the permission model within TCA, which is illustrated in
the following diagram.

Figure 5-2. Diagram 2

[Diagram: Within the Kubernetes cluster, a user permission controls LCM operations (installation and scale) to selected namespaces. Numbered filters on the permission, for example filter=(name=VMware_.*), scope the CNF, the namespaces, and their resources.]

The following CNF-level permissions control access to the resources:

n CNF LCM / READ: Controls lifecycle operation execution, deletion, and read access to a CNF
instance. See Figure 5-1. Diagram 1.

n VIM: Controls the VIM instance to which you can deploy network functions. See Figure 5-2.
Diagram 2.

n Namespace: Controls which namespaces you can use in the cluster. If the CNF is restricted to Kubernetes resources that reside in its namespaces, then applying namespace-based RBAC is sufficient. However, if the CNF needs to read (get, list, watch) or manage (create, update, patch, and delete) resources outside its namespaces, then Kubernetes policies must be applied. See Figure 5-3. Diagram 3 and the example after it.


Figure 5-3. Diagram 3

[Diagram: A CNF installed through HELM maps to one or more namespaces. Namespace-scoped resources are covered by Read-Write access to those namespaces, while cluster-scoped resources require additional permissions granted through Kubernetes policies.]
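
To illustrate the distinction above with generic Kubernetes objects (all names are hypothetical and not taken from this guide), a namespace-scoped Role covers resources inside the CNF namespace, while reading or managing cluster-scoped resources requires a ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cnf-namespace-access      # Read-Write access inside the CNF namespace
  namespace: cnf-ns
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cnf-crd-reader            # read access to a cluster-scoped resource type
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch"]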

CNF Global Permission Enforcement


The CNF global permission enforcement system allows you to assign and enforce global
permissions.

CNF permission enforcement aims at running the HELM commands in the context of a restricted
service account. This restricted service account requires minimum permissions to perform the
LCM operations. A limited service account created based on namespace access alone might not be adequate because it does not provide access to cluster-level resources.

The following diagram illustrates the CNF permission enforcement system.

[Diagram: In the Kubernetes cluster, the NF deployer performs LCM operations on a CNF created from the CNF template. TCA automatically creates a service account, role bindings, roles that grant Read-Write access to the CNF namespaces, and cluster roles; HELM then creates the Kubernetes resources in the context of that service account.]

An LCM operation comprises the following steps:

1 TCA communicates with the target VIM and preconfigures it if required.

2 TCA creates a service account with the necessary permissions (role bindings + roles and
cluster roles).

This step ensures that the CNF does not access any resource other than those allowed by the role bindings (see the sketch after these steps).

3 TCA triggers HELM with the created service account.
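
A minimal sketch of the kind of objects that step 2 produces; the namespace and object names here are hypothetical and are not the exact names that TCA generates:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cnf-instance-sa
  namespace: tca-system            # hypothetical TCA-specific namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cnf-rw
  namespace: cnf-ns                # namespace in which the CNF resides
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cnf-rw-binding
  namespace: cnf-ns
subjects:
- kind: ServiceAccount
  name: cnf-instance-sa
  namespace: tca-system
roleRef:
  kind: Role
  name: cnf-rw
  apiGroup: rbac.authorization.k8s.io

HELM then runs against the cluster as this service account, so the release cannot touch resources outside what the bindings allow.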


A virtual infrastructure administrator assigns the required privileges by extending the RBAC
permission model with policy and policy grants, which is illustrated in the following diagram.

[Diagram: A policy grant links a VIM (selected through a filter), a policy, and a CNF package, providing access to additional resources. The policy carries the Kubernetes rules, for example the following ClusterRole:]

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
rules:
- verbs: [ "create" ]
  apiGroups: [ "apiextensions.k8s.io/v1" ]
  resources: [ "CustomResourceDefinition" ]
  resourceNames: [ "NokiaMME1" ]

Types of Kubernetes Policies


Kubernetes policies can be Role-based access control (RBAC) policy or Pod Security Admission
(PSA) policy.

RBAC Policy
RBAC policy allows you to regulate access to computer or network resources based on the roles
of individual users. See Lifecycle of an RBAC Policy.

PSA Policy
A PSA policy allows you to regulate access to computer or network resources by enforcing Pod
Security Standards. You can implement Pod Security at the cluster level or at the namespace
level by using namespace labels.

The three levels of Pod Security are privileged, baseline, and restricted. If multiple PSA policies
are applied to a CNF, the policy with the more permissive Pod Security Standard is applied to
the CNF.
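
For reference, at the namespace level these three levels map to the standard Kubernetes Pod
Security Admission labels. The following snippet is a generic Kubernetes sketch with a placeholder
namespace name, not output generated by TCA; the values mirror the sample PSA policy shown
later in this chapter.

apiVersion: v1
kind: Namespace
metadata:
  name: cnf-example-ns                              # placeholder namespace name
  labels:
    pod-security.kubernetes.io/enforce: baseline    # reject Pods that violate the baseline standard
    pod-security.kubernetes.io/audit: restricted    # log violations of the restricted standard
    pod-security.kubernetes.io/warn: privileged     # warn on violations of the privileged standard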

Note Both PSA and RBAC policies are applied only to CNFs that are in restricted mode, either
instantiated on a restricted VIM or set to Restricted manually.

Lifecycle of an RBAC Policy


A Kubernetes policy defines a set of permissions that are required in addition to the Read-Write
access to the namespaces of the CNF. As a user with access to the CNF package, you can create
a policy. When you create a policy, you only define the requirement for specific permissions. The
virtual infrastructure administrator grants permission by creating a policy grant. A policy grant
links the policy and VIM with a CNF package. A policy grant may also link with a specific CNF
instance or a CNF LCM operation.

The following table lists the privileges and the corresponding accessible objects.


Table 5-1. Kubernetes Policy Privileges

Privilege Policy Template Policy Grant

System Administrator Read-Write Read-Write

Virtual Infrastructure Administrator Read-Write Read-Write

Virtual Infrastructure Audit Read-Write Read-Only

Virtual Infrastructure Consume Read-Write Read-Only

When you create a policy, it moves to the draft state and an expiration date is set for the policy
automatically. In the draft state, you can edit the policy, and every time you edit it, the expiration
date is extended.

Note
n The draft policy is automatically deleted if you do not finalize it before the expiration date.

n After a policy is granted, it can no longer be edited or deleted.

The lifecycle of a policy is illustrated in the following diagram.

A policy in the draft state can be edited. Publishing the draft moves it to the final state, and
policy grants depend on the final policy.

The policy and policy grant are used during LCM operations to prepare the context in which
HELM is executed. Before executing a HELM operation, TCA creates or updates a service account
and its corresponding roles, cluster roles, and role bindings to represent the context in which
the CNF runs. Based on the policies and policy grants, TCA creates a set of CNF-specific
roles or cluster roles and role bindings, which allow the service account to access global
resources. Roles are created based on the HELM-to-namespace mapping in the instantiated VNF
to provide Read-Write access to the namespaces in which the CNF resides. These service accounts
reside in a TCA-specific namespace and are labeled with the policy grant ID or the CNF instance
ID. Proper labeling of the service accounts allows you to update or delete them when you no
longer require them. A sketch of the resulting binding follows the diagram below.


The diagram extends the enforcement flow with policies: during an LCM operation, TCA uses the
policy grant and the HELM chart to namespace mapping to create the service account, roles,
cluster roles, and role bindings that give the CNF Read-Write access to its namespaces plus the
additional access defined by the granted policy. These objects are auto created by TCA.
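
In Kubernetes terms, the additional access granted by a policy amounts to binding a ClusterRole
derived from the granted policy definition to the CNF service account. The following is a sketch
only; the object names and label keys are hypothetical placeholders, because TCA generates its
own identifiers and labels from the policy grant ID or the CNF instance ID.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnf-example-policy-binding            # hypothetical name
  labels:
    policy-grant-id: pg-1234                  # placeholder label; TCA uses its own label keys and IDs
    cnf-instance-id: cnf-5678                 # placeholder label; TCA uses its own label keys and IDs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnf-example-policy-clusterrole        # ClusterRole created from the granted policy definition
subjects:
- kind: ServiceAccount
  name: cnf-example-sa                        # service account under which HELM runs
  namespace: tca-system                       # placeholder for the TCA-specific namespace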

Create a Policy Manually


You can manually create Kubernetes policies and apply them to any cloud instance.

A policy defines a set of Roles and ClusterRoles that provide additional access to Kubernetes
resources. Because the policy templates are fixed while Kubernetes resource names vary for
every instance, TCA allows a policy to be applied to multiple CNF instances.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click Create New.

4 In the Name field, enter a name for the policy.

5 (Optional) In the Description field, enter a description of the policy.


6 From the Type drop-down, select KUBERNETES_RBAC or KUBERNETES_PSA based on your requirement.

The following table illustrates the policy and sample policy definition.

Policy Type Sample Policy Definition

KUBERNETES_RBAC apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
purpose: istioCRDs
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
resourceNames: ["istiooperators.install.istio.io"]
verbs: ["get", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: SomeOtherAppNamespace
purpose: GrantAccessForOtherAppSevices
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list"]

KUBERNETES_PSA enforce: baseline
audit: restricted
warn: privileged

7 Click Next.

8 In the Add Policy Details page, browse and upload the YAML file that contains policy details
or enter the policy details similar to the sample provided in the preceding table.

9 Click Finish.


Edit a Policy
You can edit a policy to make the changes as required.

Note You cannot edit a policy after granting it.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the policy that you want to edit and click Edit.

4 Make the required changes to the Name, Description, or Type fields and click Next.

5 Click Finish.

Clone a Policy
You can clone any policy and change the name, description, and type of the policy to suit the
new policy.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the policy that you want to clone and click Clone.

4 Make the required changes to the Name, Description, and Type fields and click Next.

5 Click Finish.


Download a Policy
You can download a Kubernetes RBAC or PSA policy as a JSON file.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the policy that you want to download and click Download.

Delete a Policy
You can delete a policy when you no longer require it.

Note You cannot delete a policy after granting it.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the policy that you want to delete and click Delete.

4 Click Delete.

Finalize a Policy
Before finalizing a policy, ensure that no further changes are required, because you cannot
change the policy after finalizing it. You can grant a policy only after finalizing it.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the policy that you want to finalize and click Finalize.

4 Click Finalize.

Grant a Policy
A policy grant applies the requirements defined in a policy to a selected VIM. A VIM
administrator grants the policy. Granting a policy establishes a connection between the policy,
the VIM on which the policy is granted, and filters that select the objects to which the grant applies.

Note You can only grant a policy after finalizing it.


Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Authorization > Kubernetes Policies.

3 Click the vertical ellipsis of the finalized policy that you want to grant and click Grant.

4 In Grant Details, enter a name and click Next.

5 Select the cloud to which you want to grant the policy and click Next.

6 From the ObjectType drop-down, select Network Function Catalog, Network Function
Instance, or Lifecycle Operation.

Each object type has different attributes. The following table illustrates the attributes and
their description for each object type.

ObjectType Attribute Description

Network Function Catalog Name Name of the Network Function Catalog.

Provider Vendor of the Network Function Catalog.

ProductName Product Name of the Network Function Catalog.

Descriptor ID VNFD identifier of the Network Function Catalog.

Descriptor version VNFD version of the Network Function Catalog.

Software version Software version of the Network Function Catalog.

Tag Tag of the Network Function Catalog.

Network Function Instance Name Name of the Network Function Instance.


Tag Tag of the Network Function Instance.

Lifecycle Operation Name Name of the Lifecycle Operation.


The following are the possible Lifecycle Operations:
n Upgrade
n Scale
n Instantiate
n Terminate

7 From the Attribute drop-down, select an attribute.

8 From the Operator drop-down, select an operator.

The following operators are available for each object type. You can select the required
operator.

n Equals to

n Not equals to

n Any of

n Matches.

Note Tag attribute does not support the Matches operator.

Note If no filters are defined for the Network Function Catalog, the policy grant matches every
Network Function Instance created from the given catalog. The same applies to the Network
Function Instance and Lifecycle Operation object types.

9 In the Values field, enter a value for the selected operator.

10 Click Finish.

Edit a Policy Grant


You can edit the filters of a policy grant as required.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Infrastructure > Virtual Infrastructure.


3 Click the vertical ellipsis of the cloud instance in which you want to edit the policy grant and
click View Policy Grants.

4 In the Policy Grants tab, click the vertical ellipsis of the policy grant that you want to edit and
click Edit.

5 Click Next.

6 From the ObjectType drop-down, select Network Function Catalog, Network Function
Instance, or Lifecycle Operation.

7 From the Attribute drop-down, select an attribute.

8 From the Operator drop-down, select an operator.

9 In the Values field, enter a value for the filter.

10 Click Finish.

Delete a Policy Grant


You can delete a grant from a policy when you no longer need it.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Infrastructure > Virtual Infrastructure.

3 Click the vertical ellipsis of the cloud instance in which you want to delete the grant and click
View Policy Grants.

4 In the Policy Grants tab, click the vertical ellipsis of the grant you want to delete and click Delete.

5 Click Delete.


View VIM Policy Grants


You can view all the policy grants for a VIM.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Infrastructure > Virtual Infrastructure.

3 Click the vertical ellipsis of the cloud instance in which you want to view the VIM policy grants
and click View Policy Grants.

View Policy Grants For The CNF Package


You can view all the policy grants for a CNF package.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Catalog > Network Function.

3 Click the CNF package for which you want to view the policy grants.

4 From Actions drop-down, click View Policy Grants.

Generate a Kubernetes Policy Automatically From CNF Package

You can generate a Kubernetes RBAC policy automatically from the CNF package.

A CNF template processor determines the global privileges or namespaces for a CNF.


RBAC policies are generated based on the CNF package Helm chart resources. If resources
created or accessed by the CNF are outside its namespace, TCA creates a new RBAC rule for that
resource.

Note
n Some of the resource names may be generated with the Helm release name or random
names from the Helm chart. Therefore, the CNF deployer or VIM Administrator should review
the automatically generated policies.

n Helm inspection may sometimes fail to detect the custom resource details if the resources are
deployed outside Helm. In such a scenario, a warning message is displayed in the description
of the generated policy template.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Catalog > Network Function.

3 Click the CNF package for which you want to create a policy automatically.

4 From Actions drop-down, click Create Policy.

5 In the Inventory Details tab, click the browse icon in the Select Cloud field.

6 Click the radio button of the cloud instance that you want to select and click OK.

7 In the Helm Charts tab, do one of the following:

n Select Repository URL: Click this radio button to automatically display the repository URL.

n Specify Repository URL: Click this radio button to enter the repository URL, username,
and password in the respective fields.


8 In the Inputs tab, provide input values for all the input parameters, such as pf,
PHC2SYS_CONFIG_FILE, and PTP4L_CONFIG_FILE.

9 In the Review tab, review all the parameters and click Create Policy.

Import Policies from CNF Package


You can provide predefined policies with the CNF package in CSAR to your operators. A new
folder, securityPolicies, is added to the Artifacts folder; it contains the policy definitions in
the YAML format with the following fields:

n policyType: KUBERNETES_RBAC

n name

n description

n definition: The policy definition.

The following is an example for a policy definition in CSAR.

policyType: KUBERNETES_RBAC
name: Policy 1
description: My favourite policy
definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: SomeOtherAppNamespace
purpose: GrantAccessForOtherAppSevices
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list"]

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Catalog > Network Function.

3 Click the CNF package from which you want to import the policies.


4 From Actions drop-down, click Import Policy.

5 Select the policies, which are embedded into the Network Function package that you want to
import from the CNF package.

6 Click Import Policy.

Note You can edit the imported policies until they are granted.

Working with Tags
6
Tags are labels that contain user-defined keys and values. You can attach tags to catalogs,
instances, and virtual infrastructure. Tagging makes it easier to search and sort resources, and to
assign specific rules to a resource.

Tags help in managing, grouping, and filtering catalog resources that have similar properties,
or in restricting access to resources to a certain group of users. For example, you can apply
relevant tags when instantiating a network function catalog and filter the network function
instances using these tags. Or, you can assign an SSD tag to your network functions. This way,
you can guide users to deploy these network functions only on VIMs that have SSD as the
storage profile.

Users with the Tag Admin privilege can create, edit, or delete tags. The System Administrator
and Role Administrator roles have the Tag Admin privilege by default.

Note Existing tags from VMware Telco Cloud Automation version 1.8 and earlier are exported
and added to the list of tags in VMware Telco Cloud Automation 1.9 during upgrade.

Creating, Editing, and Deleting Tags


You can create a tag, assign key-value pairs and objects to it, edit, and delete the tag using
VMware Telco Cloud Automation. For more information, see Managing RBAC Tags.

Adding Tags to VIMs


You can add tags when adding a VIM, or you can edit an existing VIM to add tags to it. For more
information, see Chapter 7 Configuring Your Virtual Infrastructure.

Add Tags to Permissions


You can add pre-created tags to Permissions, and use these tags for filtering objects in the
catalog. For more information about adding tags to Permissions, see Create Permission.


Adding Tags to Network Function Catalogs


You can add tags when onboarding a network function, or you can edit an existing network
function catalog to add tags to it. For more information, see Onboarding a Network Function.

Configuring Your Virtual
Infrastructure 7
Before creating and instantiating network functions and services, you must add your virtual
infrastructure to VMware Telco Cloud Automation.

Note VMware Telco Cloud Automation supports vSphere, VMware Cloud Director, Kubernetes
Cluster, VMware Tanzu, VMware Integrated OpenStack, VMware Cloud on AWS, Google VMware
Engine (GVE), and Microsoft Azure VMware Solution (AVS).

You can add a virtual infrastructure from the Infrastructure > Virtual Infrastructure page. The
Virtual Infrastructure page provides a graphical representation of clouds that are distributed
geographically. Details about the cloud such as Cloud Name, Cloud URL, Cloud Type, Tenant
Name, Connection Status, and Tags are also displayed. To view more information such as TCA-
CP URL, Location, User Name, Network Function Inventory, and so on, click the > icon on a
desired cloud.

This chapter includes the following topics:

n Add a Cloud to VMware Telco Cloud Automation

n Configure the Compute Profile

n Edit a Virtual Infrastructure Account

n Force Sync Inventory

Add a Cloud to VMware Telco Cloud Automation


The first step to managing network functions and services is to add a cloud to VMware Telco
Cloud Automation.

Prerequisites

n To perform this task, you must have the Virtual Infrastructure Admin privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and click + Add.

The Add New Virtual Infrastructure Account page is displayed.


3 Select the type of cloud. Based on the cloud type you select, enter the following virtual
infrastructure details:

Note VMware Telco Cloud Automation auto-imports self-signed certificates. To import, click
Import from the pop-up window and continue.

a For VMware Cloud Director and VMware Integrated OpenStack (VMware VIO):

Cloud Name Enter a name for your virtual infrastructure.

Cloud URL Enter the TCA-CP cloud appliance URL. This URL is
used for making HTTP requests.

Tags Select the key and value pairs from the drop down
menus. To add more tags, click the + symbol.

Username Enter the user name of a cloud user having edit


permissions on the cloud.
n The format for a vCloud Director-based cloud is
username@organization-name.
n The required role for vCloud Director is Organization Administrator.
n The required role for VMware Integrated OpenStack (VIO) is Project Administrator.

Password Enter the infrastructure user password.

Tenant Name Enter the organization name for vCloud Director. Enter
the project name for VIO.

b For Kubernetes and VMware Tanzu:

Cloud Name Enter a name for your virtual infrastructure.

Cloud URL Enter the TCA-CP cloud appliance URL. This URL is
used for making HTTPS requests.

Tags Enter the labels to associate with your cloud.

Cluster Name Enter the cluster name that you provided when
registering the Kubernetes Cluster in TCA-CP Manager.

Kubernetes Config Enter the YAML kubeconfig file for your Kubernetes
Cluster.

Default Isolation Mode Select one of the following:


n Permissive: No restriction is applied during LCM
operations or proxy remote accesses.
n Restricted: Each Network Function has access to
its namespace, and no access is granted to any
other namespace or cluster-level resources.

Note By default, the K8s VIMs are in permissive


mode, and no cluster-level privilege separation is
enforced. To enable restricted policies, you must
set the isolation mode to Restricted.


c For VMware vSphere, Microsoft Azure VMware Solution (AVS), and Google VMware
Engine (GVE):

Cloud Name Enter a name for your virtual infrastructure.

Cloud URL Enter the TCA-CP cloud appliance URL. This URL is
used for making HTTP requests.

Tags Enter the labels to associate with your cloud.

Username Enter the user name of a cloud user having edit


permissions on the cloud. The format for the vSphere
cloud is username@domain-name.

Password Enter the infrastructure user password.

d For Amazon EKS:

Cloud Name Enter a name for your virtual infrastructure.

VMware Telco Cloud Automation Control Plane URL Enter the TCA-CP cloud appliance URL. This URL is
used for making HTTP requests.

Tags Enter the labels to associate with your cloud.

EKS Cluster Name Enter the EKS Cluster name.

EC2 Region Enter the region of your Elastic Compute Cloud (EC2)
systems.

EKS Access Key Enter the EKS Access Key.

EKS Access Secret Enter the secret token, key, or password.

4 Optionally, you can add tags to your cloud. Tags are used for filtering and grouping clouds,
network functions, and network services.

5 Click Validate.

The configuration is validated.

6 Click Add.

Results

You have added the cloud to your virtual infrastructure. You can see an overview of your virtual
infrastructure on the Infrastructure > Virtual Infrastructure page together with a map showing
the physical location of each cloud.

What to do next

To configure additional clouds in your virtual infrastructure, click + Add. To modify your existing
infrastructure, click Edit or Delete.

For VMware Cloud Director, vSphere, and VIO, you must configure the deployment profiles for
your cloud.


Configure the Compute Profile


If you have added a vCloud Director, vSphere, or VIO cloud, you must add a compute profile.
Compute Profiles allow you to specify the underlying resource where the virtual network
functions are deployed.

Prerequisites

You must have the Virtual Infrastructure Admin privileges to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the options symbol against the
virtual infrastructure.

3 Click Manage Compute Profile.

4 Under Compute Profiles, click Add.

5 Enter the following information:

Compute Profiles allow you to specify the underlying resource where the network functions
are deployed.

n For vCloud Director clouds:

n Name - Name of the compute profile.

n Description - A brief description of the profile.

n OrgVdc - Select the Organization vDC from the pop-up window.

n Storage Profile - Select the storage profile from the pop-up window.

n Tag(s) - Enter the labels to associate your compute profile with.

n Location - Enter the cloud location to add the compute profile. To add the compute
profile to the current cloud, select Same as VIM.

n For VIO clouds:

n Name - Name of the compute profile.

n Description - A brief description of the profile.

n AvailabilityZone - Select the Availability Zone.

n Tag(s) - Enter the labels to associate your compute profile with.

n Location - Enter the cloud location to add the compute profile.

n For VMware vSphere clouds:

n Name - Name of the compute profile.

n Description - A brief description of the profile.


n Compute - Select the resource pool or cluster.

n Datastore - Select the datastore for the resource pool or cluster.

n Edge Cluster - Select the Edge Cluster from vCenter NSX-T.

n Folder - Select the folder to deploy the virtual machines.

n Tag(s) - Enter the labels to associate your compute profile with.

n Location - Enter the cloud location to add the compute profile.

6 Click Add.

Results

The compute profile is added to your cloud. To view the compute profile, navigate to
Infrastructure > Virtual Infrastructure and click the > icon against the cloud name.

The Resource Status column in the Virtual Infrastructure page displays the resource use of those
clouds that are configured with vCloud Director, vSphere, or VIO VIMs.

What to do next

To edit a compute profile, navigate to Infrastructure > Virtual Infrastructure and click the cloud
name. In the cloud details page, go to the desired compute profile and click the Edit icon.

Edit a Virtual Infrastructure Account


You can edit an existing virtual infrastructure account to update details such as Cloud Name,
Cloud URL, User Name, Password, and so on.

In this example, we edit the virtual infrastructure details of vCloud Director.

Prerequisites

You must have the Virtual Infrastructure Admin privileges to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the desired virtual infrastructure
to edit.

3 Click the Edit icon.

The Edit Virtual Infrastructure Account page is displayed.

4 Under Virtual Infrastructure Details, edit the desired details.

5 To Validate the information, click Validate.

6 To update the virtual infrastructure account details, click Update.


Force Sync Inventory


If the virtual infrastructure inventory information is not synchronized between VMware Telco
Cloud Automation Control Plane (TCA-CP) and VMware Telco Cloud Automation Manager, you
can initiate a partial sync or a full sync.

Prerequisites

You must have the Virtual Infrastructure Admin privileges to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the options symbol against the
virtual infrastructure.

3 Click Force Sync Inventory.

4 On the Force Sync Inventory Data of TCA Control Nodes pop-up:

a To synchronize only the missing information (for example, alarms, CNF inventory, worker
node IPs, PM reports, and the Harbor repository in partner systems), select Partial Sync from
the drop-down menu.

b To synchronize the entire virtual infrastructure inventory information (for example, alarms,
CNF inventory, worker node IPs, PM reports, and the Harbor repository in partner systems),
click Full Sync.

5 Click OK.

Viewing Your Cloud Topology
8
VMware Telco Cloud Automation provides a visual topology of your cloud sites across
geographies. It enables administrators to manage network functions and services.

To view your cloud sites and services, perform the following steps:

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 From the left navigation pane, click Clouds.

Results

The Clouds page displays the cloud sites that are registered to VMware Telco Cloud Automation.

What to do next

To view details of a cloud site such as Cloud Name, Cloud Type, User Name, and Status, point to
the cloud site.

Working with Infrastructure
Automation 9
Infrastructure Automation can deploy the entire SDDC at a central site, a regional site, or a cell
site. It automatically deploys the SDDC components such as vCenter, NSX, vSAN, vRO, vRLI, and
TCA-CP on the target hosts, and it simplifies the deployment and management of the
telecommunication infrastructure.

This chapter includes the following topics:

n Introduction to Infrastructure Automation

n Deployment Configurations

n Managing Domains

n Viewing Tasks

Introduction to Infrastructure Automation


Automatic deployment of the telecommunication infrastructure.

You can manage the telecommunication infrastructure through Infrastructure Automation. It also
deploys applications on various sites based on the site-specific requirements.

Infrastructure Automation has four stages.

1 Prerequisites

Validations related to the prerequisites of each component before deployment.

2 Configuration and Bootstrapping

Configurations related to networking, appliances, ISO, and domains.

3 Automated SDDC Deployment

Automatic deployment of the SDDC.

4 Ready for Network Functions

All sites are ready. You can configure and initiate the network functions.

Prerequisites
Infrastructure Automation validates various prerequisites before beginning the actual
deployment.


Different sites have different prerequisites that must be fulfilled before beginning the actual
deployment. Infrastructure Automation validates all these prerequisites to ensure an easy and
fast deployment.

Note Ensure that you have different vSAN disk sizes for the cache and capacity tiers; otherwise,
the Cloud Builder might not select the correct cache disk.

Host
n All the hosts in a domain are homogeneous.

n Each host has a minimum of one solid-state disk (SSD) and three SSDs or hard disk drives
for vSAN.

n Each host requires two physical NICs connected to the same physical switch.

Physical Switch
n Jumbo Frames enabled on the Physical Switch.

n DHCP enabled on the NSX host overlay network.

n Each ESXi server has a minimum of two physical NICs connected to the switch in trunk mode.
Access to all the VLANs (Management Network, vMotion Network, vSAN, NSX Host Overlay
Network, NSX Edge Overlay Network, Uplink 1 and Uplink 2) on the trunk port.

Domain
n Each domain has a minimum of four hosts.

n DNS name configured for all the appliances in all the domains.

n ESXi servers are installed for each domain through the PXE server or an ISO image.

n A common web server to access the software images at the central site.

n Central and regional sites have the internet connectivity.

Network
n VMware Telco Cloud Automation and VMware Telco Cloud Automation Control Plane might
require unrestricted communication to connect.tec.vmware.com and hybridity-depot.vmware.com
over TCP port 443 for license activation and updates.

n VMware Telco Cloud Automation uses different ports for different services. For details, see
VMware Telco Cloud Automation Ports.


n Time synchronization through NTP for all VLANs.

n Unique VLANs are created for following networks on the physical switch:

Network MTU Description

Management Network 1500 Used to connect the management components of the software like
vCenter, ESXi, NSX Manager, VMware Telco Cloud Automation, and
VMware Telco Cloud Automation Control Plane.

vMotion Network 9000 Used for the live migration of virtual machines. It is an L2 routable
network and used only for the vMotion traffic within a data center.

vSAN 9000 Used for the vSAN traffic. It is an L2 routable network and is used only for
the vSAN traffic within a data center.

NSX Host Overlay Network 9000 Used for the NSX host overlay traffic. Requires routability with the
NSX Edge overlay VLAN in the same site. This network requires a DHCP server
to provide IPs to the NSX host overlay vmk interfaces. The DHCP pool
size should at least equal the number of ESXi hosts on this network.

NSX Edge Overlay Network 9000 Used for the overlay traffic between the hosts and Edge Appliances.

Uplink 1 9000 Used for the uplink traffic. Uplink 1 is in the same subnet as the Top of
Rack (ToR) switch uplink address.

Uplink 2 9000 A redundant path for the uplink traffic. Uplink 2 is in the same subnet as
the Top of Rack switch uplink address.

n Each ESXi server has a minimum of two physical NICs connected to the switch in trunk mode.
Access to all the VLANs (Management Network, vMotion Network, vSAN, NSX Host Overlay
Network, NSX Edge Overlay Network, Uplink 1 and Uplink 2) on the trunk port.

n Configure the same NTP server on both the Cloud Builder and the ESXi host.

n Run the command ntpq -pn on both the Cloud Builder and the ESXi host and check that the
NTP server entry in the output is marked with an asterisk (*).

n Name resolution through DNS for all appliances in all the domains.


n DNS records for all appliances with forward and reverse resolution.

Note You can create custom naming schemes for the appliances. You can also select a naming
scheme from the Appliance Naming Scheme drop-down menu when configuring the global
parameters, or override the naming scheme when configuring domains. The options available
for the appliance naming scheme are:
n {applianceName}-{domainName}

n {applianceName}

n Custom

For example: If the naming scheme is set to {applianceName}-{domainName}, the name for a
Virtual Center appliance is vc-cdc1.telco.example.com, where:
n vc is the appliance name.

n cdc1 is the domain name.

n telco.example.com is the DNS suffix.

License
The licenses for the following components are required:

n VMware vSphere (ESXi)

n VMware NSX-T Data Center

n VMware Telco Cloud Automation

n VMware Telco Cloud Automation Control Plane

n VMware vCenter Server

n VMware vSAN

n (Optional) VMware vRealize Log Insight

Note The actual license requirements may change based on the components installed.

Software Versions Interoperable with Infrastructure Automation


The following table lists the software versions interoperable with Infrastructure Automation.

Software Version

vCenter Server 7.0 U1a, 7.0 U1c, 7.0 U2, 7.0 U2d, 7.0 U3k, 8.0b, 8.0u1

ESXi 7.0 U1a, 7.0 U1c, 7.0 U2, 7.0 U2d, 7.0 U3k, 8.0b, 8.0u1

Note For the RAN sites, you have to manually upgrade VMware vCenter and ESXi. For details,
see vCenter Upgrade.


Managing Specification File


Download the specification template, upload the modified specification template, and download
the current specification template.

You can use the specification template to provide infrastructure details for automated
deployment. You can also upload a new specification or download the current specification of
the deployed infrastructure.

Note For the changes required in cloud native deployment, see Specification File for Cloud
Native.

Procedure

1 To download a specification template, click Download Spec Template.

2 To download the current specification of the infrastructure, click Download Spec.

3 To upload a new specification for the infrastructure, click Upload Spec and select the new
specification file.

Specification File for Cloud Native


Changes in the specification file for cloud native deployment.

Cloud native deployment requires additional configuration in cloud specification file.

Prerequisites

Download the cloud native specification file from VMware Telco Cloud Automation.


Procedure

u Open the Specification file and configure the following parameters for cloud native
deployment.

Parameter Description

pscUserGroup The user name that creates the Kubernetes clusters in the cloud native
VMware Telco Cloud Automation. You can specify this parameter under the
settings section or the domains section. The pscUserGroup parameter under the
settings section acts as the global value, and the pscUserGroup parameter
under a domain overrides the value for that specific domain.

Note You must specify pscUserGroup, either in settings, in domains, or in both
settings and domains.

TCA_BOOTSTRAPPER The bootstrapper for the cloud native VMware Telco Cloud Automation.
Add the following details:
n type

n name

n ipIndex

n rootPassword

n adminPassword

TCA_MANAGEMENT_CLUSTER The cluster manager for the cloud native VMware Telco Cloud Automation.
Add the following details:
n type

n name

n ipIndex

n clusterPassword

TCA_CP The load balancer for VMware Telco Cloud Automation control plane (TCA-
CP).
Add the following details:
n type

n name

n ipIndex


Parameter Description

TCA Load balancer for VMware Telco Cloud Automation manager in the cloud
native VMware Telco Cloud Automation.
Add the following details:
n type

n name

n ipIndex

airgapServer The parameter is required only for the airgapped environment.


Add the following details:
n fqdn

n caCert

Note
n Encode the CA certificate with BASE64 encoding.
n For adding the images (.OVA files) for cloud builder deployment, see
Add Images or OVF.

Note
n You can use the domain settings to override the values provided in the settings.

n You cannot override the TCA_BOOTSTRAPPER appliance type in the management domain of
a central site.

n You cannot override the appliance type TCA in the workload domain of a central site.

See the reference code for cloud-specific changes.

{
"domains": [
{
"name": "cdc",
"type": "CENTRAL_SITE",
"subType": "MANAGEMENT",
"enabled": true,
"preDeployed": {
"preDeployed": false
},
"minimumHosts": 3,
"location": {
"city": "Bengal\u016bru",
"country": "India",
"address": "",
"longitude": 77.56,
"latitude": 12.97
},

"switches": [
{
"name": "cdc-dvs001",
"uplinks": [
{


"pnic": "vmnic0"
},
{
"pnic": "vmnic1"
}
]
}
],
"services": [
{
"name": "networking",
"type": "nsx",
"enabled": true,
"nsxConfig": {
"shareTransportZonesWithParent": false
}
},
{
"name": "storage",
"type": "vsan",
"enabled": true,
"vsanConfig": {
"vsanDedup": false
}
}
],
"networks": [
{
"switch": "cdc-dvs001",
"type": "management",
"name": "management",
"segmentType": "vlan",
"vlan": 3406,
"mtu": 1500,
"mac_learning_enabled": false,
"gateway": "172.17.6.253",
"prefixLength": 24,
"_comments": [
"If K8S master/worker nodes will be installed on this network,
then it requires DHCP configured on the network"
]
},
{
"switch": "cdc-dvs001",
"type": "vMotion",
"name": "vMotion",
"segmentType": "vlan",
"vlan": 3408,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.8.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.8.10",


"end": "172.17.8.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "vSAN",
"name": "vSAN",
"segmentType": "vlan",
"vlan": 3409,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.9.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.9.10",
"end": "172.17.9.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "nsxHostOverlay",
"name": "nsxHostOverlay",
"segmentType": "vlan",
"vlan": 3407,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.7.253",
"prefixLength": 24,
"_comments": [
"This network requires DHCP configured on the network"
]
},
{
"switch": "cdc-dvs001",
"type": "nsxEdgeOverlay",
"name": "nsxEdgeOverlay",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.10.10",
"end": "172.17.10.20"
}
]
},
{
"switch": "cdc-dvs001",
"type": "uplink",


"name": "uplink1",
"segmentType": "vlan",
"vlan": 3411,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.11.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.11.100",
"172.17.11.101"
]
},
{
"switch": "cdc-dvs001",
"type": "uplink",
"name": "uplink2",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.10.100",
"172.17.10.101"
]
}
],
"applianceOverrides": [
{
"name": "tb1-cdc-cb",
"enabled": true,
"id": "app-cc834fe9-2f5f-4d7c-9538-4f6cf84a0c3b",
"nameOverride": "tb1-cdc-cb",
"type": "CLOUD_BUILDER",
"ipIndex": 32,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-sddcmgr",
"enabled": true,
"id": "app-94dc5b6f-f034-4d01-be12-a9919bb851e9",
"nameOverride": "tb1-cdc-sddcmgr",
"type": "SDDC_MANAGER",
"ipIndex": 33,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vc",
"size": "small",
"enabled": true,
"id": "app-20ae3412-d7bb-46fb-a213-3eee4980c59b",
"nameOverride": "tb1-cdc-vc",


"type": "VC",
"ipIndex": 31,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vro",
"enabled": true,
"id": "app-652abcba-954f-4ef2-b66d-ef3ac80ac923",
"nameOverride": "tb1-cdc-vro",
"type": "VRO",
"ipIndex": 40,
"rootPassword": "Base64 encoded password"
},
{
"name": "nsx-cdc",
"size": "large",
"enabled": true,
"id": "app-2d8b171b-8ed0-4093-9492-918e9cbb8881",
"nameOverride": "tb1-cdc-nsx",
"type": "NSX_MANAGER",
"ipIndex": 34,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx001",
"enabled": true,
"id": "app-69d10093-e451-40d4-8d11-091c87978037",
"nameOverride": "tb1-cdc-nsx01",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 35
},
{
"name": "nsx002",
"enabled": true,
"id": "app-ba4fdd21-7f99-4162-939e-7158f82bb4cd",
"nameOverride": "tb1-cdc-nsx02",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 36
},
{
"name": "nsx003",
"enabled": true,
"id": "app-f7fd0803-546a-43ae-8b8c-2112c128b12e",
"nameOverride": "tb1-cdc-nsx03",
"parent": "tb1-cdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 37
},
{
"name": "edgecluster001",


"size": "large",
"enabled": true,
"id": "app-0bd34f11-7970-44eb-9ce0-e969e9a4ef80",
"nameOverride": "edge-cdc",
"tier0Mode": "ACTIVE_STANDBY",
"type": "NSX_EDGE_CLUSTER",
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx-edge001",
"enabled": true,
"id": "app-4f44afa4-e83d-4129-9fef-1854d762fc67",
"nameOverride": "tb1-cdc-edge01",
"parent": "edge-cdc",
"type": "NSX_EDGE",
"ipIndex": 38
},
{
"name": "nsx-edge002",
"enabled": true,
"id": "app-c3311d3a-0931-4b77-9f3f-d4e976e0e88f",
"nameOverride": "tb1-cdc-edge02",
"parent": "edge-cdc",
"type": "NSX_EDGE",
"ipIndex": 39
},
{
"name": "tb1-cdc-mgmt-clus",
"enabled": true,
"id": "app-313a7384-55d5-42ba-aa1e-023b935a3770",
"nameOverride": "tb1-cdc-mgmt-clus",
"type": "TCA_MANAGEMENT_CLUSTER",
"ipIndex": 45,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-bootstrapper-clus",
"enabled": true,
"id": "app-d7376ba4-fc02-4612-8fee-62f1df817b86",
"nameOverride": "tb1-cdc-bootstrapper-clus",
"type": "BOOTSTRAPPER_CLUSTER",
"ipIndex": 46,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-tca",
"enabled": true,
"id": "app-b90c397a-c33b-4eb7-80dc-d7fc072b1e13",
"nameOverride": "tb1-tca",
"type": "TCA",
"ipIndex": 42,
"rootPassword": "Base64 encoded password"
},


{
"name": "tb1-cdc-tcacp",
"enabled": true,
"id": "app-21d4a277-de90-4c19-a2ea-19d67aa48f36",
"nameOverride": "tb1-cdc-tcacp",
"type": "TCA_CP",
"ipIndex": 43,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vrli",
"enabled": true,
"id": "app-c2acb9ef-9e9c-4f79-b792-fbc5016132e7",
"nameOverride": "tb1-cdc-vrli",
"type": "VRLI",
"ipIndex": 41,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "vsannfs",
"enabled": false,
"id": "app-85e35807-12c9-471e-bb5d-11c68c039af5",
"nameOverride": "tb1-cdcvsanfs",
"type": "VSAN_NFS",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}
{nodeIndex}-{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1
operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1
operation."
],
"rootPassword": "Base64 encoded password"
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
},


{
"name": "rdc",
"type": "REGIONAL_SITE",
"subType": "MANAGEMENT",
"enabled": false,
"preDeployed": {
"preDeployed": false
},
"minimumHosts": 3,
"location": {
"city": "Bengal\u016bru",
"country": "India",
"address": "",
"longitude": 77.56,
"latitude": 12.97
},
"licenses": {
"vc": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"nsx": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"esxi": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"vsan": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"tca_cp": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
],
"vrli": [
"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
]
},
"switches": [
{
"name": "rdc-dvs001",
"uplinks": [
{
"pnic": "vmnic0"
},
{
"pnic": "vmnic1"
}
]
}
],
"services": [
{
"name": "networking",
"type": "nsx",
"enabled": true,
"nsxConfig": {


"shareTransportZonesWithParent": false
}
},
{
"name": "storage",
"type": "vsan",
"enabled": true,
"vsanConfig": {
"vsanDedup": false
}
}
],
"networks": [
{
"switch": "rdc-dvs001",
"type": "management",
"name": "management",
"segmentType": "vlan",
"vlan": 3406,
"mtu": 1500,
"mac_learning_enabled": false,
"gateway": "172.17.6.253",
"prefixLength": 24,
"_comments": [
"If K8S master/worker nodes will be installed on this network,
then it requires DHCP configured on the network"
]
},
{
"switch": "rdc-dvs001",
"type": "vMotion",
"name": "vMotion",
"segmentType": "vlan",
"vlan": 3408,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.8.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.8.21",
"end": "172.17.8.30"
}
]
},
{
"switch": "rdc-dvs001",
"type": "vSAN",
"name": "vSAN",
"segmentType": "vlan",
"vlan": 3409,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.9.253",
"prefixLength": 24,


"ipPool": [
{
"start": "172.17.9.21",
"end": "172.17.9.30"
}
]
            },
{
"switch": "rdc-dvs001",
"type": "nsxHostOverlay",
"name": "nsxHostOverlay",
"segmentType": "vlan",
"vlan": 3407,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.7.253",
"prefixLength": 24,
"_comments": [
"This network requires DHCP configured on the network"
]
},
{
"switch": "rdc-dvs001",
"type": "nsxEdgeOverlay",
"name": "nsxEdgeOverlay",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipPool": [
{
"start": "172.17.10.21",
"end": "172.17.10.30"
}
]
},
{
"switch": "rdc-dvs001",
"type": "uplink",
"name": "uplink1",
"segmentType": "vlan",
"vlan": 3411,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.11.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.11.102",
"172.17.11.103"
]
},
{
"switch": "rdc-dvs001",


"type": "uplink",
"name": "uplink2",
"segmentType": "vlan",
"vlan": 3410,
"mtu": 9000,
"mac_learning_enabled": false,
"gateway": "172.17.10.253",
"prefixLength": 24,
"ipAddresses": [
"172.17.10.102",
"172.17.10.103"
]
}
],
"applianceOverrides": [
{
"name": "tb1-cdc-cb",
"enabled": true,
"id": "app-17d69bcf-a3c4-4f74-b9c9-777f7857afd8",
"nameOverride": "tb1-rdc-cb",
"type": "CLOUD_BUILDER",
"ipIndex": 52,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-sddcmgr",
"enabled": true,
"id": "app-7dbaab47-6995-4147-b652-4722c23cfa69",
"nameOverride": "tb1-rdc-sddcmgr",
"type": "SDDC_MANAGER",
"ipIndex": 53,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vc",
"size": "small",
"enabled": true,
"id": "app-b5ead9d7-0ac5-4a24-9b61-763527b3391f",
"nameOverride": "tb1-rdc-vc",
"type": "VC",
"ipIndex": 51,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vro",
"enabled": true,
"id": "app-890f0dd2-08c9-4b95-83d3-a4272ea93886",
"nameOverride": "tb1-rdc-vro",
"type": "VRO",
"ipIndex": 60,
"rootPassword": "Base64 encoded password"
},


{
"name": "nsx-cdc",
"size": "large",
"enabled": true,
"id": "app-cfa7e716-6056-4843-924d-bdb950878e6a",
"nameOverride": "tb1-rdc-nsx",
"type": "NSX_MANAGER",
"ipIndex": 54,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx001",
"enabled": true,
"id": "app-1e02e9bd-a526-4343-a217-7e0b494b0c22",
"nameOverride": "tb1-rdc-nsx01",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 55
},
{
"name": "nsx002",
"enabled": true,
"id": "app-f4738cd2-7414-441c-9b0a-303962a784af",
"nameOverride": "tb1-rdc-nsx02",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 56
},
{
"name": "nsx003",
"enabled": true,
"id": "app-c7395a79-7390-4d8e-a47b-ceaa020fb138",
"nameOverride": "tb1-rdc-nsx03",
"parent": "tb1-rdc-nsx",
"type": "NSX_MANAGER_NODE",
"ipIndex": 57
},
{
"name": "edgecluster001",
"size": "large",
"enabled": true,
"id": "app-f9b7b4aa-ec57-406d-aad1-b0d237f3866f",
"tier0Mode": "ACTIVE_STANDBY",
"type": "NSX_EDGE_CLUSTER",
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"name": "nsx-edge001",
"enabled": true,
"id": "app-ec56d220-a465-42ac-9a21-774b0c8fbc81",
"nameOverride": "tb1-cc-edge01",


"parent": "edgecluster001",
"type": "NSX_EDGE",
"ipIndex": 70
},
{
"name": "nsx-edge002",
"enabled": true,
"id": "app-7536f85c-3d57-4854-b1a9-444408f77582",
"nameOverride": "tb1-cc-edge02",
"parent": "edgecluster001",
"type": "NSX_EDGE",
"ipIndex": 71
},
{
"name": "tb1-cdc-bootstrapper",
"enabled": true,
"id": "app-551ee02b-b947-400d-b655-9c0b9db21813",
"nameOverride": "tb1-cdc-bootstrapper",
"type": "TCA_BOOTSTRAPPER",
"ipIndex": 44,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-mgmt-clus",
"enabled": true,
"id": "app-24357516-acb4-40f4-872e-bc0ee56c917f",
"nameOverride": "tb1-rdc-mgmt-clus",
"type": "TCA_MANAGEMENT_CLUSTER",
"ipIndex": 64,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-bootstrapper-clus",
"enabled": true,
"id": "app-89649f4c-8787-4fe6-8afb-7ffc5f622aad",
"nameOverride": "tb1-rdc-bootstrapper",
"type": "BOOTSTRAPPER_CLUSTER",
"ipIndex": 63,
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-tcacp",
"enabled": true,
"id": "app-713f4f8f-5603-4c9f-9811-796b2523c6fc",
"nameOverride": "tb1-rdc-tcacp",
"type": "TCA_CP",
"ipIndex": 62,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "tb1-cdc-vrli",
"enabled": true,
"id": "app-51ebc34e-3e46-4462-a836-c538ddd3847b",


"nameOverride": "tb1-rdc-vrli",
"type": "VRLI",
"ipIndex": 61,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"name": "vsannfs",
"enabled": false,
"id": "app-cd581ed8-f481-4ac4-ace3-ce84fade5d93",
"type": "VSAN_NFS",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}
{nodeIndex}-{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1
operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1
operation."
],
"rootPassword": "Base64 encoded password"
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
}
],
"settings": {
"ssoDomain": "vsphere.local",
"pscUserGroup": "Administrators",
"enableCsiZoning": false,
"validateCloudBuilderSpec": true,
"csiRegionTagNamingScheme": "region-{domainName}",
"clusterCsiZoneTagNamingScheme": "zone-{domainName}",
"hostCsiZoneTagNamingScheme": "zone-{hostname}",
"dnsSuffix": "telco.net",
"airgapServer": {
"fqdn": "airgap-server.telco.net",
"caCert":
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZvVENDQTRtZ0F3SUJBZ0lKQU4rcEtkajNCdGFiTUEwR0NTcU


dTSWIzRFFFQkRRVUFNR2N4Q3pBSkJnTlYKQkFZVEFsVlRNUkF3RGdZRFZRUUlEQWROZVZOMFlYUmxNUkV3RHdZRFZRU
UhEQWhOZVVOdmRXNTBlVEVPTUF3RwpBMVVFQ2d3RlRYbFBjbWN4RFRBTEJnTlZCQXNNQkUxNVFuVXhGREFTQmdOVkJB
TU1DMlY0WVcxd2JHVXVZMjl0Ck1CNFhEVEl5TURReU9ERXhNelF6TmxvWERUTXlNRFF5TlRFeE16UXpObG93WnpFTE1
Ba0dBMVVFQmhNQ1ZWTXgKRURBT0JnTlZCQWdNQjAxNVUzUmhkR1V4RVRBUEJnTlZCQWNNQ0UxNVEyOTFiblI1TVE0d0
RBWURWUVFLREFWTgplVTl5WnpFTk1Bc0dBMVVFQ3d3RVRYbENkVEVVTUJJR0ExVUVBd3dMWlhoaGJYQnNaUzVqYjIwd
2dnSWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElDRHdBd2dnSUtBb0lDQVFEWkV4M044VEs3NXk4RU5kVFd0WEl1cjFJ
R3Q0Z3oKaStEZmdCemR1NkJscnNSZ3RSc0UrcDR3Y0xzQ3B5NjJHNStsb0pLL0U5dlFoQWRQVkxvK1lBdlZXTEVkNjk
wdApQcW5iWHpDU3U0QjRHWVZ4Tytjd0ZlTTN5ZXBjYklDK2NGNVcrdndDaDZvaVZjS1RBVjNXeXIrVVd6TXYvem1VCj
dNNHdHbTY3VTJNOFJHR0JNY0FLOFBjblNwRzl5S01QcHA5eFVQZUx1UlhHalB6VFlXTGkySll4aERva3NLQysKVHYwT
25rTkQyUnM3UDZhU2VmSkJROTdvcVpxQllva0o4TjYzaTJpemcySDczM2F4S0Y4WVNUS2NibG5kQVVSNQpPVUMxMHZ3
OTNxaHdCekZVM1RrZzR1cUxvd3dxOHI0MC92VXE5Z2M3eFF2RlFNU3JvcldHVUphZjJHQkRzbUFRCmlXQnpIVmgvTk5
GdlkzQXBnLzhCRXpKRE9LUGxSTDlpQTZTUzFxaGlOVGlwZ3VEV0U3THVDeWJPd1l2QnN0SlIKd0ZIN0s1SDJWSkVjbF
RVdkZkZjJQZWJRU2tXLy9VeTFzQlVtRTcySXNQL2k3S0dhQ1dDUVZ4MHIzUXkwclVneQoxWFFtWlFsbUw5ZVpOc2Q5e
k9EYnk2eVlmL1Z4N1Z2b1FDQWtRZzJqYlVnTmJuTWZ4dWVuaFFHWjI0cW1XWXRqCnFoakJWcjBTU1lwUk5reGdwc2Vi
M3Y0bkRyNU1XczRzUldjWmlpOHZmdTZMUnNJclA1TERlMDRzaGtCeVJmZWYKQ2Z3MXFhc3FIalB6Z1g3N3pTTW9CSk5
LR2NUOFU4SEJKZ1Z2TWQ1bVFrbE1yVzYrNUJrMEpvK0FtM2xyb0tiNwppNWxVWnNPNzJiN29WUUlEQVFBQm8xQXdUak
FkQmdOVkhRNEVGZ1FVOEpBSnBpdUZtOGFDNDhTcnl0WkZNcENMCmZtVXdId1lEVlIwakJCZ3dGb0FVOEpBSnBpdUZtO
GFDNDhTcnl0WkZNcENMZm1Vd0RBWURWUjBUQkFVd0F3RUIKL3pBTkJna3Foa2lHOXcwQkFRMEZBQU9DQWdFQVRQaFFH
Rml4RzBNeGh0SEtkVzhQTHVwbGM4YlBtSmZuWnpVMApaUkRjRzVKNjhNT01CRW1Uc2lHY2h4djU0enF1RzB2ZHVhNHc
vRjhVYXd3bGk4Tkw3anlpYTRuU1oxbEczajAwClEzU1dCbk5kMmFVc1U2TGxrTkpHTFNsU2hYMDNEcGlHdXQxYzRrbl
djdGxzTkRoSm5ESUhzdzNDU1UrYjZKb1IKREJjbE9YVFBhT25GV2ZRMzhJc3Q5Nlk0dWxETXZLdEo2YkduOUtQdldIT
kNTeCswVFIzNkVYVWVzeTliOWR4RQpJYTFEbENlSFRja1AzOXMzTzkxeElXZE0xK1NDRXlHUklMOHZBK3BHTnk3RUJF
Rzlsd3ZvYWhKdFNlbHkyYU9ZCjZJbkVCaG0rL1pFNGtOc282VkVmblJKZnY2bVBRRlAwZTJJanI2aTI4NmNGOFQ5Wkh
pL2hyS3U0djdvSVpSNEoKbEFuTzBmQkNCcFZhL2NJa1R6WXhzSUZFTUVzTHFCSkJZaEZpWWdsVmthTVJiNnZWTW5yNE
l2bHI0VGRObytZTApDSXlmR3N2NWdyYzNZb1JiZ09vY3lYYkpvQmdBdy9pK3ZwMzllNU94ZWR1R3hwRGI0Z0hyNHkze
UdkVE4xWWVDCnJJR3FPdm5rYzZWcWNGbXpLakZndDNLSDQ4V3JoSWg2aU90ZFhQV3l1ektyWGdwSFI3WTRNdUN5K001
THFabXAKdGpzZVNYTEN0OCs2MVhLRGNFZEtLc3ltL2JPbEp1TDJVOW9VaUdFaVp6Q0wycFdxMWU0Z3doNTlwWWRJaUY
yQgpXRzhQaUx1eXZuOG9EZkEwdklIaUhVYlVDdkVkYXNSZTB2Z3JiMGwwSjBHVWlnM3J0MHZsNm4zMG1aa1gzVUs4Cj
BsS0NoSFE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
},
"ntpServers": [
"172.17.6.14"
],
"dnsServers": [
"172.17.6.13"
],
"applianceNamingScheme": "{applianceName}",
"proxy": {
"enabled": false
},
"appliancesSharedWithManagementDomain": [
{
"type": "VRLI",
"enabled": false
}
]
},
"appliances": [
{
"type": "CLOUD_BUILDER",
"id": "app-f988dfbb-8392-436f-a66c-22deaec7919c",
"name": "tb1-cdc-cb",
"ipIndex": 32,
"enabled": true,
"adminPassword": "Base64 encoded password",


"rootPassword": "Base64 encoded password"


},
{
"type": "SDDC_MANAGER",
"id": "app-54f28df8-cd3d-4883-8df5-94b62c2733b0",
"name": "tb1-cdc-sddcmgr",
"ipIndex": 33,
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "VC",
"id": "app-3b1dbec6-ea4f-4a19-9296-351a6b659b88",
"name": "tb1-cdc-vc",
"ipIndex": 31,
"size": "small",
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "VRO",
"id": "app-411b4fbc-c039-4b3f-9a45-a64b3c266acf",
"name": "tb1-cdc-vro",
"ipIndex": 40,
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "NSX_MANAGER",
"id": "app-16e92560-b5da-445c-bc7b-9a4fcd543872",
"name": "nsx-cdc",
"ipIndex": 34,
"size": "large",
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"type": "NSX_MANAGER_NODE",
"id": "app-ab012e4f-d6cc-449f-bcc7-cc1c2e154435",
"name": "nsx001",
"ipIndex": 35,
"parent": "nsx-cdc"
},
{
"type": "NSX_MANAGER_NODE",
"id": "app-59d6587d-6036-4df3-9487-c273581b5383",
"name": "nsx002",
"ipIndex": 36,
"parent": "nsx-cdc"
},
{


"type": "NSX_MANAGER_NODE",
"id": "app-2a285da3-0a18-474d-851b-0a1b84d31646",
"name": "nsx003",
"ipIndex": 37,
"parent": "nsx-cdc"
},
{
"type": "NSX_EDGE_CLUSTER",
"id": "app-6d8710c3-8004-4c95-a760-220febe7a358",
"name": "edgecluster001",
"size": "large",
"tier0Mode": "ACTIVE_STANDBY",
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password",
"auditPassword": "Base64 encoded password"
},
{
"type": "TCA_BOOTSTRAPPER",
"id": "app-21890852-0f98-4fae-88bd-db316179e905",
"name": "tb1-cdc-bootstrapper",
"ipIndex": 44,
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA_MANAGEMENT_CLUSTER",
"id": "app-12634788-3203-4d33-8a01-05f1a9166a89",
"name": "tb1-cdc-mgmt-clus",
"ipIndex": 45,
"clusterPassword": "Base64 encoded password",
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "BOOTSTRAPPER_CLUSTER",
"id": "app-0bcbc44e-ac2e-45ed-8f53-e8d5002e030d",
"name": "tb1-cdc-bootstrapper-clus",
"ipIndex": 46,
"clusterPassword": "Base64 encoded password",
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA",
"id": "app-f2d56bde-b2de-48d4-b6ef-372c46a4f3a5",
"name": "tb1-tca",
"ipIndex": 42,
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "TCA_CP",
"id": "app-8c13c502-ce1c-463d-a6c3-541b36e76558",

"name": "tb1-cdc-tcacp",
"ipIndex": 43,
"enabled": true,
"rootPassword": "Base64 encoded password"
},
{
"type": "NSX_EDGE",
"id": "app-14f9bc62-bbcc-4e19-aae5-fba346bada85",
"name": "nsx-edge001",
"ipIndex": 38,
"parent": "edgecluster001"
},
{
"type": "NSX_EDGE",
"id": "app-777e027a-cfcd-46ee-a389-4d747786545a",
"name": "nsx-edge002",
"ipIndex": 39,
"parent": "edgecluster001"
},
{
"type": "VRLI",
"id": "app-5d1559fa-0850-429f-b4e3-0e1707e2d3b6",
"name": "tb1-cdc-vrli",
"ipIndex": 41,
"enabled": true,
"adminPassword": "Base64 encoded password",
"rootPassword": "Base64 encoded password"
},
{
"type": "VSAN_NFS",
"id": "app-cc6659f3-1ef2-4d61-a388-14ba5afaa6c9",
"name": "vsannfs",
"ipIndexPool": [
{
"start": 47,
"end": 49
}
],
"nodeCount": 3,
"enabled": true,
"shares": [
{
"name": "default-share",
"quotaInMb": 10240
}
],
"_comments": [
"FQDN for each appliance will be generated as {appliance.name}{nodeIndex}-
{domain.name}.{dnsSuffix}.",
"nodeCount should be same with host number provisioned in day1 operation.",
"Make sure ipIndexPool size larger than nodeCount",
"nodeCount should be same with host number provisioned in day1 operation."
],
"rootPassword": "Base64 encoded password"
}

],
"images": {
"cloudbuilder": "http://172.17.6.12/images/2.1_images/VMware-Cloud-
Builder-4.4.0.0-19312029_OVF10.ova",
"vro": "http://172.17.6.12/images/2.1_images/
O11N_VA-8.6.2.20205-19108182_OVF10.ova",
"tca": "http://172.17.6.12/images/2.1_images/VMware-Telco-Cloud-
Automation-2.1.0-19714586.ova",
"haproxy": [],
"kube": [
"http://172.17.6.12/images/2.1_images/photon-3-kube-v1.22.8-vmware.1-tkg.1-
d69148b2a4aa7ef6d5380cc365cac8cd-19632105.ova"
],
"vsphere_plugin": "http://172.17.6.12/images/2.1_images/vco-plugin.zip",
"vrli": "http://172.17.6.12/images/2.1_images/VMware-vRealize-Log-
Insight-8.6.2.0-19092412_OVF10.ova"
},
"deleteDomains": []
}

Configuration and Bootstrapping


Configuration and bootstrapping involve configuring the global settings, appliances, images, and domain-specific settings.

The configuration and bootstrapping use two tabs.

n Configuration

On this tab, you can configure global settings, appliances, and images or virtualization files
(OVF).

n Domains

On this tab, you can configure network and licenses for various sites. For example, you can
configure a central site, a regional site, a compute cluster, or a cell site group. You can also
add hosts.

Automated SDDC Deployment


After the configuration and bootstrapping phase is complete, Infrastructure Automation deploys the software-defined data center (SDDC).

Note The SDDC deployment starts only when the minimum number of hosts are registered for a
domain and the domain is enabled.

As a part of the SDDC deployment, the following software components are installed according to
the domain type.

Site Deployment Type

Central A full SDDC with Telco Cloud Automation. It includes:


n VMware vCenter
n VMware NSX
n VMware vRealize Orchestrator
n VMware vRealize Log Insight
n VMware Telco Cloud Automation
n VMware Telco Cloud Automation Control Plane

Regional A full SDDC. It includes:


n VMware vCenter
n VMware NSX
n VMware vRealize Orchestrator
n VMware Telco Cloud Automation Control Plane
The central site controls operations in the regional site.

Compute Cluster A vCenter cluster. A central site or a regional site manages the compute cluster.

Cell Site Group A set of ESXi hosts on which RAN is deployed. A central or regional site manages the hosts in the Cell Site Group.

Ready for Network Function


This is the final step of Infrastructure Automation.

Infrastructure Automation deploys all the required applications for the sites and makes the sites ready for network functions. Design and deployment of network services and functions can start. You can now create and instantiate the network functions.

Roles
You can perform different operations based on your role.

Based on their roles and associated permissions, users can perform different operations in Infrastructure Automation.

System Administrator
A system administrator can manage all the sites.

A system administrator manages the existing sites that are configured and deployed. A system administrator performs operations that include:

n Adding new licenses.

n Adding new sites.

n Modifying the existing configurations.

Note The system administrator can perform operations depending on the permissions available
to the system administrator.

vSphere Admin User


A vSphere (VC) admin user can deploy and configure all the sites.

A VC admin user performs the following activities:

n Configuring the sites.

n Adding the licenses.

n Adding the host.

Deployment Configurations
You can configure the global settings and appliance settings, and provide links to the images to deploy.

Configure Global Settings


You can configure networking parameters.

You can configure Service settings and Proxy Config settings on the Global Settings page.

Note You can override the values for each domain when configuring the domains.

Procedure

1 Click the Configuration tab under Infrastructure Automation.

2 Click Global Settings.

3 To modify the global parameters, click Edit.

4 Provide the required details for Service parameters.

Field Description

DNS Suffix Address of the DNS suffix for each appliance. For example:
telco.example.com

DNS Server The IP address of the DNS server. You can add multiple DNS server IP addresses, separated by commas.
Based on the network type selected during the TCA deployment, you can
enter one of the following:
n IPv4 network type: Enter IPv4 addresses or FQDNs
n IPv6 network type: Enter only FQDNs
n Dual Stack network type: Enter IPv4 addresses or FQDNs for IPv4
interfaces and FQDNs only for IPv6 interfaces

NTP Server Name of the NTP server. For example: time.vmware.com. You can add multiple NTP server addresses, separated by commas.
Based on the network type selected during the TCA deployment, you can
enter one of the following:
n IPv4 network type: Enter IPv4 addresses or FQDNs
n IPv6 network type: Enter only FQDNs
n Dual Stack network type: Enter IPv4 addresses or FQDNs for IPv4
interfaces and FQDNs only for IPv6 interfaces

5 To use the proxy server, enable the Proxy Config. Click the Enabled button.

6 Provide the required details for Proxy parameters.

Field Description

Protocol Proxy protocol. Select the value from the drop-down menu.

Proxy Server IP of the proxy server.

Proxy Port Port of the proxy server.

Proxy Username Optional. User name to access the proxy server.

Proxy Password Optional. Password corresponding to the user name to access the proxy
server.

Proxy Exclusion Optional. List of IP addresses and URLs to exclude from the proxy. You can use special characters to provide regular-expression URLs. For example, *.abx.xyz.com.

7 Provide the required details for CSI Tagging parameters.

Field Description

Enabled Whether the CSI tagging is enabled.

Region Tag Naming Scheme Tagging scheme for data center. Default value: region-{domainName}.

Cluster Zone Tag Naming Scheme Tagging scheme for compute cluster or hosts. Default value: zone-
{domainName}.

Host Zone Tag Naming Scheme The CSI tag for the hosts. Default value: zone-{hostname}.

Note Ensure that you use the following naming schemes:

n For Region tag, ensure that the naming scheme contains {domainName}. For example,
<text_identifier>-{domainName}.

n For Cluster Zone tag, ensure that the naming scheme contains the {domainName}. For
example, <text_identifier>-{domainName}.

n For Host Zone tag, ensure that the naming scheme contains the {hostname}. For
example, <text_identifier>-{hostname}.

8 Select the Activation Mode.

Standalone is the recommended activation mode, irrespective of whether the environment is air-gapped or non-air-gapped. The SaaS mode of activation is deprecated and will be removed in future releases.

9 Provide the address of the SaaS server. For example, connect.tec.vmware.com. It is used for both the activation and the software updates.

Note
n The option is available when you set the Activation Mode to SaaS.

n When using the air-gapped server, set the Activation Mode to Standalone.

n You can provide the air-gapped server details for VMware Telco Cloud Automation
through cloud_spec.json file.

n When you provide the air-gapped server details through cloud_spec.json, remove the
SaaS section. Set the activation mode to Standalone.

n When you provide the air-gapped server details through cloud_spec.json, add the
certificate details only if you have a self-signed CA certificate.

10 Provide the vSphere SSO Domain value.

11 Provide the TCA SSO Credentials value.

Note The TCA SSO credentials are used by Infrastructure Automation for communicating
with the TCA Manager.

12 Provide the Appliance Naming Scheme. Select the value from the drop-down menu. This
naming scheme is used for all the appliances added to VMware Telco Cloud Automation.

13 To deploy vRealize Log Insight in the management domain and share it with the workload domain, enable Share vRLI with management domain.
Configure Appliances
Configure the IP index and password of various appliances available under the Appliance
Configuration.

You can configure the IP index and password for all the appliances available in Infrastructure
Automation.

Note IP index is the index of the IP address in the subnet that is configured in Networks under the Domain section. The IP for each appliance is derived by adding the IP Index to the subnet address, so that the administrator does not need to provide an IP for each appliance in each domain. VMware Telco Cloud Automation recommends following a common IP addressing scheme for all the domains. However, if required, you can override the IP Index for each domain. Ensure that you provide the IP index based on the subnet value.

Note
n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all the
appliances.

n When creating the password for following appliances, ensure that you follow the password
guidelines

n For Cloudbuilder:

n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.

n Minimum password length for root password is 8 characters and must include at least
one uppercase, one lowercase, one digit, and one special character.

n vCenter

n The admin password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n NSXT password

n Minimum length for the root, admin, and audit passwords is 12 characters and they must contain at least one lowercase letter, one uppercase letter, one digit, and one special character. The password must contain at least 5 different characters. The password cannot contain three consecutive characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.

Field Description

Appliance Type The type of the appliance. This field is non-editable.

Appliance Name The name of the appliance.

IP Index The last octet of the IP address. The first three octets of the IP address are
computed from the IP address of the gateway IP.

Note The IP index depends on management subnet prefix length. Ensure that you
provide IP index values within the IP range dictated by that subnet prefix length.
For example, if you use subnet prefix length of 24, then the subnet has 254 IPs.
Hence, the IP index value cannot exceed 254. If you use prefix length of 27 or 28,
then the subnet has 30 or 14 IPs, respectively. The IP index values must then not
exceed 30 or 14, respectively. Ensure that you check the values before adding the
IP index.

Enabled Enable or disable the deployment of the appliance across all domains.

Root Password Password of the root user of the appliance.

Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.

Admin Password Password of the administrator of the appliance.

Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.

Audit Password Password of the audit user. Applicable only for NSX Manager, and NSX Edge
cluster.

Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.

Cluster Password Password for creating the cluster. Applicable only for VMware Telco Cloud
Automation management cluster and bootstrapper cluster.

Note Minimum length of the password is 13 characters and it must include a special
character, a capital letter, a lower-case letter, and a number.

NSX Manager Configuration Applicable only for NSX Manager.
n Name: Name of the NSX Manager node.
n IP: The fourth octet of the IP address applicable to the node.

NSX Edge Cluster Configuration Applicable only for NSX Edge Cluster.
n Name: Name of the NSX Edge cluster.
n IP: The fourth octet of the IP address applicable to the node.
n Size: Size of the NSX Edge cluster. Select the option from the drop-down menu.
n Tier0Mode: Whether to deploy the NSX Edge cluster in Active-Standby or
Active-Active. Select the option from the drop-down menu.

Node Count Number of vSAN NFS nodes. A minimum of three and a maximum of eight nodes are required. Applicable only for vSAN NFS.

IP Pool List of static IP indexes for vSAN NFS nodes. Each vSAN NFS node requires one IP.
Applicable only for vSAN NFS.

Shares Size of the NFS share. Applicable only for vSAN NFS.

Procedure

1 Click the Configuration tab under Infrastructure Automation.

2 Click Appliance Configuration.

3 To modify the parameters, click Edit.
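
For reference, each appliance configured on this page corresponds to an entry in the appliances array of the cloud_spec.json example shown earlier in this chapter. The following minimal sketch assumes a management network of 192.168.10.0/24 (a placeholder value) and omits the generated id fields: the vCenter entry with ipIndex 31 is assigned 192.168.10.31, the NSX Edge node inherits its cluster membership through the parent reference, and the vSAN NFS entry carries the Node Count, IP Pool, and Shares fields described above.

{
    "type": "VC",
    "name": "vcenter01",
    "ipIndex": 31,
    "size": "small",
    "enabled": true,
    "adminPassword": "Base64 encoded password",
    "rootPassword": "Base64 encoded password"
},
{
    "type": "NSX_EDGE_CLUSTER",
    "name": "edgecluster001",
    "size": "large",
    "tier0Mode": "ACTIVE_STANDBY",
    "enabled": true,
    "adminPassword": "Base64 encoded password",
    "rootPassword": "Base64 encoded password",
    "auditPassword": "Base64 encoded password"
},
{
    "type": "NSX_EDGE",
    "name": "nsx-edge001",
    "ipIndex": 38,
    "parent": "edgecluster001"
},
{
    "type": "VSAN_NFS",
    "name": "vsannfs",
    "ipIndexPool": [
        {
            "start": 47,
            "end": 49
        }
    ],
    "nodeCount": 3,
    "shares": [
        {
            "name": "default-share",
            "quotaInMb": 10240
        }
    ],
    "enabled": true,
    "rootPassword": "Base64 encoded password"
}

With the assumed 192.168.10.0/24 management subnet, the three vSAN NFS nodes receive 192.168.10.47 through 192.168.10.49, one address per node as required by the IP Pool field.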

Add Images or OVF


Add the URL of the appliance images.

Provide the location where Infrastructure Automation can locate the installation images for all appliances. The web server stores all the appliance images. Provide the complete link to each appliance image.

To configure the images, follow these steps:

Procedure

1 Click the Configuration tab.

2 Click Images.

3 Click Edit.

4 Provide complete URL of each appliance image.

Note
n You can add multiple images for VMware Tanzu Kubernetes Grid and VMware Tanzu
Kubernetes Grid - HA Proxy.

n For the air-gapped environment, vSAN NFS requires an OVF file.

n Manual installation of vSAN requires additional files. For details, see the vSAN manual approach and add the files required for the manual approach to the image server.

Add Certificate Authority


You can configure the certificate authority.

The certificate authority (CA) issues digital certificates. These certificates help create a secure connection between the various appliances of a domain.

To add the certificate authority, perform the following:

Procedure

1 Click the Configuration tab.

2 Click Security.

3 To add a new certificate signing authority, click Add Certificate Authority.

4 Enter the following details on the Add Certificate Authority page.

Field Value

Name The fully qualified domain name (FQDN) of the server.

Country The two-letter ISO code for the country where the organization is located.

Key Size Size of the key used in the certificate.

Valid for days The number of days for which the certificate is valid.

Locality The city where the organization is located.

Email Address An email address of the organization.

Organization The complete legal name of the organization. It can include suffixes such as Inc, Corp, or LLC. Do not use abbreviations.

Organization Unit The division of the organization handling the certificate.

State The state or region where the organization is located. Do not use abbreviations.

5 To confirm the details, click Add.

Add a Host Profile


You can add, modify, or delete the host profile.

You can create a host profile with a specific set of BIOS settings, firmware versions, PCI devices,
and PCI groups. When you create a host and select a specific host profile for a domain, VMware
Telco Cloud Automation applies the configurations of the specified host profile to all the hosts
within that domain.

Note
n To upgrade Supermicro firmware, see Supermicro Firmware Upgrade.

n To upgrade Dell firmware, see Dell Firmware Upgrade.

n To obtain the firmware details, see Obtain the Current Firmware Version.

n To add PTP details, see A1: PTP Overview.

Prerequisites

n Ensure that you have details of BIOS keys and values.

n Ensure that you have details of the firmware.

Procedure

1 Click the Configuration tab.

2 Click Host Profile.

3 To add a new host profile, click Add.

Note To create a new host profile using the configuration file of another host profile, click Load Configuration and select the required JSON file.

4 Add the profile name in the Profile Name field.

5 In the BIOS Settings field, to add values, click Add Attributes.

n Add the BIOS key in the Key field.

n Add the corresponding value of the BIOS key in the Value field.

6 In Firmwares, to add values, click Add Firmware.

n Add the firmware name in the Name field.

n Add the identity of the firmware, that the vendor provides, in the Software field.

n Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.

n Add the location of the firmware upgrade file in the Location field.

Note Ensure that you provide a valid URL. The URL must start with HTTP and end with
extensions .XML or .EXE.

n Add the value of checksum of the firmware upgrade file in the Checksum field.

7 In the PCI Device Settings, to add values click Add Device.

n To add a PCI device action, click Add Action.

n Select the value from the drop-down menu. You can select SR-IOV for SRIOV based
devices, PassThrough for PassThrough devices, or Custom for ACC100 devices.

n For the SRIOV device, configure the value of Number of Virtual Functions.

n For PassThrough device, configure the value of Enable Passthrough.

n For Custom (ACC100) devices, provide the configuration file required for ACC100 in
Configuration File field.

n To add a filter for PCI devices, click Add Filter. Provide the values of Key and Value field.

8 In the PCI Device Groups, to create a device group, click Add Group.

a Add the group name in the Device Group Name.

b To add a filter for the device group, click Add Filter and enter the key and value in the
Key and Value field. You can select the value from the drop-down list.

n NUMA ID

n Device ID

n Vendor ID

n Alias

n Index

9 In the ESXi Reservation, add the ESXi details:

n Reserved cores per NUMA node - Number of cores reserved for ESXi process. For ESXi
version 7.0U2 and above, the default value is 1. For other ESXi versions, the default value
is 2.

n Reserved Memory per NUMA node - Memory reserved for ESXi process. The default
value is 512 MB.

n (Optional) Min. cores for CPU reservation per NUMA node - Number of physical cores reserved for each NUMA node. If you do not configure this parameter, the value from reservedCoresPerNumaNode is applied. The default value is 3.
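
To give a sense of how these settings fit together, the following sketch shows a host profile with one BIOS attribute and one firmware entry in JSON form. The firmware fields mirror the Name, Software, Version, Location, and Checksum values described above and in the Dell example later in this chapter; the surrounding key names and the BIOS attribute are illustrative only and do not necessarily match the JSON produced by the Export function, so export an existing profile to see the exact schema for your version.

{
    "profileName": "ran-host-profile",
    "_comments": [
        "Illustrative sketch only; key names may differ from the exported host profile JSON."
    ],
    "biosSettings": {
        "WorkloadProfile": "Telco Optimized"
    },
    "firmwares": [
        {
            "name": "XXV710",
            "softwareId": "105834",
            "version": "20.0.17",
            "location": "http://images.example.com/firmware/Network_Firmware_DK4G2_WN64_20.0.17_A00.EXE",
            "checksum": "a294442e2268a6ea56c4f9d7eba4b5ec74fe639bf413bddb5003678faac5db57"
        }
    ]
}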

What to do next

Edit, Clone, and Export a Host Profile.

Edit, Clone, and Export a Host Profile


You can edit, clone, export, or delete an existing host profile.

You can modify a host profile, export the host profile details, or create a copy of the host profile
using clone function. You can also delete the host profile and refresh the host profile details.

Prerequisites

Ensure that a host profile exists.

Procedure

1 Click the Configuration tab.

2 Click the Host Profile tab.

3 Select the host profile on which you want to perform the operation.

4 To delete a host profile, click Delete.

5 To modify a host profile, click Edit.

6 To export the configurations of a host profile in a JSON file, click Export. You can use this
JSON file to create a new host profile.

7 To create a duplicate host profile, click Clone.

8 To refresh the details of all the host profiles on the Host Profile page, click Refresh.

Supermicro Firmware Upgrade


Manual steps to upgrade the Supermicro firmware.

Supermicro firmware upgrade involves manual steps. These steps include creating the upgrade package, modifying the script, and validating the integrity of the upgrade package.

Note Ensure that you upload the downloaded firmware upgrade package, the upgrade-script.sh, and the firmware-index.xml to the same absolute path.

Prerequisites

n Obtain the firmware upgrade file.

n Upload the firmware file to a web server.

n Ensure that Telco Cloud Automation has permission to access the web server location to
obtain the uploaded files and packages.

Procedure

1 Create the upgrade-script.sh. Use the below example to create the upgrade script.

Note
n The example uses the E810 card. To create the script for other cards, change E810 to the name of the card for which you need to create the script.

n Replace the nvmupdaten64e command with the required command based on the card type. You can get the commands from the readme.txt file in the upgrade package.

datastore=$(esxcli storage filesystem list| awk '{ print $1 }' | tail -n +3 | head -n 1)
echo $datastore

cd $datastore/; rm -rf E810; ls *.gz |xargs -n1 tar -xzf

cd $datastore/

check_version(){
#Dont forget the space added below
if echo "X" | ./nvmupdaten64e | grep "Update " ; then
echo "Inside check_version"
echo "./nvmupdaten64e"
return 1
else
echo "it's in else"
return 0
fi
}

e810=$(ls | grep -nr "E810")


if [ -z "$e810" ]
then
echo "Contains different card"
exit 1
else
cd E810

esx=$(ls | grep -nr "ESXi_x64")


if [ -n "$esx" ]
then
cd ESXi_x64
echo $PWD
nvm=$(ls | grep -nr "nvmupdaten64e")
echo "NVM is $nvm"
if [[ $1 == "--check_version" ]]
then
check_version
output=$?
echo "output is $output"
return $output
fi
if [[ -n "$nvm" ]]
then
echo "PWD is: $PWD"
./nvmupdaten64e -u -l -o update.xml -b -c nvmupdate.cfg
return 0
else
echo "Invalid package"
return 1
fi
fi
fi

2 Generate the checksum for the upgrade-script.sh.

3 Generate the checksum for the downloaded firmware file.

4 Create the firmware-index.xml. Use the below example to create the firmware index file.

<metaList>
<metadata>
<url>E810_NVMUpdatePackage_v2_32_ESX.tar.gz</url>
<checksum>fbbb201dfcc4c900e4fc5d3a6f4264110d4a32cdecec43c55d04164130b8d249</
checksum>
</metadata>
<metadata>
<url>upgrade-script.sh</url>
<checksum>0faa2fb41347377ad1435911abc4eb38246a7fcf5c3cdcea3e21e34778678cac</
checksum>
</metadata>
</metaList>

a url : In the first url tag, enter the name of the upgrade file.

b checksum : In the first checksum tag, enter the checksum generated for the upgrade
package file.

c url : In the second url tag, enter the name of the upgrade script file.

d checksum : In the second checksum tag, enter the checksum generated for the upgrade
script file.

5 Generate the checksum for the firmware-index.xml file.

6 Open Telco Cloud Automation.

7 Navigate to Infrastructure Automation.

8 Click Configuration and then click Host Profile.

9 On the Host Profile page, to add new host profile, click Add.

10 To add firmware details, click Add Firmware. Enter the following details:

n Add the firmware name in the Name field. For Supermicro, this is a user-defined field.

n Add the identity of the firmware that the vendor provides in the Software field. For Supermicro, this is a user-defined field.

n Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.

n Add the location of the firmware-index.xml file in the Location field.

Note Ensure that you use only an HTTP-based URL.

n Add the checksum generated for the firmware-index.xml file in the Checksum field.

Dell Firmware Upgrade


Upgrade procedure for Dell firmware.

This task provides details on how to upgrade the Dell firmware. You can add these details in the host profile. For details, see Add a Host Profile.

Prerequisites

n Ensure that you have obtained the details of the firmware. To obtain the current firmware
version, see Obtain the Current Firmware Version.

n Ensure that you have uploaded the firmware file to a web server.

n Ensure that the HostConfig operator can access the IPMI network.

Procedure

1 Add the firmware name in the Name field.

2 Add the identity of the firmware, that the vendor provides, in the Software field. SoftwareID
represents the firmware identity. You can obtain the softwareID from the componentID in the
package.xml file bundled within the firmware package that you downloaded. To obtain the
softwareID, use the following steps:

a Download the firmware upgrade file from the Dell website. The upgrade package is compressed in ZIP format and the upgrade file uses the .exe extension.

b Extract the upgrade zip package and obtain the package.xml file.

c Search for componentID in package.xml. Get the componentID that matches your device.
For example, for ethernet 25G 2P XXV710 adapter, the componentID is 105834.

<Device componentID="105834" embedded="1">


<PCIInfo deviceID="158B" subDeviceID="0009" subVendorID="8086" vendorID="8086"/>
<Display lang="en"><![CDATA[Intel(R) Ethernet 25G 2P XXV710 Adapter]]></Display>
<RollbackInformation alternateRollbackIdentifier="102300"
fmpWrapperIdentifier="46127B9A-4C44-47D6-848E-4C35AA15AA55"
impactsTPMmeasurements="true" rollbackIdentifier="02330797-557b-423b-8a7e-
cf2799ba4efc" rollbackTimeout="1500" rollbackVolume="MAS022"/>
<PayloadConfiguration>
<Image filename="FmpUpdateWrapper.efi" id="5ee4b1a2-8754-42c5-9f14-168af959001b"
skip="false" type="FRMW" version="20.0.17"/>
</PayloadConfiguration>
</Device>

3 Add the version of the firmware to which you want to upgrade the current firmware in the
Version field.

4 Add the location of the firmware upgrade file in the Location field. You can download the firmware upgrade package from the Dell website. Upload the firmware upgrade package to a web server.

Note Ensure that you use only an HTTP-based URL.

5 Add the checksum value of the firmware upgrade file in the Checksum field. You can obtain the checksum value from the Dell website from which you downloaded the firmware upgrade package. The following example shows all the details for a firmware upgrade.

softwareId: "105834" # the softwareId for XXV710


name: "XXV710"
version: "20.0.17"
location: "http://10.118.76.38/firmware-store/X710/
Network_Firmware_DK4G2_WN64_20.0.17_A00.EXE"
checksum: "a294442e2268a6ea56c4f9d7eba4b5ec74fe639bf413bddb5003678faac5db57"

Obtain the Current Firmware Version


Procedure to obtain the current firmware version

The process helps you to obtain the current firmware version of the device.

Procedure

1 To obtain the firmware version through iDRAC, follow the steps:

a Log in to iDRAC.

b Navigate to System.

c Click the Inventory tab.

2 To obtain the firmware version of network interface cards (NICs), follow the steps:

a Use the SSH to login to the VMware ESXi server.

b Execute the esxcli network nic list command to get a list of all NICs.

esxcli network nic list


Name    PCI Device    Driver   Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  -------  ------------  -----------  -----  ------  -----------------  ----  -----------
vmnic0  0000:1a:00.0  bnxtnet  Up            Up           25000  Full    bc:97:e1:d5:24:20  1500  Broadcom BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller
vmnic1  0000:1a:00.1  bnxtnet  Up            Up           25000  Full    bc:97:e1:d5:24:21  1500  Broadcom BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller
vmnic2  0000:5e:00.0  i40en    Up            Up           25000  Full    40:a6:b7:59:2c:90  1500  Intel(R) Ethernet Controller XXV710 for 25GbE SFP28
vmnic3  0000:5e:00.1  i40en    Up            Up           25000  Full    40:a6:b7:59:2c:91  1500  Intel(R) Ethernet Controller XXV710 for 25GbE SFP28

c Execute the esxcli network nic get -n vmnic2|grep Firmware command to obtain the
firmware value.

esxcli network nic get -n vmnic2|grep Firmware


Firmware Version: 7.10 0x800075e6 19.5.12

Managing Domains
You can add, delete, and configure various sites to create the infrastructure.

You can add a management domain or a workload domain for central or regional sites. You can add compute clusters or cell sites in Infrastructure Automation. You can also add a host for each site and perform security management for each appliance within domains.

You can modify the details of an already added site and view the appliances related to each site.
You can resynchronize the site details after modifying the configurations, to ensure that all the
configurations are working correctly.

Note
n Starting from Release 2.3, VMware Telco Cloud Automation terminates support for creating a central data center, regional data center, and compute cluster using Infrastructure Automation. The feature entered maintenance mode in releases 2.1 and 2.2. After termination, users have the option to add pre-deployed data centers through Infrastructure Automation in a VM-based deployment.

n If you require the Host Profile function for a pre-deployed domain:

n You need to deploy the Telco Cloud Automation Control Plane manually.

n When configuring the Telco Cloud Automation Control Plane through Telco Cloud
Automation Appliance Manager, you must use FQDN for the vCenter.

n You must register the Telco Cloud Automation Control Plane on the Virtual Infrastructure
page of the Telco Cloud Automation Manager.

Figure 9-1. Domains

[Figure: Example topology of a central data center and regional data centers with their management and workload domains, compute clusters, edge sites, and cell site groups. Each domain runs SDDC components such as vCenter, NSX, vSAN, vRealize Orchestrator, vRealize Log Insight, TCA, and TCA-CP on ESXi hosts, with Kubernetes management and workload clusters hosting core network functions (for example AMF, SMF, UPF, IMS) and RAN functions (CU, DU).]

Add Management Domain


You can add the management domain for a central or regional site.

Prerequisites

n Obtain the required licenses and network information required for configuration.

n Regenerate the Self-Signed Certificates on ESXi Hosts. For details, see ESXi Host Certificate.

n Ensure that you configure the vMotion and vSAN networks.

Procedure

1 Click Domains under Infrastructure Automation.

2 Select the site type.

n To add management domain for central site, click Central Site.

n To add management domain for regional site, click Regional Site.

3 Click the Add Management Domain icon.

The Add Management Domain page appears.

4 On the Add Management Domain page, provide the required information.

5 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot perform this operation on a disabled site.

6 To add an existing management domain, click the button corresponding to Pre-Deployed. When you enable Pre-Deployed, you must provide Default Resources.

Note
n For a pre-deployed domain, VMware Telco Cloud Automation shows only the required configurations. Some of these configurations may not appear for non-pre-deployed domains.

n VMware Telco Cloud Automation does not perform any operation on a pre-deployed
workload domain. However, you can add compute cluster and cell site group to the
domain.

n VMware Telco Cloud Automation can auto-detect the resources if only one resource per resource type is available in the vCenter. If multiple resources are available for a resource type, you must fill in the values.

n When you add a pre-deployed domain, always use Appliance Overrides to enter the
vCenter IP, FQDN, and password.

n For a pre-deployed domain, when adding the DVS name and management network in
Appliance Overrides, ensure that the names match the corresponding DVS name and
management network names in the vCenter.

a Datacenter - Enter the name of the data center.

b Cluster - Enter the name of the cluster.

c Datastore - Enter the name of the datastore.

7 Enter the details.

Field Description

Name The name of the site.

Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.

Select Host Profile Select the host profile from the drop-down list. The selected host profile gets associated with each host in the management domain.

Location The location of the site. Click the button corresponding to the location.

Search Enter the keyword to search a location.

Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.

Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.

Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.

Note The default value of vSphere SSO Domain is vsphere.local.

For a pre-deployed site, VMware Telco Cloud Automation shows vSphere SSO Username. Set the value of vSphere SSO Username to a user belonging to the administrator group in the underlying VMware vCenter Server. If you do not provide the value, the system takes administrator as the default value.

Licenses Licenses of various appliances applicable to the site. These appliances include:
n VMware vSphere (ESXi)
n VMware NSX-T Data Center
n VMware Telco Cloud Automation (available only for Central site)
n VMware Telco Cloud Automation Control Plane
n VMware vCenter Server
n VMware vRealize Log Insight
n VMware vSAN

Services You can enable the networking and storage operations for the specific site. You can also enable or disable the deduplication and compression of data through the vSAN Deduplication and Compression option.

Note The deduplication and compression work only on the all-flash disk group. When you enable the vSAN Deduplication and Compression option, you cannot create a hybrid storage group.

8 You can add new CSI categories or use the existing categories from the VMware vSphere server. You can also create tags corresponding to the CSI categories. To add the CSI Categories information, add the required information.

Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and Override Value.

n Once added, you cannot edit or remove the CSI configuration.

Field Description

Use Existing Whether to use the existing categories set in the underlying VMware vSphere server. Click the corresponding button to enable or disable the option.

Note When you use Use Existing, ensure that you provide values for both the region categories and the zone categories as set in the underlying VMware vSphere server.
n When creating Zone category in VMware VSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating Region category in VMware VSphere, choose Datacentre
under Associable Object Types.

Region The CSI category for the datacenter.

Zone The CSI category for the compute clusters or hosts.

CSI Region Tag The CSI tagging for the datacenter.

CSI Zone Tag The CSI tagging for compute clusters or hosts.

9 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.

Field Description

Switch Name of the switch.

Uplinks Select the network interface card (NIC) for the central site under Uplinks.

Note A central site requires a minimum of two NICs to communicate. NIC details must match the actual configuration across all ESXi servers.

10 Add the Networks information.

Note
n For vMotion and vSAN, the IP pool should equal the total number of ESXi hosts.

n You can click + sign under Networks to create additional VLAN or overlay network to
connect with additional applications.

n For Application network type, you can add DHCP IP Pool.

n Add the gateway and prefix length when creating the VLAN application network if you
enable the networking service and deploy the edge cluster in NDC, RDC, or Compute
Cluster.

n Add the gateway and prefix length when creating the overlay network.

n Ensure that you use the same switch for NSX overlay, host overlay, and uplinks for each domain.

Field Description

Name The name of the network.

Segment Type Segment type of the network. Select the value from the list.

Network Type The type of the network.

Switch The switch details which the sites use for network access.

VLAN The VLAN ID for the network.

MTU The MTU length (in bytes) for the network.

Prefix Length The prefix length for each packet for the network.

Note The Prefix length is applicable only for the IPv4 environment.

Gateway Address The gateway address for the network.

Note The gateway address is applicable only for the IPv4 environment.

11 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.

Note
n For NSX-Edge cluster configuration:

n To override the Edge form factor, select the Size from the drop-down menu.

n To override the HA, select the Tier0Mode from the drop-down menu.

n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all
the appliances.

n When overriding the password for the following appliances, ensure that you follow the password guidelines:

n For Cloudbuilder:

n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.

n Minimum password length for root password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.

n vCenter

n The admin password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n NSXT password

n Minimum length for the root, admin, and audit passwords is 12 characters and they must contain at least one lowercase letter, one uppercase letter, one digit, and one special character. The password must contain at least 5 different characters. The password cannot contain three consecutive characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.

Field Description

Root Password Password of the root user of the appliance.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Admin Password Password of the administrator of the appliance.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Audit Password Password of the audit user. Applicable only for NSX Manager and NSX Edge cluster.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Cluster Password Password for creating the cluster. Applicable only for the VMware Telco Cloud Automation management cluster and bootstrapper cluster.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Override Whether to override the current values.

Appliance Type The type of the appliance.

Name The name of the appliance.

Name Override The new name of the appliance to override the previous name of appliance.

IP Index The IP index of the appliance. The value is fourth octet of the IP address.
The initial three octets are populated from the network address provided in
domain.
VMware Telco Cloud Automation uses IP index to calculate the IP address of
the appliance. It adds the IP Index to the base address of the management
network to obtain the IP address of the appliance.

Note
n IP index is applicable only for the IPv4 environment.
n The IP index depends on management subnet prefix length. Ensure that
you provide IP index values within the IP range dictated by that subnet
prefix length. For example, if you use subnet prefix length of 24, then
the subnet has 254 IPs. Hence, the IP index value cannot exceed 254.
If you use prefix length of 27 or 28, then the subnet has 30 or 14
IPs, respectively. The IP index values must then not exceed 30 or 14,
respectively. Ensure that you check the values before adding the IP
index.

Enabled Whether the appliance is enabled and available for operations.

What to do next

n Add Workload Domain.

n Add Host to a Site.

n Certificate Management.

Edit a Management Domain


You can modify the management domain details, add a host, modify appliance details and
perform certificate management.

You can modify the configuration of a management domain, add a host, view the list of
appliances applicable to the management site, and perform certificate management operations
such as generate Certificate Signing Request (CSR), download CSR, and retry the download or
generate CSR operations.

Note
n You cannot modify the CSI tagging information.

n You can add the CSI tagging information only for a new domain.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Central Site or Regional Site icon.

3 Select the management site to edit.

4 Click Edit.

5 To modify the configurations, click the Configuration tab.

6 To add a new host, click the Host tab.

7 To view the list of available appliances, click the Appliance tab.

8 To perform certificate operations, click Certificate Management.

What to do next

n Add Host to a Site.

n Certificate Management.

Add Workload Domain


You can add the workload domain for a central or regional site.

To add a workload domain, follow the steps:

Prerequisites

n Obtain the licenses and network information required for configuration.

n Regenerate the Self-Signed Certificates on ESXi Hosts. For details, see ESXi Host Certificate.

n Ensure that you configure the gateway for vMotion and vSAN network.

Procedure

1 Click Domains under Infrastructure Automation.

2 Select the site type.

n To add workload domain for central site, click Central Site.

n To add workload domain for regional site, click Regional Site.

3 Click the Add Workload Domain icon.

The Add Workload Domain page appears.

4 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot
perform operations in a disabled site.

5 To add an existing workload domain, click the button corresponding to Pre-Deployed. When
you enable Pre-Deployed, you must provide Default Resources.

Note
n For a pre-deployed domain, VMware Telco Cloud Automation shows only the required configurations. Some of these configurations may not appear for non-pre-deployed domains.

n VMware Telco Cloud Automation does not perform any operation on a pre-deployed
workload domain. However, you can add compute cluster and cell site group to the
domain.

n VMware Telco Cloud Automation can auto-detect the resources if only one resource per resource type is available in the vCenter. If multiple resources are available for a resource type, you must fill in the values.

n When you add a pre-deployed domain, always use Appliance Overrides to enter the
vCenter IP, FQDN, and password.

n For a pre-deployed domain, when adding the DVS name and management network in
Appliance Overrides, ensure that the names match the corresponding DVS name and
management network names in the vCenter.

a Datacenter - Enter the name of the data center.

b Cluster - Enter the name of the cluster.

c Datastore - Enter the name of the datastore.

6 Enter the required details.

Field Description

Name The name of the site.

Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.

Select Host Profile Select the host profile from the drop-down list. The selected host profile gets associated with each host in the workload domain.

Parent site Select the parent site from the drop-down menu.

Location The location of the site. Click to add the location details.

Search Enter the keyword to search a location.

Address Enter the address of the location.

Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.

Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.

Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.
For a pre-deployed site, VMware Telco Cloud Automation shows vSphere SSO Username. Set the value of vSphere SSO Username to a user belonging to the administrator group in the underlying VMware vCenter Server. If you do not provide the value, the system takes administrator as the default value.

Licenses Licenses of various appliances applicable to the site. These appliances include:
n VMware vSphere (ESXi)
n VMware NSX-T Data Center
n VMware Telco Cloud Automation (available only for Central site)
n VMware Telco Cloud Automation Control Plane
n VMware vCenter Server
n VMware vRealize Log Insight
n VMware vSAN

Services You can enable the networking and storage operations for the specific site. You can also enable or disable the deduplication and compression of data through the vSAN Deduplication and Compression option.

Note The deduplication and compression work only on the all-flash disk group. When you enable the vSAN Deduplication and Compression option, you cannot create a hybrid storage group.

7 You can add new CSI categories or use the existing categories from the VMware vSphere server. You can also create tags corresponding to the CSI categories. To add the CSI Categories information, add the required information.

Note
n To configure the CSI Categories, enable the Override for the CSI Tagging under Settings,
and Override Value.

n Once added, you cannot edit or remove the CSI configuration.

Field Description

Use Existing Whether to use the existing categories set in the underlying VMware vSphere server. Click the corresponding button to enable or disable the option.

Note When you use Use Existing, ensure that you provide the values for both region categories and zone categories as set in the underlying VMware vSphere server.
n When creating Zone category in VMware VSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating Region category in VMware VSphere, choose Datacentre
under Associable Object Types.

Region The CSI category for the datacenter.

Zone The CSI category for the compute clusters or hosts.

CSI Region Tag The CSI tagging for the datacenter.

CSI Zone Tag The CSI tagging for the compute clusters or hosts.

8 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.

Field Description

Switch The name of the switch.

Uplinks Select the network interface card (NIC) for the regional site under Uplinks.

Note A regional site requires a minimum of two NICs to communicate. NIC details should match the actual configuration across all ESXi servers.

9 Add the Networks information.

Note
n For vMotion and vSAN, the IP pool should be equal to the total number of ESXi hosts.

n To create an additional VLAN or overlay network to connect with additional applications, click the + sign under Networks.

n For Application network type, you can add DHCP IP Pool.

n Add the gateway and prefix length when creating the VLAN application network if you
enable the networking service and deploy the edge cluster in NDC, RDC, or Compute
Cluster.

n Add the gateway and prefix length when creating the overlay network.

n Ensure that you use the same switch for NSX overlay, host overlay, and uplinks for each domain.

Field Description

Name The name of the network.

Segment Type Segment type of the network. Select the value from the list.

Network Type The type of the network.

Switch The switch details which the site uses to access network.

VLAN The VLAN ID for the network.

MTU The MTU length (in bytes) for the network.

Prefix Length The Prefix length for each packet for the network.

Note The prefix length is applicable only for the IPv4 environment.

Gateway Address The gateway address for the network.

Note The gateway address is applicable only for the IPv4 environment.

Network Address The network address for the network.

10 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.

Note
n For NSX-Edge cluster configuration:

n To override the Edge form factor, select the Size from the drop-down menu.

n To override the HA, select the Tier0Mode from the drop-down menu.

n You can configure the Root Password, Admin Password, and Audit Password, and select
the Use above credentials for all the password fields to use the same password for all
the appliances.

n When creating the password for the following appliances, ensure that you follow the password guidelines:

n For Cloudbuilder:

n Minimum password length for admin password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.

n Minimum password length for root password is 8 characters and must include at
least one uppercase, one lowercase, one digit, and one special character.

n vCenter

n The admin password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n The root password length is between 8 and 20 characters and must contain at least one uppercase, one lowercase, one digit, and one special character (@!#$%?^).

n NSXT password

n Minimum length for the root, admin, and audit passwords is 12 characters and they must contain at least one lowercase letter, one uppercase letter, one digit, and one special character. The password must contain at least 5 different characters. The password cannot contain three consecutive characters. Dictionary words are not allowed. The password must not contain a monotonic character sequence of more than four characters.

Field Description

Root Password Password of the root user of the appliance.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Admin Password Password of the administrator of the appliance.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Audit Password Password of the audit user. Applicable only for NSX Manager and NSX Edge cluster.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Cluster Password Password for creating the cluster. Applicable only for the VMware Telco Cloud Automation management cluster and bootstrapper cluster.

Note Minimum length of the password is 13 characters and it must include a special character, a capital letter, a lower-case letter, and a number.

Override Whether to override the current values.

Appliance Type The type of the appliance.

Name The name of the appliance.

Name Override The new name of the appliance to override the previous name of appliance.

IP Index IP index of the appliance. The value is the fourth octet of the IP address. The
initial three octets are populated from the network address provided in the
domain.
VMware Telco Cloud Automation uses the IP index to calculate the IP address of
the appliance. It adds the IP index to the base address of the management
network to obtain the IP address of the appliance. For example, if the
management network is 10.20.30.0/24 and the IP index is 21, the appliance IP
address is 10.20.30.21.

Note
n IP index is applicable only for the IPv4 environment.
n The IP index depends on the management subnet prefix length. Ensure that
you provide IP index values within the IP range dictated by that subnet
prefix length. For example, if you use a subnet prefix length of 24, then
the subnet has 254 IPs. Hence, the IP index value cannot exceed 254.
If you use a prefix length of 27 or 28, then the subnet has 30 or 14
IPs, respectively. The IP index values must then not exceed 30 or 14,
respectively. Ensure that you check the values before adding the IP
index.

Enabled Whether the appliance is enabled and available for operations.

What to do next

n Add Host to a Site.

n Certificate Management.

Edit a Workload Domain


You can modify the workload domain details, add a host, modify appliance details and perform
certificate management.


You can modify the configuration of a workload domain, add a host, view the list of appliances
applicable to the management site, and perform certificate management operations such as
generate Certificate Signing Request (CSR), download CSR, and retry the download or generate
CSR operations.

Note
n You cannot modify the CSI tagging information.

n You can add the CSI tagging information only for a new domain.

To modify a workload domain, follow these steps:

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Central Site or Regional Site icon.

3 Select the workload management site to edit.

4 Click Edit.

5 To modify the configurations, click the Configuration tab.

6 To add a new host, click the Host tab.

7 To view the list of applicable appliances, click the Appliance tab.

8 To perform certificate operations, click Certificate Management.

What to do next

n Add Host to a Site.

n Certificate Management.

Add Compute Cluster


A compute cluster is a combination of sites managed by a regional or central site.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Compute Cluster icon.

3 Click Add.

The Add Domain page appears.

4 On the Add Domain page, provide the required information.


5 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot
perform any operation on a disabled site.

Field Description

Name The name of the site.

Minimum number of hosts Minimum number of hosts required for the site. The number of hosts cannot
be less than 4 or more than 64.

Select Host Profile Select the host profile from the drop-down list. The selected Host profile
gets associated with each host in the compute cluster domain.

Parent Site The management or workload domain that manages the cluster. Select from
the drop-down menu.

Location The location of the compute cluster.

Search Enter the keyword to search a location.

Latitude Latitude of the compute cluster location. The details are automatically added
when you select the location. You can also modify the latitude manually.

Longitude Longitude of the compute cluster location. The details are automatically
added when you select the location. You can also modify the longitude
manually.

Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.

Note The default value of vSphere SSO Domain is vsphere.local.

Licenses Not applicable. The compute cluster uses the licenses of the parent site.

Services n For a compute cluster, you can activate the NSX services. For certain
workloads, if you do not require these services, you can deactivate
them.
n To use the network services of the parent site, click the Share Transport
Zones With Parent button.
n You can use vSAN or localstore. Select the value from the drop-down
menu.
Click the Enabled button to activate or deactivate the Networking or Storage
services.


6 You can add new CSI categories or use the existing categories from the VMware vSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information under Settings.

Note
n To configure the CSI Categories, enable the Override for CSI Tagging under Settings
and enter the required information in the corresponding Override Value.

n Once added, you cannot edit or remove the CSI configuration.

n For a vSAN-disabled compute cluster, ensure that the CSI Zone tag name contains
{hostname}. For example, <text_identifier>-{hostname}.

Field Description

Use Existing Whether to use the existing categories set in the underlying VMware
vSphere server. Click the corresponding button to activate or deactivate the
option.

Note When using Use Existing, ensure that you provide the values for
both the region and the zone categories as set in the underlying VMware
vSphere server.
n When creating a Zone category in VMware vSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating a Region category in VMware vSphere, choose
Datacenter under Associable Object Types.

Region The CSI category for the data center.

Zone The CSI category for the compute clusters or hosts.

CSI Region Tag The CSI tagging for the data center.

CSI Zone Tag The CSI tagging for the compute clusters or hosts.

7 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.

Field Description

Switch The name of the switch.

Uplinks Select the network interface card (NIC) for the compute cluster under
Uplinks.

Note A cell site requires a minimum of two NICs to communicate. The
uplinks must match the actual configuration across all ESXi servers.


8 Add the Networks information.

Note
n For vMotion and vSAN, the IP pool should contain as many addresses as the total number
of ESXi hosts. If you do not provision the appliances, the vSAN, nsxHostOverlay,
nsxEdgeOverlay, and uplink networks are optional.

n You can click the + sign under Networks to create an additional VLAN or overlay network to
connect with additional applications.

Field Description

Name The name of the network.

Segment Type Segment type of the network. Select the value from the list.

Network Type The type of the network.

Switch The switch details that the sites use for network access.

VLAN VLAN ID for the network.

MTU MTU length (in bytes) for the network.

Prefix Length The subnet prefix length for the network.

Gateway Address The gateway address for the network.
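
For reference, the snippet below is a minimal sketch of how a network entry for a compute cluster
domain might look in the domain specification. The type, name, segmentType, switch, vlan, and mtu
keys follow the cell site group example shown later in this chapter; the prefixLength and gateway
keys are illustrative assumptions for the prefix length and gateway address fields described in the
table above, not an authoritative schema.

{
  "type": "vMotion",
  "name": "cc01-vmotion",
  "segmentType": "vlan",
  "switch": "cc01-dvs001",
  "vlan": 120,
  "mtu": 9000,
  "prefixLength": 24,
  "gateway": "10.20.40.1"
}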

9 (Optional) Add the Appliance Overrides information. Ensure that the appliance names match
the actual names entered in DNS. If they do not match, you can change the name.

Note For NSX-Edge cluster configuration:


n To override the Edge form factor, select the Size from the drop-down menu.

n To override the HA, select Tier0Mode from the drop-down menu.

n You can override the values of vSAN NFS and NSX Edge Cluster for the compute cluster,
and you can also deactivate the deployment of vSAN NFS and NSX Edge Cluster for the
compute cluster.

Field Description

Override Whether to override the current values.

Appliance Type The type of the appliance.

Name The name of the appliance.

Name Override The new name of the appliance to override the previous name of appliance.


IP Index IP index of the appliance. The value is the fourth octet of the IP address. The
initial three octets are populated from the network address provided in the
domain.
VMware Telco Cloud Automation uses IP index to calculate the IP address of
the appliance. It adds the IP Index to the base address of the management
network to obtain the IP address of the appliance.

Note The IP index depends on the management subnet prefix length. Ensure
that you provide IP index values within the IP range dictated by that
subnet prefix length. For example, if you use the subnet prefix length of
24, then the subnet has 254 IPs. Hence, the IP index value cannot exceed
254. If you use the prefix length of 27 or 28, then the subnet has 30 or
14 IPs, respectively. The IP index values must then not exceed 30 or 14,
respectively. Ensure that you check the values before adding the IP index.

Enabled Whether the appliance is enabled and available for operations.

Edit a Compute Cluster


Modify the compute cluster details.

You can modify the configuration of a compute cluster and add a host.

Note
n You cannot modify the CSI tagging information.

n You can add the CSI tagging information only for a new domain.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Compute Cluster icon.

3 Select the compute cluster to edit.

4 Click Edit.

5 To modify the configurations, click the Configuration tab.

6 To add a new host, click the Host tab.

Add a Cell Site Group


You can add, manage, or delete a cell site group.

To add a cell site group, follow the steps:

Prerequisites

Obtain the network information required for configuration.


Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Cell Site Group icon.

3 Click Add.

The Add Domain page appears.

4 On the Add Domain page, provide the required information.

5 To enable the provisioning of the site, click the button corresponding to Enabled. You cannot
perform any operation on a disabled site.

6 To add an existing cell site group, click the button corresponding to Pre-Deployed. When
you add a Pre-Deployed cell site group, you can override the following values:

n DNS Suffix - The DNS suffix for the cell site group.

n DNS Server - The IP address of the DNS server.

To configure the values, enable the Override and enter the required information in the
corresponding Override Value.

Note
n VMware Telco Cloud Automation does not perform any operation on a pre-deployed
domain.
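
In the domain specification, the Pre-Deployed option maps to the preDeployed block that appears
in the cell site group example later in this section. The following excerpt is a hedged sketch of a
pre-deployed cell site group with DNS overrides; the dnsSuffix and dnsServer key names are
hypothetical placeholders for the override values described above and are not confirmed schema.

"preDeployed": {
  "preDeployed": true,
  "dnsSuffix": "cellsites.example.com",
  "dnsServer": "10.10.0.2"
}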

7 Enter the required details.

Field Description

Name The name of the site.

Select Host Profile Select the host profile from the drop-down list. The selected Host profile
gets associated with each host in the cell site group.

Parent Domain Select the parent domain from the list. The parent site manages all the sites
within the cell site group.

Settings You can modify the service settings and the proxy settings for each site.
These configurations override the global configuration available in Global
Configuration tab on Configuration page. For more details on service and
proxy parameters, see Configure Global Settings.
vSphere SSO Domain is available for local settings and not for global
settings. To configure the vSphere SSO Domain for a domain, enable the
Override and enter the required information in the corresponding Override
Value.

Note The default value of vSphere SSO Domain is vsphere.local.


8 You can add new CSI categories or use the existing categories from the VMware vSphere
server. You can also create tags corresponding to the CSI categories. To add the CSI
Categories information, add the required information under Settings.

Note
n To configure the CSI Categories, enable the Override for CSI Tagging under Settings
and enter the required information in the corresponding Override Value.

n Once added, you cannot edit or remove the CSI configuration.

n For the CSI zone tag, ensure that the name contains {hostname}. For example,
<text_identifier>-{hostname}.

Field Description

Use Existing Whether to use the existing categories set in the underlying VMware
vSphere server. Click the corresponding button to activate or deactivate the
option.

Note When using Use Existing, ensure that you provide the values for
both the region categories and the zone categories as set in the underlying
VMware vSphere server.
n When creating a Zone category in VMware vSphere, choose Hosts and
Clusters under Associable Object Types.
n When creating a Region category in VMware vSphere, choose Datacenter
under Associable Object Types.

Region The CSI category for the data center.

Zone The CSI category for the compute clusters or hosts.

CSI Region Tag The CSI tagging for the data center.

CSI Zone Tag The CSI tagging for the compute clusters or hosts.
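
In the cell site group specification, these settings correspond to the csiCategories and csiTags
keys, both of which appear (empty) in the example later in this section. The snippet below is a
hedged sketch of what populated values might look like; the inner region and zone key names are
assumptions made for illustration only. Note the {hostname} token in the zone tag, as required
above.

"csiCategories": {
  "useExisting": false,
  "region": "tca-region",
  "zone": "tca-zone"
},
"csiTags": {
  "region": "region-rdc1",
  "zone": "zone-{hostname}"
}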

9 Activate or deactivate the Enable datastore customizations option. By default, this feature is
activated. However, you can deactivate it by clicking the toggle button.

Note The Enable datastore customizations field is available only for non-pre-deployed cell
site groups.

If you activate the datastore customization:

n The datastores for all the hosts associated with this domain are named based on the disk
capacity and free space. For example, if the hostname is host201-telco.example.com, the
datastore with the highest capacity is named host201_localDS-0 where host201 is the
prefix, which is the substring preceding the first hyphen (-) and 0 is the index representing
the datastore with the highest capacity. The remaining datastores are named as host201-
DO-NOT-USE-0, host201-DO-NOT-USE-1, and so on, where the indexes 0 and 1 represent the
decreasing order of the free space. 0 represents the highest possible free space.
n The customization is applicable to all the hosts in the domain.


To change the delimiter for extracting prefixes from the host FQDNs, do the following:
a SSH to the TCA VM as admin.

b Use the su command and switch to the root user.

c Use the following docker command to enter the tcf-manager container:

docker exec -it tcf-manager bash

d Edit the DATASTORE_DELIMITER parameter in /opt/vmware/tcf/config/
application_properties.ini.

Note DATASTORE_DELIMITER is applicable only when the datastore customizations are
enabled.

If you deactivate the datastore customization:

n The datastores for all the hosts associated with this domain are named in the order in
which the datastores are fetched from vCenter. For example, if the host name is host201-
telco.example.com, the datastores are named host201-telco-localDS-0, host201-telco-
localDS-1, and so on, where host201-telco is the prefix, which is the first substring
before the dot (.) in the hostname and the indexes 0 and 1 represent the order in which
the datastores are fetched from vCenter.

10 Add the Switch Configuration information. Click the plus icon to add more switches and uplinks.

Field Description

Switch The name of the switch.

Uplinks Select the network interface card (NIC) for the site under Uplinks.

Note A site requires a minimum of two NICs to communicate. NIC details
should match the actual configuration across all ESXi servers.

11 Add the Networks information.

Note
n The system defines the Management network for a cell site group. You can create custom
VLAN-based application networks. All cell sites in a cell site group connect to the same
management network.

n For the application network type, you can enable MAC address learning for the port
groups. To do so, enable the Mac Learning option available under Networks.

Field Description

Name The name of the network.

Segment Type Segment type of the network. Select the value from the list.


Network Type The type of the network.

Switch The switch details that the sites use for network access.

VLAN The VLAN ID for the network.

MTU The MTU length (in bytes) for the network.

Prefix Length The subnet prefix length for the network.

Note The prefix length is applicable only for the IPv4 environment.

Gateway Address The gateway address for the network.

Note The gateway address is applicable only for the IPv4 environment.

What to do next

Add Host to a Site.

Custom Uplink Mapping and Teaming Policy


The switch and network specifications for a cell site group are extended to include custom uplink-pnic
mapping and teaming policy, respectively.

Custom mapping is an optional parameter in the domain specification. If you do not provide an
input, the default mapping and policy are created. You can configure the custom uplink-pnic
mapping and teaming policy using the VMware Telco Cloud Automation web interface or APIs.

Note
n This feature is only applicable to a cell site group.

n You can specify the teaming policy and uplink category mapping only when you are creating
a new cell site group domain. Do not override the settings for the cell site group domains for
which the distributed virtual switches are already created.

The following example is a snippet of the CSG specification file:

{
"name": "test-csg-6",
"type": "CELL_SITE_GROUP",
"enabled": true,
"preDeployed": {
"preDeployed": false
},
"parent": "rdc1",
"switches": [
{
"name": "test-csg-6-dvs001",
"uplinks": [
{
"pnic": "vmnic0",
"name": "PortA1"
},


{
"pnic": "vmnic1",
"name": "PortA2"
}
]
},
{
"name": "test-csg-6-dvs002",
"uplinks": [
{
"pnic": "vmnic2",
"name": "PortB1"
},
{
"pnic": "vmnic3",
"name": "PortB2"
}
]
}
],
"networks": [
{
"type": "application",
"name": "dvs1-app-network-1",
"segmentType": "vlan",
"switch": "test-csg-6-dvs001",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortA1"
],
"standby": [
"PortA2"
],
"unused": []
}
}
},
{
"type": "application",
"name": "dvs2-app-network-1",
"segmentType": "vlan",
"switch": "test-csg-6-dvs002",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortB1"
],
"standby": [


"PortB2"
],
"unused": []
}
}
},
{
"type": "management",
"name": "management",
"segmentType": "vlan",
"switch": "test-csg-6-dvs001",
"vlan": 0,
"mtu": 1500,
"mac_learning_enabled": false,
"uplinkTeamingPolicy": {
"uplinkPortOrder": {
"active": [
"PortA1"
],
"standby": [
"PortA2"
],
"unused": []
}
}
}
],
"csiTags": {},
"csiCategories": {
"useExisting": false
}
}

To configure the custom uplink-pnic mapping and teaming policy from the web interface,
perform the following:

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure Automation > Domains.

3 Click Cell Site Group.

4 Select the radio button corresponding to the Cell Site Group for which you want to configure
the uplinks.

5 In the Configurations tab, expand Switch Configuration.

6 Click the toggle button to configure uplinks and provide a unique name for each of the
uplinks.

7 Expand Networks and click the toggle button to specify a mapping for the uplink
categorization.

8 Click Save.


Edit a Cell Site Group


You can modify the cell site group details.

You can modify the configuration of cell site group, add a host, and modify the network
configurations related to cell site group.

Note
n You cannot modify the CSI tagging information.

n You can add the CSI tagging information only for hosts that are newly added to a cell site
group after the resync.

n Once a Cell Site Group domain has provisioned or failed hosts, changing the parent of this
Cell Site Group domain and then resyncing it does not migrate the hosts to the vCenter of the
newly selected parent.

n You can specify the parent of a Cell Site Group only when adding or creating the Cell Site
Group domain. You cannot change the parent of a Cell Site Group once hosts are added to it.

To modify a cell site group, follow the steps:

Prerequisites

A cell site group is configured.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Cell Site Group icon.

3 To edit a cell site group:

a Select the cell site group to edit.

b Click Edit.

4 Edit the details, as required, and click Save.

Synchronize Cell Site Domain Data


For a cell site group, you can synchronize data at the domain level.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the site type under which you want to synchronize the domain data.

3 Select the required domain.


4 Click Resync.

The Confirm Resync dialog box appears with the Partial Resync check box selected by
default. The Partial Resync option synchronizes the data of the unprovisioned cell site group
and retries for the failed host under the unprovisioned cell site group.

5 In the Confirm Resync dialog box, click Resync.

Add Host to a Site


A minimum number of hosts is required for each site to start the automated deployment.

You can add a host to any site or site cluster. A minimum number of hosts is required for each
site type to function. You can define the minimum number of hosts for each site when adding the
site.

Note If a cell site domain has multiple Distributed Virtual Switches in it, then the switch to which
the management network is associated should also be mapped to use the vmnic that has the
vmk0 VMKernel network interface attached.

Prerequisites

n A site type for which you want to add a host is already added in Domains.

n When adding a host to the cell site group, ensure that you have at least either the parent
site or the cell site group provisioned. You cannot add a host to a cell site group that has an
unprovisioned parent site.

Parent Site Status Cell Site Group Status Host Addition

Provisioned Provisioned Allowed

Not Provisioned Provisioned Allowed

Not Provisioned Not Provisioned Not Allowed

n Ensure that the certificate is generated with the server hostname as SAN by performing the
following:

a Log in to the ESXi host using an SSH client, Putty, or any other SSH client.

b Regenerate the self-signed certificate by running the following command:

/sbin/generate-certificates

c Restart the hostd and vpxa services by running the following command:

/etc/init.d/hostd restart && /etc/init.d/vpxa restart

Procedure

1 Click the Domains tab under Infrastructure Automation.


2 Select the data center for which you want to add a host.

3 Select the site for which you want to add a host.

4 Click Edit to modify the site details.

5 On the Host tab, click Add Host.

6 Configure the network for the host.

Fields Description

Host Address (FQDN) The associated FQDN of the host.

User Name User name to access the host.

Password Password corresponding to the user name to access the host.

vSAN Cache Device Name of the vSAN device used as cache.

You can add the IPMI information for the sites that have host profiles configured with BIOS
and firmware details.

n IPMI Username - User name to access the intelligent platform management interface
(IPMI).

n IPMI Password - Password to access the intelligent platform management interface
(IPMI).

n IPMI Address(FQDN) - Address of the IPMI interface. You must provide the fully qualified
domain name.

n Override datastore customization - Click the toggle button to override the datastore
customization configured at the domain level.

If you activate this option, the Enable datastore customizations field is made available.

n Enable datastore customizations - Click the toggle button to activate the datastore
customization on this host.

If you activate the datastore customization, the datastores on this host are named
based on the disk capacity and free space.

If you deactivate the datastore customization, the datastores on this host are named
in the order in which the datastores are fetched from vCenter.

n Pre-Deployed - Whether the host is pre-deployed.

Note
n When adding a host to a pre-deployed cell site group, you must add only pre-
deployed hosts.

n A pre-deployed host is a host that is already added to the vCenter and configured as
required.


n Use above credentials for all hosts - To use the same user name and password
for each host, select the check box.

n Use above IPMI credentials for all hosts - To use the same user name and
password to access IPMI for each host, select the check box.

7 Click Save.
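
If you prepare hosts through a specification file or the Infrastructure Automation API rather than
the form above, a host entry carries the same information. The following is a minimal sketch in
which every key name is a hypothetical placeholder; only the kinds of values mirror the fields
described in this procedure, and the sketch is not an authoritative schema.

{
  "hostAddress": "host201-telco.example.com",
  "username": "root",
  "password": "********",
  "vsanCacheDevice": "naa.5000000000000001",
  "ipmi": {
    "username": "ipmi-admin",
    "password": "********",
    "address": "host201-ipmi.example.com"
  },
  "preDeployed": false
}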

Edit a Host
Modify an already created host.

To modify the configurations of an already created host or delete an unprovisioned host,
perform the following:

Prerequisites

A host is already added to a site.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the site type under which you want to modify the host.

3 Select the site to edit.

4 Click Edit.

5 To modify a host, click the Host tab.

6 Select the host to edit. You can perform the following operations on the selected host:

n To delete a host with errors, click Delete and select Force Delete from the Delete page.

n To edit the host configuration, click Edit Host .

n To delete an unprovisioned host, click Delete Host.

n To refresh the host details, click Refresh.

Synchronize Cell Site Host Data


For a cell site group, you can synchronize data at the host level.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click Cell Site Group.

3 Select the cell site group under which you want to synchronize the host data.

4 To synchronize host data, click the Host tab.

5 Select one or more hosts.


6 Click Resync.

The Confirm Resync dialog box appears with the Partial Resync check box selected by
default.

7 Determine whether you want to choose the partial Resync option or not based on the
following:

n If you choose the Partial Resync option, hosts are processed based on any of the
following conditions:

n Status of the cell site host is FAILED, and the status of the host setting is NOT
CONFIGURED

n Status of the cell site host is PROVISIONED, and the status of the host setting is
FAILED

n If you don't choose the Partial Resync option, hosts with any status except for IN
PROGRESS and DELETING are processed.

8 In the Confirm Resync dialog box, click Resync.

Delete a Domain
Starting from VMware Telco Cloud Automation version 2.1, the process for deleting domains has
changed.

Previous Behavior
Previously, to delete a domain, you removed the domain definition from the domains list, and
VMware Telco Cloud Automation deleted the domain at the back end.

For example, the following code snippet is a sample cloud spec file with two domains - test1 and
test2.

{
"domains": [
{
"name": "test1",
...
},
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...


}
}

To delete test1, you had to modify this cloud spec file by removing test1 from it.

{
"domains": [
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...
}
}

Current Behavior
Now, to delete a domain, you add a list of strings (the names of the domains that you want to
delete) to the deleteDomains field in the cloud spec file. For example, "deleteDomains": ["cdc1",
"rdc1"].

You can either keep or remove the domain to be deleted from the domains list. The following
code snippet is an example of the new behavior, where test1 is provided in the deleteDomains
list to delete that domain.

{
"domains": [
{
"name": "test1",
...
},
{
"name": "test2",
...
}
],
"settings": {
...
},
"appliances": [
...
],
"images": {
...
},


"deleteDomains": ["test1"]
}

Manually Removing Domain Information


Delete the information of damaged or faulted domains.

You can delete an enabled domain which is in a provisioned state, and has one or more hosts in
DELETE_FAILED state.

Prerequisites

n Remove the infrastructure associated with the domain. For example, management appliances
like vCenter, NSX manager, vRLI, vRO, TCA-CP, DVS, Portgroups, Host Folders, Network
Folders, DataCenters, Clusters, and ESXi hosts.

n Ensure that there is no active task running in Infrastructure Automation.

Procedure

1 Stop the tcf-manager docker container with the command docker stop tcf-manager .

2 Navigate to /common/lib/docker/volumes/tcf-manager-config/_data/ .

a Open the cloud_spec.json file, remove the entries of the domain as required.

b Open the cloud_config.json file, remove the entries of the domain as required.

c Open the ip_usage.json file, remove the entries of the domain as required.

3 Navigate to /common/lib/docker/volumes/tcf-manager-specs/_data/ .

a Open the certificates folder, remove the certificates of the domain as required.

b Open the csrs folder, remove the csr entries of the domain as required.

c Open the private folder, remove the entries of the domain as required.

d Remove the bringup json file of the required domain.

e Remove the appliance properties files of the required domain if available.

4 Start the tcf-manager docker container with the command docker start tcf-manager.

Certificate Management
You can perform Certificate Signing Request (CSR) operations for a domain.


You can generate the CSR, upload SSL server certificate, and retry to generate the CSR.

Note
n Telco Cloud Automation supports only self-signed certificates.

n Do not enclose the certificate or the key in single or double quotes.

n In the certificate, add a new line after -----BEGIN CERTIFICATE----- and before -----END
CERTIFICATE-----.

n In the private key, add a new line after -----BEGIN PRIVATE KEY----- and before -----END
PRIVATE KEY-----.

Prerequisites

Certificate Authority (CA) is added. For details on adding CA, see Add Certificate Authority.

Procedure

1 Click Domains under Infrastructure Automation.

2 Click the Central Site or Regional Site icon.

3 Select the management site to edit.

4 Click Edit.

5 To perform certificate operations, click Certificate Management.

6 Select the appliances to perform the operations.

n To generate the CSR, click Generate CSR. It generates the CSR, signs the CSR and applies
the certificate on the selected appliances.

n To upload an SSL server certificate, click Upload SSL Server Certificate.

n In Server Certificate, add the server certificate details.

n In Private Key, add the private key details.

n To finish SSL server certificate upload, click Upload.

n To retry the failed operation, click Retry.

n To refresh the certificate data for appliances, click Refresh.

7 To generate the CSR, click Generate CSR.

Viewing Tasks
You can view the status of the current and the past tasks executed in Infrastructure Automation.

You can view the status of all the tasks. This includes the current task and the older tasks. You
can view the progress, status, and the start and end time of the task.


Procedure

1 Click Tasks tab under Infrastructure Automation.

A list of tasks appears.

2 Click the task for which you want to view details.



Working with Kubernetes Clusters
10
A Kubernetes cluster is a set of nodes that run containerized applications.

Containerized applications are more lightweight and flexible than virtual machines, and they
share the operating system. In this way, Kubernetes clusters allow for applications to be more
easily developed, moved, and managed. Kubernetes clusters allow containers to run across
multiple machines and environments: Virtual, physical, cloud-based, and on-premises. For more
information about Kubernetes clusters and its components, see the Kubernetes documentation at
https://kubernetes.io/docs/concepts/overview/.

There must be a minimum of one controller node and one worker node for a Kubernetes cluster
to be operational. For production and staging, the cluster is distributed across multiple worker
nodes. For testing, the components can all run on the same physical or virtual node.

Network functions require special customizations such as Real-Time Kernel and HugePages on
Kubernetes Worker nodes. The advantage of deploying Kubernetes clusters through VMware
Telco Cloud Automation is that it customizes the Kubernetes clusters according to their network
function requirements before deploying the CNFs.

Note The Node Customization feature is applicable only when you deploy Kubernetes clusters
through VMware Telco Cloud Automation.


Kubernetes Cluster Deployment Process

The deployment process flows as follows: onboard a vSphere VIM, design a cluster template for
the Management cluster, and deploy the Kubernetes Management cluster. Then design a cluster
template for the Workload cluster and deploy the Kubernetes Workload cluster, which is
auto-onboarded as a VIM to VMware Telco Cloud Automation. Finally, design or upload the CNF
CSAR, customize the Kubernetes Workload cluster based on the CNF requirements, and
instantiate the CNF on the Kubernetes Workload cluster VIM.

Late Binding and CaaS Automation Workflow

The late binding and CaaS automation workflow for CNF LCM involves VMware Telco Cloud
Automation - Manager and VMware Telco Cloud Automation - Control Plane (with the
Bootstrapper, TKG CLI, and CCLI), which interact with the TKG Management cluster components
(Helm, the Kubernetes Control Plane, the NodeConfig Operator, the VMConfig Operator, and
Cluster API) and with the TKG Workload cluster, whose worker nodes run the NodeConfig
Daemon on vSphere VMs. The numbered components are described in the legend below.

Legend

1. Helm - CNF Lifecycle Management (CNF LCM)


2. Kubernetes Control Plane - CNF inventory and configuration
3. NodeConfig Operator
• Kubernetes node customization
4. VMConfig Operator
• VM reconfigure
• Power on/off
5. VMware Tanzu Kubernetes Grid (TKG) CLI
• Creating TKG cluster plan files
• Talking to the management cluster for create, delete and upgrade operations through cluster APIs
• Adding nodes
6. Cluster API
• Inventory reading
• Resource management
• Event collection
7. CCLI
• List cluster details
• Run kubectl commands
• SSH to any node of the clusters


This chapter includes the following topics:

n Working with Management Clusters

n Working with V1 Workload Clusters

n Working with v2 Workload Clusters

n Backing Up and Restoring Kubernetes Clusters

n Remotely Accessing Clusters From VMware Telco Cloud Automation

Working with Management Clusters


Update the Kubernetes version of a Management cluster, create and edit a Management cluster
template, and deploy a Management cluster.

Upgrade Management Kubernetes Cluster Version


You can upgrade the existing Management Kubernetes Cluster version to the latest versions of
Kubernetes supported in the current version of the VMware Telco Cloud Automation.

You can upgrade the Kubernetes cluster through VMware Telco Cloud Automation.

Note When you upgrade a management cluster to the latest version, the certificate renewal of
the cluster is automatically enabled and the number of days defaults to 90.

The following table lists the Kubernetes upgrade compatibility for the Management cluster when
upgrading from VMware Telco Cloud Automation.

VMware Telco Cloud Automation    Existing Kubernetes Version    Upgrade to Kubernetes 1.24.10

2.2                              1.23.10                        Yes

Before upgrading Kubernetes to the latest version, consider the following constraints and
prepare for the upgrade plan:

n VMware Telco Cloud Automation preserves the customization performed through previous
CNF instantiate / upgrade on the nodepools of the cluster. Any manual changes performed
directly on the nodes are not preserved.

n Applications may face downtime during the Kubernetes upgrade and may take some time to be
available for operations.

n Clusters may take some time to be available for operations.

n Check and upgrade the required node pools in the Workload cluster.

n The IP addresses of master nodes and the worker nodes change after upgrade.

n If the upgrade fails, you can correct the configuration and perform the upgrade again.

Implications of Not Upgrading Management Cluster


Not upgrading a Management cluster can impact various operations.


Not upgrading the management node can impact:

n Ability to edit the management cluster.

n Ability to create, upgrade, and modify the workload cluster managed through the
management cluster.

n Ability to upgrade and instantiate the CNF in the workload cluster managed through the
management cluster.

Upgrade the Kubernetes Version for a Cluster Instance


You can upgrade the existing version of Kubernetes to the latest Kubernetes version.

Prerequisites

n Create an upgrade plan for upgrading the cluster instance, considering the impact of
cluster downtime.

n Take a backup of any manual customizations added to the clusters. You must take the backup
manually.

Note Note down all the manual customizations added to the clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure.

The CaaS Infrastructure page is displayed.

3 Select the cluster instance for upgrade.

4 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.

5 Select Upgrade Cluster.

The Upgrade Cluster window is displayed.

6 In the Select Version field, select the Kubernetes version to upgrade from the list.

7 In the Virtual Machine Template, click the option to select the VM template applicable for the
new version of Kubernetes.

8 Click Upgrade.

The upgrade process starts.

9 Click > to view the progress of the update.

What to do next

To get the latest IP address details of the node, view the Cluster Instances page.


Working with Kubernetes Cluster Templates


A Kubernetes cluster template is a blueprint of the Kubernetes cluster and contains the required
configuration. Before creating a Kubernetes cluster, create a Kubernetes cluster template for
deploying the cluster. Using VMware Telco Cloud Automation, you can create a Kubernetes
cluster template, upload, download, edit, and use it for deploying multiple clusters.

When you define the Kubernetes cluster template, select whether it is a Management cluster
type or a Workload cluster type.

n Management cluster - A Management cluster is a Kubernetes cluster that performs the role
of the primary management and operational center. You use the Management cluster for
managing multiple Workload clusters.

n Workload cluster - The clusters where the actual application resides. Deploy network
functions on the Workload clusters.

When creating a Kubernetes cluster template for a Management cluster or a Workload cluster,
you must provide two types of configuration information:

n Cluster Configuration - Specify the details about the Container Storage Interfaces (CSI) such
as vSphere CSI and NFS Client, the Container Network Interfaces (CNI) such as Antrea, Calico,
and Multus, the Kubernetes version, and tools such as Helm charts.

n Master Node and Worker Node Configuration - Specify the details about the master node
virtual machine and the worker node virtual machines, such as the storage, CPU, memory
size, number of networks, labels, and the number of replicas for the master nodes and
worker nodes.
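
Cluster templates are stored and exchanged as JSON files (see Download and Upload a
Kubernetes Cluster Template later in this chapter). The fragment below is a rough sketch of the
kind of information a Management cluster template carries; the key names are illustrative
assumptions and not the exact template schema.

{
  "name": "mgmt-cluster-template",
  "clusterType": "MANAGEMENT",
  "kubernetesVersion": "v1.24.10",
  "cni": "antrea",
  "masterNodes": [
    { "name": "master", "cpu": 4, "memory": 16, "storage": 50, "replica": 3, "networks": ["MANAGEMENT"] }
  ],
  "workerNodes": [
    { "name": "np1", "cpu": 8, "memory": 32, "storage": 80, "replica": 2, "networks": ["MANAGEMENT"], "labels": ["type=worker"] }
  ]
}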

Supported Addon Versions


The addon versions are applicable only when deploying new Kubernetes clusters. They are not
applicable when you upgrade your Kubernetes cluster to a newer version.

Addon Versions

The following add-on versions apply to VMware Telco Cloud Automation 2.3.0 with VMware Tanzu
Kubernetes Grid 2.1.1 and Kubernetes versions 1.22.17, 1.23.16, and 1.24.10.

Type          Name           Version

CNI           Antrea         v1.7.2_vmware.1-tkg.1-advanced

CNI           Calico         v3.24.1_vmware.1-tkg.1

CNI           Multus         v3.8.0_vmware.2-tkg.2

CNI           Whereabouts    v0.5.4_vmware.1-tkg.1

CSI           vSphere CSI    v2.6.2_vmware.2-tkg.2

CSI           NFS Client     v4.0.2

Monitoring    Fluent-Bit     v1.9.5_vmware.1-tkg.1

Monitoring    Prometheus     v2.37.0_vmware.2-tkg.1

Networking    AKO            v1.8.2_vmware.1-tkg.1

System        Cert-Manager   1.10.1+vmware.1-tkg.2

Tools         Velero         v1.9.5_vmware.1

Note A few of the add-on versions are controlled by TKG, and therefore the versions may change
after the release. However, the versions documented in the preceding table are the minimum
versions available. For information on the updated versions, refer to the TKG documentation.

Create a Management Cluster Template


Create a Management cluster template and use it to deploy your Kubernetes Management
cluster.

Prerequisites

To perform this operation, you require a role with Infrastructure Design privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > Caas Infrastructure > Cluster Templates and click Add.

The Add Kubernetes Template wizard is displayed.

3 In the Template Detail tab, provide the following details:

n Name - Enter the name of the template.

n Cluster Type - Select Management Cluster.

n Description (Optional) - Enter a description for the template.

n Tags (Optional) - Add appropriate tags to the template.


n Kubernetes Version - By default, the latest version of Kubernetes is selected.

Note The supported Container Network Interface (CNI) for a Management cluster is Antrea.

4 Click Next.

5 In the Master Node Configuration tab, enter the following details:

n Name - Name of the profile

n CPU - Number of vCPUs

n Memory - Memory in GB

n Storage - Storage size in GB

n Replica - Number of controller node VMs to be created. The ideal number of replicas for
production or staging deployment is 3.

n Networks - Enter the labels to group the networks. The minimum number of labels
required to connect to the management network is 1. Network labels are used for
providing networks inputs when deploying a cluster. Meaningful network labels such
as N1, N2, N3, and so on, help the deployment users provide the correct network
preferences. To add more labels, click Add.

Note For the Management network, master node supports only one label.

n Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to
the Kubernetes node. To add more labels, click Add.

6 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.

7 Click Next.

8 In the Worker Node Configuration tab, add a node pool. A node pool is a set of nodes
that have similar properties. Pooling is useful when you want to group the VMs based on
the number of CPUs, storage capacity, memory capacity, and so on. You can add one node
pool to a Management cluster and multiple node pools to a Workload cluster, with different
groups of VMs. To add a node pool, enter the following details:

n Name - Name of the profile

n CPU - Number of vCPUs

n Memory - Memory in GB

n Storage - Storage size in GB

n Replica - Number of worker node VMs to be created.

n Networks - Enter the labels to group the networks. Network labels provide networks
inputs when deploying a cluster. To add more labels, click Add.


n Labels - Enter the appropriate labels for this profile. These labels are added to the
Kubernetes node. To add more labels, click Add.

9 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.

10 Click Next and review the configuration.

11 Click Add Template.

Results

The template is created.

What to do next

Create a Workload cluster template.

Add AVI Kubernetes Operator


You can add the AVI Kubernetes Operator - Operator (AKOO) when creating a Management
cluster template or by editing a cluster configuration.

This topic lists the steps to add AKOO using the Edit Cluster Configuration tab.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Edit Cluster Configuration.

5 In the Cluster Configuration tab, under Networking, click Add and select ako-operator.

6 Enter the following details:

Option Description

AVI Controller

Controller Host Enter the AVI controller host name. The format is
scheme://address[:port], for example, https://avi-controller.example.com:443.
n Scheme: HTTP or HTTPS. Defaults to HTTPS if the
scheme is not specified.
n Address: IPv4 address or the host name of the AVI
controller.
n Port: If the port is not specified, it defaults to the port of
the AVI controller.

User Name Enter the user name to log in to the AVI controller.

Password Enter the password to log in to the AVI controller.

Trusted Certificate (Optional) Paste the trusted certificate in native multiline format for
secure communication with the AVI controller.


Load Balancer and Ingress Service Configuration

Cloud Name Enter the cloud name configured in the AVI Controller.

Default Service Engine Group Enter the service engine group name configured in the
AVI Controller.

Default VIP Network Enter the VIP network name in the AVI Controller.

Default VIP Network CIDR Enter the VIP network CIDR in the AVI Controller.

Note If the certificate or password of the Avi Controller expires, you can edit the AKO
Operator configurations with the new certificate or password.

7 Click Add.

Edit a Kubernetes Cluster Template


You can edit a cluster template to update its description, cluster configuration, tags, Kubernetes
version, master node configuration details, and worker node configuration details.

Note
n Ensure that storage size is 50 GB.

n Ensure that the network label length does not exceed 15 characters.
n Editing the Kubernetes cluster template does not change the cluster instances that are
already deployed.

To perform this operation, you require a role with Infrastructure Design privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.

3 Select the Kubernetes cluster template that you want to edit.

4 Click Edit.

5 In the Edit Kubernetes Template wizard, make the required updates to the template details,
master node configuration, and worker node configuration fields.

6 Review the updates and click Update Template.

Results

You have successfully updated the cluster template.


Download and Upload a Kubernetes Cluster Template


You can download a Kubernetes cluster template as a JSON file and upload it to another
environment. This option is useful when you want to share a validated cluster template across
multiple environments.

Note To make sure that all the features are available, download or upload a Kubernetes cluster
template of the same VMware Telco Cloud Automation version.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.

3 To download, select the cluster template and click Download.

The cluster template downloads as a JSON file.

4 To upload the JSON file to a different environment, navigate to the environment and log in to
the VMware Telco Cloud Automation web interface.

5 Go to CaaS Infrastructure > Cluster Templates.

6 Click Upload and select the JSON file.

7 Click Upload.

The cluster template uploads to your environment and is available in the CaaS Infrastructure
> Cluster Templates tab.

Delete a Kubernetes Cluster Template


Delete a Kubernetes cluster template from VMware Telco Cloud Automation.

Note You cannot delete a Kubernetes template when it is being used for deploying a cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click the Cluster Templates tab.

3 Select the Kubernetes cluster template that you want to delete.

4 Click Delete.

5 Confirm the delete operation.

Results

The cluster template is deleted from VMware Telco Cloud Automation.

Upgrade the Kubernetes Version for Cluster Template


You can upgrade the existing Kubernetes version of a cluster template to the latest Kubernetes version.


For steps to upgrade the Kubernetes version of a cluster template, see Edit a Kubernetes Cluster
Template.

Deploy a Management Cluster


Deploy a cluster using the Kubernetes cluster template.

Prerequisites

n You require a role with Infrastructure Lifecycle Management privileges.

n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.

n You must have onboarded a vSphere VIM.

n You must have created or uploaded a Management cluster template.

n A network must be present with the DHCP range and the static IP address of the same
subnet.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click Deploy Cluster.

Note Depending on the VMware Telco Cloud Automation setup, internet-accessed or air-
gapped, the options available for the cluster may change.

3 From the drop-down menu, select Management Cluster (v1).

n If you have saved a validated Management cluster configuration that you want to
replicate on this cluster, click Upload on the top-right corner and upload the JSON file.
The fields are then auto-populated with this configuration information and you can edit
them as required. You can also use the Copy Spec function of VMware Telco Cloud
Automation instead of the JSON file. For details, see Copy Spec and Deploy New.

n If you want to create a Management cluster configuration from the beginning, perform
the next steps.

4 Select a cloud on which you want to deploy the Kubernetes cluster.

Under the Advanced Options, you can select the Infrastructure for Management Cluster
LCM. The VMware Telco Cloud Automation uses this VIM and associated control planes for
cluster LCM operations.

5 Click Next.

6 The Select Cluster Template tab displays the available Kubernetes cluster templates. Select
the Management Kubernetes cluster template that you have created.

Note If the template displays as Not Compatible, edit the template and try again.

7 Click Next.


8 In the Kubernetes Cluster Details tab, provide the following details:

n Name - Enter the cluster name. The cluster name must be compliant with DNS hostname
requirements as outlined in RFC-952 and amended in RFC-1123.

n Description (Optional) - Enter an optional description of the cluster.

n Password - Create a password to log in to the Master and Worker nodes. The default
user name is capv.

Note Ensure that the password meets the minimum requirements displayed in the UI.

n Confirm Password - Confirm the password that you have entered.

n OS Image With Kubernetes - The pop-up menu displays the OS image templates in your
vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS
image with the selected Kubernetes version. If there are no templates, ensure that you
upload them to your vSphere environment.

n IP Version - Whether to use the IPv4 or IPv6 for cluster deployment. Select the value
from the drop-down list.

n Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that
provides load-balancing services to the cluster API server. This kube-vip pod uses a
static virtual IP address to load-balance API requests across multiple nodes. Assign an
IP address that is not within your DHCP range, but in the same subnet as your DHCP
range.

n Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure
logs of all the nodes in the cluster.

n vSphere Cluster - Select the default vSphere cluster on which the Master and Worker
nodes are deployed.

n Resource Pool - Select the default resource pool on which the Master and Worker nodes
are deployed.

n VM Folder - Select the virtual machine folder on which the Master and Worker nodes are
placed.

n Datastore - Select the default datastore for the Master and Worker nodes to use.

n MTU (Optional) - Select the maximum transmission unit (MTU) in bytes for management
interfaces of control planes and node pools. If you do not select a value, the default value
is 1500.

n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.


n Airgap & Proxy Settings - Use this option when you need to configure the Airgap or the
Proxy environment for VMware Telco Cloud Automation. If you do not want to use the
Airgap or Proxy, select None.

Note You must use either airgap or proxy in an IPv6 setup. Do not select none for an
IPv6 setup.

n In an air-gapped environment:

n If you have added an air-gapped repository, select the repository using the
Airgap Repository drop-down menu.

n If you have not added an air-gapped repository yet and want to add one now,
select Enter Repository Details:

n FQDN - Enter the URL of your repository.

n CA Certificate - If your air-gapped repository uses a self-signed certificate,
paste the contents of the certificate in this text box. Ensure that you copy
and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END
CERTIFICATE-----.

n In a proxy environment:

n If you have added a proxy, select the proxy using the Proxy Repository drop-
down menu.

n If you have not added proxy yet and want to add one now, select Enter Proxy
Details and provide the following details:

n HTTP Proxy - To route the HTTP requests through the proxy, enter the URL or full
domain name of the HTTP proxy. You must use the format FQDN:Port or IP:Port.

n HTTPS Proxy - To route the HTTPS requests through the proxy, enter the URL
or full domain name of the HTTPS proxy. You must use the format FQDN:Port or
IP:Port.

n (Optional) No Proxy - Enter the name of the local server.

Note You must add the cluster node network CIDR, vCenter FQDN(s), harbor
FQDN(s) and any other host that you want to bypass the proxy in this list.

n (Optional) CA Certificate - If your proxy uses a self-signed certificate,
paste the contents of the certificate in this text box. Ensure that
you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to
-----END CERTIFICATE-----.

9 In Harbor, If you have defined a Harbor repository as a part of your Partner system, click Add
> Select Repository. To add a new repository, click Add > Enter Repository Detail.

Note You can add multiple Harbor repositories.


10 Click Next.

11 In the Control Plane Node Configuration tab, provide the following details:

Note VMware Telco Cloud Automation displays the allocated CPU, Memory, and Storage
details, along with the number of Replicas, for the master node. These configurations depend
on the cluster template selected for the Kubernetes cluster deployment.

n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the Master
node, select the vSphere cluster from here.

n Resource Pool (Optional) - If you want to use a different resource pool for the master
node, select the resource pool from here.

n Datastore (Optional) - If you want to use a different datastore for the master node, select
the datastore from here.

n Network - Associate a management or a private network. Ensure that the management
network connects to a network where DHCP is enabled, and can access the VMware
Photon repository.

n Domain Name Servers - You can override the DNS. To add a DNS, click Add.

12 Click Next.

13 In the Worker Node Configuration tab, provide the following details:

Note VMware Telco Cloud Automation displays the allocated CPU, Memory, and Storage
details, along with the number of Replicas, for the worker nodes. These configurations depend
on the cluster template selected for the Kubernetes cluster deployment.

n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the worker
node, select the vSphere cluster from here.

n Resource Pool (Optional) - If you want to use a different resource pool for the worker
node, select the resource pool from here.

n Datastore (Optional) - If you want to use a different datastore for the worker node, select
the datastore from here.

n Network - Associate a management or a private network. Ensure that the management


network can access the VMware Photon repository.

n Domain Name Servers - You can override the DNS. To add a DNS, click Add.

14 Click Next and review the configuration. You can download the configuration and reuse it for
deploying a cluster with a similar configuration.

15 Click Deploy.

When deploying a management cluster, the certificate renewal of the cluster is automatically
enabled and the number of days defaults to 90.


If the operation is successful, the cluster is created and its status changes to Active. If the
operation fails, the cluster status changes to Not Active. If the cluster fails to create, delete
the cluster, upload the previously downloaded configuration, and recreate it.

Results

The Management cluster is deployed and VMware Telco Cloud Automation automatically pairs it
with the cluster of the site.

Note You can deploy one Management cluster at a time. Parallel deployments are queued and
deployed in sequence.

What to do next

n You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation
from the Kubernetes Cluster tab.

n To view more details of the Kubernetes cluster that you have deployed, change the
password, or to add syslog servers, go to CaaS Infrastructure > Cluster Instances and click
the cluster.

Edit a Management Cluster Control Plane


You can scale up or scale down the number of control plane nodes for a cluster. You can also
configure the Machine Health Check for these nodes.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click ⋮ corresponding to the management cluster that you want to edit and select Edit
Control Plane Node Configuration.

4 (Optional) modify the value of Replicas to scale down or scale up the control plane nodes.

5 (Optional) to activate the machine health check, select Configure Machine Health Check. For
more information, see Machine Health Check.

By default, the Machine Health Check controller is deactivated.

6 Under Advanced Configuration, you can configure the node start-up timeout duration and
set the unhealthy conditions.

a Node Start Up Timeout- (Optional) Enter the time duration for Machine Health Check to
wait for the node to join the cluster. If the node does not join within the specified time,
Machine Health Check considers it unhealthy and starts the remediation process.

b Node Unhealthy Conditions - Set unhealthy conditions for the nodes. If any of these
conditions are met, Machine Health Check considers these nodes as unhealthy and starts
the remediation process.


7 Click SAVE.

Edit a Management Cluster Node Pool


You can scale up or scale down the number of nodes for a cluster. You can also configure the
Machine Health Check for these nodes.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click ⋮ corresponding to the management cluster that you want to edit and select Edit
Worker Node Configuration.

4 Select the node pool that you want to edit and click Edit.

5 Modify the value of Replicas to scale down or scale up the node pool.

6 To activate the machine health check, select Configure Machine Health Check. For more
information, see Machine Health Check.

By default, the Machine Health Check controller is deactivated.

7 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.

a Node Start Up Timeout- (Optional) Enter the time duration for Machine Health Check to
wait for the node to join the cluster. If the node does not join within the specified time,
Machine Health Check considers it unhealthy.

b Node Unhealthy Conditions - Set unhealthy conditions for the nodes. If any of these
conditions are met, Machine Health Check considers these nodes as unhealthy and starts
the remediation process.

8 Click UPDATE.

Working with V1 Workload Clusters


Update the Kubernetes version of a V1 Workload cluster, create and edit a Workload cluster
template, and deploy a Workload cluster.

Upgrade Workload Kubernetes Cluster Version


You can upgrade the existing Workload Kubernetes Cluster version to the latest versions of
Kubernetes supported in the current version of the VMware Telco Cloud Automation.

The following table lists the Kubernetes upgrade compatibility for the Workload cluster when
upgrading from VMware Telco Cloud Automation.


VMware Telco Cloud Automation    Existing Kubernetes Versions    v1.22.17    v1.23.16    v1.24.10

2.2    1.21.14    Yes    No    No

2.2    1.22.13    Yes    Yes    No

2.2    1.23.10    No    Yes    Yes

Note Before upgrading to TCA 2.3, it is mandatory to upgrade all the Kubernetes clusters
deployed as part of TCA 2.1.x in TCA 2.2.

Implications of Not Upgrading Workload Cluster


Not upgrading an unsupported Workload cluster can impact various operations.

Not upgrading the workload cluster can impact:

n Ability to edit the workload cluster.

n Ability to create, upgrade, and modify the node pools.

n Ability to upgrade and instantiate the CNF.

Deploying a Workload Kubernetes Cluster


Deploy a Workload cluster using the Kubernetes cluster template. You can deploy distributed
Workload clusters where the Master node can be on one vSphere cluster and the Worker node
on another vSphere cluster.

VMware Telco Cloud Automation uses VMware Tanzu Kubernetes Grid to create VMware Tanzu
Kubernetes clusters. VMware Tanzu Kubernetes Grid has concepts such as Management and
Workload clusters. The Management cluster manages the Workload clusters and both these
clusters can be deployed on different vCenter Servers.

For more information about the VMware Tanzu Kubernetes Grid concepts, see Tanzu Kubernetes
Grid Concepts at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html.

Deploy a Workload Cluster


Deploy a Workload cluster using the Kubernetes cluster template.

Prerequisites

n You require a role with Infrastructure Lifecycle Management privileges.

n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.

n You must have onboarded a vSphere VIM.

n You must have created a Management cluster or uploaded a Workload cluster template.

n A network must be present with a DHCP range and a static IP of the same subnet.


n When you enable multi-zone, ensure that:

n For region: vSphere Datacenter has tags attached for the selected category.

n For zone: vSphere Cluster or hosts under the vSphere cluster has tags attached for the
selected category. Ensure that vSphere Cluster and hosts under vSphere cluster do not
share the same tags.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click Deploy Kubernetes Cluster.

n If you have saved a validated Workload cluster configuration that you want to replicate
on this cluster, click Upload on the top-right corner and upload the JSON file. The fields
are then auto-populated with this configuration information and you can edit them as
required. You can also use the Copy Spec function of VMware Telco Cloud automation
instead of JSON file, for details, see Copy Spec and Deploy New.

n If you want to create a Workload cluster configuration from the beginning, perform the
next steps.

3 Select a cloud on which you want to deploy the Kubernetes cluster.

4 Click Next.

5 The Select Cluster Template tab displays the available Kubernetes cluster templates. Select the
Workload Kubernetes cluster template that you have created.

Note If the template displays as Not Compatible, edit the template and try again.

6 Click Next.

7 In the Kubernetes Cluster Details tab, provide the following details:

n Name - Enter the cluster name. The cluster name must be compliant with DNS hostname
requirements as outlined in RFC-952 and amended in RFC-1123.

n Description (Optional) - Enter an optional description of the cluster.

n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.

n Password - Create a password to log in to the Master node and the Worker node. The
default user name is capv.

Note Ensure that the password meets the minimum requirements displayed in the UI.

n Confirm Password - Confirm the password that you have entered.

n OS Image With Kubernetes - The pop-up menu displays the OS image templates in your
vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS
image with the selected Kubernetes version. If there are no templates, ensure that you
upload them to your vSphere environment.


n Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that
provides load-balancing services to the cluster API server. This kube-vip pod uses a
static virtual IP address to load-balance API requests across multiple nodes. Assign an
IP address that is not within your DHCP range, but in the same subnet as your DHCP
range.

n Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure
logs of all the nodes in the cluster.

n vSphere Cluster - Select the default vSphere cluster on which the Master and the Worker
nodes are deployed.

n Resource Pool - Select the default resource pool on which the Master and Worker nodes
are deployed.

n VM Folder - Select the virtual machine folder on which the Master and Worker nodes are
placed.

n Datastore - Select the default datastore for the Master and Worker nodes to use.

n MTU (Optional) - Select the maximum transmission unit (MTU) in bytes for management
interfaces of control planes and node pools. If you do not select a value, the default value
is 1500.

n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.

n Airgap & Proxy Settings - Use this option when you need to configure the Airgap or the
Proxy environment for VMware Telco Cloud Automation. If you do not want to use the
Airgap or the Proxy, select None.

n In an air-gapped environment:

n If you have added an air-gapped repository, select the repository using the
Airgap Repository drop-down menu.

n If you have not added an air-gapped repository yet and want to add one now,
select Enter Repository Details:

n Name - Provide a name for your repository.

n FQDN - Enter the URL of your repository.

n CA Certificate - If your air-gapped repository uses a self-signed certificate,


paste the contents of the certificate in this text box. Ensure that you copy
and paste the entire certificate, from ----BEGIN CERTIFICATE---- to ----END
CERTIFICATE----.

n In a proxy environment

n If you have added a proxy repository, select the repository using the Proxy
Repository drop-down menu.


n If you have not added a proxy repository yet and want to add one now, select
Enter Repository Details:

n HTTP Proxy - To route the HTTP requests through the proxy, enter the URL or
full domain name of the HTTP proxy.

n HTTPS Proxy - To route HTTPS requests through the proxy, enter the URL
or full domain name of the HTTPS proxy.

n No Proxy - Enter the name of the local server.

n CA Certificate - If your proxy or repository uses a self-signed certificate,
paste the contents of the certificate in this text box. Ensure that you copy
and paste the entire certificate, from ----BEGIN CERTIFICATE---- to ----END
CERTIFICATE----.

n Harbor - If you have defined a Harbor repository as a part of your Partner system, click
Add > Select Repository. To add a new repository, click Add > Enter Repository Detail.

Note You can add multiple Harbor repositories.

n NFS Client - Enter the server IP address and the mount path of the NFS client. Ensure that
the NFS server is reachable from the cluster. The mount path must be accessible to read
and write.

n If not all the nodes inside the Kubernetes cluster have access to a shared datastore,
you can enable multi-zone. To enable multi-zone, provide the following details in the
vSphere CSI:

Note The Multi-Zone feature is not supported on an existing Kubernetes cluster that
is upgraded from previous VMware Telco Cloud Automation versions. It is also not
supported on a newly created workload cluster from a Management cluster that is
upgraded from a previous VMware Telco Cloud Automation version.

n Enable Multi-Zone - Click the corresponding button to enable the multi-zone feature.

n Region - Select the region from list of categories. VMware Telco Cloud Automation
obtains the information of categories created in the VMware vSphere server and
displays the list.

Note If you cannot find the region in the list, click Force Refresh to obtain the latest
list of categories from the VMware vSphere server.

n Zone - Select the zone from list of categories. VMware Telco Cloud Automation
obtains the information of zones created in the VMware vSphere server and displays
the list.

Note If you cannot find the zone in the list, click Force Refresh to obtain the latest list
of categories from the VMware vSphere server.


n vSphere CSI Datastore (Optional) - Select the vSphere CSI datastore. This datastore must
be accessible from all the nodes in the cluster. This datastore is provided as parameter
to default Storage Class. When you enable the multi-zone, the vSphere CSI Datastore is
disabled.

8 Click Next.

9 In the Control Plane Node Configuration tab, provide the following details:

n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the master
node, select the vSphere cluster from here.

n Resource Pool (Optional) - If you want to use a different resource pool for the master
node, select the resource pool from here.

n Datastore (Optional) - If you want to use a different datastore for the master node, select
the datastore from here.

n Network - Associate a management or a private network. Ensure that the management


network connects to a network where DHCP is enabled, and can access the VMware
Photon repository.

n Domain Name Servers - You can override the DNS. To add a DNS, click Add.

10 Click Next.

11 In the Worker Node Configuration tab, provide the following details for each node pool
defined in the template:

n vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the worker
node, select the vSphere cluster from here.

n Resource Pool (Optional) - If you want to use a different resource pool for the worker
node, select the resource pool from here.

n Datastore (Optional) - If you want to use a different datastore for the worker node, select
the datastore from here.

n Network - Associate a management or a private network. Ensure that the management


network connects to a network where DHCP is enabled, and can access the VMware
Photon repository.

12 Click Next and review the configuration. You can download the configuration and reuse it for
deploying a cluster with a similar configuration.

13 Click Deploy.

If the operation is successful, the cluster is created and its status changes to Active. If the
operation fails, the cluster status changes to Not Active. If the cluster fails to create, delete
the cluster, upload the previously downloaded configuration, and recreate it.


Results

The Workload cluster is deployed and VMware Telco Cloud Automation automatically pairs it
with the cluster's site.

What to do next

n You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation
from the Kubernetes Cluster tab.

n To view more details of the Kubernetes cluster that you have deployed, go to CaaS
Infrastructure > Cluster Instances and click the cluster.

Create a v1 Workload Cluster Template


Create a Workload cluster template and use it for deploying your workload clusters.

Prerequisites

To perform this operation, you require a role with Infrastructure Design privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > Caas Infrastructure > Cluster Templates and click Add.

3 In the Template Details tab, provide the following details:

n Name - Enter the name of the Workload cluster template.

n Cluster Type - Select Workload Cluster.

n Description (Optional) - Enter a description for the template.

n Tags (Optional) - Add appropriate tags to the template.

4 Click Next.

5 In the Cluster Configuration step, provide the following details:

n Kubernetes Version - Select the Kubernetes version from the drop-down menu. For the
list of supported versions, see Table 1-1. Supported Features on Different VIM Types.


n CNI - Click Add and select a Container Network Interface (CNI). The supported CNIs are
Multus, Calico, and Antrea. To add additional CNIs, click Add under CNI.

Note
n Either Calico or Antrea must be present, but only one of them. Multus is mandatory
when the network functions require any CNI plug-ins such as SRIOV or Host-Device (an
illustrative Multus network attachment is shown after the plug-in list below).

n You can add CNI plug-ins such as SRIOV as a part of Node Customization when
instantiating, upgrading, or updating a CNF.

n The following CNIs or CNI plug-ins are available by default:

Note VMware Telco Cloud Automation does not support dhcp in an IPv6
environment.

bandwidth
dhcp
flannel
host-local
loopback
ptp
static
vlan
bridge
firewall
host-device
ipvlan
macvlan
portmap
sbr
tuning
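For reference only, the following is a minimal sketch of a Multus NetworkAttachmentDefinition
of the kind a network function might use for a secondary interface; the attachment name,
master interface, and IPAM subnet are hypothetical.

# Illustrative only: a macvlan attachment on a secondary interface named N1,
# with a hypothetical host-local subnet.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-n1
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "N1",
    "mode": "bridge",
    "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
  }'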

n CSI - Click Add and select a Container Storage Interface (CSI) such as vSphere CSI or
NFS Client. For more information, see https://vsphere-csi-driver.sigs.k8s.io/ and https://
github.com/kubernetes-sigs/nfs-subdir-external-provisioner.

Note You can create a persistent volume using vSphere CSI only if all nodes in the cluster
have access to a shared datastore.

n Timeout (Optional) (For vSphere CSI) - Enter the CSI driver call timeout in seconds.
The default timeout is 300 seconds.

n Storage Class - Enter the storage class name. This storage class is used to provision
Persistent Volumes dynamically. A storage class with this name is created in the
Kubernetes cluster. The storage class name defaults to vsphere-sc for the vSphere
CSI type and nfs-client for the NFS Client type (a sample claim that uses this
storage class is shown after the CSI options).


n Default Storage Class - To set this storage class as default, enable the Default
Storage Class option. The storage class defaults to True for the vSphere CSI type.
It defaults to False for the NFS Client type. Only one of these types can be the default
storage class.

Note Only one vSphere CSI type and one NFS Client type storage class can be
present. You cannot add more than one storage class of the same type.

n To add additional CSIs, click Add under CSI.
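For reference, the following is a minimal sketch of a PersistentVolumeClaim that consumes the
storage class defined above; the claim name and size are hypothetical, and vsphere-sc is the
default name documented for the vSphere CSI type, so use the name you configured.

# Illustrative only: a claim that dynamically provisions a volume from the
# storage class created by the cluster template.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-sc  # the storage class name configured in the template
  resources:
    requests:
      storage: 10Gi             # hypothetical size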

n Tools - The current supported tool is Helm. Helm helps in troubleshooting the deployment
or upgrade of a network function.

n Helm 3.x is pre-installed in the cluster and the option to select Helm 3.x is removed
from the cluster template.

n Helm version 2 is mandatory when the network functions deployed on this cluster
depend on Helm v2. The supported Helm 2 version is 2.17.0.

n If you provide Helm version 2, VMware Telco Cloud Automation automatically deploys
Tiller pods in the Kubernetes cluster. If you require Helm CLI to interact with your
Kubernetes cluster for debugging purposes, install Helm CLI manually.

Note If you require any other version of Helm, apart from the installed versions, you
must install the required versions manually.

Click Add and select Helm from the drop-down menu. Enter the Helm version.

6 Click Next.

7 In the Master Node Configuration tab, enter the following details:

n Name - Name of the pool. The node pool name cannot be greater than 36 characters.

n CPU - Number of vCPUs

n Memory - Memory in GB

n Storage - Storage size in GB. Minimum disk size required is 50 GB.

n Replica - Number of controller node VMs to be created. The ideal number of replicas for
production or staging deployment is 3.

n Networks - Enter the labels to group the networks. The minimum number of labels
required to connect to the management network is 1. Network labels are used for
providing networks inputs when deploying a cluster. Meaningful network labels such
as N1, N2, N3, and so on, help the deployment users provide the correct network
preferences. To add more labels, click Add.


n Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to
the Kubernetes node. To add more labels, click Add.

Note For the Management network, the master node supports only one label.

8 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.

9 In the Worker Node Configuration tab, add a node pool. A node pool is a set of nodes that
have similar VMs. Pooling is useful when you want to group the VMs based on the number of
CPUs, storage capacity, memory capacity, and so on. You can add multiple node pools with
different groups of VMs. Each node pool can be deployed on a different cluster or a resource
pool.

Note All Worker nodes in a node pool contain the same Kubelet and operating system
configuration. Deploy one network function with infrastructure requirements on one node
pool.

You can create multiple node pools for the following scenarios:

n When you require the Kubernetes cluster to be spanned across multiple vSphere clusters.

n When the cluster is used for multiple network functions that require node customizations.

To add a node pool, enter the following details:

n Name - Name of the node pool. The node pool name cannot be greater than 36
characters.

n CPU - Number of vCPUs

n Memory - Memory in MB

n Storage - Storage size in GB. Minimum disk size required is 50 GB.

n Replica - Number of worker node VMs to be created.

n Networks - Enter the labels to group the networks. Networks use these labels to provide
network inputs during a cluster deployment. Add additional labels for network types such
as IPvlan, MacVLAN, and Host-Device. Meaningful network labels such as N1, N2, N3,
and so on, help users provide the correct network preferences during deployment. It is
mandatory to include a management interface label. SR-IOV interfaces are added to the
Worker nodes when deploying the network functions.

Note A label length must not exceed 15 characters.

Apart from the management network, which is always the first network, the other labels
are used as interface names inside the Worker nodes. For example, when you deploy a
cluster using the template with the labels MANAGEMENT, N1, and N2, the Worker nodes
interface names are eth0, N1, N2. To add more labels, click Add.


n Labels - Enter the appropriate labels for this profile. These labels are applied to the
Kubernetes node and you can use them as node selectors when instantiating a network
function. To add more labels, click Add.

10 Under CPU Manager Policy, set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes, see
the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/cpu-
management-policies/.

Note For CPU-intensive workloads, use Static as the CPU Manager Policy.
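The following is a minimal sketch, assuming a hypothetical pod name, image, and node pool
label, of a pod that can receive exclusive CPUs when the node's CPU Manager Policy is Static.
Exclusive pinning applies only to Guaranteed QoS pods, that is, integer CPU values with
requests equal to limits; the nodeSelector shows how a node pool label can steer the pod to
the intended Worker nodes.

# Illustrative only: a Guaranteed QoS pod eligible for exclusive CPUs under the
# static CPU Manager Policy, scheduled to a node pool through a label.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned-app                    # hypothetical pod name
spec:
  nodeSelector:
    nodepool: np1                         # hypothetical node pool label
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: "4"                          # integer CPU, equal to the limit
        memory: 8Gi
      limits:
        cpu: "4"
        memory: 8Gi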

11 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.

12 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.

a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.

b Set unhealthy conditions for the nodes. If any of these conditions are met, Machine Health
Check considers these nodes as unhealthy and starts the remediation process.

13 To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes,
click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.

14 Click Next and review the configuration.

15 Click Add Template.

Results

The template is created.

What to do next

Deploy a Management or Workload cluster.

Managing Workload Clusters after Deployment


After you deploy a Kubernetes cluster, you can edit its cluster configuration, Master and Worker
node configurations, upgrade the Kubernetes version, and change the Kubernetes password.

Transform v1 Workload Cluster to v2 Workload Cluster


Using the VMware Telco Cloud Automation user interface, you can transform a Workload cluster
that was created using the v1 API schema to the latest v2 API schema.

Note When you transform a v1 workload cluster to a v2 workload cluster, the certificate renewal
of the cluster is automatically enabled and the number of days defaults to 90.


Prerequisites

n You require a role with Infrastructure Lifecycle Management privileges.

Note
n For a successful transformation of node pools, you must perform the sync esx info API call
before initiating a transform of imported v1 clusters.

n After the Workload Cluster transformation:

n You cannot obtain the Task history of transformed Workload Cluster.

n VMware Telco Cloud Automation changes the node pool name from <existing nodepool
name> to <cluster name>-<existing nodepool name >. For example, if a Workload Cluster
w1 has a node pool named np1, then after transformation node pool name becomes
w1-np1.

n You cannot use the v1 API on the transformed Workload Cluster. You can use the v2 API
to manage all the life-cycle management operations on this Workload Cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

The CaaS Infrastructure dashboard lists the v1 and v2 clusters and their status.

2 Click the ⋮ menu against a v1 cluster that you want to transform, and click Transform Cluster.

Note After you transform a v1 Workload cluster to v2 Workload cluster, you cannot perform
any v1 operations on it.

3 Click Transform.

Results

VMware Telco Cloud Automation converts the v1 Workload Cluster to v2 Workload cluster. You
can now perform v2 life-cycle management operations on it.

Stop a Cluster Creation Operation


You can stop an ongoing cluster creation operation.

Prerequisites

Note This operation is supported only on Workload clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster creation operation that you want
to stop.


4 Click Abort.

Results

VMware Telco Cloud Automation rolls back the progress of Kubernetes Cluster creation
operation and deletes all the deployed nodes. After VMware Telco Cloud Automation stops the
cluster creation operation, you cannot deploy the same cluster again.

Edit a Kubernetes Cluster Configuration


You can add a Container Network Interface (CNI) such as Multus to the existing list of CNIs,
update the Container Storage Interface (CSI) timeout duration, and toggle the default storage
class type. You can add a tool such as Helm to manage your Kubernetes applications and update
its version. You can also add or update the syslog servers to redirect the infrastructure logs of
the Master and Worker nodes, and update the Harbor repository details.

Prerequisites

Ensure that the cluster is not running any operations.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Edit Cluster Configuration.

5 In the Cluster Configuration tab, add a CNI, CSI, tool, AVI Kubernetes Operator (AKO), or
syslog server, and click Save.

Note In a Workload cluster:

n You cannot edit the Storage Class name in the vSphere CSI (editing the NFS Client
storage class name is also not supported).

n You can add CNI, CSI, or Tools, but cannot remove them.

n You cannot enable Multi-Zone on an existing Kubernetes cluster that is upgraded from
previous VMware Telco Cloud Automation versions. It is also not supported on a newly
created Workload cluster from a Management cluster that is upgraded from a previous
VMware Telco Cloud Automation version.

n You cannot enable or disable multi-zone if any persistent volumes (PV) provisioned
through vSphere CSI are present inside Kubernetes cluster.

Results

You have successfully edited the cluster configuration.


Edit a Kubernetes Cluster Master Node Configuration


You can scale up or scale down the number of Master node replicas and add or remove labels.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Edit Master Node Configuration.

5 In the Master Nodes tab, scale down or scale up the Master nodes, add or remove labels,
and click Save.

Results

You have successfully edited the Master node configuration of your Kubernetes cluster instance.

Edit a Kubernetes Cluster Node Pool


You can scale up or scale down the number of Worker nodes in each node pool and add labels.
If a network function with infrastructure requirements is running in this node pool, the scaling up
operation automatically applies all the node customizations on the new nodes.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Edit Worker Node Configuration.

5 To edit a node pool, select the node pool from the Worker Nodes tab, and click Edit.

6 Modify the value of Replicas to scale down or scale up the Worker nodes. You can also add
labels and use these labels as node selectors when instantiating the Network Functions.

7 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.

8 Under Advanced Configuration, you can configure the Node Start Up Timeout duration and
set the unhealthy conditions.

a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.

b Set unhealthy conditions for the nodes. If any of these conditions are met, Machine Health
Check considers these nodes as unhealthy and starts the remediation process.

9 Click Update.


Results

You have successfully edited the Worker node configuration of a Kubernetes cluster instance in
your node pool.

Add a Node Pool


You can add a node pool to your Kubernetes Workload cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster where you want to add the node
pool.

4 Click Edit Worker Node Configuration and click Add.

In Add Node Pool, enter the following information:

n Name - Enter the name of the node pool. The node pool name cannot be greater than 36
characters.

n CPU - Select the number of virtual CPUs in the node pool.

n Memory - Select the amount of memory for the node pool.

n Storage - Select the storage size. Minimum disk size required is 50 GB.

n Replica - Select the number of worker node virtual machines.

n vSphere Cluster (Optional) - To use a different vSphere Cluster, select the vSphere
cluster from here.

n Resource Pool (Optional) - To use a different resource pool, select the resource pool from
here.

n Datastore (Optional) - To use a different datastore, select the datastore from here.

n Labels - Add key-value pair labels to your nodes. You can use these labels as node
selectors when instantiating a network function.

n Networks - You can add the network details.

n Label - Enter the labels to group the networks. Networks use these labels to
provide network inputs during a cluster deployment. Add additional labels for network
types such as IPvlan, MacVLAN, and Host-Device. Meaningful network labels such
as N1, N2, N3, and so on, help users provide the correct network preferences
during deployment. It is mandatory to include a management interface label. SR-IOV
interfaces are added to the Worker nodes when deploying the network functions.

Note A label length must not exceed 15 characters.


Apart from the management network, which is always the first network, the other
labels are used as interface names inside the Worker nodes. For example, when you
deploy a cluster using the template with the labels MANAGEMENT, N1, and N2, the
Worker nodes interface names are eth0, N1, N2. To add more labels, click Add.

n Network - Select the network that you want to associate with the label.

n (Optional) MTU - Provide the MTU value for the network. The minimum MTU value is
1500. The maximum MTU value depends on the configuration of the network switch.

n Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured
in the guest operating system of each node in the cluster. You can override this option on
the Master node and each node pool of the Worker node. To add a DNS, click Add.

n CPU Manager Policy - Set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes, see
the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/
cpu-management-policies/.

Note For CPU-intensive workloads, use Static as the CPU Manager Policy.

n Configure Machine Health Check - Click the corresponding button to enable the machine
health check. When you enable the Configure Machine Health Check, you can configure
the health check related options under Advanced Configuration. For details on Machine
Health Check, see Machine Health Check

n Under Advanced Configuration, you can configure the Node Start Up Timeout duration
and set the unhealthy conditions.

Note Node Start Up Timeout is applicable when the Machine Health Check is enabled.

1 (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check
to wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.

2 Set unhealthy conditions for the nodes. If the nodes meet any of these conditions,
Machine Health Check considers them as unhealthy and starts the remediation
process.

n To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes
nodes, select Use Linked Cloning for Cloning the VMs.

5 Click Add.

Results

You have successfully added the node pool to your Workload cluster.

Delete a Node Pool


Delete a node pool from the Kubernetes Workload cluster.


Prerequisites

Ensure that the node pool is not running any applications.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster and select Edit Worker Node
Configuration.

4 Select the node pool, click Delete, and confirm the operation.

Results

You have successfully deleted the node pool.

Change the Kubernetes Password


You can change the password that you had set when deploying your Kubernetes cluster.

Prerequisites

Ensure that your password meets the minimum security requirements listed in the interface.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster for which you want to change the
password.

4 Click Change Password.

5 In the Change Password pop-up window, enter your new password and confirm it.

6 Click Change Password.

Results

You have successfully changed the password of your Kubernetes cluster.

Note It might take some time for the system to reflect the changed password.

Copy Spec and Deploy New


You can copy the configuration of a specific Kubernetes cluster and use it for deploying a new
Kubernetes cluster.

This option is helpful when you are deploying multiple Kubernetes clusters with similar
configurations.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Caas Infrastructure and select the Kubernetes cluster.

3 Click the Options (⋮) symbol against the Kubernetes cluster and select Copy Spec and
Deploy New.

4 To confirm the operation, click OK.

Results

VMware Telco Cloud Automation automatically generates a new cluster template with the
configuration of the Kubernetes cluster that you copied.

Retry a Failed Cluster Creation Operation


If the cluster creation operation fails due to an unknown error, such as network
unavailability, you can retry the operation.

Note This operation is available from VMware Telco Cloud Automation version 1.9 onwards.

Note The Retry option retries the cluster creation operation from the point of failure. This
means that you cannot edit the properties and recreate the cluster from the beginning using Retry.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 To retry a failed Kubernetes Cluster creation operation, click the Options symbol against the
failed Kubernetes cluster and click Retry.

Results

The cluster creation operation resumes.

What to do next

To edit a cluster and then retry the cluster creation operation, you must delete the cluster and
recreate it.

Viewing Cluster Details


After a Kubernetes cluster is deployed, it is listed under Infrastructure > CaaS Infrastructure >
Cluster Instances. To view more information about the Kubernetes cluster, click it.


Cluster Details
You can view the details of cluster and the health of various components associated to the
cluster.

n The Details section provides the following information:

n Cluster Type - Management or a Workload cluster.

n Cluster URL - The URL of the cluster API server.

n Cluster Username - User name to access the cluster.

n vSphere Cluster Name - The name of the selected vSphere cluster.

n Management Cluster - The backing Management cluster name.

n Cluster Template - The backing cluster template name.

n The Components section provides details of the various components and their health:

n Component - Name of the component.

n Health - Health status of the component.

Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays it on the Telco Cloud Automation user interface.

n Healthy: The component is working as expected.

n Unhealthy: The component is not working as expected and has faults.

n Unknown: The component status is not available.

Note
n When the cluster is under upgrade or under creation, the status may show
Unknown.

n Management clusters created in older versions of Telco Cloud Automation
remain Unknown unless you upgrade these clusters to Telco Cloud Automation 2.0.

You can click a component to view the details of the pods associated with it.

Click a pod to view the following details:

Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays it on the Telco Cloud Automation user interface. Kubernetes maintains
the conditions and details.

n Details - Shows the Namespace, Node name, Creation timestamp, and IP associated with
the Pod.


n Conditions - Shows the status of Initialized, Ready, Containers Ready, and POD
Scheduled conditions.

n Containers - Shows the Name, State, and Started At time of the container.

Cluster Configuration
The Cluster Configuration tab displays information about the Kubernetes version of the cluster,
upgrade history, its CNI and CSI configurations, any tools such as Helm associated with the
cluster, syslog server details, and Harbor repository details. To edit any of the configuration
information, click Edit.

For the Management Cluster, you can view the name, version, and status of the nodeconfig-
operator and vmconfigoperator under tools on the Cluster Configuration tab. To view more
details of the tools, click the name of the operator.

n nodeconfig-operator

n Details - Shows the version and the health status of the operator.

n K8s Resources - Shows the details of the K8s (Kubernetes) resources.

n Nodeconfig-operator - Shows the Namespace, Created, Replica, and Ready Replicas


of the nodeconfig operator deployment. Click the name of the nodeconfig operator to
view the summary of Details, Conditions, and Pods associated.

n Details - Shows the Namespace, Observed Generation, Created, Replicas,


Updated Replicas, Ready Replicas, and Available Replicas.

n Conditions - Shows the health condition of the operator.

n Pods - Shows the Name, Created, Ready Container, and Phase of the Pod. To
view more details of a pod, click the name of the pod.

n Details - Shows the Namespace, Node name, Creation timestamp, and IP


associated with the Pod.

n Conditions - Shows the status of Initialized, Ready, Containers Ready, and


POD Scheduled conditions.

n Containers - Shows the Name, State, and Started At time of the container.

n Nodeconfig-daemon - Shows the Namespace, Created, Replica, and Ready Replicas


of the nodeconfig DaemonSet. Click the name of the nodeconfig daemon to view the
summary of details, conditions, and associated pods.

n vmconfigoperator

n Details - Shows the version and health status of the operator.


n K8s Resources - Shows the Namespace, Created, Replica, and Ready Replicas of the
Deployment.

n vmconfig-operator

n Details - Shows the Namespace, Observed Generation, Created, Replicas,


Updated Replicas, Ready Replicas, and Available Replicas.

n Conditions - Shows the health condition of the operator.

n Pods - Shows the Name, Created, Ready Container, and Phase of the Pod. To
view more details of a pod, click the name of the pod.

n Details - Shows the Namespace, Node name, Creation timestamp, and IP


associated with the pod.

n Conditions - Shows the status of Initialized, Ready, Containers Ready, and


POD Scheduled conditions.

n Containers - Shows the Name, State, and Started At time of the container.

Control Plane Nodes


The Control Plane Nodes tab displays the details of the Control Plane node, the labels attached
to the node, and the network labels. It also displays the IP addresses and names of the VMs that
are deployed for the Master node, along with health parameters of the VMs such as Memory
pressure, Disk pressure, PID pressure, and Ready State. You can click a VM to view more details
about it. To increase or decrease the replica count and to add labels, click Edit.

n Details - You can view the hardware details like CPU, Storage, Memory, and Replicas, along
with the name of the node. You can also view the status of node whether the node is active
or inactive.

n Network - You can view the network details of the node.

n VMs - You can view various details of the VMs like Memory pressure, Disk pressure, PID
pressure, and Ready State. You can also click the VMs to view more details of that VM.

Click on the VM to view the following information:

n Node Details - Shows the hardware and operating system related details of the VM. This
includes:

n Architecture

n Kernel Version

n Kubelet Version

n OS Image

n Container Runtime Version

n Kube Proxy Version


n Operating System

n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure, PID
Pressure, and Ready State of the node pool.

n Addresses - Shows the Hostname, InternalIP and ExternalIP associated with the VM.

n Allocatable/Capacity - Shows the availability and allocation of the resources associated


with the VM.

Worker Nodes
The Worker Nodes tab displays the existing node pools of a Kubernetes cluster. To view more
details of the node pool such as its name, CPU size, memory size, storage size, number of
replicas, node customization details, and its status, click the name of the node pool. When you
click the name of the node pool, you can view the following details:

n Details - You can view the hardware details like CPU, Storage, Memory, and Replicas. You
can also view the status of node pool whether the node pool is active or inactive.

n Labels - You can view the various labels associated with the node pool.

n Network - You can view the network details of the node pool.

n CPU Manager Policy - You can view the type of CPU manager policy associated with the
node pool.

n VMs - You can view various details of the VMs like Memory pressure, Disk pressure, PID
pressure, and Ready State. You can also click the VMs to view the details of that VM.

Click the VM to view the following information:

n NodePool Details

n Node Details - Shows the hardware and the operating system related details of the
VM. This includes:

n Architecture

n Kernel Version

n Kubelet Version

n OS Image

n Container Runtime Version

n Kube Proxy Version

n Operating System

n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.

n Addresses - Shows the Hostname, InternalIP, and ExternalIP associated with the VM.

n Allocatable/Capacity - Shows the availability and allocation of the resources


associated with the VM.


n Node Customizations - Shows the Kernel and Network details after node customization.

n Tasks - Shows the list of the task performed.

You can also add a node pool to the cluster, edit the number of replicas on a node pool, and
delete a node pool from here.

Tasks
The Tasks tab displays the progress of the cluster-level tasks and their status.

n Management Cluster - Displays the progress of Management cluster tasks along with the
progress of all the Workload cluster tasks that the cluster manages. It also displays the node
pool tasks of all the Workload clusters.

n Workload Cluster - Displays the progress of the Workload cluster tasks along with the
progress of its node pool tasks.

You can apply filters to view the progress of specific operations and specific clusters.

Machine Health Check


Machine Health Check is a controller that provides node health monitoring and node auto-repair
for Tanzu Kubernetes clusters.

You can enable Machine Health Check and define the unhealthy conditions for the controller to
monitor when creating the node pool cluster template. You can also edit the Machine Health
Check conditions on an existing node pool under a Workload cluster. Machine Health Check
monitors the node pools for any unhealthy nodes and tries to remediate by recreating them.
For example, you can set the maximum duration a node can remain in the not-ready state to
15 minutes, after which the Machine Health Check controller triggers a remediation. For more details on
machine health check, see https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/
healthchecking.html.
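These settings correspond to a Cluster API MachineHealthCheck object similar to the minimal
sketch below. VMware Telco Cloud Automation creates and manages this object for you, so you
do not apply it by hand; the cluster name, node pool selector, and timeouts shown here are
hypothetical.

# Illustrative only: a health check that considers a node unhealthy if it stays
# not ready (or in an unknown state) for 15 minutes, or fails to join within 20m.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: wc1-np1-mhc                          # hypothetical name
spec:
  clusterName: wc1                           # hypothetical Workload cluster name
  nodeStartupTimeout: 20m                    # Node Start Up Timeout
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: wc1-np1
  unhealthyConditions:                       # Node Unhealthy Conditions
  - type: Ready
    status: "False"
    timeout: 15m
  - type: Ready
    status: Unknown
    timeout: 15m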

By default, the Machine Health Check controller is disabled.

For steps to enable and configure Machine Health Check when creating a Workload cluster
template, see Create a v1 Workload Cluster Template.

For steps to configure Machine Health Check on an existing node pool, see Edit a Kubernetes
Cluster Node Pool.

There may be instances when you want to power down a virtual machine to perform certain
maintenance activities. To avoid Machine Health Check remediating during the down time,
you can place the node pools in Maintenance Mode. For steps to place the Worker node in
Maintenance Mode, see Place Nodes in Maintenance Mode.

Place Nodes in Maintenance Mode


When you want to perform certain reconfigurations on the Worker nodes that involves powering
off and rebooting the virtual machine, you can place the Worker nodes in Maintenance Mode.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster that requires the Worker nodes to be placed in Maintenance
Mode.

4 Click the Worker Nodes tab.

5 Click the Options (⋮) symbol against the node and select Enter Maintenance Mode.

Results

The node is placed in Maintenance Mode and the Machine Health Check controller does not
remediate if there is a system down time.

Example

When you place a Worker node in Maintenance Mode and power it off, it does not power on until
you remove it from Maintenance Mode.

What to do next

To remove the Worker node from Maintenance Mode, click the Options (⋮) symbol against the
node and select Exit Maintenance Mode.

Managing Add-ons for v1 Workload Clusters


Configure and upgrade add-ons.

Upgrading Add-Ons
You can now upgrade the add-on operators to a later version from VMware Telco Cloud
Automation.

In a scenario where you upgrade VMware Telco Cloud Automation to a newer patch release
but the underlying VMware Tanzu Kubernetes Grid cluster remains the same, you can upgrade
only the add-ons. Upgrade Management cluster operators such as nodeconfig-operator and
vmconfig-operator, and Workload cluster operators such as CNIs and CSIs to their later versions.

Implication of Not Upgrading Add-Ons


Implications of not upgrading the add-ons of management and workload clusters.

Implications of Not Upgrading Add-ons in the Management Cluster

If the operators are not the latest versions, the corresponding Management cluster displays an
error for upgrading the add-ons. To upgrade Management cluster add-ons individually, use the
Upgrade Add-Ons option.

If you do not upgrade the add-ons of a Management cluster:

n You cannot perform operations on the Management cluster.


n You cannot perform operations on the Workload clusters that use Management clusters with
earlier add-ons.

Implications of Not Upgrading Add-ons in the Workload Cluster
If the operators are not the latest versions, the corresponding workload clusters display a
warning for upgrading the add-ons. To upgrade workload cluster add-ons individually, use the
Upgrade Add-Ons option.

If you do not upgrade the add-ons of a Workload cluster:

n You cannot add a CSI.

n You cannot add a CNI.

Upgrade Add-Ons
Upgrade the add-ons in a Management cluster or a Workload cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure.

The CaaS Infrastructure page is displayed.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.

4 Click Upgrade Add-Ons.

5 To confirm the action, click OK.

Results

The add-ons upgrade to a newer version.

Add-On Failures
When you provide a wrong input for an add-on when creating a cluster, the cluster creation
operation does not fail immediately.

For example, if you provide a wrong server IP address for a CSI add-on, the cluster creation does
not fail. After the cluster creation operation is completed, a warning message is displayed against
the Kubernetes cluster listing the add-ons that failed during the operation. You can then edit the
Kubernetes cluster and update the CSI add-on details.

Add-On Configuration Reference for v1 Workload Clusters


Use this reference when configuring add-ons on your v1 Workload cluster.


vSphere-CSI

n Zone - The tag category name defined in vCenter Server. Tags belonging to this
category are assigned to the host or vSphere cluster objects for marking the
storage topology.

n Region - The tag category name defined in vCenter Server. Tags belonging to this
category are assigned to the Data Center objects for marking the storage topology.

n Storage Class - Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class with this name is
created in the Kubernetes cluster.

n IsDefault - To set this storage class as default, select True.

n Reclaim Policy - Select whether to delete or retain the volume during a
reclaim event.

n Datastore URL - Enter the datastore URL.
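For reference, these options translate to a Kubernetes StorageClass similar to the minimal
sketch below; the storage class name and datastore URL are placeholders, and the provisioner
name and parameter key follow the upstream vSphere CSI driver documentation.

# Illustrative only: a default StorageClass backed by the vSphere CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc                                        # Storage Class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # IsDefault
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete                                     # Reclaim Policy (Delete or Retain)
parameters:
  datastoreurl: "ds:///vmfs/volumes/datastore-url/"       # Datastore URL (placeholder)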

NFS-Client

n Storage Class - Enter the storage class name. This storage class is used to
provision persistent volumes dynamically. A storage class with this name is
created in the Kubernetes cluster.

n Is Default - To set this storage class as default, select True.

n NFS Server Address - For an IPv4 cluster, enter the IPv4 address or FQDN of the
NFS Server. For an IPv6 cluster, enter the FQDN.

n Path - Enter the mount path exported by the NFS server. Ensure that the NFS
server is reachable from the cluster. The mount path must be accessible for read
and write.

Harbor
If you have already registered a Harbor, click Select Registered Harbor and select the
Harbor from the list. Otherwise, click Add New Harbor and provide the following details:

n URL - Enter the Harbor URL.

n Username - Enter the Harbor user name.

n Password - Enter the Harbor password.

Helm
This add-on has no configuration.


Multus

n Log Level - Enter the log level. Select from:

n Panic
n Debug
n Error
n Verbose

n Log File Path - The path where you want to store the log files.

System Settings

n Cluster Password - Enter the password for the cluster.

n Syslog - Add the syslog server IP address/FQDN for capturing the
infrastructure logs of all the nodes in the cluster.

Working with v2 Workload Clusters


The CaaS infrastructure API version is upgraded to version 2 (v2).

With this upgrade, users now have better control over cluster failures. You can now view the
status of all the components at a granular level and act on a failure while the cluster creation is
in progress. Also, during the cluster creation process, you can edit or delete a node pool or an
add-on at any point if there is an error.

For example, if an add-on IP address is incorrect, you can view the error immediately, edit the
IP address, and provide the correct one while the cluster creation is in progress. Even though
VMware Telco Cloud Automation is deprecating v1 clusters, you can still perform cluster life-cycle
operations on v1 clusters using the new user interface. However, to access the new features of
the v2 user interface such as granular updates, new add-ons, stretched clusters, and so on, you
must transform your clusters to V2 APIs. You can transform v1 Workload clusters to v2 Workload
clusters using the Transform Cluster option in the VMware Telco Cloud Automation UI. For more
information about transforming a v1 Workload cluster to v2 Workload cluster, see Transform v1
Workload Cluster to v2 Workload Cluster.

Note For this release, you cannot deploy a v2 Management cluster or perform any v2 life-cycle
management operations on Management clusters.

The CaaS Infrastructure Dashboard


The CaaS Infrastructure dashboard provides a new listing page where you can view the v1 and
v2 clusters and the status of their control planes, node pools, and add-ons. You can drill down on
each of the clusters and view further details.


Anti-affinity Rules
Anti-affinity rule for K8s worker nodes is enabled in VMware Telco Cloud Automation by default.
Anti-affinity is specific to workload clusters and ensures that the nodes deployed are spread
across different hosts.

Consider the following example where a label node.cluster.x-k8s.io/esxi-host is added to each worker node to indicate the host on which the anti-affinity rule is applied. Based on the node to which anti-affinity is applicable, you can control the anti-affinity rules on that node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: node.cluster.x-k8s.io/esxi-host
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx
      nodeSelector:
        "telco.vmware.com/nodepool": "npg-1"
      containers:
        - name: nginx-server
          image: harbor-repo.vmware.com/ecp_snc/nginx:1.23.1

In the preceding example, topologySpreadConstraints is used to control the anti-affinity rules based on the host node.cluster.x-k8s.io/esxi-host and nodeSelector is used to apply the anti-affinity rule within the node pool npg-1.

By default, workload clusters on vSphere and standalone management clusters follow anti-
affinity rules to deploy node pool workers and control plane nodes on different ESXi hosts.
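
As a quick check after deploying a workload such as the preceding example, you can confirm the node spread with standard kubectl commands. This is a minimal sketch; the label keys come from the example above and the output columns depend on your cluster.

# Show which node each nginx pod landed on
kubectl get pods -l app=nginx -o wide

# Show the ESXi host label attached to each worker node
kubectl get nodes -L node.cluster.x-k8s.io/esxi-host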


The following diagram illustrates the node placements when the anti-affinity rules are enabled.

[Figure: Control Plane Node Placement and Node Pool1 Worker Node Placement. The control plane nodes and the Node Pool1 worker nodes of the workload cluster are each placed on different physical hosts.]

Upgrade v2 Workload Kubernetes Cluster Version


You can upgrade the existing Workload Kubernetes Cluster version to the latest versions of
Kubernetes supported in the current version of the VMware Telco Cloud Automation.

Note When you upgrade a v2 workload cluster to the latest version, the certificate renewal of
the cluster is automatically enabled and the number of days defaults to 90.

The following table lists the Kubernetes upgrade compatibility for the Workload cluster when
upgrading from VMware Telco Cloud Automation.

VMware Telco Cloud Automation    Existing Kubernetes Version    v1.22.17    v1.23.16    v1.24.10

2.2                              1.21.14                        Yes         No          No

2.2                              1.22.13                        Yes         Yes         No

2.2                              1.23.10                        No          Yes         Yes

Implications of Not Upgrading v2 Workload Cluster


Not upgrading an unsupported Workload cluster can impact various operations.

Not upgrading the workload cluster can impact:

n Ability to edit the workload cluster.

n Ability to create, upgrade, and modify the node pools.


n Ability to upgrade and instantiate the CNF.

n For information about backward compatibility, see CaaS Upgrade Backward Compatibility.

Deploy a v2 Workload Cluster


Deploy a v2 Workload cluster using the VMware Telco Cloud Automation user interface.

Prerequisites

n You require a role with Infrastructure Lifecycle Management privileges.

n You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.

n You must have onboarded a vSphere VIM.

n You must have created a Management cluster or uploaded a Workload cluster template.

n A network must be present with a DHCP range and a static IP of the same subnet.

n When you enable multi-zone, ensure that:

n For region: vSphere data center has tags attached for the selected category.

n For zone: the vSphere cluster or the hosts under the vSphere cluster have tags attached for the selected category. Ensure that the vSphere cluster and the hosts under the vSphere cluster do not share the same tags.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure and click Deploy Cluster.

3 From the drop-down menu, select Workload Cluster.

4 In the Workload Cluster Deployment wizard, enter information for each of the sub-categories:

5 1. Destination Info

n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.

n Destination Cloud - Select a cloud on which you want to deploy the Kubernetes cluster.

n Datacenter - Select a data center that is associated with the cloud.

Advanced Options - Provide the secondary cloud information here. These options are
applicable when creating stretch clusters.

n (Optional) Secondary Cloud - Select the secondary cloud. It is required for stretched
cluster creation.

n (Optional) Secondary Data Center - Select the secondary data center.

n (Optional) NF Orchestration VIM - Provide the details of the VIM. VMware Telco Cloud
Automation uses this VIM and associated Control Planes for NF life cycle management.


6 Click Next.

7 2. Cluster Info

n Name - Enter a name for the Workload cluster. The cluster name must be compliant with
DNS hostname requirements as outlined in RFC-952 and amended in RFC-1123.

n TCA BOM Release - The TCA BOM Release file contains information about the Kubernetes
version and add-on versions. You can select multiple BOM release files.

Note After you select the BOM release file, the Security Options section is made
available.

n CNI - Select a Container Network Interface (CNI) such as Antrea or Calico.

n Proxy Repository Access - Available only when the selected management cluster uses a
proxy repository. Select the proxy repository from the drop-down list.

n Airgap Repository Access - Available only when the selected management cluster uses an airgap repository. Select the airgap repository from the drop-down list.

n IP Version - The IP version specified in the Management cluster is displayed here.

n Cluster End Point - Enter the IP of the API server loadbalancer.

n Cluster (pods) CIDR - Enter the IP for clusters. VMware Telco Cloud Automation uses the
CIDR pool to assign IP addresses to pods in the cluster.

n Service CIDR - Enter the IP for clusters. VMware Telco Cloud Automation uses the CIDR
pool to assign IP addresses to the services in the cluster.

n Enable Autoscaler - Click the toggle button to activate the autoscaler feature.

The autoscaler feature automatically controls the replica count on the node pool by increasing or decreasing the replica counts based on the workload. If you activate this feature for a particular cluster, you cannot deactivate it after the deployment. When you activate the autoscaler feature, the following fields are displayed:

Note The values in these fields are automatically populated from the cluster. However,
you can edit the values.

n Min Size - Sets the minimum number of worker nodes that autoscaler can scale the node pool down to.

n Max Size - Sets the maximum number of worker nodes that autoscaler can scale the node pool up to.

n Max Node - Sets a maximum limit on the total number of worker and control plane nodes that autoscaler can scale the cluster up to. The default value is 0.

n Max Node Provision Time - Sets the maximum time that autoscaler should wait for
the nodes to be provisioned. The default value is 15 minutes.


n Delay After Add - Sets the time limit for the autoscaler to start the scale-down
operation after a scale-up operation. For example, if you specify the time as 10
minutes, autoscaler resumes the scale-down scan after 10 minutes of adding a node.

n Delay After Failure - Sets the time limit for the autoscaler to restart the scale-down
operation after a scale-down operation fails. For example, if you specify the time as 3
minutes and there is a scale-down failure, the next scale-down operation starts after 3
minutes.

n Delay After Delete - Sets the time limit for the autoscaler to start the scale-down
operation after deleting a node. For example, if you specify the time as 10 minutes,
autoscaler resumes the scale-down scan after 10 minutes of deleting a node.

n Unneeded Time - Sets the time limit for the autoscaler to scale-down an unused node.
For example, if you specify the time as 10 minutes, any unused node is scaled down
only after 10 minutes.

8 Click Next.

9 Security Options

n Click the Enable toggle button to apply the customized audit configuration. Otherwise,
the default audit configuration is applied to the workload cluster.

n Click the POD Security Default Policy toggle button to apply the POD security policies to
the workload cluster.

n POD Security Standard Audit: Policy violation adds an audit annotation to the event
recorded in the audit log, but does not reject the POD.

n POD Security Standard Warn: Policy violation displays an error message on the UI,
but does not reject the POD.

n POD Security Standard Enforce: Policy violation rejects the POD.

Select one of the following options from the preceding drop-down lists:

n Restricted: A fully restrictive policy that follows the current POD security hardening best practices for providing permissions.

n Baseline: A minimal restrictive policy that prevents known privilege escalations. Allows the default Pod configurations.

n Privileged: An unrestrictive policy providing the widest possible permissions. Allows known privilege escalations.
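
These levels follow the upstream Kubernetes Pod Security Standards, which the Kubernetes Pod Security Admission controller applies through namespace labels. For reference only, a namespace labeled directly with those standards might look like the following sketch; the namespace name and the chosen levels are example assumptions, and VMware Telco Cloud Automation applies the settings you select in this wizard for you.

apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  labels:
    # Example levels only; pick the levels that match your selections above
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/audit: baseline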

10 Control Plane Info

n To configure Control Plane node placement, click the Settings icon in the Control Plane
Node Placement table.

n Name - Enter the name of the Control Plane node.

n Destination Cloud - The destination cloud is selected by default. To make a different


selection, use the drop-down menu.


VM Placement

n Datacenter - Select a data center for the Control Plane node.

n Resource Pool - Select the default resource pool on which the Control Plane node is
deployed.

n VM Folder - Select the virtual machine folder on which the Control Plane node is
placed.

n Datastore - Select the default datastore for the Control Plane node.

n VM Template - Select a VM template.

VM Size

n Number of Replicas - Number of controller node VMs to be created. The ideal


number of replicas for production or staging deployment is 3.

n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.

n Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

n Memory - Enter the memory in GB.

n Disk Size - Enter the disk size in GB.

Network

n Management Network - Select the Management network.

n MTU - Enter the maximum transmission unit in bytes.

n DNS - Provide comma-separated primary and secondary DNS servers.

Labels

n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.

Advanced Options

n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.

n Certificate Expiry Days - Specify the number of days for automatic certificate renewal
by TKG before its expiry. By default, the certificate expires after 365 days. If you
specify a value in this field, the certificate is automatically renewed before the set
number of days. For example, if you specify the number of days as 50, the certificate
is renewed 50 days before its expiry, which is after 315 days.


The default value is 90 days. The minimum number of days you can specify is 7 and
the maximum is 180.

Note You cannot edit the number of days after you deploy the cluster.


n Kubeadmin Config Template (YAML) - Enable or deactivate the Kubeadmin Config


Template YAML.

n Click Apply.

11 Add-Ons

To deploy an add-on such as NFS Client or Harbor, click Deploy Add-on.

a From the Select Add-On wizard, select the add-on and click Next.

b For add-on configuration information, see Add-On Configuration Reference for v1


Workload Clusters.

12 Click Next.

13 Node Pools

n A node pool is a set of nodes that have similar properties. Pooling is useful when
you want to group the VMs based on the number of CPUs, storage capacity, memory
capacity, and so on. You can add one node pool to a Management cluster and multiple
node pools to a Workload cluster, with different groups of VMs. To add a Worker node
pool, click Add Worker Node Pool.

n Name - Enter the name of the node pool.

n Destination Cloud - The destination cloud is selected by default. To make a different


selection, use the drop-down menu.

VM Placement

n Datacenter - Select a data center for the node pool.

n Resource Pool - Select the default resource pool on which the node pool is deployed.

n VM Folder - Select the virtual machine folder on which the node pool is placed.

n Datastore - Select the default datastore for the node pool.

n VM Template - Select a VM template.

n Enable Autoscaler - This field is available only if autoscaler is enabled for the
associated cluster. At the node level, you can activate or deactivate autoscaler based
on your requirement.

The following field values are automatically populated from the cluster.

n Min Size (Optional) - Sets a minimum limit to the number of worker nodes that
autoscaler should scale down. Edit the value, as required.


n Max Size (Optional) - Sets a maximum limit to the number of worker nodes that
autoscaler should scale up. Edit the value, as required.

Note
n Using autoscaler on a cluster does not automatically change its node group size. Therefore, changing the maximum or minimum size does not scale up or scale down the cluster size. When you are editing the autoscaler-configured maximum size of the node pool, ensure that the maximum size limit of the node pool is less than or equal to the current replica count.

n When a scale-down is in progress, it is not recommended to edit the maximum size of the cluster.

n You can view the scale-up and scale-down events under the Events tab of the
Telco Cloud Automation portal.

VM Size

n Number of Replicas - Number of node pool VMs to be created. The ideal number of
replicas for production or staging deployment is 3.

Note The Number of Replicas field is unavailable if autoscaler is enabled for the
node.

n Number of vCPUs - To ensure that the physical CPU core is used by the same node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-enabled, and if the network function requires NUMA alignment and CPU reservation.

n Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

n Memory - Enter the memory in GB.

n Disk Size - Enter the disk size in GB.

Network

n Management Network - Select the Management network.

n MTU - Enter the maximum transmission unit in bytes.

n DNS - Provide comma-separated primary and secondary DNS servers.

n ADD NETWORK DEVICE - Click this button to add a dedicated NFS interface to the
node pool, select the interface, and then enter the following:

n Interface Name - Enter the interface name as tkg-nfs to reach the NFS server.

Labels

n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.


Advanced Options

n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.

n To enable Machine Health Check, select Configure Machine Health Check

n Kubeadmin Config Template (YAML) - Enable or deactivate the Kubeadmin Config


Template YAML.

n Click Apply.

14 6. Ready to Deploy - Click Deploy.

Results

The cluster details page displays the status of the overall deployment and the deployment status
of each component.

Create a v2 Workload Cluster Template


Create a Workload cluster template and use it for deploying your workload clusters.

Prerequisites

You require a role with Infrastructure Design privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > Caas Infrastructure > Cluster Templates.

3 Click Add and select Workload Cluster Template.

4 In the Create Workload Cluster Template wizard, enter information for each of the sub-
categories:

5 Template Info

n Name - Enter the name of the Workload cluster template.

6 1. Destination Info

n Management Cluster - Select the Management cluster from the drop-down menu. You
can also select a Management cluster deployed in a different vCenter.

n Destination Cloud - Select a cloud on which you want to deploy the Kubernetes cluster.

n Datacenter - Select a data center that is associated with the cloud.

Advanced Options - Provide the secondary cloud information here. These options are
applicable when creating stretch clusters.

n (Optional) Secondary Cloud - Select the secondary cloud. It is required for stretched
cluster creation.


n (Optional) Secondary Data Center - Select the secondary data center.

n (Optional) NF Orchestration VIM - Provide the details of the VIM. VMware Telco Cloud
Automation uses this VIM and associated Control Planes for NF life cycle management.

7 Click Next.

8 2. Cluster Info

n TCA BOM Release - The TCA BOM Release file contains information about the Kubernetes version and add-on versions. You can select multiple BOM release files.

n CNI - Select a Container Network Interface (CNI) such as Antrea or Calico.

n Proxy Repository Access - Available only when the selected management cluster uses a
proxy repository. Select the proxy repository from the drop-down list.

n Airgap Repository Access - Available only when the selected management cluster uses an airgap repository. Select the airgap repository from the drop-down list.

n IP Version - The IP version specified in the Management cluster is displayed here.

n Cluster End Point - Enter the IP of the API server loadbalancer.

n Cluster (pods) CIDR - Enter the IP for clusters. VMware Telco Cloud Automation uses the
CIDR pool to assign IP addresses to pods in the cluster.

n Service CIDR - Enter the IP for clusters. VMware Telco Cloud Automation uses the CIDR
pool to assign IP addresses to the services in the cluster.

n Enable Autoscaler - Click the toggle button to activate the autoscaler feature.

The autoscaler feature automatically controls the replica count on the node pool by increasing or decreasing the replica counts based on the workload. If you activate this feature for a particular cluster, you cannot deactivate it after the deployment. When you activate the autoscaler feature, the following fields are displayed:

Note The values in these fields are automatically populated from the cluster. However,
you can edit the values.

n Min Size - Sets the minimum number of worker nodes that autoscaler can scale the node pool down to.

n Max Size - Sets the maximum number of worker nodes that autoscaler can scale the node pool up to.

n Max Node - Sets a maximum limit on the total number of worker and control plane nodes that autoscaler can scale the cluster up to. The default value is 0.

n Max Node Provision Time - Sets the maximum time that autoscaler should wait for
the nodes to be provisioned. The default value is 15 minutes.

n Delay After Add - Sets the time limit for the autoscaler to start the scale-down
operation after a scale-up operation. For example, if you specify the time as 10
minutes, autoscaler resumes the scale-down scan after 10 minutes of adding a node.


n Delay After Failure - Sets the time limit for the autoscaler to restart the scale-down
operation after a scale-down operation fails. For example, if you specify the time as 3
minutes and there is a scale-down failure, the next scale-down operation starts after 3
minutes.

n Delay After Delete - Sets the time limit for the autoscaler to start the scale-down
operation after deleting a node. For example, if you specify the time as 10 minutes,
autoscaler resumes the scale-down scan after 10 minutes of deleting a node.

n Unneeded Time - Sets the time limit for the autoscaler to scale-down an unused node.
For example, if you specify the time as 10 minutes, any unused node is scaled down
only after 10 minutes.

9 Click Next.

10 Control Plane Info

n To configure Control Plane node placement, click the Settings icon in the Control Plane
Node Placement table.

n Name - Enter the name of the Control Plane node.

n Destination Cloud - The destination cloud is selected by default. To make a different


selection, use the drop-down menu.

VM Placement

n Datacenter - Select a data center for the Control Plane node.

n Resource Pool - Select the default resource pool on which the Control Plane node is
deployed.

n VM Folder - Select the virtual machine folder on which the Control Plane node is
placed.

n Datastore - Select the default datastore for the Control Plane node.

n VM Template - Select a VM template.

VM Size

n Number of Replicas - Number of controller node VMs to be created. The ideal


number of replicas for production or staging deployment is 3.

n Number of vCPUs - To ensure that the physical CPU core is used by the same
node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-
enabled, and if the network function requires NUMA alignment and CPU reservation.

n Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

n Memory - Enter the memory in GB.

n Disk Size - Enter the disk size in GB.


Network

n Management Network - Select the Management network.

n MTU - Enter the maximum transmission unit in bytes.

n DNS - Provide comma-separated primary and secondary DNS servers.

Labels

n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.

Advanced Options

n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.

n Certificate Expiry Days - Specify the number of days for automatic certificate renewal
by TKG before its expiry. By default, the certificate expires after 365 days. If you
specify a value in this field, the certificate is automatically renewed before the set
number of days. For example, if you specify the number of days as 50, the certificate
is renewed 50 days before its expiry, which is after 315 days.

The default value is 90 days. The minimum number of days you can specify is 7 and
the maximum is 180.

Note You cannot edit the number of days after you deploy the cluster.


n Kubeadmin Config Template (YAML) - Enable or deactivate the Kubeadmin Config


Template YAML.

n Click Apply.

11 Add-Ons

To deploy an add-on such as NFS Client or Harbor, click Deploy Add-on.

a From the Select Add-On wizard, select the add-on and click Next.

b For add-on configuration information, see Add-On Configuration Reference for v1


Workload Clusters.

12 Click Next.


13 Node Pools

n A node pool is a set of nodes that have similar properties. Pooling is useful when
you want to group the VMs based on the number of CPUs, storage capacity, memory
capacity, and so on. You can add one node pool to a Management cluster and multiple
node pools to a Workload cluster, with different groups of VMs. To add a Worker node
pool, click Add Worker Node Pool.

n Name - Enter the name of the node pool.

n Destination Cloud - The destination cloud is selected by default. To make a different


selection, use the drop-down menu.

VM Placement

n Datacenter - Select a data center for the node pool.

n Resource Pool - Select the default resource pool on which the node pool is deployed.

n VM Folder - Select the virtual machine folder on which the node pool is placed.

n Datastore - Select the default datastore for the node pool.

n VM Template - Select a VM template.

n Enable Autoscaler - This field is available only if autoscaler is enabled for the
associated cluster. At the node level, you can activate or deactivate autoscaler based
on your requirement.

The following field values are automatically populated from the cluster.

n Min Size (Optional) - Sets a minimum limit to the number of worker nodes that
autoscaler should scale down. Edit the value, as required.

n Max Size (Optional) - Sets a maximum limit to the number of worker nodes that
autoscaler should scale up. Edit the value, as required.

Note
n Using autoscaler on a cluster does not automatically change its node group size. Therefore, changing the maximum or minimum size does not scale up or scale down the cluster size. When you are editing the autoscaler-configured maximum size of the node pool, ensure that the maximum size limit of the node pool is less than or equal to the current replica count.

n When a scale-down is in progress, it is not recommended to edit the maximum size of the cluster.

n You can view the scale-up and scale-down events under the Events tab of the
Telco Cloud Automation portal.


VM Size

n Number of Replicas - Number of node pool VMs to be created. The ideal number of
replicas for production or staging deployment is 3.

Note The Number of Replicas field is unavailable if autoscaler is enabled for the
node.

n Number of vCPUs - To ensure that the physical CPU core is used by the same node, provide an even count of vCPUs if the underlying ESXi host is hyper threading-enabled, and if the network function requires NUMA alignment and CPU reservation.

n Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

n Memory - Enter the memory in GB.

n Disk Size - Enter the disk size in GB.

Network

n Management Network - Select the Management network.

n MTU - Enter the maximum transmission unit in bytes.

n DNS - Provide comma-separated primary and secondary DNS servers.

n ADD NETWORK DEVICE - Click this button to add a dedicated NFS interface to the
node pool, select the interface, and then enter the following:

n Interface Name - Enter the interface name as tkg-nfs to reach the NFS server.

Labels

n To add the appropriate labels for this profile, click Add Label. These labels are added
to the Kubernetes node.

Advanced Options

n Clone Mode - Specify the type of clone operation. Linked Clone is supported on
templates that have at least one snapshot. Otherwise, the clone mode defaults to Full
Clone.

n To enable Machine Health Check, select Configure Machine Health Check

n Kubeadmin Config Template (YAML) - Enable or deactivate the Kubeadmin Config


Template YAML.

n Click Apply.

14 6. Ready to Create - Click CREATE CLUSTER TEMPLATE.

Managing v2 Workload Clusters after Deployment


After you deploy a Kubernetes cluster, you can edit its Cluster configuration, edit its Control
Plane, Node Pool, Add-Ons configuration and upgrade the Kubernetes version.


Viewing Cluster Details


After a Kubernetes cluster is deployed, it is listed under Infrastructure > CaaS Infrastructure > Cluster Instances. To view more information about the Kubernetes cluster, click the cluster name.

Overview
You can view the details of the cluster and the health of various components associated with the cluster.

n The first section provides the following information:

n Cluster Type - Management or a Workload cluster.

n Management Cluster - The backing Management cluster name.

n Management Cluster URL - The URL of the management cluster API server.

n Endpoint IP - The endpoint IP of the cluster.

n Cloud Name - The name of the selected vSphere cluster.

n Created - The creation time of the cluster.

n IP Version - The endpoint IP version of the cluster.

n Revision - The revision time of the cluster.

n The Configuration and Control Plane section provides details of various components and
their health:

n The status of the component is shown before the name.

Health - Health status of the component.

Note Telco Cloud Automation obtains the health status directly from Kubernetes and
displays that health status on Telco Cloud Automation user interface.

n Healthy: The component is working fine.

n Unhealthy: The component is not working fine and has some faults.

n Unknown: The component is not available.

Note
n When the cluster is under upgrade or under creation, the status may show
Unknown.

n Management clusters created in older versions of Telco Cloud Automation remain Unknown, unless you upgrade these clusters to Telco Cloud Automation 2.0.

n Click on the component name to view the details.

n The status of the component.


n The pod information, which contains the pod Name, Created, Ready Containers, and Phase.

n Click on the pod name to view the details.

Note Telco Cloud Automation obtains the health status directly from Kubernetes
and displays that health status on Telco Cloud Automation user interface. Kubernetes
maintains the conditions and details.

n Details - Shows the Namespace, Node name, Creation Timestamp, and IP associated
with the Pod.

n Conditions - Shows the status of Initialized, Ready, Containers Ready, and POD
Scheduled conditions.

n Containers - Shows the Name, State, and Started At time of the container.

n The Node Pools section provides details of Node Pools and their health:

n The number of Node Pools in this Workload cluster.

n The status of Node Pools.

n The number of Node Pools which are in Provisioned status.

n The Add-Ons section provides details of Add-Ons and their health:

n The number of Add-Ons in this Workload cluster.

n The status of Add-Ons.

n Click on the Add-On name to view the K8s resources details.

n It shows the K8s resource Name, Kind, Namespace, Created, Desired, Ready,
Replica, Ready Replicas, etc.

Configuration and Control Plane


The Configuration and Control Plane tab displays the details about the Cluster and the Control
Plane nodes.

n The Status provides the cluster status.

n The Conditions section provides condition details such as Type, Status, Reason, Severity, Message, and Last Transition Time. You can click Show More to view the CRs of TcaKubernetesCluster and TcaKubeControlPlane.

n The Cluster Global configuration section provides the global configuration of cluster:

n Details - Shows the cluster details like CNI Type, Endpoint IP, Pods, Services, TCA Bom
Release Reference, NF Orchestration VIM.

n Cloud Providers - Shows the cloud providers details like VIM name, Datacenter, Type.


n The Control Plane Configuration section provides the details of control plane nodes:

n Details - Shows the Control Plane hardware details like: Name, CPU, Memory, Storage, Replicas, Folder, Resource Pool, Cloud Name, Datacenter, Datastore, TCA Bom Release Reference, Clone Mode, Template.

n Network - Shows the network details like Network Name, MTU, DHCP4.

n Labels - Shows the labels of control plane nodes.

n Nodes - Shows the various details of the VMs like Memory pressure, Disk pressure, PID
pressure, Ready State and K8S version.

Click on the VM name to view the following information:

n Node Details - Shows the hardware and operating system related details of the VM.
Which contains Architecture, Kernel Version, Kubelet Version, OS Image, Container
Runtime Version, Kube Proxy Version, Operating System.

n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.

n Addresses - Shows the Hostname, InternalIP and ExternalIP associated with the VM.

n Labels - Shows the various labels associated with the VM.

n Allocatable/Capacity - Shows the availability and allocation of the resources


associated with the VM.

Node Pools
The Node Pools tab displays the existing node pools of a Kubernetes cluster. To view more details of the node pool such as its name, CPU size, memory size, storage size, number of replicas, node customization details, and its status, click the name of the node pool. You can then view the following details:

n The Status provides the node pool status.

n The Conditions section provides condition details such as Type, Status, Reason, Severity, Message, and Last Transition Time. You can click Show More to view the CRs of TcaNodePool, NodePolicy, and NodePolicyMachineStatus.

n The Details section provides the hardware details of the node pool. This contains Name,
Replicas, CPU, Memory, Storage, Clone Mode, Cloud, Datacenter, Resource Pool, VM
Folder, Datastore, VM Template, Manage Network, CPU Manager Policy, Reservation for
Kubernetes Processes, Reservation for System Processes, TCA Bom Release Reference,
Domain Name Servers.

n The Labels section provides the various labels associated with the node pool.

n The Machine Health Check section provides the details of the Machine Health Check.

n The Nodes section provides various details of the VMs like VM Name, IP, Memory pressure,
Disk pressure, PID pressure, Ready State and K8S version.


Click the VM to view the following information:

n In the Node Pool Details tab, it shows Node Details, Conditions, Addresses, Labels and
Allocatable/Capacity.

n Node Details - Shows the hardware and the operating system related details of
the VM. This contains Architecture, Kernel Version, Kubelet Version, OS Image,
Container Runtime Version, Kube Proxy Version, Operating System.

n Conditions - Shows various health conditions like Memory Pressure, Disk Pressure,
PID Pressure, and Ready State of the node pool.

n Addresses - Shows the Hostname, InternalIP, and ExternalIP associated with the VM.

n Labels - Shows the various labels associated with the VM.

n Allocatable/Capacity - Shows the availability and allocation of the resources


associated with the VM.

n The Node Customisations tab shows the details of node customisations, which contain Status, NUMA Alignment, Kernel, Network, Tuned Profile, File Injection, and so on.

n The Events tab shows the list of the events performed, which contains Message, Type, Owner, Resource Name, Resource Type, Reason, Count, First Occurrence, and Last Occurrence.

Note You can apply filters to view the details of specific Node Pool.

Add-Ons
The Add-Ons tab displays the existing Add-Ons of a Kubernetes cluster.

n All - This table lists all Add-Ons with details such as Name, Type, Status, Revision, and Created.

n Add-On Categories - Add-Ons are also divided into several categories, and you can see the corresponding Add-On list under each category table. Categories include:

n Cni - Contains antrea and calico.

n Csi - Contains vsphere-csi and nfs-client.

n Monitoring - Contains prometheus and fluent-bit.

n Networking - Contains load-balancer-and-ingress-service, multus, and whereabouts.

n System - Contains harbor and systemSettings.

n Tca-Core-Addon - Contains nodeconfig-operator.


n Tool - Contains helm and velero.

n Single Add-On details - Click the add-on name to view the K8s resources details. It shows the
K8s resource Name, Kind, Namespace, Created, Desired, Ready, Replica, Ready Replicas,
etc.

Note You can apply filters to view the details of specific Add-On.

Events
The Events tab displays the progress of the cluster-level events and their status.

n The Events table shows the list of the events performed, which includes Message, Type, Owner, Resource Name, Resource Type, Reason, Count, First Occurrence, and Last Occurrence.

Note You can apply filters to view the details of specific Event.

Edit Cluster Configuration and Control Plane


You can scale up or scale down the number of Master node replicas and add or remove labels.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Edit Cluster Configuration.

5 In the Destination Info step of the Configuration tab, you can select a new cloud under the Advanced Options.

6 In the Control Plane Info step, click the configuration icon to the right of the control plane row.

7 Click the configure icon. The Control Plane Node Info dialog appears.

8 Edit the Number of replicas to scale down or scale up the Control Plane nodes.

9 Click Add Label or Remove to edit the node labels.

10 Click Apply to apply configurations.

11 After the Control Plane dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully edited the Kubernetes Cluster and its Control Plane configuration.

Copy Specification and Deploy new Cluster


You can copy the configuration of a specific Kubernetes cluster and use it for deploying a new
Kubernetes cluster.


Prerequisites

This option is helpful when you are deploying multiple Kubernetes clusters with similar
configurations.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to edit.

4 Click Copy Specification and Deploy new Cluster.

5 In the Configuration tab, all input values default to those of the current workload cluster. Edit the Destination Info, Cluster Info, Control Plane Info, Add-Ons, and Node Pools configuration as needed, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully deployed a new v2 Workload cluster.

Retry a Failed Cluster Creation Operation


If the cluster creation operation fails due to an unknown error such as network unavailability, you can retry the operation.

Note This operation is available from VMware Telco Cloud Automation version 2.1 onwards.

Note The Retry option retries the cluster creation operation from the point of failure. If you want
to change the configuration of the cluster and retry the creation, you must delete the old cluster
and recreate it.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to retry.

4 Click Retry to retry a failed Kubernetes Cluster creation operation.

Results

The Cluster creation operation resumes.

Stop Cluster Creation Operation


You can stop an ongoing cluster creation operation.

Prerequisites

Note This operation is supported only on Workload Clusters.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster creation operation that you want
to stop.

4 Click Abort.

Results

VMware Telco Cloud Automation rolls back the progress of Kubernetes Cluster creation
operation and deletes all the deployed nodes. After VMware Telco Cloud Automation stops the
cluster creation operation, you cannot deploy the same cluster again.

Delete a Cluster
You can delete the Kubernetes Workload cluster.

Prerequisites

Ensure that the workload cluster is not running any applications.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster.

4 Click Delete and confirm the operation.

Results

You have successfully deleted the Kubernetes cluster instance.

Add a Node Pool


You can add a node pool to your Kubernetes Workload cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Node Pools tab, click Add Node Pool.


5 The Node Pool Details dialog appears. Edit the node pool configuration and click Add.

In Add Node Pool, enter the following information:

n Name - Enter the name of the node pool. The node pool name cannot be greater than 36
characters.

n Destination Cloud - Select the cloud for the node pool.

n Datacenter - Select the datacenter for the node pool.

n Resource Pool - Select the resource pool for the node pool.

n VM Folder - Select the folder for the node pool machines.

n Datastore - To use a different datastore, select the datastore from here.

n VM Template - Select the template for the node pool machines.

n Replica - Select the number of node pool virtual machines.

n CPU - Select the number of virtual CPUs in the node pool.

n Cores per Socket (Optional) - Select the number of cores per socket in the node pool.

n Memory - Select the amount of memory for the node pool.

n Disk Size - Select the disk size. Minimum disk size required is 50 GB.

n Labels - Add key-value pair labels to your nodes, which are used as node selectors when instantiating a network function (see the example after this procedure).

n Networks - You can add the network details.

n Network - Select the network that you want to associate with the label.

n (Optional) MTU - Provide the MTU value for the network. The minimum MTU value is
1500. The maximum MTU value depends on the configuration of the network switch.

n (Optional) DNS - Enter a valid DNS IP address as Domain Name Servers. These DNS
servers are configured in the guest operating system of each node in the cluster. You
can override this option on the Master node and each node pool of the Worker node.
Multiple DNS servers can be separated by commas.

n Labels - Add key-value pair labels to your nodes, which are used as node selectors when instantiating a network function.

n Expand Advanced Options to configure Maintenance Mode, Clone Mode, Machine Health Check, and the Kubeadmin Config Template.

n Enable or disable Maintenance Mode.

n Select the Clone Mode. Enabling clone mode uses the vSphere linked clone feature to create machines for Kubernetes nodes.


n To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.

n (Optional) Enter the Node Start Up Timeout time duration for Machine Health
Check to wait for a node to join the cluster. If a node does not join during the
specified time, Machine Health Check considers it unhealthy.

n Select the node unhealthy conditions, which support Ready/MemoryPressure/DiskPressure/PIDPressure/NetworkUnavailable. Select the Status, such as False/Unknown/True. Then select Timeout. If any of these conditions are met, Machine Health Check considers these nodes as unhealthy and starts the remediation process.

n Kubeadmin Config Template - Set CPU reservations on the Worker nodes as Static or
Default. For information about controlling CPU Management Policies on the nodes,
see the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-
cluster/cpu-management-policies/.

Note For CPU-intensive workloads, use Static as the CPU Manager Policy.

6 Click Apply.

7 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully added the node pool of a Kubernetes cluster instance.
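
The labels you added to the node pool can then be referenced as node selectors when you instantiate a network function or deploy a workload. For illustration only, the following sketch assumes a node pool label telco.vmware.com/nodepool: np-example, similar to the label used in the anti-affinity example earlier in this chapter.

apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  # Placeholder label: schedule this pod only on nodes of the chosen node pool
  nodeSelector:
    telco.vmware.com/nodepool: np-example
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]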

Edit a Node Pool


You can scale up or scale down the number of Worker nodes in each node pool and add labels.
If a network function with infrastructure requirements is running in this node pool, the scaling up
operation automatically applies all the node customizations on the new nodes.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want
to edit.

5 Click Edit. The Node Pool Details dialog appears.

6 Edit the Number of replicas to scale down or scale up the node pool nodes.

7 Click Add Label or Remove to edit the node labels, which are used as node selectors when instantiating the network functions.

8 Expand Advanced Options, enable or disable Maintenance Mode.


9 To enable Machine Health Check, select Configure Machine Health Check. For more
information, see Machine Health Check.

a (Optional) Enter the Node Start Up Timeout time duration for Machine Health Check to
wait for a node to join the cluster. If a node does not join during the specified time,
Machine Health Check considers it unhealthy.

b Select the node unhealthy conditions, which support Ready/MemoryPressure/DiskPressure/PIDPressure/NetworkUnavailable. Select the Status, such as False/Unknown/True. Then select Timeout. If any of these conditions are met, Machine Health Check considers these nodes as unhealthy and starts the remediation process.

10 Click Apply.

11 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully edited the Node Pool configuration of a Kubernetes cluster instance.

Delete a Node Pool


You can delete a Node Pool from the Kubernetes Workload cluster.

Prerequisites

Ensure that the node pool is not running any applications.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want to delete.

5 Click Delete and confirm the operation.

Results

You have successfully deleted the Node Pool of a Kubernetes cluster instance.

Deploy a Similar Node Pool


You can copy the configuration of a specific Node Pool and use it for deploying a new Node
Pool.

This option is helpful when you are deploying multiple node pools with similar configurations.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.


2 Navigate to Infrastructure > Caas Infrastructure and select the Kubernetes cluster.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want to copy.

5 Click Deploy Similar Node Pool.

6 In the Node Pools step of the Configuration tab, a new node pool with the -copy suffix is displayed. In the last column of this node pool, click Configure.

7 All input values default to those of the current node pool. After confirming the node pool configuration, click Apply.

8 Click Next until you reach Ready to Deploy, and then click Deploy.

Results

VMware Telco Cloud Automation automatically generates a new Node Pool with the configuration of the Node Pool that you copied.

Deploy Add-Ons
You can deploy Add-Ons to your Kubernetes Workload cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Add-Ons tab, click Deploy Add-ons.

5 In the Add-Ons step of the Configuration tab, click Deploy Add-on.

6 Select an add-on, configure it, and then click Ok. For add-on configuration details, see Managing Add-ons for v2 Workload Clusters.

7 After the Configure Add-on dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully added these Add-Ons to a Kubernetes cluster instance.

Edit an Add-On
You can reconfigure an Add-On of your Kubernetes cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.


4 Select the Add-Ons tab, click the Options (⋮) symbol against the add-on that you want to edit.

5 Click Edit. The Add-Ons step of the Configuration tab appears.

6 In the Add-Ons step, click Edit against the add-on. The Add-on Configuration dialog appears.

7 Configure the add-on as described in Managing Add-ons for v2 Workload Clusters, and then click Ok.

8 After the Add-on Configuration dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

Results

You have successfully edited the Add-On configuration of a Kubernetes cluster instance.

Delete an Add-On
You can delete an Add-On of your Kubernetes cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Add-Ons tab, click the Options (⋮) symbol against the add-on that you want to delete.

5 Click Delete and confirm the operation.

Results

You have successfully deleted the Add-On of a Kubernetes cluster instance.

Upgrade the Control Plane


You can upgrade the existing Workload cluster to the latest versions of Kubernetes supported in VMware Telco Cloud Automation 2.2.

Follow these steps to upgrade the Workload cluster Control Plane.

Note Refer to Upgrade v2 Workload Kubernetes Cluster Version and select a supported Kubernetes version to upgrade the Control Plane first. Add-Ons are upgraded along with the Control Plane. After the Control Plane upgrade is complete, upgrade the node pools as described in Upgrade a Node Pool.

Prerequisites

Before you upgrade the Workload cluster, make sure that the Management cluster has been upgraded to VMware Telco Cloud Automation 2.2 and that the Workload cluster is in Provisioned status.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Options (⋮) symbol against the Kubernetes cluster that you want to upgrade.

4 Click Edit Cluster Configuration.

5 In the Cluster Info step of the Configuration tab, select the new TCA BOM Release that you want to upgrade to, and then click Next.

6 In the Control Plane Info step, click the configuration icon to the right of the control plane row.

7 Select the VM Template from the available templates that suit the TCA BOM Release you selected, and then click Apply.

8 After the Control Plane dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

9 You can monitor the upgrade progress below the cluster row. The upgrade events are updated in the Events tab.

10 If the upgrade fails, click the Retry option to continue upgrading the Control Plane. However, you cannot modify the configuration before you click Retry.

Results

You have successfully upgraded the Kubernetes Cluster instance and its Control Plane.

Upgrade a Node Pool


You can upgrade an existing Workload cluster node pool to the latest Kubernetes version supported in VMware Telco Cloud Automation 2.2.

Follow these steps to upgrade Node Pool.

Note Starting with VMware Telco Cloud Automation 2.2, the Control Plane upgrade is separated from the Node Pool upgrade. There are several options for upgrading node pools:
n Keep node pools at the current version

n Upgrade some specific node pools

n Upgrade all node pools

n Upgrade node pools in parallel or serially

n Keep a node pool one Kubernetes version lower than, or equal to, the Control Plane version

Prerequisites

Before you upgrade the Workload cluster node pool, make sure that the Workload cluster Control Plane has been upgraded and that the Workload cluster is in Provisioned status.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure > Cluster Instances.

3 Click the Kubernetes cluster name that you want to configure.

4 Select the Node Pools tab, click the Options (⋮) symbol against the node pool that you want
to upgrade.

5 Click Edit. The Node Pool Details dialog appears.

6 Select the new TCA BOM Release that you want to upgrade to.

7 Select the VM Template from the available templates that suit the TCA BOM Release you selected.

8 Click Apply.

9 After the Node Pool dialog closes, click Next until you reach Ready to Deploy, and then click Deploy.

10 You can monitor the upgrade progress below the node pool row. The upgrade events are updated in the Events tab.

11 If the upgrade fails, click the Retry option to continue upgrading the Node Pool. However, you cannot modify the configuration before you click Retry.

Results

You have successfully upgraded a Node Pool of a Kubernetes cluster instance.

Add-ons Reference for v2 Workload Clusters


Configure Add-Ons.

Add-Ons Configurations
Use the following reference while configuring Add-Ons on your v2 Workload cluster.

vsphere-csi

Option Description

Zone - Zone is the tag category name defined in vCenter Server. Tags belonging to this category are assigned to the host or vSphere cluster objects for marking the storage topology.

Region - Region is the tag category name defined in vCenter Server. Tags belonging to this category are assigned to the Data Center objects for marking the storage topology.

VC Username - Enter a user name for vSphere-CSI.

VC Password - Enter a password for vSphere-CSI.

Storage Class - Enter the storage class name. This storage class is used to provision persistent volumes dynamically. A storage class with this name is created in the Kubernetes cluster.

IsDefault - Select True if you want to set the storage policy as the default one. Else, select False.
Note Only one storage policy can be set to True.

Reclaim Policy - Select whether to delete or retain the persistent volume during a reclaim event.

Datastore URL - Enter the datastore URL.

Use Storage Policy - Select the required storage policy.

ADD NEW STORAGECLASS - Click this button to add one or more storage classes.
Note You can add multiple storage classes. However, you can set only one storage policy as default.
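
As a simple verification after the add-on is provisioned, you can list the storage classes in the workload cluster; the class whose IsDefault is set to True is marked with "(default)". This is a generic kubectl check, not a VMware Telco Cloud Automation command.

# List storage classes; the default class is marked with "(default)"
kubectl get storageclass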

nfs-client

Option Description

Storage Class - Enter the storage class name. This storage class is used to provision persistent volumes dynamically. A storage class with this name is created in the Kubernetes cluster.

Is Default - To set this storage class as default, select True.

NFS Server Address - For an IPv4 cluster, enter the IPv4 address or FQDN of the NFS Server. For an IPv6 cluster, enter the FQDN.

Path - Enter server IP address and mount path of the NFS client. Ensure that the NFS server is reachable from the cluster. The mount path must also be accessible to read and write.

harbor
If a Harbor has already been registered, click Select Registered Harbor and select the
appropriate Harbor from the list. Otherwise, click Add New Harbor and provide the following
details:

Option Description

URL - Enter the Harbor URL.

Username - Enter the Harbor user name.

Password - Enter the Harbor password.

helm
This add-on has no configuration.

multus

Caution Do NOT delete multus add-on once it is provisioned, as this might prevent creating or
deleting pods on the workload cluster. See multus-cni known issue #461.


Option Description

Log Level - Enter the log level. Select from:
n Panic
n Debug
n Error
n Verbose

Log File Path - Path where you want to store the log files.
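
Multus attaches secondary networks to pods through NetworkAttachmentDefinition resources. For illustration only, the following is a minimal sketch of such a definition; the network name, the master interface eth1, the macvlan plugin, and the whereabouts IPAM range are placeholder assumptions for your environment.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": { "type": "whereabouts", "range": "192.168.10.0/24" }
  }'

A pod can then reference this secondary network through the annotation k8s.v1.cni.cncf.io/networks: macvlan-net.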

systemsettings

Option Description

Cluster Password - Enter the password for the cluster.

Syslog - Add the syslog server IP address/FQDN for capturing the infrastructure logs of all the nodes in the cluster.

load-balancer-and-ingress-service (aka AKO)
The load-balancer-and-ingress-service add-on is also known as the AKO (Avi Kubernetes Operator) add-on.

Note
1 To install the load-balancer-and-ingress-service (AKO) add-on for a Workload cluster, you must
add AKOO (AVI Kubernetes Operator - Operator) on the Management cluster. For information
about adding AKOO, see Add AVI Kubernetes Operator - Operator.

2 A service engine group cannot be shared by more than one TCA cluster, even if the load-
balancer-and-ingress-service (AKO) add-on is deleted from the original cluster or the original
cluster is already deleted. To reuse a service engine group that was used by another cluster,
delete the service engine group from the Avi Controller UI and recreate it.

3 To customize additional load-balancer-and-ingress-service (AKO) configurable fields and
manage AKO objects (aviinfrasetting, gatewayclass, gateway) via the Custom Resources (CRs)
tab, see Advanced Configuration for Load-balancer-and-ingress-service Add-On.

Option Description

Cloud Name Enter the cloud name configured in the AVI Controller.

Default Service Engine Group Enter the service engine group name configured in the AVI
Controller.

Default VIP Network Enter the VIP network name in the AVI Controller.

Default VIP Network CIDR Enter the VIP network CIDR in the AVI Controller.

Ingress Configuration for AKO Deployment


Option Description

Service Type Enter the ingress method for the service. Choose from the
following options:
n Node Port
n Cluster IP
n Node Port Local - Available only for Antrea CNI.

Network Name Enter the cluster node network name. To add a network,
click Add Network.

CIDRs You can enter multiple comma-separated CIDR values or


use the <CR> tag to enter multiple CIDR values.

prometheus
Prometheus provides Kubernetes-native deployment and management of Prometheus and
related monitoring components.

Note
1 To customize additional Prometheus configurable fields via the Custom Resources (CRs) tab,
see Advanced Configuration for Prometheus Add-On.

2 Some parameters (for example, PVC parameters, service type, and port) are immutable after
the Prometheus add-on is provisioned. See Configurable parameters.

Option Description

Use Reference Configs Click the toggle button to use the reference configurations.

Storage Class Name The name of the Storage Class. Default Storage Class will
be used if not set.

Access Mode Choose from:


n Read Write Once
n Read Only Many
n Read Write Many

Storage Enter the size of the Persistent Volume Claim (PVC). The
default value is 150 GB.
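After the add-on is provisioned, you can confirm that the monitoring components and their persistent volume claims are up. The add-on runs in the tanzu-system-monitoring namespace, as shown later in the Prometheus service type section:

kubectl get pods -n tanzu-system-monitoring
kubectl get pvc -n tanzu-system-monitoring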

fluent-bit

Note
1 Do not set cpu-manager-policy to static for node pools, as this may cause the fluent-bit
daemonset pods to crash.

2 To customize additional fluent-bit configurable fields (inputs, outputs, filters, parsers) via the
Custom Resources (CRs) tab, see Advanced Configuration for Fluent-bit Add-On.

3 To update the provisioned fluent-bit configuration, manually restart all fluent-bit pods to
make the new configuration take effect, as shown in the example below.
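A minimal sketch of restarting the fluent-bit pods after a configuration update; first locate the namespace in which the fluent-bit daemonset runs, because the namespace below is a placeholder:

kubectl get daemonset -A | grep fluent-bit
kubectl rollout restart daemonset fluent-bit -n <fluent-bit-namespace>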


Option Description

Use Reference Configs Click the toggle button to use the reference configurations.

service Service configuration for fluent-bit. Default value is:

[Service]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020

Outputs You must enter the syslog server IP address.

whereabouts
This add-on has no configuration.

cert-manager
This add-on has no configuration.

Note In certain scenarios, the cainjector pod or webhook pod of the cert-manager add-on can be
in CrashLoopBackOff status while the cert-manager add-on status in the UI shows Unhealthy. In
such a case, restart the crashing pod with the command kubectl delete pod -n cert-manager
<crash-pod-name> to recover.
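To identify the crashing pod before deleting it, list the pods in the cert-manager namespace, for example:

kubectl get pods -n cert-manager
kubectl delete pod -n cert-manager <crash-pod-name>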

velero
Velero is used to back up and restore a workload cluster.

Note After changing the Backup Storage configuration (such as the Storage URL and Storage
Bucket Name), delete the existing ResticRepositories CR manually in order to continue using
Restic to back up Persistent Volume data.

kubectl delete ResticRepositories <resticrepository-name> -n velero
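To find the name of the CR to delete, you can first list the existing ResticRepositories in the velero namespace:

kubectl get resticrepositories -n velero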

Option Description

Credential

Access ID Enter an ID to access backup storage.

Access Key Enter password to access backup storage.

Backup Storage

Storage URL Enter URL of the S3-compatible object storage service.


Region Enter location of the bucket created in the S3-Compatible


object storage server.

Note For example, enter minio if you are using the MinIO
service.

Storage Bucket Name Enter the name of the storage bucket where the backups are
stored and restored from.

Note It is recommended to use a dedicated bucket for


each TKG workload cluster.

CA certificate Paste the CA certificate in PEM format.

Note
n This field appears only if the storage URL is in HTTPS
format.
n Also append https-proxy certificate if velero is behind
https-proxy.

TKG standard extension


This addon is used to manage the TKG standard extensions, such as tkg-contour and tkg-harbor.

Note
n You must install cert-manager before installing any of the TKG standard extensions.

n The following TKG standard extensions, which are already supported by the VMware Telco Cloud
Automation add-ons, cannot be installed through TKG standard extension: cert-manager,
multus-cni, whereabouts, fluent-bit, prometheus.

n For TKG standard extension configurations and other information, see Installing and
Managing Packages with the Tanzu CLI.

Option Description

Addon Name Enter the addon name to be installed through TKG


standard extension.

Note The addon name should be prefixed with tkg.
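For example, to deploy the Contour extension mentioned above, you would enter tkg-contour as the addon name. If you want to inspect the underlying packages and their configurable values, you can browse them with the Tanzu CLI (a hedged example; the package names shown by the CLI differ from the tkg- prefixed addon names entered here):

tanzu package available list -A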

Advanced Configuration for Load-balancer-and-ingress-service Add-On


Use this reference when configuring additional parameters of load-balancer-and-ingress-service
addon or managing AKO objects(aviinfrasetting, gatewayclass, gateway) via the Custom
Resources(CRs) tab.


Configurable parameters

Note Some parameters are only applicable for a certain topology (e.g., NSX-T environment) or a
certain feature (e.g., providing cluster control plane HA with Avi). Customize these parameters
carefully based on your actual environment.

Each entry below lists the parameter name, its type and default value (if any), followed by its description and notes.

cloudName (string): Cloud name configured in the Avi Controller. Mandatory, formatted on the UI.
controllerVersion (string, default 20.1.3): Avi Controller version.
controlPlaneNetwork.cidr (string): Describes the control plane network CIDR of the cluster. Only for using the Avi-provided control plane HA feature.
controlPlaneNetwork.name (string): Describes the control plane network name of the cluster. Only for using the Avi-provided control plane HA feature.
defaultServiceEngineGroup (string): Service engine group name configured in the Avi Controller. Mandatory, formatted on the UI.
defaultVipNetwork (string): VIP network name in the Avi Controller. Mandatory, formatted on the UI.
defaultVipNetworkCidr (string): VIP network CIDR in the Avi Controller. Mandatory, formatted on the UI.
defaultVipNetworkIpPools.end (string): Ending IP address of the pool.
defaultVipNetworkIpPools.start (string): Starting IP address of the pool.
defaultVipNetworkIpPools.type (enum["V4"], default V4): Type of IP address.
extraConfigs.apiServerPort (integer, default 8080): Internal port for AKO's API server, used for the liveness probe of the AKO pod.
extraConfigs.disableStaticRouteSync (boolean, default false): Describes whether AKO should sync static routing. If the pod networks are reachable from the Avi SE, this should be set to true; otherwise it should be false.
extraConfigs.enableEvents (boolean, default false): Enables or disables event broadcasting via AKO.
extraConfigs.enableEVH (boolean, default false): Specifies whether to enable the Enhanced Virtual Hosting model in the Avi Controller for the Virtual Services.
extraConfigs.fullSyncFrequency (string, default 1800): Controls how often AKO polls the Avi Controller to update itself with cloud configurations.
extraConfigs.ingress.defaultIngressController (boolean, default false): Enabling this flag uses AKO as the default ingress controller.
extraConfigs.ingress.disableIngressClass (boolean, default true): Prevents AKO Operator from installing the AKO IngressClass into workload clusters.
extraConfigs.ingress.enableMCI (boolean, default false): Enabling this flag tells AKO to start processing multi-cluster ingress objects.
extraConfigs.ingress.nodeNetworkList.cidrs (string list): Cluster node network CIDRs. Mandatory when extraConfigs.ingress.serviceType is ClusterIP, formatted on the UI.
extraConfigs.ingress.nodeNetworkList.name (string): Cluster node network name. Mandatory when extraConfigs.ingress.serviceType is ClusterIP, formatted on the UI.
extraConfigs.ingress.noPGForSNI (boolean, default false): Describes whether to remove pool groups from SNI VSes. Do not use this flag if you do not want HTTP caching.
extraConfigs.ingress.passthroughShardSize (enum["SMALL", "MEDIUM", "LARGE"], default SMALL): Controls the passthrough virtual service numbers.
extraConfigs.ingress.serviceType (enum["ClusterIP", "NodePort", "NodePortLocal"], default ClusterIP): Describes the ingress method for a service. Mandatory, formatted on the UI.
extraConfigs.ingress.shardVSSize (enum["SMALL", "MEDIUM", "LARGE", "DEDICATED"], default SMALL): Describes the ingress shared virtual service size.
extraConfigs.l4Config.autoFQDN (enum["default", "flat", "disabled"], default disabled): Controls the FQDN generation. Valid values are default (<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), or disabled.
extraConfigs.l4Config.defaultDomain (string): Controls the default sub-domain to use for L4 VSes when multiple sub-domains are configured in the cloud.
extraConfigs.layer7Only (boolean, default false): Specifies whether AKO should only do layer 7 load balancing.
extraConfigs.log.logFile (string): Specifies the log file name.
extraConfigs.log.logLevel (enum["INFO", "DEBUG", "WARN", "ERROR"], default INFO): Specifies the AKO pod log level.
extraConfigs.log.mountPath (string): Specifies the path to mount the PVC.
extraConfigs.log.persistentVolumeClaim (string): Specifies whether a PVC should be made for AKO logging.
extraConfigs.namespaceSelector.labelKey (string): Label key used for namespace migration. The same label key has to be present on the namespace(s) that need migration/sync to AKO.
extraConfigs.namespaceSelector.labelValue (string): Label value used for namespace migration. The same label value has to be present on the namespace(s) that need migration/sync to AKO.
extraConfigs.networksConfig.bgpPeerLabels (string list): Specifies BGP peers; this is used for selective VsVip advertisement.
extraConfigs.networksConfig.enableRHI (boolean, default false): Specifies the cluster-wide setting for BGP peering.
extraConfigs.networksConfig.nsxtT1LR (string): T1 Logical Segment mapping for the backend network. Only applies to the NSX-T cloud.
extraConfigs.nodePortSelector.key (string): NodePortSelector key; only applicable if serviceType is NodePort.
extraConfigs.nodePortSelector.value (string): NodePortSelector value; only applicable if serviceType is NodePort.
extraConfigs.primaryInstance (boolean, default true): Defines whether the AKO instance is primary. The value true indicates that the AKO instance is primary. In a multiple-AKO deployment in a cluster, only one AKO instance should be primary.
extraConfigs.rbac.pspEnabled (boolean, default false): Enables the deployment of a PodSecurityPolicy that grants AKO the proper role.
extraConfigs.rbac.pspPolicyAPIVersion (string): Decides the API version of the PodSecurityPolicy.
extraConfigs.servicesAPI (boolean, default true): Specifies whether to enable AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs, which are not backward compatible with the advancedL4 APIs that use a fork and a version of v1alpha1pre1.
extraConfigs.vipPerNamespace (boolean, default false): Enabling this flag tells AKO to create a parent VS per namespace in EVH mode.
tenant.context (enum["Provider", "Tenant"], default Provider): The type of AVI tenant context. This field is immutable.
tenant.name (string): The name of the tenant. This field is immutable.
workloadCredentialRef.name (string): Points to a Secret resource that includes the username and the password used to access and configure the Avi Controller (username: used with basic authentication for the Avi REST API; password: used with basic authentication for the Avi REST API). This field is optional. When it is not specified, a username/password is automatically generated for each cluster, and Tenant needs to be non-nil in this case.
workloadCredentialRef.namespace (string): The namespace of the Secret resource that includes the username and password.

A simplest CR sample is:

metadata:
name: load-balancer-and-ingress-service
clusterName: wc0
spec:
name: load-balancer-and-ingress-service
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
cloudName: vcenter-cloud0
defaultServiceEngineGroup: wc0-se-group
defaultVipNetwork: oam-vip-dvpg
defaultVipNetworkCidr: 172.16.73.0/24
extraConfigs:
ingress:
serviceType: ClusterIP


nodeNetworkList:
- networkName: cluster-mgmt-dvpg
cidrs:
- 172.16.68.0/22

Managing AKO objects via load-balancer-and-ingress-service add-on


Append aviObjects section to load-balancer-and-ingress-service add-on CR to manage AKO
objects(aviinfrasetting, gatewayclass, gateway) lifecycle.

A sample CR with aviObjects is:

metadata:
name: load-balancer-and-ingress-service
clusterName: wc0
spec:
name: load-balancer-and-ingress-service
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
cloudName: vcenter-cloud0
defaultServiceEngineGroup: wc0-se-group
defaultVipNetwork: oam-vip-dvpg
defaultVipNetworkCidr: 172.16.73.0/24
extraConfigs:
ingress:
serviceType: ClusterIP
nodeNetworkList:
- networkName: cluster-mgmt-dvpg
cidrs:
- 172.16.68.0/22
aviObjects:
aviinfrasettings:
- metadata:
name: ais0
spec:
seGroup:
name: wc0-se-group
network:
vipNetworks:
- networkName: oam-vip-dvpg
l7Settings:
shardSize: MEDIUM
- metadata:
name: ais1
spec:
seGroup:
name: wc0-se-group
network:
vipNetworks:
- networkName: sig-vip-dvpg
l7Settings:


shardSize: MEDIUM
gatewayclasses:
- metadata:
name: gwc0
spec:
controller: ako.vmware.com/avi-lb
parametersRef:
group: ako.vmware.com
kind: AviInfraSetting
name: ais0
gateways:
- metadata:
name: gw0
namespace: gw0
spec:
gatewayClassName: gwc0
listeners:
- protocol: TCP
port: 80
routes:
selector:
matchLabels:
ako.vmware.com/gateway-namespace: gw0
ako.vmware.com/gateway-name: gw0
group: v1
kind: Service
- protocol: TCP
port: 8081
routes:
selector:
matchLabels:
ako.vmware.com/gateway-namespace: gw0
ako.vmware.com/gateway-name: gw0
group: v1
kind: Service

n In this sample CR, two aviinfrasetting objects (ais0 and ais1), one gatewayclass object (gwc0), and
one gateway object (gw0) will be created, or updated if they already exist.

n Aviinfrasetting objects can be created with enableRhi: true and bgpPeerLabels as needed.

n To delete specific AKO objects from the workload cluster, edit the load-balancer-and-ingress-service
add-on, switch to the Custom Resources (CRs) tab, and remove those objects from the aviObjects
section.

n TCA creates the namespace (if it does not exist) for gateway objects but does not delete the
namespace when deleting the gateway objects.
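After the CR is applied, you can verify on the workload cluster that the AKO objects exist. A minimal sketch, using the object names from the sample above:

kubectl get aviinfrasetting
kubectl get gatewayclass
kubectl get gateway -n gw0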

Advanced Configuration for Prometheus Add-On


Use this reference when configuring additional parameters of prometheus addon via the Custom
Resources(CRs) tab.


Configurable parameters

Each entry below lists the parameter name, its type and default value (if any), followed by its description and notes.

prometheus.deployment.replicas (integer, default 1): Number of Prometheus replicas.
prometheus.deployment.containers.args (list, default: --storage.tsdb.retention.time=42d, --config.file=/etc/config/prometheus.yml, --storage.tsdb.path=/data, --web.console.libraries=/etc/prometheus/console_libraries2, --web.console.templates=/etc/prometheus/consoles, --web.enable-lifecycle): Prometheus container arguments. You can configure this parameter to change the retention time. For information about configuring Prometheus storage parameters, see the Prometheus documentation. Note: Longer retention times require more storage capacity; it might be necessary to increase the persistent volume claim size if you are significantly increasing the retention time. Prometheus replaces the whole argument list, so make sure the customized argument list contains all of these arguments.
prometheus.deployment.containers.resources (map, default {}): Prometheus container resource requests and limits.
prometheus.deployment.podAnnotations (map, default {}): The Prometheus deployment's pod annotations.
prometheus.deployment.podLabels (map, default {}): The Prometheus deployment's pod labels.
prometheus.deployment.configMapReload.containers.args (list): Configmap-reload container arguments.
prometheus.deployment.configMapReload.containers.resources (map, default {}): Configmap-reload container resource requests and limits.
prometheus.service.type (enum["ClusterIP", "NodePort", "LoadBalancer"], default ClusterIP): Type of service to expose Prometheus. Immutable.
prometheus.service.port (integer, default 80): Prometheus service port. Immutable.
prometheus.service.targetPort (integer, default 9090): Prometheus service target port. Immutable.
prometheus.service.labels (map, default {}): Prometheus service labels.
prometheus.service.annotations (map, default {}): Prometheus service annotations.
prometheus.pvc.annotations (map, default {}): PVC annotations.
prometheus.pvc.storageClassName (string): Storage class to use for the persistent volume claim. The default storage class is used if it is not set. Immutable, formatted on the UI.
prometheus.pvc.accessMode (enum["ReadWriteOnce", "ReadOnlyMany", "ReadWriteMany"], default ReadWriteOnce): Access mode for the persistent volume claim. Immutable, formatted on the UI.
prometheus.pvc.storage (string, default 150Gi): Storage size for the persistent volume claim. Immutable, formatted on the UI.
prometheus.config.prometheus_yml (YAML file, default prometheus.yaml): For information about the global Prometheus configuration, see the Prometheus documentation.
prometheus.config.alerting_rules_yml (YAML file, default alerting_rules.yaml): For information about the Prometheus alerting rules, see the Prometheus documentation.
prometheus.config.recording_rules_yml (YAML file, default recording_rules.yaml): For information about the Prometheus recording rules, see the Prometheus documentation.
prometheus.config.alerts_yml (YAML file, default alerts_yml.yaml): Additional Prometheus alerting rules are configured here.
prometheus.config.rules_yml (YAML file, default rules_yml.yaml): Additional Prometheus recording rules are configured here.
alertmanager.deployment.replicas (integer, default 1): Number of Alertmanager replicas.
alertmanager.deployment.containers.resources (map, default {}): Alertmanager container resource requests and limits.
alertmanager.deployment.podAnnotations (map, default {}): The Alertmanager deployment's pod annotations.
alertmanager.deployment.podLabels (map, default {}): The Alertmanager deployment's pod labels.
alertmanager.service.type (enum["ClusterIP"], default ClusterIP): Type of service to expose Alertmanager. Immutable.
alertmanager.service.port (integer, default 80): Alertmanager service port. Immutable.
alertmanager.service.targetPort (integer, default 9093): Alertmanager service target port. Immutable.
alertmanager.service.labels (map, default {}): Alertmanager service labels.
alertmanager.service.annotations (map, default {}): Alertmanager service annotations.
alertmanager.pvc.annotations (map, default {}): Alertmanager PVC annotations.
alertmanager.pvc.storageClassName (string): Storage class to use for the persistent volume claim. The default provisioner is used if it is not set. Immutable.
alertmanager.pvc.accessMode (enum["ReadWriteOnce", "ReadOnlyMany", "ReadWriteMany"], default ReadWriteOnce): Access mode for the persistent volume claim. Immutable.
alertmanager.pvc.storage (string, default 2Gi): Storage size for the persistent volume claim. Immutable.
alertmanager.config.alertmanager_yml (YAML file, default alertmanager_yml): For information about the global YAML configuration for Alertmanager, see the Prometheus documentation.
kube_state_metrics.deployment.replicas (integer, default 1): Number of kube-state-metrics replicas.
kube_state_metrics.deployment.containers.resources (map, default {}): kube-state-metrics container resource requests and limits.
kube_state_metrics.deployment.podAnnotations (map, default {}): The kube-state-metrics deployment's pod annotations.
kube_state_metrics.deployment.podLabels (map, default {}): The kube-state-metrics deployment's pod labels.
kube_state_metrics.service.type (enum["ClusterIP"], default ClusterIP): Type of service to expose kube-state-metrics. Immutable.
kube_state_metrics.service.port (integer, default 80): kube-state-metrics service port. Immutable.
kube_state_metrics.service.targetPort (integer, default 8080): kube-state-metrics service target port. Immutable.
kube_state_metrics.service.telemetryPort (integer, default 81): kube-state-metrics service telemetry port. Immutable.
kube_state_metrics.service.telemetryTargetPort (integer, default 8081): kube-state-metrics service target telemetry port. Immutable.
kube_state_metrics.service.labels (map, default {}): kube-state-metrics service labels.
kube_state_metrics.service.annotations (map, default {}): kube-state-metrics service annotations.
node_exporter.daemonset.replicas (integer, default 1): Number of node-exporter replicas.
node_exporter.daemonset.containers.resources (map, default {}): node-exporter container resource requests and limits.
node_exporter.daemonset.hostNetwork (boolean, default false): Host networking requested for this pod.
node_exporter.daemonset.podAnnotations (map, default {}): The node-exporter deployment's pod annotations.
node_exporter.daemonset.podLabels (map, default {}): The node-exporter deployment's pod labels.
node_exporter.service.type (enum["ClusterIP"], default ClusterIP): Type of service to expose node-exporter. Immutable.
node_exporter.service.port (integer, default 9100): node-exporter service port. Immutable.
node_exporter.service.targetPort (integer, default 9100): node-exporter service target port. Immutable.
node_exporter.service.labels (map, default {}): node-exporter service labels.
node_exporter.service.annotations (map, default {}): node-exporter service annotations.
pushgateway.deployment.replicas (integer, default 1): Number of pushgateway replicas.
pushgateway.deployment.containers.resources (map, default {}): pushgateway container resource requests and limits.
pushgateway.deployment.podAnnotations (map, default {}): The pushgateway deployment's pod annotations.
pushgateway.deployment.podLabels (map, default {}): The pushgateway deployment's pod labels.
pushgateway.service.type (enum["ClusterIP"], default ClusterIP): Type of service to expose pushgateway. Immutable.
pushgateway.service.port (integer, default 9091): pushgateway service port. Immutable.
pushgateway.service.targetPort (integer, default 9091): pushgateway service target port. Immutable.
pushgateway.service.labels (map, default {}): pushgateway service labels.
pushgateway.service.annotations (map, default {}): pushgateway service annotations.
cadvisor.daemonset.replicas (integer, default 1): Number of cadvisor replicas.
cadvisor.daemonset.containers.resources (map, default {}): cadvisor container resource requests and limits.
cadvisor.daemonset.podAnnotations (map, default {}): The cadvisor deployment's pod annotations.
cadvisor.daemonset.podLabels (map, default {}): The cadvisor deployment's pod labels.
ingress.enabled (boolean, default false): Enable/disable ingress for Prometheus and Alertmanager. Immutable; depends on the cert-manager add-on and the contour ingress controller.
ingress.virtual_host_fqdn (string, default prometheus.system.tanzu): Hostname for accessing Prometheus and Alertmanager. Immutable.
ingress.prometheus_prefix (string, default /): Path prefix for Prometheus. Immutable.
ingress.alertmanager_prefix (string, default /alertmanager/): Path prefix for Alertmanager. Immutable.
ingress.prometheusServicePort (integer, default 80): Prometheus service port to proxy traffic to. Immutable.
ingress.alertmanagerServicePort (integer, default 80): Alertmanager service port to proxy traffic to. Immutable.
ingress.tlsCertificate.tls.crt (string, default: generated certificate): Optional certificate for ingress if you want to use your own TLS certificate. A self-signed certificate is generated by default. Note: tls.crt is a key and not nested.
ingress.tlsCertificate.tls.key (string, default: generated certificate key): Optional certificate private key for ingress if you want to use your own TLS certificate. Note: tls.key is a key and not nested.
ingress.tlsCertificate.ca.crt (string, default: CA certificate): Optional CA certificate. Note: ca.crt is a key and not nested.

A sample prometheus addon CR is:

metadata:
name: prometheus
spec:
clusterRef:
name: wc0
namespace: wc0
name: prometheus
namespace: wc0
config:
stringData:
values.yaml: |
prometheus:
deployment:
replicas: 1
containers:
args:
- --storage.tsdb.retention.time=5d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries2
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
service:


type: NodePort
port: 80
targetPort: 9090
pvc:
accessMode: ReadWriteOnce
storage: 150Gi
config:
prometheus_yml: |
global:
evaluation_interval: 1m
scrape_interval: 1m
scrape_timeout: 10s
rule_files:
- /etc/config/alerting_rules.yml
- /etc/config/recording_rules.yml
- /etc/config/alerts
- /etc/config/rules
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
- job_name: 'kube-state-metrics'
static_configs:
- targets: ['prometheus-kube-state-metrics.tanzu-system-
monitoring.svc.cluster.local:8080']
- job_name: 'node-exporter'
static_configs:
- targets: ['prometheus-node-exporter.tanzu-system-
monitoring.svc.cluster.local:9100']
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__,
__meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name


- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: node
- job_name: kubernetes-nodes-cadvisor
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
alerting:
alertmanagers:
- scheme: http
static_configs:
- targets:
- alertmanager.tanzu-system-monitoring.svc:80
- kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_namespace]
regex: default
action: keep
- source_labels: [__meta_kubernetes_pod_label_app]
regex: prometheus
action: keep
- source_labels: [__meta_kubernetes_pod_label_component]
regex: alertmanager
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_probe]


regex: .*
action: keep
- source_labels: [__meta_kubernetes_pod_container_port_number]
regex:
action: drop
alerting_rules_yml: |
{}
recording_rules_yml: |
groups:
- name: vmw-telco-namespace-cpu-rules
interval: 1m
rules:
- record: tkg_namespace_cpu_usage_seconds
expr: sum by (namespace) (rate
(container_cpu_usage_seconds_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_throttled_seconds
expr: sum by (namespace)
(((rate(container_cpu_cfs_throttled_seconds_total[5m])) ) > 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_request_core
expr: sum by (namespace) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_cpu_limits_core
expr: sum by (namespace) (kube_pod_container_resource_limits_cpu_cores >
0.0 or kube_pod_info < bool 0.1)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-namespace-mem-rules
interval: 1m
rules:
- record: tkg_namespace_mem_usage_mb
expr: sum by (namespace) (container_memory_usage_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_rss_mb
expr: sum by (namespace) (container_memory_rss{container!~"POD",container!
=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_workingset_mb
expr: sum by (namespace) (container_memory_working_set_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_request_mb
expr: sum by (namespace)
(kube_pod_container_resource_requests_memory_bytes) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_mem_limit_mb


expr: sum by (namespace)


((kube_pod_container_resource_limits_memory_bytes / (1024*1024) )> 0 or kube_pod_info < bool
0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-namespace-network-rules
interval: 1m
rules:
- record: tkg_namespace_network_tx_bytes
expr: sum by (namespace) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_rx_bytes
expr: sum by (namespace) (rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_tx_packets
expr: sum by (namespace) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_rx_packets
expr: sum by (namespace) (rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_tx_drop_packets
expr: sum by (namespace) (rate
(container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_rx_drop_packets
expr: sum by (namespace) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_tx_errors
expr: sum by (namespace) (rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_rx_errors
expr: sum by (namespace) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_total_bytes
expr: sum by (namespace) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_total_packets


expr: sum by (namespace) (rate


(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_total_drop_packets
expr: sum by (namespace) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m])
+ rate (container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}
[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_network_total_errors
expr: sum by (namespace) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-namespace-storage-rules
interval: 1m
rules:
- record: tkg_namespace_storage_pvc_bound
expr: sum by (namespace)
((kube_persistentvolumeclaim_status_phase{phase="Bound"}) > 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_storage_pvc_count
expr: sum by (namespace)
((kube_pod_spec_volumes_persistentvolumeclaims_info)> 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-namespace-other-rules
interval: 1m
rules:
- record: tkg_namespace_pods_qty_count
expr: sum by (namespace) (kube_pod_info)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_pods_reboot_5m_count
expr: sum by (namespace) (changes(kube_pod_status_ready{condition="true"}
[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_namespace_pods_broken_count
expr: sum by (namespace) (kube_pod_status_ready{condition="false"})
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-cpu-rules
interval: 1m
rules:
- record: tkg_pod_cpu_usage_seconds
expr: sum by (pod) (rate (container_cpu_usage_seconds_total{container!
~"POD",pod!="",image!=""}[5m])) * 100
labels:
job: kubernetes-nodes-cadvisor


- record: tkg_pod_cpu_request_core
expr: sum by (pod) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_cpu_limit_core
expr: sum by (pod) (kube_pod_container_resource_limits_cpu_cores > 0.0 or
kube_pod_info < bool 0.1)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_cpu_throttled_seconds
expr: sum by (pod)
(((rate(container_cpu_cfs_throttled_seconds_total[5m])) ) > 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-mem-rules
interval: 1m
rules:
- record: tkg_pod_mem_usage_mb
expr: sum by (pod) (container_memory_usage_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_rss_mb
expr: sum by (pod) (container_memory_rss{container!~"POD",container!
=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_workingset_mb
expr: sum by (pod) (container_memory_working_set_bytes{container!
~"POD",container!=""}) / (1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_request_mb
expr: sum by (pod) (kube_pod_container_resource_requests_memory_bytes) /
(1024*1024)
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_mem_limit_mb
expr: sum by (pod) ((kube_pod_container_resource_limits_memory_bytes /
(1024*1024) )> 0 or kube_pod_info < bool 0)
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-pod-network-rules
interval: 1m
rules:
- record: tkg_pod_network_tx_bytes
expr: sum by (pod) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_bytes
expr: sum by (pod) (rate (container_network_receive_bytes_total{container!
~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor


- record: tkg_pod_network_tx_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_packets
expr: sum by (pod) (rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_tx_dropped_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_dropped_packets
expr: sum by (pod) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_tx_errors
expr: sum by (pod) (rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_rx_errors
expr: sum by (pod) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_bytes
expr: sum by (pod) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_packets
expr: sum by (pod) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_drop_packets
expr: sum by (pod) (rate
(container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m])
+ rate (container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}
[5m]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_network_total_errors
expr: sum by (pod) (rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]))
labels:
job: kubernetes-nodes-cadvisor


- name: vmw-telco-pod-other-rules
interval: 1m
rules:
- record: tkg_pod_health_container_restarts_1hr_count
expr: sum by (pod)
(increase(kube_pod_container_status_restarts_total[1h]))
labels:
job: kubernetes-nodes-cadvisor
- record: tkg_pod_health_unhealthy_count
expr: min_over_time(sum by (pod) (kube_pod_status_phase{phase=~"Pending|
Unknown|Failed"})[15m:1m])
labels:
job: kubernetes-nodes-cadvisor
- name: vmw-telco-node-cpu-rules
interval: 1m
rules:
- record: tkg_node_cpu_capacity_core
expr: sum by (node) (kube_node_status_capacity_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_allocate_core
expr: sum by (node) (kube_node_status_allocatable_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_usage_seconds
expr: (label_replace(sum by (instance)
(rate(container_cpu_usage_seconds_total[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_throttled_seconds
expr: sum by (instance)
(rate(container_cpu_cfs_throttled_seconds_total[5m]))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_request_core
expr: sum by (node) (kube_pod_container_resource_requests_cpu_cores)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_cpu_limits_core
expr: sum by (node) (kube_pod_container_resource_limits_cpu_cores)
labels:
job: kubernetes-service-endpoints
- name: vmw-telco-node-mem-rules
interval: 1m
rules:
- record: tkg_node_mem_capacity_mb
expr: sum by (node) (kube_node_status_capacity_memory_bytes / (1024*1024))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_allocate_mb
expr: sum by (node) (kube_node_status_allocatable_memory_bytes /
(1024*1024))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_request_mb


expr: sum by (node) (kube_pod_container_resource_requests_memory_bytes) /


(1024*1024)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_limits_mb
expr: sum by (node) (kube_pod_container_resource_limits_memory_bytes) /
(1024*1024)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_available_mb
expr: sum by (node) ((node_memory_MemAvailable_bytes / (1024*1024) ))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_free_mb
expr: sum by (node) ((node_memory_MemFree_bytes / (1024*1024) ))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_usage_mb
expr: (label_replace(sum by (instance)
(container_memory_usage_bytes{container!~"POD",container!=""}) / (1024*1024), "node", "$1",
"instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_mem_free_pc
expr: sum ((node_memory_MemFree_bytes{job="kubernetes-pods"} /
node_memory_MemTotal_bytes) *100) by (node)
labels:
job: kubernetes-service-endpoints
- record: tkg_node_oom_kill
expr: sum by(node) (node_vmstat_oom_kill)
labels:
job: kubernetes-service-endpoints
- name: vmw-telco-node-network-rules
interval: 1m
rules:
- record: tkg_node_network_tx_bytes
expr: (label_replace(sum by (instance)
(rate(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_bytes
expr: (label_replace(sum by (instance)
(rate(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_tx_packets
expr: (label_replace(sum by (instance)
(rate(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_packets
expr: (label_replace(sum by (instance)


(rate(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_tx_dropped_packets
expr: (label_replace(sum by
(instance) (rate(container_network_transmit_packets_dropped_total{container!~"POD",pod!
="",image!=""}[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_dropped_packets
expr: (label_replace(sum by
(instance) (rate(container_network_receive_packets_dropped_total{container!~"POD",pod!
="",image!=""}[5m])), "node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_tx_errors
expr: (label_replace(sum by (instance)
(rate(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_rx_errors
expr: (label_replace(sum by (instance)
(rate(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m])),
"node", "$1", "instance", "(.*)"))
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_bytes
expr: label_replace((sum by (instance) (rate
(container_network_transmit_bytes_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_bytes_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_packets
expr: label_replace((sum by (instance) (rate
(container_network_transmit_packets_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_packets_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_drop_packets
expr: label_replace((sum by (instance) (rate
(container_network_transmit_packets_dropped_total{container!~"POD",pod!="",image!=""}[5m])
+ rate (container_network_receive_packets_dropped_total{container!~"POD",pod!="",image!=""}
[5m]))), "node", "$1", "instance", "(.*)")
labels:
job: kubernetes-service-endpoints
- record: tkg_node_network_total_errors
expr: label_replace((sum by (instance) (rate
(container_network_transmit_errors_total{container!~"POD",pod!="",image!=""}[5m]) + rate
(container_network_receive_errors_total{container!~"POD",pod!="",image!=""}[5m]))), "node",
"$1", "instance", "(.*)")
labels:


job: kubernetes-service-endpoints
- name: vmw-telco-node-other-rules
interval: 1m
rules:
- record: tkg_node_status_mempressure_count
expr: sum by (node)
(kube_node_status_condition{condition="MemoryPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_diskpressure_count
expr: sum by (node)
(kube_node_status_condition{condition="DiskPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_pidpressure_count
expr: sum by (node)
(kube_node_status_condition{condition="PIDPressure",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_networkunavailable_count
expr: sum by (node)
(kube_node_status_condition{condition="NetworkUnavailable",status="true"})
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_etcdb_bytes
expr: (label_replace(etcd_db_total_size_in_bytes, "instance", "$1",
"instance", "(.+):(\\d+)")) * on (instance) group_left (node) (avg by (instance, node)
(label_replace ((kube_pod_info), "instance", "$1", "host_ip", "(.*)")) )
labels:
job: kubernetes-service-endpoints
- record: tkg_node_status_apiserver_request_total
expr: sum((label_replace(apiserver_request_total, "instance", "$1",
"instance", "(.+):(\\d+)")) * on (instance) group_left (node) (avg by (instance, node)
(label_replace ((kube_pod_info), "instance", "$1", "host_ip", "(.*)")) )) by (node)
labels:
job: kubernetes-service-endpoints
ingress:
enabled: false
virtual_host_fqdn: prometheus.system.tanzu
prometheus_prefix: /
alertmanager_prefix: /alertmanager/
prometheusServicePort: 80
alertmanagerServicePort: 80
alertmanager:
deployment:
replicas: 1
service:
type: ClusterIP
port: 80
targetPort: 9093
pvc:
accessMode: ReadWriteOnce
storage: 2Gi
config:
alertmanager_yml: |


global: {}
receivers:
- name: default-receiver
templates:
- '/etc/alertmanager/templates/*.tmpl'
route:
group_interval: 5m
group_wait: 10s
receiver: default-receiver
repeat_interval: 3h
kube_state_metrics:
deployment:
replicas: 1
service:
type: ClusterIP
port: 80
targetPort: 8080
telemetryPort: 81
telemetryTargetPort: 8081
node_exporter:
daemonset:
hostNetwork: false
updatestrategy: RollingUpdate
service:
type: ClusterIP
port: 9100
targetPort: 9100
pushgateway:
deployment:
replicas: 1
service:
type: ClusterIP
port: 9091
targetPort: 9091
cadvisor:
daemonset:
updatestrategy: RollingUpdate

In this sample CR:

n The TSDB retention time in the parameter prometheus.deployment.containers.args is changed
to 5 days instead of the default 42 days.

n Some recording rules are added to prometheus.config.recording_rules_yml. Customize
them or add more as needed.

n The prometheus.service.type is changed to NodePort so that it can be integrated with
external components (e.g., vROPS or Grafana). See Prometheus service type.

Prometheus service type


By default, Prometheus is deployed with a service type of ClusterIP, this means it is NOT
exposable to the outside world.


There are three options available for the prometheus.service.type:

n ClusterIP – with the default configuration, the Prometheus service can only be accessed from
within the workload cluster. The service can also be exposed via ingress; however, this depends
on the ingress controller and some additional manual configuration.

n NodePort (recommended) – exposes the Prometheus service on a node port. TCA does not
support specifying the actual node port; Kubernetes allocates a random node port number (a
high-range port number between 30,000 and 32,767). To determine this node port number
(post configuration), view the service configuration from the TCA cluster with the command
kubectl get svc -n tanzu-system-monitoring prometheus-server. As can be seen in the
following output, the prometheus-server service is exposed on node port 32020, so other
external components can integrate with Prometheus using the URL http://<cluster-endpoint-
ip>:32020

capv@cp0-control-plane-kz5k6 [ ~ ]$ kubectl get svc -n tanzu-system-monitoring prometheus-server
NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
prometheus-server   NodePort   100.65.8.127   <node>        80:32020/TCP   25s

n Loadbalancer – leverages the Avi load balancer to expose the service. This deployment method
depends on the load-balancer-and-ingress-service add-on. TCA does not support specifying a
static VIP for the Prometheus service; Avi allocates a VIP from the default VIP pool for the
Prometheus service, and other external components can then integrate with Prometheus using
the URL http://<prometheus-VIP>.

Advanced Configuration for Fluent-bit Add-On


Use this reference when configuring additional parameters of fluent-bit add-on via the Custom
Resources(CRs) tab.

Configurable parameters

Each entry below lists the parameter name, its type and default value (if any), followed by its description and notes.

fluent_bit.config.service (string, default: the default fluent-bit service config): For information about the configuration for the Fluent Bit service, see the Fluent Bit documentation.
fluent_bit.config.outputs (string, default: standard output): For information about the configuration for Fluent Bit outputs, see the Fluent Bit documentation.
fluent_bit.config.inputs (string, default: ingest Kubernetes container logs using the tail plugin and ingest systemd logs from Kubelet): For information about the configuration for Fluent Bit inputs, see the Fluent Bit documentation.
fluent_bit.config.filters (string, default: the default kubernetes filter): For information about the configuration for Fluent Bit filters, see the Fluent Bit documentation.
fluent_bit.config.parsers (string, default: JSON parser): For information about the configuration for Fluent Bit parsers, see the Fluent Bit documentation.
fluent_bit.config.plugins (string): Content for the Fluent Bit plugins configuration.
fluent_bit.config.streams (string): Content for the Fluent Bit streams file.
fluent_bit.daemonset.resources (map, default {}): For information about the configuration for Fluent Bit container resource requirements, see the Fluent Bit documentation.
fluent_bit.daemonset.podAnnotations (map, default {}): The Fluent Bit daemonset pod annotations.
fluent_bit.daemonset.podLabels (map, default {}): The Fluent Bit daemonset pod labels.

A sample fluent-bit addon CR is:

metadata:
name: fluent-bit
clusterName: wc0
spec:
name: fluent-bit
clusterRef:
name: wc0
namespace: wc0
config:
stringData:
values.yaml: |
fluent_bit:
config:
service: |
[Service]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
inputs: |
[INPUT]
Name tail
Path /var/log/containers/*.log
Parser cri
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
[INPUT]
Name systemd
Tag host.*
Systemd_Filter _SYSTEMD_UNIT=kubelet.service


Systemd_Filter _SYSTEMD_UNIT=containerd.service
Read_From_Tail On
outputs: |
[OUTPUT]
Name syslog
Match *
Host 1.2.3.4
Port 514
Mode udp
Syslog_Format rfc5424
Syslog_Hostname_key tca_cluster_name
Syslog_Appname_key pod_name
Syslog_Procid_key container_name
Syslog_Message_key message
filters: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[FILTER]
Name nest
Match kube.*
Operation lift
Nested_Under kubernetes
[FILTER]
Name record_modifier
Match *
Record tca_cluster_name wc0
parsers: |
[PARSER]
Name cri
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?
<logtag>[^ ]*) (?<message>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z

In this sample CR:

1 Use the default fluent_bit.config.service value.

2 Collect all Kubernetes container logs and systemd logs for kubelet.service and
containerd.service in fluent_bit.config.inputs value.

3 Use an output of type syslog to integrate fluent-bit with VMware vRealize LogInsight; replace
the host IP address 1.2.3.4 with your vRealize LogInsight IP address.

4 Use the default filter of type kubernetes in the fluent_bit.config.filters value, and add a filter of
type nest and a filter of type record_modifier to process the native logs so that the logs can
be easily filtered and displayed cleanly in vRealize LogInsight. Remember to replace the
tca_cluster_name value wc0 with your cluster name in the record_modifier filter.

5 Use a regular expression parser to parse Kubernetes container logs in the
fluent_bit.config.parsers value.

Backing Up and Restoring Kubernetes Clusters


This topic provides an overview of the backup and restore process for Kubernetes clusters,
including management clusters and workload clusters. You can back up the management cluster
VM nodes and restore them if necessary, and use Velero to back up and restore a workload
cluster.

Backing Up and Restoring Management Clusters


You can back up and restore all management cluster nodes.

VMware Telco Cloud Automation supports backing up and restoring all TKG management
cluster nodes (VMs) on top of the same infrastructure.

Note
n Partial backup or restore of TKG management cluster nodes is not supported. You must
back up all cluster nodes and restore them all together.

n The restored management cluster must be associated with the same TCA-CP appliance.

n The infrastructure, including vCenter, networking configuration, and datastore, must be the
same for the source and restored cluster node VMs, and the infrastructure must be available to
restore the cluster node VMs on.

n Backup and restore of Kubernetes Persistent Volumes in a TKG management cluster is not
supported.

n Before you start to restore all cluster nodes from backups, power off the old node VMs, and
then remove them from the vCenter inventory or delete them from disk in vCenter. Otherwise,
the old node VMs may be powered on and join the restored cluster.

n During restoration, keep all node VMs powered off until they are all restored. Then power
them on.

Backing Up and Restoring Workload Clusters


You can back up and restore v2 workload clusters. Backing up clusters helps you restore the
Cloud Network Functions (CNF) instantiated on the workload cluster in case of a disaster where
you cannot use the workload cluster.


Install and Configure Velero Add-On for the Workload Clusters


You can add Velero when creating a workload cluster or by editing a workload cluster
configuration.

Note
n Only v2 workload clusters are supported.

n The workload cluster must have at least one node pool; otherwise, the Velero pod remains in
the Pending state after the add-on installation.

The following procedure describes the steps to install the Velero add-on by editing a workload
cluster configuration.

Prerequisites

An S3-compatible object storage with sufficient disk space must be available for Velero to store
backups and associated artifacts, for example, MinIO. For more information about installing MinIO,
see Install and Configure an S3-Compatible Object Storage.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Caas Infrastructure > Cluster Instances.

3 Select the workload cluster name that you want to edit.

4 Click on the Add-Ons tab.

5 Select the Options (three dots) icon corresponding to the Velero add-on and click on Edit.

6 In the Add-on Configuration window, enter the configuration details. See Add-On
Configuration Reference for v2 Workload Clusters.

7 Click Ok.

Backing Up and Restoring entire Workload Cluster


You can use Velero to back up the entire workload cluster and restore it to a newly created
workload cluster if necessary. After restoration, you need to remediate (Remedy) the CNFs to the
new workload cluster from TCA.

Note
n Only v2 workload clusters are supported.

n Source and target workload clusters should be under same vCenter and managed by the
same management cluster.

n Some Kubernetes resources in the cluster are not backed up. If an attempt is made to
back up or restore these Restricted Resources, the backup or restore is marked as
"Partially Failed".


Back up the Workload Cluster


You can use Velero to back up and restore a workload cluster's current workloads and persistent
volume state and store the backup files on the object storage. It is recommended to dedicate
a unique storage bucket on the object storage server to each cluster.
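
For example, if you use MinIO as the object storage, you can create a dedicated bucket per cluster
with the MinIO client (mc). This is only an illustrative sketch; the alias name, endpoint address,
credentials, and bucket name are placeholders that mirror the MinIO sample used later in this guide:

# mc alias set tca-backups http://10.196.46.27:9000 minio minio123   // register the MinIO endpoint under a local alias
# mc mb tca-backups/wc0-velero-backups                        // create one bucket per workload cluster, for example "wc0"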

After you install the Velero add-on on a workload cluster, you can run the Velero commands on
the web terminal connected with the cluster using the Embedded SSH Client.

Alternatively, you can run the Velero commands on the standalone Velero client. See Install
Standalone Velero Client.

Prerequisites

Install and Configure Velero Add-On for the Workload Clusters

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure.

3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to back up and then selecting Open Terminal.

4 On the Web terminal, check the service health of Velero by running the following command:

# kubectl get pod -n velero // check pod status


# kubectl get bsl -n velero // check velero BackupStorageLocation CR

Alternatively, you can check the service health of Velero by performing the following:

a Go to Infrastructure > Caas Infrastructure > Cluster Instances.

b Select the required workload cluster name.

c Click on the Add-Ons tab.

d Select the Velero add-on deployed.

5 Set an environmental variable to exclude the cluster resources from backing up.

# export TCA_VELERO_EXCLUDE_RESOURCES="issuers.cert-manager.io,certificates.cert-manager.io,certificaterequests.cert-manager.io,gateways.networking.x-k8s.io,gatewayclasses.networking.x-k8s.io"
# export TCA_VELERO_EXCLUDE_NAMESPACES="velero,tkg-system,tca-system,tanzu-system,kube-system,tanzu-system-monitoring,tanzu-system-logging,cert-manager,avi-system"

6 Back up the workload cluster.

# velero backup create <example-backup> --exclude-namespaces=$TCA_VELERO_EXCLUDE_NAMESPACES --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES

The above backup command uses velero-plugin-for-vsphere by default to back up the
Persistent Volumes created with the vSphere CSI storage class. If the cluster has Persistent
Volumes created with the nfs-client storage class that must be backed up, you have two options:


Option 1: Annotate the pods that mount volumes backed by Persistent Volumes created with the
nfs-client storage class so that they are backed up using Restic.

# kubectl -n <pod_namespace> annotate pod/<pod-name> backup.velero.io/backup-volumes=<volume-name1>,<volume-name2>,…
# velero backup create <example-backup> --exclude-namespaces=$TCA_VELERO_EXCLUDE_NAMESPACES --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES

You can choose to add the above annotation to the template metadata in the deployment
controller to avoid re-annotating in case the annotated pods restart.

# kubectl -n <deploy_namespace> patch deployment <deployment-name> -p '{"spec": {"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"<volume-name1>,<volume-name2>,…"}}}}}'

Option 2: Change the default PV backup plugin to Restic. This will allow Restic to back up all
the types of Persistent Volumes, including the ones created with vSphere CSI plugin.

# velero backup create <example-backup> --default-volumes-to-restic --exclude-namespaces=$TCA_VELERO_EXCLUDE_NAMESPACES --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES

7 Check the backup status and related CR and wait until the processes are "Completed".

# velero backup get // check the backup status

Check the status of uploads CR if using velero-plugin-for-vsphere to backup PV data.

# kubectl get uploads -n velero // get the upload-name


# kubectl get uploads <upload-name> -o yaml // check the uploads status in yaml output

If you annotate pods and use Restic to back up PV data, check the status of
podvolumebackups.

# kubectl get podvolumebackups -n velero // get the podvolumebackup-name


# kubectl get podvolumebackups <podvolumebackup-name> -o yaml // check the
podvolumebackups status in yaml output

What to do next

Restore the Workload Cluster and Remediate the Network Functions.

Restore the Workload Cluster and Remediate the Network Functions


You can restore a workload cluster with a backup when the cluster fails and also remediate the
network functions.
Restore the Workload Cluster
To restore the workload cluster, copy the existing cluster specifications and deploy a new
cluster. Then, restore the data to the new cluster.


Prerequisites

n Source and target clusters must be associated with the same Management Cluster and must
be under the same vCenter server.

n Source and target clusters must be associated with the same Kubernetes version.

Procedure

1 Copy Specification and Deploy new Cluster.

Note
n Add node pools manually because they are not copied from the source cluster specification.

n Manually enter passwords when configuring the Systemsettings and Harbor add-ons.

n If the TCA cert-manager add-on is enabled in the source cluster and a CNF is configured to use
this add-on, the cert-manager service does not renew certificates requested by this CNF after
restoration. Remedy and reconfigure the CNF from TCA to generate the missing resources
after the restoration process.

n If the load-balancer-and-ingress-service add-on is enabled in the source cluster, use a new
service engine group setting in the target cluster. Refer to the add-on configuration for
load-balancer-and-ingress-service (also known as AKO).

n If the TCA add-on load-balancer-and-ingress-service is enabled in the source cluster
and a CNF is defined to create the Kubernetes resources gatewayclasses.networking.x-k8s.io
or gateways.networking.x-k8s.io in the Helm Chart, CNF resources in the restored
namespaces are in the Pending state after restoration is complete. Recreate the resources
in the restored cluster with the new service engine group setting. It is recommended to define
these resources in the TCA add-on instead.

2 Restore the workload to the new cluster .

a Log in to the VMware Telco Cloud Automation web interface.

b Navigate to Infrastructure > Virtual Infrastructure.

c Open the web terminal by clicking on the Options (three dots) corresponding to the
workload cluster you want to restore and then selecting Open Terminal.

d On the Web terminal, check the service health of Velero by running the following
command:

# kubectl get pod -n velero // check pod status


# kubectl get bsl -n velero // check velero BackupStorageLocation CR

Alternatively, you can check the Velero add-on health status from the TCA UI.

e Retrieve the backup information by running the following command:

# velero backup get


f Restore to the cluster by running the following command:

# velero restore create --from-backup <example-backup>

g Check the restoration status and related CR. Wait until the processes are "Completed".

# velero restore get // check the restore status

Check the status of downloads CR if using velero-plugin-for-vsphere to backup PV data.

# kubectl get downloads -n velero // get the download-name


# kubectl get downloads <download-name> -o yaml // check the downloads status in yaml
output

Check the status of podvolumerestores CR if using Restic to backup PV data.

# kubectl get podvolumerestores -n velero // get the podvolumerestore-name


# kubectl get podvolumerestores <podvolumebackup-name> -o yaml // check the
podvolumerestores status in yaml output

Note If the Network Function pod requires late binding for nodepool VMs, the restored
pods might be in Pending status. Follow Remediate Network Functions to heal.

What to do next

Remediate Network Functions.


Remediate Network Functions
After restoring the backup to a new workload cluster, manually remediate each network function
deployed from the source cluster to the target workload cluster.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Inventory > Network Function.

3 Click on the Options (three dots) corresponding to the network function that you want to
remediate and select Remedy.

Note The Remedy option is available only if Cloud-Native Network Function and Network
Function are instantiated.

4 Click Continue if you have already restored the cluster via Velero successfully.

5 In the Create Network Function Instance window, enter the following details under Inventory
Detail:

n Name - Enter a different name for the new network function instance.

n Description (Optional) - Enter a description for the network function.


n Select Cloud - Select a cloud from your network on which you can instantiate the network
function.

Note You can select the node pool only if the network function instantiation requires
infrastructure customization.

n Tags (Optional) - Select the key and value pairs from the drop-down menus.

6 In the Review tab, review your configuration and click on REMEDY.

7 (Optional) To track and monitor the progress of the remediation process, select Inventory >
Network Services and verify that Instantiated is displayed in the State column.

Note In the State column, if Instantiated is displayed, it indicates that the remediation
process is completed successfully and the network function is recovered and ready for use.

Backing up and Restoring in the same Workload Cluster


You can back up and restore specific namespaces in the same workload cluster.

Note
n Only v2 workload clusters are supported.

n Only persistent volumes created through vSphere CSI can be backed up.

n Backup and restore of the namespaces listed in TCA_VELERO_EXCLUDE_NAMESPACES (velero,
tkg-system, tca-system, tanzu-system, kube-system, tanzu-system-monitoring,
tanzu-system-logging, cert-manager, avi-system) is not supported.

n Some Kubernetes resources in the cluster are not backed up. If an attempt is made to
back up or restore these Restricted Resources, the backup or restore is marked as
"Partially Failed".

Backup Specific Namespaces


You can use Velero to back up the Kubernetes resources, including persistent volume data, under
specific namespaces.

Prerequisites

Install and Configure Velero Add-On for the Workload Clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure.

3 Open the web terminal by clicking the Options (three dots) corresponding to the workload
cluster you want to back up and then selecting Open Terminal.


4 On the Web terminal, check the service health of Velero by running the following command:

# kubectl get pod -n velero


# kubectl get bsl -n velero

5 Set an environmental variable to exclude the cluster resources from backing up.

# export TCA_VELERO_EXCLUDE_RESOURCES="issuers.cert-manager.io,certificates.cert-manager.io,certificaterequests.cert-manager.io,gateways.networking.x-k8s.io,gatewayclasses.networking.x-k8s.io"

6 Back up specific namespaces.

# velero backup create <example-backup> --exclude-resources $TCA_VELERO_EXCLUDE_RESOURCES --include-namespaces <example-namespaces-by-comma>

The above backup command uses velero-plugin-for-vsphere by default to back up the
Persistent Volumes created with the vSphere CSI plugin. If the cluster has Persistent
Volumes created with the nfs-client plugin that must be backed up, you have two options:

Option 1: Annotate the pods that mount volumes backed by Persistent Volumes created with the
nfs-client storage class so that they are backed up using Restic.

# kubectl -n <pod_namespace> annotate pod/<pod-name> backup.velero.io/backup-volumes=<volume-name1>,<volume-name2>,…
# velero backup create <example-backup> --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES --include-namespaces <example-namespaces-by-comma>

This annotation can also be provided in a pod template spec if you use a
controller to manage your pods. To quickly set the annotation on a pod template
(.spec.template.metadata.annotations) without modifying the full manifest, use 'kubectl patch'
command. For example:

# kubectl -n <pod_namespace> patch deployment <pod_controller_name> -p '{"spec": {"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"<volume-name1>,<volume-name2>"}}}}}'

Option 2: Change the default PV backup plugin to Restic. This allows Restic to back up all the
types of Persistent Volumes, including the ones created with the vSphere CSI plugin.

# velero backup create <example-backup> --default-volumes-to-restic --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES --include-namespaces <example-namespaces-by-comma>

7 Check the backup status and related CRs and wait until the processes are "Completed".

# velero backup get // check the backup status

Check the status of uploads CR if using velero-plugin-for-vsphere to back up PV data.

# kubectl get uploads -n velero // get the upload-name


# kubectl get uploads <upload-name> -o yaml // check the uploads status in yaml output


If you annotate pods and use Restic to back up PV data, check the status of
podvolumebackups.

# kubectl get podvolumebackups -n velero // get the podvolumebackup-name


# kubectl get podvolumebackups <podvolumebackup-name> -o yaml // check the
podvolumebackups status in yaml output

What to do next

Restore Specific Namespaces

Restore Specific Namespaces


You can use Velero to restore specific namespaces in the same workload cluster.

Note
n If TCA add-on load-balancer-and-ingress-service is enabled in the source cluster
and a CNF is defined to create Kubernetes resources gatewayclasses.networking.x-
k8s.io or gateways.networking.x-k8s.io in the Helm Chart, CNF pods in the restore
namespaces will be in "Pending" state after restoration is complete. Recreate the resources in
the restored cluster with new service engine group setting. It is recommended to define these
resources in the TCA add-on instead.

n If the TCA cert-manager add-on is enabled in the cluster and a CNF is configured to use this
add-on, the cert-manager service can no longer renew certificates requested by this CNF
after the namespace where the CNF resides is restored. In such a case, reconfigure the CNF from
TCA to generate the missing resources after the restore process.

Prerequisites

Install and Configure Velero Add-On for the Workload Clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure.

3 Open the web terminal by clicking the Options (three dots) corresponding to the workload
cluster you want to restore and then selecting Open Terminal.

4 On the Web terminal, check the service health of Velero by running the following command:

# kubectl get pod -n velero


# kubectl get bsl -n velero

5 Get backup information.

# velero backup get


6 Delete the namespaces that will be restored from a backup.

# kubectl delete namespaces <example-namespaces-by-comma>

7 Restore specific namespaces to the cluster.

# velero restore create --from-backup <example-backup>

8 Check backup status and related CR. Wait until the processes are "Completed".

# velero restore get // check the restore status

Check the status of downloads CR if using velero-plugin-for-vsphere to back up PV data.

# kubectl get downloads -n velero // get the download-name


# kubectl get downloads <download-name> -o yaml // check the downloads status in yaml
output

Check the status of podvolumerestores CR if using Restic to back up PV data.

# kubectl get podvolumerestores -n velero // get the podvolumerestore-name


# kubectl get podvolumerestores <podvolumebackup-name> -o yaml // check the
podvolumerestores status in yaml output

Backup Specific CNF Persistent Volumes


You can use Velero to back up specific CNF persistent volumes in the same workload cluster.

Prerequisites

Install and Configure Velero Add-On for the Workload Clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure.

3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to back up and then selecting Open Terminal.

4 On the web terminal, check the service health of Velero by running the following command:

# kubectl get pod -n velero


# kubectl get bsl -n velero

5 Back up the CNF's specific persistent volumes matching the label selector.

1. First, label the PVC and PV to be backed up:
# kubectl -n <cnf-namespaces> label pvc <example-pvc> <key>=<value>
# kubectl label pv <example-pv> <key>=<value>
2. Back up the Kubernetes resources matching the label selector:
# velero backup create <example-pv-backup> --selector <key>=<value>


You can also use the --include-resources flag to back up all persistent volumes under specific
namespaces:

# velero backup create <example-pv-backup> --include-resources pvc,pv --include-namespaces <cnf-namespaces-by-comma>

For more resource filtering methods, refer to the Velero resource filtering documentation.

The backup command above uses velero-plugin-for-vsphere by default to back up the
Persistent Volumes created with the vSphere CSI storage class. If there are Persistent Volumes
created with the nfs-client storage class to be backed up, change the default PV backup plugin to
Restic with the Velero CLI flag --default-volumes-to-restic. This uses Restic to back up all kinds
of Persistent Volumes, including the ones created with the vSphere CSI storage class.

1. First, label the PVC and PV. For an NFS PV, also label the pod that mounts this PV:
# kubectl -n <cnf-namespaces> label pvc <example-pvc> <key>=<value>
# kubectl label pv <example-pv> <key>=<value>
# kubectl -n <cnf-namespaces> label pod <example-pod> <key>=<value>
2. Back up the Kubernetes resources matching the label selector:
# velero backup create <example-pv-backup> --selector <key>=<value> --default-volumes-to-restic

You can also choose to back up all the persistent volumes under specific namespaces using the
Restic plugin:

# velero backup create <example-pv-backup> --include-resources pvc,pv,pod --include-namespaces <cnf-namespaces-by-comma> --default-volumes-to-restic

6 Check backup status and related CRs and wait until the processes are "Completed".

# velero backup get // check the backup status

Check the status of uploads CR if using velero-plugin-for-vsphere to back up PV data.

# kubectl get uploads -n velero // get the upload-name


# kubectl get uploads <upload-name> -o yaml // check the uploads status in yaml output

If you annotate pods and use Restic to back up PV data, check the status of the podvolumebackups
CR.

# kubectl get podvolumebackups -n velero // get the podvolumebackup-name


# kubectl get podvolumebackups <podvolumebackup-name> -o yaml // check the
podvolumebackups status in yaml output

What to do next

Restore Specific CNF Persistent Volumes

Restore Specific CNF Persistent Volumes


You can use Velero to restore specific CNF persistent volumes in the same workload cluster.


Prerequisites

Install and Configure Velero Add-On for the Workload Clusters.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure.

3 Open the web terminal by clicking Options (three dots) corresponding to the workload
cluster you want to restore and then selecting Open Terminal.

4 On the web terminal, check the service health of Velero by running the following command:

# kubectl get pod -n velero


# kubectl get bsl -n velero

5 Get backup information.

# velero backup get

6 Delete the CNF application Kubernetes resources and the old PVs that will be restored from a backup.
Note: Refer to the CNF's inventory page in the TCA UI to determine which kind of controller the
CNF is using. For example, when the CNF's controller is a Deployment, delete the Deployment
CR:

# kubectl -n <cnf-namespaces-by-comma> delete deployment <example-cnf-deployment-resources>


# kubectl -n <cnf-namespaces-by-comma> delete pvc <example-cnf-pvc-bounding-to-pv>

7 Restore the PVC and PV. Note: "pod" must also be included if it is an NFS PV restore.

# velero restore create --from-backup <some-backup-include-pv/pvc/pod> --include-resources pvc,pv(,pod)

8 Check the restore status and related CRs and wait until the processes are "Completed".

# velero restore get // check the restore status

If the restoration contains PV backups using velero-plugin-for-vsphere, check the status of the
downloads CR.

# kubectl get downloads -n velero // get the download-name


# kubectl get downloads <download-name> -o yaml // check the downloads status in yaml
output

If the restoration contains PV data backup using Restic, check the status of
podvolumerestores CR.

# kubectl get podvolumerestores -n velero // get the podvolumerestore-name


# kubectl get podvolumerestores <podvolumebackup-name> -o yaml // check the
podvolumerestores status in yaml output


9 Reconfigure the CNF from the TCA UI with an empty JSON file to override values. This creates a
new deployment using the restored PVs. If new pods are created, delete the legacy
ones manually. Note: During the "Overrides values" step, upload an empty JSON file with the
content "{}" or an empty YAML file with the content "---".

Manage your Backup Schedules, Retention, and Deletion


You can schedule workload cluster backup operations to run at a specific time. You can also
change the backup retention period and delete a backup.

Schedule a Backup
You can set up the backup schedule to run at a specific time. The schedule time format is defined by a cron
expression. For example, the following command creates a backup that runs at 3:00 AM every day.

# velero schedule create <example-schedule> --exclude-resources=$TCA_VELERO_EXCLUDE_RESOURCES --include-namespaces <example-namespaces-by-comma> --schedule="0 3 * * *"

Delete a Schedule
Use the following command to delete schedules.

# velero schedule delete <schedule-names>

Note Deleting the backup schedule won't delete the backups created by the schedule.

Set up a Backup Retention


The default backup retention period is managed by the Time to Live (TTL) value. By default,
the TTL value is 30 days (720 hours). To change the TTL value, run the velero command by
specifying the value of hours, minutes, and seconds in the form of --ttl 24h0m0s:

# velero backup create <example-backup> --ttl 24h0m0s

Delete a Backup
You can delete a backup resource including all the data in the object storage by running the
following command:

# velero backup delete <example-backup>

Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in
a pod while that pod is being backed up. There are two ways to specify hooks: annotations on
the pod itself, and in the Backup spec. For more information, refer to the official Velero documentation.
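
As a rough illustration of the annotation-based approach, the following Velero annotations ask
Velero to run a command inside a pod before it is backed up. The namespace, pod name, container
name, and command shown here are placeholders for illustration only:

# kubectl -n <pod_namespace> annotate pod/<pod-name> \
    pre.hook.backup.velero.io/container=<container-name> \
    pre.hook.backup.velero.io/command='["/bin/sh", "-c", "sync"]' \
    pre.hook.backup.velero.io/timeout=60s

Equivalent post.hook.backup.velero.io annotations exist for commands that must run after the
backup of the pod completes.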

Adjust Velero Memory Limits (If Necessary)


You can increase the memory limits and requests settings of Velero.


1. Run the following command.

# kubectl edit deployment/velero -n velero

2. Change the memory limits and requests settings from the defaults of 512Mi (limit) and 128Mi (request)
to 512Mi and 256Mi.

ports:
- containerPort: 8085
  name: metrics
  protocol: TCP
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File

Install and Configure an S3-Compatible Object Storage


You can install and configure MinIO, an S3-compatible storage service that runs locally on the
Linux VM. However, apart from MinIO you can also install and configure any other S3-compatible
object storage service as the destination for Kubernetes workload backups.

For information on the S3-compatible object storage services that Velero supports, see S3-
Compatible object store providers.

Prerequisites

n Your environment has a Linux VM with sufficient storage to install MinIO and store backups.
The MinIO service does not operate if the disk has less than 1 GB of free space.

n Target workload clusters can access the MinIO Linux VM.

Procedure

1 Download MinIO binary on the VM.

# curl -O https://dl.minio.io/server/minio/release/linux-amd64/minio

2 Grant permissions to run the MinIO binary.

# chmod +x minio

3 Move the MinIO binary to the directory /usr/local/bin.

# mv minio /usr/local/bin


4 Create a new minio-user account to run the MinIO service.

# useradd -r minio-user -s /sbin/nologin

5 Create a new folder to store MinIO data files and grant ownership of the folder to the minio-user account.

# mkdir -p /usr/local/share/minio
# chown minio-user:minio-user /usr/local/share/minio

6 Create a new folder for MinIO configuration files and grant ownership of the folder to the minio-user
account.

# mkdir -p /etc/minio
# chown minio-user:minio-user /etc/minio

7 Create a new file for default configurations of MinIO service and enter the details.

# vim /etc/default/minio

Option            Description

MINIO_VOLUMES     Stores the MinIO data files.

MINIO_OPTS        Stores the server configurations.
                  n Use the -C parameter to set the folder for MinIO configuration files.
                  n Use the --address parameter to set the IP address and port that MinIO
                    binds to.

                  Note If the IP address is not specified, MinIO binds to every address
                  configured on the server. Therefore, it is recommended to specify the IP
                  address that the Velero add-on can connect to. The default port is 9000.

Access ID         ID to access the backup storage.

Access Key        Password to access the backup storage.

The following is sample content for the file /etc/default/minio:

MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address 10.196.46.27:9000"
MINIO_ACCESS_KEY="minio"
MINIO_SECRET_KEY="minio123"

8 Download the MinIO service descriptor file.

# curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service

9 Move minio.service to the folder /etc/systemd/system.

# mv minio.service /etc/systemd/system


10 Reload all the systemd units.

# systemctl daemon-reload

11 Enable MinIO to start on booting the system.

# systemctl enable minio

12 Start the MinIO server.

# systemctl start minio

For more information on installing and configuring MinIO storage service, see MinIO
Documentation.

13 If necessary, enable Transport Layer Security (TLS) encryption of incoming and outgoing
traffic on MinIO. For more information, see Enabling TLS.
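
After completing these steps, you can optionally verify that the MinIO service is running and
reachable before configuring the Velero add-on. A minimal check, assuming the sample address
used earlier in this procedure:

# systemctl status minio                                      // confirm the service is active
# curl -I http://10.196.46.27:9000/minio/health/live          // expect an HTTP 200 response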

Install Standalone Velero Client


You can install the standalone Velero client binary for cluster backup and restore operations.

Prerequisites

Before using the standalone Velero client, download the kubeconfig file of the workload cluster.
Refer to Access Kubernetes Clusters Using kubeconfig.

Procedure

1 Download the supported version of the signed Velero binary for vSphere with Tanzu from the
VMware Product Downloads Page.

For TCA 2.2, the supported TKG version is 1.6.1.

Note Ensure that you are using Velero binary signed by VMware so that you are eligible for
support from VMware.

2 Open a command line and change the directory to the Velero CLI download.

3 Unzip the downloaded file. For example: # gunzip velero-linux-vX.X.X_vmware.1.gz

4 Grant permissions to run the Velero CLI.

# chmod +x velero-linux-vX.X.X_vmware.1

5 Move the Velero CLI to the following system path for global availability.

# cp velero-linux-vX.X.X_vmware.1 /usr/local/bin/velero

6 Verify the installation by using the following command:

# velero version


7 Append the --kubeconfig option for every velero command, for example:

# velero --kubeconfig ./kubeconfig.yaml backup get

Remotely Accessing Clusters From VMware Telco Cloud Automation

VMware Telco Cloud Automation now provides secure options for accessing Kubernetes clusters.
These options ensure that only those users with the required permissions have access to the
clusters.

Earlier, to access a cluster, a user logged in as a Cluster API Provider vSphere (CAPV) user.
The downside to this method was that it provided the user with unrestricted access across all
clusters.

Now, a user can remotely access Kubernetes clusters from VMware Telco Cloud Automation
using one of the following methods:

n Access using the embedded SSH terminal.

n Access using an external SSH terminal with a one time generated token from VMware Telco
Cloud Automation.

n Download and use the kubeconfig file provided by VMware Telco Cloud Automation. This
file contains as endpoint the external address of VMware Telco Cloud Automation and the
token for accessing the Kubernetes cluster.

This way, only those users who have the required permissions can access the cluster and
perform only those operations that are allowed based on their privileges.

Access Kubernetes Clusters Using kubeconfig


You can access Kubernetes clusters from your local system by downloading the kubeconfig
file and using it in a Kubernetes client (kubectl). This file contains the endpoint IP address
of the server and the token for establishing a REST connection between VMware Telco Cloud
Automation and the Kubernetes cluster.

Prerequisites

1 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the VIM.

3 Click the ⋮ menu and select Download Kube Config.

This action downloads the kubeconfig.yaml file to your local system.


4 Use kubectl to interact with the cluster using the downloaded kubeconfig.yaml file.

The following is an example for listing pods:

kubectl --kubeconfig ./kubeconfig.yaml get pods

Results

You can now perform cluster operations based on your user privileges.

Example: Sample kubeconfig.yaml file


#Token ID: <token_id>
#Username: <username@domain>
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate>
    server: https://<ip.address>:8500
  name: target
contexts:
- context:
    cluster: target
    user: target
  name: target
current-context: target
kind: Config
preferences: {}
users:
- name: target
  user:
    token: VIM_<token>

Access a Remote Kubernetes Cluster Using an External SSH Client


You can generate login credentials from VMware Telco Cloud Automation and use an external
SSH client to access the Kubernetes cluster using these credentials.

Prerequisites

Ensure that you have installed an external SSH client on your local system.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the VIM.

3 Click the ⋮ menu and select Show Login Credentials.

VMware Telco Cloud Automation generates a one-time token, user name, and password.

Note The expiration time for the token is eight hours.


4 Using these login credentials, you can SSH into a Kubernetes cluster and perform cluster
operations based on your user privileges. The SSH connection is established to the endpoint
of VMware Telco Cloud Automation on port 8501.

For example:

ssh <username>@<tcaIp> -p 8501

Access a Remote Kubernetes Cluster Using the Embedded SSH Client

You can access a remote Kubernetes cluster using the embedded SSH terminal within VMware
Telco Cloud Automation.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the VIM.

3 Click the ⋮ menu and select Open Terminal.

Results

A terminal opens and VMware Telco Cloud Automation connects with the Kubernetes cluster.
You can now perform Kubernetes cluster operations based on your user privileges.

Access Kubernetes Cluster when VMware Telco Cloud Automation is Down

To access a Kubernetes cluster even when the VMware Telco Cloud Automation Manager or
Control Plane is down, download the recovery kubeconfig file from VMware Telco Cloud
Automation. Ensure that you download and store the recovery kubeconfig file securely prior
to the unavailability of VMware Telco Cloud Automation.

To access the Kubernetes cluster:

Prerequisites

1 Ensure that you have installed an external SSH client on your local system.

2 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.

To download the recovery kubeconfig file:

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Virtual Infrastructure and select the VIM.

3 Click the ⋮ menu and select Download Recovery Kube Config.

Procedure

u Use kubectl with the downloaded recovery kubeconfig file.


Results

Using the recovery kubeconfig file, kubectl establishes a remote connection to the cluster and lists
the pods that are running on it. You can now perform cluster operations on them.
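
For example, assuming the downloaded file is named recovery-kubeconfig.yaml (the file name here
is illustrative), you can point kubectl at it directly:

kubectl --kubeconfig ./recovery-kubeconfig.yaml get pods -A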



Kubernetes Cluster Upgrade Flow

This chapter describes the upgrade sequence to follow after you upgrade VMware Telco Cloud Automation.

VMware Telco Cloud Automation is integrated with VMware Tanzu Kubernetes Grid (TKG).
A new version of TKG requires you to upgrade the management cluster. After upgrading the
VMware Telco Cloud Automation version, perform the following steps in the sequence provided.

1 Upgrade the management cluster. For details, see Upgrade Management Kubernetes Cluster
Version. For implications of not upgrading the management cluster, see Implications of Not
Upgrading Management Cluster.

2 Upgrade the workload cluster. For details, see Upgrade Management Kubernetes Cluster
Version. For implications of not upgrading the workload cluster, see Implications of Not
Upgrading Workload Cluster.

Note
n For details on supported versions, see Supported Features on Different VIM Types.

This chapter includes the following topics:

n Upgrade Validations

n CaaS Upgrade Backward Compatibility

Upgrade Validations
From version 2.0, VMware Telco Cloud Automation has automated the upgrade validations. The
VMware Telco Cloud Automation performs the following validations when upgrading Kubernetes
clusters to a newer version.

Pre-upgrade Validations for Management Cluster


When upgrading the Management cluster, VMware Telco Cloud Automation performs the
following validations:

n Whether the cluster is reachable.

n Whether the Control Plane and Worker nodes are in a healthy state and fail the upgrade if
they are in the Not Ready state.


n Whether key deployment parameters, such as folder names and network paths, remain
unchanged (not renamed or moved).

n Whether the operators are running and display a warning if they are not.

Post-upgrade Validations for Management Cluster


After you upgrade a Management cluster:

n Only the Management cluster add-ons are upgraded.

n The Management cluster upgrade is successful even when the underlying workload clusters
are down.

n If the operators are not the latest versions, the corresponding workload clusters display a
warning for upgrading the add-ons. To upgrade Workload cluster add-ons individually, use
the Upgrade Add-Ons option.

Pre-upgrade Validations for Workload Cluster


When upgrading the Workload cluster, VMware Telco Cloud Automation performs the following
validations:

n Whether the cluster is reachable.

n Whether the control planes are in a healthy state and fail the upgrade if they are in the Not
Ready state.

n Whether the worker nodes are in a healthy state and show a warning if they are in the Not
Ready state.

n Whether key deployment parameters, such as folder names and network paths, remain
unchanged (not renamed or moved).

n Whether the operators are running and display a warning if they are not.

n Whether the Management cluster is reachable and at least one Worker node is in the Ready
state.

n Whether the vmconfig-operator on the Management cluster is running. Cluster upgrade fails
if vmconfig-operator is not running.

Post-upgrade Validations for Workload Cluster


n If any cell site is down during an upgrade, the VMware Tanzu Kubernetes Grid CLI times out
after upgrading the control plane nodes and healthy worker nodes. At this point, you can still
perform life-cycle management operations.

n After the VMware ESXi connects, VMware Telco Cloud Automation upgrades the failed
nodes.

n After you remediate any environment issue, click Retry for completing the upgrade.


CaaS Upgrade Backward Compatibility


This section shows the compatibility of the pre-upgrade and post-upgrade management and
workload clusters after the upgrade to VMware Telco Cloud Automation 2.3.

Management Clusters Compatibility


For management clusters, the following table lists the cluster compatibility.

Attention The VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation
Control Plane are mandatorily upgraded from 2.2.x to 2.3. The management cluster supports only
V1 version.

Cluster Operations in UI                TCA 2.2.x Management Cluster (v1.24.10)   TCA 2.3.0 Management Cluster (v1.24.10)
                                        [Before upgrade]

Create New Management Cluster           No                                        Yes
Edit Cluster Configuration              No                                        Yes
Edit Control Plane Node Configuration   No                                        Yes
Edit Worker Node Configuration          No                                        Yes
Upgrade Add-ons                         No                                        Yes
Change Password                         No                                        Yes
Copy Spec and Deploy New                No                                        Yes
Delete Management Cluster               Yes                                       Yes
Run Diagnostics                         Yes                                       Yes
Collect Log Support Bundle              Yes                                       Yes
Upgrade Management Cluster              Yes (v1.24.10)                            NA

V1 Workload Cluster Compatibility


For V1 workload clusters, the following table lists the cluster compatibility.

Attention The Management cluster is mandatorily upgraded from 1.23.10 to 1.24.10. The
following table lists the workload cluster compatibility after management cluster upgrade to
v1.24.10.


TCA 2.3.0 Management Cluster (v1.24.10)

                                        TCA 2.2.x Workload Cluster (TKG 1.6.1)      TCA 2.3.0 Workload Cluster (TKG 2.1.1)
                                        [Before upgrade]
Cluster Operations in UI                v1.21.14     v1.22.13     v1.23.10          v1.22.17     v1.23.16     v1.24.10

Create New Workload Cluster             No           No           No                Yes          Yes          Yes

Edit Workload Cluster Configuration     No           Yes          Yes               Yes          Yes          Yes
(CNI: antrea/calico, multus;
CSI: vsphere-csi, nfs-client;
Tool: nodeconfig-operator, helm;
Syslog Servers: syslog; Harbor)

Edit Control Plane Node Configuration   No           Yes          Yes               Yes          Yes          Yes

Edit Worker Node Configuration:
  Add New NodePool                      No           Yes          Yes               Yes          Yes          Yes
  Edit nodePool                         No           Yes          Yes               Yes          Yes          Yes
  Delete nodePool                       Yes          Yes          Yes               Yes          Yes          Yes
  Maintenance Mode                      Yes          Yes          Yes               Yes          Yes          Yes
  Run Diagnosis

Upgrade Workload Cluster                Yes          Yes          Yes               Yes          Yes          NA
                                        (v1.22.17)   (v1.22.17,   (v1.23.16,        (v1.23.16)   (v1.24.10)
                                                     v1.23.16)    v1.24.10)

Upgrade Add-ons                         No           Yes          Yes               Yes          Yes          Yes

Transform Workload Cluster:
  Transform V1 Cluster to V2            No           No           No                Yes          Yes          Yes

Change Password                         No           Yes          Yes               Yes          Yes          Yes

Copy Spec and Deploy New Cluster        No           Yes          Yes               Yes          Yes          Yes

Delete Current Workload Cluster         Yes          Yes          Yes               Yes          Yes          Yes

Run Diagnosis                           Yes          Yes          Yes               Yes          Yes          Yes

Collect Log Bundle                      Yes          Yes          Yes               Yes          Yes          Yes

V2 Workload Cluster Compatibility


For V2 workload clusters, the following table lists the cluster compatibility.

Attention The Management cluster is mandatorily upgraded from 1.23.10 to 1.24.10. The
following table lists the workload cluster compatibility after management cluster upgrade to
v1.24.10.

TCA 2.3.0 Management Cluster (v1.24.10)

                                        TCA 2.2.x Workload Cluster (TKG 1.6.1)      TCA 2.3.0 Workload Cluster (TKG 2.1.1)
                                        [Before upgrade]
Cluster Operations                      v1.21.14     v1.22.13     v1.23.10          v1.22.17     v1.23.16     v1.24.10

Create New Workload Cluster             No           Yes (calico) Yes (calico)      Yes          Yes          Yes
                                                     No (antrea)  No (antrea)

Edit Workload Cluster:
  Edit Control Plane Node               No           Yes          Yes               Yes          Yes          Yes
  Configuration

Upgrade Workload Cluster                Yes          Yes          Yes               Yes          Yes          NA
                                        (v1.22.17)   (v1.22.17,   (v1.23.16,        (v1.23.16)   (v1.24.10)
                                                     v1.23.16)    v1.24.10)

Node Pools:
  Add New NodePool                      No           Yes          Yes               Yes          Yes          Yes
  Edit nodePool                         No           Yes          Yes               Yes          Yes          Yes
  Delete nodePool                       Yes          Yes          Yes               Yes          Yes          Yes
  Deploy Similar Node Pool              No           Yes          Yes               Yes          Yes          Yes
  Run Diagnosis                         Yes          Yes          Yes               Yes          Yes          Yes

Add-Ons:
  Deploy Add-Ons                        No           Yes          Yes               Yes          Yes          Yes
  (CNI: antrea/calico, multus;
  CSI: vsphere-csi, nfs-client;
  Monitoring: prometheus, fluent-bit;
  Networking: ako;
  System: Harbor, Systemsettings;
  TCA-Core-Addon: nodeconfig-operator;
  Tool: helm)
  Comment: V2 workload cluster add-ons upgrade automatically when the workload cluster consumes a new TBR.
  Edit Add-Ons                          No           Yes          Yes               Yes          Yes          Yes
  Delete Add-Ons                        No           Yes          Yes               Yes          Yes          Yes

Copy Spec and Deploy New Cluster        No           Yes          Yes               Yes          Yes          Yes

Delete Current Workload Cluster         Yes          Yes          Yes               Yes          Yes          Yes

Run Diagnosis                           Yes          Yes          Yes               Yes          Yes          Yes

Collect Log Bundle                      Yes          Yes          Yes               Yes          Yes          Yes



Running Cluster Diagnosis

As a cluster administrator or VNF administrator, you can troubleshoot cluster lifecycle and
CNF customization issues on the management cluster and the associated workload clusters.

You can run the following types of diagnosis:

n Kubernetes core diagnosis

n TKG component diagnosis

n TCA component diagnosis

n Container Network Interface (CNI ) and Container Storage Interface (CSI) diagnosis

n Advanced diagnosis

n Node diagnosis

Note The diagnosis takes approximately 30 minutes to complete. If another diagnosis is submitted
within that time, the data is overwritten with the result of the last submitted diagnosis.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Infrastructure > CaaS Infrastructure.

The CaaS Infrastructure page is displayed.

3 Click the Options (⋮) symbol corresponding to the Kubernetes cluster on which you want to
run the diagnosis.

4 Click Run Diagnosis.

The Run Diagnosis window is displayed.

5 (Optional) Select one or more test cases that you want to apply for the cluster diagnosis.

Note If you don't select any test case, the system runs a default diagnosis on the cluster.

6 Click START DIAGNOSIS.


7 Click the cluster name and navigate to the Diagnosis tab to monitor the progress of the
diagnosis.

After the diagnosis is complete, the DOWNLOAD option is available to download the
diagnosis report and perform a detailed analysis.



Managing Network Function Catalogs

A network function, as defined by ETSI Industry Specification Group (ISG), is a functional building
block within a network infrastructure. It has well-defined external interfaces and a well-defined
functional behavior.

This chapter includes the following topics:

n Onboarding a Network Function

n Customizing Network Function Infrastructure Requirements

n Download a Network Function Package

n Edit Network Function Catalog

n Role-based Access Control to CNFs

Onboarding a Network Function


A network function descriptor describes the instantiation parameters and operational behaviors
of the network functions. It contains key requirements for onboarding and managing the life
cycle of a network function. Onboarding a network function includes uploading a network
function package to the catalog, and creating or editing a network function descriptor draft.

Upload a Network Function Package


Using VMware Telco Cloud Automation, you can upload a SOL001/SOL004 compliant Virtual
Network Function Descriptor (VNFD) and Cloud Service Archive (CSAR) package. The system
parses and validates the configuration, and presents the topology in a visual viewer. It then
persists the entry into the Network Function Catalog.
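
For orientation, a SOL004 CSAR package is a ZIP archive that contains a TOSCA-Metadata/TOSCA.meta
file pointing to the main descriptor, a definitions file (the VNFD), and optional artifacts. The
following TOSCA.meta fragment is only an illustrative sketch with placeholder values; the exact
contents depend on the vendor package:

TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-By: <vendor>
Entry-Definitions: Definitions/<descriptor-name>.yaml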

Prerequisites

n Verify that your VNFD complies with the following standards:

n Must be in the CSAR format.

n Must comply with the SOL001 standard or the SOL004 standard.

n Must comply with TOSCA Simple Profile in YAML version 1.2 or TOSCA Simple Profile for
NFV version 1.0.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function and click Onboard.

The Onboard Network Function page is displayed.

3 Select Upload Network Function Package.

4 Enter a name for your network function.

5 (Optional) To add a tag, select the key and value pairs from the drop down menus. You can
add more than one tag.

6 Click Browse and select the network function descriptor (CSAR) file.

7 Click Upload.

Results

The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.

What to do next

n To instantiate the network function, see Instantiate a Virtual Network Function.

n To create a network service that includes the network function, see Design a Network Service
Descriptor.

n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.

n To add or remove tags for your network function, select the desired network function and
click the Edit Tags icon.

n To remove a network function from the catalog, stop and delete all instances using the
network function. Delete all the Network Service catalogs that are using the network function.
Then select the function in the catalog and click Delete.

Designing a Network Function Descriptor


You can create ETSI-compliant network functions using VMware Telco Cloud Automation. The
Network Function Designer is a visual design tool within VMware Telco Cloud Automation that
generates SOL001-compliant TOSCA descriptors based on your design.
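
As a rough illustration of the kind of output the designer produces, the following SOL001-style
TOSCA fragment sketches one VDU and one internal virtual link. The node names, image name, and
sizing values are placeholders, and the actual descriptor generated by the designer contains
additional required fields:

topology_template:
  node_templates:
    sample_vnf:
      type: tosca.nodes.nfv.VNF             # network function properties entered in the designer
    vdu1:
      type: tosca.nodes.nfv.Vdu.Compute     # one VM (VDU) with its image and sizing
      properties:
        name: vdu1
        sw_image_data:
          name: sample-vm-template          # must match a VM template name on the vCenter Server
      capabilities:
        virtual_compute:
          properties:
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_memory:
              virtual_mem_size: 4 GB
    internal_vl:
      type: tosca.nodes.nfv.VnfVirtualLink  # internal network (virtual link)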

Design a Virtual Network Function Descriptor


A Virtual Network Function Descriptor (VNFD) file describes the instantiation parameters and
operational behaviors of the VNFs. You can design SOL001-compliant VNFDs using the Network
Function Designer tool in VMware Telco Cloud Automation.


Prerequisites

Add a cloud to your virtual infrastructure.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 From the left navigation pane, go to Catalog > Network Function.

3 Click Onboard.

The Onboard Network Function page is displayed.

4 Select Design Network Function Descriptor.

5 Name - Enter a unique name for your VNF descriptor.

6 Tags (Optional)- Enter the tags to associate your VNF descriptor with.

7 Type - Select the network function type as Virtual Network Function.

8 Click Design.

The Network Function Designer page is displayed.

9 In the General Properties tab, enter the following information:

n Description (Optional) - A general description about the network function.


n Version - The version of the network function TOSCA file. This text box is not editable.

a Under Network Function Properties, enter information for the following fields:

n Descriptor ID - The descriptor ID is system generated and not editable.

n Descriptor Version - Enter the descriptor version.

n Provider - Enter the company name of the provider.

n Provider Name - Enter the company name of the vendor.

n Product Name - Enter the product name of the descriptor.

n Version - Enter the product version.

n Software Version - Enter the software version.

b Under Available Operations, select the life-cycle management operations to be made


available for your VNF. Your users can run only those operations that are enabled here.

n Heal

n Scale

n Scale To Level

n Workflow

n Operate

n Upgrade Package

Note The life-cycle management operations are enabled by default.

c The Draft Versions pane displays the available versions of the Network Function catalog
that you can edit. Click the Options (⋮) icon and select the draft that you want to view or
edit.


10 In the Topology tab:

a Add internal networks (Virtual Links) to your VNF by dragging the icon from the toolbar
into the design area. During Instantiation, VMware Telco Cloud Automation creates
networks for these virtual links. You can override them and select the existing networks if
necessary.

You can configure the following settings:

n Network Name

n Description

n CIDR

n DHCP

n (Optional) Gateway IP

n (Optional) IP Allocation Pools

n Start IP Address

n End IP Address

When you finish configuring the settings, click Update.

To configure additional settings for your network, click the pencil icon against the
network.

b Add virtual machines (VDU) by dragging the icon from the Toolbar into the design area.
In the Configure VDU pane, specify the following settings for each VDU:

n Name - Name of the VDU.

n Description - Description about the VDU.

n Minimum Instances - The minimum number of VDU instances.

n Image Name - The name of the VM template that is on the backing vCenter Server of
your cloud.

Note
n The image name you enter must match the virtual machine template name on the
vCenter Server.

n The image must be saved as a VM template.

n Virtual CPU - Number of virtual CPUs.

n Virtual Memory - Virtual memory size.

n Virtual Storage - Virtual storage size.

n Advanced Properties (Optional) - Enable Enhanced Platform Awareness (EPA)


capabilities such as CPU Pinning and Huge Page, and select the associated NUMA
Node ID. These properties are then automatically updated on the NFD.yaml file.


Note This option is applicable when you configure VMware Integrated OpenStack as
your VIM.

n OVF Properties (Optional) - OVF properties are the OVF inputs to provide to the VM
template. Enter the property, description, type such as string, boolean, or number,
and default value. To make this information mandatory, select the Required option.

n Connection Points - Select an internal or external connection point from the Add
Connection Point drop-down menu:

n Internal Connection Point - Links the VDU to an existing virtual link that is added
to the VNF. At least one virtual link is required for internal connection points.

n External Connection Point - A placeholder for an external virtual link that is


required during instantiation. You must provide a Connection Name that matches
the external virtual link name.

n Depends On (Optional) - Specify the VDUs to be deployed before deploying this


VDU. In a scenario where you deploy many VDUs, there can be dependencies
between VDUs regarding the order in which they are deployed. This option enables
you to specify their deployment order.

Note To enable the Depends On option, you must configure more than one VDU.

Note You must add at least one virtual link before configuring the internal connection
points for your VDUs.

You can modify VDU settings at a later stage by clicking the pencil icon on the desired
VDU.

11 In the Rules tab, add an affinity or anti-affinity rule. For more information, see Working with
Affinity Rules.

12 In the Scaling Policies tab, add scaling aspects and instantiation levels. For more information,
see Scaling Policies.

13 To design workflows, see

14 To save your descriptor as a draft and work on it later, click Save Draft. For information about
working with different draft versions, see Edit Network Function Descriptor Drafts.

15 After designing your network function descriptor, click Onboard Package.

Results

The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.

What to do next

n To instantiate the network function, see Instantiate a Virtual Network Function.


n To create a network service that includes the network function, see Design a Network Service
Descriptor.

n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.

n To add or remove tags, go to Catalog > Network Function and click the desired network
function. Then click Edit.

n To remove a network function from the catalog, stop and delete all instances using the
network function. Then select the function in the catalog and click Delete.

Design a Cloud Native Network Function Descriptor


A Cloud Native Network Function Descriptor (CNFD) file describes the instantiation parameters
and operational behaviors of the CNFs. You can design SOL001 - compliant CNFDs using the
Network Function Designer tool in VMware Telco Cloud Automation.

Prerequisites

Add a cloud to your virtual infrastructure.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function and click Onboard.

The Onboard Network Function page is displayed.

3 Select Design Network Function Descriptor.

4 Name - Enter a unique name for your VNF descriptor.

5 Tags (Optional)- Enter the tags to associate your VNF descriptor with.

6 Type - Select the network function type as Cloud Native Network Function.

7 Click Design.

The Network Function Designer is displayed.

8 In the General Info tab, enter the following information:

n Description - A general description about the network function.


n Version - The version of the network function TOSCA file. This field is not editable.
a Under Network Function Properties, enter information for the following fields:

n Descriptor ID - The descriptor ID is system generated and not editable.

n Descriptor Version - Enter the descriptor version.

n Provider - Enter the company name of the provider.

n Provider Name - Enter the company name of the vendor.

n Product Name - Enter the product name of the descriptor.

n Version - Enter the product version.

n Software Version - Enter the software version.

b Under Available Operations, select the life-cycle management operations to be made
available for your CNF. Your users can run only those operations that are enabled here.

n Heal

n Scale

n Scale To Level

n Workflow

n Operate

n Upgrade Package

Note The life-cycle management operations are enabled by default.

c The Draft Versions pane displays the available versions of the Network Function catalog
that you can edit. Click the Options (⋮) icon and select the draft that you want to view or
edit.


9 In the Topology tab:

a From the Components toolbar, drag a Helm Chart into the design area. Helm is
a Kubernetes application manager used for deploying CNFs. Helm Charts contain a
collection of files that describe a set of Kubernetes resources. Helm uses the resources
from Helm Charts for orchestrating the deployment of CNFs on a Kubernetes cluster.

b In the Configure Helm window, enter the following details:

n Name - Name of the Helm.

n Description - A brief description about the Helm.

n Chart Name - Name of the chart from the Helm repository.

n Chart Version - Version number of the chart from the Helm repository.

n Helm Version - Select the version of the Helm from the drop-down menu.

n ID - Enter the Helm ID.

n (Optional) Helm Property Overrides - Add individual instantiation properties to override, or
add a YAML file that contains a list of properties to override. To upload a YAML file, enter
the file name in the Property text box and select the Type as File. You must then upload the
YAML file during instantiation (see the example after this list).

n (Optional) Helm Scale Properties - You can add the helm properties required for
scale. You can also specify if the property is mandatory or optional for scale.

n (Optional) Depends On - Specify the Helm to be deployed before deploying this Helm.
In a scenario where you deploy many Helms, there can be dependencies between the
Helms regarding the order in which they are deployed. This option enables you to
specify their deployment order.
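
The YAML file referenced in Helm Property Overrides is a standard Helm values file, and its
keys depend entirely on the vendor chart. The following sketch is purely illustrative; every
key and value shown here is hypothetical.

# Illustrative Helm values override file; replace these keys with the
# values that your vendor chart actually exposes.
image:
  repository: registry.example.com/vendor/cnf-app
  tag: 2.0.0
replicaCount: 2
resources:
  requests:
    cpu: "2"
    memory: 4Gi

During instantiation, upload a file like this for each property that you defined with the Type
set to File.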

10 To configure infrastructure requirements, see Infrastructure Requirements Designer.

11 To design workflows, see Designing Workflows.

12 To save your descriptor as a draft and work on it later, click Save Draft. For information about
working with different draft versions, see Edit Network Function Descriptor Drafts.

13 After designing your network function descriptor, click Onboard Package.

Results

The specified network function is added to the catalog. You can now instantiate the function or
use it to create a network service.

What to do next

n To instantiate the network function, see Instantiate a Virtual Network Function.

n To create a network service that includes the network function, see Design a Network Service
Descriptor.


n To obtain the CSAR file corresponding to a network function, select the function in the
catalog and click Download.

n To add or remove tags, go to Catalog > Network Package and click the desired network
function. Then click Edit.

n To remove a network function from the catalog, stop and delete all instances using the
network function. Then select the function in the catalog and click Delete.

Infrastructure Requirements Designer


You can customize the Kubernetes cluster with custom infrastructure requirements such as
custom packages, network adapters, and kernels, using the Infrastructure Requirements designer.
These customizations are available only for CNF components.

You can use the VMware Telco Cloud Automation to customize the infrastructure requirements
of the node pools. You can define these customizations through user interface and the system
adds these customizations to the corresponding TOSCA file. For more details on the TOSCA
components, see TOSCA Components.

Prerequisites

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 Click Onboard on the Network Function Catalog page.

4 Select Design Network Function Descriptor on the Onboard Network Function page. Add
the following details:

n Name: Name of the network package.

n Tags: Associated tags for the network package. Select the key and value from the drop-
down menu.

n Network Function: Select the type of network function. For infrastructure designer, select
Cloud Native Network Function.

5 Click Design.

6 On the Network Function Designer page, click Infrastructure Requirements.

7 To design the infrastructure, enable Configure Infra Requirements.

n Network Adapter - Click Add to add a new network adapter. Enter the following details:

n Device Type - Select the value from the drop-down menu.

n Network Name - Enter the name of the network.

n Resource Name - Enter the name of the resource.

n (Optional) Target Driver - Select the value from the drop-down menu.


n Interface Name - Name of the interface for the vmxnet3 device. This property is
displayed when you select vmxnet3 in Device Type.

n (Optional) Count - Enter the number of adapters.

n PF Group - Enter the name of the PF group for which you want to add the network
adaptor.

n Shared Across NUMA - Select the button to enable or disable sharing of the devices
across NUMA.

Note Shared Across NUMA is applicable only when NUMA Alignments is enabled.

n Additional Properties - This property is displayed when you select vmxnet3 in Device
Type.

n CTX Per Dev - To configure the Multiple Context functionality for vNIC traffic
managed through Enhanced Datapath mode, select the value from the drop-down
menu. For more details, see CTX Per Dev. For more details on Enhanced Datapath
settings, see Configuration to Support Enhanced Data Path Support.

Note When you select Target Driver, the system automatically adds the required DPDK
in Kernel Modules and dependent custom packages in the Custom Packages.

n PCI Pass Through - Click Add to enter the PTP or PCI Devices.

Note When you add a PCI Pass Through device, the system automatically adds
the required Linux-rt in Kernel Type, DPDK in Kernel Modules, and dependent custom
packages in the Custom Packages.

n For the PTP devices, add the following information.

Note
n To use the PTP PHC services, enable PCI passthrough on PF0 on ESXi server
when the E810 card is configured with multiple PF groups.

n To use the PTP VF services, disable the PCI passthrough on PF0 and enable the
SRIOV on both the PFs. E810 card supports 1 VF as PTP and the other VF serves
as SRIOV VF NICs for network traffic.

a Device Type - You can select to add a PTP device or a NIC device. To use
a physical device, select NIC. To use a virtual device, select PTP from the drop-
down menu.

Note To upgrade the device type from PTP PF to PTP VF, delete the existing
PTP PF device and add the new PTP VF device. Do not change the device type
from NIC to PTP directly in the CSAR file.


b Shared Across NUMA - Select the button to enable or disable sharing of the
devices across NUMA.

Note Shared Across NUMA is applicable only when NUMA Alignments is


enabled.

c PF Group - Enter the name of the PF group for which you want to add the PCI Pass
Through device.

d Enter the details for phc2sys and ptp4l files.

n Source - To provide input through file, select File from the drop-down menu.
To provide input during network function instantiation, select Input from the
drop-down menu.

Note To select File from the Source menu, you must first upload the required
file in Artifacts folder available under the Resources tab.

n Content - Name of the file. The value is automatically displayed based on the
Source value.

e Click Add to confirm.

n For the PCI Device, add the following information.

Note
n Before adding the ACC100 Adapter PCI device, ensure that the ACC100 Adapter is
enabled in the VMware ESXi server. For details, see Configuring the ESXi Driver
for the Intel vRAN Accelerator ACC100 Adapter.

n You can add the ACC100 Adapter on the workload clusters with Kubernetes
version 1.20.5, 1.19.9, or 1.18.17. For workload cluster upgrade, see Upgrade
Management Kubernetes Cluster Version.

a Shared Across NUMA - Select the button to enable or disable sharing of the
devices across NUMA.

Note Shared Across NUMA is applicable only when NUMA Alignments is


enabled.

b Enter the name of the resource in Resource Name.

c Select the Target Driver from the drop-down menu.

Note Based on the Target driver, the system automatically adds the required
Linux in Kernel Type, pciutils and DPDK modules.

d PF Group - Enter the name of the PF group for which you want to add the PCI device.

e Add the number of PCI devices in Count.


f Click Add to confirm.


n Kernel

n Kernel Type - Select the Name and Version from the drop-down menu.

n Kernel Arguments - Click Add to add a new kernel argument. Add the Key and Value
in respective text box.

Note For hugepagesz and default_hugepagesz, you can select the value from
the drop-down menu. For other arguments, you can specify the key and value in the
respective text boxes.

n Kernel Modules - Non editable.

n Custom Packages - Click Add to add a new custom kernel package. Add the Name
and Version in the respective text box.
n Files - You can add a file for injection. Click Add, select the file from the Content drop-down
menu, and in the Path text box provide the path on the target system where the file is to be
placed.

Note To view the file in the drop-down menu, you must upload the file in the scripts
folder. You can upload only .JSON, .XML, and .conf files.

1 Click the Resources tab.

2 Click the > icon corresponding to the root folder.

3 Click the > icon corresponding to the Artifacts folder.

4 Click the + icon corresponding to the scripts folder.

5 Click Choose Files and select the file to upload.

6 Click Upload to upload the selected file.

n Services - You can add stalld and syslog-ng services.

n To add the stalld service, select stalld from the drop-down menu.

n To add the syslog-ng service, select syslog-ng from the drop-down menu. When
you select syslog-ng, the Add Service Config Files pop-up appears. Select the
required configuration files for the syslog-ng service.

n (Optional) Tuned Profiles - Enter the name of the tuned profile. You can add multiple
tuned profiles separated by commas.

Note When you add a tuned profile, the system adds the tuned package to the Custom
Packages.

n NUMA Alignments - Click the corresponding button to enable or disable the support for
NUMA alignments.


n Latency Sensitivity - You can set the latency sensitivity for high-performance profiles. Select
the value from the drop-down menu. You can select high or low. The default value is
normal.

n I/O MMU Enabled - Click the corresponding button to enable or disable the I/O MMU.

n Upgrade VM Hardware Version - Click the corresponding button to enable or disable
upgrading the hardware version of the virtual machine.

Configuring the ESXi Driver for the Intel vRAN Accelerator ACC100 Adapter
Manual process to add the ESXi driver for the Intel vRAN Accelerator ACC100 adapter.

You must enable support for the ACC100 adapter in the VMware ESXi server before you can
configure the ACC100 adapter in VMware Telco Cloud Automation.

Procedure

1 Download the .VIB file.

a Download the ibbd-pf driver from VMware Hardware Compatibility site.

b Download the async driver .VIB file.

c Copy the downloaded .VIB file to the ESXi host where the devices are present.

2 Install the .VIB file on the VMware ESXi host.

a Switch the ESXi host into maintenance mode. To do so, power down the virtual
machines running on the host, or migrate them to another host using vMotion. For
details on maintenance mode, see maintenance mode.

b Log in to the ESXi host shell, and run the following command:
# esxcli software vib install -d <full-path-to-vib> --no-sig-check

For example,

# esxcli software vib install -d <full-path-to-vib> --no-sig-check


root@plab-ran-esx3:/vmfs/volumes/5fb2f1ab-bc2690c6-0b33-40a6b70dec30/ibbd] esxcli
software vib install -v /vmfs/volumes/5fb2f1ab-bc2690c6-0b33-40a6b70dec30/ibbd/ibbd-
pf-1.0.4-1OEM.700.1.0.15843807.x86_64.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: INTC_bootbank_ibbd-pf_1.0.4-1OEM.700.1.0.15843807
VIBs Removed:
VIBs Skipped:
root@plab-ran-esx3:/vmfs/volumes/5fb2f1ab-bc2690c6-0b33-40a6b70dec30/
ibbd]


After the driver is successfully installed and loaded, verify the device with the lspci
command:

[root@plab-ran-esx3:] lspci | grep accel


0000:3b:00.0 Processing accelerators: Intel Corporation ACC FEC IP
root@plab-ran-esx3:] lspci | grep accel

3 Enable SR-IOV for the devices.

a Log into the ESXi Host UI.

b Navigate to Manage > Hardware > PCI devices.

c Find the accelerator manually. You can also search for the term FEC to find the accelerators.

d Select the device to configure.

e Click Configure SR-IOV.

f Configure the number of virtual functions required.

g Click Save.

Note Ignore the reboot message.

h Verify the SR-IOV devices with lspci.

[root@plab-ran-esx3:~] lspci | grep accel


0000:3b:00.0 Processing accelerators: Intel Corporation ACC FEC IP
0000:3c:00.0 Processing accelerators: Intel Corporation Device 0d5d
[PF_0.59.0_VF_0]
0000:3c:00.1 Processing accelerators: Intel Corporation Device 0d5d
[PF_0.59.0_VF_1]
0000:3c:00.2 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_2]
0000:3c:00.3 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_3]
0000:3c:00.4 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_4]
0000:3c:00.5 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_5]
0000:3c:00.6 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_6]
0000:3c:00.7 Processing accelerators: Intel Corporation Device 0d5d [PF_0.59.0_VF_7]
0000:d8:00.0 Processing accelerators: Intel Corporation ACC FEC IP
root@plab-ran-esx3:~]

4 Configure the virtual function through BBDEV-CLI.

n You can download the bbdev-cli tool and the user guide from https://www.intel.com/
content/www/us/en/download/19758.

n Install the bbdev-cli.

1 Uncompress the -package.zip file.

2 Copy the .zip file to the ESXi host and install it. For example, esxcli software component apply -d /tmp/
Intel-ibbd-tools_1.0.7-1OEM.700.1.0.15843807_17865363.zip.


n To get the list of devices, run bbdev-cli -l, for example:

[root@plab-ran-esx3:~] /opt/intel/bbdev-cli -l
{
"Devices": [
"{Name:/devices/ifec/dev1, Type:ACC100, Address:0000:d8:00.0}",
"{Name:/devices/ifec/dev0, Type:ACC100, Address:0000:3b:00.0}"
]
}
[root@plab-ran-esx3:~]

The acc100_config_vf_5g.cfg file contains the default configuration for the 5GNR FlexRAN l1app.
This configuration is applied by default.
n To apply a new configuration file, use the following example:

root@plab-ran-esx3:/opt/intel/ACC100] /opt/intel/bbdev-cli -t /devices/ifec/dev0 -c
acc100_config_vf_5g.cfg
Configuring hardware of ACC100 FEC type
Successfully Configured ACC100 FEC device
root@plab-ran-esx3:/opt/intel/ACC100]

You can get the device name through the bbdev-cli -l command.

Ignore the message Using Default Value Missing Section ....

Configuration to Support Enhanced Data Path Support


VMware Telco Cloud Automation configurations to support Enhanced Data Path or ENS.

Enhanced Data Path, or ENS, provides superior network performance. It targets NFV workloads
and uses DPDK capabilities to enhance network performance. To support ENS on
VMware Telco Cloud Automation, make the following changes in the Infrastructure Requirements
Designer.

Prerequisites

n Ensure that all the hosts in the cluster are homogeneous.

n Install latest ENS NIC drivers compatible with NIC Model and remove the older drivers. For
details, see Enhanced Data Path.

n Ensure that you create a new DVS for ENS workloads and prepare the DVS with NSX-T ENS.

n If you want to use multiple NICs as Uplinks in NSX-T, VMware Telco Cloud Automation
recommends setting the NIC Teaming policy to Load Balance Source.

Note
n When you create DVS for ENS, all connected Portgroups or NSX-T segments start leveraging
Enhanced Datapath.

n VMware Telco Cloud Automation does not recommend the use of ENS DVS for VMotion and
vSAN traffic.


Procedure

1 Do not set isNumaConfigNeeded in the CSAR, because NSX-T automatically aligns the vNICs of
the node pool with NUMA, PNIC, and ENS L-cores. However, if you set isNumaConfigNeeded = True
in the CSAR, VMware Telco Cloud Automation tries to align the node pool with NUMA and
ENS L-cores.

infra_requirements:
node_components:

latency_sensitivity: high
isNumaConfigNeeded: [true | false] <---- if True, TCA will send only memory pinning
to VMConfig operator if underlying network ENS enabled.

2 Set the CTX value.

infra_requirements:
node_components:
network:
devices:
- deviceType: vmxnet3 <-- vmxnet3
networkName: F1U
resourceName: vmxnet3res <-- resourceName
targetDriver: [igb_uio | vfio-pci] <-- driver name
additionalProperties: <-- ENS related config i.e.., Setting
multi context for device
ctxPerDev: [1|2|3]

Note:
i. If ‘targetDriver’ is not provided, then providing interfaceName for the device is
mandatory.
ii. If ctxPerDev is set, map the device to ENS capable network while instantiating CNF.
iii. ctxPerDev property is applied per interface level to the Node pool and can be
seen in VM vmx file as ethernetX.ctxPerDev (where X is the interface number).

Working with Affinity Rules


Affinity Rules govern the hosting of the VMs on a host.

A VM-VM affinity rule specifies whether selected individual virtual machines run on the same host
or are kept on separate hosts. This type of rule is used to create affinity or anti-affinity between
individual virtual machines that you select.

An affinity rule ensures that the specified virtual machines are placed together on the same
host. An anti-affinity rule ensures that the specified virtual machines do not share the host. You
can create an anti-affinity rule to guarantee that certain virtual machines are always on different
physical hosts. This way, not all virtual machines are at risk when one of the hosts encounters an
issue.


Create Affinity Rules


You can create VM-VM affinity rules to specify whether selected individual virtual machines run
on the same host or are kept on separate hosts.

Prerequisites

Create the topology.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Catalog > Network Function.

3 Click the network function on which you want to create affinity rules and click Edit.

4 Select the Rules tab.

5 To add an affinity rule, click Add under Affinity Rules.

a Add the name of the affinity rule in the text box corresponding to Rule Name.

b To create affinity among the VDUs, select the VDU from the list.

6 To add an anti-affinity rule, click Add under Anti-Affinity Rules.

a Add the name of the anti-affinity rule in the text box corresponding to Rule Name.

b To create an anti-affinity rule among the VDUs, select the VDU from the list.

Results

The affinity and anti-affinity rules are added.

Example

Table 13-1. Affinity and Anti-Affinity Rules

VDU: VDU 1, VDU 2
  Affinity Rules: The deployed VDUs are always kept together on the same ESXi host, even for
  scaled-out instances.
  Anti-Affinity Rules: The deployed VDUs are always kept apart on different ESXi hosts. For
  scaled-out instances, an anti-affinity rule is created for every permutation and combination.

VDU: VDU 1
  Affinity Rules: All the scaled VDU instances of VDU 1 are kept together on the same ESXi host.
  Anti-Affinity Rules: All the scaled VDU instances of VDU 1 are kept apart on different ESXi
  hosts, and only one anti-affinity rule is created.
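
In the onboarded descriptor, the rules that you define here are represented as SOL001-style
placement policies. The following fragment is an illustrative sketch that assumes the standard
tosca.policies.nfv.AffinityRule and tosca.policies.nfv.AntiAffinityRule policy types; the rule and
VDU names are examples, and the exact structure that VMware Telco Cloud Automation writes
into your CSAR may differ.

policies:
  - keep_together:
      type: tosca.policies.nfv.AffinityRule
      targets: [ VDU_1, VDU_2 ]        # VDUs named in the rule
      properties:
        scope: nfvi_node               # place on the same ESXi host
  - keep_apart:
      type: tosca.policies.nfv.AntiAffinityRule
      targets: [ VDU_1, VDU_2 ]
      properties:
        scope: nfvi_node               # place on different ESXi hosts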

Scaling Policies
The Scaling Policies tab provides an interface to configure scaling aspects and instantiation
levels for the VDU instances in a VNF.


Using Scaling Policies, you can adjust to changing VNF workload demands by increasing or
decreasing the VDU instances. For example, you can scale up the number of VDU instances in a
VNF in anticipation of heavy usage over the weekend.

What is an Aspect?
Aspects are the logical grouping of one or more VDU instances in a VNF. Scaling aspects define
the VDU instances to scale in discrete steps. Each scale level of a scaling aspect defines a valid
size of the VNF.

What is a Scaling Step?


A scaling step is the smallest increment by which a VNF is scaled for a particular aspect. It
represents the number of instances scaled for a specific set of VDUs. If only a single step is
assigned, the scaling step provides uniform scaling for all aspect levels. Multiple steps allow
non-uniform scaling of VDU instances and require as many steps as the maximum scale level.

What is Maximum Scale Level?


Maximum scale level represents the total number of scaling steps that can be applied during a
scale operation. The minimum scale level is 0.

What is Instantiation Level?


Each instantiation level defines a commonly used scaling level by specifying an aspect and a
fixed scale level for that aspect.
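
These concepts map to SOL001-style scaling policies in the descriptor. The following fragment
is an illustrative sketch that assumes the standard tosca.policies.nfv.ScalingAspects,
VduScalingAspectDeltas, and InstantiationLevels policy types; the aspect, delta, level, and VDU
names are examples, and the structure that the designer generates may differ in detail.

policies:
  - scaling_aspects:
      type: tosca.policies.nfv.ScalingAspects
      properties:
        aspects:
          worker:
            name: worker
            description: Scales the worker VDU
            max_scale_level: 3          # up to three scaling steps
            step_deltas: [ delta_1 ]    # a single step, so scaling is uniform
  - worker_deltas:
      type: tosca.policies.nfv.VduScalingAspectDeltas
      properties:
        aspect: worker
        deltas:
          delta_1:
            number_of_instances: 1      # each step adds one instance
      targets: [ worker_vdu ]
  - instantiation_levels:
      type: tosca.policies.nfv.InstantiationLevels
      properties:
        levels:
          default:
            description: Typical load
            scale_info:
              worker:
                scale_level: 0
          peak:
            description: Anticipated heavy usage
            scale_info:
              worker:
                scale_level: 2
        default_level: default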

Add a Scaling Policy


To add a scaling policy, perform the following steps:

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Design a Virtual Network Function Descriptor. For more information, see Design a Virtual
Network Function Descriptor.

3 In the Network Function Designer page, click the Scaling Policies tab.

4 To add scaling aspects, click Add under Scaling Aspects. The Add Aspect wizard is displayed.

a Enter the Name and Description of the aspect.

b Max Scale Level - Use the slide bar to select the total number of scale steps to apply for
the aspect.

c Under Available Scaling Steps, select the scaling steps to assign to your aspect. To
create a scaling step, click Create Scaling Step.

d The Scaling Steps table displays details of the scaling steps that are assigned to the
aspect.

e Click Add Aspect.


5 To add an instantiation level, click Add. The Add Instantiation Level wizard is displayed.

a Enter the Name and Description of the instantiation level.

b To make this instantiation level the default level, select Default Level.

c Assign a scaling aspect to the instantiation level. To add a scaling aspect, click Add
Aspect.

d Click Add Level.

6 To save this scaling policy as a draft and edit it at a later time, click Save As Draft.

7 To upload the scaling policy to your VNF, click Upload.

What to do next

To edit a scaling policy, perform the following steps:

1 Go to Catalog > Network Function and click the network function on which you want to
update the scaling policy.

2 Click Edit.

3 In the Network Function Designer page, add scaling aspects or add instantiation levels.

4 To save the updated Network Function, click Save.

5 To create a duplicate of the Network Function that contains the scaling updates, click Save as
New.

Designing Workflows
Starting from release 2.0, VMware Telco Cloud Automation provides a Workflow designer in the
user interface for defining life-cycle management workflows.

The Workflow designer in VMware Telco Cloud Automation is available for Network Functions
(VNFs and CNFs) and Network Services. Using the Workflow designer, you can now create a
workflow, upload an existing workflow specification in JSON format from your local system, or
select a workflow from the Resources folder in VMware Telco Cloud Automation.

You can design workflows for the following life-cycle events, or add a custom workflow. The list
can differ for VNFs, CNFs, and Network Services:

n Instantiate Start

n Instantiate End

n Heal Start

n Heal End

n Scale Start

n Scale End

n Scale Level To Start


n Scale Level To End

n Terminate Start

n Terminate End

n Upgrade Start

n Upgrade End

You can also deactivate a life-cycle event if your network function does not support it.

Key Concepts of Workflows


Workflows consist of a schema, attributes, and parameters. The workflow schema is the main
component of a workflow as it defines all the workflow elements and the logical connections
between them.

The workflow attributes and parameters are the variables that workflows use to transfer data.
Orchestrator saves a workflow token every time a workflow runs, recording the details of that
specific run of the workflow.
Workflow Parameters
Workflows receive input parameters and generate output parameters when they run.

Input Parameters

Input parameters are read-only variables. Most workflows require a certain set of input
parameters to run. An input parameter is an argument that the workflow processes when it
starts. The user, an application, another workflow, or an action passes input parameters to a
workflow for the workflow to process when it starts.

For example, if a workflow resets a virtual machine, the workflow requires as an input parameter
the name of the virtual machine.

To modify the value supplied by the workflow caller, or to read the information using an input
parameter, copy the input parameter to an attribute.

Output Parameters

Output parameters are write-only variables. A workflow's output parameters represent the result
from the workflow run. Output parameters can change when a workflow or a workflow element
runs.

For example, if a workflow creates a snapshot of a virtual machine, the output parameter for the
workflow is the resulting snapshot.

To read the value of a variable, use an attribute within the workflow. To pass the value of that
attribute to the workflow caller, copy the attribute to an output parameter.
Workflow Attributes and Variables
Use attributes to pass information between the schema elements inside a workflow.


Attributes are read and write variables. It is a common design pattern to copy input parameters
to attributes at the beginning of a workflow so that you can modify the value if necessary within
the workflow. It is a common design pattern to copy attributes to output parameters at the end
of a workflow so that you can read the value if necessary within the workflow.
Workflow Bindings
Bindings populate elements with data from other elements by binding input and output
parameters to workflow attributes.

With parameter bindings, you can explicitly state whether you want each of your workflow
variables to be accessible.

Inward Binding

You can read the value stored in the variable.

Outward Binding

You can change the value stored by a variable. That is, you can write out to the variable.

Create a Workflow
You can create a new life-cycle event workflow for your network function or network service
using the Workflow designer.

You can create a workflow when designing a network function or network service descriptor, or
add workflows at a later stage. In this example, we look at designing a workflow when designing
a network function descriptor.

Procedure

1 Follow the steps for designing a network function descriptor. See Designing a Network
Function Descriptor.

2 In the Network Function Designer page, click the Workflows tab.

3 Under Life Cycle Events, select an event for designing the workflow.

4 To design a new workflow, click Create New.

5 From the drop-down menu, you can select a workflow type for each step. The commonly
used workflows are:

n SSH Command - To run SSH commands on remote SSH servers.

n vRO Workflow - To run custom workflows using vRealize Orchestrator.

n Copy File - To copy a user-given or CSAR-bundled file to the remote host.

n Netconf Command - To perform NETCONF operations.

n Run Scripts Via VM Tools - To run custom scripts through VM Tools.


6 Inbound variables: You can assign valid default values to inbound variables. You can also map
the inbound variable to the variables in the Workflow Interface pane on the right.

a To assign a default value for an inbound variable, click the edit icon against the inbound
variable.

b To add an inbound variable, click the + icon. To delete an inbound variable, click the -
icon.

c You can add valid input values from the Workflow Interface pane on the right and map
them to the inbound variables.

7 Outbound variables: You can map an outbound variable to a valid output value from the
Workflow Interface pane on the right.

8 To insert a new step, click the + icon on the left of the step.

9 To change the order of each step, click the step number and select a new step number from
the drop-down menu.

10 To save the workflow as draft, click Save on the top-right section of the page.

11 To remove the workflow, click Clear.

What to do next

Onboard the package or save it as a draft.

Edit Network Function Descriptor Drafts


If you have saved a draft in the Network Function Designer, you can modify or delete it at a later
stage. The draft is saved as a new version every time you save. When you want to edit, select
the version of draft to edit.

Prerequisites

Use the Network Function Designer to create a network function descriptor and save the design
as a draft.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function and click Onboard.

The Onboard Network Function page is displayed.

3 Select Edit Network Function Descriptor Drafts.

4 From the table, locate the desired network descriptor draft and click the Edit icon.

The Network Function Designer page is displayed.

5 To select a draft version for editing, from the Draft Versions table, click the Options symbol ⋮
against the draft and select View. You can restore a previous version from here.


6 To save, click Save Draft.

Edit a CSAR File Manually


You can manually edit a CSAR file and upload it to VMware Telco Cloud Automation.

Prerequisites

Note The following steps are valid only on macOS and Linux operating systems. On a Windows
operating system, use the relevant commands to edit the CSAR file.

Procedure

1 Download the CSAR file that you want to edit. For more information, see Download a
Network Function Package.

2 Unzip the CSAR file.

3 Go to the Definitions folder and open the NFD.yaml file.

4 Update the descriptor_id field with the new descriptor ID. You can also update the NFD.yaml
with any other changes, as appropriate.

5 Save the NFD.yaml file.

6 You can also add any other supporting files to their respective folders or edit the existing
files.

For example, you can add a script to the Artifacts > scripts folder.

7 Recreate the CSAR file. Run the following command:

zip -r <new_name>.csar TOSCA-Metadata/ Definitions/ Artifacts/ NFD.mf

8 Upload the CSAR file to VMware Telco Cloud Automation. For more information, see Upload a
Network Function Package.
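
Put together, the edit cycle looks like the following shell session. The file names are
illustrative.

# A CSAR is a zip archive, so unpack it into a working directory
unzip my-nf.csar -d my-nf
cd my-nf

# Edit Definitions/NFD.yaml (for example, update descriptor_id) and add any
# supporting files, such as scripts under Artifacts/scripts/

# Repackage under a new name, then upload the result to VMware Telco Cloud Automation
zip -r ../my-nf-edited.csar TOSCA-Metadata/ Definitions/ Artifacts/ NFD.mf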

Delete a Network Function


You can delete a network function from the catalog.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 Select the desired network function and click Delete.

4 Confirm the action by clicking OK.

Results

The network function is removed from the catalog.


Customizing Network Function Infrastructure Requirements


You can customize the infrastructure of a CNF according to its unique requirements. Customizing
the infrastructure requirements enables you to create a cluster, customize it, and deploy the
network functions without any manual user input.

Network functions from different vendors have their own unique set of infrastructure
requirements. Defining these requirements in the network functions ensures that they are
instantiated and deployed in a cluster without you having to log in to their master or worker
nodes.

To customize the cluster according to network function requirements, you must add the
requirements in the network function catalog. Go to Catalog > Network Function tab. Click the
network function that requires a customization and select the Infrastructure Requirements tab.

VMware Telco Cloud Automation added a custom extension called infra_requirements to the
TOSCA. In this extension you can define the node, Containers as a Service (CaaS), and Platform
as a Service (PaaS) components:

1 Under node_components, you can define the requirements for the node. These requirements
include kernel type, kernel version, kernel arguments, required packages, and tuned
configuration. You can also define networks to be configured for the worker nodes. All the
changes are applied to the worker nodes of the node pool.

2 Under caas_components, define the CaaS components such as CNIs to be installed on each
worker node. At present, only SRIOV is supported.

After you define the components of infra_requirements in the CNF catalog, the nodepool is
customized according to the differences detected between the CNF catalog and the actual
configuration present in the nodepool during instantiation.
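
As a quick orientation before the detailed sections that follow, a minimal infra_requirements
block has the following shape. The values are illustrative only; see Node Customization and
TOSCA Components for the full set of options.

infra_requirements:
  node_components:
    isNumaConfigNeeded: false
    kernel:
      kernel_type:
        name: linux-rt              # or linux
        version: 4.19.198-5.ph3
      kernel_args:
        - key: nosoftlockup
    custom_packages:
      - name: pciutils
        version: 3.6.2-1.ph3
  caas_components:
    - name: sriov
      type: cni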

Node Customization
You can customize node pools of the clusters using the network function catalog defined in a
TOSCA (Topology and Orchestration Specification for Cloud Applications) file.

VMware Telco Cloud Automation uses Network Function TOSCA (Topology and Orchestration
Specification for Cloud Applications) extensions to determine the requirements for different VIMs.

Features enabled through TOSCA extensions include:

n SRIOV interface addition and configuration

n NUMA alignment of vCPUs and VF/PFs

n Latency sensitivity

n Tuned profile

n DPDK binding for SRIOV interfaces

n Kernel Update


n Kernel Modules

n Custom package installations (pciutils, lxcfs, and so on)

n GRUB configuration (all configurations used for CPU isolation, hugepages configuration, and so on)

n Passthrough devices for PTP

Note The maximum CPU or memory resource allocated to worker nodes within node pools
cannot exceed the CPU or memory resource available at the underlying ESXi host level.

TOSCA Components
You can modify the node components and CaaS components in TOSCA for different Kubernetes
VIMs.

To support various network functions, the Worker nodes may require a customization in the
TOSCA. These customizations include the kernel-related changes, custom packages installations,
network adapter, SRIOV, DPDK configurations, and CPU Pinning of the Worker nodes on which
you deploy the network functions.

Node Components
n Kernel: The Kernel definition uses multiple arguments that require a customization.

n kernel_type: Kernel type for the worker nodes. The kernel types are:

n Linux RealTime (linux-rt)

n Linux Non-RealTime (linux)

The kernel type depends on the network function workload requirement. The required
Linux version is downloaded from TDNF repo[VMware Photon Linux] during customization.
kernel type

infra_requirements:
node_components:
kernel:
kernel_type:
name: linux-rt
version: 4.19.132-1.ph3

n kernel_args: Kernel boot parameters for tuning values that you can adjust when the
system is running. These parameters configure the behavior of the kernel such as
isolating CPUs. These parameters are free form strings. They are defined as 'key' → name
of the parameter and optionally 'value' → if any arguments are provided.
kernel_args

infra_requirements:
node_components:
kernel:
kernel_args:
- key: nosoftlockup
- key: noswap


- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1
- key: mce
value: ignore_ce
- key: fsck.mode
value: force

Huge Pages

infra_requirements:
node_components:
kernel:
kernel_args:
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 17
Note:
i. This order should be maintained.
ii. Nodes will be restarted to set these values
iii. supported hugepagesz are 2M | 1G

isolcpus

infra_requirements:
node_components:
kernel:
kernel_args:
- key: isolcpus
value: 2-{{tca.node.vmNumCPUs}}
Note: TCA will replace the {{tca.node.vmNumCPUs}} with vCPUs configured on the worker
node.

n kernel_modules: To install any kernel modules on Worker nodes. For example, dpdk, sctp,
and vrf.

Note When configuring dpdk, ensure that the corresponding pciutils package is
specified under custom_packages.

dpdk

infra_requirements:
node_components:
kernel:
kernel_modules:
- name: dpdk
version: 19.11.1


For details on supported DPDK versions, see Supported DPDK and Kernel Versions.

n custom_packages: Custom packages include the lxcfs, tuned, pci-utils, and ptp. The
required packages are downloaded from TDNF repo[VMware Photon Linux] during
customization.
custom_packages

infra_requirements:
node_components:
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-3.ph3
- name: linuxptp
version: 2.0-1.ph3
Note: Make sure these packages are available on VMWARE TDNF Repository

n additional_config: Enables additional customization on the node. For example, tuned.

Note While configuring tuned, ensure that the corresponding tuned package is specified
under custom_packages

tuned

infra_requirements:
node_components:
additional_config:
- name: tuned # <--- for setting tuned
value: '[{"name":"custom-profile"}]' # <--- list of profile names to activate.

n file_injection: Inject the configuration files inside the nodes.


file_injection

infra_requirements:
  node_components:
    file_injection:
      - source: file
        content: ../Artifacts/scripts/custom-tuned-profile.conf    #<-- File path location which is embedded in CSAR
        path: /etc/tuned/custom-profile/tuned.conf    #<-- Target location of the configuration file. Location should align with name of the profile.
      - source: file
        content: ../Artifacts/scripts/cpu-partitioning-variables.conf    #<-- File path location which is embedded in CSAR
        path: /etc/tuned/cpu-partitioning-variables.conf    #<-- Supporting files for the main configuration file.
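
The injected tuned.conf follows the standard tuned profile format. The fragment below is an
illustrative sketch only, assuming a profile named custom-profile that extends the stock
cpu-partitioning profile; the actual contents depend on your profile.

# Illustrative /etc/tuned/custom-profile/tuned.conf
[main]
summary=Custom partitioning profile for CNF worker nodes
include=cpu-partitioning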


n isNumaConfigNeeded: This feature tries to find a host and a NUMA node that can fit the
VM with the given requirements and assigns it. It is useful for high-performance profile
Network Functions such as DU, which require a high throughput. This setting sets CPU and
memory reservations to the maximum on the Worker node and sets the affinity of the Worker
node CPUs to the ESXi CPUs.
isNumaConfigNeeded

infra_requirements:
node_components:
isNumaConfigNeeded: [true | false]

n latency_sensitivity: For Network Functions that require a high-performance profile with


low-latency such as DU, CU-CP, CU-UP, and UPF. These functions require the node latency
sensitivity set on vSphere.

Note Node restarts after customization.

latency_sensitivity

infra_requirements:
node_components:
latency_sensitivity:
[high | low]

n ptp: It is used for customizing PTP services. You can use the configuration files for ptp4l and
phc2sys services customization.

Note
n You must add PTP4L_CONFIG_FILE in User Input section of the catalog.

n Destination path (worker node path) is abstracted out from these services. VMware
Telco Cloud Automation copies content of phc2sys and ptp4l configuration files to /etc/
sysconfig/phc2sys and /etc/ptp4l.conf respectively on the worker node.

PTP

infra_requirements:
node_components:
ptp:
phc2sys:
source: file # <-- Content will come from file
embedded in CSAR
content: ../Artifacts/scripts/phc2sys # <-- Source path location relative to
Definitions folder
ptp4l:
source: input # <-- Content will come from user input
while NF instantiation
content: PTP4L_CONFIG_FILE # <-- Variable name to hold user input
file content while NF instantiation
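
For reference, the ptp4l content that you supply, either as a file in the CSAR or through the
PTP4L_CONFIG_FILE input, is a standard linuxptp configuration file. The following fragment is
illustrative only; tune the options for your grandmaster and NIC.

# Illustrative ptp4l.conf fragment; values are examples, not recommendations
[global]
domainNumber       24
slaveOnly          1
priority1          128
network_transport  UDPv4
logging_level      6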


n passthrough_devices: For adding PCI devices. For example, ptp.

Note While specifying passthrough device configurations, ensure that the corresponding
linuxptp package is specified under custom_packages.

passthrough_devices

infra_requirements:
node_components:
passthrough_devices:
- device_type: NIC
pf_group: ptp
isSharedAcrossNuma: [true|false] # <-- This sets the passthrough device to be
sharable across NUMAs. If not present, defaults to false
Note:
1. For now the values are hardcoded
2. If 'isSharedAcrossNuma' is set to true, make sure to set
'infra_requirements.node_components.isNumaConfigNeeded' to true.

n network: Creates network adapters on the nodes. For SRIOV, the given resource name will be
allocatable resource on the node.
Network

infra_requirements:
node_components:
network:
devices:
- deviceType: # <-- Network Adapter type [sriov]
networkName: # <-- Input label for User Input to provide Network while
NF Instantiation. Refer below section how to define these input
resourceName: # <-- This is the label the device will be exposed in K8s
node.
dpdkBinding: # <-- The driver this device should used. If not
mentioned, then default OS driver will be used.
count: 3 # <- Number of adapters required.
interfaceName: # <- Sets the interface name inside GUEST OS for this
adapter. Valid only if the dpdkBinding is not "vfio-pci" and "igb_uio"
isSharedAcrossNuma: [true|false] # <-- This sets the network device to
be sharable across NUMAs. If not present, defaults to false
additionalProperties:
mtu: # <-- Optional Input label for user input to provide
Network MTU while NF Instantiation. Refer below section how to define these input

Note:
1. for 'networkName' refer below section
2. dpdkBinding
- igb_uio
- vfio-pci
3. Make sure to have 'pciutils' custom packages and 'dpdk' kernel modules.
4. If 'isSharedAcrossNuma' is set to true, make sure to set
'infra_requirements.node_components.isNumaConfigNeeded' to true.


5. MTU is not allowed if dpdk driver is set on the interface. TCA would throw
validation error during NF catalog onboarding
6. If MTU value is not provided during NF instantiation, default value 1500 would be
set on the interface

For SRIOV network adapters, when instantiating, add the following:


VnfAdditionalConfigurableProperties

tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
F1U: # <--- label that is provided
infra_requirements.node_components.network.devices.networkName
required: true
propertyName: F1U # <--- label that is provided
infra_requirements.node_components.network.devices.networkName
description: ''
default: ''
type: string
format: network # <- to show the network drop down
PTP4L_CONFIG_FILE: # <-- label that is provided in
infra_requirements.node_components.ptp.ptp4l.content
required: true
propertyName: PTP4L_CONFIG_FILE # <-- label that is provided in
infra_requirements.node_components.ptp.ptp4l.content
description: ''
default: ''
type: string
format: file # <-- to show drop down to select file

helm-abc:
type: tosca.nodes.nfv.Vdu.Compute.Helm.helm-abc
properties:
:
configurable_properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn
:
:
F1U: '' # <-- Same label provided above
PTP4L_CONFIG_FILE: '' # <-- Same label provided above

n Services: Defines the systemd service configurations. You can define stalld and syslog-ng
service.


stalld

infra_requirements:
node_components:
services:
- name: stalld #<------ Only stalld, syslog-ng are supported

Note
n Ensure that you specify the stalld custom package in custom_packages.

n Use the file injection method to upload the modified configuration file for the stalld service
to /etc/sysconfig/stalld.

syslog-ng

infra_requirements:
node_components:
services:
- name: syslog-ng
serviceConfigFiles:
- name: /etc/syslog-ng/conf.d/serv.conf #<------ Config file of the syslog-ng
Systemd service. NodeConfig monitors this file and restarts syslog-ng when its
content changes

Note Use the file injection method to upload the required service configuration files for the
syslog-ng service and provide the path of the uploaded files in serviceConfigFiles.

caas_components
You can configure CaaS components, such as CNI, CSI, and Helm, for Kubernetes. You can install
CNI plugins on Worker nodes during CNF instantiation. Provide CNIs such as SRIOV in the Cluster
Configuration in the CaaS Infrastructure.

infra_requirements:
caas_components:
- name: sriov
type: cni

AWS Elastic Kubernetes Service


Node customizations and workflows are not supported for AWS Elastic Kubernetes Service
(EKS).

TOSCA Definition Extension


VMware Telco Cloud Automation uses modified TOSCA, which is an extension of the standard
TOSCA, to determine prerequisites for different VIMs.

The root node tosca.nodes.nfv.VMware.VNF defines the VNF definition, including CaaS and
NodeConfig-related requirements, in the TOSCA.


The infra_requirements property at the root node defines these infrastructure requirements for
the Network Function.

The sample shows customized TOSCA with the infrastructure requirements definition.
TOSCA Sample

tosca_definitions_version: tosca_simple_yaml_1_2
description: Network Function description
imports:
- etsi_nfv_sol001_common_2_7_1_types.yaml
- etsi_nfv_sol001_vnfd_2_7_1_types.yaml
- vmware_nfv_custom_vnfd_2_7_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.testnf:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:
type: tosca.interfaces.nfv.VMware.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.testnf:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.testnf
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.testnf:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.testnf
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.testnf:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
values:
required: false
propertyName: values
description: ''
default: ''
type: string
format: file
vlan3:
required: true
propertyName: vlan3
description: Network interface providing PF config for sriov with ipam
default: ''
type: string
format: network
vlan4:
required: true
propertyName: vlan4
description: Network interface providing PF config for sriov with igb_uio
default: ''


type: string
format: network
vlan5:
required: true
propertyName: vlan5
description: Network interface providing PF config for sriov with vfio-pci
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for testnf
required: true
default: testnf
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: The NAD Config file
required: true
default: ''
format: file
NODE_POOL_FOR_CAT:
name: NODE_POOL_FOR_CAT
type: string
description: Node pool to enable CAT (Cache Allocation Technology), leave it empty if
CAT is not required.
required: false
default: ''
format: string


tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
copyNADResult:
name: copyNADResult
type: string
description: ''
nadCreateResult:
name: nadCreateResult
type: string
description: ''
tosca.datatypes.nfv.VMware.Interface.InstantiateEndInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for testnf
required: true
format: string
NODE_POOL_FOR_CAT:
name: NODE_POOL_FOR_CAT
type: string
description: Node pool to verify CAT (Cache Allocation Technology), leave it empty if
CAT is not enabled.
required: false
default: ''
format: string
tosca.datatypes.nfv.VMware.Interface.InstantiateEndOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters


properties:
nsCheckResult:
name: nsCheckResult
type: string
description: ''
nadCheckResult:
name: nadCheckResult
type: string
description: ''
copyResult:
name: copyResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.testnf
node_templates:
testnf:
properties:
descriptor_id: vnfd_4501ecbe-4414-11eb-bf08-9b885
provider: VMware
vendor: VMware
product_name: testnf
version: 2.0.0
id: testnf
software_version: 2.0.0
descriptor_version: 2.0.0
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: true
kernel:
kernel_type:
name: linux-rt
version: 4.19.198-5.ph3
kernel_args:
- key: nosoftlockup
- key: noswap
- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1
- key: mce
value: ignore_ce
- key: fsck.mode
value: force
- key: fsck.repair
value: yes
- key: nowatchdog
- key: cpuidle.off


value: 1
- key: nmi_watchdog
value: 0
- key: audit
value: 0
- key: processor.max_cstate
value: 1
- key: intel_pstate
value: disable
- key: isolcpus
value: 4-{{tca.node.vmNumCPUs}}
- key: skew_tick
value: 1
- key: irqaffinity
value: 0-3
- key: selinux
value: 0
- key: enforcing
value: 0
- key: nohz
value: 'on'
- key: nohz_full
value: 4-{{tca.node.vmNumCPUs}}
- key: rcu_nocb_poll
value: 1
- key: rcu_nocbs
value: 4-{{tca.node.vmNumCPUs}}
- key: idle
value: poll
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 8
- key: intel_iommu
value: 'on'
- key: iommu
value: pt
- key: clock
value: tsc
- key: clocksource
value: tsc
- key: tsc
value: reliable
kernel_modules:
- name: dpdk
version: '20.11'
custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-4.ph3
- name: linuxptp
version: 3.1-1.ph3


- name: stalld
version: 1.3.0-8.ph3
file_injection:
- source: file
content: ../Artifacts/scripts/realtime-variables.conf
path: /etc/tuned/realtime-variables.conf
- source: file
content: ../Artifacts/scripts/testnf-stalld.conf
path: /etc/sysconfig/stalld
additional_config:
- name: tuned
value: '[{"name":"realtime"}]'
network:
devices:
- deviceType: sriov
networkName: vlan3
resourceName: sriovpass
- deviceType: sriov
networkName: vlan4
resourceName: sriovigbuio
dpdkBinding: igb_uio
count: 2
- deviceType: sriov
networkName: vlan5
resourceName: sriovvfio
dpdkBinding: vfio-pci
caas_components:
- name: sriov
type: cni
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/PreInstantiation_WF.json
description: Configure testnf using a configmap
inputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME:
PASSWORD:
IP:
K8S_NAMESPACE: testnf
NAD_FILE: ''
NODE_POOL_FOR_CAT: ''
outputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
copyNADResult: ''
nadCreateResult: ''
instantiate_end:
implementation: ../Artifacts/workflows/PostInstantiation_WF.json
description: Configure testnf using a configmap
inputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateEndInputParameters
USERNAME:
PASSWORD:
IP:


K8S_NAMESPACE: ''
NODE_POOL_FOR_CAT: ''
outputs:
type: tosca.datatypes.nfv.VMware.Interface.InstantiateEndOutputParameters
nsCheckResult: ''
nadCheckResult: ''
type: tosca.nodes.nfv.VMware.CNF.testnf
testnf1:
type: tosca.nodes.nfv.Vdu.Compute.Helm.testnf
properties:
name: testnf
description: Chart for testnf
chartName: testnf-du
chartVersion: 2.0.0
helmVersion: v3
configurable_properties:
additional_vnfc_configurable_properties:
values: ''
vlan3: ''
vlan4: ''
vlan5: ''
interface_types:
tosca.interfaces.nfv.VMware.Vnflcm:
derived_from: tosca.interfaces.nfv.Vnflcm
instantiate_start:
description: interface description
instantiate_end:
description: interface description

Supported DPDK and Kernel Versions


List of compatible Data Plane Development Kit (DPDK) and Photon OS kernel versions.

The table lists DPDK version and compatible Photon OS kernel versions.

For upgrading kernel versions when running Telco Cloud Automation, see How to upgrade
Photon Kernel when running Telco Cloud Automation.

For adding new versions of kernels that are later than the supported versions, see Enabling
Additional Photon-RT Kernel Versions in Telco Cloud Automation.

Photon DPDK Version


OS
Kernel
Version 17.11 17.11.10 18.11 18.11.7 19.08.2 19.11 19.11.1 20.11 21.11 22.11

Linux-4. ✓ ✓ ✓ ✓
19.104-3
.ph3

4.19.98- ✓ ✓ ✓ ✓
rt40-4.p
h3-rt


Linux- ✓ ✓ ✓ ✓
rt-4.19.9
8-
rt40-4.p
h3

Linux-4. ✓ ✓ ✓ ✓
19.97-2.
ph3

Linux-4. ✓ ✓ ✓ ✓
19.124-1.
ph3

Linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
32-1.ph3

Linux-4. ✓ ✓ ✓ ✓ ✓ ✓ ✓
19.132-1.
ph3

Linux-4. ✓ ✓ ✓ ✓ ✓
19.115-3.
ph3

Linux-4. ✓ ✓ ✓ ✓ ✓
19.145-2.
ph3

Linux-4. ✓ ✓ ✓ ✓ ✓
19.154-1.
ph3

Linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
54-1.ph3

Linux-4. ✓ ✓ ✓ ✓ ✓
19.154-11
.ph3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.174-5.
ph3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
74-4.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.177-2.
ph3


linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
77-2.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.189-5.
ph3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
77-4.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.177-4.
ph3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
77-5.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
77-7.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.191-2.
ph3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
91-2.ph3

linux-4.1 ✓ ✓ ✓ ✓ ✓
9.198-4.
ph3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-4.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-5.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-6.ph
3


linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-9.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-10.p
h3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-11.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.2
45-2.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.2
32-2.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-13.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.2
47-6.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-14.p
h3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-15.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.2
56-2.ph
3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-18.ph
3


linux- ✓ ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-21.ph
3

linux- ✓ ✓ ✓ ✓ ✓ ✓
rt-4.19.1
98-22.p
h3

linux- ✓ ✓ ✓ ✓ ✓
rt-4.19.2
64-7.ph
3

linux- ✓ ✓ ✓ ✓ ✓ ✓ ✓
rt-4.19.2
64-6.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓ ✓ ✓
9.264-6.
ph3

linux- ✓ ✓ ✓ ✓ ✓ ✓ ✓
rt-4.19.2
72-4.ph
3

linux-4.1 ✓ ✓ ✓ ✓ ✓ ✓ ✓
9.272-4.
ph3

Note VMware Telco Cloud Automation 2.0 supports linux-rt-4.19.198-5 and above only on
a new workload cluster or an upgraded workload cluster.
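
The kernel version and DPDK version that you select from this table are declared together under infra_requirements in the network function descriptor. The values below are taken from the examples in the next section; treat the snippet only as a minimal sketch of where the two versions appear:

infra_requirements:
  node_components:
    kernel:
      kernel_type:
        name: linux                 # use linux-rt for the real-time kernel rows
        version: 4.19.132-1.ph3     # a Photon OS kernel version from the table
      kernel_modules:
        - name: dpdk
          version: 19.11.1          # a DPDK version compatible with that kernel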

CNF with Customizations Example


Here are some CNF customization examples.

Example 1
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Network Function description
imports:
- vmware_etsi_nfv_sol001_vnfd_2_5_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.cu-up-1.8:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:

type: tosca.interfaces.nfv.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.cuup-helm-chart:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.cuup-helm-chart
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.cuup-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
values:
required: true
propertyName: values
description: Overrides for chart values
default: ''
type: string
format: file
BHU:
required: true
propertyName: BHU
description: ''
default: ''
type: string
format: network
F1U:
required: true
propertyName: F1U
description: ''
default: ''
type: string
format: network
E1C:
required: true
propertyName: E1C
description: ''
default: ''
type: string
format: network
MGMT:
required: true
propertyName: MGMT
description: ''
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters

properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: capv
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default:
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default:
format: string
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for CU-UP
required: true
default:
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: NAD Config File
required: true
default: ''
format: file
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.cu-up-1.8
node_templates:
cu-up-1.8:
node_type: tosca.nodes.nfv.VMware.CNF.cu-up-1.8
properties:
descriptor_id: nfd_4e7599b5-9a44-4000-850c-7ec65d2f2423
provider: Vendor01
product_name: CU-UP
version: '1.0'
id: id

software_version: '1.3.4761'
descriptor_version: '1.8'
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: false
kernel:
kernel_type:
name: linux
version: 4.19.132-1.ph3
kernel_modules:
- name: dpdk
version: 19.11.1
kernel_args:
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 10
- key: transparent_hugepage
value: never
- key: intel_idle.max_cstate
value: 1
- key: iommu
value: pt
- key: intel_iommu
value: 'on'
- key: tsc
value: reliable
- key: idle
value: poll
- key: intel_pstate
value: disable
- key: rcu_nocb_poll
value: 1
- key: clocksource
value: tsc
- key: pcie_aspm.policy
value: performance
- key: skew_tick
value: 1
- key: isolcpus
value: 11-17
- key: nosoftlockup
- key: nohz
value: 'on'
- key: nohz_full
value: 11-17
- key: rcu_nocbs
value: 11-17
custom_packages:

- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-1.ph3
network:
devices:
- deviceType: sriov
networkName: F1U
resourceName: ani_netdevice_cuup_f1u
dpdkBinding: igb_uio
- deviceType: sriov
networkName: BHU
resourceName: ani_netdevice_cuup_bhu
dpdkBinding: igb_uio
- deviceType: sriov
networkName: E1C
resourceName: ani_netdevice_cuup_e1c
- deviceType: sriov
networkName: MGMT
resourceName: ani_netdevice_cuup_mgmt
count: 5
additional_config:
- name: tuned
value: '[{"name":"vendor01-cu"}]'
file_injection:
- source: file
content: ../Artifacts/scripts/tuned.conf
path: /etc/tuned/cu/tuned.conf
- source: file
content: ../Artifacts/scripts/cpu-partitioning-variables.conf
path: /etc/tuned/cpu-partitioning-variables.conf
caas_components:
- name: sriov
type: cni
description: Network Function description
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/CUUP_PreInstantiation_Steps.json
description: Configure Vendor01 CU-UP
inputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME: capv
PASSWORD:
IP:
K8S_NAMESPACE:
NAD_FILE: ''
outputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
cuup-helm-chart:
type: tosca.nodes.nfv.Vdu.Compute.Helm.cuup-helm-chart
properties:

name: cuup-helm-chart
description: cu-up
chartName: cuup-helm-chart
chartVersion: 1.3.4760
helmVersion: v3
id: cuup-helm-chart
configurable_properties:
additional_vnfc_configurable_properties:
type: >-
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.cuup-helm-chart
values: ''
BHU: ''
F1U: ''
E1C: ''
MGMT: ''
policies:
- policy_scale:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: scale
interface_type: operation
isEnabled: true
- policy_workflow:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: workflow
interface_type: operation
isEnabled: true
- policy_reconfigure:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: reconfigure
interface_type: operation
isEnabled: true
- policy_update:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: update
interface_type: operation
isEnabled: true
- policy_upgrade:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade
interface_type: operation
isEnabled: true
- policy_upgrade_package:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade_package
interface_type: operation
isEnabled: true
- policy_instantiate_start:
type: tosca.policies.nfv.SupportedVnfInterface
properties:

interface_name: instantiate_start
interface_type: workflow
isEnabled: true
- policy_instantiate_start:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: instantiate_start
interface_type: workflow
isEnabled: true

Example 2
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Network Function description
imports:
- vmware_etsi_nfv_sol001_vnfd_2_5_1_types.yaml
node_types:
tosca.nodes.nfv.VMware.CNF.du-1.8:
derived_from: tosca.nodes.nfv.VMware.CNF
interfaces:
Vnflcm:
type: tosca.interfaces.nfv.Vnflcm
tosca.nodes.nfv.Vdu.Compute.Helm.du-helm-chart:
derived_from: tosca.nodes.nfv.Vdu.Compute.Helm
properties:
configurable_properties:
type: tosca.datatypes.nfv.VnfcConfigurableProperties.du-helm-chart
required: true
data_types:
tosca.datatypes.nfv.VnfcConfigurableProperties.du-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcConfigurableProperties
properties:
additional_vnfc_configurable_properties:
type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart
description: Describes additional configuration for VNFC that can be configured
required: true
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart:
derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
properties:
Input Yaml:
required: true
propertyName: Input Yaml
description: ''
default: ''
type: string
format: file
F1U:
required: true
propertyName: F1U
description: ''
default: ''
type: string
format: network
F1C:

required: true
propertyName: F1C
description: ''
default: ''
type: string
format: network
MGMT:
required: true
propertyName: MGMT
description: ''
default: ''
type: string
format: network
FH:
required: true
propertyName: FH
description: ''
default: ''
type: string
format: network
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
USERNAME:
name: USERNAME
type: string
description: K8s master username
required: true
default: ''
format: string
PASSWORD:
name: PASSWORD
type: password
description: K8s master password
required: true
default: ''
format: password
IP:
name: IP
type: string
description: K8s master ip address
required: true
default: ''
format: string
NAD_FILE:
name: NAD_FILE
type: string
description: The NAD Config file
required: true
default: ''
format: file
K8S_NAMESPACE:
name: K8S_NAMESPACE
type: string
description: K8S namespace for DU

required: true
default: ''
format: string
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters:
derived_from: tosca.datatypes.nfv.VnfOperationAdditionalParameters
properties:
nsCreateResult:
name: nsCreateResult
type: string
description: ''
copyNADResult:
name: copyNADResult
type: string
description: ''
nadCreateResult:
name: nadCreateResult
type: string
description: ''
topology_template:
substitution_mappings:
node_type: tosca.nodes.nfv.VMware.CNF.du-1.8
node_templates:
du-1.8:
node_type: tosca.nodes.nfv.VMware.CNF.du-1.8
properties:
descriptor_id: nfd_4e7599b5-9a44-4000-850c-7ec65d2f2422
provider: Vendor01
product_name: DU
version: '1.0'
id: id
software_version: '1.3.4761'
descriptor_version: '1.8'
flavour_id: default
flavour_description: default
vnfm_info:
- gvnfmdriver
infra_requirements:
node_components:
isNumaConfigNeeded: true
kernel:
kernel_type:
name: linux-rt
version: 4.19.132-1.ph3
kernel_modules:
- name: dpdk
version: 19.11.1
kernel_args:
- key: nosoftlockup
- key: noswap
- key: softlockup_panic
value: 0
- key: pcie_aspm.policy
value: performance
- key: intel_idle.max_cstate
value: 1

- key: mce
value: ignore_ce
- key: fsck.mode
value: force
- key: fsck.repair
value: yes
- key: nowatchdog
- key: cpuidle.off
value: 1
- key: nmi_watchdog
value: 0
- key: audit
value: 0
- key: processor.max_cstate
value: 1
- key: intel_pstate
value: disable
- key: isolcpus
value: 8-{{tca.node.vmNumCPUs}}
- key: skew_tick
value: 1
- key: irqaffinity
value: 0-7
- key: selinux
value: 0
- key: enforcing
value: 0
- key: nohz
value: 'on'
- key: nohz_full
value: 8-{{tca.node.vmNumCPUs}}
- key: rcu_nocb_poll
value: 1
- key: rcu_nocbs
value: 8-{{tca.node.vmNumCPUs}}
- key: idle
value: poll
- key: default_hugepagesz
value: 1G
- key: hugepagesz
value: 1G
- key: hugepages
value: 17
- key: intel_iommu
value: 'on'
- key: iommu
value: pt
- key: kthreads_cpu
value: 0-7
- key: clock
value: tsc
- key: clocksource
value: tsc
- key: tsc
value: reliable

custom_packages:
- name: pciutils
version: 3.6.2-1.ph3
- name: tuned
version: 2.13.0-3.ph3
- name: linuxptp
version: 2.0-1.ph3
additional_config:
- name: tuned
value: '[{"name":"vendor01-du"}]'
file_injection:
- source: file
content: ../Artifacts/scripts/tuned.conf
path: /etc/tuned/du/tuned.conf
- source: file
content: ../Artifacts/scripts/cpu-partitioning-variables.conf
path: /etc/tuned/cpu-partitioning-variables.conf
- source: file
content: ../Artifacts/scripts/realtime-variables.conf
path: /etc/tuned/realtime-variables.conf
network:
devices:
- deviceType: sriov
networkName: F1U
resourceName: ani_netdevice_du_f1u
dpdkBinding: igb_uio
- deviceType: sriov
networkName: F1C
resourceName: ani_netdevice_du_f1c
- deviceType: sriov
networkName: FH
resourceName: ani_netdevice_du_fh
dpdkBinding: vfio-pci
- deviceType: sriov
networkName: MGMT
resourceName: ani_netdevice_du_mgmt
count: 6
passthrough_devices:
- device_type: NIC
pf_group: ptp
caas_components:
- name: sriov
type: cni
interfaces:
Vnflcm:
instantiate_start:
implementation: ../Artifacts/workflows/DU-Preinstantion-WF.json
description: Configure DU using a configmap
inputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartInputParameters
USERNAME: ''
PASSWORD: ''
IP: ''
NAD_FILE: ''

K8S_NAMESPACE: ''
outputs:
type: >-
tosca.datatypes.nfv.VMware.Interface.InstantiateStartOutputParameters
nsCreateResult: ''
copyNADResult: ''
nadCreateResult: ''
du-helm-chart:
type: tosca.nodes.nfv.Vdu.Compute.Helm.du-helm-chart
properties:
name: du-helm-chart
description: Chart for DU
chartName: du-helm-chart
chartVersion: 1.3.4761
helmVersion: v3
id: du-helm-chart-1.0
configurable_properties:
additional_vnfc_configurable_properties:
type: >-
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.du-helm-chart
Input Yaml: ''
F1U: 'cellsite-F1U'
F1C: 'cellsite-F1C'
MGMT: 'cellsite-mgmt'
FH: 'cellsite-FH'
policies:
- policy_scale:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: scale
interface_type: operation
isEnabled: true
- policy_workflow:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: workflow
interface_type: operation
isEnabled: true
- policy_reconfigure:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: reconfigure
interface_type: operation
isEnabled: true
- policy_update:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: update
interface_type: operation
isEnabled: true
- policy_upgrade:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade
interface_type: operation

isEnabled: true
- policy_upgrade_package:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: upgrade_package
interface_type: operation
isEnabled: true
- policy_instantiate_start:
type: tosca.policies.nfv.SupportedVnfInterface
properties:
interface_name: instantiate_start
interface_type: workflow
isEnabled: true

Download a Network Function Package


You can download a network function package in the CSAR format to your local drive.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 Select the desired network function and click Download.

4 Select a location in your local drive and save the CSAR package.

Edit Network Function Catalog


You can edit the properties such as general properties, topology, infrastructure requirements,
workflows, and resources of a network function.

When you edit and update a network function package, the CSAR upgrades to comply with the
latest SOL001 standards.

Edit Network Function Catalog General Properties


You can edit the general properties of a Network Function catalog and update it, or save the
catalog as a new version.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the general properties:

n Click the desired Network Function catalog and select the General Properties tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click the General Properties tab.

b Click Edit.

4 To save the changes and work on the general properties later, click Save.

5 To apply the changes to the current version, click Update Package.

6 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit Network Function Topology


Edit a Network Function topology diagram.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the topology:

n Click the desired Network Function catalog and select the Topology tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click the Topology tab.

b Click the edit icon on the VDU or virtual link.

4 For more information about designing Network Function descriptors, see Designing a
Network Function Descriptor.

5 To save the changes and work on the topology later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the package as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit Infrastructure Requirements


Edit the infrastructure requirements of a Network Function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the infrastructure requirements:

n Click the desired Network Function catalog and select the Infrastructure Requirements
tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click Edit.

b Select the Infrastructure Requirements tab.

4 For more information about using the Infrastructure Requirements Designer, see
Infrastructure Requirements Designer.

5 To save the changes and work on the infrastructure requirements later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit Scaling Policies


Edit the scaling policies of a Network Function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the scaling policies:

n Click the desired Network Function catalog and select the Scaling Policies tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click Edit.

b Select the Scaling Policies tab.

4 For more information about using the scaling policies, see Scaling Policies.

5 To save the changes and work on the scaling policies later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit Network Function Rules


Edit the Affinity rules of a Network Function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the Affinity rules:

n Click the desired Network Function catalog and select the Rules tab.

n Click the Options menu (⋮) against the Network Function, click Edit, and select the Rules
tab.

4 For information about adding Affinity rules, see Create Affinity Rules.

5 To save the changes and work on the rules later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit Workflows


Edit the life cycle event workflows of your Network Function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the workflows:

n Click the desired Network Function catalog and select the Workflows tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click Edit.

b Select the Workflows tab.

4 For more information about designing workflows, see Designing Workflows.

5 To save the changes and work on the workflows later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Edit the Network Function Catalog Source Files


You can edit the source files of a network function catalog and update it, or save the catalog as a
new version.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 To edit the source files:

n Click the desired Network Function catalog and select the Resources tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Function.

a Click the Resources tab.

b Click Edit.

4 To save the changes and work on the source files later, click Save.

5 To apply the changes to the current version, click Update Package.

6 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Function catalog are saved appropriately.

Enhanced Platform Awareness


Enhanced Platform Awareness (EPA) delivers carrier-grade, low-latency data plane performance.
VMware technologies including CPU pinning, Non-Uniform Memory Access (NUMA) placement,
HugePages support, and SR-IOV support allow VNFs to maintain high network performance.

If you have configured VMware Integrated OpenStack as your VIM, you can define certain EPA
attributes for increasing the performance capabilities of your VNFs. You can provide attribute
values that are higher than the default value.

Some of the attributes that you can define are:

n Compute Performance Attributes:

n CPU Pinning

n NUMA Topology Awareness

n Memory Page Size (HugePage)

n Data Plane Performance Attributes:

n SR-IOV

VMware Integrated OpenStack supports NUMA aware placement on the underlying vSphere
platform. This feature provides low latency and high throughput to Virtual Network Functions
(VNFs) that run on telecommunications environments. To achieve low latency and high
throughput, it is important that vCPUs, memory, and physical NICs that are used for VM traffic
are aligned on the same NUMA node.

You can enable the EPA capabilities on a VDU using the Network Function Descriptor on
VMware Telco Cloud Automation. For more information, see Design a Virtual Network Function
Descriptor.

Add Cloud-init Script and Key to a VDU


You can customize a VDU in your Network Function by adding Cloud-init scripts. You can also
provide a VMware Integrated OpenStack key to the VDU.

Add the Cloud-init script and the key by editing the NFD.yaml file. Perform the following steps:

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Go to Catalog > Network Function.

3 Select the desired Network Function, click the ⋮ menu, and click Edit.

4 Click the Resources tab and click Edit (pencil icon) against NFD.yaml.

5 Under the VDU properties, update the values of key_name and user-data under the boot_data
property.

Example: VDU with Key and Cloud-init Script


vdu1:
type: tosca.nodes.nfv.Vdu.Compute.vdu1
properties:
name: vdu1
description: vdu1
vdu_profile:
min_number_of_instances: 1
max_number_of_instances: 1
sw_image_data:
name: photon-curl
version: '1'
checksum:
algorithm: sha-256
hash: hash
container_format: bare
disk_format: qcow2
min_disk: 4 GiB

size: 4 GiB
boot_data:
content_or_file_data:
data:
key_name: xkey
user-data: |
#!/bin/bash
#this is a test
touch /tmp/abc.log

Role-based Access Control to CNFs


As a system administrator, you can provide permissions to users for accessing CNFs in a specific
cluster and restrict access to any other clusters.

Users are assigned roles and each role has specific permissions. With role-based access control,
you can restrict access to authorized users that have the required permissions to perform
operations on the CNF.

For example, a user having the Network Function Deployer role can view all CNFs but can
perform life cycle management operations only on permitted CNFs.

Remotely Access CNFs Using kubeconfig


You can remotely access CNFs from your local system by downloading the kubeconfig file.
This file contains the endpoint IP address of the server and the token for establishing a REST
connection between VMware Telco Cloud Automation and the CNF.

Prerequisites

1 Ensure that you have installed an external SSH client on your local system.

2 Ensure that Kubernetes CLI Tool (kubectl) is installed on your local system.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Inventory > Network Function and select the CNF.

3 Click the ⋮ menu and select Download Kube Config.

This action downloads the kubeconfig.yaml file to your local system.

4 Use kubectl with the downloaded kubeconfig.yaml file to establish a remote connection with the CNF, as in the example below.
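
For example, assuming the file was saved as kubeconfig.yaml in the current directory and the CNF runs in a namespace named cnf-namespace (both are placeholders), the following commands list the pods and Helm releases of the CNF:

# List the pods in the CNF namespace using the downloaded kubeconfig
kubectl --kubeconfig ./kubeconfig.yaml get pods -n cnf-namespace

# List the Helm releases in the same namespace (requires the Helm CLI)
helm --kubeconfig ./kubeconfig.yaml list -n cnf-namespace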

Access a Remote CNF Using an External SSH Client


You can generate login credentials from VMware Telco Cloud Automation and use an external
SSH client to log in to the CNF.

Prerequisites

Ensure that you have installed an external SSH client on your local system.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Inventory > Network Function and select the CNF.

3 Click the ⋮ menu and select Show Login Credentials.

VMware Telco Cloud Automation generates a one-time token, user name, and password.

Note The expiration time for the token is eight hours.

4 Use these login credentials to access the CNF and perform operations based on your user
privileges.

Access a Remote CNF Using the Embedded SSH Client


You can access a CNF using the embedded SSH terminal within VMware Telco Cloud Automation.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Inventory > Network Function and select the CNF.

3 Click the ⋮ menu and select Open Terminal.

Results

A terminal opens and VMware Telco Cloud Automation connects with the CNF. You can now
perform operations based on your user privileges.



Managing Network Function Lifecycle Operations 14
Using VMware Telco Cloud Automation, you can instantiate, heal, scale in or out, run a workflow,
and terminate a network function. You can also operate, update, and upgrade packages.

This chapter includes the following topics:

n Instantiating a Network Function

n Heal an Instantiated Network Function

n Scale an Instantiated VNF

n Scale an Instantiated CNF

n Operate an Instantiated Network Function

n Run a Workflow on an Instantiated Network Function

n Terminate a Network Function

n Hiding Columns in Network Function Inventory

n Retry, Rollback, and Reset State

n Reconfigure a Container Network Function

n Updating CNF Repository from Chartmuseum to OCI

Instantiating a Network Function


After you upload or create a network function, you can instantiate it in your virtual infrastructure.

Overriding Tags
You can override tags when objects are not compatible with each other. For example, if you have
a cloud with a CNF tag and you want to instantiate a network function catalog with the VNF
tag, you can override the tag. On the Select Cloud pop-up window, expand Advanced Filters,
deselect the CNF tag, and click Apply.

Note When you override a tag, you are explicitly bypassing the system validations and verifying
the success yourself.

Instantiate a Virtual Network Function


To instantiate a VNF, follow the steps listed in this section. Starting from VMware Telco Cloud
Automation version 1.9, you can use vApp templates for VMware Cloud Director based clouds.

Prerequisites

n Upload or create a network function.

n Upload all required images and templates to your vCenter Server instance.

n To use a vApp template, upload the required vApp template to its corresponding catalog in
VMware Cloud Director.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 Select the desired VNF and click Instantiate.

The Create Network Function Instance page is displayed.

4 In the Inventory Detail tab, enter the following information:

n Name - Enter a name for your network function instance.

n Description - Provide a description.

n Select Cloud - Select a cloud from your network on which to instantiate the network
function.

Note You can select the node pool only if the network function instantiation requires
infrastructure customization and the required CSAR file is already included.

1 On the Select Cloud, select the cloud and click Next.

2 On the Select Node Pool, select the node pool and click Next.

3 On the Customization Required page, review the node configuration. If you do not require node customization, then in the Advanced Settings, select Skip Node Customization.

4 Click OK to save the changes.

For a VMware Cloud Director based cloud, you can use either a vApp template or a template
from the vSphere Client. For a vSphere based cloud, you can only select a vSphere Client
template.
n Select Compute Profile - Select a compute profile from the drop-down menu. For
allocating compute profiles for each VDC, click Advanced Configuration.

n Select Storage Profile (Optional) - Select a specific storage profile from the list of storage
profiles that are defined in the compute profile.

n Prefix (Optional) - Enter a prefix. All entities that are created for this VNF are prefixed
with this text. Prefixes help in personalizing and identifying the entities of a VNF.

n Instantiation Level - Select the level of instances to create. The default level is 0.

n Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.

n Templates - You can select the templates for the network function instantiation from the
following options:

n vApp - To use vApp templates, select this option and select the appropriate catalog
from VMware Cloud Director in Select Catalog.

n VNF - To use a single vApp template, select this option and select the appropriate
catalog from VMware Cloud Director in Select Catalog.

n vCenter - To use the existing VM template available in vCenter, select this option.

n Select Catalog - The option appears when you select vApp or VNF in Templates. You can
use this option to select the appropriate catalog from VMware Cloud Director.

n Grant Validation - When you deploy a VNF, Grant validates whether the required images
are available on the target system. It also validates whether the required resources such
as CPU, memory, and storage capacity are available for a successful deployment. To
configure Grant, go to Advanced Settings > Grant Validation and select one of the
options:

n Enable: Run validation and feasibility checks for the target cloud. Fail fast if the
validation fails.

n Enable and Ignore: Run validation and feasibility checks for the target Kubernetes
cluster. Ignore failures.

n Disable: Do not run validation or feasibility checks for the target cloud.

Note
1 When selecting VNF as templates, Grant Validation fails if:

n The number of VMs inside the vApp template and the number of VDUs inside the VNF do not match.

n The names of the VMs inside the vApp template and the names of the VDUs inside the VNF do not match.

2 The image_name property is ignored if VNF is selected as template.

3 vApp networks in the vApp template are retained if the vApp network name matches
with the virtual internal network name defined in the VNF.

4 If there is no match for the vApp network name, the vApp network is deleted.

n Auto Rollback - The Auto Rollback option determines whether the Helm release and Kubernetes resources are retained when a failure occurs. To configure Auto Rollback, go to Advanced Settings > Auto Rollback and select one of the options:

n Enable: During failure, do not retain Helm release and Kubernetes resources.

n Disable: During failure, retain Helm release and Kubernetes resources for debugging.

5 Click Next.

6 In the Network Function Properties tab, the Connection Point Network Mapping table lists
the details of all the VDUs and connection points that are available:

a To map a network to the VDU, click the Options (…) button against the VDU and select
one of the following options:

n Auto Create Network (For internal connection points only): By default, VMware Telco
Cloud Automation creates an internal network.

n Select Existing Network

You can provide the mapping between connection points and an existing network.
VMware Telco Cloud Automation creates and manages the network.

n Refer From Workflow

This option is available only for pre-instantiated workflows. Use the Refer From Workflow option to refer to a network that is not created or managed through VMware Telco Cloud Automation. It uses the network details obtained from the pre-instantiated workflow to create the VM. For details on external network referencing, see External Network Referencing.

n Map Network to Connection Point <Connection-Point-Name>: Map the network to a specific connection point.

n Map Network to All Connection Points: Map the network to all the external
connection points.

b Click OK.

7 Click Next.

8 The Inputs tab displays the following types of inputs to be provided:

n The required OVF properties for each VDU within the VNF. Depending on the
instantiation level that you have selected, there can be multiple instances deployed for
each VDU. Ensure that you enter the correct information for each VDU.

n The Helm inputs for each Helm chart within a CNF.

n Any pre-workflows or post-workflows that are defined as a part of the Network Function.

Provide the appropriate information and click Next.

9 In the Review tab, review the configuration.

10 Click Instantiate.

Results

VMware Telco Cloud Automation creates the virtual machines and networks required by your
network function on the cloud that you specified. To view a list of all instantiated functions,
select Inventory > Network Function. To track and monitor the progress of the instantiation
process, click the Expand icon on the network function and navigate further. When Instantiated
is displayed in the State column for a network function, it indicates that the instantiation process
is completed successfully and the function is ready to use.

To view more details about an instantiated VNF, go to Inventory > Network Function and click
the VNF. The General Info tab displays all the details about the instantiated VNF.

If you no longer want to use an instantiated network function, click the Options (three dots) icon
and select Terminate. Then select the network function and click Delete.

Instantiate a Cloud Native Network Function


To instantiate a CNF, follow the steps listed in this section.

Prerequisites

n Upload or create a network function.

n Upload all required images and templates to your vCenter Server instance.

Note
n Ensure that all Harbor repository URLs contain the appropriate port numbers such as 80, 443,
8080, and so on.

n Ensure that all the image repository URLs within the values.yaml file contain the appropriate
Harbor port numbers.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Function.

3 Select the desired CNF and click Instantiate.

The Create Network Function Instance page is displayed.

4 In the Inventory Detail tab, enter the following information:

n Name - Enter a name for your network function instance.

n Description - Provide a description.

n Select Cloud - Select a cloud from your network on which to instantiate the network
function. If you have created the Kubernetes cluster instance using VMware Telco Cloud
Automation, select the node pool.

Note You can select the node pool only if the network function instantiation requires
infrastructure customization and the required CSAR file is already included.

1 On the Select Cloud, select the cloud and click Next.

2 On the Select Node Pool, select the node pool and click Next.

3 On the Customization Required page, review the node configuration. If you do not require node customization, then in the Advanced Settings, select Skip Node Customization.

4 Click OK to save the changes.

n Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.

n Grant Validation - When you deploy a CNF, Grant validates whether the required images
are available on the target system. It also validates whether the required resources such
as CPU, memory, and storage capacity are available for a successful deployment. Specific
to CNFs, it downloads the Helm chart and performs a dry run of the operations on
the cluster. If Grant encounters errors, it provides detailed error messages. To configure
Grant, go to Advanced Settings > Grant Validation and select one of the options:

n Enable: Run validation and feasibility checks for the target Kubernetes cluster. Fail
fast if the validation fails.

n Enable and Ignore: Run validation and feasibility checks for the target Kubernetes
cluster. Ignore failures.

n Disable: Do not run validation or feasibility checks for the target Kubernetes cluster.

n Auto Rollback - The Auto Rollback option determines whether the Helm release and Kubernetes resources are retained when a failure occurs. To configure Auto Rollback, go to Advanced Settings > Auto Rollback and select one of the options:

n Enable: During failure, do not retain Helm release and Kubernetes resources.

n Disable: During failure, retain Helm release and Kubernetes resources for
debugging.

5 Click Next.

6 In the Helm Charts tab, enter the following information:

n Namespace - Enter the Kubernetes Cluster namespace.

n Repository URL
n Select Repository URL - If you have added Harbor as the third-party repository
provider, select the Harbor repository URL from the drop-down menu.

n Specify Repository URL - Specify the repository URL. Optionally, enter the user name
and password to access the repository.

7 Click Next.

8 In the Network Function Properties tab, click Next.

9 The Inputs tab displays any instantiation properties. Provide the appropriate inputs and click
Next.

10 In the Review tab, review the configuration.

11 Click Instantiate.

Results

VMware Telco Cloud Automation creates the virtual machines and networks required by your
network function on the cloud that you specified. To view a list of all instantiated functions,
select Network Functions > Inventory. To track and monitor the progress of the instantiation
process, click the Expand icon on the network function and navigate further. When Instantiated
is displayed in the State column for a network function, it indicates that the instantiation process
is completed successfully and the function is ready to use.

To view more details about an instantiated CNF, go to Network Functions > Inventory and click
the CNF. The General Info tab displays all the details about the instantiated CNF.

If you no longer want to use an instantiated network function, click the Options (three dots) icon
and select Terminate. Then select the network function and click Delete.

External Network Referencing


VMware Telco Cloud Automation provides custom workflows to reference external networks.

You can reference externally created networks when creating network functions. When
instantiating a network function, use preinstantiated network workflows to map between
the connection points and external network IDs. When network instantiation starts, the
preinstantiated network workflow obtains the network information, which VMware Telco Cloud
Automation uses for creating the virtual machines.

Ensure that the pre-instantiation workflow returns the correct Network ID. Every unique network that is used as part of the VNF must have a unique output in the pre-instantiation workflow.

The value of the Network ID (output field) must map to any of the following:

n For VMware vSphere (vCenter) based Clouds

n MoRef (Managed Object Reference ID) of a Standard Portgroup. For example: network-26.

n MoRef (Managed Object Reference ID) of a Distributed Virtual Portgroup. For example: dvportgroup-39.

n MoRef (Managed Object Reference ID) of an NSX-T segment within vCenter. For example: network-o45554.

n For VMware Cloud Director (vCD) based Clouds

n vCD UUID of a Routed Org VDC Network. For example: a36b7c8d-1a2a-477e-884b-44ac5b735f9b.

n vCD UUID of a Direct Org VDC Network. For example: 3b77e367-fa9e-4eba-b590-765afc0bbec6.

n vCD UUID of an Isolated Org VDC Network. For example: f978b866-395f-435a-940d-4f0b9e10b203.

n For VMware Integrated OpenStack (VIO) based Clouds

n UUID of the Provider or Tenant network to which the VMs connect. For example: 46947191-e484-4dcc-adea-3b31a850a7d1.

For the detailed procedure on network referencing and instantiation, see Instantiate a Virtual Network Function.

Heal an Instantiated Network Function


If a network function instance does not operate as expected, you can heal it by either rebooting
or recreating the network function.

Prerequisites

Instantiate the network function.

Note This action is not supported on a Cloud Native Network Function (CNF).

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Options (three dots) icon for the desired network function and select Heal.

4 In the Heal page, enter a reason for healing the network function.

5 Select whether to restart or recreate the network function and click Next.

6 In the Inputs tab, enter the input variables required for starting and stopping the heal
function. Provide any required inputs appropriately. Click Next.

7 Review the configuration and click Finish.

Results

The instantiated network function is restarted or recreated.

To view relevant information and recent tasks, click the Expand (>) icon on the network function.

Scale an Instantiated VNF


You can scale your network function in or out by aspect or instantiation level.

Prerequisites

Note
n Scale aspects and minimum and maximum values cannot be identified for network functions
that are imported from a partner system. For these network functions, you must enter the
valid values manually.

n The scale to level feature is not supported for network functions that are imported from a
partner system.

n You can set the instantiation scale when instantiating a Virtual Network Function (VNF).

Verify that the network function descriptor for the instantiated network function includes scaling
aspects. Network functions without scaling aspects cannot be scaled.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 To scale a network function by aspect, perform the following steps:

a Click the Options (three dots) icon for the desired network function and select Scale.

b In the Scale tab, select the aspect to scale.

c Drag the scroll bar to select the number of scaling steps to perform. The default number of steps is 0.

d Click Next.

e In the Inputs tab, enter the input variables required for starting and ending the scale.
These credentials are required for running a workflow.

f Click Next.

g In the Review tab, review your configuration and click Finish.

4 To scale a network function by instantiation level, perform the following steps:

a Click the Options (three dots) icon for the desired network function and select Scale To
Level.

b Select whether to scale the entire network function or only certain aspects.

c Select the desired scale level and click Next.

d In the Inputs tab, enter the input variables required for starting and ending the scale to
level. Provide any required inputs appropriately.

e Click Next.

f Review the configuration and click Finish.

What to do next

To view relevant information and recent tasks, click the Expand (>) icon on the network function.

Scale an Instantiated CNF


You can scale an instantiated CNF by uploading a descriptor YAML file with the new Helm Chart
values.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the ⋮ icon against the CNF you want to scale, and select Scale.

4 In the Scale tab, click Browse and upload the YAML file that contains the Helm Chart values.

5 Click Next.

6 In the Inputs tab, enter the appropriate properties.

7 Click Next.

8 In the Review tab, review the YAML file and click Finish.

Results

The CNF uses the new Helm values from the YAML file to scale accordingly.
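
The contents of this file depend on the Helm charts that make up your CNF. As an illustrative sketch only, a values override that raises a replica count might look like the following; the key shown is a placeholder and must match the keys that your chart actually defines:

# Hypothetical Helm values override for the chart being scaled
replicaCount: 3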

Operate an Instantiated Network Function


To change the power state of a network function, use the Operate life-cycle operation. This
operation powers on or powers off the VDUs belonging to a network function. For the stop
operation, you can either perform a forceful stop or a graceful shutdown.

Prerequisites

Instantiate the network function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Options (three dots) icon for the desired network function and select Operate. You
can also click the network function and select Actions > Operate.

4 In the Operate dialog box, change the power state to Started or Stopped.

5 If you select Stopped, select one of the following options:

n Forceful Stop - Powers off the VDUs.

n Graceful Stop - Shuts down the guest operating systems of the VDUs. Optionally, enter
the Graceful Stop Timeout time in seconds.

6 Click OK.

Results

The VDUs in the instantiated network function power on or power off according to your selection.

Run a Workflow on an Instantiated Network Function


You can run a workflow on a network function instance that contains one or more interfaces.

Prerequisites

For information about workflows and interfaces, see Designing Workflows.

n Instantiate your network function that contains one or more interfaces.

n To run a vRealize Orchestrator workflow, you must register vRealize Orchestrator with
VMware Telco Cloud Automation Control Plane (TCA-CP). For more information, see the
VMware Telco Cloud Automation Deployment Guide.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Options (three dots) icon for the desired network function and select Run a
Workflow.

4 Select the desired workflow and click Next.

5 Enter the required parameters for the workflow.

6 Review the configuration and click Run.

What to do next

To view relevant information and recent tasks of a network function, click the Expand (>) icon on
the network function.

Terminate a Network Function


When you select Terminate on a network function, the underlying workloads are deleted from
VMware Telco Cloud Automation.

Prerequisites

The network function must be instantiated.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Options (three dots) icon for the desired network function and select Terminate.

VMware Telco Cloud Automation checks for inputs based on the workflows that you added
for the catalog. If there are any inputs, you can update them here.

4 Click Finish after adding the inputs, if any.

Results

The network function is terminated.

To view relevant information and recent tasks, click the Expand (>) icon on the network function.

Hiding Columns in Network Function Inventory


You can customize the layout of the network function inventory.

You can hide or show the columns displayed in the network function inventory.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Toggle icon at the left end of the table.

4 Select the check boxes corresponding to the column names in the Show Columns pop-up menu.

The system shows only the selected columns.

Retry, Rollback, and Reset State


When a network function instance is in an error state, you can re-instantiate it, roll it back to its
uninstantiated state, or reset it.

When a network function becomes unavailable due to a pre-instantiation error or a post-instantiation error, the Retry, Rollback, and Reset State options appear.

n Retry - This option retries the network function instantiation operation from its current failed
state. If the Retry operation does not succeed, the network function instance goes back to
the Not Instantiated - Error state.

n Rollback - This option rolls the instantiated network function instance back to its
uninstantiated state. VMware Telco Cloud Automation cleans up any deployed resources and
the network function instance changes to Not Instantiated - Rolled Back state.

n Reset State - This option resets the network function instance to its last known successful
state. The network function instance goes back to Not Instantiated - Completed state and
does not work as expected if you re-instantiate it. Ensure that you delete this instance and
clean up any deployed resources.

Note The Retry, Rollback, and Reset State options are not available for CNF upgrade
operations.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Inventory > Network Function.

3 Click the Options (three dots) icon for the desired network function and select one of the following:

n Retry

n Rollback

n Reset State

4 To confirm, click OK.

Reconfigure a Container Network Function


You can reconfigure one or more VDUs of a CNF instance by uploading the override parameters
as a YAML file or providing the repository URL corresponding to each of the VDUs (Helm
deployment) that is to be updated.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function.

3 Click the Options (three dots) icon corresponding to the CNF you want to reconfigure and
select Reconfigure.

4 Select one of the following from the Reconfigure drop-down list.

n Helm Override: Select this option to override Helm parameters by providing a YAML file.

n Click Browse and upload the override parameters YAML file.

n Helm Repository: Select this option to update the repository URL of one or more Helm
charts of a CNF instance.

n Repository URL: Select or type the repository URL from where you want to fetch the
chart.

n Both: Select this option to update both the Helm properties and the repository URLs of one or more Helm charts of the CNF instance.

For more information on updating the repository, see Updating CNF Repository from Chartmuseum to OCI.

5 In the Inputs tab, enter the appropriate properties and click Next.

6 In the Review tab, review the YAML file and/or the repository URL and click Finish.

Updating CNF Repository from Chartmuseum to OCI


From VMware Telco Cloud Automation version 2.3 onwards, Open Container Initiative (OCI) repositories are supported. If you deployed CNF instances using ChartMuseum charts in previous releases, you can update those CNF instances to use OCI repositories.

Note
n If your CNF is using ChartMuseum, and Harbor is upgraded to a version that does not support ChartMuseum, then the CNF LCM operations fail. In such a scenario, you are alerted with the message, “CNF is not upgraded to OCI-based helm charts, all consecutive CNF LCM operations may fail.”

n After the ChartMuseum Helm charts are migrated to OCI, you can update the CNF Helm repository.

Before updating the CNF repository from Chartmuseum to OCI, you must perform the following (a command-line sketch follows this list):

n Convert the ChartMuseum charts to the OCI format.

n Upload the OCI charts to a Harbor.

n Define one or more partner systems in TCA for OCI repositories.
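
A minimal command-line sketch of the conversion and upload steps, assuming Helm 3.8 or later and a hypothetical Harbor instance harbor.example.com with a project named cnf-charts:

# Log in to the Harbor OCI registry
helm registry login harbor.example.com

# Package the chart source directory into a .tgz archive (chart name and version are placeholders)
helm package ./my-chart

# Push the packaged chart to the OCI repository in Harbor
helm push my-chart-1.0.0.tgz oci://harbor.example.com/cnf-charts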

To update the CNF repository from Chartmuseum to OCI, perform one of the following:

n Reconfigure the CNF to point to the OCI repository instead of ChartMuseum. See Reconfigure
a Container Network Function.

n Upgrade the CNF to point to the OCI registry. See Upgrade a CNF.

Note Harbor with OCI repositories is listed with an oci:// URI, and Harbor with ChartMuseum repositories is listed with an https:// URI.

After updating the CNF repository from Chartmuseum to OCI, ensure the following:

n The CNF reconfigure or upgrade operation is successful.

n The alarm is cleared on the TCA instance, which indicates that all the CNF charts are pointing
to the OCI repositories.

Recommendations
n If Harbor is upgraded in place, reconfigure or upgrade the CNF while ChartMuseum is still supported by the Harbor instance. This ensures that the charts are available in both ChartMuseum and OCI repositories, so that auto rollback can run if the CNF upgrade or reconfigure fails.

n If Harbor is upgraded by creating a new Harbor instance, retain the existing Harbor version
until the CNF migration is completed.



Managing Network Service Catalogs 15
A network service is a combination of network functions that run together. After configuring your
network functions, you can upload network service descriptors or design new network service
descriptors. You can then perform network service life-cycle operations such as instantiate, heal,
monitor, and terminate.

This chapter includes the following topics:

n Onboarding a Network Service

n Download a Network Service Package

n Edit Network Service Catalog

Onboarding a Network Service


Onboarding a network service includes uploading a network service package to the catalog, and
creating or editing a network service descriptor draft.

Upload a Network Service Package


Using VMware Telco Cloud Automation, you can upload a SOL001/SOL004 compliant network
service descriptor and cloud service archive (CSAR) package. The system parses and validates
the configuration, and presents the topology in a visual viewer. It then persists the entry into the
network services catalog.

Prerequisites

n Add a cloud to your virtual infrastructure.

n Add any required network functions to your cloud.

n Verify that your network service descriptor complies with the following standards:

n Must be in the CSAR format.

n Must comply with the SOL001 or SOL004 standard.

n Must comply with TOSCA Simple Profile in YAML version 1.2 or TOSCA Simple Profile for
NFV version 1.0.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service and click Onboard.

The Onboard Network Service page is displayed.

3 Select Upload Network Service Package.

4 Enter a name for your network service.

5 Click Browse and select the network service descriptor (CSAR) file.

6 Click Upload.

Results

The specified network service is added to the catalog. You can now instantiate the network
service.

What to do next

n To instantiate the network service, see Instantiate a Network Service.

n To obtain the CSAR file corresponding to a network service, select the function in the catalog
and click Download.

n To remove a network service from the catalog, first terminate and delete all instances using
the network service. Then select the service in the catalog and click Delete.

Design a Network Service Descriptor


Using the Network Service Designer, you can compose a compliant network service template.
A network service descriptor is a deployment template that describes a network service's deployment and operational requirements. It is used to create a network service on which life-cycle management operations are performed.

Prerequisites

n Add a cloud to your virtual infrastructure.

n Add network functions to your cloud.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service and click Onboard.

The Onboard Network Service page is displayed.

3 Select Design Network Service Descriptor.

4 Enter a unique name for your network service and click Design.

The Network Service Designer page is displayed.


5 In the Network Service Catalog Properties pane, enter the following information:

n Descriptor ID - Enter the descriptor ID.

n Designer - Enter the company name of the designer.

n Version - Enter the product version.

n Name - Enter the name of the descriptor.

n Invariant ID - Enter the invariant ID that is unique to the descriptor.

n Flavor ID - Enter the unique ID for the new flavor.

6 (Optional) Add one or more workflows to your network service.

You can add custom workflows using vRealize Orchestrator. For information about adding
custom workflows, see Designing Workflows.

a Click Add Workflow and select the desired workflow from the drop-down menu:

n Instantiate Start

n Instantiate End

n Heal Start

n Heal End

n Scale Start

n Scale End

n Scale To Level Start

n Scale To Level End

n Terminate Start

n Terminate End

n Custom

b Click Browse and upload a Workflow Engine in the JSON format.

c Enter any input and output variables specified in your script and select whether they are
required.

7 Click Update.

You can modify these settings later by clicking Edit Network Service Catalog Properties in
the Network Service Designer.

8 You can drag Virtual Network Functions (VNFs), Cloud-Native Network Functions (CNFs),
VNFs that are part of a Specialized Virtual Network Function Manager (SVNFM), and
networks (NS Virtual Link) to the design area. You can also drag other Network Service
catalogs to your Network Service to create a Nested Network Service.


9 On each network function and virtual link, click the Edit (pencil) icon to configure additional
settings.

VNF

n Name - Name of the network function.

n Description - Description about the network function.

n External Connection Points - Virtual link for each external connection point.

n Depends On (Optional) - Specify the VNF or CNF to be deployed before deploying this
VNF. In a scenario where you deploy many VNFs and CNFs, there can be dependencies
between them on the order in which they are deployed. This option enables you to
specify their deployment order.

CNF

n Name - Name of the network function.

n Description - Description about the network function.

n Depends On (Optional) - Specify the VNF or CNF to be deployed before deploying this
CNF. In a scenario where you deploy many VNFs and CNFs, there can be dependencies
between them on the order in which they are deployed. This option enables you to
specify their deployment order.

VNFs From SVNFM

VMware Telco Cloud Automation auto-discovers VNFs that are part of an SVNFM registered
as a partner system, and lists them in the catalog. You can use these VNFs for creating a
Network Service Catalog.

n Name - Enter the name of the SVNFM.

n Description (Optional) - Description about the SVNFM.

n Depends On (Optional) - Specify the VNF, SVNFM, or CNF to be deployed before deploying this SVNFM. In a scenario where you deploy many SVNFMs, VNFs, and CNFs,
there can be dependencies between them on the order in which they are deployed. This
option enables you to specify their deployment order.

Nested Network Services

n Name - Name of the nested network service.

n Description - Description about the nested network service.

Virtual Links

n Network name

n Description

n Protocol


When you have finished modifying the settings of an item, click Update.

10 After adding and configuring all the necessary items, click Upload.

If you want to save your work and continue later, click Save as Draft.

Results

The specified network service is added to the catalog. You can now instantiate the service.

What to do next

n To obtain the CSAR file corresponding to a network service, select the service in the catalog
and click Download.

n To remove a network service from the catalog, select the service in the catalog and click
Delete.

Edit Network Service Descriptor Drafts


If you have saved a draft in the Network Service Designer, you can modify or delete the draft
later.

Prerequisites

You must have created and saved a network service descriptor using the Network Service
Designer.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service and click Onboard.

3 Select Edit Network Service Descriptor Drafts.

4 Locate the desired draft in the table.

5 To modify the draft, click the Edit (pencil) icon. To remove the draft, click the Delete icon.

Delete a Network Service


You can delete a network service from the catalog.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 Select the desired network service and click Delete.

4 Confirm the action by clicking OK.

Results

The network service is removed from the catalog.


Download a Network Service Package


You can download a network service package in the CSAR format to your local drive.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 Select the desired network service and click Download.

4 Select a location in your local drive and save the CSAR package.

Edit Network Service Catalog


You can edit the properties such as general properties, topology, workflows, and resources of a
Network Service.

When you edit and update a Network Service package, the CSAR is upgraded to comply with the
latest SOL001 standards.

Edit Network Service Catalog General Properties


You can edit the general properties of a Network Service catalog and update it, or save the
catalog as a new version.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 To edit the general properties:

n Click the desired Network Service catalog and select the General Properties tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Service.

a Click the General Properties tab.

b Click Edit.

4 To save the changes and work on the general properties later, click Save.

5 To apply the changes to the current version, click Update Package.

6 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Service catalog are saved appropriately.


Edit Network Service Topology


Edit a Network Service topology diagram.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 To edit the topology:

n Click the desired Network Service catalog and select the Topology tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Service.

a Click the Topology tab.

b Click the edit icon on the Network Function or virtual link.

4 For more information about designing Network Service descriptors, see Design a Network
Service Descriptor.

5 To save the changes and work on the topology later, click Save.

6 To apply the changes to the current version, click Update Package.

7 To save the package as a new version, click Save As New.

Results

Changes to the Network Service catalog are saved appropriately.

Edit Network Service Workflows


Edit the life cycle event workflows of your Network Service.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 To edit the workflows:

n Click the desired Network Service catalog and select the Workflows tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Service.

a Click Edit.

b Select the Workflows tab.

4 For more information about designing workflows, see Designing Workflows .

5 To save the changes and work on the workflows later, click Save.


6 To apply the changes to the current version, click Update Package.

7 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Service catalog are saved appropriately.

Edit the Network Service Catalog Source Files


You can edit the source files of a Network Service catalog and update it, or save the catalog as a
new version.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 To edit the source files:

n Click the desired Network Service catalog and select the Resources tab.

a Click the edit icon.

n Click the Options menu (⋮) against the Network Service.

a Click the Resources tab.

b Click Edit.

4 To save the changes and work on the source files later, click Save.

5 To apply the changes to the current version, click Update Package.

6 To save the catalog as a new version, click Save As New.

Results

Changes to the Network Service catalog are saved appropriately.



Managing Network Service Lifecycle Operations 16
You can instantiate, run a workflow, or terminate your network service instance.

This chapter includes the following topics:

n Instantiate a Network Service

n Run a Workflow on a Network Service

n Heal a Network Service

n Terminate a Network Service

Instantiate a Network Service


After you upload or create a network service catalog, you can instantiate it in your virtual
infrastructure.

Prerequisites

n Upload or create a network service catalog.

n Register any VIMs required by the network service.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Service.

3 Select the desired network service and click Instantiate.

n If you have saved a validated configuration that you want to replicate on this network
service, click Upload on the top-right corner and upload the JSON file. The fields are then
auto-populated with this configuration information and you can edit them as required.

n If you want to create a network service configuration from the beginning, perform the
next steps.

4 Enter the following details:

a Name - Enter a name for your Network Service instance.

b Description (Optional) - Enter an optional description for your Network Service.


c Prefix (Optional) - Enter a prefix. All entities that are created for this Network Service
are prefixed with this text. Prefixes help in personalizing and identifying the entities of a
Network Service.

d Use vApp Template(s) - To use a vApp template, select this option and select the
appropriate catalog from VMware Cloud Director.

e Tags (Optional) - Select the key and value pairs from the drop down menus. To add more
tags, click the + symbol.

5 In the Preview Network Service tab, enter a name for the service, an optional description,
review its design, and click Next.

6 In the Deploy Network Function tab, select a cloud on which to include each network
function in the network service.

Note For a VMware Cloud Director based cloud, you can use either a vApp template or a
template from the vSphere Client. For a vSphere based cloud, you can only select a vSphere
Client template.

7 Click Next.

8 In the Configure Network Functions tab, click the Edit (pencil) icon on each of the network
functions or Nested Network Service catalogs.

a For a Nested Network Service, select a pre-deployed Network Service from the existing
list of Network Services. This list is automatically curated based on the deployed
instances of the Nested Network Service catalog.

Note You can only select pre-instantiated Network Service instances for a Nested
Network Service.

b To deploy a new Network Function, click Instantiate New.

n Optionally, to select a pre-deployed network function from an existing list of VNFs that are deployed and ready for instantiation, click Select Existing. VMware Telco
Cloud Automation auto-discovers VNFs that are part of an SVNFM registered as a
partner system, and lists them in the catalog. You can use these VNFs for creating a
Network Service Catalog.

n These Network Functions are curated automatically based on the deployed instances
and the selected Cloud.

n Instantiated Network Functions that are connected to other network services are not
displayed in this list.

c In the Inventory Detail tab, select the desired compute profile, select the instantiation
level, and click Next.

d In the Network Function Properties tab, select or edit an internal or external network,
and click Next.


e In the Inputs tab, provide the required inputs appropriately and click Next.

f In the Review tab, review your configuration and click Finish.

Note You cannot add a deployment profile or select an internal or an external link on a CNF.

9 In the Instantiate Properties tab, enter the values for any required properties and click Next.

10 In the Review tab, review your configuration. You can download this configuration and reuse
it for instantiating a network service catalog with a similar configuration. Click Instantiate.

Results

VMware Telco Cloud Automation creates the network functions required by your network service
on the clouds that you specified. To view a list of all instantiated functions, select Network
Services > Inventory. To track and monitor the progress of the instantiation process, click the
Expand icon on the network service and navigate further. When instantiated is displayed in
the State column for a network service, it indicates that the instantiation process is completed
successfully and the service is ready for use.

What to do next

To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.

If you no longer want an instantiated network service, click the Options (three dots) icon and
select Terminate. Then select the network service and click Delete.

Run a Workflow on a Network Service


You can run a workflow on a network service instance that contains one or more interfaces.

Prerequisites

n Instantiate your network service that contains one or more interfaces.

n To run a vRealize Orchestrator workflow, you must register vRealize Orchestrator with
VMware Telco Cloud Automation Control Plane (TCA-CP). For more information, see the
VMware Telco Cloud Automation Deployment Guide.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Service.

3 Click the Options (three dots) icon for the desired network service and select Run a
Workflow.

4 Select the desired network service or network function workflow and click Next.

5 Enter the required parameters for the workflow.

6 Review the configuration and click Run.


What to do next

To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.

Heal a Network Service


If your Network Service does not work as expected, you can heal it by running a set of
workflows. These workflows are designed to perform some pre-defined corrective actions on
the Network Service and are pre-packaged when designing the Network Service catalog.

Heal a Network Service.

Prerequisites

n Upload or create a network service.

n Register any VIMs required by the network service.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Service.

3 Click the ⋮ (vertical ellipsis) icon against the Network Service that you want to heal and select
Heal.

In the Heal page, you can either select the Network Service radio button or the Network
Function radio button. Selecting Network Function displays the associated Network
Functions in the Network Service. Select the relevant Network Functions to heal. In this
example, we heal a Network Service.

4 Select the Network Service radio button.

5 In the Select a Workflow tab, select one of the pre-defined types of healing from the Degree
Healing drop-down menu. This option is required for auditing purposes.

6 Select the pre-packaged workflow that is used for healing the Network Service and click
Next.

7 In the Inputs tab, enter the properties of the workflow such as user name, password, host
name, Network Service command, and VIM location.

8 Click Next.

9 In the Review tab, review the changes and click Heal.

Results

The Network Service begins to heal. To view its progress, go to Inventory > Network Service
and expand the Network Service.


Terminate a Network Service


When you Terminate a network service, the underlying workloads are deleted from VMware
Telco Cloud Automation.

Prerequisites

The network service must be instantiated.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Service.

3 Click the Options (three dots) icon for the desired network service and select Terminate.

VMware Telco Cloud Automation checks for inputs based on the workflows that you added
for the catalog. The Finish button is then displayed.

4 Click Finish.

Results

The network service is terminated.

To view the relevant information and recent tasks, click the Expand (>) icon on the desired
network service.



Upgrading Network Functions and Network Services 17
VMware Telco Cloud Automation allows you to make minor software updates and major package
and component upgrades to your network functions and network services. You can then map
your upgraded VNFs, CNFs, and Network Services to the latest version in the Catalog.

You can perform the following upgrades or updates:

Update (Software Update)

You can perform software updates only on CNFs. When you perform a software update, it
changes the reference of the CSAR from the instance to point to a newer version of the
CSAR. Consider an example where you find a bug in the Helm chart of a deployed CNF
instance. When you perform a software update, you patch the updated Helm chart image to
the deployed CNF instance and the version of the CNF can remain the same or can change.
From a CSAR perspective, you can perform a software update across CNF versions that do
not have any model related updates.

Upgrade (Component Upgrade)

Upgrade applies only to CNFs. It changes the reference of the CSAR from the instance to
point to a newer version of the CSAR. When you perform an upgrade, it provides a detailed
view of the updates.

Consider an example where your original CSAR file consisted of two Helm charts:

n AMF 1.1.0

n SMF 1.1.0

The new CSAR file contains three Helm charts:

n AMF 2.5

n UPF 2.3

n NRF 2.4

When you perform a component upgrade, VMware Telco Cloud Automation performs the
following tasks on the CNF instance that is running:

1 AMF is upgraded from 1.1.0 to 2.5.

2 SMF is deleted because it is not present in the new CSAR file.


3 UPF is new and it is instantiated with 2.3.

4 NRF is new and it is instantiated with 2.4.

Note Keep the component (VDU) names consistent across all the CSAR versions that are subject
to a CNF update or upgrade.
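
The comparison between the two CSAR versions is driven by the chart (VDU) names. The following
sketch restates the example above in a simplified form; it is illustrative only and does not
represent the actual CSAR schema.

original_csar:
  charts:
    - {name: AMF, version: 1.1.0}
    - {name: SMF, version: 1.1.0}
new_csar:
  charts:
    - {name: AMF, version: "2.5"}   # same name, so AMF is upgraded in place
    - {name: UPF, version: "2.3"}   # new name, so UPF is instantiated
    - {name: NRF, version: "2.4"}   # new name, so NRF is instantiated
# SMF does not appear in new_csar, so the SMF component is deleted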

Upgrade Package

Upgrade Package applies to VNFs, CNFs, and Network Services. Performing a package
upgrade changes the reference of the CSAR from the instance to point to a newer version
of the CSAR. It does not impact the current running instance in any way and no software or
model is updated. However, the workflows available for running can change.

The following table lists the type of upgrades and updates you can perform for VNFs, CNFs, and
Network Services.

Table 17-1. Type of Upgrades

Network Function/Service    Package Upgrade    Software Update    Component Upgrade
VNF                         Yes                No                 No
CNF                         Yes                Yes                Yes
Network Service             Yes                No                 No

This chapter includes the following topics:

n Upgrade a VNF Package

n Upgrade a CNF Package

n Upgrade a CNF

n Upgrade Network Service Package

Upgrade a VNF Package


Upgrade your VNF package and map it to the latest version in the Catalog.

Prerequisites

You must be a System Administrator or a Network Function Deployer to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function and select the VNF to upgrade.

3 Click the ⋮ symbol against the VNF and select Upgrade Package.


4 In the Upgrade Package screen, select the new VNF catalog to upgrade your VNF. The
descriptor version changes accordingly to the selected catalog.

Note Only those VNF catalogs that have the same software provider and product name are
displayed.

5 Click Upgrade.

Results

Your VNF is upgraded to the selected catalog version. The VNF instance now displays the
upgraded catalog name in the Network Functions > Inventory tab.

Upgrade a CNF Package


The Upgrade Package operation modifies artifacts in the CSAR package such as workflows,
scripts, and certificates. These artifacts take effect during the next CNF life cycle management
operation. As an exception, if you have updated the late-binding customizations, the changes
take effect immediately with the Upgrade Package operation.

Prerequisites

You must be a System Administrator or a Network Function Deployer to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function and select the CNF to upgrade.

3 Click the ⋮ symbol against the CNF and select Upgrade Package.

4 In the Upgrade Package screen, select the new CNF catalog to upgrade to. The descriptor
version changes accordingly to the selected catalog.

Note Only those CNF catalogs that have the same software provider and product name are
displayed.

5 Click Upgrade.

Results

Your CNF is upgraded to the selected catalog version. The CNF instance now displays the
upgraded catalog name in the Network Functions > Inventory tab.

Upgrade a CNF
Upgrade the software version, descriptor version, components, repository details, instantiation
properties, and Network Function properties of your CNF and map them to the newer version in
the Catalog.


If the existing Helm Chart requires a software upgrade, the system upgrades the software
version of the CNF instance. If the existing CNF instance is not present in the new catalog, you
can map the current CNF instance to a new Helm Chart. If you do not make a selection, then the
existing CNF instance is removed from the Workload Cluster.

Note If there is an issue during the CNF instance update or upgrade operations, you can resolve
the issue based on the error message and trigger the update or upgrade operation again. For
example, during the upgrade operation, if an image is missing in the Harbor repository and the
operation fails due to this, you can upload the missing image and retry the upgrade operation.
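
As an example of resolving the missing-image case, you can push the image into the Harbor project
that the CNF uses and then retry the upgrade. The registry host names, project, image name, and
tag below are placeholders:

docker pull registry.example.com/vendor/upf:2.3
docker tag registry.example.com/vendor/upf:2.3 harbor.example.com/my-cnf-project/upf:2.3
docker login harbor.example.com
docker push harbor.example.com/my-cnf-project/upf:2.3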

Prerequisites

You must be a System Administrator or a Network Function Deployer to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Function and select the CNF to upgrade.

3 Click the ⋮ symbol against the CNF and select Upgrade.

4 In the Upgrade Revision tab, select the software version and Descriptor version to upgrade
to.

5 In the Components tab, select the upgraded components to be included in your CNF.

6 In the Inventory tab, select the repository URL from the drop-down menu, or specify the
repository.

For more information on updating the repository, see Updating CNF Repository from
Chartmuseum to OCI

7 In the Inputs tab, update the instantiation properties, if any.

8 In the Network Function Properties tab, review the updated model. You can download or
delete Helm Charts from the updated model.

9 In the Review tab, review the updates.

Results

Your CNF is upgraded to the specified properties.

Upgrade Network Service Package


Upgrade your Network Service package and map it to the latest version in the Catalog.

Prerequisites

You must be a System Administrator or a Network Service Deployer to perform this task.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Service and select the Network Service to upgrade.

3 Click the ⋮ symbol against the Network Service and select Upgrade Package.

4 In the Upgrade Package screen, select the new Network Service catalog and descriptor
version to upgrade your Network Service.

5 Click Upgrade.

Results

Your Network Service is upgraded to the selected catalog version.



Retry or Rollback Cloud Network Function Upgrades 18
When upgrading a Cloud Network Function (CNF), you can either retry or roll back the operation
if the upgrade fails.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Click the ⋮ symbol corresponding to the CNF which failed to upgrade and select one of the
following:

a Retry: Select this option to continue the upgrade operation from the failed step.

b Rollback: Select this option to roll back the CNF to its previous state. All the charts, VDUs,
and values of the CNF are restored.

Note The Retry and Rollback options are available for selection only when the CNF upgrade
fails and the Auto Rollback option is deactivated.



5G Network Slicing Concepts 19
5G Network Slicing is a technology and an architecture that enables service providers to
create on-demand, isolated, and end-to-end logical networks, running on a shared and common
infrastructure.

These custom overlay networks are associated with specific business purposes following a set
of predefined Service Level Agreements (SLAs), Quality of Service indicators (QoS), security, and
regulatory requirements. 5G Network Slicing provides a standard way of managing and exposing
network resources to the end-user (the UE) while assuring the delivered slice's purpose and
characteristics (for example, throughput, latency, geographical location, isolation level, and so
on).

Note On a VM-based environment, the network slicing feature is disabled by default. To use the
feature:

n Contact VMware customer support to enable the feature.

n Enable the network slicing service on the VMware Telco Cloud Automation Manager user
interface.

Adopt VMware Telco Cloud Automation for Network Slicing


VMware Telco Cloud Automation offers, as a tech-preview feature in the current version, a 3GPP
standard-compliant Network Slicing management layer (including the CSMF, NSMF, and NSSMF)
that enables users to plan, design, and instantiate end-to-end network slices across the Radio
Access Network (RAN) and the mobile core network domains. The addition of network slicing to
VMware Telco Cloud Automation enables service providers to unify automation across domains
and to close the gap between the services consumed at the endpoints and the network resources,
physical or cloud, that those services require.

Technical Overview
VMware Telco Cloud Automation follows the 3GPP Network Slicing Management architecture,
which comprises the following components:

n Communication Service Management Function (CSMF), which acts as the interface towards
service order management and Operations Support Systems (OSS).


n Network Slice Management Function (NSMF), which manages the life cycle of the end-to-end
slice across the network domains: Radio Access Network (RAN), 5G Core network, and the
transport network.

n Network Slice Subnet Management Function (NSSMF), which manages the lifecycle of
the Network Slice subnets within a network domain and applies the NSMF’s life cycle
management commands (For example, instantiate, scale, heal, terminate). There can be more
than one NSSMF in a network.

Use Cases
Network Slicing allows service providers to create the new breed of services that their customers
expect: services that use network resources more efficiently through fine-grained differentiation,
and that are delivered on demand and securely.

Differentiated network slices based on traffic or Usage Profiles

n Massive Machine Type Communication (mMTC) for large-scale, lower-bandwidth connected
devices.

n Ultra-reliable Low Latency Communication (uRLLC) for critical and latency-sensitive device
connectivity.

n Enhanced Mobile Broadband (eMBB) for high bandwidth applications.

On-demand and SLA-assured network resources

Design, create, and manage the life cycle of on-demand network slices that are defined and
fulfilled under a user created SLA.


Streamline and standardize the way network resources are exposed to the OSS layer and to the
consumers of the network slices

Network Slicing provides a standard framework to design, create, and manage network
resources that can be packaged and exposed directly to the end users.

This chapter includes the following topics:

n Enable Network Slicing

n Deactivate Network Slicing

Enable Network Slicing


The Network Slicing feature and service is disabled by default. To enable the feature and service,
perform the following steps.

Prerequisites

Contact the VMware customer care to activate the license for the Network Slicing before you
enable the Network Slicing on your setup.

Note You must repeat the steps for enabling the Network Slicing every time you upgrade
VMware Telco Cloud Automation to a newer build of the same release.


Procedure

1 To enable the Network Slicing, log in to VMware Telco Cloud Automation Manager as root
using SSH and enable network-slicing.service.

systemctl enable network-slicing.service

Note After enabling, the network-slicing.service boots as part of the VM boot process,
and reboots every time the VM reboots.

2 To start the Network Slicing, use the following command:

systemctl start network-slicing.service

3 (Optional) You can also log in to the appliance management user interface (9443 port) and
start the Network Slicing service.
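
To confirm that the service is enabled and running, you can check its state from the same SSH
session by using standard systemd commands:

systemctl is-enabled network-slicing.service
systemctl status network-slicing.service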

Deactivate Network Slicing


After enabling the Network Slicing feature and service, you can deactivate it. Perform the
following steps.

Procedure

1 To stop the Network Slicing, log in to VMware Telco Cloud Automation Manager as root using
SSH and run the command.

systemctl stop network-slicing

2 To disable the Network Slicing, run the command.

systemctl disable network-slicing

Disabling the service prevents it from starting automatically during the boot process, so the
service does not start after the appliance reboots.

What to do next

After deactivating the Networking Slicing feature, contact VMware Support for deactivating its
license.



Managing Network Slice Catalog 20
You can edit, onboard, and instantiate a network slice catalog.

This chapter includes the following topics:

n Onboarding a Network Slice Template

n Edit a Network Slice Template

n Instantiating a Network Slice Template

Onboarding a Network Slice Template


Create a network slice template.

You can onboard a network slice template. Once you have onboarded a network slice template,
you can then instantiate the template and use the network slice function.

Prerequisites

Ensure that the network functions and the network services that you want to add in the network
slice template are available.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Slicing and select the network slicing function.

3 To onboard a network slice template, click Onboard.

The Create Network Slice Template page is displayed.

4 Enter the required details.

n Name - Name of the network slice template.

n Description - Description of the network slice template.

n Template version - Version number of the network slice template.

5 To create the template, click Create.

6 Add the profile details. For more details on the profile parameters, see Edit a Network Slice
Template.


7 Add the topology details. For more details on the topology, see Edit a Network Slice
Template.

Edit a Network Slice Template


Edit an existing Network Slice Template.

You can view the network slice template. You can edit or delete the existing network slice
template.

Prerequisites

Ensure that you have permission to edit the network slice template.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Slicing and select the network slicing function.

3 Click the Slice Template tab.

4 Click the ⋮ symbol against the network slicing template and select the operation from the list.

n To instantiate the network slice template, select Instantiate.

n To delete the network slice template, select Delete.

n To edit the network slice template, select Edit.

5 To edit the network slice template, select Edit.

The Edit Network Slice Template page is displayed.

6 To view the general properties, click the General Properties tab. It shows the following details:

n Name - Name of the network slice template.

n Description - Description of the network slice template.

n Template version - Version number of the network slice template.

7 To view the profile details, click the Profile tab. The Profile tab shows the following parameters:

n General settings - Shows the general parameters related to the network slice.

n Slice/Service Type - Defines the service type related to a network slice. Select the
value from the drop-down menu.

n 1 - eMBB - Enhanced Mobile Broadband.

n 2 - uRLLC - Ultra Reliable Low Latency Communications.

n 3 - mIOT/mMTC - Massive Internet of Things/Massive Machine Type Communications.

n 4 - custom - Custom service type created by customer.


n Activity Factor - Percentage of simultaneously active user equipment relative to the total active user equipment.

n Allowed Jitter - Maximum allowed time variation.

n Availability - Availability of the network slice service. The value is in percentage.

n Coverage Area - The coverage area of the network slice.

n Latency - Packet transmission latency within the network slice. The value is in milliseconds.

n Maximum Device Speed - Maximum transmission speed that the network slice can
support.

n Maximum no of UEs - Maximum number of user equipment that can simultaneously access the network.

n Network Slice Sharing Indicator - Whether the service can share the network slice.

n Shared - Services can share the network slice.

n Non-shared - Services cannot share the network slice.

n Reliability - Reliability of the network slice. The value is in percentage. Value range is
0 to 100.

n Survival Time - The time interval for which an application can survive without receiving an
expected message. The value is in milliseconds. You can also provide multiple comma-separated
time intervals.

n UE Mobility Level - The mobility level of the user equipment that the network slice
supports.

n Stationary - Network slice supports only stationary user equipment.

n Nomadic - Network slice supports only user equipment which has intermittent
availability.

n Restricted Mobility - Network slice supports user equipment restricted mobility.

n Fully Mobility - Network slice supports user equipment with complete mobility.

n PLMN Information (public land mobile networks) - Shows the parameters related to the
public land mobile networks.

n Mobile Country Code (MCC) - Mobile country code of the country.

n Mobile Network Code (MNC) - Mobile network code of the country.

n S-NSSAI (Single – Network Slice Selection Assistance Information) - Shows the parameters required to uniquely identify a Network Slice.

n Slice/Service Type - The service type associated with the network slice.

n 1 - eMBB - Enhanced Mobile Broadband.

n 2 - uRLLC - Ultra Reliable Low Latency Communications.


n 3 - mIOT/mMTC - Massive Internet of Things/Massive Machine Type Communications.

n 4 - custom - Custom service type created by customer.

n Slice Differentiator - The value used to differentiate between network slices. If the
parameter is not needed, set the value to FFFFFF.

n Delay Tolerance - Shows the parameters related to delay requirements in a network slice.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Deterministic Communication - This attribute defines if the network slice supports deterministic communication for periodic UE traffic. Periodic traffic refers to the type of traffic with periodic transmissions.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.


n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Availability - This parameter describes if the network slice supports deterministic communication.

n Periodicity List - This parameter provides a list of periodicities supported by the network slice. This parameter must be present when the "Availability" is set to Supported.

n Downlink Throughput per Network Slice - This attribute relates to the aggregated data
rate in downlink for all UEs together in the network slice.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.


n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Guaranteed Downlink Throughput per Network Slice - Minimum required download speed that the network slice provides.

n Maximum Downlink Throughput per Network Slice - Maximum download speed that
the network slice provides.

n Downlink Throughput per Network Slice ( for UE) - This attribute describes the maximum
data rate supported by the network slice per UE in downlink.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Guaranteed Downlink Throughput Per UE per Slice - Minimum required download speed for each active user equipment that the network slice provides.

n Maximum Downlink Throughput Per UE per Slice - Maximum download speed for
each active user equipment that the network slice provides.


n KQIs and KPIs

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n List of KQIs and KPIs - Name of the list of KPIs and KQIs to monitor the performance
of the network.

n Maximum Number of Connections per Slice

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.


n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Maximum Number of Concurrent Sessions - Maximum number of simultaneous sessions that the network slice can support.

n Maximum Supported Packet Size - This attribute describes the maximum packet size
supported by the network slice.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Maximum Packet Size - Maximum packet size allowed in the network slice.


n Overall User Density - This attribute describes the maximum number of connected and/or
accessible devices per unit area (per km2) supported by the network slice.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Overall User Density - The user device density that the network slice can handle. The value
is the number of users per square kilometer.

n Uplink Throughput per Network Slice - This attribute relates to the aggregated data rate
in uplink for all UEs together in the network slice (this is not per UE).

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.


n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n Guaranteed Uplink Throughput per Slice - Minimum required upload speed that the
network slice provides.

n Maximum Uplink Throughput per Slice - Maximum upload speed that the network
slice provides.

n Uplink Throughput per Network Slice per UE - This attribute describes the maximum
data rate supported by the network slice per UE in uplink.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.


n Guaranteed Uplink Throughput Per UE per Slice - Minimum required upload speed
for each active user equipment that the network slice provides.

n Maximum Uplink Throughput Per UE per Slice - Maximum upload speed for each
active user equipment that the network slice provides.

n User Management Openness - This attribute describes the capability for the NSC
to manage their users or groups of users’ network services and corresponding
requirements.

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.

n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n User Management Openness Support - Whether the network slice allows the NSC to
manage the users or groups of users.

n V2X Communication Models (Vehicular-to-Everything) - Shows the parameters related to the exchange of information between vehicle and infrastructure (V2I) or between vehicles (V2V).

n Category - Criteria to categorize the network slice through common attributes.

n Character - Characterize a slice. For example throughput, latency, Application Program Interfaces, etc.

n Scalability - Provide information about the scalability of the slice. For example
number of UEs.


n Tagging - You can use the tags as labels attached to the attributes to give additional
information about the nature of each attribute. Each attribute could have multiple
tags. The following tags apply to the character attributes:

n Performance - Specify the KPIs supported by a slice. Performance-related attributes are relevant before the slice is instantiated.

n Function - Specify functionality provided by the slice. Function-related attributes are relevant before the slice is instantiated.

n Operation - Specify which methods are provided to the NSC in order to control
and manage the slice. These attributes are relevant after the slice is instantiated.

n Exposure - The way the attributes interact with the network slice consumer can be
used for tagging:

n API - These attributes provide an API in order to get access to the slice
capabilities.

n KPI - These attributes provide certain performance capabilities, for example throughput and delay.

n V2X Communication Mode - Whether the network slice supports V2X mode.

n Supported - Network slice supports V2X mode.

n Not Supported - Network slice does not support V2X mode.

8 To configure the topology, click Topology tab.

The Topology tab shows the following details of the network function associated with the
network slicing function:

n Network Slice Subnet - It represents the management aspects of a set of Managed Functions and the required resources.

n General Settings - You can modify the parameters related to the general settings of
the network slice template.

n PLMN Information - You can modify the parameters related to PLMN parameters of
the network slice template.

n Performance Requirements - You can modify the parameters related to the performance settings of the network slice template.


n S-NSSAI - You can modify the parameters related to the Single – Network Slice
Selection Assistance Information of the network slice template.

n Network Slice Structure - It represents the network functions and network services
associated with the network slice.

n Add Subnet - You can create a subnet within a network slice. You can add separate
network functions and network services to each subnet.

a To create a subnet in a network slice, click Add Subnet. Add the following details
on the Create Network Slice Subnet Template page.

n Name - Name of the network slice subnet.

n Description - Description of the network slice subnet.

b Click Add to save the details.

n Add Descriptor - It represents the network functions and network services associated
with the network slice.

a To add a network function or a network service, click Add Descriptor and select Add Network Service or Add Network Function.

When you select Add Network Service, the Select Network Service For Network
Slice Template page appears. When you select Add Network Function, the Select
Network Function For Network Slice Template page appears.

b Select the network service or the network function that you want to add and click
Select.

Instantiating a Network Slice Template


After you upload or create a network slice template, you can instantiate it.

Prerequisites

You have an existing network slice template.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Catalog > Network Slice.

3 Click the ⋮ symbol against the network slice template and select the operation from the list.

The Create Network Slice Service Order page appears.

4 Enter the required details:

n Network Slice Name - Name of the network slice.

n Description - Description of the network slice.


n Customer Name - Name of the customer for which you want to instantiate the network
slice.

5 To finish the instantiation of the network slice, click Create.

What to do next

Edit a Network Slice Service Order

Managing Network Slicing
Lifecycle Operations 21
You can activate, deactivate, deallocate, and edit a network slice.

This chapter includes the following topics:

n Network Slice Function Operations

n Edit a Network Slice Service Order

Network Slice Function Operations


You can perform operations on the network slice function.

You can perform the following operations:

n Activate a network slice function.

n Deactivate a network slice function.

n Deallocate a network slice function.

Prerequisites

Ensure that you have permission to edit the network slice function.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Slicing and select the network slice function.

3 Click the ⋮ symbol against the network slicing function and select the operation from the list.

n To deactivate the network slice function, select Deactivate.

n To activate the network slice function, select Activate.

n To deallocate the network slice function, select Deallocate.

What to do next

Edit a Network Slice Service Order.


Edit a Network Slice Service Order


Edit an existing network slice service order.

You can view the network slice service order. You can edit or delete the existing network slice
service order.

Prerequisites

Ensure that you have permission to edit the network slice service order.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Inventory > Network Slicing and select the network slice function.

3 Click the Service Order tab.

4 Click the ⋮ symbol against the network slice service order and select the operation from the
list.

n To delete the service order, select Delete.

n To edit the service order, select Edit.

5 To edit the service order, select Edit.

The View Network Slice page appears.

6 To view the general properties, click General Properties tab. It shows the following details:

n Network Slice Name

n Description

n Customer Name

n Network Function/Service Inventory

7 To view the task details, click Tasks tab.

8 To view the profile details, click the Profile tab. The Profile tab shows the following parameters:

n General settings

n PLMN Information (public land mobile networks)

n Mobile Country Code (MCC)

n Mobile Network Code (MNC)

n S-NSSAI (Network Slice Selection Assistance Information)

n Slice/Service Type

n Slice Differentiator


n Delay Tolerance

n Category

n Tagging

n Exposure

n Support

n Deterministic Communication

n Category

n Tagging

n Exposure

n Availability

n Periodicity List

n Downlink Throughput per Network Slice

n Category

n Tagging

n Exposure

n Guaranteed Downlink Throughput per Network Slice

n Maximum Downlink Throughput per Network Slice

n Downlink Throughput per Network Slice (for UE)

n Category

n Tagging

n Exposure

n Guaranteed Downlink Throughput Per UE per Slice

n Maximum Downlink Throughput Per UE per Slice

n KQIs and KPIs

n Category

n Tagging

n Exposure

n List of KQIs and KPIs

n Maximum Number of Connections per Slice

n Category

n Tagging

n Exposure


n Maximum Number of Concurrent Sessions

n Maximum Supported Packet Size

n Category

n Tagging

n Exposure

n Maximum Packet Size

n Overall User Density

n Category

n Tagging

n Exposure

n Overall User Density

n Uplink Throughput per Network Slice

n Category

n Tagging

n Exposure

n Guaranteed Uplink Throughput per Slice

n Maximum Uplink Throughput per Slice

n Uplink Throughput per Network Slice per UE

n Category

n Tagging

n Exposure

n Guaranteed Uplink Throughput Per UE per Slice

n Maximum Uplink Throughput Per UE per Slice

n User Management Openness

n Category

n Tagging

n Exposure

n User Management Openness Support

n V2X Communication Models (Vehicular-to-Everything)

n Category

n Tagging

n Exposure


n V2X Communication Mode

9 To view the deployment configuration, click Deployment Configuration tab.

The Deployment Configuration tab shows the following details of the network function
associated with the network slice function:

n General Settings

n PLMN Information

n Performance Requirements

n S-NSSAI

For details of all the parameters, see Edit a Network Slice Template.

Telco Cloud Automation
Workflows 22
In Telco Cloud Automation, workflows are used to automate tasks that are otherwise executed
manually by users. A workflow is a collection of steps that can be executed manually or
automatically. The purpose of the workflow is to define a collection of administrative actions
that implement a single task. For example, a workflow can configure a network function after its
resources are allocated to the cloud.

Workflows are used to automate the following:

n Tasks during Network Function (NF) and Network Service (NS) life cycle management
operations - If the inputs of the LCM operation need to be dynamically computed or the
task needs to be carried out by initializing scripts on the virtual machines, HELM, Kubernetes
PODs, jobs, and operators, workflow supplements the end-to-end configuration of the NS
and NF. This type of workflow is automatically executed as part of the LCM pre-operation or
post-operation (before or after the resources of the NF / NS are manipulated). For example, a
pre-operation is executed to determine the HTTP proxy used by the NF and a post-operation
is executed to register the NF.

n Tasks that are operator specific - Allows the operator to automate tasks that are not part of
LCM operation or are fully operator designed. For example, draining traffic from a selected
NF.

Workflow Architecture
The following diagram illustrates the workflow architecture.


You can create workflows by using the TCA user interface. The embedded workflow designs are
available in the Network Function and Network Service packages as raw files until the package
is onboarded, and the workflow catalog entries are created from these raw files at the time of
onboarding.

You can execute a workflow as part of the NF/NS Lifecycle Management operation or through
the VMware Telco Cloud Automation user interface. After the workflow execution intent is
created, the workflow executor evaluates the intent and executes it. To carry out the execution,
the executor on TCA-M needs to contact the executors distributed on the TCA-CP instances.
The executor on TCA-M either contacts the external systems directly or through vRealize
Orchestrator (vRO). The choice between the two alternatives depends on the step to be
executed. Distributed execution is necessary not only from the scaling perspective but also because of network connectivity constraints.

Networking Connectivity
Network connectivity is required so that the workflow executor can carry out the various tasks.
The following diagram illustrates the network connectivity border conditions.

(Diagram: network connectivity border conditions among the system administrator, the normal user, TCA, TCA-CP, vRO, and Kubernetes (K8S API, POD, management VM shell); no connectivity is possible from A to B.)

The system administrators can reach any system (TCA, TCA-CP, Kubernetes, vRO) directly. The
other users have access to TCA-M only. For security reasons, only the unidirectional network model is supported; that is, traffic can be initiated between two entities from one direction only, which minimizes the possible attack surface. Workflow designs consider the
possible network connectivity.

This chapter includes the following topics:

n Aspects of a Workflow

n Managing Workflow Execution


n Role-based access control for workflows

Aspects of a Workflow
A workflow consists of multiple elements, and each element is responsible for describing one
aspect of the workflow. The workflow contains multiple steps which can be interlinked to define
a more complex workflow. The workflow consists of variables, and you can provide the values at
the time of execution.

The following diagram illustrates the relationship between the various elements of a workflow.

(Diagram: a workflow contains inputs, variables, and steps.)

The top-level element is the workflow itself which contains a few mandatory elements.

The following table lists the mandatory elements.

Workflow Element Description

Name Name of the workflow. It is a non-empty value that you can use in RBAC filters.

Version Version of the workflow definition. This version is not used by VMware Telco Cloud Automation but helps the users to distinguish between the different workflow variants.

schemaVersion The version of the workflow used by VMware Telco Cloud Automation. This version configures the features available for one workflow. The possible values are:
n 3.0: Current
n 2.0: Deprecated
n 1.0: Deprecated

Note The support for 2.0 and 1.0 variants will be removed in the subsequent VMware Telco Cloud Automation releases.


id The id of the workflow is a conditionally mandatory element. The id is applicable only if the workflow is embedded within an NF/NS package, where the identifier is used to match the pairs of workflows during the NF/NS upgrade scenarios.
Sample code snippet:

{
"id": "testCrud1"
}

Step A step is an action of the workflow that interacts with an external system. The information model is classified into two parts: design-time and runtime. The elements in design-time represent the definition of the workflow, and the elements in runtime represent a concrete execution of the workflow.

Following is the sample code snippet for the workflow elements Name, Version, and
SchemaVersion:

{
"name": "Apache instantiate start workflow",
"version": "1.0 version",
"schemaVersion": "3.0",
...
}

Data Types
Data types are used in various parts of the workflow to define the type of a particular element.
Values that are not conforming to a specific data type cause design or runtime validation errors.

The following table lists the supported data types.

Data Type Description

String A UTF-8 character sequence. A string format can have the following values:
n String: A character sequence without line breaks
n Text: A character sequence with line breaks (\n)
n Password: A base64 encoded value

Number A 32-bit integer.

Password A base64 encoded string.

Boolean A logical value (true or false).

File Name of the file.

vimLocation Identifier of the VIM.

virtualMachine Name of the VDU on which the step is executed.

Text A text box that accepts a character sequence with line breaks.

Input
The workflow may optionally contain inputs that pass on the read-only information to a workflow
at the time of execution. Inputs define the possible inputs (mandatory and default values)
and their characteristics. Most workflows require a certain set of input parameters to run. For
example, if a workflow resets a virtual machine, then the workflow needs to define an input with
a virtual machine data type to allow the caller to control which virtual machine to restart.

The following table lists the fields of an input:

Input Field Description

Type A mandatory data type of the input.

Format An optional format of the data type.

Description An optional non-empty description of the input.

DefaultValue An optional default value of the input.

Required An optional flag that indicates whether the input is mandatory (defaults to false).

Note If an input is required, a value must be provided for it. Inputs with a default value are not required.

Following is the sample code snippet for inputs:

{
"inputs": {
"in1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special input 1"
},
"in2": {
"type": "string",
"defaultValue": "myInput"
},
"in3": {
"type": "string",


"required" : true
},
"in4": {
"type": "string",
"format" : "text"
},
"in5": {
"type": "boolean",
"defaultValue" : true
},
"in6": {
"type": "number",
"defaultValue" : 123
},
"in7": {
"type": "number",
"defaultValue" : 123.4
},
"in8": {
"type": "password",
"description" : "do not use deprecated",
"defaultValue" : "bXlTZWNyZXQ="
},
"in9": {
"type": "boolean",
"defaultValue" : true
},
"in10": {
"type": "file",
"defaultValue" : "myAttachmentName.txt"
},
"in4": {
"type": "vimLocation",
"defaultValue" : "25a1a262-715b-11ed-a1eb-0242ac120002"
},
"in4": {
"type": "virtualMachine",
"defaultValue" : "myVduName"
}
},

}

Output
The output of a workflow is the result of workflow execution. Output parameters are set during
the execution of the workflow. Output can be used by various LCM processes. For example, a
pre-workflow that determines the location of the Network Repository Function (NRF) passes it
on as an instantiation parameter (HELM input value) to the Access and Mobility Management
Function (AMF).

The following table lists the various fields of an output.


Output Field Description

Type A mandatory data type of the output.

Format An optional format of the data type.

Description An optional non-empty description of the output.

The format of the outputs is identical to the inputs. However, in outputs, the attributes are not
available, and only string, number, boolean, and virtual machine data types are used.

Following is the sample code snippet for outputs:

{
"outputs": {
"output1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special output 1"
},
"output2": {
"type": "boolean",
"defaultValue" : true
},
"output3": {
"type": "number",
"defaultValue" : 123
},
"output4": {
"type": "virtualMachine",
"defaultValue" : "myVduName"
}
},

}

Variables
The workflow may also contain variables. Variables are very similar to inputs. However, variables
are not read-only and may change at the time of workflow execution. The variables may
have user-defined input values or may be automatically assigned based on the context of the
workflow execution. The vnfId variable is automatically assigned if the workflow runs on Network
Function, and if the workflow runs on Network Service, the nsId variable is populated. If these
two variables are defined, they have the string data type.

The following table lists the possible fields of the workflow variables.

Workflow Variable Field Description

Type A mandatory data type of the variable.

Format An optional format of the data type.

Description An optional non-empty description of the variable.

DefaultValue An optional default value of the variable.

Following is the sample code snippet for variables:

{
"variables": {
"variable1": {
"type": "string",
"format": "password",
"defaultValue": "bXlTZWNyZXQ=",
"description" : "My special variable 1"
},
"vnfId": {
"type": "string",
"description": "The identifier of the NF"
},
"nsId": {
"type": "string",
"description": "The identifier of the NS"
}
}

}

Attachments
You can attach binary files or text files in the workflow definition. Attachments are stored in
the workflow catalog, and the maximum file size is 5 MB. Attachments can be added to the
standalone workflow through the UI or automatically assigned to the workflow if the workflow
is embedded in an NF / NS package. If the workflow is embedded in an NS package, the files
located in the Artifacts/scripts directory in CSAR are automatically attached to each workflow
definition.

Therefore, if the CSAR contains the following files, the files are referred to by “script1.sh”,
“script2.sh”, or “subDir/script1.sh” in default values, as sketched in the fragment after this list:

n /Artifacts/scripts/script1.sh

n /Artifacts/scripts/script2.sh

n /Artifacts/scripts/subDir/script1.sh
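
The following fragment is a minimal sketch of how such an attachment might be referenced; the input name configScript and the VRO_SCP step shown here are illustrative assumptions, not part of the package layout above.

{
    "inputs": {
        "configScript": {
            "type": "file",
            "defaultValue": "subDir/script1.sh",
            "description": "Attachment taken from Artifacts/scripts/subDir/script1.sh"
        }
    },
    "steps": {
        "stepId1": {
            "type": "VRO_SCP",
            "inBindings": {
                "inFile": {
                    "type": "file",
                    "exportName": "configScript"
                },
                ...
            },
            ...
        }
    },
    ...
}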

When the workflow is created in the workflow catalog, the administrative elements are assigned
to the workflow. These administrative elements control various operability aspects of the
workflow; for example, RBAC, editability, and so on. You cannot define these attributes in the
workflow template, but the system computes these attributes after the workflow is created.

The following table lists the administrative elements of a workflow.


Administrative Element Description

Owner An entity in VMware Telco Cloud Automation that owns the workflow. The owner is the end user if the workflow is standalone, or the NF/NS package if the workflow is embedded.

CreationTime The time at which the workflow is created.

CreationUser Identifier of the user who created the workflow.

ReadOnly Determines if the workflow can be edited. If the workflow is created internally from the embedded workflow, then it is automatically assigned a false value.

Defining Steps in Workflows


Workflow steps are a sequential set of tasks that the workflow performs. A workflow consists of
multiple steps. A step is a building block of the workflow.

The structure of the step and its relation to the workflow is illustrated in the following diagram.

A combination of multiple steps defines a complex behavior of the workflow. The type of the step is defined through the Type attribute. The ID of the step is the key in the steps structure. You can describe the step in the Description field.

Following is the code snippet for a step definition fragment.

{
"steps": {
"stepId": {

"description" : "Configure Apache on the NF",


"type" : "VRO_SSH",

}
},

}

Workflow Step Types


The workflow steps are a sequential set of tasks. A workflow consists of multiple steps. The
behavior of the step is defined through its type. The type of step determines the purpose of the
step, the input parameters the step requires, and how the output parameters of the step are
processed.


The structure of the step and its relation to the workflow is illustrated in the following diagram.

(Diagram: a step receives values from workflow inputs and variables through inbindings and writes its results to workflow variables and outputs through outbindings.)

This section provides a detailed description of each step type.

Initial step
The initial step of the workflow is defined by the startStepId field of the workflow, and the steps
that follow are defined by the nextStepId field. The workflow ends if the executed step does not have a following step; in this case, the nextStepId field is not present in the step definition.

The link between the steps is illustrated in the following template fragment:

{
"startStepId": "stepId1",
"steps": {
"stepId1": {
nextStepId": "stepId2",
...
},
"stepId2": {
"nextStepId": "stepId3",
...
},
"stepId3": {
...
}
},
...
}

Specifications for step input binding


The step receives the input parameters through its input bindings. An input binding is a source
from which a single-step input takes the value. The input binding consists of the following fields:

n Type: A mandatory data type of the input binding.

n Format: An optional format of the type.

n defaultValue: Default value of the input binding. This value is used if the exportName is not
available or is referring to an element with no value.

n exportName: The name of the input or variable from which the step input takes its value.


Inputs and variables of a step are illustrated in the following template fragment.

{
"inputs": {
"input1" : { … },
"input2" : { … }
},
"variables": {
"variable1" : { … },
"variable2" : { … }
},
"steps": {
"stepId1": {
"inBindings": {
"stepInput1" : {
"description" : "my input value for this step",
"type": "string",
"defaultValue" : "foo",
"exportName" : "input1"
},
"stepInput2" : {
"type": "string",
"defaultValue" : "bar"
},
"stepInput3" : {
"type": "string",
"exportName" : "variable1"
}
}
}
},

}

Using double curly brackets in variables


Default values in step input bindings with string data types can contain double curly brackets. You can specify variable or input names within the curly brackets, and the value of the variable or workflow input is substituted.

The usage of double curly brackets is illustrated in the following sample code snippet:

{
"inputs": {
"input1" : { ... }
},
"variables": {
"variable1" : { ... }
},
"steps": {
"stepId1": {
"inBindings": {
"stepInput1" : {
"type": "string",


"name" : "variable1",
"defaultValue" : "fix{{input1}}_{{variable1}}"
}
}
}
},
...
}

Specification of output bindings


The steps are executed based on the step inputs, and after the step execution is complete, the
step outputs are produced. The processing of the step outputs is controlled by the outBindings
section. The outputs of the step are assigned to the workflow variables and outputs. The key
component in outBindings is the name of the output that the step produces. A step output may
be used several times to populate other entities or may never be used.

Each output binding comprises the following fields:

n type: A mandatory data type of the output produced by the step.


n name: A mandatory name of the variable or output to which the step output is saved.

Note The type of the step determines the available step outputs.

{
"outputs": {
"output1" : { … },
"output2" : { … }
},
"variables": {
"variable1" : { … },
"variable2" : { … }
},
"steps": {
"stepId1": {
"outBindings": {
"stepOutput1" : {
"type": "string",
"name" : "variable1"
},
"stepOutput1" : {
"type": "string",
"name" : "output1"
},
"stepOutput2" : {
"type": "string",
"name" : "output2"
}
}
},


},

}

Specification of conditional next steps


By default, the nextStepId field controls the step that follows. However, if the step contains
a Conditions section, then first, the Conditions section is evaluated before processing the
nextStepId field, which determines the next step. The purpose of the conditions section is to
create decision points in the workflow. A decision point is an opportunity to select the next step
to be executed based on the given conditions. The conditions section contains an ordered list of
conditions. Each condition represents a decision point, and each decision point is evaluated one
after another.

Each condition consists of the following fields:

n name: The name of the workflow input, output, or variable against which the condition is
evaluated.


n comparator: The comparator of the condition.

n value: The optional second operand of the comparator.

n nextStepId: The mandatory identifier of the next step to execute if the evaluation of the condition is true.

The following table lists the possible values of the comparator:

Value String Boolean Number Note

equals x x Is the referred element equal to the second operand?

isDefined x x Does the referred element have a value?

contains x Does the value contain the second operand?

match x Does the value match the regular expression specified in the second operand?

isFalse x Does the referred element have a false value?

isTrue x Does the referred element have a true value?

different x Is the referred element different from the second operand?

greater x Is the referred element greater than the second operand?

greaterOrEquals x Is the referred element greater than or equal to the second operand?

smaller x Is the referred element smaller than the second operand?

smallerOrEquals x Is the referred element smaller than or equal to the second operand?


The definition of conditional statements is illustrated in the following template fragment:

{
"input": {
"input1" : { ... },
"input2" : { ... }
},
"variables": {
"variable1" : { ... },
"variable2" : { ... }
},
"steps": {
"stepId1": {
"conditions": [
{
"name": "input1",
"comparator": "smaller",
"value": 5,
"nextStepId": "stepId1"
},
{
"name": "variable1",
"comparator": "greaterOrEquals",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "isDefined",
"nextStepId": "stepId2"
},
{
"name": "variable2",
"comparator": "equals",
"value": "foo",
"nextStepId": "stepId2"
},
{
"name": "variable2",
"comparator": "match",
"value": "myRegularExpr.*",
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "greater",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "smaller",
"value": 5,
"nextStepId": "stepId2"
},


{
"name": "variable1",
"comparator": "smallerOrEquals",
"value": 5,
"nextStepId": "stepId2"
},
{
"name": "variable1",
"comparator": "greaterOrEquals",
"value": 5,
"nextStepId": "stepId2"
}
]
},
"stepId2" : { ... }
},
...
}

Common inputs for all step types


The name of the input and output bindings of one step is defined by the type of the step.
Each step type consists of different inputs and outputs. Each step, irrespective of its step type,
defines the timeout and initialDelay input bindings. timeout specifies the maximum waiting
time in seconds for the step execution to complete without considering the waiting time for
user interaction. initialDelay binding specifies the waiting time before executing the step. Both
bindings consist of numbers as the data types.

Input and output bindings are illustrated in the following template fragment:

{
"inputs": {
"inputDelay" : { ... }
},
"steps": {
"stepId1": {
"inBindings": {
"timeout": {
"type": "number",
"defaultValue": 123
},
"initialDelay": {
"type": "number",
"exportName": "inputDelay"
}
}
}
},
...
}


Steps that require a vRO instance to interact with a VIM instance define an input binding that
specifies which vRO or VIM instance is to be used. The type of this input binding is vimLocation,
and the name can be anything that is not used by the step. This binding is called location. The
presence of the VIM location is mandatory if the system cannot unambiguously identify it. If the
workflow is executed on an NF, then the VIM is deduced from the location of the NF. However, if
the workflow is not executed on an NF, then you need to specify the location of the VIM.

Following is a sample template fragment:

{
"inputs": {
"vimInput": {
"type": "vimLocation",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SCP",
"inBindings": {
...
"vim": {
"type": "vimLocation",
"exportName": "vimInput"
}
},
...
},
},

}

Workflow Step Input


A step input is the collection of concrete values with which a step instance is executed.

Step Input
A step can be of any of the following types:

n NOOP: Empty step without any action

n VRO_SSH: Execute commands or scripts through an SSH connection

n VRO_SCP: Copy the file to a remote location through SCP

n VRO_CUSTOM: Execute a custom vRO workflow

n VRO_EXEC: Execute a command or script through VM tools

n JavaScript: Execute a Java Script to manipulate inputs

n K8S: Execute a script in a POD with access to the Kubernetes API

n NETCONF: Execute Netconf configuration over a remote target


NOOP
This value is used to create a decision point in a workflow without executing a step that has an
external side effect; for example, connecting a VM through SSH. Regardless of the number of
inputs provided, the inputs are discarded, and no outputs are provided.

The following is a sample code snippet:

{
"inputs": {
"inputMode" : { ... }
},
"startStepId": "stepId1",
"steps": {
"stepId1": {
"type" : "NOOP",
"conditions": [
{
"name": "inputMode",
"comparator": "equals",
"value": "active",
"nextStepId": "stepIdActive"
},
{
"name": "inputMode",
"comparator": "passive",
"value": "active",
"nextStepId": "stepIdPassive"
}
]
},
"stepIdActive": { ... },
"stepIdPassive": { ... }
},
...
}

VRO_SSH
You can use the vRO SSH to execute the SSH commands on external entities, such as NFs
and routers. The SSH connection from the TCP perspective originates from the vRO instance
to the target. Connectivity is established between vRO and the external system. The vRO step
that implements the SSH command execution is called SSH Command and can be inspected by
logging into vRO.

The following table lists the vRO SSH step input values.

Name Type Mandatory Note

cmd string Yes Script to be executed.

hostNameOrIP string Yes IP address or DNS name of the target system.

username string Yes Username to be used.

password string with password format Yes Password to be used.

port number No (defaults to 22) TCP port of the SSH service.

passwordAuthentication boolean No (defaults to true)

encoding string No (defaults to utf-8) Terminal encoding to be used.

If the script to be executed is very long, you can use the vRO SCP action to transfer the script to
be executed to the target.

The usage of the SSH step is illustrated with the following template fragment:

{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"foramt" : "password",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SSH",
"inBindings": {
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"port": {
"type": "number",
"defaultValue": "22"
},
"cmd": {
"type": "string",
"defaultValue": "uptime"
},
"passwordAuthentication": {
"type": "boolean",
"defaultValue": true
},


"hostNameOrIP": {
"type": "string",
"exportName": "target"
},
"encoding": {
"type": "string",
"defaultValue": "utf-8"
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}
},


}

vRO_SCP
You can use the vRO SCP workflow to transfer the file to external systems, such as NF
and router, using the SCP protocol. The vRO SCP workflow is executed using vRO. The SSH
connection from the TCP perspective originates from the vRO instance to the target. Connectivity
is established between vRO and the target system. The vRO workflow that implements the SCP is called File
Upload and can be inspected by logging into vRO.

The following table lists the step and the corresponding input values.

Name Type Mandatory Note

inFile file Yes Name of the attachment to be transferred.

destinationFileName string No (defaults to the name of the attachment) Name of the file to which the file is copied.

workingDirectory string Yes The path where the file is copied.

ip string Yes The IP or DNS name of the target system.

username string Yes The username to be used.


The following workflow template fragment illustrates the usage of the vRO SCP step:

{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"format": "password",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_SCP",
"inBindings": {
"inFile": {
"type": "file",
"defaultValue": "attachmentName.txt"
},
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"destinationFileName": {
"type": "string",
"defaultValue": "foo.txt"
},
"workingDirectory": {
"type": "string",
"defaultValue": "/tmp"
},
"ip": {
"type": "string",
"exportName": "target"
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}


},


}

vRO_CUSTOM
The purpose of the vRealize Orchestrator (vRO) custom workflow tools is to run any vRO
workflow in a Telco Cloud Automation (TCA) workflow. The custom workflow has only one
mandatory input binding called vroWorkflowName. This input binding defines the name of the
custom workflow to be executed. Additional input bindings may be specified to provide input for
the workflow execution in vRO.

The following is a sample code snippet for workflow execution in vRO.

{
"name": "testCustomVro",
"version": "v1",
"schemaVersion": "3.0",
"readOnly": false,
"startStepId": "stepId1",
"inputs": {
"vimInput": {
"type": "vimLocation",
"required": true
}
},
"steps": {
"stepId1": {
"type": "VRO_CUSTOM",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "vimInput"
},
"vroWorkflowName": {
"type": "string",
"defaultValue": "REPLACE_NAME"
},
"vro_in_string": {
"type": "string",
"defaultValue": "in1"
},
"vro_in_integer": {
"type": "number",
"defaultValue": 123
},
"vro_in_double": {
"type": "number",
"defaultValue": 123.4
},
"vro_in_boolean": {
"type": "string",
"defaultValue": true


},
"vro_in_file": {
"type": "file",
"defaultValue": "fileInWorkflow.bin"
}
},
"outBindings": {
"out_string": {
"name": "vro_out_string",
"type": "string"
},
"out_integer": {
"name": "vro_out_integer",
"type": "number"
},
"out_double": {
"name": "vro_out_double",
"type": "number"
},
"out_boolean": {
"name": "vro_out_boolean",
"type": "boolean"
},
"out_file": {
"name": "vro_out_file",
"type": "string"
}
}
}
},
"outputs": {
"out_string": {
"type": "string"
},
"out_integer": {
"type": "number"
},
"out_double": {
"type": "number"
},
"out_boolean": {
"type": "boolean"
},
"out_file": {
"type": "string"
}
}
}

Note The vRO custom step allows you to use a vRO workflow in a TCA workflow. However, the
workflow used must exist in vRO.


vRO_EXEC
VRO_EXEC allows you to execute scripts on virtual machines without having SSH. You must fulfill
the following prerequisites before implementing the step through vRO:

n VM tools should be present in the virtual machine.

n vRO should be integrated with vCenter as the workflow that resides in vRO interacts with the
vCenter API.

If these prerequisites are fulfilled, you can execute the step on virtual machines in vCenter or
vCD.

The step has the following input bindings.

Name Type Mandatory Note

username string Yes The username to log in to the virtual machine.

password string with password format Yes The password to log in to the virtual machine.

vduName virtualMachine Yes The name of the VDU.

script string Yes The script to execute.

scriptType string Yes The type of the script.

scriptTimeout string Yes The maximum time (in seconds) to wait for the script to complete.

scriptRefreshTime string Yes The period in seconds at which the script execution is checked.

scriptWorkingDirectory string Yes The working directory from which the script is executed.

interactiveSession boolean Yes An interactive terminal.

To integrate the vCenter instance in vRO:

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Expand the workflow for which you want to view the vRO instances.

4 In the workflow steps, click below Action.

5 Click Open Session to log in to the vRO orchestration client automatically.

6 Click View Instance.

This navigates you to the execution.


7 In Orchestrator, click Library > Workflows.

8 Click on the right side of the workflow beside the filter box.

9 Click Library > vCenter > Configuration > Add a vCenter Server Instance.

10 Click Run.

11 In the Set the vCenter Server instance properties tab, enter the IP / FQDN of vCenter as it is
registered in TCA-CP without HTTPS.

Note Leave the port, SDK URL, and ignore certificate fields as default. Alternatively, for
newer versions, ensure that ignore certificate is selected.

12 In the Set the connection properties tab, deselect the first option and enter the vCenter
administrator credentials.

13 In the Additional Endpoints tab, retain the default values and click Run.

This integrates the vCenter instance in vRO. You must verify that it is successful.

To verify if the vCenter instance is successfully integrated with vRO:

1 In Orchestrator, click Administration > Inventory.

2 Click vSphere vCenter Plugin.

3 From the list of vSphere vCenter Plugins, click the vSphere vCenter Plugin with the IP / FQDN
that you provided for vRO integration with vCenter and verify if your vCenter is listed, and
you can browse the inventory.

Note If your vCenter is listed and you can browse the inventory, it indicates that
your vCenter instance is successfully integrated with vRO. If the integration fails, see vRO
documentation for detailed information on vRO integration with vCenter.

VRO_EXEC does not require connectivity between vRO and the virtual machine but requires
connectivity from vRO to vCenter. The vRO workflow that implements the step is called Run
Script In Guest and can be inspected by logging in to vRO. The step has the following input
values:

{
"inputs": {
"target": {
"type": "string",
"required": true
},
"password": {
"type": "string",
"format": "password",
"required": true
}
},
"steps": {
"stepId1": {


"type": "VRO_EXEC",
"inBindings": {
"username": {
"type": "string",
"defaultValue": "root"
},
"password": {
"type": "password",
"exportName": "password"
},
"vduName": {
"type": "virtualMachine",
"defaultValue": "myVduName"
},
"scriptType": {
"type": "string",
"defaultValue": "bash"
},
"script": {
"type": "string",
"defaultValue": "uptime"
},
"scriptTimeout": {
"type": "number",
"defaultValue": 12
},
"scriptRefreshTime": {
"type": "number",
"defaultValue": 3
},
"scriptWorkingDirectory": {
"type": "string",
"defaultValue": "/bin"
},
"interactiveSession": {
"type": "boolean",
"defaultValue": false
}
},
"outBindings": {
"out_result": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_result": {
"type": "string"
}
}
}


JavaScript
The JavaScript (JS) step is used to process workflow inputs or variables. The JS step has one mandatory input binding that specifies the script to be executed. This input binding consists of string type and text format. The text format allows you to enter multiline strings as values. The JS step can have any number of additional input bindings that allow values to be passed to the script for processing. The output bindings specify how the results of the script
execution are interpreted.

Input binding

The following is a sample code snippet for script input binding:

{
"inputs": {
"input1" : { ... }
},
"variables": {
"variable1" : { ... },
"variable2" : { ... }
},
"steps": {
"stepId1": {
"type": "JS",
"inBindings": {
"script": {
"type": "string",
"format": "text",
"defaultValue": "…"
},
"myJsInput1": {
"type": "number",
"exportName": "input1"
},
"myJsInput2": {
"type": "number",
"exportName": "variable1"
}
},
"outBindings": {
"jsOutput1": {
"name": "variable1",
"type": "number"
}
}
},
...
}


The script input binding contains the JavaScript to be executed. This script contains plain JavaScript code, and only those features of JavaScript required for data manipulation are used. Therefore, you cannot access external resources such as HTTP connections or files. At the time of
executing JavaScript, the engine searches for the function with the following signature:

function tcaMainV1(workflowExecutionId, stepExecutionId, inputs, variables, startTime){ … }

The engine executes the function and populates the input parameters of the function with the
following values:

n workflowExecutionId: Workflow execution identifier.

n stepExecutionId: Step execution identifier.

n inputs: Mapping of input binding values.

n variables: Mapping of the variables.

n startTime: The timestamp (EPOCS) at which the step execution started.

The function returns a dictionary structure that contains the following fields:

n Aborted: A Boolean value that indicates whether the step execution is aborted.

n Output: The output values of the step, where the key of the map is the name of the value and the value is the computed value.

n Logs: An array of log messages that belong to the step execution. A log entry contains the
following mandatory fields:

n msg: The log message.

n Time: The time of the log message

n Level: The level of the log message that contains one of the following values:

n ERROR

n INFO

n DEBUG

function tcaMainV1(workflowExecutionId, stepExecutionId, inputs, variables, startTime)


{
return {
"aborted": false,
"output": {
" jsOutput1": (inputs.myJsInput1 + inputs. myJsInput2) / 2,
},
"logs": [
{
"msg": "msg1",
"level": "DEBUG",
"time": startTime + 1000
},
{
"msg": "msg2",


"level": "INFO",
"time": startTime + 2000
}
],
}
}

K8S
The purpose of the K8S action is to interact with the Kubernetes API securely. The K8S action
makes it possible to execute scripts in a POD. These scripts can have UNIX commands such as
awk, bash, jq, nc, and sed or commands that interact with the Kubernetes API such as kubectl and
helm. The commands that interact with the Kubernetes API are prepopulated with the Kubernetes
environment, and they work without credentials.

The system automatically allows access to the API to apply the principle of least privilege. This
provides the service account with access only to the relevant network function or VIM.

Note The helm version used during CNF LCM operations may differ from the helm version
available during the execution of the K8S step.

The scope of the step is defined by the following access rights:

n NF_RO: Read-only access to the network function on which the workflow is executed. Read-
only indicates that only REST requests that require get, watch, and list Kubernetes verbs are
allowed, and only the resources that belong to the network function are visible.

Note The associated resources depend on the configuration mode of the policy service.

n NF_RW: Read-write access to the network function on which the workflow is executed. Only
the resources that are associated with the network function are visible.

Note The associated resources depend on the configuration mode of the policy service.

n VIM_RO: Read-only access to every resource on the Kubernetes cluster in which the network
function is hosted or to the VIM, which is selected by the vimLocation input binding of the
step. Read-only indicates that only REST requests that require get, watch, and list Kubernetes
verbs are allowed.

n VIM_RW: Unrestricted access to every resource on the Kubernetes cluster in which the
network function is hosted or to the VIM, which is selected by the vimLocation input binding
of the step.

From the RBAC perspective, the user who initiated the workflow has sufficient privileges to use
the resource with the selected privilege level (read-only/read-write).


You can specify the cluster in which you have created the POD via the target input binding. The
target can have the following values:

n WORKLOAD: The POD starts on the same cluster as the network function or VIM selected
by the vimLocation optional input binding. The workload cluster provides a good distribution
of the used resources as the POD always consumes resources from the network function or
selected VIM.

n MANAGEMENT: The POD starts on the management cluster that manages the VIM of the
network function or the selected VIM. The management clusters should only be used if the
workload cluster has no free capacity to run additional temporary PODs since this solution is
not fully scalable as it is limited by the resources of the workload cluster. Even if the POD is
running on the management cluster, it cannot access the management cluster but can access
the workload cluster.

Optionally, you can specify a node selector to further constrain the location of the PODs. In
this case, you can specify the nodeSelector input binding, which sets kubernetes.io/hostname: <value> as the POD node selector.

Besides these fixed input bindings, you can specify any additional input binding. These additional
input bindings are available as environment variables or files during the step execution.

The following table shows the input bindings.

Name Type Mandatory Note

script string Yes The script to execute.

target string Yes The location of the POD.
n WORKLOAD
n MANAGEMENT

scope string Yes The scope of the execution.
n NF_RO
n NF_RW
n VIM_RO
n VIM_RW

nodeSelector string No The location constraint of the POD.

any any (cannot be a file) No The input required for the script.

any file No The file required for the script.

The runtime environment of the POD where you execute the script has the following properties; a brief usage sketch follows the list.

n Available binaries: awk, bash, jq, head, helm, kubectl, nc, sed, tail.


n Each additional input is available as an environment variable. The name of the environment
variable is the TCA_INPUT_ concatenated with the name of the input binding. For a file, the
location of the file is specified as a value.

n The CNF environment variable is set with the identifier of the network function if the step is
executed within the context of a network function.

n The network service environment variable contains the name of the network function if the
step is executed within the context of a network function.

n TCA_NAMESPACES environment variable contains the comma-separated list of namespaces where the network function resides if the step is executed within the context of a network function.

n CLUSTER_NAME environment variable contains the name of the workload cluster if the script
is executed within the WORKLOAD target.
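
The following fragment is a minimal sketch of a script default value that consumes these environment variables; the additional input binding greeting, the echoed text, and the kubectl command shown here are illustrative assumptions, not prescribed values.

{
    "steps": {
        "step0": {
            "type": "K8S",
            "inBindings": {
                "script": {
                    "type": "string",
                    "format": "text",
                    "defaultValue": "echo \"input=$TCA_INPUT_greeting cluster=$CLUSTER_NAME\"; kubectl get pods -n ${TCA_NAMESPACES%%,*}"
                },
                "greeting": {
                    "type": "string",
                    "defaultValue": "hello"
                },
                "target": {
                    "type": "string",
                    "defaultValue": "WORKLOAD"
                },
                "scope": {
                    "type": "string",
                    "defaultValue": "NF_RO"
                }
            },
            ...
        }
    },
    ...
}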

The following sample code snippet provides an example workflow template fragment for this
step.

{
"inputs": {
"script": {
"type": "string",
"format" : "text",
"required": true
},
"target": {
"type": "string",
"required": true
},
"scope": {
"type": "string",
"required": true
},
"nodeSelector": {
"type": "string",
"required": false
}
},
"outputs" : {
"FINAL_OUTPUT" : {
"type" : "string",
"description" : "Final Output"
}
},
"steps" : {
"step0" : {
"type" : "K8S",
"inBindings" : {
"timeout": {
"type": "number",
"defaultValue" : 60
},


"script" : {
"type" : "string",
"format" : "text",
"exportName" : "script"
},
"inputNumber" : {
"type" : "number",
"defaultValue" : 22
},
"target" : {
"type" : "string",
"exportName" : "target"
},
"scope" : {
"type" : "string",
"exportName" : "scope"
},
"nodeSelector": {
"type" : "string",
"exportName" : "nodeSelector"
},
"file1": {
"type": "file",
"defaultValue" : "file1.txt"
},
"file2": {
"type": "file",
"defaultValue" : "file2.txt"
}
},
"outBindings" : {
"FINAL_OUTPUT" : {
"name" : "result",
"type" : "string"
}
}
}
},

}

The following sample code snippet has a workflow template fragment for running Kubernetes
workflows on a VIM without a network function context.

{
"inputs" : {
"script": {
"type": "string",
"format" : "text",
"required": true
},
"target": {
"type": "string",
"required": true
},


"scope": {
"type": "string",
"required": true
},
"vimId": {
"type": "vimLocation",
"required": true
}
},
"outputs" : {
"FINAL_OUTPUT" : {
"type" : "string",
"description" : "Final Output"
}
},
"steps" : {
"step0" : {
"type" : "K8S",
"inBindings" : {
"script" : {
"type" : "string",
"exportName" : "script",
"format" : "text"
},
"inputNumber" : {
"type" : "number",
"defaultValue" : 22
},
"target" : {
"type" : "string",
"exportName" : "target"
},
"scope" : {
"type" : "string",
"exportName" : "scope"
},
"myVimId": {
"type": "vimLocation",
"exportName" : "vimId"
},
"file1": {
"type": "file",
"defaultValue" : "file1.txt"
},
"file2": {
"type": "file",
"defaultValue" : "file2.txt"
}
},
"outBindings" : {
"FINAL_OUTPUT" : {
"name" : "result",
"type" : "string"
}
}


}
},

}

API Netconf
The purpose of the Netconf step is to interact with a service that has a Netconf interface.
Network elements provide a Netconf interface as a configuration interface. It is used to set or
retrieve configuration data from a Netconf-capable device.

The following table shows the input bindings.

Name Type Mandatory Note

action string Yes The type of action.
n get: Request the committed configuration and device state information from the NETCONF server.
n getconfig: Request configuration data from the NETCONF server. The child tag elements <source> and <filter> specify the source and scope of data to display.
n merge: The device merges new configuration data into the existing configuration data.
n replace: The device replaces existing configuration data with the new configuration data.

username string Yes The username to authenticate.

password string with password format Yes The password to authenticate.

hostname string Yes The IP or DNS name of the service.

port number Yes The port number of the service.

inFile file Yes, if config is empty and the action is "merge" or "replace". The configuration file used.

config string Yes, if inFile is empty and the action is "merge" or "replace". The content of the configuration.

The following workflow template fragment illustrates the usage of the netconf step.

{
"inputs": {
"hostname": {
"type": "string"
},
"password": {
"type": "string",
"format": "password"
}
},
"steps": {
"stepId1": {
"nextStepId": "stepId2",
"type": "NETCONF",
"inBindings": {
"action": {
"type": "string",
"defaultValue": "merge"
},
"inFile": {
"type": "file",
"defaultValue": "netconf.content.1.xml"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step1": {


"name": "result",
"type": "string"
}
}
},
"stepId2": {
"nextStepId": "stepId3",
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {
"type": "string",
"defaultValue": "replace"
},
"inFile": {
"type": "file",
"defaultValue": "netconf.content.2.xml"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step2": {
"name": "result",
"type": "string"
}
}
},
"stepId3": {
"nextStepId": "stepId4",
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {


"type": "string",
"defaultValue": "get"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"
}
},
"outBindings": {
"out_step3": {
"name": "result",
"type": "string"
}
}
},
"stepId4": {
"type": "NETCONF",
"inBindings": {
"vim": {
"type": "vimLocation",
"exportName": "inVim"
},
"action": {
"type": "string",
"defaultValue": "getconfig"
},
"username": {
"type": "string",
"defaultValue": "admin"
},
"password": {
"type": "string",
"format": "password",
"exportName": "password"
},
"hostname": {
"type": "string",
"exportName": "hostname"
},
"port": {
"type": "number",
"defaultValue": "17830"


}
},
"outBindings": {
"out_step4": {
"name": "result",
"type": "string"
}
}
}
},
"outputs": {
"out_step1": {
"type": "string"
},
"out_step2": {
"type": "string"
},
"out_step3": {
"type": "string"
},
"out_step4": {
"type": "string"
}
},

}

Managing Workflow Execution


A workflow is executed with a given context and input parameters.

You must specify the following mandatory parameters when executing a workflow:

n Workflow ID: The identifier of the workflow to be executed.

n Context: The context in which the workflow is executed.

n Inputs: The values of the workflow inputs.

n Creator: The identifier of the user who initiated the execution.

n Attachments: The collection of attachments available for the steps.

n Pause at Steps: The steps at which the execution should be paused.

n Pause at Failure: Indicates if the workflow execution should pause in case of a failed step
execution.

The following information is automatically populated when executing a workflow:

n Workflow Execution ID: Identifier of the workflow execution.

n Retained Until: The time until which the workflow execution is retained.

n Start Time: The time at which the workflow execution started.

n End Time: The time at which the workflow execution is completed.


n Is aborted: Indicates if the execution is aborted.

n Variables: The definition of contextual information during the workflow execution. An example of a dynamic variable is the identifier of the Network Function on which the workflow is executed.

n Workflow snapshot: Snapshot of the workflow captured when creating the workflow
execution. A snapshot is created from the complete workflow and its related objects, and
the lifecycle of the workflow is associated with the existence of the workflow execution. This
ensures that even if the workflow is modified or deleted, the original workflow is available
throughout the execution of the workflow.

You can execute a workflow in the following context types:

n NONE

n NF

n NS

The behavior of the steps varies based on the selected context. The following table summarizes the differences between the three contexts; a brief usage sketch follows the table.

Auto-injected variables
n NONE: none
n Network Function: vnfId
n Network Service: nsId

Context
n NONE: none
n Network Function: tca_vnf_id (the identifier of the NF), tca_vnf_name (the name of the NF), tca_vnf_package_id (the identifier of the NF package), tca_vnf_vnfd_id (the identifier of the VNFD), tca_opex_id (the identifier of the LCM operation), tca_interface_name (the name of the interface in the VNFD)
n Network Service: tca_ns_id (the identifier of the NS), tca_ns_name (the name of the NS), tca_ns_package_id (the identifier of the NS package), tca_ns_nsd_id (the identifier of the NSD), tca_opex_id (the identifier of the LCM operation), tca_interface_name (the name of the interface in the NSD)

VIM location
n NONE: Required
n Network Function: Deduced from the NF location
n Network Service: Required
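
The following fragment is a minimal sketch of how a workflow that runs in the Network Function context can consume the auto-injected vnfId variable through double curly bracket substitution; the VRO_SSH step and the echo command are illustrative assumptions.

{
    "variables": {
        "vnfId": {
            "type": "string",
            "description": "Automatically assigned when the workflow runs on a Network Function"
        }
    },
    "steps": {
        "stepId1": {
            "type": "VRO_SSH",
            "inBindings": {
                "cmd": {
                    "type": "string",
                    "defaultValue": "echo \"running against NF {{vnfId}}\""
                },
                ...
            },
            ...
        }
    },
    ...
}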

After the initial parameters of the workflow execution are set, VMware Telco Cloud Automation
creates the initial step execution. The following diagram illustrates the structure of the step
execution.


[Figure: Structure of the step execution. The workflow execution holds a snapshot of the workflow,
and each step execution holds a snapshot of its step together with its inbindings (input
assignments), outbindings (output and variable assignments), its log, an optional follow-up
condition, and a reference to the previous step execution.]

The step execution consists of the following attributes:

n Step Execution ID: Identifier of the step execution. Identifiers form a continuous sequence of
non-negative numbers starting from zero, with each subsequent step execution receiving the next
identifier.

n Step snapshot: Snapshot of the step captured from the workflow snapshot when creating
the step execution. This snapshot ensures that the changes that may occur to steps in the
workflow snapshot do not affect the created step executions.

n Start time: The time at which the step execution is created. However, this may not be the
time at which the actual execution started.

n Aborted: Indicates whether the step execution was aborted, either because of an execution failure
or because an Abort operation was performed during the workflow execution.

n State: Current state of the step execution, which comprises the following values:

n Waiting Before Execution: Telco Cloud Automation awaits user input to execute the step.

n Executable: The step is ready to be executed.

n Executing: The step execution is in progress.

n Executed: The step execution is completed.

n Waiting After Execution: Telco Cloud Automation awaits user input after the step
execution.

n Retry: The step execution will be retried.

n Ready To Compute Next Step: The step execution is successful, and the next step to be
executed is calculated.

n Finished: All administrative actions are completed with the step execution, and the
execution becomes immutable.

n End time: Optional time at which the step execution is completed. The end time of the step
execution is set when the step reaches the executed state.


n Variables: Snapshot of the variables taken before the execution of the current step.

n Outputs: The values that the step produces. Not all of these values are used to set variables or
workflow outputs.

The current state of the step execution is illustrated in the following diagram.

[Figure: State transitions of a step execution, showing the states Create, Waiting Before Execution
(if pause at step), Executable, Executing, Executed, Waiting After Execution (if pause at step),
Ready To Compute Next Step, Retry (new step execution scheduled), Abort, and Finished, together with
the resume, abort, and schedule-next-step-execution transitions between them.]

During the execution of the workflow, log messages are created. These log messages can be
associated with the workflow execution or the step execution. The association depends on the
origin of the log. The log messages are always bound to the step execution. These log messages
consist of the following entries: message, time, and level.

A workflow execution is retained for a retention time of 14 days. After the retention time has
passed, the workflow execution and all its related objects, such as logs, attachments, and step
executions, are purged. The retention time can be extended by up to 13 days.

Run a Standalone Workflow

Prerequisites

You must create a standalone workflow and select the context in which to run the workflow from the
following options.

n Network Function (NF)

n Network Service (NS)

n None

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Catalog > Workflows.


3 Click the vertical ellipsis on a workflow that you want to run and click Execute.

4 From the Context Type drop-down, select the context to run the workflow.

Note Select None as the context type if you want to execute a workflow without any
context.

5 Click the browse icon and select an instance based on the context type selected.

6 Click OK.

7 Enter the mandatory inputs for the workflow.

8 (Optional) From the Steps to Pause drop-down, select the step in which you want to pause
the workflow. If you want to pause the workflow when it fails, select Pause on failure.

9 Click EXECUTE.

Run a Workflow Through a Network Function LCM Operation

Prerequisites

You must enter the inputs for parameterization of the LCM operation (instantiate, scale, heal,
upgrade, and terminate).

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Catalog > Network Function.

3 Click the vertical ellipsis of a network function for which you want to run a workflow through a
network function LCM operation and click Instantiate.

4 Enter the inputs in the Inventory Detail tab and click NEXT.

5 In the Network Function Properties tab, click the horizontal ellipsis of a connection point from
the Connection Point Network Mappings section.

6 Select a network from the list of networks and click OK.

7 Click NEXT.

8 In the Inputs tab, enter the pre-installation properties in Pre-Installation Properties.

9 Click Post-Installation Properties, enter the post-installation properties, and click NEXT.

10 In the Review tab, review all the information entered for the network function instance and
click INSTANTIATE.

Run an Embedded Network Function Workflow


You can run the workflows embedded in the network function packages outside an LCM
operation.


Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Network Function.

3 Click the vertical ellipsis of a network function for which you want to run an embedded network
function workflow and click Run Workflow.

4 Select the workflow you want to execute and click NEXT.

5 Enter the inputs for the workflow and click NEXT.

6 (Optional) From the Steps to Pause drop-down, select the step in which you want to pause
the workflow. If you want to pause the workflow when it fails, select Pause on failure.

7 Click EXECUTE.

View all Workflow Executions


You can view the details of any workflow from the list of workflows on the Workflows page.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the expand (>) icon of a workflow for which you want to view the details.

View Workflow Step Execution Details


You can view the workflow step execution details of any workflow from the list of workflows on
the Workflows page.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the expand (>) icon of a workflow for which you want to view the details.

4 Click the expand (>) icon of a workflow step in Workflow Steps to view the workflow step
execution details of that step.

View Workflow Executions Running on a Network Function


You can view the workflow that runs on a network function.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Network Function.

3 Click a network function for which you want to view the workflow execution details.


4 Click the Workflows tab to see all the workflows run on that network function.

View Workflow Executions Running on a Network Service


You can view the workflow that runs on a network service.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Network Service.

3 Click a network service for which you want to view the workflow execution details.

4 Click the Workflows tab to see all the workflows run on that network service.

View Workflow Executions Running on vRO


The workflows implemented by vRO are available in the vRO product. vRO provides detailed
information about the workflows that are visible in VMware Telco Cloud Automation. It is important
to check the vRO workflow logs if a workflow fails. When you select a workflow step from
a workflow in VMware Telco Cloud Automation, you can navigate to the vRO workflow that
corresponds to that step.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the expand (>) icon of a workflow for which you want to view the vRO instances.

4 In the workflow steps, click the icon below Action.

5 Click Open Session.

6 Click View Instance.

Note You must have system administrator privileges to execute the above procedure; otherwise, you
must log in to vRO manually.

Debug Workflow Executions


When creating a workflow, you can specify the step where the workflow needs to stop. You can
edit the steps in a workflow only if the workflow execution pauses.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the expand (>) icon of a paused workflow to debug it.

4 Click the expand (>) icon of the Workflow Steps.


5 In the paused step, click the edit icon below Action.

6 Edit the workflow step to debug it.

7 Click PATCH WORKFLOW.

8 Click the vertical ellipsis on the workflow that you have debugged and click Resume.

9 Click RESUME.

Update Original Workflow Based on Workflow Execution Changes


You can update the original workflow based on the changes in the workflow execution.

Note You can update the original workflow for standalone workflows only. The network function
and network service workflows cannot be updated.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the vertical ellipsis of a workflow for which you want to update the changes.

4 Click Update Workflow.

5 In the Confirm Update popup, click UPDATE.

Delete a Workflow Execution


You can delete a workflow if you do not need it.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the vertical ellipsis of a workflow that you want to delete.

4 Click Delete.

5 In the Confirm Delete popup, click DELETE.

End a Workflow Execution


You can end a workflow execution irrespective of the workflow type.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the vertical ellipsis of a workflow execution that you want to end.

4 Click Abort.


5 In the Confirm Abort popup, click ABORT.

Extend Retention Time for Workflow


You can extend the retention duration for a workflow.

Note The initial retention is fixed at 14 days and is not configurable. However, you can extend
the retention duration for a workflow execution instance by using the VMware Telco Cloud
Automation portal.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Click Inventory > Workflows.

3 Click the vertical ellipsis of a workflow for which you want to extend the retention time.

4 Click Retention Time.

5 Select the retention time from the Retention Time (days) drop-down.

6 Click UPDATE.

Role-based access control for workflows


Role-based access control (RBAC) is a method of regulating access to computer or network
resources based on the roles of individual users within your organization. Telco Cloud
Automation defines three privileges to access the workflows.

n Workflow Read: The user can view the workflow instances using this privilege.

n Workflow Design: The user can design a workflow using this privilege.

n Workflow Execute: The user can execute a workflow using this privilege.

Note If the context of the workflow execution is not "none" then the user needs NF/NS
LCM permission to run workflows on the concrete NF/NS instance. The user may need more
permissions based on the step input values of certain steps.

Telco Cloud Automation has two built-in default roles. They are:

n Workflow Designer: The user can read and design the workflows using this role.

n Workflow Executor: The user can read and execute the workflows using this role.

Uses of Role-Based Access Control for Workflows

n The users can design or execute any workflow using the built-in roles.

n You can restrict the roles of a user with advanced filters.

n You can use the name of the workflow to limit the permission to only access a specified set of
workflows.


n You can always access the workflows you have created.

n If you want to prevent a user from accessing all workflow instances, then you can define an
advanced filter with a random name as a designator.

n You can grant access to a user to an embedded workflow instance with the workflow read
privilege, or the user can inherit it from having access to the catalog entry. This means that
if the user can access a catalog entry, it gives the user implicit access to all the workflows
embedded in a catalog.

n The workflow execution creator and system administrator can access the workflows.



23 Updating NETCONF Protocol Using VMware Telco Cloud Automation
Network Configuration Protocol (NETCONF) is a network management protocol developed and
standardized by the Internet Engineering Task Force (IETF). By using specific workflows and by
uploading the required configuration files to VMware Telco Cloud Automation, you can apply
certain configuration changes to your NETCONF environment.

Supported NETCONF Operations


The following NETCONF operations are supported for VMware Telco Cloud Automation version
1.8:

n get

n getconfig

n edit-config

n merge

n replace

Prerequisites
Unlike other workflows in VMware Telco Cloud Automation that use VMware vRealize
Orchestrator, the NETCONF workflow runs from the NETCONF client that is located within the
Telco Cloud Automation Control Plane (TCA-CP) appliance. From a connectivity and firewall
perspective, ensure that TCA-CP has access to the NETCONF server IP address and port before
running the workflow.
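For example, a quick way to confirm TCP reachability from the TCA-CP appliance to the NETCONF
server before running the workflow is shown below. This check is illustrative only; it assumes
shell access to the TCA-CP appliance and the availability of the nc (netcat) utility, and it uses
the example port 17830 from the workflows in this chapter (the standard NETCONF port is 830).

nc -zv <netconf-server-ip> 17830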

getconfig Workflow Example


You can provide the following possible inputs:

n NETCONF server IP.

n User name for logging in to the NETCONF server.

n Password for logging in to the NETCONF server.

n Port on which the NETCONF server runs.


n If you change the action to get, VMware Telco Cloud Automation runs the get command on
the NETCONF server.

{
"id":"netconf_getconfig_workflow",
"name": "Netconf Get-Config Workflow",
"description":"Netconf Get-Config Workflow",
"version":"1.0",
"startStep":"step0",
"variables": [
{"name":"vnfId", "type": "string"}
],
"input": [
{"name": "USER", "description": "Username", "type": "string"},
{"name": "PWD", "description": "Password", "type": "password"},
{"name": "HOSTNAME", "description": "Hostname", "type": "string"}
],
"output": [
{"name":"result", "description": "Output Result", "type": "string"}
],
"steps":[
{
"stepId":"step0",
"workflow":"NETCONF_WORKFLOW",
"namespace": "nfv",
"type":"task",
"description": "Netconf Get-Config Workflow",
"inBinding":[
{"name":"action", "type":"string", "default" : "getconfig"},
{"name": "username", "type": "string", "exportName": "USER"},
{"name": "password", "type": "password", "exportName": "PWD"},
{"name": "port", "type": "number", "default": "17830"},
{"name": "hostname", "type": "string", "exportName": "HOSTNAME"}
],
"outBinding": [
{"name": "result", "type": "string", "exportName": "result"}
],
"nextStep":"END"
}
]
}

merge Workflow Example


You can provide the following possible inputs:

n NETCONF server IP.

n User name for logging in to the NETCONF server.

n Password for logging in to the NETCONF server.

n Port on which the NETCONF server runs.


n XML file containing the configuration to be applied on the NETCONF server.

n If you change the action to replace, VMware Telco Cloud Automation runs the edit-config
command with the replace option.

{
"id":"netconf_merge_workflow",
"name": "Netconf Merge Workflow",
"description":"Netconf Merge Workflow",
"version":"1.0",
"startStep":"step0",
"variables": [
{"name":"vnfId", "type": "string"}
],
"input": [
{"name": "USER", "description": "Username", "type": "string"},
{"name": "FILENAME", "description": "Filename", "type": "file"},
{"name": "PWD", "description": "Password", "type": "password"},
{"name": "HOSTNAME", "description": "Hostname", "type": "string"}
],
"output": [
{"name":"result", "description": "Output Result", "type": "string"}
],
"steps":[
{
"stepId":"step0",
"workflow":"NETCONF_WORKFLOW",
"namespace": "nfv",
"type":"task",
"description": "Netconf Merge Workflow",
"inBinding":[
{"name":"action", "type":"string", "default" : "merge"},
{"name": "inFile", "type": "file", "exportName": "FILENAME"},
{"name": "username", "type": "string", "exportName": "USER"},
{"name": "password", "type": "password", "exportName": "PWD"},
{"name": "port", "type": "number", "default": "17830"},
{"name": "hostname", "type": "string", "exportName": "HOSTNAME"}
],
"outBinding": [
{"name": "result", "type": "string", "exportName": "result"}
],
"nextStep":"END"
}
]
}



24 Monitoring Performance and Managing Faults
You can monitor the network functions to track their performance and perform actions based on
their CPU utilization and other parameters.

This chapter includes the following topics:

n Managing Alarms

n Performance Management Reports

n Monitor Instantiated Virtual Network Functions and Virtual Deployment Units

n Monitor Instantiated CNF

n Monitor Instantiated Network Services

Managing Alarms
The Dashboard tab displays the total number of alarms triggered. It also displays the number of
alarms according to their severity.

VNF Alarms
VNF alarms are triggered when VMware Telco Cloud Automation identifies anomalies in the
network connection status or when the power state changes. VMware Telco Cloud Automation
also triggers VNF alarms that are predefined and user-defined in VMware vSphere.

CNF Alarms
CNF triggers alarms for system level and service level anomalies. For example, system level
alarms are triggered when an image or resource is not available, or when a pod becomes
unavailable. Service level alarms are triggered when the number of replicas that you have
specified is not identical to the number of nodes that get created, and so on. The following are
some possible anomalies for which VMware Telco Cloud Automation displays an error message and
triggers an alarm. These alarms are in the Critical state:

n Image pull error - The URL to the Helm Chart image is incorrect or the image cannot be
accessed due to network issues.

n Crash loop backoff - The application fails to load.


n Progress deadline exceeded - Kubernetes controller exceeds the maximum number of tries to recover
a crashed application.

n Failed create - Kubernetes controller fails to create or schedule a Kubernetes Pod.

n Resource failed - Kubernetes controller fails to create the resources.

VIM Alarms
VIM alarms are triggered at the VIM level for CNF infrastructure anomalies. For example, when
a Kubernetes cluster reaches its memory or CPU resource limit, its corresponding VIM triggers
an alarm. Here are some possible CNF infrastructure anomalies for which alarms are triggered.
These alarms are in the Warning state:

n Network unavailable - Worker node is unable to reach the network.

n PID pressure - Worker node encounters Process ID (PID) limitations.

n Disk pressure - Worker node is running out of disk space.

n Memory pressure - Worker node is running out of memory.

Viewing and Acknowledging Alarms


Alarms are triggered at four levels:

n CNF/VNF level - To view the alarms of individual CNFs and VNF instances, go to the
Inventory tab, click a VNF or CNF instance, and click Alarms.

n Network Service level - VNF and CNF alarms are listed at the corresponding Network Service
level.

n VDU level - For a VNF, the alarms are also listed at the corresponding VDU level.

n Global level - You can view the global alarms for all entities and users from the
Administration > Alarms tab.

To view and acknowledge alarms, perform the following steps:

1 Go to Administration > Alarms. Details of the alarm such as the alarm name, its associated
entity, its associated managed object, alarm severity, alarm triggered time, description, and
state are displayed.

2 To acknowledge a triggered alarm, select the alarm and click Acknowledge. When the
acknowledgment is successful, the state of the alarm changes to Acknowledged. To
acknowledge multiple alarms together, select the alarms that you want to acknowledge and
click Acknowledge.

By default, the list refreshes every 120 seconds. To get the current state of the alarms, click
Refresh.


Performance Management Reports


Performance management reports are useful to monitor the behavior of the network. You can
generate performance management reports for a VNF or a CNF instance.

VNF Reports
You can generate reports for performance metrics such as Mean CPU Usage and Mean Memory
Usage for each VNF. Set the frequency of report collection, end date and time, and the
performance metrics that you want to generate reports for.

You can collect the following performance metrics:

n Mean CPU Usage

n Disk Read

n Disk Write

n Mean Memory Usage

n Number of Incoming Bytes

n Number of Outgoing Bytes

n Number of Incoming Packets

n Number of Outgoing Packets

The performance management report includes stats collected at the VNF and VDU levels for a
VNF instance.

CNF Reports
For this release, you can generate only the Mean CPU Usage and Mean Memory Usage
performance metrics reports.

Note To generate performance management reports for CNFs, you must install Prometheus
Operator on the namespace vmware-paas and set the default port to 9090.
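As an illustrative sketch only, one common way to install a Prometheus Operator stack is the
community kube-prometheus-stack Helm chart shown below. The chart, repository URL, and release name
are assumptions and not a VMware-validated procedure; your release may require a specific Prometheus
deployment, so verify the supported method for your environment before using this approach.

kubectl create namespace vmware-paas
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack -n vmware-paas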

Scheduling Performance Management Reports


Create and schedule a performance management job report for a VNF or a CNF.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Network Functions > Inventory.

3 Click the desired CNF or VNF, and from the details page click the PM Reports tab.

4 Click Schedule Reports.


5 In the Create Performance Management Job Report window, enter the following details:

n Provide a name for the report.

n Select the collection period time, reporting frequency in hours and minutes, reporting end
date and time.

Note The minimum reporting frequency is 5 minutes.

n Select the performance metrics data to collect.

6 Click Schedule Reports.

The report is scheduled and is available under PM Reports in the details page. It stays active
from the current time stamp until the provided end time.

7 To download the generated report, click the More (>) icon against your report name and click
Download.

The report is downloaded to your system in the CSV format.

Note You can only download those reports that are in the Available state. The generated
reports are available for download for 7 days.


Monitor Instantiated Virtual Network Functions and Virtual Deployment Units
After you instantiate a Virtual Network Function (VNF), you can monitor its performance metrics
and take corrective actions.

Prerequisites

Note This procedure is not supported for network functions that are imported from partner
systems.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Network Functions > Inventory.

3 Click the desired network function to monitor.

The network function topology is displayed.

4 Perform the desired monitoring or management actions:

n To view more details of a Virtual Deployment Unit (VDU) such as alerts, status, name,
memory, vCPU, and storage, click the i icon.

n To view more information about the virtual link, point at the blue square icon on the VDU.

n To view detailed information about the VDU and the VNFs, their performance data, alarms, and
reports, click the ⋮ icon on the desired VDU and click Summary. The details page provides the
following tabs:

n Summary - Provides a detailed summary of the VDU.


n Alarms - Lists the alarms generated for the VDUs of the selected VNFs. You can
acknowledge alarms from here.

n Performance Monitoring - Provides a graphical view of the performance metrics for CPU, Network,
Memory, and Virtual Disk. For example, to view more information about the CPU performance, click
CPU. For an overview of all metrics, click Overview. The performance metrics captured here are
live with an interval of 1 hour.

n Reports - To set parameters for generating performance reports, click Schedule Reports. You can
generate historic reports for a metric group, set the collection period, reporting period, and the
reporting end date.

n To view historical tasks for a desired network function, go to Network Functions >
Inventory and click the desired network function. The Tasks tab displays the historical
tasks and their status.

Monitor Instantiated CNF


After you instantiate a Containerized Network Function (CNF), you can monitor its performance
metrics and take corrective actions.

Prerequisites

Note This procedure is not supported for network functions that are imported from partner
systems.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Network Functions > Inventory.

3 Click the desired CNF to monitor.

The following tabs are displayed:


n Inventory - Displays the summary of the status of the pods, deployments, and services.
To display the tree view, click the tree icon below the Inventory tab.

n Tasks - Displays historical tasks for the CNF.

n Alarms - Lists the alarms generated for the selected CNFs. You can acknowledge alarms
from here.

n PM Reports - Displays the list of performance reports that are being collected. To
set parameters for generating performance reports, click Generate Reports. You can
generate reports for a metric group, set the collection period, reporting period, and the
reporting end date.

n Init Params - Displays the input parameters for the CNF.


Monitor Instantiated Network Services


After you instantiate a network service, you can view its topology and task information from the
VMware Telco Cloud Automation web interface.

Prerequisites

Instantiate the network service.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Select Network Services > Inventory.

3 Click the desired network service to monitor.

The network service topology is displayed.

4 Perform the desired monitoring or management actions:

n To view and acknowledge the consolidated alarms of all the VNFs and CNFs that belong
to the network service, click the Alarms tab. You can also view the alarms from the
Topology tab. Click the More (...) icon on the desired network service and select Alarms.

n View historical tasks for the selected network service from the Tasks tab.



25 Administrating VMware Telco Cloud Automation
In the Administration tab, you can perform system updates, view and acknowledge alarms, and
manage tags. You can also view logs and download them for auditing and troubleshooting.

This chapter includes the following topics:

n Managing RBAC Tags

n Viewing Audit Logs

n Troubleshooting and Support

n Administrator Configurations

n License Consumption

n Managing vCenter Certificate Changes

Managing RBAC Tags


You can create, edit, and delete RBAC tags from the VMware Telco Cloud Automation UI.

Create a Tag
Create a tag and associate a key-value pair and objects to it.

Prerequisites

To perform this operation, you require the Tag Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Tags and click Add.

3 In the Add Tag wizard:

a Key - Enter a tag key.

b Values - Enter a tag value.


c Description - Enter a description of the tag.

d Associable Objects - Select the associable objects:

n Network Function Catalog

n Network Service Catalog

n Network Function Instance

n Network Service Instance

n Virtual Infrastructure

4 Click Add.

Results

You have successfully created a tag.

Edit a Tag
You can edit a tag to add or delete key values and select or deselect associable objects.

Prerequisites

To perform this operation, you require the Tag Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Tags.

3 Select the tag to edit and click Edit.

4 In the Edit Tag wizard:

a Values - Add or delete a tag value.

b Description - Edit the tag description.

c Associable Objects - Select or deselect the associable objects.

5 Click Update.

Results

You have successfully updated the tag.

Delete a Tag
You can delete a tag and remove it from the Tags list.

Prerequisites

To perform this operation, you require the Tag Admin privilege.


Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Tags.

3 Select the tag and click Delete.

4 Click Delete in the confirm operation pop-up window.

Results

You have successfully deleted the tag.

Transform Object Tags


You can transform the string format tags that were created in the older versions of VMware
Telco Cloud Automation to key-value pair tags.

Prerequisites

To perform this operation, you require the Tag Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Tags.

3 Select the tag and click Transform Object Tags.

4 Click Transform.

5 Click Update.

Results

You have successfully transformed the tag to a key-value pair tag.

Viewing Audit Logs


If an error occurs, you can review the logs and take corrective actions on your deployment. Or,
you can download the logs for auditing purposes.

To view or download audit logs, go to Administration > Audit Logs.

Log entries from the specified time period are displayed in the table. You can click Download
Audit Logs to download a copy of the displayed logs to your local machine.

Troubleshooting and Support


If VMware Telco Cloud Automation does not operate as expected, you can create a support
bundle that includes logs and database files for analysis.


Go to Administration > Troubleshooting, select the logs and click Request to generate a support
bundle.

If you intend to contact VMware support, go to Administration > Support and copy the support
information to your clipboard. This information is required in addition to the support bundle.

Note To enable quick and hassle-free troubleshooting, the Auto Approval option for collecting
VMware Telco Cloud Automation logs is enabled by default. To provide the logs manually, deactivate
this option on each appliance.

Administrator Configurations
You can do the following:

n Create login banners

n Configure Kubernetes policies

n Create Kernel versions

Create Login Banners


You can create the login message and enable the consent button on the login screen.

Prerequisites

To perform this operation, you'll need the Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Configurations.

The Login Banner page displays.

3 Click Edit.

4 On the Login Banner page:

a Status - To enable or disable the login banner, click the button corresponding to Status.
When you enable the Status, the Telco Cloud Automation displays the login banner at the
login screen.

b Checkbox Consent - To enable or disable the consent check box, click the button
corresponding to Checkbox Consent. When you enable the Checkbox Consent, the user
must agree to the message before login to the Telco Cloud Automation.

c Title - Title of the consent message. The maximum allowed length of the title message is
48 characters.


d Message - The detailed message for the consent. To view the complete message on the login screen,
you can click the title. The maximum allowed length of the message is 2048 characters.

5 Click Save.

Results

You have successfully created the login banner.

Kubernetes Policy Configurations


The hierarchical model of the isolation modes (restrictions) for TCA global, VIMs, and CNFs allows
you to migrate your existing VIMs and CNFs to a secure mode and enforce the restrictions on the new
clusters or CNFs that you create.

The hierarchical model of the isolation modes:

n Global default isolation mode: Sets the default isolation mode for Kubernetes clusters.

n VIM level default isolation: Inherits global isolation mode, and this is the isolation mode
of CNFs deployed into the cluster. However, you can edit the settings by navigating
to Infrastructure > Virtual Infrastructure and then clicking the Options (three dots)
corresponding to the cloud instance.

n CNF level isolation mode: Inherits VIM isolation mode and you can edit the settings. The
settings that you modify apply to the next CNF operation.

Prerequisites

To perform this operation, you'll need the Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Configurations > Policy Configurations.

3 From the Default Isolation Mode drop-down list, select one of the isolation modes to be
applied to the Kubernetes clusters:

n Permissive: No restriction is applied during LCM operations or proxy remote accesses.

n Restricted: Each Network Function has access to its namespace, and no access is granted
to any other namespace or cluster-level resources.

Note By default, the Kubernetes VIMs are in permissive mode, and no cluster-level
privilege separation is enforced. To enable restricted policies, you must set the isolation
mode to Restricted.

4 Click Update.


Add Kernel Versions


You can add the kernel and DPDK versions to be used in the CNF infrastructure requirement
designer.

Prerequisites

To perform this operation, you'll need the Admin privilege.

Procedure

1 Log in to the VMware Telco Cloud Automation user interface.

2 Go to Administration > Configurations.

3 Click Infrastructure Requirements.

4 To add the kernel version, click ADD KERNEL VERSION and provide the following
information:

n Name: Select the kernel name.

n Version: Enter the kernel version.

5 Click Add.

6 To add the DPDK version, click ADD SUPPORTED DPDK and provide the following
information:

n Kernel: Select the kernel.

n Version: Enter the DPDK version.

n Repository FQDN: Enter the FQDN of the repository where the DPDK resides.

n Repository Path: Enter the path of the respective DPDK.

7 Click Add.

License Consumption
VMware Telco Cloud Automation sends out an alert when license usage crosses 90% of the available
licenses. However, user operations on Network Functions and Network Services are not blocked.

CPU license usage is calculated based on the number of managed vCPUs per VIM. The
transformation factor used for calculating CPU license usage is 12 vCPUs = 1 CPU Package
License.
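For example, under this transformation factor, a VIM that manages 96 vCPUs consumes 96 / 12 = 8 CPU
Package Licenses.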
To view the details about the number of available and used licenses, perform the following steps:

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 From the left navigation pane, go to Administration > Licensing.


Results

The Licensing page displays the CPUs available and utilized per VIM.

Managing vCenter Certificate Changes


The clusters, node pools, and NF operations require a trusted connection to vCenter through a
certificate for the TCA operations to function as expected. Therefore, you must update the vCenter
certificate in the VMware Telco Cloud Automation appliance.

Re-import the vCenter certificate for TCA-M/TCA-CP


As a first step to managing your vCenter certificate changes, you must reimport the self-signed
vCenter certificate.

Procedure

1 Log in to the VMware Telco Cloud Automation Appliance Manager of the TCA-M or TCA-CP appliance at
port 9443 (for example, https://tca-m:9443 or https://tca-cp:9443).

2 Click Certificate > Trusted CA Certificate > IMPORT.

3 Select the trusted certificate type that you want to import and do one of the following:

n Browse and select the file to import.

n Type the URL of the certificate.

n Paste the certificate file content.

4 Click Apply.

Update the Thumbprint of vCenter


You must update the thumbprint of vCenter in TCA and TKG.

Procedure

1 Obtain the latest vCenter Thumbprint.

For more information on obtaining the thumbprint, see the section Obtain vSphere
Certificate Thumbprints in Prepare to Deploy Management Clusters to vSphere.
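For example, one common way to retrieve the SHA-1 thumbprint from a machine that can reach vCenter
is the openssl command shown below. This is an illustrative sketch only; substitute your vCenter
FQDN or IP address, and prefer the referenced procedure if your environment mandates it.

echo | openssl s_client -connect <vcenter-fqdn>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1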
2 Log in to the Telco Cloud Automation Control Plane appliance attached to vCenter as an
admin user through the SSH client.

3 Run the following command and note the CR name and namespace:

kubectl get VCenterPrime -A

4 Edit the file with the following command:

kubectl edit VCenterPrime <cr_name> -n <namespace>


5 Edit the Thumbprint field and enter the correct thumbprint.

To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.

6 For each management cluster, perform the following:

a SSH into the management cluster control plane virtual IP with the user name capv.

b Run the following command and note the CR name and namespace:

kubectl get VCenterPrime -A

c Edit the file with the following command:

kubectl edit VCenterPrime <cr_name> -n <namespace>

d Edit the Thumbprint field and enter the correct thumbprint.

To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.

Managing the Thumbprint Changes of a Secondary Cloud

If the thumbprint of a secondary cloud changes, perform the following:

Prerequisites

You must use an SSH client to log in to the TCA appliance.

Procedure

1 Run the following command and note the CR name and namespace.

kubectl get VCenterSub -A

2 Edit the file with the following command:

kubectl edit VCenterSub <cr_name> -n <namespace>

3 Edit the Thumbprint field and enter the correct thumbprint.

To edit the thumbprint, see Ensure that the vCenter IP and Thumbprint are Accurate.

Ensure that the vCenter IP and Thumbprint are Accurate

Ensure that you have updated the correct vCenter IP and thumbprint by performing the
following:

n Verify the server address field and ensure that the vCenter IP is correct.


n Verify the thumbprint field and edit the value with the latest thumbprint that you have already
obtained.

apiVersion: telco.vmware.com/v1alpha1
kind: VCenterPrime
metadata:
name: vcprime-mgmt-cluster07
namespace: tca-system
spec:
server:
address: 10.10.10.10
credentialRef:
kind: Secret
name: vcprime-mgmt-cluster07-secret
namespace: tca-system
subConfig:
datacenter: tcpscale-VMCCloudDC
thumbprint: FA:3A:8E:E1:B3:23:DR:FE:F3:6E:19:BB:FE:01:D1:18:E8:24:88:F

Update the TLS Thumbprint for TCA and TKG Management Clusters

If the vCenter certificate of a secondary cloud changes, perform the following steps to update the
TLS thumbprint.

Procedure

1 SSH to the management cluster control plane virtual IP with the user name capv and update
{mgmt-cluster-name}-vsphere-cpi-addon secret.

kubectl get secret -A | grep cpi-addon

2 Save the original CPI vSphere configuration to a temporary file.

kubectl get secret -n tkg-system {mgmt-cluster-name}-vsphere-cpi-addon -o jsonpath='{.data.vsphereconf-custom\.lib\.txt}' | base64 -d > /tmp/vsphereconf.txt

3 Update the CPI vSphere configuration with the thumbprint of the temporary file.

Following is the sample vSphere configuration:

[root@tca /home/admin]# vim /tmp/vsphereconf.txt

(@ def vsphere_conf(): -@)
[Global]
user = "[email protected]"
password = "Admin!23"
port = "443"
datacenters = "os-test-dc, cellsite-dc"
[VirtualCenter "10.185.11.97"]
datacenters = "os-test-dc"
thumbprint = "13:C1:98:E9:E2:DF:A9:5A:95:EC:6A:96:FA:8D:DE:CF:56:6C:D3:1C"
ip-family = "ipv4"
[VirtualCenter "sc2-10-10-10-130.eng.vmware.com"]
datacenters = "cellsite-dc"
thumbprint = "FD:89:0D:8D:B6:A6:FA:CB:E2:B7:15:GF:D3:F0:47:EB:7C:E3:96:70"
ip-family = "ipv4"
[Workspace]
server = "10.10.10.99"
datacenter = "test-dc"
thumbprint = "13:C1:98:D9:E2:DF:A9:6A:95:4C:6A:96:EA:8D:FE:CF:56:6C:D3:1C"
ip-family = "ipv4"

Note You must update the thumbprint value.

4 Encode the CPI vSphere configuration with the new thumbprint.

export encoded_vsphereconf_content=`base64 -w 0 /tmp/vsphereconf.txt`

5 Update the secret {mgmt-cluster-name}-vsphere-cpi-addon in tkg-system namespace in the
management cluster and wait for the Kapp reconciliation. After the reconciliation, the
vsphere-cloud-config configmap in kube-system namespace is updated.

kubectl patch secret {mgmt-cluster-name}-vsphere-cpi-addon -n tkg-system -p '{"data": {"vsphereconf-custom.lib.txt":"'${encoded_vsphereconf_content}'"}}'

Verify that the configmap is updated using the following command:

kubectl -n kube-system get cm vsphere-cloud-config -o yaml

6 Restart the vsphere-cloud-controller-manager pod to mount the new configmap.

kubectl rollout restart ds/vsphere-cloud-controller-manager -n kube-system

Update the TLS Thumbprint for Workload Clusters


vSphere TLS Thumbprint is included in the vspherecluster custom resource and should also be
updated in the workload cluster.

Procedure

1 SSH to the management cluster control plane virtual IP with the user name capv.

2 List all the vSphere clusters including the management cluster.

kubectl get vsphereclusters -A

NAMESPACE    NAME                AGE
default      tkg-test-workload   62d
default      tkg-wld             83d
tkg-system   tkg-mgmt-cluster    83d

3 Edit each of the vSphere clusters using the following command and update the
spec.thumbprint field with the correct thumbprint.

kubectl edit vsphereclusters tkg-test-workload

4 Verify if the update is completed using the following command:

kubectl get vsphereclusters tkg-test-workload -o yaml

For management clusters, add the tkg-system namespace to the kubectl commands:

kubectl edit vsphereclusters -n tkg-system tkg-mgmt-cluster



26 Upgrading VMware Telco Cloud Automation
When a new version of VMware Telco Cloud Automation or VMware Telco Cloud Automation
Control Plane (TCA-CP) becomes available, you can update your deployment from the web
interface.

This chapter includes the following topics:

n Upgrade VMware Telco Cloud Automation Using the Upgrade Bundle

Upgrade VMware Telco Cloud Automation Using the Upgrade Bundle
You can upgrade the VMware Telco Cloud Automation or VMware Telco Cloud Automation
Control Plane appliances using the upgrade bundle.

Note The option to upgrade VMware Telco Cloud Automation using the upgrade bundle is only
available for VM-based VMware Telco Cloud Automation.

Procedure

1 Download the VMware Telco Cloud Automation upgrade bundle from VMware Customer
Connect.

2 Save the upgrade bundle in a jump host and ensure that the jump host can access the
appliance to be upgraded.

3 Log in to the Appliance Management interface through FQDN. For example, https://tca-cp-
ip-or-fqdn:9443.

4 Click Administration > Upgrade.

On the Upgrade page, details about the current installed version, upgrade date, and upgrade
state are displayed.

5 Click Upgrade.

6 Click Choose File and upload the upgrade bundle.

7 Click Continue.


Results

The appliance upgrades to a newer version.



27 Upgrading Cloud-Native VMware Telco Cloud Automation
Procedure to upgrade the cloud-native deployment of VMware Telco Cloud Automation

You can upgrade the cloud-native deployment of VMware Telco Cloud Automation in both the
internet enabled and airgapped environments. You can also perform the basic troubleshooting
related to the upgrade.

This chapter includes the following topics:

n Upgrade Cloud-Native VMware Telco Cloud Automation with Internet Access

n Upgrade Cloud-Native VMware Telco Cloud Automation in an Airgapped Environment

n Cloud-Native VMware Telco Cloud Automation Upgrade Troubleshooting

Upgrade Cloud-Native VMware Telco Cloud Automation with Internet Access
For every release, VMware Telco Cloud Automation provides an upgrade BOM file in the VMware
Customer Connect site.

Prerequisites

n Ensure that VMware Telco Cloud Automation is in a healthy condition by verifying that all services
are running in the appliance summary of the Platform Manager.

n Download VMware-Telco-Cloud-Automation-upgrade-files.tar.gz from customerconnect.vmware.com. This
portal contains the upgrade BOM file required for upgrading Cloud-Native VMware Telco Cloud
Automation.

n Back up your VMware Telco Cloud Automation appliance.


n Ensure that you have connectivity to the JFrog repository.

Note
n You cannot schedule VMware Telco Cloud Automation Control Plane upgrade from a
VMware Telco Cloud Automation Manager appliance that is in HA mode.

n The upgrade process can take up to 45 mins.

n During the upgrade process, you cannot use VMware Telco Cloud Automation for any
operations.

n Upgrade the VMware Telco Cloud Automation Manager before upgrading the VMware Telco
Cloud Automation Control Plane.

Note Upgrading cloud native TCA from an older version to TCA version 2.2 is not supported.

Procedure

1 Extract the upgrade BOMs from the downloaded VMware-Telco-Cloud-Automation-upgrade-files.tar.gz file.

This file contains two upgrade BOMs:

n VMware-Telco-Cloud-Automation-Mgr-upgrade-bom for VMware Telco Cloud Automation Manager

n VMware-Telco-Cloud-Automation-CP-upgrade-bom for VMware Telco Cloud Automation Control Plane.

2 Log in to the VMware Telco Cloud Automation web interface.

3 Go to Administration > System Updates.

4 From the Apply Service Update column, click Select File.

5 Browse and select the upgrade BOM file.

6 To initiate the upgrade, click Upgrade.

Results

The upgrade begins. To monitor the upgrade progress, view the Status column.

What to do next

After the successful upgrade, modify the VMware Tanzu Kubernetes Grid image using
Infrastructure Automation. For details, see Add Images or OVF.

Upgrade Cloud-Native VMware Telco Cloud Automation in an Airgapped Environment
Upgrade your cloud-native VMware Telco Cloud Automation that is deployed in an airgapped
environment.


Prerequisites

n Ensure that VMware Telco Cloud Automation is in a healthy condition by verifying that all services
are running in the appliance summary of the Platform Manager.

n Back up your VMware Telco Cloud Automation appliance.

n Download VMware-Telco-Cloud-Automation-upgrade-files.tar.gz from customerconnect.vmware.com. This
portal contains the upgrade BOM file required for upgrading Cloud-Native VMware Telco Cloud
Automation.

n To upgrade VMware Telco Cloud Automation in an airgapped environment, it must have been
deployed using an Airgap Server and the FQDN of the Airgap server must not change after
deployment.

n Before starting the upgrade, ensure that the Airgap server is running the same version
to which you want to upgrade VMware Telco Cloud Automation. For example, if you are
upgrading VMware Telco Cloud Automation to version 2.1.0, then the Airgap server must also
be running version 2.1.0.

n If the Airgap server is using a self-signed certificate or private CA-signed certificate, then
the certificate must be the same as the one used while deploying VMware Telco Cloud
Automation.

Note Upgrading cloud native TCA from an older version to TCA version 2.2 is not supported.

Upgrade Procedure for 2.1 and Later Versions


Use the steps listed in this section for upgrading VMware Telco Cloud Automation 2.1 that is
deployed in an airgapped environment to a later version.

Procedure

1 Extract the upgrade BOMs from the downloaded VMware-Telco-Cloud-Automation-upgrade-files.tar.gz file.

This file contains two upgrade BOMs:

n VMware-Telco-Cloud-Automation-Mgr-upgrade-bom for VMware Telco Cloud Automation Manager

n VMware-Telco-Cloud-Automation-CP-upgrade-bom for VMware Telco Cloud Automation Control Plane.

2 Log in to the VMware Telco Cloud Automation web interface.

3 Go to Administration > System Updates.

4 From the Apply Service Update column, click Select File.

5 Browse and select the corresponding upgrade BOM file of the newer release to which you
want to upgrade.

6 Provide the Airgap Server FQDN and then click Upgrade to initiate the upgrade.


Results

The upgrade begins. To monitor the upgrade progress, view the Status column.

The upgrade process can take up to 45 minutes. During the upgrade process, you cannot use VMware
Telco Cloud Automation for any operations. Also, upgrade the VMware Telco Cloud Automation Manager
before upgrading the VMware Telco Cloud Automation Control Plane.

Upgrade Procedure for 2.0.0 and 2.0.1


If you are running VMware Telco Cloud Automation 2.0.0 or 2.0.1 in an airgapped environment,
edit the upgrade BOM files manually before starting the upgrade. This step is required because
you cannot provide the Airgap server FQDN using the VMware Telco Cloud Automation Upgrade
UI.

Note
n The upgrade process can take up to 45 mins.

n During the upgrade process, you cannot use VMware Telco Cloud Automation for any
operations.

n Upgrade the VMware Telco Cloud Automation Manager before upgrading the VMware Telco
Cloud Automation Control Plane.

Procedure

1 Extract the upgrade BOMs from the downloaded VMware-Telco-Cloud-Automation-upgrade-files.tar.gz file.

This file contains two upgrade BOMs:

n VMware-Telco-Cloud-Automation-Mgr-upgrade-bom for VMware Telco Cloud Automation Manager

n VMware-Telco-Cloud-Automation-CP-upgrade-bom for VMware Telco Cloud Automation Control Plane.

2 Open both the VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation Control
Plane upgrade BOMs using any text editor.

3 Find https://vmwaresaas.jfrog.io/artifactory/helm-registry and replace it with https://<airgapServerFqdn>/chartrepo/registry.

4 Find all occurrences of vmwaresaas.jfrog.io/registry and replace with <airgapServerFqdn>/registry.

5 Save the upgrade BOM files.
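If you prefer to script the replacement described in steps 3 through 5, the sed commands below are
an illustrative sketch. The BOM file names are placeholders because the exact names and extensions
in your download may differ; sed -i edits the files in place, so no separate save step is needed.

sed -i 's#https://vmwaresaas.jfrog.io/artifactory/helm-registry#https://<airgapServerFqdn>/chartrepo/registry#g' <Mgr-upgrade-bom-file> <CP-upgrade-bom-file>
sed -i 's#vmwaresaas.jfrog.io/registry#<airgapServerFqdn>/registry#g' <Mgr-upgrade-bom-file> <CP-upgrade-bom-file>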

6 Log in to the VMware Telco Cloud Automation web interface.

7 Go to Administration > System Updates.

8 From the Apply Service Update column, click Select File.


9 Browse and select the corresponding upgrade BOM file of the newer release to which you
want to upgrade.

10 Click Upgrade.

Results

The upgrade begins. To monitor the upgrade progress, view the Status column.

Cloud-Native VMware Telco Cloud Automation Upgrade Troubleshooting
General troubleshooting methods for upgrading cloud-native VMware Telco Cloud Automation.

Getting kubeconfig of VMware Telco Cloud Automation cluster


Use the appliance manager REST API to get all the clusters of a bootstrapper virtual machine.

curl -XGET --user "bootstrapperVMUsername:bootstrapperVMPassword" "https://{bootstrapperVMIP}:9443/api/admin/clusters?clusterType=MANAGEMENT"

The API returns a JSON response; use clusterName to get the name of the VMware Telco Cloud
Automation cluster. Then use the appliance manager REST API to get the kubeconfig.

curl -XGET --user "bootstrapperVMUsername:bootstrapperVMPassword" "https://{bootstrapperVMIP}:9443/api/admin/clusters/{clusterName}/kubeconfig?clusterType=MANAGEMENT"

The API returns a JSON response; the kubeconfig field contains the base64-encoded kubeconfig.
Perform a base64 decode of the kubeconfig and use the decoded value for the kubectl and helm
commands.
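For example (an illustrative sketch; the file path is a placeholder):

echo '<base64-encoded-kubeconfig>' | base64 -d > /tmp/tca-kubeconfig
export KUBECONFIG=/tmp/tca-kubeconfig
kubectl get nodes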

Getting logs of Cloud-Native VMware Telco Cloud Automation upgrade

To view the upgrade logs, use the following steps:

1 Obtain the names of VMware Telco Cloud Automation upgrade pods using the following
command:

Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-
system is the namespace for VMware Telco Cloud Automation Control Plane.

kubectl get pods -n tca-mgr/tca-system | grep upgrade-tca

Example

kubectl get pods -n tca-mgr | grep upgrade-tca


-----------------------------------------------
upgrade-tca-manager-agent-5bc47f79cb-t2c7v 2/2 Running 0 16h
upgrade-tca-manager-helm-service-84464bdbd4-sqdc4 2/2 Running 0 16h


2 To view the logs of the VMware Telco Cloud Automation upgrade pods, use the following command for
each pod:

kubectl logs <name of the pod> -n tca-mgr/tca-system

Upgrade failed with missing VMware Telco Cloud Automation cluster details

If the upgrade fails with the error message TCA Cluster details not found. Please configure it
in Platform Manager, follow these steps to store the VMware Telco Cloud Automation cluster
kubeconfig.

1 Save the base64-encoded kubeconfig and the VMware Telco Cloud Automation cluster name in a JSON
file in the following format:

$ cat tca_kubeconfig.json
{
"data":{
"items":[
{
"config":{
"url":"https://<TCA cluster controlPlaneEndpointIP>:6443",
"clusterName":"<TCA Cluster name>",
"kubeconfig":"<base64 encoded kubeconfig>"
}
}
]
}
}

2 Store the kubeconfig in the MongoDB of the VMware Telco Cloud Automation Manager.

curl -k -XPOST --user "<username:password>" "https://<TCA_IP>:9443/api/admin/global/config/kubernetes?isInternal=true" -H "Content-Type: application/json" -H "Accept: application/json" -d @tca_kubeconfig.json

3 Store the kubeconfig in the MongoDB of the VMware Telco Cloud Automation Control Plane.

curl -k -XPOST --user "<username:password>" "https://<TCA_CP_IP>:9443/api/admin/global/config/kubernetes?isInternal=true" -H "Content-Type: application/json" -H "Accept: application/json" -d @tca_kubeconfig.json


Unable to log in to VMware Telco Cloud Automation due to upgrade failure

If the upgrade fails and you are not able to log in to VMware Telco Cloud Automation, restart the
VMware Telco Cloud Automation API pod using the following command:

kubectl rollout restart deployment/tca-api -n tca-system/tca-mgr

Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-system is
the namespace for VMware Telco Cloud Automation Control Plane.

After the restart, you can retry the VMware Telco Cloud Automation upgrade.

Upgrade failure while setting up upgrade agent


If the upgrade fails with the error message failure while setting upgrade agent, follow these steps
before retrying the upgrade.

1 Restart the VMware Telco Cloud Automation helm service pod using the following command:

kubectl rollout restart deployment/tca-helm-service -n tca-system/tca-mgr

2 Uninstall any upgrade related helm services.

a To obtain the upgrade helm services, use the following command:

helm list -n tca-mgr/tca-system | grep tca-manager/tca-cp

b Uninstall upgrade helm services using the following command:

helm uninstall <serviceName> -n tca-mgr/tca-system

Note tca-mgr is the namespace for VMware Telco Cloud Automation Manager and tca-
system is the namespace for VMware Telco Cloud Automation Control Plane.

Upgrade failure due to Read Timed Out


If the upgrade fails with the error Failure while setting upgrade agent: Helm API failed: Read
timed out, see the related VMware knowledge base (KB) article for detailed steps.

Reinstalling service which failed to upgrade


If any service fails to upgrade during the VMware Telco Cloud Automation upgrade, you can retry the
upgrade.


However, if the upgrade fails after multiple retries, you can follow these steps to uninstall and
reinstall the services.

1 Save the current override values of the service to a yaml file.

helm get values {serviceName} -n {namespace} > {service}.yaml

2 Uninstall the helm service.

helm uninstall {serviceName} -n {namespace}

3 Add the VMware Telco Cloud Automation helm repo to be able to fetch the helm charts.

helm repo add jfrog https://vmwaresaas.jfrog.io/artifactory/helm-registry

helm repo update

4 Verify the added helm repo.

helm repo list

-----------------
Response:
NAME     URL
jfrog    https://vmwaresaas.jfrog.io/artifactory/helm-registry

5 Install the helm service.

helm install {serviceName} {chartName} --version {version} -f {service}.yaml -n {namespace}

Note VMware Telco Cloud Automation services are deployed in the namespaces: tca-mgr,
tca-system, istio-system, metallb-system, tca-services, postgres-operator-system, and
fluent-system.

Example for reinstalling VMware Telco Cloud Automation helm chart


helm get values tca -n tca-mgr > tca-mgr.yaml

helm uninstall tca -n tca-mgr

helm repo add jfrog https://vmwaresaas.jfrog.io/artifactory/helm-registry

helm repo update

helm install tca jfrog/tca --version 2.1.0 -f tca-mgr.yaml -n tca-mgr

Note You can use helm search repo option to search for VMware Telco Cloud Automation helm
charts.
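
For example, assuming the repository was added with the name jfrog as in step 3, a search such as the following lists the available VMware Telco Cloud Automation charts:

helm search repo jfrog/tca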



Python Software Development Kits
28
The VMware Telco Cloud Automation Software Development Kits (SDKs) provide language
bindings for accessing the VMware Telco Cloud Automation NFV orchestration and management
APIs. SDK-Python is used for automating the VMware Telco Cloud Automation operations using
Python programming.

The VMware Telco Cloud Automation APIs are classified into:

n System Management APIs: Used for configuring TCA-Manager and TCA-CPs. These APIs are
mainly used for configuring and troubleshooting the TCA-CP appliances.

n NFV Orchestration APIs: Used to manage VNF Lifecycle, VNF Packages, Network Services,
Event Subscriptions, and so on with NFV SOL APIs (SOL005 and SOL003), CaaS and Virtual
Infrastructure Management APIs, Partner Systems, and Extension APIs.

For information on the supported VMware Telco Cloud Automation versions, see VMware
Product Interoperability Matrix.

To download the VMware Telco Cloud Automation SDKs, go to VMware Telco Cloud Automation
SDK.

For more information on the VMware Telco Cloud Automation SDKs, see Telco Cloud Automation
SDK Programming Guide.



Global Settings APIs
29
Use these APIs for configuring the default settings of VMware Telco Cloud Automation.

This chapter includes the following topics:

n API for CNF Debug Options

n Global Settings for Cluster Automation

n Global Settings for Concurrency Limit

API for CNF Debug Options


VMware Telco Cloud Automation provides APIs for debugging or updating the default CNF
options. Run the following API on the relevant VMware Telco Cloud Automation Control Plane
(TCA-CP) instance.

API
PUT https://<TCA_CP>/admin/hybridity/api/global/settings/<namespace>/<option>

Options
Namespace: CnfPackageManager
Option: helmTimeoutSec
Description: Helm timeout option value. The default timeout value is 1200 seconds.
Sample Request:

{
    "value": "2000"
}
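
For example, a minimal curl invocation of this API might look like the following sketch. The TCA-CP host name, the credentials, and the -k option for self-signed certificates are placeholders, and the authentication is assumed to be the same as for the other VMware Telco Cloud Automation admin APIs.

curl -k -X PUT --user "<username:password>" "https://<TCA_CP>/admin/hybridity/api/global/settings/CnfPackageManager/helmTimeoutSec" -H "Content-Type: application/json" -H "Accept: application/json" -d '{"value": "2000"}'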

Global Settings for Cluster Automation


VMware Telco Cloud Automation allows you to configure certain cluster automation settings.

You can configure the behavior for virtual machine placement, update the supported hardware
version of vfio-pci device drivers, and update the wait time and poll intervals for customization
tasks. Configure these settings to change the default behavior only when there is an issue with
your existing environment.


API for Cluster Automation Global Settings


Run the following API on the relevant VMware Telco Cloud Automation Manager or VMware
Telco Cloud Automation Control Plane (TCA-CP) instance.

API
PUT: /admin/hybridity/api/global/settings/<namespace>/<property>
{
"value": <value>
}

Note The authentication is the same as the other VMware Telco Cloud Automation APIs.

API to Disable CSR Validation


Run the following API on the relevant VMware Telco Cloud Automation Manager or VMware
Telco Cloud Automation Control Plane (TCA-CP) instance.

API for CNF and VNF


PUT /admin/hybridity/api/global/settings/global/NetworkFunctionSchemaValidation
{
"value": false
}

Note The authentication is the same as the other VMware Telco Cloud Automation APIs.

API for NS
PUT /admin/hybridity/api/global/settings/global/NetworkServiceSchemaValidation
{
"value": false
}

Note The authentication is the same as the other VMware Telco Cloud Automation APIs.

Configure Cluster Automation Settings


You can configure the following cluster automation settings listed in this section.

Hardware Version for VFIO PCI Driver


VMware Tanzu Kubernetes Grid deploys a template with the virtual machine hardware version
13 by default. If your network function uses the VFIO PCI driver, it requires the hardware
version 14 for an Intel-based setup and version 18 for an AMD-based setup. VMware Telco
Cloud Automation updates this information to the VMConfig Operator according to the firmware.
Currently, VMware Telco Cloud Automation updates the hardware version as 14 assuming that


the setup is Intel-based. If you are using an AMD-based setup, use the following API to update
the global settings to send the hardware version as 18.

Prerequisites
Run this API on VMware Telco Cloud Automation Manager.

API

PUT: /admin/hybridity/api/global/settings/InfraAutomation/vfioPciHardwareVersion
{
"value": "18"
}

Note
n Update the appropriate value based on the firmware.

Enable Virtual Machine Placement in vSphere DRS


During customization, the VMConfig plug-in tries to fit the virtual machines to the correct NUMA
nodes of the ESXi server based on its SR-IOV, Passthrough, and Pinning specifications. To disable
the virtual machine placement operation by the plug-in and enable the virtual machine placement
by vSphere DRS, configure the following settings on the node pool before instantiating the
network function.

Prerequisites
Run this API on VMware Telco Cloud Automation Manager.

API

PUT: /admin/hybridity/api/global/settings/InfraAutomation/<nodePoolId>_enableDRS
{
"value": true
}

Note
n Ensure that you replace all the hyphens (-) in <nodePoolId> with underscores (_).
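
For example, assuming a hypothetical node pool ID of a1b2-c3d4-e5f6, the hyphens are replaced with underscores and the request becomes:

PUT: /admin/hybridity/api/global/settings/InfraAutomation/a1b2_c3d4_e5f6_enableDRS
{
    "value": true
}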

Update CPU and Memory Reservation During Virtual Machine Placement


During customization, VMware Telco Cloud Automation reserves 2 physical cores (4 Hyper
Threads) and 512 MB of memory for the ESXi host while performing the CPU or memory pinning
operation on the virtual machines. You can update this default configuration and update the
VMware ESXi host information before instantiating your network function.

Prerequisites
Run this API on VMware Telco Cloud Automation Manager.


Update the CPU Reservation

PUT: /admin/hybridity/api/global/settings/InfraAutomation/reservedCoresPerNumaNode
{
"value": 3
}

Note Enter the new reservation value in number of physical cores.

Update the Memory Reservation

PUT: /admin/hybridity/api/global/settings/InfraAutomation/reservedMemoryPerNumaNode
{
"value": 1024
}

Note Enter the new reservation value in MB.

Update the VMware ESXi Host Information


After updating the CPU or memory reservation, run the following API:

PUT: /hybridity/api/infra/k8s/clusters/<workloadclusterId>/esxinfo
{
}

Update Wait Timeout for Customization Tasks


VMware Telco Cloud Automation posts the customizations to VMConfig and polls for the customization tasks to complete. By default, VMware Telco Cloud Automation polls at 30-second intervals and waits up to 30 minutes for the customization to complete. You can configure this default behavior using the following APIs.

Prerequisites
Run this API on VMware Telco Cloud Automation Manager.

Number of Polls
By default, VMware Telco Cloud Automation polls 60 times. If your customization requires a
longer time to complete, you can increase the poll count.

PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusRetryCount
{
"value": 120
}


Wait Interval Between Each Poll


By default, VMware Telco Cloud Automation waits for 30 seconds between successive polls. If your customization requires a longer time to complete, you can increase the poll interval.

PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusWaitTime
{
"value": 120
}

Number of Retries on Failure


On failure, VMware Telco Cloud Automation retries the customization task up to 10 times. You
can increase the retry count.

PUT: /admin/hybridity/api/global/settings/InfraAutomation/nodePolicyStatusFailureRetryCount
{
"value": 30
}

Global Settings for Concurrency Limit


You can control the number of API requests processed concurrently. Any request beyond the
concurrency limit is queued until an ongoing request is completed. The next request is picked
from the queued list based on the request priority, which is determined by the priority header
and start time. This feature is applicable for CaaSv2, CNF, and VNF LCM.

API

PUT:https://<>/admin/hybridity/api/global/settings/{service}/{option}

Options
Namespace: {service}
Option: intentObserverTaskDelay
Description: Updates the request polling interval. By default, the polling interval is 60 seconds.
Sample Request:

{
    "value": 120
}

Namespace: {service}
Option: concurrencyLimit
Description: Configures the concurrency limit. This specifies the number of requests that can run in parallel. By default, it is 256.
Sample Request:

{
    "value": 512
}

The possible values for Service are:

n ClusterAutomation - If you want to apply the settings to CaaSv2

n CNFLCM – For CNF LCM Service


n VNFLCM - For VNF LCM Service

For example, to update the polling interval for CaasSv2:

PUT:https://<>/admin/hybridity/api/global/settings/ClusterAutomation/intentObserverTaskDelay
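
A complete request also carries the JSON body with the new value. For example, using curl with placeholder host and credentials, and assuming the same authentication as the other VMware Telco Cloud Automation admin APIs:

curl -k -X PUT --user "<username:password>" "https://<TCA_IP>/admin/hybridity/api/global/settings/ClusterAutomation/intentObserverTaskDelay" -H "Content-Type: application/json" -H "Accept: application/json" -d '{"value": 120}'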



Registering Partner Systems
30
You can register third-party partner systems with VMware Telco Cloud Automation for managing
VNFs.

The following table lists the supported partner systems and their versions:

Partner System Version

Nokia CBAM 19.5.0.1, 19.5.1

Harbor 1.x, 2.x

Airgap Server Corresponding VMware Telco Cloud Automation release version.

This chapter includes the following topics:

n Add a Partner System to VMware Telco Cloud Automation

n Edit a Registered Partner System

n Associate a Partner System Network Function Catalog

n Add a Harbor Repository

n Add an Air Gap Repository

n Add a Proxy Repository

n Add Amazon ECR

Add a Partner System to VMware Telco Cloud Automation


Add a partner system to VMware Telco Cloud Automation.

To add a partner system, perform the following steps:

Prerequisites

Note You must add at least one VMware Cloud Director-based cloud to your VMware Telco
Cloud Automation environment before adding a partner system.

You must have the Partner System Admin privileges to perform this task.


Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and click Register.

3 In the Register Partner System page, select the partner system type and enter the appropriate information for registering the partner system.

4 Click Next.

5 Associate one or more VIMs to your partner system.

6 Click Finish.

Results

The partner system is added to VMware Telco Cloud Automation and is displayed on the Partner
Systems page.

Example

In this example, we list the steps to add Nokia CBAM as a partner system:

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and click Register.

3 In the Register Partner System page, Nokia CBAM is preselected. Enter the following
information:

n Name - Enter a unique name to identify the partner system in VMware Telco Cloud
Automation.

n Version - Select the version of the partner system from the drop-down menu.

n URL - Enter the URL to access the partner system.

n Secret - Enter the secret passcode for the client.

Note You can get the Client ID and Secret from Nokia CBAM.

n Trusted Certificate (Optional) - Paste the contents of the certificate.

4 Click Next.

5 Associate one or more VIMs to your partner system.

6 Click Finish.

Nokia CBAM is added to VMware Telco Cloud Automation and is displayed in the Partner
Systems page.

What to do next

n You can select the partner system and click Modify Registration or Delete Registration to
edit the configuration or remove the system from VMware Telco Cloud Automation.


n You can add a network function catalog from the partner system to VMware Telco Cloud
Automation.

Edit a Registered Partner System


After registering, you can edit the partner system details and its associated VIMs.

Prerequisites

You must have the Partner System Admin privileges to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and select the partner system that you want to
edit.

3 Click Modify Registration.

4 In the Credentials tab, edit the partner system details.

5 Click Next.

6 Select additional VIMs or deselect the VIMs that you do not want to associate your partner
system with.

Note You can dissociate a VIM only if the CNFs instantiated on the VIM are deleted.

7 Click Finish.

Results

The partner system details are updated.

What to do next

To view the updated details of your partner system, go to Infrastructure > Partner Systems,
select your partner system, and click the > icon.

Associate a Partner System Network Function Catalog


VMware Telco Cloud Automation can orchestrate a network function catalog from a partner
system. Add the partner system's network function catalog to VMware Telco Cloud Automation.

Prerequisites

You must have the Partner System Admin privileges to perform this task.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and select the partner system.


3 Click Add Network Function Catalog.

4 In the Add Network Function Catalog page, enter the following details:

n Descriptor ID - The descriptor ID of the network function catalog.

n Product Name - The name of the product associated with the network function catalog.

n Software Version - The software version of the partner system.

n Descriptor Version - The version number of the network descriptor.

5 Click Add.

Results

The network function catalog is added to the Network Functions > Catalogs page.

Note You cannot edit the Network Function Description of a network function catalog that is
added from a partner system.

Add a Harbor Repository


Add a Harbor repository to VMware Telco Cloud Automation.

Prerequisites

To perform this task, you must have the Partner System Administrator privileges.

Note
n Ensure that all Harbor repository URLs contain the appropriate port numbers such as 80, 443,
8080, and so on.

n You cannot register multiple Harbor systems on a single VIM.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and click Register.

3 Select Harbor.

4 Enter the following details:

n Name - Provide a name for your repository.

n Version - Select the Harbor version from the drop-down menu.

n URL - Enter the URL of your repository. If you use a Harbor repository from a third-party
application, ensure that you provide this URL in VMware Telco Cloud Automation Control
Plane (TCA-CP).

n To trust the certificates provided by Harbor, select Trust Certificate.

n Username and Password - Provide the credentials to access your repository.


5 Click Next.

6 Associate one or more VIMs to your Harbor repository.

7 Click Finish.

Results

You have successfully registered your Harbor repository. You can now select this repository for
resources when instantiating a CNF.

What to do next

The Harbor inventory synchronizes every 5 minutes. After adding, modifying, or deleting a
Harbor repository, you cannot view the changes in your inventory until the next synchronization
happens. To refresh the inventory manually, go to Infrastructure > Partner Systems and click
Refresh Harbor Inventory.

Add an Air Gap Repository


Add the air-gapped repository that you have set up to VMware Telco Cloud Automation.

Prerequisites

To perform this task, you must have the Partner System Administrator privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and click Register.

3 Select Air Gap.

4 Enter the following details:

n Name - Provide a name for your repository.

n FQDN - Enter the URL of your repository.

n CA Certificate - If your air-gapped repository uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from ----BEGIN CERTIFICATE---- to ----END CERTIFICATE----.

5 Click Finish.

Results

You have successfully added your air-gapped repository. The repository is now listed in the list
of repositories under Partner Systems.

What to do next

You can now use the air-gapped repository when deploying a management or workload cluster
in your air-gapped environment.


Add a Proxy Repository


You can configure a proxy server and route all Internet traffic through it. When deploying a
VMware Tanzu Kubernetes Grid cluster, you can select this proxy server as the repository for the
required CaaS files.

Prerequisites

To perform this task, you must have the Partner System Administrator privileges.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and select Proxy.

3 Enter the following details:

n Name: Name of the Proxy server.

n HTTP Proxy: HTTP URL for receiving HTTP requests.

n HTTPS Proxy: HTTPS URL for receiving HTTPS requests.

n No Proxy: Enter the IP addresses of servers that VMware Telco Cloud Automation must reach directly, without routing the traffic through the proxy server. You can add multiple server IP addresses.

n CA Certificate: If the proxy server uses a self-signed certificate, paste the CA certificate
used for signing the Proxy server certificate.

4 Click Finish.

Results

You have successfully registered the Proxy server. You can now select this repository for
resources when deploying a VMware Tanzu Kubernetes Grid cluster.

Add Amazon ECR


Add an Amazon Elastic Container Registry (ECR) for storing, sharing, and deploying container
images.

Prerequisites

To perform this task, you must have the Partner System Administrator privileges.

Note In TCA 2.2, you cannot register ECR through partner systems in an airgap environment.

Procedure

1 Log in to the VMware Telco Cloud Automation web interface.

2 Navigate to Infrastructure > Partner Systems and click Register.


3 Select Amazon ECR.

4 Enter the following details:

n Name - Provide a name for your registry.

n FQDN - Enter the fully qualified domain name (FQDN). For example,
example.amazonaws.com.

n EC2 Region - Enter the region where the ECR is hosted.

n ECR Access Key - To perform actions, enter the ECR access key.

n ECR Access Secret - Enter the credentials to access your registry.

n Role ARN (Optional) - Enter the Amazon Resource Name (ARN) specifying the role.

5 Click Next.

6 Associate one or more VIMs to your Amazon ECR.

7 Click Finish.



Appendix
31
Reference details for VMware Telco Cloud Automation.

This chapter includes the following topics:

n Enable Virtual Hyper-Threading

n A1: PTP Overview

n A2: Host Profile for PTP and ACC100

n PTP Notifications

n Setup User/Group/Storage Policy in vCenter Server for vSphere CSI

Enable Virtual Hyper-Threading


Hyper-threading (HT) or Simultaneous Multi-Threading (SMT) refers to the CPU capabilities that allow multiple CPU threads (HT threads) to execute together on the same CPU core. In the case of virtualization on ESX, the same terminologies are used with the prefix v to indicate virtual Hyper-threading (vHT). Without vHT on ESX, each virtual CPU (vCPU) represents a virtual core (vCore) for the guest operating system (GOS). When this feature is enabled, a vCPU refers to one of the vHT threads on the virtual core (vCore).

Steps to enable vHT:

Prerequisites

vHT is supported on the following product versions and later:

Product Version

VMware Telco Cloud Automation and Control Plane 2.1.0

vCenter Server OVA 7.0 U3f

ESXi 7.0 U3f

Procedure

1 Ensure that you are using the supported VMware Telco Cloud Automation/Control Plane,
ESXi, and vCenter Server versions.

2 Log in to VMware Telco Cloud Automation.


3 Create a CSAR file or edit an existing file, for example, /Definitions/VNFD.yaml, and add
the following parameter under node_components.

enableSMT: true


4 Instantiate the Network Function with the modified CSAR file.

5 After instantiating the Network Function, log in to the Worker node and verify that the
following values are set, as expected.

% ssh [email protected]
([email protected]) Password:
Last login: Wed Feb 9 23:42:28 2022 from 172.31.251.4
00:00:02 up 1 day, 4:11, 1 user, load average: 2.06, 2.28, 2.06
capv@wc-smc-np1-759ffb5759-fpfmx [ ~ ]$ sudo su
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/active
1
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/control
on
root [ /home/capv ]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6


Model: 106
Model name: Intel(R) Xeon(R) Gold 6312U CPU @ 2.40GHz
Stepping: 6
CPU MHz: 2399.999
BogoMIPS: 4799.99
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx
fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl
xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm
abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase
tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma
clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat
avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg
avx512_vpopcntdq rdpid md_clear flush_l1d arch_capabilities
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0-1
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
0-1

root [ /home/capv ]# cat /sys/devices/system/cpu/cpu30/topology/thread_siblings_list
30-31
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu31/topology/thread_siblings_list
30-31
root [ /home/capv ]#

If vHT is not enabled, the values are:

capv@wc-h1314-np1314-594dc47cb9-kxr5f [ ~ ]$ sudo su
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/active
0
root [ /home/capv ]# cat /sys/devices/system/cpu/smt/control
notsupported
root [ /home/capv ]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0,2-63
Off-line CPU(s) list: 1
Thread(s) per core: 1
Core(s) per socket: 63
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6338N CPU @ 2.20GHz
Stepping: 6


CPU MHz: 1496.484
BogoMIPS: 2992.96
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0,2-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon
nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm
abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase
tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap avx512ifma
clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat
avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg
avx512_vpopcntdq rdpid md_clear flush_l1d arch_capabilities
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
2
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu3/topology/thread_siblings_list
3
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu62/topology/thread_siblings_list
62
root [ /home/capv ]# cat /sys/devices/system/cpu/cpu63/topology/thread_siblings_list
63
root [ /home/capv ]#

A1: PTP Overview


Configurations to use the Precision Time Protocol (PTP).

You can configure the PTP in VMware Telco Cloud Automation in two modes:

n PTP in Passthrough mode.

n PTP over virtual function (VF) mode.

PTP in Passthrough Mode


VMware Telco Cloud Automation supports PTP in Passthrough mode on XXV710 and E810 cards.
To use PTP in Passthrough mode, set Device Type to NIC when configuring the PTP using
Infrastructure Designer. For details, see Infrastructure Requirements Designer.

The diagram shows how the PTP works in the passthrough mode.


Figure 31-1. PTP in Passthrough Mode

Note
n For XXV710 card, you can use any Physical Function (PF) port for PTP.

n For E810 card, you can use only Physical Function 0 (PF0) port for PTP.

PTP over VF Mode


VMware Telco Cloud Automation supports PTP over VF only for Intel E810 NICs. You can use any
port of the E810 card for PTP over VF.

The diagram shows how the PTP over VF works.


Figure 31-2. PTP over VF

You can create a host profile to use PTP over VF. For details, see Add a Host Profile.

Note Before you create the host profile for PTP over VF, ensure that you use a PTP-enabled port connected to a PTP-enabled switch.

Creating a host profile enables you to control which PF the VF used for PTP is assigned from. Without a host profile, the default VF assignment for PTP over VF can happen from any of the SR-IOV-enabled PFs.

Driver and Firmware versions for PTP over VF


The table provides information on the minimum versions of the driver and firmware that support PTP over VF. For more information, see TCP RAN and TCA Software and Driver Version Compatibility Matrix.

NIC Type: Intel E810
VendorID: 8086
DeviceID: 0x1591, 0x1592, 0x1593, 0x1599, 0x159A, 0x159B
Firmware: 3.0
ESXi Driver: icen 1.6.5
SR-IOV driver: iavf 4.2.7


Verifying PTP configuration


n Verify that the PTP interface has hardware timestamping capabilities and a PTP hardware clock.

capv@wc-ptpvf-test4-np1-5b485945fb-dphl4 [ ~ ]$ ethtool -T ptp


Time stamping parameters for ptp:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)
capv@wc-ptpvf-test4-np1-5b485945fb-dphl4 [ ~ ]$

n Verify that the ptp4l service is running.

capv@wc-ptpvf-test4-np1-5b485945fb-dphl4 [ ~ ]$ systemctl status ptp4l


ptp4l.service - Precision Time Protocol (PTP) service
Loaded: loaded (/lib/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-01-26 00:39:10 UTC; 5 days ago
Main PID: 27668 (ptp4l)
Tasks: 1 (limit: 4915)
Memory: 168.0K
CGroup: /system.slice/ptp4l.service
└─27668 /usr/sbin/ptp4l -f /etc/ptp4l.conf
capv@wc-ptpvf-test4-np1-5b485945fb-dphl4 [ ~ ]$

A2: Host Profile for PTP and ACC100


Host profile details for PTP and ACC100 devices.

Note If you had configured PTP and ACC100 in passthrough mode in VMware Telco Cloud
Automation 1.9.1 or 1.9.5 and want to upgrade to VMware Telco Cloud Automation 2.0, you do
not need to create a host profile for PTP.

For the PTP and ACC100, you need to configure the following sections in the Host Profile.

n PCI Device Settings

n PCI Device Groups


To configure the device in Passthrough, SRIOV, or Custom mode, use the PCI Device settings. For
example, if the host has an E810 card with four ports, and you want to put PF0 in Passthrough
Active and PF[1-3] in SRIOV mode, you can use PCI Device settings in Host Profile to implement
these configurations.

PCI Device Groups define the filters for selecting a particular PF for PTP. For example, if you have PF0 and PF1 of a card in Passthrough Active mode and have connected the PTP switch to PF0, you can use the filters in PCI Device Groups to select PF0 for PTP.

Note
n XXV710 and E810 cards support PTP in passthrough mode.

n On XXV710 card, you can use any PF for PTP.

n On E810 card, you can use only PF0 for PTP.

n For PTP in Passthrough mode, configure the PTP port in Passthrough Active and SRIOV
disabled mode. You can perform these configurations using the Host Profile function of the
VMware Telco Cloud Automation.

Prerequisites for ACC100 and PTP


Prerequisites for creating host profile for ACC100 and PTP devices.

Note
n You must apply the Host Profile before you instantiate a Network Function that uses ACC100.

n If you have already created a worker node cluster on the host, either delete that worker node cluster and recreate it after applying the Host Profile, or set the worker node to Enter Maintenance Mode in VMware Telco Cloud Automation and power off the worker node in VMware vCenter.

Obtaining the Custom File for ACC100


Obtain the .cfg file for the custom property of the host profile.

The custom property of the host profile requires the .cfg file available on the VMware ESXi host.

Procedure

1 Log in to the VMware ESXi host.

2 Navigate to /opt/intel/ACC100/.

3 Open acc100_config_vf_5g.cfg.

4 Check the content of acc100_config_vf_5g.cfg. For example:

; SPDX-License-Identifier: Apache-2.0
; Copyright(c) 2020 Intel Corporation


[MODE]
pf_mode_en = 0

[VFBUNDLES]
num_vf_bundles = 16

[MAXQSIZE]
max_queue_size = 1024

[QUL4G]
num_qgroups = 0
num_aqs_per_groups = 16
aq_depth_log2 = 4

[QDL4G]
num_qgroups = 0
num_aqs_per_groups = 16
aq_depth_log2 = 4

[QUL5G]
num_qgroups = 4
num_aqs_per_groups = 16
aq_depth_log2 = 4

[QDL5G]
num_qgroups = 4
num_aqs_per_groups = 16
aq_depth_log2 = 4

5 Save the acc100_config_vf_5g.cfg to the local system.

Host Profile for PTP in Passthrough mode and ACC100


Procedure to create the host profile for PTP in Passthrough mode and ACC100.

Follow the procedure to create the host profile for PTP in passthrough and ACC100 device.

1 On VMware Telco Cloud Automation, navigate to Infrastructure Automation > Configuration > Host Profile.

2 To add a host profile, click Add.

3 Provide a Profile Name. For example, ptp-acc100-hostprofile.

4 To add the PTP device:

a Click Add Device under PCI Device Settings.

b Click Add Action under Device Details.

1 To add the passthrough device, select Passthrough from the Type drop-down menu.

2 To enable the passthrough device, click the toggle button corresponding to Enable Passthrough.


c To add the filter, click Add Filter.

n To add the items, select the value from the key drop-down menu.

n To add the filter items, click Add Filter Item.

Note Add the following key-value pairs:

n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendorid of
Intel.

n Device ID: The device identification for the port used for PTP. For example, 0x1593.

n Index: Index of the PTP port in Passthrough active devices. For example, 0.

5 To add the ACC100 device:

a Click Add Device under PCI Device Settings.

b Click Add Action under Device Details.

1 To add the ACC100 device, select SR-IOV from the Type drop-down menu.

2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 16.

c To add the action item for custom properties for ACC100 device, click Add Action under
Device Details.

1 Select CUSTOM from the Type drop-down menu.

2 Add Key as devicetype and value as ACC100.

3 To add the Configuration File value, click browse and navigate and select the
acc100_config_vf_5g.cfg file. For details on the acc100_config_vf_5g.cfg, see
Obtaining the Custom File for ACC100.

d To add the filter, click Add Filter.

Note Add the following key-value pairs:

n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendorid of
Intel.

n Device ID: The device identification of the ACC100 device. For example, 0xd5c.

To add the key-value, select the key from the key drop-down menu and type the
corresponding value in the value field.

n To add the items, select the value from the key drop-down menu.

n To add the filter items, click Add Filter Item.

6 To add the PCI device group, click ADD GROUP under PCI Device Groups.

7 Provide the Device Group Name as PTP.

8 To add the filter, click ADD FILTER.


9 Add three filter items in PCI Device Groups.

Note Add the following filter items:

n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendorid of Intel.

n Device ID: The device identification for the port used for PTP. For example, 0x1593.

n Index: index of the PTP port in Passthrough active devices. For example, 0.

Figure 31-3. Add Filter Items

Note
n To add filter item, click the + icon available in the filter.

n Ensure that you add filter items. Do not add additional filters. If you click Add Filter, it
adds additional filter and not the filter items.


10 Enter a value for each of the following fields:

n Reserved cores per NUMA node. For example, 1.

n Reserved memory per NUMA node. For example, 512.

n Min core for CPU reservation per NUMA node. For example, 3.

11 To save the profile, click Save.

What to do next

Applying Host Profile to Cell Site Group.

Host Profile for PTP over VF and ACC100


Host Profile configuration for PTP over VF and ACC100.

Follow the procedure to create the host profile for PTP over VF and ACC100 device.

1 On VMware Telco Cloud Automation, navigate to Infrastructure Automation >Configuration


>Host Profile.

2 To add a host profile, click Add.

3 Provide a Profile Name. For example, ptp-acc100-hostprofile.

4 To add the SR-IOV device:

a Click Add Device under PCI Device Settings.

b Click Add Action under Device Details.

1 To add the SRIOV device, select SR-IOV from the Type drop-down menu.

2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 8.

c To add the filter, click Add Filter. Add the key as alias and value as vmnic2.

5 To add the ACC100 device:

a Click Add Device under PCI Device Settings.

b Click Add Action under Device Details.

1 To add the ACC100 device, select SR-IOV from the Type drop-down menu.

2 To configure the value of Number of Virtual Functions, type the value of Number of
Virtual Functions in the value field. For example, 16.

a To add the action item for custom properties for ACC100 device, click Add Action under
Device Details.

1 Select CUSTOM from the Type drop-down menu.

2 Add Key as devicetype and value as ACC100.


3 To add the Configuration File value, click browse and navigate and select the
acc100_config_vf_5g.cfg file. For details on the acc100_config_vf_5g.cfg, see
Obtaining the Custom File for ACC100.

b To add the filter, click Add Filter.

To add the key-value, select the key from the key drop-down menu and type the
corresponding value in the value field.

n To add the items, select the value from the key drop-down menu.

n To add the filter items, click Add Filter Item.

Note Add the following key-value pairs:

n Vendor ID: The vendor identification. For example, 0x8086 denotes the vendorid of
Intel.

n Device ID: The device identification of the ACC100 device. For example, 0xd5c.

6 To add the PCI device group, click ADD GROUP under PCI Device Groups.

7 Provide the Device Group Name as PTP.

8 To add the filter, click ADD FILTER.

9 Add two filter items in PCI Device Groups.


Figure 31-4. Add Filter Items

Note
n To add filter item, click the + icon available in the filter.

n Ensure that you add filter items. Do not add additional filters. If you click Add Filter, it
adds additional filter and not the filter items.

10 In the first filter item, select the key as sriovEnabled and enable it from the radio button.

11 In the second filter item, select the key as alias and set the value to the name of the physical interface that you want to use for PTP. For example, vmnic2.

Note After you add the second filter item, ensure that you can see both alias and
sriovEnabled under a single filter.

12 Enter a value for each of the following fields:

n Reserved cores per NUMA node. For example, 1.


n Reserved memory per NUMA node. For example, 512.

n Min core for CPU reservation per NUMA node. For example, 3.

13 To save the profile, click Save.

Prerequisites

At present, only Intel E810 NICs support PTP over VF. You can use any port of the E810 card for PTP over VF.

What to do next

Applying Host Profile to Cell Site Group.

Applying Host Profile to Cell Site Group


You can apply the host profile configuration to a cell site group.

Perform this procedure to apply the host profile setting on the cell site group.

Prerequisites

Ensure that you have created the host profile.

Procedure

1 Log in to the VMware Telco Cloud Automation.

2 Navigate to Infrastructure Automation > Domains.

3 Click on the Cell Site Group.

4 Click the radio button corresponding to the Cell Site Group on which you need to apply the
host profile.

5 Click Edit.

6 In Select Host Profile, select the host profile from the drop-down menu.

7 Click Save.

8 Click the radio button corresponding to the Cell Site Group on which you need to apply the
host profile.

9 Click the Resync button to apply the host profile. Ensure that the Status of the host displays
Provisioned.

What to do next

n If you deleted an already created worker node cluster, recreate that worker node cluster.

n If you had set the worker node to Enter Maintenance Mode in VMware Telco Cloud
Automation, then set that worker node to Exit Maintenance Mode in VMware Telco Cloud
Automation and power on the worker node in the VMware vCenter.

n Instantiate NF that uses ACC100. For details, see Instantiating a Network Function.


CSAR Configuration for PTP and ACC100


CSAR configuration for PTP in passthrough mode and ACC100 device.

CSAR configuration for PTP in Passthrough mode and ACC100 device

Note For PTP in Passthrough mode, specify Device Type as NIC. For details on CSAR
modification, see Infrastructure Requirements Designer.

ptp:
  required: true
  propertyName: ptp
  description: Select PCI Group for Device PTP
  default: 'ptp'
  type: string
  format: pf_group
acc100:
  required: true
  propertyName: acc100
  description: 'Select PCI Group for Device sriovacc100igbuio, sriovacc100vfio'
  default: 'acc100'
  type: string
  format: pf_group
....
....
passthrough_devices:
  - device_type: NIC
    pf_group: ptp
    isSharedAcrossNuma: true
  - device_type: ACC100
    pf_group: acc100
    resourceName: sriovacc100vfio
    dpdkBinding: vfio-pci
    isSharedAcrossNuma: true

CSAR configuration for PTP over VF mode and ACC100 device

Note For PTP over VF, specify Device Type as PTP. For details on CSAR modification, see
Infrastructure Requirements Designer.

ptp:
  required: true
  propertyName: ptp
  description: Select PCI Group for Device PTP
  default: 'ptp'
  type: string
  format: pf_group
acc100:
  required: true
  propertyName: acc100
  description: 'Select PCI Group for Device sriovacc100igbuio, sriovacc100vfio'
  default: 'acc100'
  type: string
  format: pf_group
....
....
passthrough_devices:
  - device_type: PTP
    pf_group: ptp
    isSharedAcrossNuma: true
  - device_type: ACC100
    pf_group: acc100
    resourceName: sriovacc100vfio
    dpdkBinding: vfio-pci
    isSharedAcrossNuma: true

Symmetric Layout - Dual Socket Two NUMA System


Architecture diagram of symmetric dual socket two NUMA system.

Architecture Diagram


Best Practices
1. It is advisable to have a symmetric layout on both NUMA nodes. For example:

n One E810 on NUMA 0. One E810 on NUMA 1.

n One ACC100 on NUMA 0. One ACC100 on NUMA 1.

n vCPU is equally divided between NUMA 0 and NUMA 1.

n Memory is equally divided between NUMA 0 and NUMA 1.

2. To configure PTP, create PCI groups for each NUMA. Each NIC can provide PTP to only one
Worker node. For example:

n One E810 for each NUMA.

n 4 port E810 in NUMA 0: vmnic0 vmnic1 vmnic2 vmnic3.

n 4 port E810 in NUMA 1: vmnic4 vmnic5 vmnic6 vmnic7.

n Create pci-group-ptp-numa-0 that includes vmnic0. This is used for PTP in NUMA 0 while
instantiating a Network Function.

n Create pci-group-ptp-numa-1 that includes vmnic4. This is used for PTP in NUMA 1 while
instantiating a Network Function.

Note You need not use the isSharedAcrossNuma flag. Both NUMA nodes have E810 cards and
there is no need for cross-NUMA sharing.

3. Create PCI groups for ACC 100. For example, one ACC 100 on each NUMA node.

Note You need not use the isSharedAcrossNuma flag. Both NUMA nodes have ACC 100 cards
and there is no need for cross-NUMA sharing.

Hyper-threading and NUMA


This section provides information about hyper-threading, pinning, NICs in the NUMA node, and
core sibling information inside the DU Worker node.

Hyper-threading
If hyper-threading is enabled, then each core is logically divided into two hyper-threads or
physical CPUs (pCPU):

Core pCPU

Core 0 pCPU 0 and pCPU 1

Core 1 pCPU 2 and pCPU 3

Core 2 pCPU 4 and pCPU 5

And so on.


Creating and Pinning 40 vCPU DU Worker Nodes


The following architectural diagram provides information about how a Worker node vCPU can be
pinned to a pCPU if there are no other VMs running on the host.

Note After you pin a DU worker node, it does not move across pCPUs. You can configure pinning using the isNumaConfigNeeded flag in the CSAR file. This flag must be set to true.

NICs in NUMA
When the DU worker node requests I/O devices through the CSAR, it can either choose the I/O devices connected to the same NUMA node or share I/O devices with a different NUMA node. This is configured using the isSharedAcrossNuma flag in the CSAR file. If this flag is set to true, the worker node can source the I/O devices from a different NUMA node. If this flag is set to false or is not present, it sources I/O devices connected to the same NUMA node to which the DU worker node is pinned.

Core Sibling Information Inside DU Worker Node


To expose hyper-threading details inside the VM, you must enable the vHT feature through the CSAR. After you enable vHT, the Worker node VM can access hyper-threading sibling relations inside the VM. The lscpu -e -a command then displays all vCPUs and their associated cores and sockets. The lscpu command also displays the threads-per-core information.

The following command provides sibling information:

cat /sys/devices/system/cpu/cpu'x'/topology/thread_siblings_list

For information about enabling vHT, see Enable Virtual Hyper-Threading.


ACC 100 Support for ESXi 8.0 Upgrade


If you configured the ACC 100 device through the TCA host profile in VMware Telco Cloud Automation 2.2 or an earlier release on ESXi 7.x or an earlier release, and you plan to upgrade ESXi to 8.0, you must upgrade VMware Telco Cloud Automation to version 2.3 for the ACC 100 device to function as expected.

Prerequisites

Upgrade the TCA appliance to 2.3 before upgrading the ESXi host to 8.0.

Procedure

1 Uninstall the ibbd-tools driver applicable for ESXi 7.x by using the following command:

esxcli software vib remove -n ibbd-tools

2 Upgrade the ESXi host to 8.0.

3 Install the ibbd-tools driver released by Intel for ESXi 8.0 from Intel® vRAN Baseband Driver and Tools for VMware ESXi.

4 Perform a full-resync on the cell site host configured with ACC100 using ZTP UI/API.

5 Ensure that the custom config for ACC100 is updated on the device by using the following
ESX command:

/opt/ibbdtools/bin/bbdevcli.py -d -t /devices/ifec/dev0

/devices/ifec/dev0 is the ACC100 device path.

PTP Notifications
VMware Telco Cloud Automation deployments have PTP time synchronization for both Radio
Unit (RU) and Distributed Unit (DU). When there is a loss of time synchronization, the DU
application disables transmission until the time synchronization is reacquired.

The following are the PTP notification events:

n Synchronization State

n PTP Synchronization State

n PTP Clock Class Change

The PTP notifications are managed by exposing the REST API to vDU applications to register for
PTP synchronization events. The PTP notification framework monitors the PTP status and delivers
PTP event notifications to the vDU application.


The following are the components required to manage the PTP notifications:

n Sidecar Container

n Updated DU specification to run the Sidecar.

n DU application communicates with the Sidecar using the localhost address and port, which are exposed to the DU application by the Kubernetes Downward API.

n PTP event notifications are sent to the DU application through REST APIs exposed by
Sidecar.

n DU application retrieves the current status of PTP, as required.

n O-Cloud API Daemonset

n Daemonset pod contains monitor and MessageQueue container.

n PTP status is monitored using a monitor container.

n Monitor Container pushes the notifications to the DU application through MessageQueue and Sidecar.

n Daemonset pod should be instantiated using CSAR

Note CSAR is provided by VMware.

Install O-Cloud DaemonSet


Daemonset pod contains Monitor Container and MessageQueue Container. These containers
are used to monitor the PTP notifications and push the notifications to the DU application,
respectively.

Procedure

1 Add the following label to the nodepools where PTP o-cloud daemonset pods are running:

n telco.vmware.com.node-restriction.kubernetes.io/ptp-notifications: true

2 Onboard the ptp-ocloud-notification-daemonset.csar to Telco Cloud Automation in the Network Function Catalog.

wget https://vmwaresaas.jfrog.io/artifactory/generic-registry/ptp-ocloud-
notifications-daemonset-1.0.0.csar

3 Instantiate the onboarded CSAR to the specific workload cluster.

n Namespace : tca-system.

n Helm Repository URL : https://vmwaresaas.jfrog.io/ui/native/helm-registry/.

n Use the values.yaml file to override the default values and specify the NodeSelector.


You must match the label added to the nodepools for the PTP o-cloud daemonSet.

container:
monitor:
image:
repository: vmwaresaas.jfrog.io/registry/ptp-ocloud-notifications-monitor
tag: 1.0.0
holdoverPeriod: 120
pollFrequency: 1
ptpSimulated: False

nodeSelector:
telco.vmware.com.node-restriction.kubernetes.io/ptp-notifications: true

Integrate Sidecar with DU Pod


You must integrate Sidecar with the DU Pod to push the PTP notifications to the DU application.

Procedure

1 Install the Sidecar image from vmwaresaas.jfrog.io/registry/ptp-ocloud-notifications-sidecar:1.0.0.

2 Modify the helm charts for the pod.

3 To run the sidecar container, specify the following in the values.yaml file of the DU pod helm charts:
charts:

sidecarContainers:
- name: sidecar
image: vmwaresaas.jfrog.io/registry/ptp-ocloud-notifications-sidecar:1.0.0
imagePullPolicy: Always
command: ["python3"]
args: ["run-ptpclientfunction.py"]
tty: true
env:
- name: THIS_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: TRANSPORT_USER
value: "admin"
- name: TRANSPORT_PASS
value: "admin"
- name: TRANSPORT_PORT
value: "5672"
volumeMounts:
- name: sidecardatastore
mountPath: /opt/datastore
readOnly: false

sidecarVolumes:


- name: sidecardatastore
hostPath:
path: /home/capv
type: Directory

4 In the pod spec yaml file, specify the following under containers:

{{- with .Values.sidecarContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}

Setup User/Group/Storage Policy in vCenter Server for vSphere CSI

Starting with the TCA 2.3 release, the vSphere CSI add-on supports using customized vCenter Server credentials for deployment and adding multiple storage classes at the same time. It also supports configuring a storage class with a selected storage policy. This reference briefly explains how to create the user, group, and storage policy in vCenter Server before you apply the vSphere CSI add-on.

Add User/Group in vCenter Server


Follow the steps below to add a new user and group and grant roles in vCenter Server:

1 In the vSphere Client, click Menu -> Administration on the menu bar.

2 Select Users and Groups under Single Sign On.

3 In the Users tab, select vsphere.local as the domain and then click ADD.

4 Enter the preferred username and password, and click ADD.

5 Select the Groups tab and click ADD.

6 Enter a group name in the Group Name field. Search for the username that you just added in the Add Members field to add the user to this group, and click ADD.

7 Select Roles under Access Control on the left panel of the vSphere Client.

8 Enter a name in the Role name field and select privileges from the privileges list below. The following privileges list is an example for reference; select the proper privileges for your specified role. Click CREATE after selecting the privileges.


Category Privileges

Datastore n Allocate space


n Browse datastore
n Low level file operations
n Remove file
n Update virtual machine files
n Update virtual machine metadata

Folder n Create folder


n Delete folder
n Move folder
n Rename folder

Global n Cancel task


n Capacity planning
n Global tag
n Health
n Log event
n Manage custom attributes
n Proxy
n System tag

vSphere Tagging n Assign or Unassign vSphere Tag


n Assign or Unassign vSphere Tag on Object
n Create vSphere Tag
n Create vSphere Tag Category
n Delete vSphere Tag
n Delete vSphere Tag Category
n Edit vSphere Tag
n Edit vSphere Tag Category
n Modify UsedBy Field For Category
n Modify UsedBy Field For Tag

Namespaces n Allows disk decommission operations

Performance n Modify intervals

Scheduled task n Create tasks


n Modify task
n Remove task
n Run task

Datastore cluster n Configure a datastore cluster

Tasks n Create task


n Update task


Category Privileges

Tenant management n Tenant provisioning operations


n Tenant query operations

Virtual machine Provisioning:
n Allow disk access
n Allow file access
n Allow read-only disk access

9 Select Global Permissions under Access Control and click ADD.

10 Search for the created user in the User/Group field and select the created role from the role list in the Role field. Click OK.

Add new Storage Policy in vCenter Server


You must create a new storage policy for the specified datastores in vCenter Server before assigning the storage policy during vSphere CSI storage class creation.

To create storage policies for vSAN or local VMFS datastores, follow the steps in the Set Up CNS and Create a Storage Policy (vSphere) section of the VMware Tanzu Kubernetes Grid documentation.
