Junos Node Slicing PDF
Modified: 2018-04-06
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
https://www.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of that
EULA.
Part 1 Overview
Chapter 1 Junos Node Slicing Overview 3
    Junos Node Slicing Overview 3
        Benefits of Junos Node Slicing 4
    Components of Junos Node Slicing 4
        Base System (BSYS) 5
        Guest Network Function (GNF) 5
        Juniper Device Manager (JDM) 6
    Junos Node Slicing Administrator Roles 6
    Mastership Behavior of BSYS and GNF 7
        BSYS Mastership 7
        GNF Mastership 7
    Abstracted Fabric (AF) Interface 8
        Understanding AF Bandwidth 8
        Features Supported on AF Interfaces 9
        AF Restrictions 10
    Multi-Version Software Interoperability Overview 10
    Licensing for Junos Node Slicing 11
Part 2 Setup
Chapter 2 Hardware and Software Requirements for Junos Node Slicing 15
    Minimum Hardware and Software Requirements for Junos Node Slicing 15
        MX Series Router 15
        x86 Servers 15
            Server Hardware Resource Requirements (Per GNF) 16
            Shared Server Hardware Resource Requirements 16
            Software Requirements 17
Chapter 3 Preparing for Junos Node Slicing Setup 19

List of Figures
Figure 4: MX2020 Router—External x86 Server Connectivity 20

List of Tables
Table 3: GNF Resource Template 16
Table 4: Shared Server Resources Requirements 17
Table 5: Supported XML RPCs to Manage GNFs (Chapter 5, Setting Up YANG-Based Orchestration of GNFs) 52
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at https://www.juniper.net/books.
Supported Platforms
For the features described in this document, the following platforms are supported:
• MX480
• MX960
• MX2010
• MX2020
If you want to use the examples in this manual, you can use the load merge or the load
merge relative command. These commands cause the software to merge the incoming
configuration into the current candidate configuration. The example does not become
active until you commit the candidate configuration.
If the example configuration contains the top level of the hierarchy (or multiple
hierarchies), the example is a full example. In this case, use the load merge command.
If the example configuration does not start at the top level of the hierarchy, the example
is a snippet. In this case, use the load merge relative command. These procedures are
described in the following sections.
1. From the HTML or PDF version of the manual, copy a configuration example into a
text file, save the file with a name, and copy the file to a directory on your routing
platform.
For example, copy the following configuration to a file and name the file ex-script.conf.
Copy the ex-script.conf file to the /var/tmp directory on your routing platform.
system {
    scripts {
        commit {
            file ex-script.xsl;
        }
    }
}
interfaces {
    fxp0 {
        disable;
        unit 0 {
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
2. Merge the contents of the file into your routing platform configuration by issuing the
load merge configuration mode command:
[edit]
user@host# load merge /var/tmp/ex-script.conf
load complete
Merging a Snippet
To merge a snippet, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration snippet into a text
file, save the file with a name, and copy the file to a directory on your routing platform.
For example, copy the following snippet to a file and name the file
ex-script-snippet.conf. Copy the ex-script-snippet.conf file to the /var/tmp directory
on your routing platform.
commit {
    file ex-script-snippet.xsl;
}
2. Move to the hierarchy level that is relevant for this snippet by issuing the following
configuration mode command:
[edit]
user@host# edit system scripts
[edit system scripts]
3. Merge the contents of the file into your routing platform configuration by issuing the
load merge relative configuration mode command:
[edit system scripts]
user@host# load merge relative /var/tmp/ex-script-snippet.conf
load complete
For more information about the load command, see CLI Explorer.
Documentation Conventions
Caution Indicates a situation that might result in loss of data or hardware damage.
Laser warning Alerts you to the risk of personal injury from a laser.
Table 2 on page xiv defines the text and syntax conventions used in this guide.

Bold text like this: Represents text that you type. For example, to enter configuration mode, type the configure command:
user@host> configure

Fixed-width text like this: Represents output that appears on the terminal screen. For example:
user@host> show chassis alarms
No alarms currently active

Italic text like this: Introduces or emphasizes important new terms; identifies guide names. For example: A policy term is a named structure that defines match conditions and actions.

Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. For example, configure the machine's domain name:
[edit]
root@# set system domain-name domain-name

Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. For example: To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level. The console port is labeled CONSOLE.

< > (angle brackets): Encloses optional keywords or variables. For example: stub <default-metric metric>;

# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. For example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets): Encloses a variable for which you can substitute one or more values. For example: community name members [ community-ids ]

GUI Conventions

Bold text like this: Represents graphical user interface (GUI) items you click or select. For example: In the Logical Interfaces box, select All Interfaces. To cancel the configuration, click Cancel.

> (bold right angle bracket): Separates levels in a hierarchy of menu selections. For example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site
at https://www.juniper.net/documentation/index.html, simply click the stars to rate the
content, and use the pop-up form to provide us with information about your experience.
Alternately, you can use the online feedback form at
https://www.juniper.net/documentation/feedback/.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
Overview
• Junos Node Slicing Overview on page 3
Junos Node Slicing enables service providers and large enterprises to create a network
infrastructure that consolidates multiple routing functions into a single physical device.
It helps leverage the benefits of virtualization without compromising on performance. In
particular, Junos Node Slicing enables the convergence of multiple services on a single
physical infrastructure while avoiding the operational complexity involved. It provides
operational, functional, and administrative separation of functions on a single physical
infrastructure that enables the network to implement the same virtualization principles
the compute industry has been using for years.
Using Junos Node Slicing, you can create multiple partitions in a single physical MX Series
router. These partitions are referred to as guest network functions (GNFs). Each GNF
behaves as an independent router, with its own dedicated control plane, data plane, and
management plane. This enables you to run multiple services on a single converged MX
Series router, while still maintaining operational isolation between them. You can leverage
the same physical device to create parallel partitions that do not share the control plane
or the forwarding plane, but only share the same chassis, space, and power.
You can also send traffic between GNFs through the switch fabric by using an Abstracted
Fabric (AF) interface, a pseudo interface that behaves as a first-class Ethernet interface.
An AF interface facilitates routing control, data, and management traffic between GNFs.
• Reduced time-to-market for new services and capabilities—Each GNF can operate
on a different Junos software version. This advantage enables companies to evolve
each GNF at its own pace. If a new service or a feature needs to be deployed on a
certain GNF, and it requires a new software release, only the GNF involved requires an
update. Additionally, with the increased agility, Junos Node Slicing enables service
providers and enterprises to introduce a highly flexible Everything-as-a-Service business
model and respond rapidly to ever-changing market conditions.
Junos Node Slicing allows a single MX Series router to be partitioned to appear as multiple,
independent routers. Each partition has its own Junos OS control plane, which runs as a
virtual machine (VM), and a dedicated set of line cards. Each partition is called a guest
network function (GNF).
The MX Series router functions as the base system (BSYS). The BSYS owns all the
physical components of the router, including the line cards and the switching fabric. The
BSYS assigns line cards to GNFs.
The Juniper Device Manager (JDM) software orchestrates the GNF VMs. In JDM, a GNF
VM is referred to as a virtual network function (VNF). A GNF thus comprises a VNF and
a set of line cards.
JDM and VNFs are hosted on a pair of external industry standard x86 servers.
Through configuration at the BSYS, you can assign line cards of the chassis to different
GNFs. Figure 1 on page 5 shows three GNFs with their dedicated line cards running on
an external server.
See “Setting Up the Connection Between Servers and the Router” on page 19 for
information about how to connect an MX Series router to a pair of external x86 servers.
Creating a GNF requires two sets of configurations, one to be performed at the BSYS,
and the other at the JDM.
A GNF is defined by an ID. This ID must be the same at the BSYS and JDM.
The BSYS part of the GNF configuration comprises giving it an ID and a set of line cards.
The JDM part of the GNF configuration comprises specifying the following attributes:
• A VNF name.
• A GNF ID. This ID must be the same as the GNF ID used at the BSYS.
The server resource template defines the number of dedicated CPU cores and the size
of DRAM to be assigned to a GNF. For a list of predefined server resource templates
available for GNFs, see the Server Hardware Resource Requirements (Per GNF) section
in “Minimum Hardware and Software Requirements for Junos Node Slicing” on page 15.
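As a sketch of how the two halves of a GNF definition fit together (the GNF ID, VNF name, FPC slot, and resource template shown here are hypothetical, and the exact configuration hierarchies should be verified against your Junos OS and JDM releases):

```
# At the BSYS (Junos OS CLI): assign an ID and a set of line cards.
set chassis network-slices guest-network-functions gnf 1 fpcs 0

# At the JDM CLI: define the VNF with the matching GNF ID.
set virtual-network-functions gnf-a id 1
set virtual-network-functions gnf-a resource-template 2core-16g
```

The ID (1 in this sketch) appears in both halves; if the BSYS and JDM sides disagree, the GNF definition is incomplete.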
After a GNF is configured, you can access it by connecting to the virtual console port of
the GNF. Using the Junos OS CLI at the GNF, you can then configure the GNF system
properties such as hostname and management IP address, and subsequently access it
through its management port.
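For example, a first-time setup over the GNF virtual console might look like the following sketch (the hostname and address are placeholders, and fxp0 as the GNF management interface is an assumption to verify for your platform):

```
set system host-name gnf-a
set interfaces fxp0 unit 0 family inet address 192.0.2.10/24
commit
```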
JDM provides a Junos OS-like CLI and NETCONF interface for configuration and
management.
A JDM instance is hosted on each of the x86 servers. The JDM instances are typically
configured as peers that synchronize the GNF configurations: when a GNF VM is created
on one server, the backup GNF VM is automatically created on the other server.
The following administrator roles enable you to carry out the node slicing tasks:
• JDM administrator—Responsible for the JDM server port configuration, and for the
provisioning and life-cycle management of the GNF VMs (VNFs). JDM CLI commands
are available for these tasks.
The following sections address the mastership behavior of BSYS and GNF in the context
of Routing Engine redundancy.
Figure 2 on page 7 shows the mastership behavior of GNF and BSYS with Routing Engine
redundancy.
BSYS Mastership
The BSYS Routing Engine mastership arbitration behavior is identical to that of an MX
Series router.
GNF Mastership
The GNF VM mastership arbitration behavior is similar to that of MX Series Routing
Engines. Each GNF runs as a master-backup pair of VMs. A GNF VM that runs on server0
is equivalent to Routing Engine slot 0 of an MX Series router, and the GNF VM that runs
on server1 is equivalent to Routing Engine slot 1 of an MX Series router.
The GNF mastership is independent of the BSYS mastership and that of other GNFs. The
GNF mastership arbitration is done through Junos OS. Under connectivity failure conditions,
GNF mastership is handled conservatively.
The abstracted fabric (AF) interface is a pseudo interface that provides first-class Ethernet
interface behavior. An AF interface facilitates routing control and management traffic
between guest network functions (GNFs) through the switch fabric. An AF interface is
created on a GNF to communicate with its peer GNF when the two GNFs are configured
to be connected to each other. AF interfaces must be created at the BSYS. The bandwidth
of an AF interface changes dynamically based on the insertion or reachability of the
remote line card (MPC). Because the fabric is the communication medium between GNFs,
AF interfaces are considered the equivalent of WAN interfaces. See Figure 3 on page 8.
[Figure 3: An AF interface connecting GNF1 and GNF2 through the fabric; each GNF has two Packet Forwarding Engines (PFE 0 and PFE 1).]
Understanding AF Bandwidth
An AF interface connects two GNFs through the fabric and aggregates all the Packet
Forwarding Engines (PFEs) that connect the two GNFs. An AF interface can leverage the
sum of the bandwidth of each Packet Forwarding Engine belonging to the AF interface.
For example, if GNF1 has one MPC8 (which has four Packet Forwarding Engines with
240 Gbps capacity each), and GNF1 is connected with GNF2 and GNF3 using AF interfaces
(af1 and af2), the maximum AF capacity of GNF1 would be 4x240 Gbps = 960 Gbps.
GNF1—af1——GNF2
GNF1—af2——GNF3
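The capacity arithmetic in this example can be checked with a quick shell calculation (the PFE count and per-PFE bandwidth are the figures quoted above for MPC8):

```shell
# Example figures from the text: one MPC8 with four PFEs at 240 Gbps each.
pfe_count=4
pfe_gbps=240
af_capacity=$((pfe_count * pfe_gbps))
echo "Maximum AF capacity of GNF1: ${af_capacity} Gbps"
```

Note that this maximum is shared across af1 and af2, since both AF interfaces draw on the same set of Packet Forwarding Engines.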
NOTE: The non-AF interfaces support all the protocols that work on Junos
OS.
The following features are supported on AF interfaces:
• Multicast forwarding
• MPLS applications where the AF interface acts as a core interface (L3VPN, VPLS,
L2VPN, L2CKT, EVPN, and IP over MPLS)
• IPv4 Forwarding
• IPv6 Forwarding
• MPLS
• ISO
• CCC
• GNFs with the AF interface configuration support the following AF-capable MPCs:
• MPC7E-MRATE
• MPC7E-10G
• MX2K-MPC8E
• MX2K-MPC9E
• MPC2E NG
• MPC2E NG Q
• MPC3E NG
• MPC3E NG Q
NOTE:
• A GNF that does not have the AF interface configuration supports all the
MPCs that are supported by a standalone MX Series router. For the list of
supported MPCs, see MPCs Supported by MX Series Routers.
• We recommend that you set the MTU settings on the AF interface to align
to the maximum allowed value on the XE/GE interfaces. This ensures
minimal or no fragmentation of packets over the AF interface.
AF Restrictions
The following are the current restrictions of AF interfaces:
• Minimal traffic drops (both transit and host) can occur when an MPC hosted on a
remote GNF is taken offline or restarted.
Starting from Junos OS Release 17.4R1, Junos Node Slicing supports multi-version software
compatibility, enabling the BSYS to interoperate with a guest network function (GNF)
which runs a Junos OS version that is higher than the software version of the BSYS. This
feature supports a range of up to two versions between GNF and BSYS. That is, the GNF
software can be two versions higher than the BSYS software. Both BSYS and GNF must
meet a minimum version requirement of Junos OS Release 17.4R1.
While JDM software versioning does not have a similar restriction with respect to the GNF
or BSYS software versions, we recommend that you regularly update the JDM software.
A JDM upgrade does not affect any of the running GNFs.
Please contact Juniper Networks if you have queries pertaining to Junos Node Slicing
licenses.
Setup
• Hardware and Software Requirements for Junos Node Slicing on page 15
• Preparing for Junos Node Slicing Setup on page 19
• Setting Up Junos Node Slicing on page 27
• Setting Up YANG-Based Orchestration of GNFs on page 49
• Managing Junos Node Slicing on page 55
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 15
To set up Junos Node Slicing, you need an MX Series router and a pair of industry standard
x86 servers. The x86 servers host the Juniper Device Manager (JDM) along with the GNF
VMs.
MX Series Router
The following routers support Junos Node Slicing:
• MX2010
• MX2020
• MX480
• MX960
NOTE: For the MX960 and MX480 routers, the Control Boards must be
SCBE2; and the Routing Engines must be interoperable with SCBE2
(RE-S-1800, RE-S-X6-64G).
x86 Servers
Ensure that both servers have similar (preferably identical) hardware configurations.
The server hardware requirements are a function of how many GNFs you plan to use:
they are the sum of the requirements of the individual GNFs plus the shared resource
requirements.
x86 CPU:
BIOS:
Storage:
• /vm-primary, which must have a minimum available storage space of 350 GB.
Each GNF must be associated with a resource template, which defines the number of
dedicated CPU cores and the size of DRAM to be assigned for that GNF.
Table 3: GNF Resource Template

Resource Template    CPU Cores    DRAM (GB)
2core-16g            2            16
4core-32g            4            32
6core-48g            6            48
8core-64g            8            64
Table 4 on page 17 lists the server hardware resources that are shared between all the
guest network functions (GNFs) on a server:
CPU:
• Four cores to be allocated for JDM and Linux host processing.
Network Ports:
• Two 10-Gbps Ethernet interfaces for control plane connection between the server and the router.
  • Minimum—1 PCIe NIC card with Intel X710 dual port 10-Gbps Direct Attach, SFP+, Converged Network Adapter, PCIe 3.0, x8
  • Recommended—2 NIC cards of the above type. Use one port from each card to provide redundancy at the card level.
• One Ethernet interface (1/10 Gbps) for Linux host management network.
• One Ethernet interface (1/10 Gbps) for JDM management network.
• One Ethernet interface (1/10 Gbps) for GNF management network. (This port is shared by all the GNFs on that server.)
• Serial port or an equivalent interface (iDRAC, IPMI) for server console access.
Software Requirements
To enable virtualization for RHEL, choose "Virtualization Host" for the Base Environment
and "Virtualization Platform" as an Add-On from the Software Selection screen during
installation.
NOTE:
• The hypervisor supported is KVM.
• Additional packages—Additional packages are required for Intel X710 NIC Driver and
JDM. For more information, see the “Intel X710 NIC Driver for x86 Servers” on page 23
and “Installing Additional Packages” on page 24 sections.
• If you are using Intel X710 NIC, ensure that you have the latest driver (2.0.23 or later)
installed. For more details, see “Intel X710 NIC Driver for x86 Servers” on page 23.
The servers must also have the BIOS setup as described in “x86 Server CPU BIOS Settings”
on page 21 and the Linux GRUB configuration as described in “x86 Server Linux GRUB
Configuration” on page 22.
NOTE:
• The x86 servers require internet connectivity for you to be able to perform
host OS updates and install the additional packages.
• Ensure that you have the same host OS software version on both the
servers.
NOTE: The following software packages are required to set up Junos Node
Slicing:
• JDM package
To set up Junos Node Slicing, you must directly connect a pair of external x86 servers to
the MX Series router. Besides the management port for the Linux host, each server also
requires two additional ports for providing management connectivity for the JDM and
the GNF VMs, respectively, and two ports for connecting to the MX Series router.
Figure 4 on page 20 shows how an MX2020 router is connected to a pair of x86 external
servers.
[Figure 4: MX2020 Router—External x86 Server Connectivity. Two x86 servers (Server 0 and Server 1), each running the Linux host, JDM, and GNFs 1–n, connect to the MX2020: ports p3p1 and p3p2 on each server attach to the router's Control Boards, while separate ports carry iDRAC, Linux host management, JDM management, and GNF management traffic, and serial ports provide console access. RE0 and RE1 on the router also provide console connections. A remote user reaches the management interfaces over the management network.]
According to the example in Figure 4 on page 20, em1, em2, and em3 on the x86 servers
are the ports that are used for the management of the Linux host, the JDM and the GNFs,
respectively. p3p1 and p3p2 on each server are the two 10-Gbps ports that are connected
to the Control Boards of the MX Series router.
NOTE:
• The names of interfaces on the server, such as em1 and p3p1, might vary
according to the server hardware configuration.
For more information on the XGE ports of the MX Series router Control Board (CB)
mentioned in Figure 4 on page 20, see:
NOTE: Use the show chassis ethernet-switch command to view these XGE
ports. In the command output on MX960, refer to the port numbers 24 and
26 to view these ports on the SCBE2. In the command output on MX2010 and
MX2020, refer to the port numbers 26 and 27 to view these ports on the
Control Board-Routing Engine (CB-RE).
For Junos Node Slicing, the BIOS of the x86 server CPUs should be set up such that:
• Hyperthreading is disabled.
• The CPU cores are set to reduce jitter by limiting C-state use.
To find the rated frequency of the CPU cores on the server, run the Linux host command
lscpu, and check the value for the field Model name. See the following example:
..
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
..
To find the frequency at which the CPU cores are currently running, run the Linux host
command grep MHz /proc/cpuinfo and check the value for each CPU core.
On a server that has the BIOS set to operate the CPU cores at their rated frequency, the
observed values for the CPU cores will all match the rated frequency (or be very close
to it), as shown in the following example.
On a server that does not have the BIOS set to operate the CPU cores at their rated
frequency, the observed values for the CPU cores do not match the rated frequency, and
the values could also vary with time (you can check this by rerunning the command).
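As an illustrative sketch (using the sample "Model name" line shown earlier, not the elided command outputs), the rated frequency can be parsed out of lscpu output like this:

```shell
# Sample lscpu "Model name" line from the example earlier in this section.
model_line='Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz'
# Extract the GHz figure that follows the "@" sign.
rated_ghz=$(printf '%s\n' "$model_line" | sed -n 's/.*@ *\([0-9.]*\)GHz/\1/p')
echo "Rated frequency: ${rated_ghz} GHz"
```

On a live server you would feed the real lscpu output into the same extraction and compare the result against the per-core values from /proc/cpuinfo.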
To set the x86 server BIOS system profile to operate the CPU cores at their rated
frequency, reduce jitter, and disable hyperthreading, consult the server manufacturer,
because these settings vary with server model and BIOS versions.
Related Documentation
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 15
• x86 Server Linux GRUB Configuration on page 22
In Junos Node Slicing, each GNF VM is assigned dedicated CPU cores. This assignment
is managed by Juniper Device Manager (JDM). On each x86 server, JDM requires that all
CPU cores other than CPU cores 0 and 1 be reserved for Junos Node Slicing – and in
effect, that these cores be isolated from other applications. CPU cores 2 and 3 are
dedicated for GNF virtual disk and network I/O. CPU cores 4 and above are available for
assignment to GNF VMs. To reserve these CPU cores, you must set the isolcpus parameter
in the Linux GRUB configuration as described in the following procedure:
For x86 servers running Red Hat Enterprise Linux (RHEL) 7.3, perform the following steps:
1. Determine the number of CPU cores on the x86 server. Ensure that hyperthreading
has already been disabled, as described in “x86 Server CPU BIOS Settings” on page 21.
You can use the Linux command lscpu to find the total number of CPU cores, as shown
in the following example:
Here, there are 24 cores (12 x 2). The CPU cores are numbered as core 0 to core 23.
2. As per this example, the isolcpus parameter must be set to 'isolcpus=2-23' (isolate
all CPU cores other than cores 0 and 1 for use by JDM and the GNF VMs).
To set the isolcpus parameter in the Linux GRUB configuration file, follow the procedure
described in the section Isolating CPUs from the process scheduler in this Red Hat
document. A summary of the section is as follows:
a. Edit the Linux GRUB file /etc/default/grub to append the isolcpus parameter to
the variable GRUB_CMDLINE_LINUX, as shown in the following example:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet isolcpus=2-23"
b. Run the Linux shell command grub2-mkconfig to regenerate the GRUB configuration
file. Use the command that matches your boot mode (the first for BIOS boot, the
second for UEFI boot):
# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
c. Reboot the server so that the updated GRUB configuration takes effect.
d. Verify that the isolcpus parameter has now been set, by checking the output of the
Linux command cat /proc/cmdline, as shown in the following example:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-327.36.3.el7.x86_64 … quiet isolcpus=2-23
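The mapping from the core count found in step 1 to the isolcpus range used in step 2 can be sketched in shell (the 24-core count is the example value from above):

```shell
# Derive the isolcpus range from the total core count reported by lscpu.
# Assumes hyperthreading is disabled, so the count equals physical cores.
total_cores=24          # example value; on a live server use: total_cores=$(nproc)
isolcpus_arg="isolcpus=2-$((total_cores - 1))"
echo "${isolcpus_arg}"
```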
Related Documentation
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 15
• x86 Server CPU BIOS Settings on page 21
If you are using Intel X710 NIC, ensure that you have the latest driver (2.0.23 or later)
installed on the x86 servers.
You need to first identify the X710 NIC interface on the servers. For example, this could
be p3p1.
You can check the NIC driver version by running the Linux command ethtool -i interface.
See the following example:
driver: i40e
version: 2.0.23
firmware-version: 5.05 0x80002899 17.5.12
...
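A quick way to pull just the version field from that output (a sketch over the sample text above; on a live server you would capture the output with ethtool -i p3p1 or your actual interface name):

```shell
# Sample `ethtool -i` output, as shown above.
ethtool_out='driver: i40e
version: 2.0.23
firmware-version: 5.05 0x80002899 17.5.12'
# Print the second field of the line that starts with "version:".
driver_version=$(printf '%s\n' "$ethtool_out" | awk '/^version:/ {print $2}')
echo "i40e driver version: ${driver_version}"
```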
Refer to the Intel support page for instructions on updating the driver.
NOTE: Ensure that the host OS is up to date prior to updating the Intel X710
NIC driver.
• kernel-devel
• Development Tools
If you are using Red Hat, run the following commands to install these packages:
NOTE: After updating the Intel X710 NIC driver, you might notice the following
message in the host OS log:
"i40e: module verification failed: signature and/or required key missing - tainting
kernel"
Ignore this message. It appears because the updated NIC driver module has
superseded the base version of the driver that was packaged with the host
OS.
Related Documentation
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 15
The x86 servers must have Red Hat Enterprise Linux (RHEL) 7.3 or Ubuntu 16.04 LTS
installed.
NOTE: The x86 Servers must have the virtualization packages installed.
For RHEL 7.3, install the following additional packages, which can be downloaded from
the Red Hat Customer Portal.
• libstdc++-4.8.5-11.el7.i686.rpm
• python-psutil-1.2.1-1.el7.x86_64.rpm
• net-snmp-5.7.2-24.el7.x86_64.rpm
• net-snmp-libs-5.7.2-24.el7.x86_64.rpm
• libvirt-snmp-0.0.3-5.el7.x86_64.rpm
NOTE:
• The package version numbers shown are the minimum versions. Newer
versions might be available in the latest RHEL 7.3 patches.
• The packages with the extension .i686 are 32-bit packages. Make sure that
you install the 32-bit versions of these packages.
• For RHEL, we recommend that you install the packages using the yum
command.
For Ubuntu 16.04, install the following additional package:
• python-psutil
NOTE:
• For Ubuntu, you can use the apt-get command to install the latest version
of these packages. For example, use:
Complete the following steps before you start installing the JDM:
• Ensure that the MX Series router is connected to the x86 servers as described in “Setting
Up the Connection Between Servers and the Router” on page 19.
• Power on the two x86 servers and both the Routing Engines on the MX Series router.
• Identify the Linux host management port on both the x86 servers. For example, em1.
• Identify the ports to be assigned for the JDM and the GNF management ports. For
example, em2 and em3.
• Identify the two 10-Gbps ports that are connected to the Control Boards on the MX
Series router. For example, p3p1 and p3p2.
Before proceeding to perform the Junos Node Slicing setup tasks, you must have
completed the procedures described in the chapter ’Preparing for Junos Node Slicing
Setup’.
Related Documentation
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 15
NOTE: Ensure that the MX Series router is connected to the x86 servers as
described in “Setting Up the Connection Between Servers and the Router”
on page 19.
Junos Node Slicing requires the MX Series router to function as the base system (BSYS).
Use the following steps to configure an MX Series router to operate in BSYS mode:
1. Install the Junos OS package for BSYS on both the Routing Engines of the MX Series
router.
b. Click Base System > Junos OS version number > Junos version number (64-bit
High-End).
c. On the Software Download page, select the I Agree option under End User License
Agreement and then click Proceed.
2. On the MX Series router, run the show chassis hardware command and verify that the
transceivers on both the Control Boards (CBs) are detected. The following text
represents a sample output:
…
CB 0 REV 23 750-040257 CABL4989 Control Board
Xcvr 0 REV 01 740-031980 ANT00F9 SFP+-10G-SR
Xcvr 1 REV 01 740-031980 APG0SC3 SFP+-10G-SR
CB 1 REV 24 750-040257 CABX8889 Control Board
Xcvr 0 REV 01 740-031980 AP41BKS SFP+-10G-SR
Xcvr 1 REV 01 740-031980 ALN0PCM SFP+-10G-SR
NOTE: A router in BSYS mode is not expected to run any features beyond
those required for the basic management functionality in Junos Node
Slicing. For example, the BSYS is not expected to have interface configurations
associated with the line cards installed in the system. Instead, guest network
functions (GNFs) have the full-fledged router configurations.
Before installing the JDM RPM package for x86 servers, ensure that you have installed
the additional packages, as described in “Installing Additional Packages” on page 24.
Download and install the JDM RPM package for x86 servers running RHEL as follows:
b. Click JDM > Junos OS version number > Juniper Device Manager version number (for
Redhat).
c. On the Software Download page, select the I Agree option under the End User License
Agreement and then click Proceed.
To install the package on x86 servers running RHEL, perform the following steps on each
of the servers:
1. Disable SELINUX and reboot the server. You can disable SELINUX by setting the value
for SELINUX to disabled in the /etc/selinux/config file.
2. Install the JDM RPM package (indicated by the .rpm extension) by using the following
command. An example of the JDM RPM package used is shown below:
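A hedged illustration of the install step (the package filename below is hypothetical; substitute the name of the JDM RPM package you downloaded):

[root@server ~]# rpm -ivh jns-jdm-1.0-0.x86_64.rpm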
Before installing the JDM Ubuntu package for x86 servers, ensure that you have installed
the additional packages. For more details, see “Installing Additional Packages” on page 24.
Download and install the JDM Ubuntu package for x86 servers running Ubuntu 16.04 as
follows:
b. Click JDM > Junos OS version number > Juniper Device Manager version number (for
Debian).
c. On the Software Download page, select the I Agree option under the End User License
Agreement and then click Proceed.
To install the JDM package on the x86 servers running Ubuntu 16.04, perform the following
steps on each of the servers:
2. Install the JDM Ubuntu package (indicated by the .deb extension) by using the following
command. An example of the JDM Ubuntu package used is shown below:
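A hedged illustration (the package filename below is hypothetical; substitute the name of the JDM Ubuntu package you downloaded):

root@server:~# dpkg -i jns-jdm_1.0-0_amd64.deb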
Use the following steps to configure JDM on each of the x86 servers.
1. At each server, start the JDM, and assign identities for the two servers as server0 and
server1, respectively, as follows:
Starting JDM
Starting JDM
2. Enter the JDM console on each server by running the following command:
root@jdm% cli
New Password:
NOTE:
• The JDM supports root user administration account only.
• The JDM root password must be the same on both the servers.
• The JDM root password overrides the Linux host root password.
root@jdm# commit
8. From the Linux host, run the ssh jdm command to log in to the JDM shell.
Therefore, you need to identify the following on each server before starting the
configuration of the ports:
• The server interfaces (for example, p3p1 and p3p2) that are connected to CB0 and CB1
on the MX Series router.
• The server interfaces (for example, em2 and em3) to be used for JDM management
and GNF management.
• The two 10-Gbps server ports that are connected to the MX Series router.
NOTE:
• You need this information for both server0 and server1.
To configure the x86 server interfaces in JDM, perform the following steps on both the
servers:
NOTE: Ensure that you apply the same configuration on both server0 and
server1.
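Mirroring the sample JDM configuration provided later in this document (which maps cb0 and cb1 to p3p1 and p3p2, and the JDM and VNF management ports to em2 and em3), the configuration would be:

[edit]
root@jdm# set server interfaces cb0 p3p1
root@jdm# set server interfaces cb1 p3p2
root@jdm# set server interfaces jdm-management em2
root@jdm# set server interfaces vnf-management em3
root@jdm# commit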
At both server0 and server1, run the following JDM CLI command:
For example, to log in to the peer server from server0, exit the JDM CLI,
and use the following command from JDM shell:
Similarly, to log in to the peer server from server1, use the following
command:
4. Apply the configuration statements in the JDM CLI configuration mode to set the JDM
management IP address, default route, and the JDM hostname for each JDM instance
as shown in the following example.
root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server0 system host-name test-jdm-server0
root@jdm# set groups server1 system host-name test-jdm-server1
root@jdm# commit
NOTE:
• jmgmt0 stands for the JDM management port. This is different from the
Linux host management port. Both JDM and the Linux host management
ports are independently accessible from the management network.
5. Run the following JDM CLI command on each server and ensure that all the interfaces
are up.
CB0 cb0 up
CB1 cb1 up
JDM mgmt port jmgmt0 up
JDM to HOST port bme1 up
JDM to GNF port bme2 up
JDM to JDM link0* cb0.4002 up
JDM to JDM link1 cb1.4002 up
NOTE: For sample JDM configurations, see “Sample Configuration for Junos
Node Slicing” on page 42.
Configuring a guest network function (GNF) comprises two tasks, one to be performed
at the BSYS and the other at the JDM.
NOTE: You need to assign an ID to each GNF. This ID must be the same at
the BSYS and the JDM.
At the BSYS, specify a GNF by assigning it an ID and a set of line cards by applying the
configuration as shown in the following example:
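Based on the gnf configuration hierarchy documented later in this chapter, the statements would resemble the following (the GNF ID and FPC slot are illustrative):

[edit]
user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 4
user@router# set chassis network-slices guest-network-functions gnf 1 description "gnf 1"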
user@router# commit
In the JDM, the GNF VMs are referred to as virtual network functions (VNFs). A VNF has
the following attributes:
• A VNF name.
• A GNF ID. This ID must be the same as the GNF ID used at the BSYS.
1. Retrieve the Junos OS image for GNFs and place it in the host OS directory
/var/jdm-usr/gnf-images/ on both the servers.
b. Click GNF > Junos OS version number > Junos version number (Guest Network
Function) .
c. On the Software Download page, select the I Agree option under the End User
License Agreement and then click Proceed.
2. Assign this image to a GNF by using the JDM CLI command as shown in the following
example:
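Based on the add-image command referenced later in this document, this step would look similar to the following (the image filename and the all-servers option are illustrative):

root@jdm> request virtual-network-functions test-gnf add-image /var/jdm-usr/gnf-images/test-gnf.img all-servers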
Server1:
Added image: /vm-primary/test-gnf/test-gnf.img
3. Configure the VNF by applying the configuration statements as shown in the following
example:
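Drawing on the sample VNF configuration shown later in this document, the statements would be similar to:

[edit]
root@jdm# set virtual-network-functions test-gnf id 1
root@jdm# set virtual-network-functions test-gnf chassis-type mx2020
root@jdm# set virtual-network-functions test-gnf resource-template 2core-16g
root@jdm# commit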
To also specify a baseline or initial Junos OS configuration for a GNF, prepare the GNF
configuration file (example: /var/jdm-usr/gnf-config/test-gnf.conf) on both the servers
and specify the filename as the parameter in the base-config statement as shown
below:
• You use the same GNF ID as the one specified earlier in BSYS.
• The baseline configuration filename (with the path) is the same on both
the servers.
• The GNF name used here is the same as the one assigned to the Junos
OS image for the GNF in step 2.
4. To verify that the VNF is created, run the following JDM CLI command:
5. Log in to the console of the VNF by issuing the following JDM CLI command:
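A sketch of steps 4 and 5, assuming the JDM follows the show/request command pattern used elsewhere in this chapter (the console keyword is an assumption):

root@jdm> show virtual-network-functions
root@jdm> request virtual-network-functions test-gnf console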
6. Configure the VNF the same way as you configure an MX Series Routing Engine.
NOTE:
• For sample configurations, see “Sample Configuration for Junos Node
Slicing” on page 42.
• If you had previously brought down any physical x86 CB interfaces or the
GNF management interface from Linux shell (by using the command ifconfig
interface-name down), these will automatically be brought up when the
GNF is started.
In Junos Node Slicing, the BSYS owns all the physical components of the router, including
the line cards and fabric, while the GNFs maintain forwarding state on their respective
line cards. In keeping with this split responsibility, Junos CLI configuration under the chassis
hierarchy (if any), should be applied at the BSYS or at the GNF as follows:
• As exceptions, the following two parameters under the chassis configuration hierarchy
should be applied at both BSYS and GNF:
The Juniper Device Manager (JDM) supports the following SNMP traps:
Standard linkUp/linkDown SNMP traps are generated. A default community string jdm
is used.
Standard linkUp/linkDown SNMP traps are generated. A default community string host
is used.
JDM to JDM connectivity loss/regain traps are sent using generic syslog traps
(jnxSyslogTrap) through the host management interface.
The JDM connectivity down trap JDM_JDM_LINK_DOWN is sent when the JDM is not
able to communicate with the peer JDM on another server over cb0 or cb1 links. See
the following example:
.1.3.6.1.4.1.2636.3.35.1.1.1.2.1="JDM_JDM_LINK_DOWN"
.1.3.6.1.4.1.2636.3.35.1.1.1.3.1=""
.1.3.6.1.4.1.2636.3.35.1.1.1.4.1=5
.1.3.6.1.4.1.2636.3.35.1.1.1.5.1=24
.1.3.6.1.4.1.2636.3.35.1.1.1.6.1=0
.1.3.6.1.4.1.2636.3.35.1.1.1.7.1="jdmmon"
.1.3.6.1.4.1.2636.3.35.1.1.1.8.1="JDM-HOST"
.1.3.6.1.4.1.2636.3.35.1.1.1.9.1="JDM to JDM Connection Lost"
.1.3.6.1.6.3.1.1.4.3.0.0="" } }
The JDM to JDM Connectivity up trap JDM_JDM_LINK_UP is sent when either the cb0 or
cb1 link comes up, and JDMs on both the servers are able to communicate again. See
the following example:
SNMP traps are sent to the target NMS server. To configure the target NMS server details
in the JDM, see the following example:
[edit]
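Assuming the JDM exposes a Junos-style snmp trap-group hierarchy (an assumption; verify the exact statements against your JDM release), a hypothetical configuration would resemble:

root@jdm# set snmp trap-group nms targets 192.0.2.10
root@jdm# commit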
Related • Installing JDM RPM Package on x86 Servers Running RHEL on page 29
Documentation
• Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 16.04 on page 30
Creating an Abstracted Fabric (AF) interface between two guest network functions
(GNFs) involves configurations both at the base system (BSYS) and at the GNF. AF
interfaces are created on GNFs based on the BSYS configuration, which is then sent to
those GNFs.
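A hedged sketch of the BSYS-side configuration, based on the af-name hierarchy documented later in this chapter (the GNF IDs 1 and 2 are illustrative):

[edit]
user@router# set chassis network-slices guest-network-functions gnf 1 af2 peer-gnf id 2 remote-af-name af4
user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf id 1 remote-af-name af2
user@router# commit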
In this example, af2 is the Abstracted Fabric interface instance 2 and af4 is the
Abstracted Fabric interface instance 4.
NOTE: The allowed AF interface values range from af0 through af9.
The GNF AF interface will be visible and up. You can configure an AF interface the way
you configure any other interface.
The following sections explain the forwarding class-to-queue mapping, and the behavior
aggregate (BA) classifiers and rewrites supported on Abstracted Fabric (AF)
interfaces.
An AF interface is a simulated WAN interface that has most of the capabilities of any
other interface, except that traffic destined for a remote Packet Forwarding Engine
must still traverse the two fabric queues (low priority and high priority).
NOTE: Presently, the AF interface operates in 2-queue mode only. Hence,
queue-based features such as scheduling, policing, and shaping are not
available on an AF interface.
Packets on the AF interface inherit the fabric queue that is determined by the fabric
priority configured for the forwarding class to which that packet belongs. For example,
see the following forwarding class to queue map configuration:
[edit]
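A sketch, assuming the standard Junos class-of-service syntax in which the priority statement under a forwarding class sets its fabric priority (the queue number is illustrative):

user@gnf# set class-of-service forwarding-classes class VoiceSig queue-num 5 priority high
user@gnf# commit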
As shown in the preceding example, when a packet gets classified to the forwarding class
VoiceSig, the code in the forwarding path examines the fabric priority of that forwarding
class and decides which fabric queue to choose for this packet. In this case, the
high-priority fabric queue is chosen.
You can also apply rewrite rules to IP packets entering an MPLS tunnel, rewriting
both the EXP and the IPv4 type-of-service (ToS) bits. This works on AF interfaces
as it does on other interfaces.
NOTE:
The following are not supported:
}
}
}
server1 {
system {
host-name test-jdm-server1;
}
server {
interfaces {
cb0 p3p1;
cb1 p3p2;
jdm-management em2;
vnf-management em3;
}
}
interfaces {
jmgmt0 {
unit 0 {
family inet {
address 10.216.105.113/21;
}
}
}
routing-options {
static {
route {
0.0.0.0/0 next-hop 10.216.111.254;
}
}
}
}
}
}
apply-groups [ server0 server1 ];
system {
root-authentication {
encrypted-password "..."; ## SECRET-DATA
}
services {
ssh;
netconf {
ssh;
rfc-compliant;
}
}
}
virtual-network-functions {
test-gnf {
id 1;
chassis-type mx2020;
resource-template 2core-16g;
base-config /var/jdm-usr/gnf-config/test-gnf.conf;
}
}
GNF1 Configuration
interfaces {
xe-4/0/0 {
unit 0 {
family inet {
address 22.1.2.2/24;
}
}
}
af2 {
unit 0 {
family inet {
address 32.1.2.1/24;
}
}
}
}
class-of-service {
classifiers {
dscp testdscp {
forwarding-class assured-forwarding {
loss-priority low code-points [ 001001 000000 ];
}
}
}
interfaces {
xe-4/0/0 {
unit 0 {
classifiers {
dscp testdscp;
}
}
classifiers {
dscp testdscp;
}
}
af1 {
unit 0 {
rewrite-rules {
dscp testdscp; /*Rewrite rule applied on egress AF interface on GNF1.*/
}
}
}
}
rewrite-rules {
dscp testdscp {
forwarding-class assured-forwarding {
loss-priority low code-point 001001;
}
}
}
}
GNF2 Configuration
interfaces {
xe-3/0/0:0 {
unit 0 {
family inet {
address 42.1.2.1/24;
}
}
}
af1 {
unit 0 {
family inet {
address 32.1.2.2/24;
}
}
}
}
class-of-service {
classifiers {
dscp testdscp {
forwarding-class network-control {
loss-priority low code-points 001001;
}
}
}
interfaces {
af1 {
unit 0 {
classifiers {
dscp testdscp; /*Classifier applied on AF at ingress of GNF2*/
}
}
}
}
}
NOTE: Junos Node Slicing also supports GNF life cycle management using
the dual touchpoint method. In this method, ODL sends RPCs to, and receives
responses from, the JDM and the BSYS separately. To enable dual touchpoint,
you need to mount both the BSYS and the Juniper Device Manager (JDM) on ODL.
To install the YANG package on the BSYS, use the following command:
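A hedged sketch using the Junos YANG package installation command (the package name and module path below are hypothetical; use the files from the YANG package you downloaded):

user@router> request system yang add package junos-node-slicing module /var/tmp/junos-node-slicing.yang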
NOTE: You need to install the same package on the backup Routing Engine
as well.
After successful installation, you can find the YANG Package contents at the following
location:
root@router:/opt/yang-pkg/junos-node-slicing
You can verify the package installation status on both the Routing Engines of the BSYS,
using the following command:
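For example, the standard Junos YANG package listing command can be used:

user@router> show system yang package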
NOTE:
• Ensure that server0 and server1 are up and running.
• Ensure that you replace the jdm-server0-ip and jdm-server1-ip with proper
IP addresses.
To complete the single touchpoint setup, you need to set the BSYS to communicate
with the JDM. This enables the BSYS to service the RPC requests from ODL.
To enable the BSYS to communicate with the JDM, issue the following commands on
the BSYS.
If you want to disable the YANG-based orchestration of GNFs, delete the YANG package.
Also, in case of a software upgrade, you need to delete the existing package and install
the latest one.
2. Issue the following command on both the master and backup Routing Engines:
Table 5 on page 52 lists the key GNF management tasks, along with the XML RPCs that
are used to perform those tasks.
NOTE:
• In the single touchpoint method, use the XML with the prefix jdm- , as seen
in Table 5 on page 52 (for example,
<rpc><jdm-get-route-information/></rpc>).
• In the dual touchpoint method, use the XML without the jdm- prefix (for
example, <rpc><get-route-information/></rpc>).
Task RPC
Task RPC
Stop a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<stop/>
</jdm-request-virtual-network-functions>
</rpc>
Start a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<start/>
</jdm-request-virtual-network-functions>
</rpc>
Restart a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<restart/>
</jdm-request-virtual-network-functions>
</rpc>
<jdm-get-software-information/>
</rpc>
<jdm-get-server-connections/>
</rpc>
<jdm-get-inventory-software-vnf-information/>
</rpc>
<jdm-get-visibility-vnf-information/>
</rpc>
You can upgrade each of these components independently, as long as they are within
the allowed range of software versions (see “Multi-Version Software Interoperability
Overview” on page 10 for more details). You can also upgrade all of them together.
NOTE: Before starting the upgrade process, save the JDM, GNF VM, and BSYS
configurations for reference.
Upgrading JDM
1. Upgrade the JDM by performing the following tasks on both the servers:
a. Copy the new JDM package (RPM or Ubuntu) to a directory on the host (for
example, /var/tmp).
If you are upgrading the JDM RHEL package, use the following command:
If you are upgrading the JDM Ubuntu package, use the following command:
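Representative commands for the RHEL and Ubuntu cases respectively (the package filenames are hypothetical):

[root@server ~]# rpm -Uvh /var/tmp/jns-jdm-1.0-0.x86_64.rpm
root@server:~# dpkg -i /var/tmp/jns-jdm_1.0-0_amd64.deb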
NOTE: A JDM upgrade does not affect any of the running GNFs.
See also:
• Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 16.04 on page 30
The GNF and BSYS packages can be upgraded in the same way as you would upgrade
Junos OS on a standalone MX Series router.
NOTE: You can also overwrite an existing GNF image with a new one through
JDM by using the JDM CLI command request virtual-network-functions vnf-name
add-image new-image-name force. This can be useful in a rare situation where
the GNF image does not boot. You can also use the force option to perform
a cleanup if, for example, you abruptly stopped an earlier add-image process
by pressing Ctrl-C (example: request virtual-network-functions vnf-name
delete-image image-name force).
GNFs separately. Also, you can run unified ISSU on each GNF independently—without
affecting other GNFs. See also Understanding the Unified ISSU Process.
The following are sample messages that appear if incompatibilities are detected during
software upgrade:
--------------------------------------------------------------------------------
CFG Anomalies for: set snmp interface
--------------------------------------------------------------------------------
FRU-ID Ano-ID ACTION MESSAGE
--------------------------------------------------------------------------------
NONE 102 WARN <sample config incompatibility 1>
--------------------------------------------------------------------------------
FRU Anomalies:
--------------------------------------------------------------------------------
FRU-ID Ano-ID ACTION MESSAGE
--------------------------------------------------------------------------------
0xaa0b 100 WARN <sample FRU incompatibility 1>
Sample output showing how to use the 'force' option to proceed with an upgrade:
The alarms appear only on GNFs even if the upgrade is performed on the BSYS. The
following types of alarm can occur:
To view software incompatibilities from the BSYS, use the CLI as shown in the following
example:
To view software incompatibilities from a GNF, use the CLI as shown in the following
example:
NOTE:
• As shown in the CLI, remember to specify the GNF ID while viewing the
incompatibilities from BSYS.
Server maintenance activities such as hardware or host OS upgrade and fault isolation
might require you to restart the external servers used in Junos Node Slicing. Use the
following procedure to restart the servers:
If you are restarting both the servers, choose the all-servers option while stopping
each GNF as shown in the following example:
If you are restarting a particular server, stop the GNFs on that server by specifying the
server-id as shown in the following example:
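Grounded in the JDM CLI stop command shown later in this chapter, the two variants would look similar to the following (the server-id argument syntax is an assumption):

root@jdm> request virtual-network-functions test-gnf stop all-servers
root@jdm> request virtual-network-functions test-gnf stop server-id 0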
NOTE: If you want to view the status of GNFs on both the servers, choose
the all-servers option (for example, show virtual-network-functions all-servers).
3. From the Linux host shell, stop the JDM by using the following command:
4. From the Linux host shell, verify that the JDM status shows as stopped.
5. After rebooting, verify that the JDM status now shows as running.
After a server reboot, the JDM and the configured GNFs will automatically start running.
If you are replacing the servers, ensure that the operating server pair continues to have
similar or identical hardware configuration. If the server pair were to become temporarily
dissimilar during the replacement (this could be the case when replacing the servers
sequentially), it is recommended that you disable GRES and NSR for this period, and
re-enable them only when both the servers are similar once again.
Before updating the host OS on an external server, you must first stop the GNFs and JDM
on that server as described in “Restarting External Servers” on page 59.
Following the host OS update, if you are using Intel X710 NICs, ensure that the version of
the X710 NIC driver in use continues to be the latest version as described in “Intel X710
NIC Driver for x86 Servers” on page 23.
This procedure involves shutting down a GNF and then deleting it. In JDM, GNF VMs are
called VNFs. Use the following steps to delete a VNF:
1. Shut down a VNF by using the JDM CLI command request virtual-network-functions
gnf-name stop all-servers. For example:
server0:
--------------------------------------------------------------------------
test-gnf stopped
server1:
--------------------------------------------------------------------------
test-gnf stopped
2. Delete the VNF configuration by applying the JDM CLI configuration statement delete
virtual-network-functions gnf-name. See the following example:
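For example, using the VNF name from the earlier examples:

[edit]
root@jdm# delete virtual-network-functions test-gnf
root@jdm# commit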
3. Delete the VNF image repository by using the JDM CLI command request
virtual-network-functions gnf-name delete-image all-servers. For example:
server0:
--------------------------------------------------------------------------
Deleted the image repository
server1:
--------------------------------------------------------------------------
Deleted the image repository
NOTE:
• To delete a VNF completely, you must perform all three steps.
• If you want to delete a VNF management interface, you must stop and
delete the VNF first.
To disable Junos Node Slicing, you must uninstall the following packages:
• JDM package
NOTE: Save the JDM configuration if you want to use it for reference.
1. Delete the GNFs first by performing all the steps described in the section “Deleting
Guest Network Functions” on page 61.
2. Stop the JDM on each server by running the following command at the host Linux
shell:
3. Uninstall the JDM on each server by running the following command at the host Linux
shell.
4. To revert the MX Series router from BSYS mode to standalone mode, apply the
following configuration statements on the MX Series router:
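Assuming the BSYS-specific configuration resides under the chassis network-slices hierarchy described in this document (an assumption; your BSYS may carry additional node-slicing statements), the reversion would resemble:

[edit]
user@router# delete chassis network-slices
user@router# commit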
• network-slices on page 68
• guest-network-functions on page 69
• gnf on page 70
• control-plane-bandwidth-percent (Node Slicing) on page 71
• description (GNF) on page 72
• fpcs (Node Slicing) on page 73
• af-name on page 74
• peer-gnf on page 75
• description (AF) on page 76
network-slices
Syntax network-slices {
guest-network-functions{
gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
}
}
guest-network-functions
Syntax guest-network-functions {
gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
}
gnf
Syntax gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
Options id—GNF ID
Range: 1–10
• guest-network-functions on page 69
Description Allocate a percentage of the bandwidth available on the control plane of the router
to the specified guest network function (GNF). Allocating bandwidth prevents one
GNF from potentially overutilizing the control plane at the expense of another.
• gnf on page 70
• description on page 72
• fpcs on page 73
description (GNF)
Description Provide a description string for the specified guest network function (GNF).
Options description—A description string for the specified guest network function (GNF).
• gnf on page 70
• control-plane-bandwidth-percent on page 71
• fpcs on page 73
• gnf on page 70
• control-plane-bandwidth-percent on page 71
• description on page 72
af-name
Syntax af-name {
peer-gnf {
id peer-gnf-id;
remote-af-name;
}
description af-description;
}
Description Configure an Abstracted Fabric (AF) interface between a pair of guest network
functions (GNFs). An AF interface is a pseudointerface that behaves like a first-class
Ethernet interface. An AF interface is created on a GNF to communicate with its peer
GNF when the two GNFs are configured as connected to each other through the CLI.
peer-gnf
Syntax peer-gnf {
id peer-gnf-id;
remote-af-name;
}
Description Configure the details of the GNF peer connected using the AF interface.
Options id peer-gnf-id—ID of the GNF peer connected using the Abstracted Fabric (AF)
interface.
Range: 1 through 10
description (AF)
Description Provide a description string for the specified Abstracted Fabric (AF) interface.
Options description—A description string for the specified Abstracted Fabric (AF) interface.
• control-plane-bandwidth-percent on page 71
• fpcs on page 73
Description Display Junos Node Slicing information for the guest network functions (GNFs) configured
on the base system (BSYS).
Output Fields Table 6 on page 78 lists the output fields for the show chassis network-slices command.
Output fields are listed in the approximate order in which they appear.
Sample Output
Description Display the status of the physical interface cards (PICs) of each Flexible PIC Concentrator
(FPC) assigned to different guest network functions (GNFs).
Sample Output
user@router> show chassis fpc pic-status
Description Display information about Flexible PIC Concentrators (fpcs) assigned to different guest
network functions (GNFs).
Output Fields Table 7 on page 82 lists the output fields for the show chassis fpc command. Output
fields are listed in the approximate order in which they appear.
Slot or Slot State: Slot number and state. The state can be one of the following conditions:
Temp (C) or Temperature: Temperature of the air passing by the FPC, in degrees Celsius or in both Celsius and Fahrenheit.
Total CPU Utilization (%): Total percentage of CPU being used by the FPC's processor.
Interrupt CPU Utilization (%): Of the total CPU being used by the FPC's processor, the percentage being used for interrupts.
1 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 1 minute.
5 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 5 minutes.
15 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 15 minutes.
Heap Utilization (%): Percentage of heap space (dynamic memory) being used by the FPC's processor. If this number exceeds 80 percent, there may be a software problem (memory leak).
Buffer Utilization (%): Percentage of buffer space being used by the FPC's processor for buffering internal messages.
Sample Output
Release Information Command introduced in Junos OS Release 12.3 for MX2020 3D Universal Edge Routers.
Command introduced in Junos OS Release 12.3 for MX2010 3D Universal Edge Routers.
Command introduced in Junos OS Release 17.2 for MX2008 3D Universal Edge Routers.
Output Fields Table 8 on page 84 lists the output fields for the show chassis adc command. Output
fields are listed in the approximate order in which they appear.
Uptime: How long the Routing Engine has been connected to the adapter card and, therefore, how long the adapter card has been up and running.
Sample Output
Description Display information about the FPCs associated with different guest network functions
(GNFs).
Output Fields Table 9 on page 86 lists the output fields for the show chassis network-slices fpcs
command. Output fields are listed in the approximate order in which they appear.
Sample Output
Description Display incompatibilities between the software version running on the base system
(BSYS) and the software running on a specific guest network function (GNF).
Options gnf-id id—Specify the GNF ID for which you want to view the software incompatibilities.
Output Fields Table 10 on page 87 lists the output fields for the show system anomalies gnf-id command.
Output fields are listed in the approximate order in which they appear.
Anomaly Type: Shows the software incompatibility type. The following are the possible values:
Default Action: Shows the default actions associated with incompatibilities. The following are the possible values:
Sample Output
Description Display a list of all hardware components of the chassis, including the hardware version
level and serial number, the GNF Routing Engine details, and the FPCs assigned to the
GNF.
Output Fields Table 11 on page 90 lists the output fields for the show chassis hardware command. Output
fields are listed in the approximate order in which they appear.
Serial number: Serial number of the chassis component. The serial number of the backplane
is also the serial number of the router chassis. Use this serial number when you
need to contact Juniper Networks Customer Support about the router or switch
chassis.
Sample Output
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN11C9CDDAFK MX2010
Midplane REV 35 750-044636 ABAB9184 Lower Backplane
Midplane 1 REV 02 711-044557 ABAB9048 Upper Backplane
PMP REV 04 711-032426 ACAJ2622 Power Midplane
FPM Board REV 09 760-044634 ABCF2618 Front Panel Display
PSM 0 REV 01 740-050037 1EDB3130084 DC 52V Power Supply
Module
PSM 1 REV 01 740-050037 1EDB313001Z DC 52V Power Supply
Module
PSM 2 REV 01 740-050037 1EDB321018D DC 52V Power Supply
Module
PSM 3 REV 01 740-050037 1EDB32101AZ DC 52V Power Supply
Module
PSM 4 REV 01 740-050037 1EDB32202C2 DC 52V Power Supply
Module
PSM 5 REV 01 740-050037 1EDB32100TC DC 52V Power Supply
Module
PSM 6 REV 01 740-050037 1EDB3210166 DC 52V Power Supply
Module
PSM 7 REV 01 740-050037 1EDB3210165 DC 52V Power Supply
Module
PSM 8 REV 01 740-050037 1EDB3210163 DC 52V Power Supply
Module
PDM 0 REV 03 740-045234 1EGA3170177 DC Power Dist Module
Routing Engine 0 REV 08 750-055814 CAFV5537 RE-S-2X00x8
CB 0 REV 08 750-055087 CAFN3426 MX2K Enhanced SCB
Xcvr 0 REV 01 740-031980 ALM0HC7 SFP+-10G-SR
Xcvr 1 REV 01 740-031980 123363A00418 SFP+-10G-SR
CB 1 REV 08 750-055087 CAFN3423 MX2K Enhanced SCB
SPMB 0 REV 05 711-041855 CAEZ5998 PMB Board
SPMB 1 REV 05 711-041855 CAEZ5993 PMB Board
SFB 0 REV 06 711-044466 ABCD6742 Switch Fabric Board
SFB 1 REV 06 711-044466 ABCG5627 Switch Fabric Board
SFB 2 REV 06 711-044466 ABCG5659 Switch Fabric Board
SFB 3 REV 06 711-044466 ABCG5653 Switch Fabric Board
SFB 4 REV 06 711-044466 ABCG5611 Switch Fabric Board
SFB 5 REV 06 711-044466 ABCG5635 Switch Fabric Board
SFB 6 REV 06 711-044466 ABCG5638 Switch Fabric Board
SFB 7 REV 06 711-044466 ABCG3650 Switch Fabric Board
FPC 8 REV 68 750-044130 ABCY5967 MPC6E 3D
CPU REV 12 711-045719 ABCY9696 RMPC PMB
Fan Tray 0 REV 06 760-046960 ACAY0428 172mm FanTray - 6 Fans
Fan Tray 1 REV 06 760-046960 ACAY0800 172mm FanTray - 6 Fans
Fan Tray 2 REV 06 760-046960 ACAY0797 172mm FanTray - 6 Fans
Fan Tray 3 REV 06 760-046960 ACAY1047 172mm FanTray - 6 Fans
gnf2-re0:
--------------------------------------------------------------------------
Chassis GN59081553B0 MX2010-GNF <<<
Routing Engine 0 RE-GNF-1700x4
Description Display information about the Flexible PIC Concentrators (fpcs) assigned to the guest
network function (GNF).
Output Fields Table 12 on page 92 lists the output fields for the show chassis fpc command. Output
fields are listed in the approximate order in which they appear.
Slot or Slot State: Slot number and state. The state can be one of the following conditions:
Temp (C) or Temperature: Temperature of the air passing by the FPC, in degrees Celsius or in both Celsius and Fahrenheit.
Total CPU Utilization (%): Total percentage of CPU being used by the FPC's processor.
Interrupt CPU Utilization (%): Of the total CPU being used by the FPC's processor, the percentage being used for interrupts.
1 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 1 minute.
5 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 5 minutes.
15 min CPU Utilization (%): Information about the Routing Engine's CPU utilization in the past 15 minutes.
Heap Utilization (%): Percentage of heap space (dynamic memory) being used by the FPC's processor. If this number exceeds 80 percent, there might be a software problem (memory leak).
Buffer Utilization (%): Percentage of buffer space being used by the FPC's processor for buffering internal messages.
Sample Output
4 Online 42 20 0 19 19 19 3584 8 25 2
6 Online 46 12 0 11 11 11 3136 8 19 2
Description Display the status of the physical interface cards (PICs) of each Flexible PIC Concentrator
(FPC) assigned to the guest network function (GNF).
Sample Output
Release Information Command introduced in Junos OS Release 12.3 for MX2020 3D Universal Edge Routers.
Command introduced in Junos OS Release 12.3 for MX2010 3D Universal Edge Routers.
Command introduced in Junos OS Release 17.2 for MX2008 3D Universal Edge Routers.
Description Display chassis information about the adapter cards (ADCs) assigned to the guest network
function (GNF).
Output Fields Table 13 on page 95 lists the output fields for the show chassis adc command. The output
fields are listed in the approximate order in which they appear.
Uptime—How long the Routing Engine has been connected to the adapter card and, therefore, how long the adapter card has been up and running.
Sample Output
Description Display status information for the specified Abstracted Fabric (AF) interface.
Options brief | detail | extensive | terse—(Optional) Display the specified level of output.
Output Fields Table 14 on page 96 describes the output fields for the show interfaces (Abstracted
Fabric) command. Output fields are listed in the approximate order in which they appear.
Physical Interface
Physical interface—Name and status of the physical interface. (All levels)
Interface index—Index number of the physical interface, which reflects its initialization sequence. (detail, extensive, none)
SNMP ifIndex—SNMP index number for the physical interface. (detail, extensive, none)
Generation—Unique number for use by Juniper Networks technical support only. (detail, extensive)
Link-level type—Encapsulation being used on the physical interface. (All levels)
MTU—Maximum transmission unit size on the physical interface. (All levels)
Device flags—Information about the physical device. Possible values are described in the “Device Flags” section under Common Output Fields Description. (All levels)
Interface flags—Information about the interface. Possible values are described in the “Interface Flags” section under Common Output Fields Description. (All levels)
Hold-times—Current interface hold-time up and hold-time down, in milliseconds (ms). (detail, extensive)
Last flapped—Date, time, and how long ago the interface went from down to up. The format is Last flapped: year-month-day hour:minute:second:timezone (hour:minute:second ago). For example, Last flapped: 2002-04-26 10:52:40 PDT (04:33:20 ago). (detail, extensive, none)
Statistics last cleared—Time when the statistics for the interface were last set to zero. (detail, extensive)
Traffic statistics—Number and rate of bytes and packets received and transmitted on the physical interface. (detail, extensive)
IPv6 transit statistics—Number of IPv6 transit bytes and packets received and transmitted on the interface if IPv6 statistics tracking is enabled. (extensive)
Input errors—Input errors on the interface. The following paragraphs explain the counters whose meaning might not be obvious: (extensive)
Output errors—Output errors on the interface. The following paragraphs explain the counters whose meaning might not be obvious: (extensive)
• Carrier transitions—Number of times the interface has gone from down to up.
• Errors—Sum of the outgoing frame aborts and FCS errors.
• Drops—Number of packets dropped by the output queue.
• MTU errors—Number of packets whose size exceeded the MTU of the interface.
• Resource errors—Sum of transmit drops.
Peer GNF id—The GNF peer connected using the AF interface. (detail, extensive, none)
Peer GNF Forwarding element (FE) view—Shows the forwarding element (FE) number and the FPC slot, FE bandwidth, and FE status (up/down). (detail, extensive, none)
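In the sample output for this command, the reported AF interface bandwidth (480 Gbps) is the sum of the peer GNF's FE bandwidths (FPC slot 6, FE 0 and FE 1, at 240 Gbps each). The sketch below restates that relationship with values hard-coded from the sample; the dictionary layout is illustrative only, not an API.

```python
# FE entries copied from the "Peer GNF Forwarding element(FE) view" sample
# output of this command; values are illustrative, not an API.
fe_view = [
    {"fpc_slot": 6, "fe_num": 0, "bandwidth_gbps": 240, "status": "Up"},
    {"fpc_slot": 6, "fe_num": 1, "bandwidth_gbps": 240, "status": "Up"},
]

# The AF interface bandwidth shown in the sample is the aggregate of all
# peer FE bandwidths.
af_bandwidth_gbps = sum(fe["bandwidth_gbps"] for fe in fe_view)
print(af_bandwidth_gbps)   # 480, matching "Bandwidth : 480 Gbps" in the sample
```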
Logical Interface
Logical interface—Name of the logical interface. (All levels)
Index—Index number of the logical interface, which reflects its initialization sequence. (detail, extensive, none)
SNMP ifIndex—SNMP interface index number for the logical interface. (detail, extensive, none)
Generation—Unique number for use by Juniper Networks technical support only. (detail, extensive)
Flags—Information about the logical interface. Possible values are described in the “Logical Interface Flags” section under Common Output Fields Description. (All levels)
Protocol—Protocol family. Possible values are described in the “Protocol Field” section under Common Output Fields Description. (detail, extensive, none)
MTU—Maximum transmission unit size on the logical interface. (detail, extensive, none)
Traffic statistics—Number and rate of bytes and packets received and transmitted on the specified interface set. (detail, extensive)
Transit statistics—Number of IPv6 transit bytes and packets received and transmitted on the logical interface if IPv6 statistics tracking is enabled. (extensive)
Local statistics—Number and rate of bytes and packets destined to the router. (extensive)
Generation—Unique number for use by Juniper Networks technical support only. (detail, extensive)
Route Table—Route table in which the logical interface address is located. For example, 0 refers to the routing table inet.0. (detail, extensive, none)
Flags—Information about protocol family flags. Possible values are described in the “Family Flags” section under Common Output Fields Description. (detail, extensive)
Addresses, Flags—Information about the address flags. Possible values are described in the “Addresses Flags” section under Common Output Fields Description. (detail, extensive, none)
protocol-family—Protocol family configured on the logical interface. If the protocol is inet, the IP address of the interface is also displayed. (brief)
Flags—Information about the address flag. Possible values are described in the “Addresses Flags” section under Common Output Fields Description. (detail, extensive, none)
Destination—IP address of the remote side of the connection. (detail, extensive, none)
Generation—Unique number for use by Juniper Networks technical support only. (detail, extensive)
Sample Output
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Input errors:
Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Giants: 0, Policed discards:
0, Resource errors: 0
Output errors:
Carrier transitions: 0, Errors: 0, Drops: 0, MTU errors: 0, Resource errors:
0
Bandwidth : 480 Gbps
Peer GNF id : 4
Peer GNF Forwarding element(FE) view :
FPC slot:FE Num FE Bandwidth(Gbps) Status
6:0 240 Up
6:1 240 Up
Logical interface af4.1 (Index 328) (SNMP ifIndex 593) (Generation 137)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 414
Output bytes : 139906
Input packets: 9
Output packets: 107
Local statistics:
Input bytes : 414
Output bytes : 598
Input packets: 9
Output packets: 13
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 139308 59240 bps
Input packets: 0 0 pps
Output packets: 94 4 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 162, Route table: 0
Flags: Sendbcast-pkt-to-re
Output Filters: f-basic-sr-tcm-ca
Addresses, Flags: Is-Preferred Is-Primary
Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255, Generation:
148
Protocol multiservice, MTU: Unlimited, Generation: 163, Route table: 0
Policer: Input: __default_arp_policer__
Logical interface af4.1 (Index 328) (SNMP ifIndex 593) (Generation 137)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 77518213184
Output bytes : 3054342
Input packets: 52450568
Output packets: 4591
Local statistics:
Input bytes : 460
Output bytes : 4600
Input packets: 10
Output packets: 100
Transit statistics:
Input bytes : 77518212724 1944810944 bps
Output bytes : 3049742 68632 bps
Input packets: 52450558 164494 pps
Output packets: 4491 20 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 162, Route table: 0
Flags: Sendbcast-pkt-to-re
Output Filters: f-basic-sr-tcm-ca
Addresses, Flags: Is-Preferred Is-Primary
Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255, Generation:
148
Protocol multiservice, MTU: Unlimited, Generation: 163, Route table: 0
Policer: Input: __default_arp_policer__
Description Display the incompatibilities between the software version running on the guest network
function (GNF) and the version running on the base system (BSYS).
Output Fields Table 15 on page 103 lists the output fields for the show system anomalies command.
Output fields are listed in the approximate order in which they appear.
Anomaly Type—Shows the software incompatibility type. The following are the possible values:
Default Action—Shows the default actions associated with incompatibilities. The following are the possible values:
Sample Output
virtual-network-functions
Description Associate a GNF ID, base configuration, chassis type and resource template with the
VNF.
The GNFs that are configured and committed will appear as auto-complete options in
operational commands.
chassis-type chassis-type—Choose the type of the router chassis (for example, MX960)
used as the base system (BSYS) in the node slicing setup.
server
Syntax server {
interfaces {
cb0 cb0-interface;
cb1 cb1-interface;
jdm-management jdm-management-interface;
vnf-management gnf-management-interface;
}
}
Description Configure the server interfaces for the JDM and GNFs. These include a JDM management
interface, a GNF management interface, and two server interfaces that are connected
to the MX Series router.
Options cb0 cb0-interface—The server interface that is connected to the control board 0 of the
MX Series router.
cb1 cb1-interface—The server interface that is connected to the control board 1 of the
MX Series router.
unit unit—Interface unit number. This is a logical unit number. The only supported value
is 0.
• inet—Indicates IPv4.
• inet6—Indicates IPv6.
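For illustration, a server interfaces stanza might look like the following. The Linux interface names (ens3f0, ens3f1, enp3s0f1, enp3s0f2) are taken from the sample outputs later in this document and will differ on your x86 servers:

```
server {
    interfaces {
        cb0 ens3f0;
        cb1 ens3f1;
        jdm-management enp3s0f1;
        vnf-management enp3s0f2;
    }
}
```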
Syntax routing-options {
static {
route route {
next-hop next-hop;
}
}
}
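As an illustration only, a static default route under this stanza might be configured as follows; the addresses are placeholders, not values from a live system:

```
routing-options {
    static {
        route 0.0.0.0/0 {
            next-hop 10.10.10.1;
        }
    }
}
```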
The following are general guidelines on how to use the JDM server commands:
• Use the commit synchronize command to ensure that the configuration committed on
one server is synchronized with the other server. The synchronization is bidirectional.
A JDM configuration change at either of the servers is synchronized with the other
server. When a virtual machine (VM) is instantiated, the GNF-re0 VM instance starts
on server0 and the GNF-re1 VM instance starts on server1.
NOTE: If you do not use the commit synchronize command, you must
configure and manage the VMs on both the servers manually.
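For example, after a JDM configuration change on either server (the prompt and output shown here are typical, not captured from a live system):

```
[edit]
user@jdm# commit synchronize
commit complete
```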
Sample Output
clear log
user@jdm> clear log syslog
Sample Output
monitor list
user@jdm> monitor list
Description Start displaying the system log or trace file and additional entries being added to those
files.
Additional Information Log files are generated by the routing protocol process or by system logging.
Output Fields Table 16 on page 115 describes the output fields for the monitor start command. Output
fields are listed in the approximate order in which they appear.
*** filename ***—Name of the file from which entries are being displayed.
Sample Output
monitor start
user@jdm> monitor start syslog
Oct 19 19:44:36 jdm mgd[3268]: UI_COMMIT: User 'root' requested 'commit' operati
on (comment: none)
Oct 19 19:44:36 jdm mgd[3268]: UI_COMMIT_PROGRESS: Commit operation in progress:
Additional Information Log files are generated by the routing protocol process or by system logging.
Description Copy the SSH public key to the peer JDM. This command is equivalent to ssh-copy-id user@jdm-server<0/1>.
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• show virtual-network-functions on page 120
Sample Output
request virtual-network-functions
Description Start, stop, or restart the VNFs. You can also add or remove the base image.
NOTE: You can issue these commands either on both the servers (server0
and server1) or on one specific server.
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• show virtual-network-functions on page 120
show virtual-network-functions
Description Display the list of guest network functions (GNFs) along with their IDs, status and
availability.
server—Display the details of the GNFs on one specific server. Applicable value is 0 or 1.
vnf-name—Display additional details of a particular GNF. You can use the detail option
to view the detailed output. For example, show virtual-network-functions gnf1 detail.
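Putting the options together, typical invocations look like the following; gnf1 is a placeholder GNF name:

```
user@jdm> show virtual-network-functions
user@jdm> show virtual-network-functions server 0
user@jdm> show virtual-network-functions gnf1 detail
```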
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Output Fields Table 17 on page 120 lists the output fields for the show virtual-network-functions command.
• Up
• Down
VNF CPU Utilization and Allocation Information—Shows the GNF CPU utilization details. See also: show system cpu (JDM).
VNF Memory Information—Displays the following memory information about the GNFs:
• Name—GNF name.
• Resident—The memory used by the GNFs.
• Actual—Actual memory.
VNF Storage Information—Displays the following guest network function (GNF) storage information:
• Directories—Names of the directories.
• Size—Total storage size.
• Used—Storage used.
VNF Interfaces Statistics—Shows the GNF interface statistics information. See also: show system network (JDM).
VNF Network Information—Shows the list of Physical Interfaces, Virtual Interfaces and MAC addresses.
Sample Output
show virtual-network-functions
user@jdm> show virtual-network-functions
Sample Output
VNF Information
---------------------------
ID 1
Name: gnf1
Status: Running
Liveness: up
IP Address: 192.168.2.1
Cores: 2
Memory: 16GB
Resource Template: 2core-16g
Qemu Process id: 20478
SMBIOS version: v1
Description Display the hostname and version information about the specified guest network function
(GNF).
Options vnf-name—Name of the GNF for which you want to view the version details.
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Sample Output
Depending on the platform running Junos OS, you might see different installed
sub-packages.
Hostname: gnf2
Model: mx960
Junos: 17.4X48-D10.3
JUNOS OS Kernel 64-bit [20170913.201739_fbsd-builder_stable_11]
JUNOS OS libs [20170913.201739_fbsd-builder_stable_11]
JUNOS OS runtime [20170913.201739_fbsd-builder_stable_11]
JUNOS OS time zone information [20170913.201739_fbsd-builder_stable_11]
JUNOS network stack and utilities [20170926.111120_builder_junos_174_x48_d10]
JUNOS modules [20170926.111120_builder_junos_174_x48_d10]
JUNOS mx modules [20170926.111120_builder_junos_174_x48_d10]
JUNOS libs [20170926.111120_builder_junos_174_x48_d10]
JUNOS OS libs compat32 [20170913.201739_fbsd-builder_stable_11]
JUNOS OS 32-bit compatibility [20170913.201739_fbsd-builder_stable_11]
JUNOS libs compat32 [20170926.111120_builder_junos_174_x48_d10]
JUNOS runtime [20170926.111120_builder_junos_174_x48_d10]
Junos vmguest package [20170926.111120_builder_junos_174_x48_d10]
JUNOS py extensions [20170926.111120_builder_junos_174_x48_d10]
JUNOS py base [20170926.111120_builder_junos_174_x48_d10]
JUNOS OS vmguest [20170913.201739_fbsd-builder_stable_11]
JUNOS OS crypto [20170913.201739_fbsd-builder_stable_11]
JUNOS mx libs compat32 [20170926.111120_builder_junos_174_x48_d10]
JUNOS mx runtime [20170926.111120_builder_junos_174_x48_d10]
JUNOS common platform support [20170926.111120_builder_junos_174_x48_d10]
JUNOS mx libs [20170926.111120_builder_junos_174_x48_d10]
Description Display the version information about the Juniper Device Manager (JDM).
Options all-servers—Display the version details of the JDM instances on both the servers.
server—Display the version details of the JDM instance on one specific server.
Range: 0 through 1
vnf—Display the version details for a particular guest network function (GNF). Specify the GNF name in the command. For example: show version vnf gnf2.
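Combining the options above, typical invocations look like the following; gnf2 is a placeholder GNF name:

```
user@jdm> show version all-servers
user@jdm> show version server 0
user@jdm> show version vnf gnf2
```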
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Sample Output
show version
user@jdm> show version
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
JDM package version : 17.4-R1.7
Host Software [Red Hat Enterprise Linux]
JDM container Software [Ubuntu 14.04.1 LTS]
JDM daemon jdmd [Version: 17.4R1.7-secure]
JDM daemon jinventoryd [Version: 17.4R1.7-secure]
JDM daemon jdmmon [Version: 17.4R1.7-secure]
Host daemon jlinkmon [Version: 17.4R1.7-secure]
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
JDM package version : 17.4-R1.7
Host Software [Red Hat Enterprise Linux]
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
JDM package version : 17.4-R1.7
Host Software [Red Hat Enterprise Linux]
JDM container Software [Ubuntu 14.04.1 LTS]
JDM daemon jdmd [Version: 17.4R1.7-secure]
JDM daemon jinventoryd [Version: 17.4R1.7-secure]
JDM daemon jdmmon [Version: 17.4R1.7-secure]
Host daemon jlinkmon [Version: 17.4R1.7-secure]
KERNEL 3.10.0-514.el7.x86_64
MGD release 17.4R1.7 built by builder on 2017-11-17 11:29:41 UTC
CLI release 17.4R1.7 built by builder on 2017-11-17 10:53:44 UTC
base-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:17 UTC
jdmd_common-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:09
UTC
jdmd_nv_jdm-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:09
UTC
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Output Fields Table 18 on page 128 describes the output fields for the show system cpu command.
Output fields are listed in the approximate order in which they appear.
Sample Output
VNF                   CPU-Id(s)    Usage    Qemu Pid    State
--------------------  -----------  -------  ----------  -----------
...                                                     Running
Description Display the JDM storage details such as storage size, used space, and available space.
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Output Fields Table 19 on page 130 describes the output fields for the show system storage command.
Output fields are listed in the approximate order in which they appear.
VNF Storage Information—Displays the following guest network function (GNF) storage information:
Sample Output
Description Display the memory usage information about the host server, Juniper Device Manager
(JDM), and guest network functions (GNF).
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Output Fields Table 20 on page 132 describes the output fields for the show system memory command.
Output fields are listed in the approximate order in which they appear.
Memory Usage Information—Displays the following memory usage information about the host server and JDM:
• Total—Total memory.
• Used—Used memory.
• Free—Available memory.
VNF Memory Information—Displays the following memory information about the GNFs:
• Name—GNF name.
• Resident—The memory used by the GNFs.
• Actual—Actual memory.
Sample Output
Description Display the statistics information for physical interface, JDM interface, and interfaces per
guest network function (GNF).
Related • Generic Guidelines for Using JDM Server Commands on page 111
Documentation
• request virtual-network-functions on page 119
Output Fields Table 21 on page 134 describes the output fields for the show system network command.
Output fields are listed in the approximate order in which they appear.
Physical Interfaces
Name—Name of the physical interface.
Sample Output
Physical Interfaces
---------------------------------------------------------------------------------------------------------------------------------------------
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error
Rcvd Drop Trxd Bytes Trxd Packets Trxd Error Trxd Drop Flags
-------- ----- ------- ----------------- ------------ ------------ ----------
--------- ------------ ------------ ---------- --------- ------
enp3s0f1 4 1500 00:25:90:b5:75:51 8787662837 51975964 0
538926 40009223 407379 0 0 BMPRU
ens3f1 7 1500 3c:fd:fe:08:87:02 1019880532 16723722 0
11243028 19265494115 31971968 0 0 BMPRU
ens3f0 3 1500 3c:fd:fe:08:87:00 5951717054 81330473 0
11226877 139135292735 124708008 0 0 BMPRU
enp3s0f2 5 1500 00:25:90:b5:75:52 3343179197 40806691 0
461955 3449064446 12191724 0 0 BMRU
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error Rcvd
Drop Trxd Bytes Trxd Packets Trxd Error Trxd Drop Flags
-------- ----- ----- ----------------- ------------ ------------ ----------
--------- ------------ ------------ ---------- --------- ------
bme1 1433 1500 52:54:00:21:20:2e 502730 4506 0 0
477328 2619 0 0 BMRU
jmgmt0 1439 1500 00:f1:60:3d:20:22 4991675 66429 0 2862
100548 891 0 0 BMRU
bme2 1435 1500 52:54:00:88:b5:dd 2930 33 0 0
3466 39 0 0 ABMRU
cb0.4002 2 1500 00:f1:60:3d:20:20 12204921 209269 0 0
3688591023 195579 0 0 ABMRU
cb1.4002 3 1500 00:f1:60:3d:20:21 161850 3026 0 0
204784 3029 0 0 ABMRU
.......................
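If you need to post-process this output, the wrapped two-line interface rows above can be recombined by position. A minimal parsing sketch, assuming the 13 columns shown in the table header; the function and dictionary keys are hypothetical, not part of any Juniper tool:

```python
def parse_interface_row(line1: str, line2: str) -> dict:
    """Join one wrapped table row and map its 13 columns to named fields."""
    tokens = line1.split() + line2.split()
    keys = ["name", "index", "mtu", "hw_address",
            "rcvd_bytes", "rcvd_packets", "rcvd_error", "rcvd_drop",
            "trxd_bytes", "trxd_packets", "trxd_error", "trxd_drop", "flags"]
    row = dict(zip(keys, tokens))
    for k in keys[1:3] + keys[4:12]:      # all numeric columns
        row[k] = int(row[k])
    return row

# First interface from the sample output above:
row = parse_interface_row(
    "enp3s0f1 4 1500 00:25:90:b5:75:51 8787662837 51975964 0",
    "538926 40009223 407379 0 0 BMPRU",
)
print(row["name"], row["rcvd_bytes"], row["flags"])   # enp3s0f1 8787662837 BMPRU
```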