NetEngine 8000 M14, M8 and M4 V800R022C10 Configuration Guide 20 NAT and IPv6 Transition
Configuration Guide
Issue 01
Date 2023-03-31
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: [email protected]
Contents
1 Configuration............................................................................................................................1
1.1 NAT and IPv6 Transition....................................................................................................................................................... 1
1.1.1 NAT Configuration.............................................................................................................................................................. 1
1.1.1.1 NAT Description................................................................................................................................................................ 1
1.1.1.1.1 Overview of NAT........................................................................................................................................................... 1
1.1.1.1.2 Understanding NAT...................................................................................................................................................... 2
1.1.1.1.3 NAT Reliability............................................................................................................................................................. 34
1.1.1.1.4 NAT Security................................................................................................................................................................. 50
1.1.1.1.5 NAT Logging................................................................................................................................................................ 51
1.1.1.1.6 Application Scenarios for NAT............................................................................................................................... 96
1.1.1.1.7 Terminology for NAT...............................................................................................................................................112
1.1.1.2 NAT Configuration...................................................................................................................................................... 112
1.1.1.2.1 Overview of NAT...................................................................................................................................................... 112
1.1.1.2.2 Configuration Roadmap.........................................................................................................................................113
1.1.1.2.3 Feature Requirements for NAT............................................................................................................................ 113
1.1.1.2.4 Configuring NAT Session and Bandwidth Resources................................................................................... 113
1.1.1.2.5 Configuring Basic NAT Functions....................................................................................................................... 115
1.1.1.2.6 Configuring Distributed NAT................................................................................................................................127
1.1.1.2.7 Configuring Centralized NAT............................................................................................................................... 135
1.1.1.2.8 Configuring NAT Load Balancing....................................................................................................................... 141
1.1.1.2.9 Configuring a Static Source Tracing Algorithm............................................................................................. 145
1.1.1.2.10 Configuring Internal Host Access Through Public IP Addresses............................................................ 149
1.1.1.2.11 Configuring the NAT ALG Function................................................................................................................. 152
1.1.1.2.12 Configuring NAT Reliability................................................................................................................................ 154
1.1.1.2.13 Configuring NAT Security................................................................................................................................... 163
1.1.1.2.14 Maintaining NAT....................................................................................................................................................172
1.1.1.2.15 Adjusting NAT Performance...............................................................................................................................184
1.1.1.2.16 Configuration Examples for NAT......................................................................................................................187
1.1.2 L2NAT Configuration..................................................................................................................................................... 328
1.1.2.1 L2NAT Description...................................................................................................................................................... 328
1.1.2.1.1 Overview of L2NAT................................................................................................................................................. 328
1.1.2.1.2 Principles..................................................................................................................................................................... 329
1.1.2.1.3 Terminology............................................................................................................................................................... 331
Figures
Figure 1-31 Master/slave device switchover triggered by a NAT service board fault............................45
Figure 1-32 VPN over NAT inter-chassis backup................................................................................................ 46
Figure 1-33 NAT inter-chassis cold backup.......................................................................................................... 47
Figure 1-34 Centralized backup of distributed NAT.......................................................................................... 48
Figure 1-35 NAT process on the distributed NAT device................................................................................. 48
Figure 1-36 NAT process on the centralized device when a traffic switchover is performed............. 49
Figure 1-37 NAT process on the distributed device........................................................................................... 49
Figure 1-38 NAT process on the centralized device when a traffic switchover is performed............. 50
Figure 1-39 Distributed NAT networking.............................................................................................................. 97
Figure 1-40 Centralized NAT networking.............................................................................................................. 98
Figure 1-41 NAT deployment on an enterprise network................................................................................. 98
Figure 1-42 Outbound interface-based NAT networking................................................................................ 99
Figure 1-43 Dual NAT networking........................................................................................................................ 100
Figure 1-44 NAT load balancing in centralized mode.................................................................................... 101
Figure 1-45 NAT load balancing in distributed mode.................................................................................... 102
Figure 1-46 NAT application in the hairpin scenario...................................................................................... 102
Figure 1-47 Centralized NAT444 inter-board hot backup solution based on port pre-allocation.. 104
Figure 1-48 Distributed NAT444 inter-board hot backup based on port pre-allocation....................105
Figure 1-49 Centralized NAT444 inter-chassis hot backup solution based on port pre-allocation........ 106
Figure 1-50 Distributed NAT444 inter-chassis hot backup solution based on port pre-allocation........ 107
Figure 1-51 Centralized backup of distributed NAT inter-board hot backup......................................... 108
Figure 1-52 NAT Easy IP and a GRE tunnel sharing an interface address.............................................. 108
Figure 1-53 Using ping to check a NAT network............................................................................................. 109
Figure 1-54 Networking diagram for a static source tracing algorithm.................................................. 146
Figure 1-55 NAT networking................................................................................................................................... 187
Figure 1-56 NAT networking................................................................................................................................... 194
Figure 1-57 NAT networking................................................................................................................................... 199
Figure 1-58 Example for configuring the centralized NAT function..........................................................202
Figure 1-59 Distributed NAT load balancing..................................................................................................... 206
Figure 1-60 Centralized NAT load balancing..................................................................................................... 213
Figure 1-61 Centralized NAT providing backup for distributed NAT.........................................................217
Figure 1-62 Static NAT source tracing................................................................................................................. 225
Figure 1-63 Centralized NAT load balancing..................................................................................................... 230
Figure 1-64 Distributed NAT static source tracing and load balancing................................................... 234
Figure 1-65 Internal server networking............................................................................................................... 241
Figure 1-66 Networking of the NAT internal server....................................................................................... 247
Figure 1-67 Inter-chassis cold backup associated with CGN service boards.......................................... 252
Figure 1-68 Networking of syslog source tracing for NAT flexible flows................................................ 258
Figure 1-69 Networking for configuring NAT to translate both the source and destination IP addresses........ 266
Figure 1-70 Scenario in which NAT traffic distribution on an outbound interface, easy IP, and the hairpin function are configured........ 272
Figure 1-71 Configuring a NAT Easy IP address pool and a GRE tunnel to share an interface address........ 274
Figure 1-72 Networking for configuring outbound-interface NAT load balancing on an enterprise network........ 279
Figure 1-73 Networking for configuring dual-uplink NAT and an internal server on a campus network........ 286
Figure 1-74 Example for configuring IPoEoVLAN access together with NAT........................................ 292
Figure 1-75 Distributed VPN NAT..........................................................................................................................296
Figure 1-76 Networking for configuring VPN NAT..........................................................................................300
Figure 1-77 VPN NAT traffic diversion in an L3VPN scenario..................................................................... 309
Figure 1-78 Configuring NAT traffic diversion in an SRv6 scenario.......................................................... 320
Figure 1-79 Address translation process of L2NAT..........................................................................................329
Figure 1-80 Networking for the DS-Lite solution.............................................................................................335
Figure 1-81 Packet encapsulation and decapsulation.................................................................................... 338
Figure 1-82 DS-Lite internal server....................................................................................................................... 339
Figure 1-83 Distributed DS-Lite solution.............................................................................................................341
Figure 1-84 Centralized DS-Lite solution............................................................................................................ 343
Figure 1-85 Networking diagram for a DS-Lite application......................................................................... 401
Figure 1-86 Networking diagram for a DS-Lite application......................................................................... 407
Figure 1-87 Networking diagram of DS-Lite over L3VPN............................................................................. 412
Figure 1-88 Centralized DS-Lite providing backup for centralized DS-Lite............................................. 420
Figure 1-89 Process of establishing a PCP connection................................................................................... 428
Figure 1-90 PCP connection application during P2P data transmission...................................................432
Figure 1-91 PCP server with a static IP address in a distributed NAT444 scenario..............................441
Figure 1-92 PCP server with a static IP address in a distributed DS-Lite scenario............................... 445
Figure 1-93 NAT64 PAT principles......................................................................................................................... 453
Figure 1-94 NAT64 internal server scenario...................................................................................................... 455
Figure 1-95 NAT64 DNS ALG networking...........................................................................................................458
Figure 1-96 NAT64 solution..................................................................................................................................... 461
Figure 1-97 NAT64 networking diagram............................................................................................................ 489
Figure 1-98 Configuring the internal NAT64 server function...................................................................... 494
Figure 1-99 Inter-board backup in hot backup mode.....................................................................................499
Figure 1-100 Inter-board backup in cold/warm backup mode................................................................... 499
Figure 1-101 Inter-chassis hot backup................................................................................................................. 502
Figure 1-102 Inter-chassis hot backup scenario............................................................................................... 503
Figure 1-103 Master/backup switchover triggered by a private-network link fault............................ 504
Figure 1-104 Master/backup switchover triggered by a public-network link fault.............................. 505
Figure 1-105 Master/backup switchover triggered by a service board fault.......................................... 506
Figure 1-106 VPN over NAT inter-chassis hot backup................................................................................... 507
Figure 1-107 Centralized NAT444 inter-board hot backup solution based on port pre-allocation........ 508
Figure 1-108 Distributed NAT444 inter-board hot backup based on port pre-allocation................. 509
Figure 1-109 Centralized NAT444 inter-chassis hot backup solution based on port pre-allocation........ 510
Figure 1-110 Distributed NAT444 inter-chassis hot backup solution based on port pre-allocation........ 511
Figure 1-111 Centralized backup of distributed NAT inter-board hot backup...................................... 512
Figure 1-112 CGN deployment............................................................................................................................... 513
Figure 1-113 2:1 board expansion for inter-board hot backup in distributed NAT444.......................527
Figure 1-114 Distributed DS-Lite networking................................................................................................... 537
Figure 1-115 Centralized DS-Lite networking................................................................................................... 545
Figure 1-116 Networking of inter-board hot backup in a centralized NAT64 scenario..................... 551
Figure 1-117 Networking diagram for centralized NAT444 inter-chassis hot backup........................ 556
Figure 1-118 Networking diagram of configuring distributed dual-device inter-chassis backup... 566
Figure 1-119 Networking diagram of configuring distributed dual-device inter-chassis backup... 579
Figure 1-120 Distributed inter-chassis backup networking in a scenario where global VE interfaces are used for SRv6 access........ 597
Figure 1-121 Inter-chassis hot backup in a NAT load balancing scenario.............................................. 614
Figure 1-122 Networking for configuring centralized NAT load balancing plus HA inter-chassis hot backup........ 631
Figure 1-123 VPN over centralized NAT inter-chassis hot backup.............................................................645
Figure 1-124 VPN over centralized NAT inter-chassis hot backup.............................................................657
Figure 1-125 Networking diagram for configuring distributed dual-device inter-chassis backup..668
Figure 1-126 Networking diagram for configuring distributed dual-device inter-chassis backup..684
Figure 1-127 Inter-chassis backup networking................................................................................................. 700
Figure 1-128 Inter-chassis HA backup................................................................................................................. 701
Figure 1-129 Inter-chassis backup networking................................................................................................. 707
Figure 1-130 Networking of inter-chassis hot backup in a centralized NAT64 scenario................... 720
Figure 1-131 Basic MAP-T/MAP-E architecture................................................................................................. 728
Figure 1-132 Basic MAP-T/MAP-E architecture................................................................................................. 728
Figure 1-133 MAP-T Data Processing Flowchart.............................................................................................. 729
Figure 1-134 MAP-E Data Processing Flowchart.............................................................................................. 729
Figure 1-135 Concept of A+P................................................................................................................................. 730
Figure 1-136 Mapping between IPv4+port information and IPv6 addresses......................................... 731
Figure 1-137 Interface ID..........................................................................................................................................732
Figure 1-138 FMR....................................................................................................................................................... 733
Figure 1-139 Destination IPv6 address format................................................................................................. 734
Figure 1-140 Option 94............................................................................................................................................. 735
Figure 1-141 Option 95............................................................................................................................................. 735
Figure 1-142 Option 89............................................................................................................................................. 736
Figure 1-143 Option 93............................................................................................................................................. 737
Figure 1-144 Option 90............................................................................................................................................. 737
Figure 1-145 Option 91............................................................................................................................................. 738
Figure 1-146 Centralized MAP-T............................................................................................................................ 740
Figure 1-147 Distributed MAP-T.............................................................................................................................749
1 Configuration
Definition
Network address translation (NAT) translates IP addresses between private and
public networks, which enables multiple private network users to use only a small
number of public IPv4 addresses to access external networks. A NAT device
translates private IPv4 addresses in packets to public IPv4 addresses and records
the mapping before users access the Internet.
Purpose
As the Internet develops and network applications grow, IPv4 address exhaustion
constrains network development. Before IPv6 can be widely used to replace IPv4
that has been running on network devices and is bearing existing applications,
some IPv4-to-IPv6 transition techniques can be used to alleviate IPv4 address
shortage.
NAT provides a transition solution that reuses IP addresses to slow down the
tendency towards IPv4 address exhaustion, which helps smooth transition from
IPv4 to IPv6.
Benefits
NAT offers the following benefits to enterprise users:
● Enhances the ability to deploy security and reliability services, reducing
router costs.
● Provides mature service provisioning techniques, facilitating NAT deployment.
Basic Concepts
Before the basic NAT process is introduced, familiarize yourself with the following
concepts:
● NAT service board: is a physical board that has the NAT capability.
● NAT address pool: is an address pool used to manage NAT address resources.
● NAT traffic diversion: uses diversion rules to identify user packets that need to
be translated using NAT and direct the packets to a NAT service board for
NAT translation.
● NAT instance: is a service configuration unit that is bound to NAT service
boards, address pools, and other NAT attributes.
Basic Process
The following figure shows the forward and reverse NAT implementation
processes. The implementation varies according to deployment modes, address
pool types, and traffic diversion modes. The following sections describe NAT
classification, NAT address pools and their translation basis, and NAT port
allocation.
NAT Conversion
NAT Classification
NAT translates between private and public IP addresses carried in the headers of
IP data packets. Various NAT modes are defined based on classification rules.
NOTE
In addition to the preceding NAT modes, the following NAT modes are available:
● NAT44: IPv4 addresses are converted to IPv4 addresses.
● NAT444: A CPE performs NAT for user packets once, and a router performs NAT again
for the translated packets. This deployment mode is called NAT444. The router cannot
determine whether the CPE performs NAT. The NAT44 and NAT444 configurations are
the same. Therefore, NAT44 and NAT444 are called NAT for short.
● NAT64: The IP address before translation is an IPv6 address and after translation is an
IPv4 address. For more information, see the NAT64 chapter.
● NAT46: The IP address before translation is an IPv4 address and after translation is an
IPv6 address. NAT46 and NAT64 are reverse to each other. This document uses NAT64
as an example. NAT64 principles can be used as a reference for NAT46. For more
information, see the NAT64 chapter.
● NAPT
On the network shown in Figure 1-1, three data packets with internal IP
addresses reach the NAT device. Packet 1 and packet 2 are from the same
internal IP address but have different source port numbers. Packet 1 and
packet 3 are from different internal IP addresses but have the same source
port number. Through NAPT mapping, the source IP addresses of the three
data packets are translated into the same external address, but each data
packet is assigned with a different source port number. Therefore, the
difference between packets is retained. When the response packets of these
packets arrive at the NAT device, the NAT device can still identify the internal
hosts to which the packets should be forwarded based on the destination IP
addresses and destination port numbers of the response packets.
In NAPT mode, NAT devices translate both IP addresses and port numbers in
the packets. Therefore, using NAPT fully utilizes IP address resources to allow
more internal hosts to access the external network simultaneously. In
addition, NAPT can be performed on packet fragments. In contrast, basic NAT
requires a public IP address for each private IP address, which is a waste of IP
addresses. For this reason, NAPT is more widely used in actual applications.
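The NAPT behavior described above can be illustrated with a minimal Python sketch (not device code; the class and field names are only illustrative). Each private (IP, port) pair is mapped to the shared public address with a unique port, and the reverse table lets response packets be delivered back to the correct internal host:

```python
# Conceptual NAPT sketch: many private (IP, port) pairs share one public IP,
# each receiving a distinct public port so sessions remain distinguishable.

class Napt:
    def __init__(self, public_ip, port_base=10000):
        self.public_ip = public_ip
        self.next_port = port_base
        self.out = {}    # (priv_ip, priv_port) -> (pub_ip, pub_port)
        self.back = {}   # (pub_ip, pub_port) -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.out:
            pub = (self.public_ip, self.next_port)
            self.next_port += 1
            self.out[key] = pub
            self.back[pub] = key
        return self.out[key]

    def translate_in(self, pub_ip, pub_port):
        # Reverse translation for response packets from the external network.
        return self.back.get((pub_ip, pub_port))

napt = Napt("203.0.113.1")
m1 = napt.translate_out("192.168.1.10", 1025)  # packet 1
m2 = napt.translate_out("192.168.1.10", 1026)  # packet 2: same host, new port
m3 = napt.translate_out("192.168.1.11", 1025)  # packet 3: same port, other host
# All three share the public IP but get distinct public ports, and a response
# to m2 is mapped back to 192.168.1.10:1026.
```

The destination of a response packet (public IP plus port) is all the device needs to recover the original host, which is why one public address can serve many internal hosts.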
● No-PAT NAT
No-PAT NAT is called basic NAT. It implements one-to-one translation
between private and public IP addresses. The public and private port numbers
remain unchanged after NAT.
On the network shown in Figure 1-2, after two data packets with different
internal IP addresses and different source port numbers arrive at the NAT
device, the NAT device translates the source IP addresses of the two data
packets into different external IP addresses and keeps the source port
numbers unchanged.
Generally, the No-PAT mode is used in industries with high privacy
requirements. For example, customers in the financial industry require that
internal addresses be hidden, and some financial applications have a fixed
port requirement, which means that the port number cannot be changed
after NAT. In this case, the No-PAT function must be supported.
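For contrast with NAPT, the following Python sketch models No-PAT (basic NAT) under the same illustrative assumptions: each private IP consumes one public IP from a pool, and port numbers pass through untouched, as fixed-port applications require:

```python
# Conceptual No-PAT (basic NAT) sketch: one-to-one IP translation from a
# public address pool; port numbers are never changed.

class NoPat:
    def __init__(self, pool):
        self.free = list(pool)   # public addresses not yet assigned
        self.map = {}            # private IP -> public IP

    def translate(self, priv_ip, priv_port):
        if priv_ip not in self.map:
            if not self.free:
                raise RuntimeError("public address pool exhausted")
            self.map[priv_ip] = self.free.pop(0)
        # The source port is preserved, unlike in NAPT.
        return self.map[priv_ip], priv_port

nopat = NoPat(["203.0.113.10", "203.0.113.11"])
a = nopat.translate("192.168.1.10", 5000)
b = nopat.translate("192.168.1.11", 5000)
# Different hosts receive different public IPs; port 5000 is unchanged.
```

Note the cost visible in the sketch: the pool is exhausted after as many hosts as it has addresses, which is why No-PAT is reserved for scenarios that genuinely need unchanged ports.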
● Dedicated NAT
NAT is implemented through a dedicated NAT service board.
As dedicated boards do not provide any interface, they cannot be used for
direct service access. They perform NAT only when receiving diverted service
packets. After NAT, the packets need to be forwarded to other interface
boards before being forwarded to the next-hop device.
● On-board NAT
NAT is implemented through the main control board of a device. This type of
board can be used for direct service access and forwarding as well as NAT.
Dedicated NAT provides better performance than on-board NAT and is more
suitable for scenarios with a large number of NAT services.
To associate a NAT address pool with a NAT board, the following concepts are
introduced on the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M:
● service-location group: used to specify the board where the NAT task is
performed.
● service-instance group: bound to a service-location group.
● NAT instance: NAT processing policies may differ on various user devices. For
example, a flow rate limit for each user is specified in policies. To facilitate
unified management, the concept of NAT instances is introduced. Users with
the same policy can be assigned to the same instance. Unified address
segments, address allocation policies, and security policies can be configured
in the instance.
The NAT instance must be bound to a specific service-instance group so that
user packets in the NAT instance can be forwarded to the specified board for
NAT processing.
After a NAT instance is created, specify a NAT address pool for the NAT
instance. In this way, the private IP addresses of users can be replaced with
the public IP addresses in the address pool during NAT.
NOTE
After an address pool is specified in a NAT instance, the device generates a user
network route (UNR) destined for the network segment or IP address to route reverse
packets (public network to private network).
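The chain of bindings described above (NAT instance, service-instance group, service-location group, board) can be sketched as a small Python model; the names and slot number here are hypothetical, not taken from a real configuration:

```python
# Conceptual model of the binding chain: a NAT instance is bound to a
# service-instance group, which is bound to a service-location group that
# identifies the board performing NAT for that instance's traffic.

service_locations = {1: {"slot": 9}}                      # service-location 1 -> slot 9
service_instance_groups = {"group1": {"service_location": 1}}
nat_instances = {
    "nat1": {
        "service_instance_group": "group1",
        "address_pool": ("203.0.113.1", "203.0.113.254"),  # public range for translation
    },
}

def board_for_instance(name):
    """Resolve which board slot performs NAT for a given NAT instance."""
    sig = nat_instances[name]["service_instance_group"]
    loc = service_instance_groups[sig]["service_location"]
    return service_locations[loc]["slot"]
```

Resolving `board_for_instance("nat1")` walks instance, group, location, slot, which mirrors how diverted user packets in an instance reach the specified board for processing.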
NAT Easy IP
By default, an IP address in a NAT address pool cannot be the same as any IP
address already used by an interface. However, because public address
resources are limited, users on enterprise networks often cannot obtain
enough public addresses and must run NAT with only a few public IP
addresses. To make full use of limited public address resources, the
NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M can use addresses in a
NAT address pool as interface addresses. This is called NAT Easy IP.
Using DAPs, multiple NAT devices can share address resources of the same
address server, saving address space.
● 3-tuple NAT
3-tuple NAT, also called full-cone NAT, translates IP addresses and filters
packets based on the 3-tuple information carried in packets. The 3-tuple
information includes the source IP address, source port number, and protocol
number.
A NAT device creates 3-tuple entries for address and port translation. After
NAT mapping, the source address and source port number of a packet are
translated into the same external address and port number, regardless of
whether the destination addresses of the packets are the same. In addition,
the NAT device allows external hosts to access the hosts on the internal
network through the post-translated address and port number. 3-tuple NAT
enables private hosts connected to different NAT devices to communicate.
Carrier networks primarily use 3-tuple NAT.
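The full-cone behavior described above can be sketched in a few lines. This is an illustrative model, not the device's implementation: one mapping per (source IP, source port, protocol), so every destination sees the same translated address, and any external host can reach the internal host through the mapped public address and port. The public IP and starting port are assumed example values.

```python
# Illustrative full-cone (3-tuple) NAT model: one mapping per
# (source IP, source port, protocol), reused for all destinations.
class FullConeNAT:
    def __init__(self, public_ip, first_port=5000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out_map = {}   # (src_ip, src_port, proto) -> (pub_ip, pub_port)
        self.in_map = {}    # (pub_ip, pub_port, proto) -> (src_ip, src_port)

    def translate_out(self, src_ip, src_port, proto):
        key = (src_ip, src_port, proto)
        if key not in self.out_map:
            mapped = (self.public_ip, self.next_port)
            self.next_port += 1
            self.out_map[key] = mapped
            self.in_map[(mapped[0], mapped[1], proto)] = (src_ip, src_port)
        return self.out_map[key]

    def translate_in(self, pub_ip, pub_port, proto):
        # Full-cone behavior: any external host may use the mapping.
        return self.in_map.get((pub_ip, pub_port, proto))
```

Because the key excludes the destination, packets from Host A to both Host B and Host C are translated to the same public address and port, which is what allows hosts behind different NAT devices to communicate.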
In Figure 1-6, Host A is an internal host, and Host B and Host C are external
hosts.
The IP address and port number of Host A are A and 2011, respectively. Those
of Host B are B and 2012, respectively. When internal Host A visits external
Host B, after NAT, the source IP address A is replaced with N, the source port
To further improve the usage of public IP addresses, NAT can also translate the
port numbers in IP packets. Port allocation means that a device allocates port
numbers to IP packets during NAT.
In a centralized scenario, port allocation is triggered when the NAT device receives
the first valid packet from a private IP address. In a distributed scenario, port
allocation is triggered when a user goes online.
Port Pre-Allocation
Port pre-allocation, also called the port range mode, enables a NAT device to pre-
allocate a public IP address and port segment to a private IP address when a NAT
device is mapping the private IP address to the public IP address. The public IP
address and ports in the port segment are used in NAT mapping for the private IP
address.
Port pre-allocation can be used in two scenarios:
● In a distributed deployment scenario, a port range is pre-allocated when a
user goes online and released when the user goes offline.
● In a centralized deployment scenario, when data flows reach the device, port
range (identified by the same source IP address) pre-allocation is triggered
and the port range is released through the aging mechanism.
Semi-Dynamic Port Allocation
When users go online, a NAT device assigns an initial port segment to each user.
If the number of used ports exceeds the initial port segment size, the NAT
device assigns an extended port segment. The maximum number of extensions
indicates the number of times extended port segments can be assigned.
Per-port allocation
Per-port allocation is a dynamic port allocation mode in which a single port,
instead of a port segment, is allocated each time a session is created. This
mode achieves the highest port usage of a public IP address and is therefore
used when few public IPv4 addresses are available.
The following table lists the advantages and disadvantages of different port
allocation modes.
● Port pre-allocation: A fixed port range is pre-allocated to the users who are
using the same public IP address. However, whether the port range meets the
requirement cannot be guaranteed.
● Dynamic port allocation: A fixed port range is pre-allocated to the users who
are using the same public IP address. In this mode, if the port range exceeds
the limit, the number of extension times can be set to any value. However, if a
user needs to use a lot of ports, the number of ports allocated to other users
may be insufficient. In this scenario, the semi-dynamic port allocation mode has
an advantage because it allows for even port allocation to each user.
The port pre-allocation mode selected in Table 1-2 is used as an example. After a
port range is configured in a NAT/DS-Lite/NAT64 instance, the device pre-
allocates a port range to each private network user for NAT. The size of a port
range is specified in the instance. However, the start and end port numbers are
automatically generated by a device.
As shown in Figure 1-7, after the port range size 1024 is configured on the CGN
device, the port range allocated to CPE1 is [1024, 2047], and the port range
allocated to CPE2 is [2048, 3071].
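The allocation in Figure 1-7 can be reproduced with a simple formula. This is a minimal sketch consistent with the example values (range size 1024, CPE1 gets [1024, 2047], CPE2 gets [2048, 3071]); the actual start and end numbers are generated by the device, so the formula here is an assumption for illustration only.

```python
# Illustrative port pre-allocation: each user receives a fixed-size
# range; ports below the range size (e.g. well-known ports) stay
# unallocated, matching the Figure 1-7 example.
RANGE_SIZE = 1024

def port_range(user_index, range_size=RANGE_SIZE):
    # user_index 0 is the first user (CPE1), 1 the second (CPE2), ...
    start = range_size * (user_index + 1)
    return (start, start + range_size - 1)
```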
NOTE
In a single NAT instance, only one public IP address can be assigned to a private network
user.
By default, sessions of different protocols using the same private IP address cannot share
the same public port during port pre-allocation. After the port multiplexing function is
enabled, for the same private IP address, TCP sessions can share a public port with sessions
of other protocols.
Generally, NAT selects public IP addresses in an address pool and assigns the
public IP addresses to private network packets. If user packets need to be traced,
the NAT device needs to send source tracing logs. The NetEngine 8100 M,
NetEngine 8000E M, NetEngine 8000 M also supports the static NAT source
tracing algorithm, which enables user source tracing without sending source
tracing logs.
Fundamentals
The static source tracing algorithm provides a formula with the input of a private
IP address range, a public IP address range, a port range size, and a port range
and the output of the mapping between each private IP address and a pair of a
public IP address range and a port range. The algorithm used in NAT translation
defines mappings between private IP addresses and a pair of a public IP address
range and a port range. A network element can use the algorithm to perform NAT
user tracing if the network element obtains the NAT source tracing parameters
the same as those configured on a NAT device, without receiving source tracing
logs sent by the NAT device.
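The formula described above can be sketched as a deterministic mapping. The parameters and arithmetic below are assumptions for illustration, not Huawei's actual algorithm: given a private address range, a public address range, and a port range size, each private IP maps to exactly one (public IP, port range) pair, so any element holding the same parameters can trace users offline.

```python
# Illustrative static source-tracing mapping: deterministic, so a log
# server with the same parameters can reverse it without per-session logs.
import ipaddress

def static_mapping(private_ip, private_base, public_base,
                   port_base=1024, range_size=1024):
    # Index of the private address within its range.
    index = (int(ipaddress.ip_address(private_ip))
             - int(ipaddress.ip_address(private_base)))
    # How many port ranges fit on one public IP (ports below port_base reserved).
    ranges_per_public_ip = (65536 - port_base) // range_size
    public_ip = ipaddress.ip_address(
        int(ipaddress.ip_address(public_base)) + index // ranges_per_public_ip)
    start = port_base + (index % ranges_per_public_ip) * range_size
    return str(public_ip), (start, start + range_size - 1)
```

With a range size of 1024, one public IP covers 63 private hosts here, which shows how a small public pool can serve a large private range while the mapping stays fixed.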
Benefits
Different from NAT444 source tracing, the static source tracing algorithm used on
a NAT device does not send source tracing logs to the log server. Source tracing is
completed by searching the mapping between private and public network
information stored in the static source tracing algorithm file on a source tracing
device (for example, a log server).
With the static source tracing algorithm, a small number of public IP addresses
can be allocated to a large number of private IP addresses, and the mapping
between private and public IP addresses as well as the port range remains
unchanged, which facilitates maintenance.
After user packets arrive at a NAT device, the device does not perform NAT on
them directly. Instead, it diverts packets matching specified ACL rules to a NAT
service board for processing. This implementation is called NAT traffic
diversion. NAT traffic diversion is performed in either inbound-interface or
outbound-interface mode.
After the traffic policy is applied to the inbound interface, NAT translation is
performed for packets matching the ACL rules defined in the traffic policy. The
packets that do not match the traffic policy are forwarded based on the regular
process.
In the carrier scenario, NAT is required for all user traffic. The inbound-interface
traffic diversion mode is recommended.
a. After user packets on the private network reach a device, the device
matches the 5-tuple information in the user packets against the ACL rules
defined in a traffic policy bound to an interface.
▪ If the packets do not match any ACL rule, the packets are forwarded
according to the regular process.
b. The NAT service board allocates a public IP address and port number to
each user packet based on 5-tuple information carried in the user packet,
and translates the private IP address and port number in the user packet
into the public IP address and port number, respectively.
c. The NAT board searches the FIB table for a specified outbound interface
and forwards the converted packets to a next hop through the outbound
interface.
● Forwarding process in inbound-interface traffic diversion mode (reverse
traffic)
▪ If FIB entries hit NAT address pool routes, the device forwards the
packets to the NAT service board for processing.
▪ If FIB entries hit other types of routes, the device forwards the
packets based on the regular forwarding process.
c. The NAT service board replaces the destination address and port number
of each user packet with the corresponding private address and port
number based on the reverse NAT mapping.
d. The NAT service board searches for the outbound interface based on the
post-translated destination private IP addresses and sends each post-
translated packet to the private network through the selected outbound
interface.
▪ If the packets match an ACL rule, the packets are diverted to the
NAT service board for address translation.
▪ If the packets do not match any ACL rule, the packets are directly
sent to the next hop through a specified interface.
d. The NAT service board replaces the private IP address and port number in
each of the user packets with the allocated public IP address and port
number.
e. The NAT service board re-searches the FIB table for the outbound
interface information based on the destination address (the ACL rules on
the outbound interface are no longer matched) and sends the packets to
the next hop through the selected outbound interface.
● Forwarding process in outbound-interface traffic diversion mode (reverse
traffic)
The forwarding process is the same as the reverse traffic forwarding process
in inbound-interface traffic diversion mode. For details, see Forwarding
process in inbound-interface traffic diversion mode (reverse traffic).
NAT Server
NAT Server
For security purposes, most private network hosts do not expect access from
public network users. However, in some applications, public network users need to
access a private network server, for example, a WWW server or an FTP server on
a private network. In basic NAT or PAT mode, NAT entries cannot be dynamically
created for access initiated by public network users. As a result, public network
users cannot access private network hosts.
To address this problem, the NAT Server function (also called NAT internal server)
can be configured. This function creates mappings between private IP addresses
+port numbers and public IP addresses+port numbers on a NAT device. With this
function, the NAT device can reversely translate public IP addresses to private IP
addresses so that users on a public network can access the internal servers.
NOTE
After the mapping is specified, a UNR is generated on the device to guide the forwarding of
reverse packets (packets from the public network to the private network).
On the network shown in Figure 1-11, the NAT server function is enabled on a
NAT device, and a private network server's IP address+port number
(192.168.0.2:80) are mapped to a public network IP address+port number
(11.1.1.2:100). When a public network host needs to access the server
192.168.0.2, the NAT device converts 11.1.1.2:100 to 192.168.0.2:80 so that the
service request can reach the server 192.168.0.2 on the private network. No such
conversion is performed when the host 192.168.0.3 accesses the server
192.168.0.2 on the same private network.
The following uses the network shown in Figure 1-11 as an example to describe
the implementation of the NAT server function.
● Static NAT conversion is configured on the NAT device. The NAT device
generates a static NAT entry and a UNR.
● A public network host sends a request for accessing a private network server,
and the NAT device receives the service request.
● The NAT device searches for a static NAT entry that matches the request
packet's destination IP address+port number and converts the destination IP
address+port number to the private network IP address+port number
recorded in the matching entry. Then, the NAT device sends the packet to the
target private network server.
● After receiving a response packet from the private network, the NAT device
searches the session table based on the 5-tuple or 3-tuple information of the
packet, translates the packet based on the query result, and sends the packet
to the public network.
The address conversion function can easily enable NAT internal servers to provide
services for public network hosts. For example, you can use 11.11.11.10 as the
external address of a web server and 11.11.11.11 as the external address of an FTP
server to provide services.
The NAT internal server function can be classified as address-level and port-level
internal servers based on whether both IP addresses and port numbers are
translated.
● Address-level NAT for internal servers: During NAT, the IP address alone is
translated, and the port number is not translated. In this mode, one public IP
address is used only by one internal server.
● Port-level NAT for internal servers: During NAT, both the IP address and port
number in each packet are translated. In this mode, one public IP address can
be allocated to multiple internal servers, and different servers can be
distinguished by port numbers.
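The two internal-server modes above differ only in the lookup key used for reverse translation, which can be sketched as follows. The mappings reuse the example addresses from the text (11.11.11.10 for the web server, and 11.1.1.2:100 for the Figure 1-11 server); the helper itself is an illustration, not device behavior.

```python
# Address-level: the whole public IP maps to one server; the port is untouched.
# Port-level: the (public IP, public port) pair selects the server, so one
# public IP can front several internal servers.
address_level = {"11.11.11.10": "192.168.0.2"}           # web server example
port_level = {("11.1.1.2", 100): ("192.168.0.2", 80)}    # Figure 1-11 example

def reverse_translate(dst_ip, dst_port):
    if dst_ip in address_level:
        return address_level[dst_ip], dst_port           # port not translated
    return port_level.get((dst_ip, dst_port))            # None if no mapping
```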
Port Forwarding
If the IP address of an internal host changes frequently and the NAT internal
server function is deployed by specifying the mapping between the private IP
address+port number and the public IP address+port number, the mapping
configuration needs to be modified frequently. To reduce configuration complexity,
the port forwarding function can be deployed on the NetEngine 8100 M,
NetEngine 8000E M, NetEngine 8000 M. With this function, public IP addresses
and port numbers can be dynamically bound to internal hosts.
The port forwarding function dynamically binds public IP addresses and port
numbers to internal entities by specifying the address pool and port range.
As shown in Figure 1-12, the port forwarding mechanism allows for access to an
internal server as follows:
1. A public IP address and port segment are pre-configured.
2. A port forwarding policy is specified for the CPE during user authentication
(access-accept).
3. The BRAS fills in the port forwarding policy with a public IPv4 address
specified for the user and then generates the associated NAT entry.
4. During accounting (accounting-request), the BRAS sends packets carrying the
port forwarding policy to the RADIUS server.
5. The RADIUS server monitors the user's NAT status through the NAT-Port-
Forwarding-Info(26-164) attribute.
Figure 1-12 Using the port forwarding mechanism to access an internal server
The RADIUS server specifies a port forwarding policy for the server and configures
the mapping between the URL and public IPv4 address for the DNS service system.
NAT ALG
The NAT application layer gateway (ALG) provides transparent translation for
some application layer protocols during NAT.
The process of using the ALG function to establish an FTP connection is as follows:
1. The private network host and the public network FTP server establish a TCP
control connection using three-way handshakes.
2. The host sends an FTP Port packet carrying a destination IP address and a
port number to the FTP server to request to establish a data connection.
3. Upon receipt of the FTP Port packet, the ALG-capable NAT device maps the
private IP address and port number carried in the payload to the public IP
address and port number. In this example, the private IP address 192.168.0.10
carried in the payload is translated into a public IP address 1.1.1.1, and private
port 1024 into public port 5000.
4. Upon receipt of the FTP Port packet, the public network FTP server parses the
packet and sends a packet with a destination public IP address of 1.1.1.1 and
a port number of 5000 to initiate a data connection to the private network
host.
5. Upon receipt of the packet, the NAT device translates the destination public IP
address and port number to the private IP address and port number,
respectively, before sending the packet to the host. Then the host can
successfully establish a data connection to the FTP server.
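The payload rewrite in step 3 can be sketched concretely. The FTP PORT command encodes the IP address and port as six comma-separated decimal bytes (the port as two base-256 digits); a minimal illustrative rewrite of the example translation (192.168.0.10:1024 to 1.1.1.1:5000) looks like this:

```python
# Illustrative FTP ALG payload rewrite: replace the address and port
# encoded in a PORT command with the translated public values.
def rewrite_port_command(cmd, new_ip, new_port):
    assert cmd.startswith("PORT ")
    octets = new_ip.split(".")
    p1, p2 = divmod(new_port, 256)   # port encoded as p1*256 + p2
    return "PORT " + ",".join(octets + [str(p1), str(p2)])
```

Because the new command string may differ in length from the original, a real ALG must also adjust TCP sequence numbers; that bookkeeping is omitted here.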
When an FTPS client sends an AUTH packet to an FTP server, the FTPS client receives an error
response packet from the FTP server. The NAT device allows the FTPS AUTH and
its response packets to pass through without discarding them. After receiving the
error response packet, the FTPS client automatically switches to the FTP mode and
continues to exchange data with the FTP server using the FTP ALG function. This
ensures that the basic FTP functions are available and services are not affected.
The data payload of the ICMP error message carries a private IP address. Without
the NAT ALG function, the NAT device forwards the message to the public
network host without processing the private IP address in the data payload. Upon
receipt of the message, the public network host cannot identify the application to
which the message belongs. In addition, the private IP address of the private
network FTP server is leaked to the public network.
With the NAT ALG function, the NAT device translates the private IP address
192.168.0.10 of the FTP server to a public IP address 1.1.1.1 and then forwards the
ICMP message to the public network host. Upon receipt of the ICMP message, the
host can properly identify the FTP application. In addition, the risk of leaking the
private IP address of the FTP server to the public network is eliminated.
Figure 1-18 ALG for SIP when users are attached to different NAT devices
● ALG for SIP when users are attached to the same NAT device
As shown in Figure 1-19, all user addresses are private addresses and the
users are connected to the same NAT device. The SIP packet of UA1 is
processed by the ALG function on the NAT device, and the UAS delivers a
media stream channel to UA1. The NAT device finds that the destination
address is included in an address pool and therefore uses the ALG function to
process the destination information. Subsequently, UA1 and UA2
communicate with each other through the media stream channel.
Figure 1-19 ALG for SIP when users are attached to the same NAT device
● ALG for SIP when users are on private and public networks
As shown in Figure 1-20, UA1 is a private network user, and UA2 is a public
network user. In the direction from UA1 to UA2, the ALG function processes
source information. In the direction from UA2 to UA1, the ALG function
processes destination information.
Figure 1-20 ALG for SIP when users are on private and public networks
Fundamentals
With limited public IP addresses and the ever-increasing private network users and
access bandwidth, a single NAT board fails to meet service deployment
requirements. In this situation, multiple service boards need to be used to provide
more session and bandwidth resources.
After NAT service boards are added, to balance services on these boards for fully
using NAT resources, deploy the NAT load balancing function.
Figure 1-22 shows a centralized NAT load balancing scenario. When multiple user
packets are transmitted, the interface board uses the source IP address (SIP) in the
packets to select a service board based on a hash algorithm. The packets are then
forwarded from the SFU to the selected service board for NAT.
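The selection step can be sketched as follows. The modulo hash below is an assumption for illustration; real devices use their own hash function. What matters is the property the text relies on: all packets with the same source IP land on the same service board.

```python
# Illustrative SIP-based board selection for centralized NAT load
# balancing: a deterministic hash keeps each private host on one board.
import ipaddress

def select_board(src_ip, board_count):
    return int(ipaddress.ip_address(src_ip)) % board_count
```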
During centralized NAT load balancing, the multi-core CPU of a service board can
share multiple global static address pools that are matched by different ACLs. In
addition, a NAT instance can be bound to multiple CPUs to ensure flexible
extension of a single NAT user and ensure that public IP addresses are shared by
different multi-core CPUs.
Figure 1-23 shows a distributed NAT load balancing scenario in which NAT service
boards are deployed on a BRAS. When users go online, load balancing can be
implemented by instance and CPU to associate with NAT for allocation of public
IP addresses. All the NAT instances and CPUs in the same domain are calculated
as a whole, and the NAT instance and CPU with the smallest number of online
users are selected for user login to achieve even load balancing. After a NAT
instance and its associated CPU are selected, public IP addresses are obtained
from this CPU, and both forward and reverse traffic are distributed to the CPU in
the NAT instance for service processing.
During distributed NAT load balancing, multiple CPUs are bound to a NAT
instance. When a BRAS is associated to allocate public IP addresses, multiple CPUs
on the NAT instance are used for load balancing, and user traffic is allocated to
these CPUs for NAT service processing.
NAT backup implements data backup between the service processing boards
(service boards for short) of Carrier Grade NAT devices. It prevents network or service
interruption caused by the failure of a single device or link. This feature improves
device availability and service reliability and ensures stable operation of the carrier
network.
Inter-Board Backup
When a NAT device is equipped with two service boards, you can configure the
active and standby service boards on the NAT device to implement inter-board
backup (also called intra-chassis backup). Inter-board backup allows for a master/
slave switchover upon a fault on the master service board, thereby ensuring data
consistency and service continuity as well as preventing users from perceiving
faults.
Backup Principles
In inter-board backup, the active/standby status is statically configured for NAT
service boards. The active service board establishes NAT sessions, and service
traffic passes through the active service board, not the standby service board.
Once the active service board is faulty, the interface board switches traffic to the
standby service board after detecting the fault.
Currently, inter-board backup includes the cold backup, warm backup, and hot
backup modes. In comparison with cold backup and warm backup, hot backup has
the following differences:
● Hot backup
The standby service board automatically synchronizes NAT sessions with the
active service board, without requiring NAT sessions to be reestablished
during traffic switching.
● Cold/Warm backup
The standby service board does not synchronize NAT sessions with the active
service board. When traffic is switched to the standby service board, NAT
sessions need to be re-established. The re-establishment time is determined
by the number of NAT sessions.
Troubleshooting Mechanism
Table 1-3 lists the comparison between cold backup, warm backup, and hot
backup. Inter-board hot backup supports both centralized and distributed
scenarios for rapid service recovery.
● Inter-board cold backup (centralized NAT): The active service board processes
services, and the standby service board does not back up any tables. When the
active service board becomes faulty, user traffic is interrupted temporarily and
switched to the standby service board; a public IP address is re-allocated, and
NAT sessions are reestablished. After the active service board recovers, traffic is
switched back to it after a delay, and NAT sessions are reestablished.
● Inter-board warm backup (distributed NAT): The active service board processes
services, and the standby service board backs up user table information in real
time. If the active service board fails, user traffic is interrupted for a short period
of time before being switched to the standby service board, and NAT sessions are
re-established using the backup user table information. When the active service
board recovers from the fault, user table information is backed up to it, traffic is
switched back after a delay, and NAT sessions are re-established using the
backup user table information.
● Inter-board hot backup (centralized and distributed NAT): The active service
board processes services, and the standby service board backs up both user table
and NAT session information. If the active service board fails, user traffic is
interrupted for a short period of time before being switched to the standby
service board, where the backup user table and NAT session information are
used. When the active service board recovers from the fault, the user table and
NAT session information are backed up to it, traffic is switched back after a
delay, and the backup information is used.
Inter-Chassis Backup
If multiple devices equipped with NAT service boards are deployed on a network,
you can configure the active and standby service boards on the master and slave
devices to implement inter-chassis backup. The inter-chassis backup mechanism
ensures service data consistency between the master and slave devices. If a master
device fails, a service board fails, a link on the public network fails, or a link on the
private network fails, a master/slave device switchover is performed to ensure
service continuity.
Inter-chassis backup works in three modes: cold backup, warm backup, and hot
backup.
Table 1-4 describes the comparison between cold backup, warm backup, and hot
backup. Hot backup is widely used because it delivers high reliability and
minimized impact on the network.
On the network shown in Figure 1-32, when the master and slave devices learn
the peer VPN index of the same VPN instance, NAT device1 backs up the VPN-A
information with the VPN index 100 carried in the NAT session information to
NAT device2. Therefore, when the private network traffic is switched from the
master device to the slave device, packets can be correctly forwarded according to
the NAT forwarding entries backed up from the master device to the slave device.
Background
With the large-scale deployment of NAT services and the growth of user
bandwidth, carriers' investment in NAT hardware is also increasing. In the
distributed NAT scenario shown in Figure 1-34, multiple BRASs are deployed on a
network. The NAT function is configured on the BRASs, and at least two NAT
service boards are installed to perform inter-board hot backup. If a NAT service
board on a BRAS fails, distributed inter-board hot backup ensures that BRAS and
NAT services quickly recover. This deployment mode requires a large number of
service boards, leading to the increase of hardware expenses. To reduce the
number of boards and improve maintenance in fault scenarios, centralized backup
of distributed NAT devices is introduced.
Deployment Solutions
In centralized NAT providing backup for distributed NAT, BRASs provide distributed
NAT functions, and a NAT device attached to a CR implements centralized NAT
functions. An interface board on each BRAS processes user packets and allows
users to get online, and the CR does not process such packets. The BRASs and CR
can be either directly connected or connected using tunnels:
● Centralized NAT providing backup for distributed NAT in a non-tunnel
scenario
In Figure 1-35, when a NAT board on a BRAS (distributed NAT device) is
working properly, the board performs NAT for private traffic to translate
private IP addresses to public addresses and then forwards the private traffic
to the Internet over routes.
In Figure 1-36, if a NAT board on a BRAS fails and no backup service boards
are equipped, the BRAS forwards private traffic directly over routes, without
performing NAT. After the private traffic arrives at CR (centralized NAT
Figure 1-36 NAT process on the centralized device when a traffic switchover
is performed
In Figure 1-38, if the distributed NAT board on a BRAS fails, the BRAS
forwards traffic directly to an L3VPN tunnel, without performing NAT. After
receiving the traffic, the CR (centralized NAT device) performs centralized
NAT to translate private IP addresses to public IP addresses and forwards the
traffic to the Internet.
Figure 1-38 NAT process on the centralized device when a traffic switchover
is performed
The NAT device provides security through the limit on the number of ports that
can be allocated, the limit on the number of sessions that can be established,
session aging, and so on.
After these numbers fall below the specified threshold, the IP address can be used
again to initiate connections over TCP, UDP, or ICMP ports.
Session Limiting
NAT is a stateful address translation technique, and session tables are core NAT
resources. If denial of service (DoS) attacks, such as SYN flood attacks, are
initiated, all NAT session table resources may be used up, preventing common
users from establishing session tables and therefore causing access failures. With
this function, a NAT device counts the number of TCP, UDP, and ICMP sessions
established using a single IP address. If the number of sessions initiated from a
source IP address or destined for a destination IP address exceeds a specified
threshold, the IP address cannot be used to initiate new connections.
After the total number of TCP, UDP, and ICMP sessions used by the IP address falls
below the configured threshold, the IP address can be used again to initiate TCP,
UDP, and ICMP connections.
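The limiting behavior described above can be sketched as a per-IP counter. This is an illustrative model, not the device's implementation, and the threshold value is an assumption:

```python
# Illustrative per-IP session limiter: refuse new sessions at the
# threshold; allow them again once the count drops back below it.
from collections import defaultdict

class SessionLimiter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = defaultdict(int)   # sessions per source IP

    def open_session(self, ip):
        if self.count[ip] >= self.threshold:
            return False                # limit reached: connection refused
        self.count[ip] += 1
        return True

    def close_session(self, ip):
        if self.count[ip] > 0:
            self.count[ip] -= 1         # aged/closed session frees a slot
```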
Session Aging
If a session entry is not matched within the aging time, the NAT device ages the
entry and releases session resources. The NAT device can be configured
to forcibly age all session tables or a specific type of session tables.
NAT Blacklist
The NAT blacklist function protects against attacks initiated using network-side
first packets on a specific combination of IP address, port number, and protocol
type, or on all IP addresses. If no internal server is deployed or public network
traffic has no matching entry in the session table on a NAT device, traffic
reaching a specified rate threshold is considered attack traffic. The IP address,
destination UDP port number, and destination TCP port number of the attack
traffic are added to a NAT blacklist on the NAT device. The NAT device discards
network-side traffic that matches the blacklist entry of the specified IP address,
port numbers, and protocol types. If public network traffic matches only the IP
address in the blacklist, statistics about the traffic are collected, and the traffic
is not discarded. In addition, NAT blacklist entries can be automatically cleared:
if the reverse attack traffic rate is less than 16 kpps within 10 minutes, the
blacklist entry ages automatically; entries can also be aged manually.
Purpose
NAT logs record information about private network users' access to public
networks and public network users' access to private networks. Without NAT
logging, a NAT device cannot locate a private network user's operation because
multiple private network users share the same public IP address. NAT logging
enables the NAT device to record and trace information about user access, which
improves network security.
The NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M supports flow
NAT logs.
Flow Logs
Flow logs apply when a NAT device establishes and ages flow tables. Flow logs
contain the source IP address, source port number, destination IP address, NATed
source IP address, NATed source port number, and protocol type. They are sent to
a log server. Flow logs carry abundant information in large volumes. They can be
used not only for source tracing but also to provide information about the
external networks accessed by users.
Flow logs support the binary format and are transmitted through a configured
UDP port.
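The fields listed above can be packed into a compact binary record, as a flow log would be before being sent over UDP. The field order and widths below are assumptions for demonstration; the real binary layout and the UDP port are defined by the device configuration.

```python
# Illustrative binary flow-log record: source IP/port, destination IP,
# translated (NATed) source IP/port, and protocol, in network byte order.
import socket
import struct

def pack_flow_log(src_ip, src_port, dst_ip, nat_ip, nat_port, proto):
    ip = lambda a: struct.unpack("!I", socket.inet_aton(a))[0]
    # !IHIIHB = uint32 src, uint16 sport, uint32 dst, uint32 nat, uint16 nport, uint8 proto
    return struct.pack("!IHIIHB", ip(src_ip), src_port, ip(dst_ip),
                       ip(nat_ip), nat_port, proto)
```

A record packed this way would typically be batched with others into a UDP datagram addressed to the configured log server port.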
In the syslog field tables, LOGINFOMNEM is BIND, and LOGTYPE is L. The
instance-id field indicates the instance ID (0 in the example), and the pool-id
field indicates the address pool ID (1 in the example).
● Example format of a NAT444 flexible user syslog log: <2020> <Jan> <17>
<10:35:34> 10.1.1.1 test - <NAT444>:<PortA> 1579257334|10.1.1.2|172.16.1.1|
1024|2047
● Example format of a DS-Lite flexible user syslog log: <134>1 2020 Jan 17
16:43:18 10.1.1.1 test - DSLITE:UserbasedW [- - 2001:DB8:2::2 10.38.160.107 -
1024 2047]
● Example format of a NAT64 flexible user syslog: <134>1 2020 Jan 17 17:11:43
10.1.1.1 test - NAT64:UserbasedW [- - 2001:DB8:1::2:2 10.11.11.100 - 2048
3071]
NOTE
Only NAT444 and DS-Lite scenarios support the user NetStream log format.
The version, count, sysUpTime, UNIX Secs, Sequence Number, and Source ID fields
are carried in the header of a user NetStream log packet.

Field | Description | Length (Bytes)
version | Version number. The value is fixed at 9. | 2
count | Sum of the number of template FlowSet records and the number of data FlowSet records. | 2
sysUpTime | Time elapsed since a CGN device was powered on, in milliseconds. | 4
UNIX Secs | Number of seconds since January 1, 1970, 00:00 (UTC). | 4
Sequence Number | Sequence number of a packet. | 4
Source ID | Instance internal ID (8 bits) + CPU ID (8 bits) + slot ID (8 bits) + scenario (4 bits) + instance internal ID (4 bits). | 4
Length | Length of a NetStream log template, in bytes. | 2
Template ID | ID of a NetStream log template, which identifies the template. The value is 257 in an online scenario and 258 in an offline scenario. | 2
Field Count | Number of fields. The value is 7 in an online scenario and 8 in an offline scenario. | 2
NAT_LOG_FIELD_IDX_CONTEXT_ID | ID of a VPN instance. | 2
length | Length of a VPN instance ID. | 2
NAT_LOG_FIELD_IDX_CONTEXT_NAME | Name of a VPN instance. | 2
length | Length of a VPN instance name. | 2
length | Time (seconds) elapsed since the last login. | 2
length | Time (seconds) elapsed since the last logout. | 2
NAT_LOG_FIELD_IDX_IPV4_INT_ADDR / NAT_LOG_FIELD_IDX_IPV6_INT_ADDR | Private IP address of a user. | 2
length | Length of the private IP address of a user. | 2
NAT_LOG_FIELD_IDX_IPV4_EXT_ADDR | Public IP address of a user. | 2
length | Length of the public IP address of a user. | 2
length | Length of the start port number. | 2
length | Length of the start port number. | 2
FlowSet ID | ID of a FlowSet record. | 2
VPNID | VPN ID. | 4
AssignTime | Online duration, in seconds. | 4
PRIVATEIP | Private IP address of a user. | 4 in NAT444 scenarios; 16 in the DS-Lite scenario
PUBLICIP | Public IP address of a user. | 4
A flow creation log is generated when a session is created, and a flow aging log
is generated when a session ends. Flow logs are divided into three types: flow
syslog, flow elog, and flow NetStream.
Field | Description
L4 | ID of an application protocol. 1: ICMP; 6: TCP; 17: UDP; 58: ICMPv6.
PRI | 142
VERSION | 1
● 2016-12-24-14|27|28|2016|12|24|14|27|28|December|Dec|27|0x1b|December|
Dec|27|0x1b|27|0x1b|10.1.1.1|server|nat64|0.0.0.0|20.20.20.20|0|0x585e85d0|17|
60.0.0.0|0|1728|0|0|16384|
V1 Format (NAT444)

Field | Description | Length (Bytes)
instance_id | Internal ID of a NAT instance. | 1
slot | Slot ID of a service board. | 1
tos_ipv4 | IP ToS. | 1
natdip | Destination IP address after NAT is implemented. | 4
inpkt | Number of user-to-network flow packets. (This field is not in use.) | 4
outpkt | Number of network-to-user flow packets. (This field is not in use.) | 4
outbyte | Number of bytes of network-to-user flow packets. (This field is not in use.) | 4
pad1 | Reserved. | 4
pad2 | Reserved. | 4

Field | Description | Length (Bytes)
log_type | Log type. 0x14: NAT444 logs; 0x14: DS-Lite logs. | 1
instance_id | Internal ID of a NAT instance. | 1
slot | Slot ID of a service board. | 1
inpkt | Number of user-to-network flow packets. (This field is not in use.) | 4
outpkt | Number of network-to-user flow packets. (This field is not in use.) | 4
outbyte | Number of bytes of network-to-user flow packets. (This field is not in use.) | 4
pad1 | Reserved. | 4
pad2 | Reserved. | 4
instanceID | Internal ID of a DS-Lite instance. | 2
usEncapSultionType | Tunnel encapsulation type in DS-Lite mode. For an IPinIP tunnel, the value is 4; for a GRE tunnel, the value is 47. | 2
0050 04 07 23 29 23 29 5b 29 5b 82 00 00 00 00 00 00
0060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0070 00 00 00 00 00 00 00 00 00 00 00 23 00 04 00 00
0080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Both centralized and distributed CGN devices support flow NetStream logs. Flow
NetStream logs are sent to a log server in binary format.
The version, count, sysUpTime, UNIX Secs, Sequence Number, and Source ID fields
are carried in the header of a flow NetStream log packet.

Field | Description | Length (Bytes)
version | Version number. The value is fixed at 9. | 2
count | Sum of the number of template FlowSet records and the number of data FlowSet records. | 2
sysUpTime | Time elapsed since the service board was powered on, in milliseconds. | 4
UNIX Secs | Number of seconds since January 1, 1970, 00:00 (UTC). | 4
Sequence Number | Sequence number of a packet. | 4
Source ID | Instance internal ID (8 bits) + CPU ID (8 bits) + slot ID (8 bits) + scenario (4 bits) + instance internal ID (4 bits). | 4
Length | Length of a NetStream log template, in bytes. | 2
Template ID | ID of a NetStream log template, which identifies the template. The value is 259 in a session creation scenario and 260 in a session deletion scenario. | 2
Field Count | The value of this field is 13. | 2
timeStamp | Timestamp of a packet. | 2
vlanID | VPN ID. | 2
Length | Length of the source IPv4 address. | 2
Post NAT Source IPv4 Address | Source IPv4 address after NAT is implemented. | 2
Length | Length of the source IPv4 address after NAT is implemented. | 2
Protocol Identifier | Identifier of an IP protocol. | 2
Length | Length of the IP protocol. | 2
Length | Length of the source port number. | 2
Post NAT Source Transport Port | Source port number after NAT is implemented. | 2
Length | Length of the source port number after NAT is implemented. | 2
Length | Length of the destination IPv4 address. | 2
Length | Length of the destination IPv4 address after NAT is implemented. | 2
Length | Length of the destination port number. | 2
Length | Length of the destination port number after NAT is implemented. | 2
Nat Originating Address Realm | Initiator of a session. For access from a private network to a public network, the value is 1; for access from a public network to a private network, the value is 2. | 2
Length | Length of the initiator of a session. | 2
Nat Event | Type of a NAT event. 1: newly created session; 2: non-new session. | 2
Length | Length of the type of a NAT event. | 2

The following fields form the body of a NetStream log packet.

Field | Description | Length (Bytes)
FlowSet ID | ID of a FlowSet record. | 2
timeStamp | Timestamp of a packet. | 8
vlanID | VPN ID. | 4
Post NAT Source IPv4 Address | Source IPv4 address after NAT is implemented. | 4
Protocol Identifier | Identifier of an IP protocol. | 1
Post NAT Source Transport Port | Source port number after NAT is implemented. | 2
Nat Originating Address Realm | Initiator of a session. For access from a private network to a public network, the value is 1; for access from a public network to a private network, the value is 2. | 1
Nat Event | Type of a NAT event. | 1
Figure 1-41 shows an enterprise network with NAT deployed. NetEngine 8100 M,
NetEngine 8000E M, and NetEngine 8000 M devices are deployed at the network
egress and are connected to firewalls. NAT is configured on these devices to
implement mutual access between the enterprise network and the public
network. The firewalls perform only attack defense.
In Figure 1-42, a campus network consists of a student area and a teacher area.
Because students outnumber teachers, students' packets are transmitted through
the education network. If access to the education network fails, students' packets
are transmitted through the backup telecom network. Teachers are fewer in
number but have higher network service requirements. When any user, including
both teachers and students, attempts to access resources on a
Financial users include banks. A bank system consists of the head office, tier-1
sub-branches, tier-2 sub-branches, and outlets, which are connected through
leased lines provided by carriers.
Financial data transmission requires high security. Therefore, an egress gateway of
each sub-branch network must encrypt data and perform NAT for both the source
and destination IP addresses to secure data transmission.
In Figure 1-43, the short message service (SMS) network needs to communicate
with the SMS platform network. In addition, the two networks must be isolated to
protect data transmitted on the networks, and private IP addresses of financial
users must be masked. In this case, dual NAT can be used.
In the centralized load balancing scenario, if the CPU of a CGN board fails and
inter-chassis or inter-board backup is not deployed, user traffic that passes
through the faulty CPU is interrupted, and another available load balancing
member takes over and performs NAT. When the faulty CPU recovers, some user
traffic may be temporarily interrupted and then recover automatically.
Hairpin Scenario
NOTE
Scenario Description
The hairpin scenario is also called NAT loopback, in which two hosts or servers
connected to the same NAT device can access each other using a mapped public
IP address.
Scenario Example
● Gateway address: 192.168.1.1/24
● Host 1 address: 192.168.1.2/24
● Host 2 address: 192.168.1.3/24
Precautions
● The hairpin function automatically takes effect (without configuration) when
NAT traffic distribution is configured on an inbound interface.
● The 3-tuple NAT mode must be configured if the public IP address and port
number are dynamically assigned to a host in the hairpin scenario.
● Dual NAT must be configured if the hosts that need to access each other are
connected to different NAT devices in the hairpin scenario.
● The hairpin scenario does not cover traffic exchanged directly between private
IP addresses without NAT processing.
NOTE
As shown in Figure 1-47, if the BRAS and CGN functions cannot be integrated on
the same device, a CGN device needs to be connected to the CR or SR. In this
manner, CPEs can dial up through the BRAS, and the BRAS can allocate private IP
addresses for the CPEs. Upon receipt of a packet from a PC, a CPE performs the
first NAT on the IP address of the PC, and the CGN device performs the second
NAT on the CPE's address. The NATed packets are then transmitted to the ISP core network
through the CR. This solution is called centralized NAT444 inter-board hot backup
because there are two NAT operations and the CGN function is centralized on the
CR for inter-board hot backup.
Figure 1-47 Centralized NAT444 inter-board hot backup solution based on port
pre-allocation
NOTE
On the network shown in Figure 1-48, a CPE dials up to a BRAS integrated with a
CGN board and gets online. The BRAS assigns an access IP address, a post-NAT
address, and a port range to the CPE. After receiving a packet from a PC, the
corresponding CPE performs the first NAT for the packet and then sends the
packet to the BRAS. Upon receipt of the packet, the BRAS performs the second
NAT. Taking into consideration the two NATs and CGN functions distributed on
various BRASs, 1:1 inter-board hot backup is deployed to achieve CGN service
reliability. This solution is called distributed NAT444 inter-board hot backup.
Figure 1-48 Distributed NAT444 inter-board hot backup based on port pre-
allocation
Service Description
Inter-chassis hot backup configured on a network ensures service data consistency
between the master and slave devices. If a master device, service board, link on
the public network, or link on the private network fails, a master/slave device
switchover is performed to ensure service continuity.
Networking Description
As shown in Figure 1-49, if the BRAS and CGN functions cannot be integrated on
the same device, a CGN device needs to be connected to the CR or SR. In this
manner, CPEs can dial up through the BRAS, and the BRAS can allocate private IP
addresses for the CPEs. Upon receipt of an access request packet from a PC, a CPE
performs the first NAT on the IP address of the PC, and a CGN device performs the
second NAT on the CPE's IP address. The NATed packets are then transmitted to
the ISP core network through CR1. This solution is called centralized NAT444 inter-
chassis hot backup because there are two NAT operations and the CGN function is
centralized on CR1 and CR2 where inter-chassis hot backup is implemented.
Figure 1-49 Centralized NAT444 inter-chassis hot backup solution based on port
pre-allocation
NOTE
Service Description
Inter-chassis hot backup configured on a network ensures service data consistency
between the master and slave devices. If a master device, service board, link on
the public network, or link on the private network fails, a master/slave device
switchover is performed to ensure service continuity.
Networking Description
On the network shown in Figure 1-50, CPEs dial up through the BRAS (with CGN
integrated). The BRAS assigns private IP addresses, post-NAT IP addresses, and
port segments for CPEs. After receiving a packet from a PC, the corresponding CPE
performs the first NAT for the packet and then sends the packet to BRAS1. Upon
receipt of the packet, BRAS1 performs the second NAT. This solution is called
distributed NAT444 inter-chassis hot backup because there are two NAT
operations, the CGN function is deployed on different BRAS access points, and
inter-chassis hot backup is deployed between BRAS1 and BRAS2 to enhance CGN
service reliability.
Figure 1-50 Distributed NAT444 inter-chassis hot backup solution based on port
pre-allocation
NOTE
Service Overview
Figure 1-51 shows a scenario where centralized backup is provided for distributed
NAT devices. In this scenario, two NAT service boards are installed on the BRAS to
implement inter-board hot backup. A NAT device equipped with a NAT service
board is attached to the CR. When both of the two NAT boards on the BRAS
become faulty, the BRAS does not perform distributed NAT on private network
traffic. Instead, the private network traffic is forwarded over routes to the CR and
then redirected to the NAT device for centralized NAT. After the private IP address
is translated to a public IP address, the traffic goes to the Internet.
NOTE
If the CGN service board on a distributed device fails, users are not logged out, and user
traffic for accessing the Internet is sent to the centralized device for NAT translation. A ping
to the gateway address (BRAS shown in Figure 1-51) fails. After the CGN service board
recovers, NAT services are switched back to the distributed device. The user can successfully
ping the gateway address.
Feature Deployment
● Deploy inter-board hot backup for distributed NAT on the BRAS.
● Deploy centralized NAT on the CR.
Service Overview
Public IP addresses on the Internet are limited. Especially on enterprise networks,
enterprise users cannot obtain sufficient IP addresses. To conserve public IP
address resources, interface addresses on the public network side can be used as
both source addresses of GRE tunnels and public IP addresses of NAT Easy IP. In
this way, both the GRE and NAT services can be deployed on the network.
Networking Description
Figure 1-52 NAT Easy IP and a GRE tunnel sharing an interface address
As shown in Figure 1-52, GRE traffic from the user to the public network's server
1 and NAT traffic from the user to the public network's server 2 share the IP
address of interface 1 on the public network side as a source IP address. GRE
traffic from the public network's server 1 to the user and NAT traffic from the
public network's server 2 to the user share the IP address of interface 1 on the
public network side as a destination IP address.
Feature Deployment
● Configure the NAT service on the NAT device. Enable the Easy IP function to
reuse the public network-side interface's IP address in a NAT address pool.
● Create a GRE tunnel between the NAT device and Device. Use the public
network-side interface's IP address as the source IP address of the GRE tunnel.
NOTE
Table 1-32 describes the support of centralized NAT and on-board NAT for ping in
typical NAT application scenarios. For details about centralized NAT and on-board
NAT, see NAT Classification.
UA User agent
Because IPv6 cannot immediately replace IPv4, which has been running on various
network devices and bears a majority of existing applications, some IPv4-to-IPv6
transition techniques can be used to alleviate the IPv4 address shortage.
NOTE
Usage Scenario
NOTE
NAT session and bandwidth resources are under license control. By default, the
device allocates neither NAT bandwidth nor session resources. Before you
configure the NAT function, adjust NAT bandwidth and session resources.
Pre-configuration Tasks
Before you configure NAT session and bandwidth resources, complete the
following tasks:
● The license has been loaded to the device, and the service board is working
properly.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The license view is displayed.
Step 3 Run active nat session-table size table-size slot slot-id
Session resources are configured for the CPU of a specified dedicated board.
Step 4 Run commit
The configuration is committed.
----End
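Combining the steps above, a minimal configuration sketch is as follows. The slot ID 9 and the table size 16 are illustrative values only; check the command reference of your device for the supported range and unit of table-size.

```text
<HUAWEI> system-view
[~HUAWEI] license
[~HUAWEI-license] active nat session-table size 16 slot 9
[*HUAWEI-license] commit
```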
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The license view is displayed.
Step 3 Run active nat bandwidth-enhance bandwidth slot slot-id
----End
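Similarly, a sketch of the bandwidth resource configuration is shown below. The bandwidth value 40 and slot ID 9 are illustrative, and a final commit is assumed to be required, as in the other license procedures.

```text
<HUAWEI> system-view
[~HUAWEI] license
[~HUAWEI-license] active nat bandwidth-enhance 40 slot 9
[*HUAWEI-license] commit
```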
Prerequisites
NAT session and bandwidth resources have been allocated successfully, and the
service board is running properly.
Procedure
● Run the display nat session-table size [ slot slot-id ] command to check the
session resources allocated to service boards.
● Run the display nat bandwidth [ slot slot-id ] command to check the
bandwidth resources granted by the license.
----End
Usage Scenario
Basic NAT function configurations are as follows:
● A NAT instance is bound to a NAT service board on a device. The device sends
packets to the NAT service board for NAT processing.
● A public IPv4 address range in a NAT address pool can be assigned to a
specific NAT instance so that the instance can translate IPv4 addresses
between public and private networks.
Prerequisites
Before you configure basic NAT functions, complete the following tasks:
Context
By default, a device uses the on-board NAT mode, in which NAT is performed by
the device's main control board. If a dedicated board is also installed on the device
and needs to be used for NAT, NAT must be first enabled on this board.
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run vsm on-board-mode disable
Dedicated NAT is enabled for the device.
Step 3 Run commit
The configuration is committed.
----End
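For reference, the procedure above condenses to the following sketch:

```text
<HUAWEI> system-view
[~HUAWEI] vsm on-board-mode disable
[*HUAWEI] commit
```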
Context
After a NAT instance is bound to a service-instance group that is bound to a
service-location group, the NAT instance is bound to the service-location group,
enabling the NAT service board to process NAT services.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-location service-location-id
A service-location group is created, and its view is displayed.
Step 3 Run location { follow-forwarding-mode | slot slot-id }
A specified NAT service board is bound to the service-location group.
The supporting status of the location follow-forwarding-mode and location slot
slot-id commands varies with device types.
Step 4 Run commit
The configuration is committed.
In load balancing scenarios where different types of boards are installed on the
same device, to improve CPU utilization, you can set the weight parameter to
adjust the load balancing weight.
NOTE
----End
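A minimal binding sketch based on the context above is shown below. The service-location ID 1, slot ID 9, group name group1, and instance name nat1 are illustrative; the service-instance-group and NAT instance bindings follow the relationship described in the Context.

```text
<HUAWEI> system-view
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9
[*HUAWEI-service-location-1] quit
[*HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] quit
[*HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] service-instance-group group1
[*HUAWEI-nat-instance-nat1] commit
```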
Context
A NAT address pool can be configured in a NAT instance for address translation
based on 5-tuple or 3-tuple information in user packets.
Procedure
Step 1 Run system-view
▪ Specify the start and end IP addresses (i.e., start-end mode) in the
nat address-group command. The device generates the same
number of public UNR routes as there are public IP addresses in the
range specified by the start-end mode. The mask of the public UNR
routes is 32 bits. For details about UNR routes, see UNR Route
Feature Description
– Run the nat address-group address-group-name command to enter the
NAT address pool view, and then run the section section-num start-ip-
address { mask { mask-length | mask-ip } | end-ip-address } command.
The section command configures multiple public IP address ranges in a
single public IP address pool. The configuration mode can be one of the
following:
▪ Specify the start and end IP addresses (i.e., start-end mode) in the
section command. The device generates the same number of public
UNR routes as there are public IP addresses in the range specified by
the start-end mode. The mask of the public UNR routes is 32 bits.
For details about UNR routes, see UNR Route Feature Description
The mask mode is recommended. With this mode enabled in NAT public
IP address pools, the mask of each route to be advertised is the same as
the mask specified in the nat address-group command. In start-end
mode, the mask of routes to be advertised is 32 bits.
● In Easy IP mode, create the reusing relationship between a NAT address pool
and an interface IP address.
– Run the nat address-group address-group-name group-id id
unnumbered interface interface-name command.
A NAT address pool cannot contain a DHCP server address. You need to properly allocate
address segments. Otherwise, NAT traffic cannot be forwarded.
Step 4 (Optional) Create a NAT overloaded address pool and bind the public address
pool to this overloaded address pool.
1. Configure a NAT overloaded address pool or enter the view of a NAT
overloaded address pool.
You can configure an address segment for the NAT overloaded address pool
using either of the following methods:
– Method 1: Run the nat address-group address-group-name group-id
group-id start-address { mask { address-mask-length | address-mask } |
end-address } [ vpn-instance vpn-instance-name ] over-load command
directly in the NAT instance view.
– Method 2: Run the nat address-group address-group-name [ group-id
group-id [ vpn-instance vpn-instance-name ] ] over-load command to
enter the NAT overloaded address pool view first, and then run the
section section-num start-ip-address { mask { mask-length | mask-ip } |
end-ip-address } command.
NOTE
Note the following issues when configuring a NAT overloaded address pool:
– The common IP address pool in no-PAT mode cannot be used as a NAT
overloaded address pool.
– The address pool created in Easy IP mode cannot be used as, or bound to a NAT
overloaded address pool.
– The nat outbound command cannot be run for a NAT overloaded address pool.
2. Run the nat address-group address-group-name bind-over-load overload-
address-group-name command to bind a common address pool to the
overloaded address pool.
NOTE
After the common address pool is bound to the NAT overloaded address pool, note
the following:
– Step 5 is not allowed. This is because the binding is mutually exclusive with the
nat address-group exclude-ip-address and section exclude-ip-address
commands.
– The NAT instance in which the binding is configured cannot be configured as a
load-balancing instance group.
Step 5 (Optional) Exclude an IP address or IP address range from a NAT address pool.
● If the nat address-group mode is used, run the nat address-group address-
group-name exclude-ip-address start-address [ end-address ] command.
● If the section mode is used, run the section section-id exclude-ip-address
start-address [ end-address ] command.
Step 6 (Optional) Run section section-num lock
----End
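As a sketch, the recommended mask-mode address pool configuration described above might look as follows. The instance name, group ID, and the 10.38.160.0/24 range are illustrative.

```text
<HUAWEI> system-view
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat address-group group1 group-id 1
[*HUAWEI-nat-instance-nat1-nat-address-group-group1] section 0 10.38.160.0 mask 24
[*HUAWEI-nat-instance-nat1-nat-address-group-group1] commit
```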
Context
A CGN global address pool saves public network addresses and simplifies
configurations, which facilitates public address management.
NOTE
Procedure
Step 1 Run system-view
The lengths of the initial and extended address segments that the dynamic
NAT address pool requests from the dynamic global address pool are
configured.
7. Run nat-instance ip used-threshold upper-limit upper-value lower-limit
lower-value
The address segment requesting and releasing thresholds for the dynamic
NAT address pool are configured.
8. Run detect retransmit retransmit-value interval day-value hour-value
minute-value
The retransmission times and interval for the dynamic global address pool are
configured.
9. (Optional) Run lock
The dynamic global address pool is locked.
10. (Optional) Run nat alarm ip { log | trap } disable
The log or trap function for the global address pool is disabled.
11. Run quit
Return to the system view.
Step 4 Bind the dynamic NAT address pool to a CGN global address pool.
1. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
2. Run nat address-group address-group-name group-id group-id bind-ip-pool
pool-name [ over-load ]
The dynamic NAT address pool is bound to the CGN global address pool.
NOTE
Binding a dynamic address pool in a NAT instance to a CGN global address pool is
mutually exclusive with configuring a static algorithm, port-forwarding, and common
address pool.
3. Run nat outbound { acl-number | any } address-group address-group-name
A conversion policy is configured for the NAT address pool. Either the ACL
matching mode or non-ACL matching mode can be selected.
4. Run quit
Return to the system view.
5. Run commit
The configuration is committed.
----End
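The binding steps above can be sketched as follows. Here, pool1 is assumed to be an existing dynamic global address pool; the names and IDs are illustrative.

```text
<HUAWEI> system-view
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat address-group group1 group-id 1 bind-ip-pool pool1
[*HUAWEI-nat-instance-nat1] nat outbound any address-group group1
[*HUAWEI-nat-instance-nat1] commit
```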
Context
The available port allocation modes are dynamic allocation, static pre-allocation,
semi-dynamic pre-allocation, and per-port allocation. During network deployment,
a port allocation mode is configured based on the service deployment scale.
● Port pre-allocation, also called the port range mode, enables a NAT device to
pre-allocate a public IP address and port range to a private IP address for the
NAT device to map the private IP address to the public IP address and a port
in the port range.
● The semi-dynamic port allocation (Semi-Dynamic) mode is an extension of
the port range mode. Semi-dynamic port allocation extends a single port
segment used in the port range mode to three parameters: initial port range,
extended port range, and maximum number of times a port range can be
extended. Before users go online, a NAT device assigns an initial port segment
and ports in the initial segment to users. If the number of used ports exceeds
the initial port segment size, the NAT device assigns an extended port
segment, which can repeat for a specified maximum number of times.
● Dynamic port pre-allocation (Port Dynamic) enables a NAT device to pre-
allocate a public IP address and a port range with 64 ports to a private IP
address. If the number of used ports exceeds the initial port range size, the
NAT device assigns another port range with 64 ports to the user. The
allocation process repeats without a limit on the maximum number of
extended port ranges. This mode features high usage of public network IP
addresses, but involves a huge amount of log information. This requires a log
server to handle logs.
● Per-port allocation, a type of dynamic port mode, enables a device to assign a
port, not a port range each time the device creates a session. With per-port
allocation enabled, port usage of public IP addresses is maximized. The per-
port allocation mode applies when a few IPv4 or IPv6 public addresses are
available.
NOTE
Procedure
Step 1 Run system-view
The maximum number of private network users sharing the same public IP
address in PAT mode is set.
NOTE
The nat ip access-user limit command is mandatory in per-port allocation mode and is
recommended in dynamic port allocation mode and semi-dynamic port allocation mode.
Otherwise, a large number of users use the same public IP address, affecting subsequent
user services.
Step 4 Configure a port allocation mode. Modes 1 and 2 are mutually exclusive for a NAT
instance. Before changing the port allocation mode, disable the existing mode
first.
● A port allocation mode works only for new online users in a NAT instance.
● After a NAT instance is bound to a service board, the port-single enable and undo
port-single enable commands cannot both be run within 1 minute.
The port multiplexing function is enabled for TCP and other protocols.
NOTE
● By default, sessions of different protocols using the same private IP address cannot
share the same public port during port pre-allocation. After the port multiplexing
function is enabled, for the same private IP address, TCP sessions can share a public port
with sessions of other protocols.
● The port-reuse enable and port-single enable commands are mutually exclusive.
NOTE
A port range specified in the port-range command that is assigned in cursor-based port
allocation mode will not be immediately used after being released. In some special
scenarios, such a port range may be used immediately after being released.
----End
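As an example of the semi-dynamic mode described above, the following sketch configures an initial range of 256 ports, a 128-port extended range, and at most 3 extensions. All values and names are illustrative.

```text
<HUAWEI> system-view
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] port-range 256 extended-port-range 128 extended-times 3
[*HUAWEI-nat-instance-nat1] commit
```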
Context
To enable a RADIUS server to issue NAT configuration policies, configure a NAT
template on a device and define some NAT configuration policies, such as a limit
on the number of NAT sessions in the template. After the RADIUS server issues a
template name to the device, the device finds the configured NAT policy template
and applies it to a specified NAT instance.
NOTE
The configurations in a NAT policy template take effect only after the NAT policy template
is issued by a RADIUS server. The implementation is supported only in the distributed CGN
scenario.
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat-policy template template-name
A NAT policy template is configured.
A maximum of four NAT policy templates can be configured on a device.
Step 3 Run nat session-limit { tcp | udp | icmp | total } session-number
The maximum number of private-to-public sessions that can be established for
each user IP address using the NAT policy template is set.
Step 4 Run nat reverse-session-limit { tcp | udp | icmp | total } session-number
The maximum number of reverse NAT sessions that can be established for each
user IP address is set.
The limit on the number of forward or reverse NAT sessions configured in the NAT
policy template prevails. If a limit is set in the template, the setting takes effect. If
no limit is set in the NAT policy template, the default value in the template takes
effect. If a delivered NAT policy template is invalid, the setting in the NAT instance
takes effect.
Whether the limit on the number of forward or reverse NAT sessions takes effect
is determined by the configuration in a NAT instance. If the limit function is
disabled in the NAT instance, forward or reverse NAT sessions can be established
without restriction.
Step 5 Run nat alg { all | ftp | pptp | rtsp | sip }
The application protocol types that the NAT ALG detects are configured.
If the nat alg command is run in both the NAT policy template and NAT instance
views, both command instances take effect.
Step 6 Run port-range port-num [ extended-port-range extended-port-range
extended-times extended-times ]
----End
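Putting the steps together, a sketch of a NAT policy template is shown below. All limit and port-range values are illustrative.

```text
<HUAWEI> system-view
[~HUAWEI] nat-policy template template1
[*HUAWEI-nat-policy-template-template1] nat session-limit total 2048
[*HUAWEI-nat-policy-template-template1] nat reverse-session-limit total 1024
[*HUAWEI-nat-policy-template-template1] nat alg all
[*HUAWEI-nat-policy-template-template1] port-range 256 extended-port-range 128 extended-times 3
[*HUAWEI-nat-policy-template-template1] commit
```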
Context
NAT attributes can be configured only after a NAT instance is configured. After
user traffic enters a NAT instance, NAT translates information in the user traffic
and redirects traffic to a specified next-hop IP address.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name id id
A NAT instance is created, and the NAT instance view is displayed.
Step 3 Perform either of the following operations to configure NAT redirection:
● To redirect all user traffic, run the following command to set the next-hop IP
address: redirect ip-nexthop ip-address { inbound | outbound }
A single redirect ip-nexthop command instance can be run for either public
network-to-private network (inbound) or private network-to-public network
(outbound) packets in each NAT instance.
● To redirect traffic of a specific user, run the following command to set the
next-hop IP address: redirect ip-nexthop ip-address { inbound | outbound }
redirect-id [ tcp | udp | protocol-id ] [ source-ip ip-address { ip-mask | mask-
length } [ source-port port-number ] [ vpn-instance vpn-instance-name ] |
destination-ip ip-address { ip-mask | mask-length } [ destination-port port-
number ] [ vpn-instance vpn-instance-name ] ] *
In this situation, a maximum of 16 redirect ip-nexthop command instances
can be run for public network-to-private network (inbound) packets, private
network-to-public network (outbound) packets, or packets in both directions
in each NAT instance. The redirect-id parameter is set to identify each
command instance.
----End
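The two redirection variants above can be sketched as follows; both are shown in one instance for illustration only, and the instance name, next-hop addresses, redirect ID, and source subnet are hypothetical:

```
nat instance nat1 id 1
 redirect ip-nexthop 10.1.1.2 outbound //redirect all private-to-public user traffic
 redirect ip-nexthop 10.1.1.3 outbound 1 tcp source-ip 192.168.1.0 24 //per-user redirection, identified by redirect-id 1
 commit
```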
Prerequisites
Basic NAT functions have been configured.
Procedure
● Run the display nat instance [ instance-name ] command to check the
configuration of a NAT instance.
● Run the display nat address-usage [ by-session ] instance instance-name
address-group address-group-name [ slot slot-id ] [ verbose ] command to
check the public port usage of a NAT address pool.
● Run the display nat-policy template [ template-name ] command to check
configuration information about a NAT template.
----End
Application Scenarios
NOTE
After configuring basic NAT functions and user group information, you can apply a
NAT traffic diversion policy in the inbound direction of user traffic to distribute the
user traffic to a service board for NAT translation. Meanwhile, you can apply a
NAT translation policy to translate the private IPv4 addresses in user packets into
public IPv4 addresses. In this way, users can access public network services.
Pre-configuration Tasks
Before configuring NAT for user traffic, complete the following tasks:
● Configure basic NAT functions.
Context
Multiple associations between user groups and NAT instances can be configured
in a domain. After a user goes online, the system checks the number of users in
each user group and adds the user to the group that has the least number of
members so that the user's traffic can be allocated to the least busy NAT instance
to implement load balancing among NAT instances.
Procedure
Step 1 Configure a user access mode.
For detailed configurations, see the HUAWEI NetEngine 8100 M, NetEngine 8000E
M, NetEngine 8000 M Configuration Guide - User Access.
A CGN public network user is forced offline, which triggers the user to re-access
the network with a private IP address.
5. (Optional) Run load-balance user-group refer-service-location
NOTE
In distributed CGN scenarios, NAT is implemented for users who go online from
a domain before user data is forwarded based on a routing table.
9. (Optional) Run public-address nat-instance-down
The function of allowing CGN users to access the network with a public IP
address in the event of a CGN service board fault is enabled.
10. Run commit
----End
Context
A service board does not provide any interface. An inbound interface board
therefore must direct user traffic to a NAT service board for NAT processing. You
can configure a traffic policy to direct the packets matching the traffic policy to a
service board.
Procedure
Step 1 Run system-view
A user ACL number ranges from 6000 to 9999. To create such an ACL, run the
acl name ucl-acl-name [ ucl | [ ucl ] number ucl-acl-number ] [ match-
order { auto | config } ] command.
2. Create a user ACL rule based on the protocol used.
NOTE
Generally, source IP addresses are matched against in ACL rules. To specify multiple
ACL rules, repeat Step 2.2.
Rules in an ACL are matched against traffic based on the depth-first principle (with
auto configured) or in the configuration order (with config configured). By default,
rules are matched in the configuration order (with config configured).
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 3 Configure a traffic classifier.
1. Run traffic classifier classifier-name [ operator { and | or } ]
A traffic classifier is configured, and the traffic classifier view is displayed.
2. Run if-match acl acl-number
An ACL-based matching rule is defined for multi-field traffic classification.
To configure multiple ACL-based matching rules, repeat this step.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 4 Configure a traffic behavior.
1. Run traffic behavior behavior-name
A traffic behavior is created, and the traffic behavior view is displayed.
2. Run nat bind instance instance-name
A traffic behavior is bound to a NAT instance.
NOTE
The nat bind instance and redirect ip-nexthop commands are mutually exclusive.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 5 Configure a traffic policy.
1. Run traffic policy policy-name
A traffic policy is created, and the traffic policy view is displayed.
2. Run classifier classifier-name behavior behavior-name
A traffic behavior is bound to a specified traffic classifier in the traffic policy.
3. Run commit
----End
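Steps 2 through 5 above can be sketched end to end as follows. The ACL number, matching subnet, and the classifier, behavior, policy, and instance names are all hypothetical:

```
acl number 6000
 rule 1 permit ip source 192.168.1.0 0.0.0.255 //match user traffic requiring NAT
#
traffic classifier c1 operator or
 if-match acl 6000
#
traffic behavior b1
 nat bind instance nat1 //divert matching traffic to the NAT instance
#
traffic policy p1
 classifier c1 behavior b1
#
commit
```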
Context
The router is deployed at the egress of an enterprise network, but NAT does not
need to be performed for the large amount of traffic transmitted within the
enterprise network. To prevent an inbound interface from enforcing a NAT traffic
policy that directs intra-enterprise network traffic to a NAT service board for NAT
processing, the NAT traffic policy can be configured on an outbound interface
connected to a public network. This enables the device to match traffic only
destined for a public network against the NAT traffic policy.
Procedure
Step 1 Run system-view
– For a basic ACL (numbered from 2000 to 2999), run the following
command:
rule [ rule-id ] [ name rule-name ] { deny | permit } [ fragment-type
{ fragment | non-fragment | non-subseq | fragment-subseq |
fragment-spe-first } | source { source-ip-address { source-wildcard | 0 |
src-netmask } | any } | time-range time-name | [ vpn-instance vpn-
instance-name | vpn-instance-any ] | logging ] *
– For an advanced ACL (numbered from 3000 to 3999), run the following
command as required by the protocol type:
rule [ rule-id ] [ name rule-name ] { deny | permit } { protocol | tcp }
[ [ dscp dscp | [ precedence precedence | tos tos ] * ] | { destination
{ destination-ip-address { destination-wildcard | 0 | des-netmask } | any }
| destination-pool destination-pool-name } | { destination-port operator
port-number | destination-port-pool destination-port-pool-name } |
fragment-type { fragment | non-fragment | non-subseq | fragment-
subseq | fragment-spe-first } | { source { source-ip-address { source-
wildcard | 0 | src-netmask } | any } | source-pool source-pool-name } |
{ source-port operator port-number | source-port-pool source-port-pool-
name } | { tcp-flag | syn-flag } { tcp-flag [ mask mask-value ] |
established |{ ack [ fin | psh | rst | syn | urg ] * } | { fin [ ack | psh | rst |
syn | urg ] * } | { psh [ fin | ack | rst | syn | urg ] * } | { rst [ fin | psh | ack
| syn | urg ] * } | { syn [ fin | psh | rst | syn | urg ] * } | { urg [ fin | psh |
rst | syn | urg ] * } } | time-range time-name | ttl ttl-operation ttl-value |
packet-length length-operation length-value ] *
The preceding configuration uses TCP as the example protocol. The
configurations of the other protocols are similar. For details, see the rule
(Advanced ACL view) (UDP), rule (advanced ACL view) (ICMP), and
rule (Advanced ACL view) (gre-igmp-ip-ipinip-ospf) commands.
NOTE
A source IP address is usually configured in an ACL rule. In each ACL rule, a protocol
number, source IP address, destination IP address, source port number, destination port
number, VPN instance name, and fragment flag can be specified for matching data
flows. The ACL rule requires a subnet mask in consecutive mode (with consecutive
1s followed by consecutive 0s), for example, 255.255.255.0.
An ACL configured in the NAT diversion policy on an outbound interface contains
multiple ACL rules. The ACL rules are used in ascending order by sequence number to
match packets.
3. Run the commit command to commit the configuration.
4. Run the quit command to return to the system view.
2. Run the nat bind acl { acl-index | name acl-name } [ mode deny-forward ]
instance instance-name command to bind the NAT instance and ACL to the
interface.
3. Run the commit command to commit the configuration.
----End
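Assuming a hypothetical advanced ACL 3001, interface, and instance name, the outbound-interface binding described above might look like:

```
acl number 3001
 rule 5 permit ip //match all traffic destined for the public network
#
interface GigabitEthernet0/1/0
 nat bind acl 3001 instance nat1
 commit
```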
Context
There are two NAT conversion policies:
● ACL-based
In this policy, an ACL and a NAT address pool are configured, and NAT can be
performed only for user packets that match a permit ACL rule.
● Non-ACL-based
In this policy, packets diverted to a NAT service board are not matched
against ACL rules. By default, the IP addresses of these packets are converted
into public IP addresses from a specified NAT address pool.
Procedure
● Configure an ACL-based NAT conversion policy.
a. Run system-view
The system view is displayed.
b. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
c. Run nat outbound acl-number address-group address-group-name
A NAT conversion policy is configured, in which the ACL bound to the address
pool is used to filter packets for translation.
NOTE
When performing this step, ensure that the address range of the ACL rule used in
the conversion policy covers the address range of the ACL rule used in the traffic
diversion policy. Otherwise, service interruptions may occur. To prevent such
interruptions, use either of the following measures:
● Keep acl-number in this step consistent with that in the traffic diversion
policy.
● Configure a default address pool in this step for translating the addresses
unmatched by the configured traffic conversion policy. For example:
The ACL rule referenced by the traffic diversion policy is as follows:
#
acl number 3000
rule 1 permit ip source 1.1.1.1 0.0.0.255
#
acl number 3999 //Default address pool configuration
rule 1 permit ip //Default address pool configuration
The conversion policy is as follows:
nat instance nat1 id 1
nat address-group group1 group-id 1 10.10.1.0 10.10.1.254
nat address-group group2 group-id 2 10.10.1.255 10.10.1.255 //Default address pool
configuration
nat outbound 3000 address-group group1
nat outbound 3999 address-group group2 //Default address pool configuration
d. Run commit
The configuration is committed.
● Configure a non-ACL-based NAT conversion policy.
a. Run system-view
The system view is displayed.
b. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
c. Run nat outbound any address-group address-group-name
A non-ACL-based NAT conversion policy is configured.
d. Run commit
The configuration is committed.
----End
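For comparison, a non-ACL-based conversion policy reduces to a single command in the instance view (the instance and address pool names are hypothetical):

```
nat instance nat1 id 1
 nat outbound any address-group group1 //translate all diverted packets using this pool
 commit
```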
Prerequisites
NAT traffic policy has been configured.
Procedure
● Run the display nat instance [ instance-name ] command to check the
configuration of a NAT instance.
● Run the display nat user-information { cpe ipv4 ipv4-address | port-usage
port-usage | session-discard | top-ten | domain domain-name | user-id user-
id | user-name user-name } [ slot slot-id ] [ verbose ] command to check
NAT user information.
----End
Usage Scenario
After configuring basic NAT functions, you can apply a NAT traffic diversion policy
for inbound traffic received by a NAT service board. You can also apply a NAT
traffic conversion policy to translate a private IPv4 address in each data packet
into a public IPv4 address, allowing a user to access public network services.
Pre-configuration Tasks
Before configuring centralized NAT for traffic, complete the following task:
● Configure basic NAT functions.
Context
A service board does not provide any interface. Therefore, all user traffic that
requires NAT must be diverted from an inbound interface board to a service board
for processing. You can configure a traffic policy to direct packets matching the
configured traffic policy to the NAT service board.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure a traffic classification rule.
1. Run either of the following commands to create an ACL and enter the ACL
view:
– For a basic ACL (numbered from 2000 to 2999), run the acl { name
basic-acl-name { basic | [ basic ] number basic-acl-number } |
[ number ] basic-acl-number } [ match-order { config | auto } ]
command.
– For an advanced ACL (numbered from 3000 to 3999), run the acl { name
advance-acl-name [ advance | [ advance ] number advance-acl-
number ] | [ number ] advance-acl-number } [ match-order { config |
auto } ] command.
2. Run either of the following commands to create an ACL rule:
– For a basic ACL (numbered from 2000 to 2999), run the following
command:
rule [ rule-id ] [ name rule-name ] { deny | permit } [ fragment-type
{ fragment | non-fragment | non-subseq | fragment-subseq |
fragment-spe-first } | source { source-ip-address { source-wildcard | 0 |
src-netmask } | any } | time-range time-name | [ vpn-instance vpn-
instance-name | vpn-instance-any ] | logging ] *
– For an advanced ACL (numbered from 3000 to 3999), run the following
command as required by the protocol type:
rule [ rule-id ] [ name rule-name ] { deny | permit } { protocol | tcp }
[ [ dscp dscp | [ precedence precedence | tos tos ] * ] | { destination
{ destination-ip-address { destination-wildcard | 0 | des-netmask } | any }
| destination-pool destination-pool-name } | { destination-port operator
port-number | destination-port-pool destination-port-pool-name } |
fragment-type { fragment | non-fragment | non-subseq | fragment-
subseq | fragment-spe-first } | { source { source-ip-address { source-
wildcard | 0 | src-netmask } | any } | source-pool source-pool-name } |
{ source-port operator port-number | source-port-pool source-port-pool-
name } | { tcp-flag | syn-flag } { tcp-flag [ mask mask-value ] |
established |{ ack [ fin | psh | rst | syn | urg ] * } | { fin [ ack | psh | rst |
syn | urg ] * } | { psh [ fin | ack | rst | syn | urg ] * } | { rst [ fin | psh | ack
| syn | urg ] * } | { syn [ fin | psh | rst | syn | urg ] * } | { urg [ fin | psh |
rst | syn | urg ] * } } | time-range time-name | ttl ttl-operation ttl-value |
packet-length length-operation length-value ] *
The preceding configuration uses TCP as the example protocol. The
configurations of the other protocols are similar. For details, see the rule
(Advanced ACL view) (UDP), rule (advanced ACL view) (ICMP), and
rule (Advanced ACL view) (gre-igmp-ip-ipinip-ospf) commands.
NOTE
Generally, source IP addresses are matched against in ACL rules. To specify multiple
ACL rules, repeat Step 2.2.
Rules in an ACL are matched against traffic based on the depth-first principle (with
auto configured) or in the configuration order (with config configured). By default,
rules are matched in the configuration order (with config configured).
When an ACL rule is associated with an instance, the address wildcard of the ACL rule
must be a subnet mask in consecutive mode (with 0s and 1s consecutively sequenced,
such as 255.255.255.0), instead of a subnet mask in non-consecutive mode (with 0s
and 1s inconsecutively sequenced, such as 255.0.255.0).
3. Run commit
NOTE
The nat bind instance and redirect ip-nexthop commands are mutually exclusive.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 5 Configure a traffic policy.
1. Run traffic policy policy-name
A traffic policy is configured, and the traffic policy view is displayed.
2. Run classifier classifier-name behavior behavior-name
The traffic behavior is bound to the traffic classifier in the traffic policy.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 6 Apply the traffic policy to an interface.
For NAT user traffic on an inbound interface, apply the traffic policy to a user-side
Layer 3 interface.
1. Run interface interface-type interface-number
The interface view is displayed.
2. Run traffic-policy policy-name inbound [ link-layer | all-layer | mpls-layer ]
The traffic policy is applied to the interface.
3. Run commit
The configuration is committed.
For NAT user traffic on an inbound interface, apply the traffic policy to a user-side
Layer 2 Ethernet interface that is added to a VLAN.
1. Run interface interface-type interface-number
The interface view is displayed.
2. Run portswitch
The interface is switched to the Layer 2 mode.
3. Run port link-type { access | dot1q-tunnel | hybrid | trunk }
A type is set for the Layer 2 Ethernet interface.
4. Run either of the following commands to add an interface to a VLAN:
– For an access or QinQ interface:
i. Run the port default vlan vlan-id command to add the interface to
the VLAN.
To add interfaces to a VLAN in a batch, run the port interface-type
{ interface-number1 [ to interface-number2 ] } &<1-10> command
in the VLAN view.
NOTE
----End
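Step 6 above, for the user-side Layer 3 interface case, can be sketched as follows (the interface and policy names are hypothetical):

```
interface GigabitEthernet0/1/1
 traffic-policy p1 inbound //apply the diversion policy to incoming user traffic
 commit
```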
Context
The router is deployed at the egress of an enterprise network, but NAT does not
need to be performed for the large amount of traffic transmitted within the
enterprise network. To prevent an inbound interface from enforcing a NAT traffic
policy to direct intra-enterprise network traffic to a NAT service board for NAT
processing, a NAT traffic policy can be configured on an outbound interface
connected to a public network. This enables the device to match traffic only
destined for a public network against the NAT traffic policy.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure a traffic classification rule.
1. Run either of the following commands to create an ACL and enter the ACL
view:
– For a basic ACL (numbered from 2000 to 2999), run the acl { name
basic-acl-name { basic | [ basic ] number basic-acl-number } |
[ number ] basic-acl-number } [ match-order { config | auto } ]
command.
– For an advanced ACL (numbered from 3000 to 3999), run the acl { name
advance-acl-name [ advance | [ advance ] number advance-acl-
number ] | [ number ] advance-acl-number } [ match-order { config |
auto } ] command.
2. Run either of the following commands to create an ACL rule:
– For a basic ACL (numbered from 2000 to 2999), run the following
command:
rule [ rule-id ] [ name rule-name ] { deny | permit } [ fragment-type
{ fragment | non-fragment | non-subseq | fragment-subseq |
fragment-spe-first } | source { source-ip-address { source-wildcard | 0 |
src-netmask } | any } | time-range time-name | [ vpn-instance vpn-
instance-name | vpn-instance-any ] | logging ] *
– For an advanced ACL (numbered from 3000 to 3999), run the following
command as required by the protocol type:
rule [ rule-id ] [ name rule-name ] { deny | permit } { protocol | tcp }
[ [ dscp dscp | [ precedence precedence | tos tos ] * ] | { destination
{ destination-ip-address { destination-wildcard | 0 | des-netmask } | any }
| destination-pool destination-pool-name } | { destination-port operator
port-number | destination-port-pool destination-port-pool-name } |
fragment-type { fragment | non-fragment | non-subseq | fragment-
subseq | fragment-spe-first } | { source { source-ip-address { source-
wildcard | 0 | src-netmask } | any } | source-pool source-pool-name } |
{ source-port operator port-number | source-port-pool source-port-pool-
name } | { tcp-flag | syn-flag } { tcp-flag [ mask mask-value ] |
established |{ ack [ fin | psh | rst | syn | urg ] * } | { fin [ ack | psh | rst |
syn | urg ] * } | { psh [ fin | ack | rst | syn | urg ] * } | { rst [ fin | psh | ack
| syn | urg ] * } | { syn [ fin | psh | rst | syn | urg ] * } | { urg [ fin | psh |
rst | syn | urg ] * } } | time-range time-name | ttl ttl-operation ttl-value |
packet-length length-operation length-value ] *
The preceding configuration uses TCP as the example protocol. The
configurations of the other protocols are similar. For details, see the rule
(Advanced ACL view) (UDP), rule (advanced ACL view) (ICMP), and
rule (Advanced ACL view) (gre-igmp-ip-ipinip-ospf) commands.
NOTE
ACL rules usually match against source IP addresses. In each ACL rule, a protocol
number, source IP address, destination IP address, source port number, destination port
number, VPN instance name, and fragment flag can be specified for matching data
flows. The ACL rule requires a subnet mask in consecutive mode (with consecutive
1s followed by consecutive 0s), for example, 255.255.255.0.
An ACL configured in the NAT traffic diversion policy on an outbound interface
contains multiple ACL rules. The ACL rules are used in ascending order by sequence
number to match packets.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 3 Apply the traffic policy to an interface.
1. Run interface interface-type interface-number
The interface view is displayed.
NOTE
The following interfaces are supported: GE main interface and its sub-interfaces; POS
interface; Eth-Trunk main interface and its sub-interfaces; Ethernet main interface and
its sub-interfaces; IP-Trunk interface; VLANIF interface; serial interfaces, including
Serial-Trunk interfaces; MP-group interface.
2. Run nat bind acl { acl-index | name acl-name } [ mode deny-forward ]
instance instance-name
The ACL is bound to the NAT instance.
3. Run commit
The configuration is committed.
----End
Prerequisites
A NAT traffic policy has been configured.
Procedure
● Run the display nat instance [ instance-name ] command to check the
configuration of a NAT instance.
● Run the display nat user-information { user-id user-id | user-name user-
name | domain domain-name | ipv4 ipv4-address | port-usage port-usage |
session-discard } [ slot slot-id ] [ verbose ] command to check NAT user
information.
----End
Usage Scenario
NOTE
Pre-configuration Tasks
Before configuring NAT load balancing, complete the following tasks:
● Install a CGN license and wait for the service board to work properly.
● Configure data link layer protocol parameters and IP addresses for interfaces
so that the data link layer protocol on each interface can go Up.
Context
After service-location groups are bound to CPUs of the specified dedicated
boards, one service-instance group can be bound to these service-location groups
and bound to a NAT instance.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-location service-location-id
A service-location group is created, and its view is displayed.
Step 3 Run location slot slot-id [ backup slot slot-id ]
The CPU of the service board is bound in the service-location group view.
NOTE
It is recommended that the number of session resources of all load balancing members be
set to the same value.
NOTE
In an online upgrade or capacity expansion scenario with centralized load balancing, some
user traffic will be interrupted temporarily and then recover automatically.
Context
During load balancing, one NAT instance can be bound to the CPUs of multiple
service boards. These CPUs share the same global static address pool, ensuring
flexible extension for a single NAT user and the same public network address for
multi-core CPUs.
Procedure
Step 1 Run system-view
A global static address pool is created, and the global static address pool view is
displayed.
The mask lengths/masks are configured for the initial and extended address
segments of the global static address pool.
It is recommended that the default upper and lower thresholds for the number of
address segments in a global address pool be used, and the difference between
upper-value and lower-value be greater than 60 (for example, use the nat-
instance ip used-threshold upper-limit 90 lower-limit 20 configuration).
The load balancing function is enabled for IP addresses in the global static address
pool.
----End
Context
If one NAT service is bound to only one service board CPU, the service board of a
single NAT instance may reach the performance threshold as the number of users
increases. In this case, add service boards to support user traffic forwarding. If a
global static address pool is bound to a NAT instance, multiple boards can share
the same address pool.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 (Optional) Run nat ip-address ip-address mask-len [ vpn-instance vpn-instance-
name ]
The IP address for distributing traffic between the master and slave chassis is
configured.
This command is mandatory when service traffic needs to be distributed to CGN
boards in load balancing over centralized dual-device hot backup scenarios.
Step 4 Run nat address-group address-group-name group-id group-id bind-ip-pool
pool-name
The NAT instance is bound to a global static address pool.
Step 5 Run commit
The configuration is committed.
----End
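The binding procedure above can be sketched as follows; the instance name, traffic-distribution IP address, address-group name, group ID, and pool name are hypothetical:

```
nat instance nat1 id 1
 nat ip-address 10.1.1.1 24 //needed only for load balancing in centralized dual-device hot backup
 nat address-group group1 group-id 1 bind-ip-pool pool1 //bind the instance to the global static address pool
 commit
```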
Prerequisites
NAT load balancing has been configured.
Procedure
● Run the display nat instance dynamic-address-group command to view
information about the dynamic address pool of the specified NAT instance.
● Run the display nat ip-pool command to view information about the global
static address pool.
● Run the display nat statistics command to view NAT statistics about service
boards.
----End
Usage Scenario
A static NAT source tracing algorithm is in essence a set of formulas. The input is a
private IP address range, public IP address range, port segment size, and port
range. The output is the mapping between private IP addresses, public IP
addresses, and port range. When a device uses a static source tracing algorithm to
implement NAT, the mapping between the private IP addresses, public IP
addresses, and port range is fixed. In this case, NAT source tracing NEs, such as
the AAA server, can perform source tracing based on the algorithm as long as
they obtain the same algorithm parameters as those configured on the NAT
device, without depending on the source tracing logs sent by the NAT device.
The static source tracing algorithm applies to centralized and distributed NAT
deployment scenarios. In a centralized NAT scenario, load balancing can also be
configured. Figure 1-54 shows the centralized deployment scenario of the static
source tracing algorithm.
Pre-configuration Tasks
Before configuring a static NAT source tracing algorithm, complete the following
tasks:
● Configure data link layer protocol parameters and IP addresses for interfaces
so that the data link layer protocol on each interface can go Up.
Context
Before configuring a static source tracing algorithm, plan the mapping between
private address pool and public address pool/port range. Once the private and
public address pools are bound, the IP addresses in the public address pool can be
used only by IP addresses in the private address pool.
Procedure
Step 1 Run system-view
One public or private address pool can be bound to only one static source tracing
algorithm.
The range of ports that are not allocated based on the static source tracing
algorithm is configured.
----End
Context
A static source tracing algorithm and a dynamic NAT algorithm are mutually
exclusive in the NAT instance view. After a static source tracing algorithm is bound
to a NAT instance, the mapping relationship of the static source tracing algorithm
is applied to the NAT instance.
Procedure
Step 1 Run system-view
The view of the NAT instance to which the static source tracing algorithm needs
to be applied is displayed.
----End
Procedure
● Run the display nat static-mapping static-mapping-id command to check
parameters of the static source tracing algorithm with a specified ID.
● Run the display nat static-mapping { global-pool | inside-pool } pool-id
command to check the public and private address pools of the static source
tracing algorithm.
● Run the display nat static-mapping ipv4 ipv4-address command to check
the mapping between private IP address and public IP addresses/port range
calculated based on the static source tracing algorithm.
● Run the display nat static-mapping global-ipv4 command to check the
private IP addresses of the static source tracing algorithm.
----End
Usage Scenario
NOTE
Pre-configuration Tasks
Before you configure access to internal hosts using public addresses, complete the
following tasks:
Usage Scenario
NAT can be configured to allow users on a private network to access public
network services, while hiding the structure of the private network and devices on
the private network. However, this also prevents a user on an external network
from proactively initiating communication with a private network user.
To address this problem, the NAT server function can be configured on the private
network. The NAT server function enables a NAT device to translate a public IP
address into a private IP address based on either of the following entries:
● A static mapping entry that contains a private IP address, a private port
number, a public IP address, and a public port number
● A static mapping entry that contains a private IP address and a public IP
address
Procedure
Step 1 Run system-view
Step 3 (Optional) Create a NAT address pool. For details, see Creating a NAT Address
Pool.
When a user accesses the internal NAT server, a user entry needs to be created. If
the nat server-mode enable command is not run to enable the address-level NAT
server mode, the public IP address in the user entry is obtained from the NAT
address pool. In this case, a NAT address pool needs to be configured.
When a user accesses the internal NAT server, a user entry needs to be created.
After the address-level NAT server mode is enabled, the public IP address in the
user entry used for NAT server access is not obtained from a NAT address pool.
Instead, the public IP address is obtained through the nat server global
command.
The device is enabled to create a session after the device receives the first TCP
packet that matches the NAT server mapping.
● If multiple NAT servers are assigned the same public IP address, run the nat
server protocol { tcp | udp | protocol-number } global global-address
[ global-protocol ] [ vpn-instance global-vpn-instance-name ] inside inside-
address [ inside-protocol ] [ vpn-instance inside-vpn-instance-name ]
[ outbound ] command to configure internal NAT servers for different types
of packets.
To configure one-to-many relationships between public IP addresses and NAT
servers in batches, run the nat server protocol { tcp | udp | protocol-
number } global global-start-address { global-end-address | mask { global-
address-mask-length | global-address-mask } } [ global-protocol ] [ vpn-
instance vpn-instance-name ] inside host-start-address { host-end-address |
mask { host-address-mask-length | host-address-mask } } [ host-protocol ]
[ vpn-instance vpn-instance-name ] command.
● If each NAT server is assigned a specific public IP address, run the nat server
global global-address [ vpn-instance vpn-instance-name ] inside inside-
address [ vpn-instance vpn-instance-name ] [ outbound ] command to
configure an internal NAT server.
To configure one-to-one relationships between public IP addresses and NAT
servers in batches, run the nat server global global-start-address { global-
end-address | mask { global-address-mask-length | global-address-mask } }
[ vpn-instance vpn-instance-name ] inside host-start-address { host-end-
address | mask { host-address-mask-length | host-address-mask } } [ vpn-
instance vpn-instance-name ] command.
NOTE
● The IP address of the NAT server cannot be the same as the IP address of a DHCP
server. Otherwise, a message indicating a conflict is displayed.
● If the nat server-mode enable command is not run, the public IP address of the
address-level NAT server must differ from an assigned public IP address in the NAT
address pool.
● If no NAT address pool is configured in a NAT instance, you must run the nat server-
mode enable command to enable the address-level NAT server function.
● The port-level NAT server mode uses the nat server protocol command to implement
address and port translation between public and private addresses. A user entry needs
to be created when a user accesses a NAT server. To create this entry, a public IP address
must be obtained from a NAT address pool, which therefore must be configured in a
NAT instance.
● If the nat server-mode enable command is not run, the nat outbound command must
be run to bind an address-level NAT server to a NAT address pool.
● A port-level NAT server must be bound to a NAT address pool using the nat outbound
command.
----End
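As an illustration of the two mapping types described above, assuming a hypothetical instance and addresses/ports (prerequisites such as the NAT address pool or nat server-mode enable are omitted here):

```
nat instance nat1 id 1
 nat server protocol tcp global 1.1.1.10 8080 inside 192.168.1.10 80 //port-level static mapping
 nat server global 1.1.1.11 inside 192.168.1.11 //address-level static mapping
 commit
```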
Context
The IP addresses of internal servers may frequently change. To avoid frequent
manual modifications of the configurations of internal NAT servers, you can
deploy the port forwarding function to dynamically associate each internal server
with a public IP address and a public port.
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 Run port-forwarding address-group addr-grp-name port-scope start-port-value
end-port-value
The NAT address pool and port range are dynamically associated with internal
servers.
NOTE
This command configures only the port range of the port forwarding service. The port
forwarding rules, however, are issued by a RADIUS server when a user goes online.
----End
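A minimal sketch of the port forwarding configuration above. The instance name and port range are illustrative, and the address group group1 is assumed to already exist in the NAT instance.

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] port-forwarding address-group group1 port-scope 1024 2047
[*HUAWEI-nat-instance-nat1] commit
```

As the NOTE states, this only reserves the port range; the actual port forwarding rules are delivered by a RADIUS server when a user goes online.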
Procedure
● Run the display nat server-map [ dynamic | static | port-forwarding ] [ ip
ip-address | port port-number | vpn-instance vpn-instance-name | slot slot-
id ] * command to check server-map entry information of an internal server.
----End
Usage Scenario
Packets of many application layer protocols contain user information, including IP
addresses and port numbers. These protocol packets may fail to be forwarded
because NAT can only identify the IP addresses and port numbers in TCP/UDP
headers of user traffic and cannot identify the IP addresses and port numbers
carried in these protocol packets. For special protocols such as FTP, the Data
field of a packet carries IP address or port information. Inconsistencies or
errors occur because NAT does not translate the IP address or port information
in the Data field. A good way to solve this NAT issue for
such protocols is to use the ALG function. Functioning as a special
conversion agent for application protocols, the ALG interacts with the NAT device
to establish states. The ALG uses NAT state information to change the specific
data in the Data field of IP packets and to complete other necessary work, so that
application protocols can run across internal and external networks.
NAT ALG supports various protocols, such as ICMP, SIP, RTSP, PPTP, DNS, and FTP.
NAT ALG does not support packets longer than 2048 bytes (NAT ALG SIP does not
support packets longer than 8192 bytes), UDP-based RTSP, TCP-based SIP, TCP-
based DNS, or TCP fragments.
Pre-configuration Tasks
Before configuring the NAT ALG function, complete the following tasks:
● Configure basic NAT functions.
● Configure the NAT conversion function.
Context
Most application layer protocol packets carry user IP addresses and port numbers.
NAT translates only network layer addresses and transport layer ports. Therefore,
an ALG can be configured to translate IP addresses and port numbers carried in
the Data field of application layer packets.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 Run nat alg { dns | ftp [ rate-threshold rate-threshold-value ] | pptp | rtsp | all |
sip [ separate-translation ] }
The NAT ALG function is enabled for one or more application layer protocols.
To configure NAT for the SIP control channel and data channel separately, run the
nat alg sip separate-translation command.
Step 4 Run commit
The configuration is committed.
----End
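The steps above can be sketched as follows (instance name illustrative), enabling ALG for FTP and for SIP with separate translation of the control and data channels:

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat alg ftp
[*HUAWEI-nat-instance-nat1] nat alg sip separate-translation
[*HUAWEI-nat-instance-nat1] commit
```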
Usage Scenario
If a device on an enterprise network without a DNS server needs to use a DNS
domain name to communicate with a server within the enterprise network, the
communication can be implemented using a DNS server on a public network.
To meet this requirement, the DNS mapping function can be configured. When a
DNS packet arrives at a NAT device, the NAT device searches for a static entry
based on the configured DNS mapping entry and translates the public IP address
into the private IP address. The NAT device then sends the private IP address
resolved from the DNS domain name to the user.
Procedure
Step 1 Run system-view
----End
Procedure
● Run the display nat instance [ instance-name ] command to check the
configuration of a NAT instance.
----End
Usage Scenario
If a fault occurs on a NAT device or NAT board during NAT operation, NAT cannot
be performed. As a result, user services are interrupted.
Currently, the NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M
support inter-chassis cold backup, inter-board hot backup, and inter-chassis hot
backup. For details about inter-chassis hot backup configurations, see 1.1.6.2 CGN
Reliability Configuration.
Pre-configuration Tasks
Before configuring NAT reliability, complete the following tasks:
● Configure basic NAT functions.
● Configure NAT for traffic.
Context
When a NAT device is equipped with two service boards, you can configure the
active and standby service boards in the same chassis on the NAT device to
implement inter-board backup on the single device. The inter-board backup
mechanism verifies that the data stored on the active NAT service board is
consistent with that stored on the standby NAT service board. If the active NAT
service board fails, an active/standby NAT service board switchover is performed
to ensure that services are running properly. In this situation, users are unaware of
this fault.
Procedure
Step 1 (Optional) Configure value-added service management (VSM) high availability
(HA) hot backup functions.
1. Run system-view
The system view is displayed.
2. Run service-ha hot-backup enable
HA hot standby is enabled.
3. Run service-ha delay-time delay-time
The delay time is set for VSM HA hot backup.
NOTE
On a device with the service-ha delay-time command run, session entries can be
backed up only if the active time of session traffic is longer than the delay time
configured for VSM HA hot backup.
4. (Optional) Run service-ha preempt-time preempt-time
The delay for active/standby revertive switching is set.
NOTE
When the active service board recovers, it takes over services from the standby service
board after the specified delay elapses.
5. Run commit
The configuration is committed.
Step 2 Configure a service-location group that implements single-chassis inter-board VSM
HA hot backup.
1. Run system-view
The system view is displayed.
2. Run service-location service-location-id
A service-location group is created, and its view is displayed.
3. For dedicated NAT, run the location slot slot-id backup slot backup-slot-id
command to bind the service-location group to the CPUs of the active and
standby service boards.
For on-board NAT:
– For the NetEngine 8000 M4, run the location slot command to bind the
CPUs of the service boards.
– For other products in the HUAWEI NetEngine 8100 M14/M8, NetEngine
8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8 series, run
the location follow-forwarding-mode command to bind service boards.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
Step 3 Create a service-instance group and bind it to the service-location group.
1. Run service-instance-group service-instance-group-name
A service-instance group is created, and its view is displayed.
2. Run service-location service-location-id [ weight weight-value ]
The service-location group is bound to the service-instance group.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 4 Bind a NAT instance to a service-instance group.
1. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
2. Run service-instance-group service-instance-group-name
The NAT instance is bound to the service-instance group.
NOTE
Each NAT instance can be bound to only one service-instance group.
Different NAT instances can be bound to the same service-instance group.
3. Run commit
The configuration is committed.
----End
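Putting Steps 1 through 4 together, a minimal single-chassis inter-board hot backup sketch for dedicated NAT boards follows. The slot numbers, group IDs, and names are illustrative.

```
[~HUAWEI] service-ha hot-backup enable
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 1 backup slot 2
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] commit
[~HUAWEI-service-instance-group-group1] quit
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] service-instance-group group1
[*HUAWEI-nat-instance-nat1] commit
```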
Configuring Association of CGN Board Faults with Inter-chassis NAT Cold Backup
You can associate CGN board faults with inter-chassis NAT cold backup to improve
NAT device reliability.
Context
Two NAT devices equipped with CGN boards cannot detect the board faults. To
resolve this issue, bind interfaces to an HA status monitoring group. The NAT
devices can then detect the running status of the CGN boards. If one board fails, a
master/backup device switchover is triggered, preventing services from being
interrupted for a long time.
Procedure
Step 1 Configure basic HA group functions.
1. Run system-view
The system view is displayed.
2. Run service-location service-location-id
An HA group is created, and its view is displayed.
3. Run location slot slot-id [ backup slot backup-slot-id ]
The active/standby information of the HA group is configured.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
Step 2 Create an HA status monitoring group and bind it to the service-location group.
1. Run monitor-location-group group-name
An HA status monitoring group is created, and its view is displayed.
2. Run service-location service-location-id
The HA status monitoring group is bound to the service-location group.
----End
Context
When both the BRAS and CR exist on the live network, the BRAS supports
distributed NAT, and the NAT device attached to the CR supports centralized NAT.
If the service board of the BRAS becomes faulty, the BRAS distributes user traffic
to the CR for NAT implementation, so that existing users do not go offline and
new users can still go online. When the fault on the service board of the BRAS is
rectified, the BRAS reestablishes NAT user tables, and user traffic is switched back
to the BRAS for NAT implementation.
NOTE
In this case, cold backup is used to distribute user traffic from distributed devices to
centralized devices for NAT implementation. The IP addresses in the NAT address pool of
the CR must not overlap with those in the NAT address pool of the BRAS.
Procedure
Step 1 Configure centralized backup for distributed NAT on the BRAS.
1. Create a NAT instance and bind it to a service-instance group. For details, see
Configuring Basic NAT Functions.
2. Configure the distributed NAT traffic policy and conversion policy. For details,
see Configuring NAT for User Traffic in Distributed Mode.
3. Configure the automatic switching and switchback function for centralized
backup of distributed NAT based on the policy selected. (The following
configurations are performed in the NAT instance view.)
– Configure the automatic switching and switchback function for
centralized backup of distributed NAT.
i. Run the nat centralized-backup enable command to enable the
automatic switching and switchback function for centralized backup
of distributed NAT.
After the NAT function on the BRAS becomes unavailable, the BRAS
switches user traffic to the NAT device attached to the CR for NAT
implementation. After the NAT function on the BRAS is restored, user
traffic is switched back to the BRAS for NAT implementation.
NOTE
The redirection VPN instance name for centralized backup of distributed NAT must be
the same as that configured on the centralized NAT device and CR.
5. (Optional) Run nat centralized-backup switch down-number down-number
The maximum number of allowable CPU faults in centralized backup of
distributed NAT is configured.
When down-number is less than or equal to the number of CPUs of service
boards to which an instance is bound, the nat centralized-backup switch
down-number command may have the following impacts on the system:
– When the number of CPU faults on the current service board in the NAT
instance is greater than or equal to down-number, user traffic
automatically switches to centralized devices for NAT implementation.
– When the number of CPU faults on the current service board in the NAT
instance is less than down-number, user traffic can be manually switched
back to distributed devices for NAT implementation. During the
switchback, if the CPUs in the NAT instance do not recover, users may be
logged out.
– When no CPU fault occurs on the current service board in the NAT
instance, user traffic can automatically switch back to distributed devices
for NAT implementation.
If down-number is greater than the total number of service boards' CPUs to
which a NAT instance is bound, running the nat centralized-backup switch
down-number command poses the following impacts:
– If the maximum number of faulty CPUs allowed in a NAT instance is
greater than the value of down-number, all CPUs on the service boards
are faulty and user traffic automatically switches to centralized devices
for NAT implementation.
– Because the number of service boards' faulty CPUs in the NAT instance is
always less than down-number, ensure that at least one CPU of a service
board to which the NAT instance is bound is available. In this case, user
traffic can be manually switched back to the distributed device for NAT
processing.
6. Run commit
The configuration is committed.
NOTE
After the preceding operations are performed, configure a route advertisement policy.
– For centralized backup of distributed NAT in a non-tunnel deployment scenario,
both the distributed and centralized NAT devices need to be deployed with a
private routing policy. This leads to security risks, and the user IP addresses of
different BRASs cannot overlap. The route advertisement policy in this scenario is
as follows:
▪ NAT public network address pool route advertisement: Both the distributed
and centralized NAT devices advertise routes from the NAT public network
address pool to the internet.
▪ Private network address pool route advertisement: Routes from the private
network address pool on the distributed NAT device are advertised to the
centralized NAT device in a VPN instance.
Step 2 Deploy the centralized NAT function on the centralized NAT device.
1. Create a NAT instance and bind it to a service-instance group. For details, see
Configuring Basic NAT Functions.
2. Configure a centralized NAT traffic policy and a conversion policy. For details,
see Centralized NAT for User Traffic.
3. Run commit
----End
Prerequisites
The single-chassis inter-board hot backup has been configured.
Procedure
● Run the display service-ha global-information delay-time command to
check the delay time configured for inter-board VSM HA hot backup on a
single chassis.
● Run the display service-location service-location-id command to check the
configuration of a service-location group.
● Run the display service-instance-group service-instance-group-name
command to check the configuration of a service-instance group.
● Run the display nat instance [ instance-name ] command to check the
configuration of a NAT instance.
----End
Usage Scenario
You can deploy the NAT security function to guarantee secure operations on a
NAT device and prevent attacks.
Pre-configuration Tasks
Before configuring the NAT security function, complete the following tasks:
● Configure basic NAT functions.
● Configure NAT for traffic.
Context
If the number of established Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), Internet Control Message Protocol (ICMP) NAT sessions, or the
total number of NAT sessions involving the same source IP address exceeds a
configured threshold, a device stops establishing such sessions. The limit helps
prevent resource overconsumption from resulting in a failure to establish
connections for other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 (Optional) Run nat session-limit enable
The user-based NAT session number limit function is enabled.
Step 4 Run nat session-limit { tcp | udp | icmp | total } session-number
The maximum number of NAT sessions that can be established is set.
Step 5 Run commit
The configuration is committed.
----End
Context
If the number of established Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), Internet Control Message Protocol (ICMP) NAT sessions, or the
total number of NAT sessions involving the same destination IP address exceeds a
configured threshold, a device stops establishing such sessions. The limit helps
prevent resource overconsumption from resulting in a failure to establish
connections for other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 (Optional) Run nat reverse-session-limit enable
The device is enabled to monitor the number of established user-specific network-
to-user NAT sessions.
Step 4 Run nat reverse-session-limit { tcp | udp | icmp | total } session-number
The maximum number of network-to-user NAT sessions that can be established is
set.
----End
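The two session limits above (user-to-network and network-to-user) can be sketched together. The instance name and limit values are illustrative:

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat session-limit enable
[*HUAWEI-nat-instance-nat1] nat session-limit tcp 2048
[*HUAWEI-nat-instance-nat1] nat reverse-session-limit enable
[*HUAWEI-nat-instance-nat1] nat reverse-session-limit tcp 1024
[*HUAWEI-nat-instance-nat1] commit
```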
Context
By checking whether the total number of TCP/UDP/ICMP/TOTAL ports used for
connections involving the same source or destination address exceeds the
configured threshold, the system can determine whether to restrict the initiation
of new connections in the direction, preventing individual users from occupying
excessive port resources and causing the connection failure of other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 Run nat port-limit enable
The user-based port number limit function is enabled.
Step 4 Run nat port-limit { tcp | udp | icmp | total } limit-value
The limit on the number of user-based ports is configured.
----End
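A sketch of the user-based port number limit above (instance name and limit value illustrative):

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat port-limit enable
[*HUAWEI-nat-instance-nat1] nat port-limit tcp 512
[*HUAWEI-nat-instance-nat1] commit
```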
Context
The centralized NAT deployment scenario is used as an example. The port filter
function is configured on two network interfaces of the NAT device to filter out
packets with destination port 1434 (Worm.NetKiller2003). When a packet from
the public network side reaches the private network side, the NAT device checks
the packet's destination port. If the port is within the filtered port range, the
device discards the packet. After NAT services are deployed on the private network
side, a filtered port on the NAT device may be used as the post-NAT source port of
a packet sent from the private network side to the public network side. After
receiving a return packet, the NAT device finds that the packet's destination port is
within the filtered port range and discards the packet, impacting user services. To
resolve this issue, deploy the port filtering function for NAT services to filter out
the filtered ports on the NAT device. This prevents returned user packets from
being discarded by the NAT device.
Procedure
Step 1 Run system-view
----End
Context
When a user attempts to get online, the device selects a public address pool
mapped to the user's private network information based on a NAT traffic
conversion policy, and a public IP address is randomly selected from that pool.
This process ensures that public IP addresses are evenly assigned to users. In
dynamic port allocation scenarios, however, if one user dynamically applies for
a large number of public ports, high port usage on a public IP address leaves an
insufficient number of ports for the other users sharing that public IP address,
which compromises services of the other users.
Procedure
Step 1 Run system-view
----End
Setting the Maximum Number of Private Network Users Sharing the Same Public
IP Address
In PAT mode, multiple users can use the same public IP address for access.
However, excessive users may lead to network congestion. To address this issue,
you can set the maximum number of private network users sharing the same
public IP address.
Context
This configuration is applicable to the dynamic port allocation and per-port
allocation modes where multiple users use the same public IP address for access.
When dynamic port range allocation or per-port allocation is used without a port
range specified, a device may create a large number of NAT sessions for each
private network user sharing a public IP address, and user traffic may then fail
to be forwarded. As a result, fewer users can get online using a single public
IP address.
Procedure
Step 1 Run system-view
The maximum number of allowed private network users sharing the same public
IP address is set.
----End
Context
By default, service boards transparently transmit received non-NAT service
packets, such as packets containing only IP headers. For network security
purposes, however, you may prefer not to forward such non-NAT service packets.
In this case, you can configure the device to discard them.
Procedure
Step 1 Run system-view
The system is configured to discard the packets that are not NATed after being
distributed to the service board.
After this command is run, some IP packets that are distributed to the service
board without the need for matching the NAT policy are discarded based on the
packet type.
----End
Setting the Rate at Which Packets Are Sent to Create a Flow for a User
A device can be configured to dynamically detect the traffic forwarding rate and
limit the rate at which packets are sent to create a flow for each user.
Context
A NAT device with a multi-core structure allows flow construction and forwarding
processes to share CPU resources. To minimize or prevent NAT packet loss and a
CPU usage increase, the device has to maintain a proper ratio of the forwarding
rate to the flow creation rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 (Optional) Run nat user-session create-rate limit enable
The limit on the rate at which packets are sent to create a user flow is enabled.
If both the nat user-session create-rate extended-range and nat user-session create-
rate commands are run, the latest configuration takes effect.
----End
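A minimal sketch of enabling the user flow creation rate limit described above (instance name illustrative):

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat user-session create-rate limit enable
[*HUAWEI-nat-instance-nat1] commit
```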
Configuring a NAT Blacklist and Limiting the Rate at Which Packets Are Sent to
Create Sessions
The NAT blacklist and session creation rate limit are used to prevent users from
occupying a large number of CPU resources through traffic attacks, which affects
the forwarding of common traffic.
Context
The NAT blacklist function defends a device against attacks initiated by sending
network-side first packets destined for a specific combination of a public IP
address, a port number, and a protocol ID, or for all IP addresses. If no internal service is configured
or if public network traffic does not match entries in a session table on a NAT
device, the NAT device considers traffic transmitting at a rate reaching a specified
threshold as attack traffic. The NAT device adds the IP address and UDP or TCP
destination port number of attack traffic to a NAT blacklist. Once network-side
attack traffic matches the blacklist, the NAT device drops the traffic or collects
statistics about the traffic.
The NAT blacklist-based detection is performed in either of the following modes:
● Address-level detection: An address-level rate threshold is set for a NAT device
to detect attacks only on IP addresses.
● Port-level detection: A port-level rate threshold is set for a NAT device to
detect attacks using packets with a specified IP address, a specified port
number, and a specified protocol ID.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run undo nat flow-defend reverse-blacklist disable
The NAT blacklist function is enabled.
Step 3 Run nat flow-defend { forward | fragment | reverse } rate rate-number slot
slot-id
The limit on the rate at which packets are sent to create sessions on a service
board is set.
Step 4 (Optional) Run nat flow-defend reverse-blacklist detect-threshold ip-port-
level high-threshold ip-port-level-high-threshold-value
A port-level rate threshold for the NAT blacklist is configured for a device.
Threshold-crossing traffic is blacklisted.
If the NAT blacklist function is enabled, after this command is run, the device
detects the attack traffic for the IP address, port number, and protocol number.
You can set the port-level rate threshold for the NAT blacklist to adjust the attack
traffic detection rate.
Step 5 (Optional) Run nat flow-defend reverse-blacklist detect-threshold ip-level
high-threshold ip-level-high-threshold-value
An address-level rate threshold for the NAT blacklist is configured for a device.
Threshold-crossing traffic is blacklisted.
If the NAT blacklist function is enabled, after this command is run, the device
detects the attack traffic for the IP address. You can set the address-level rate
threshold for the NAT blacklist to adjust the attack traffic detection rate.
The port- and address-level rate thresholds can be configured together for the
NAT blacklist. The two commands are independent of each other.
Step 6 Run nat flow-defend reverse-blacklist auto-lock enable
Automatic locking of the NAT blacklist is enabled.
If NAT attack traffic reaches a specified detection threshold, entries are generated
in a NAT blacklist. The NAT device automatically isolates IP addresses listed in the
blacklist. Users with such IP addresses are logged out, and the IP addresses are not
assigned to new users who want to get online.
Step 7 Run nat flow-defend reverse-blacklist manual-lock lock-ip-address lockip-
address [ vpn-instance vpn-instance-name ]
An IP address to be locked and a VPN instance are configured for NAT attack
defense.
After this command is run, the specified IP address is isolated, and a blackhole
route is generated for the IP address. All traffic sent to the IP address is discarded
on the involved interface board. In addition, all users using this IP address go
offline, and new users cannot use this IP address to go online. To delete the
manually locked IP address and VPN instance information in the NAT blacklist, run
the reset nat flow-defend reverse-blacklist manual-lock lock-ip-address
command.
NOTE
● The automatic lock function of NAT blacklists can be configured only on dedicated
boards.
● IP address locking and VPN instance information of NAT attack defense can be
configured only on dedicated boards.
● The automatic lock function of NAT blacklists and the manual IP address lock function
of NAT attack defense cannot take effect at the same time.
● In on-board mode, only alarms are generated after the NAT blacklist function is
enabled.
----End
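The blacklist steps above can be sketched as follows on a device with dedicated boards. The rate, threshold, and slot values are illustrative:

```
[~HUAWEI] undo nat flow-defend reverse-blacklist disable
[~HUAWEI] nat flow-defend reverse rate 100 slot 1
[~HUAWEI] nat flow-defend reverse-blacklist detect-threshold ip-level high-threshold 300
[~HUAWEI] nat flow-defend reverse-blacklist auto-lock enable
[~HUAWEI] commit
```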
Prerequisites
NAT security functions have been configured.
Procedure
● Run the display nat flow-defend { forward | reverse | fragment } rate [ slot
slot-id ] command to check the configured limit on the rate at which the first
packet is sent to create sessions.
● Run the display nat user-information command to check NAT user
information.
● Run the display nat flow-defend reverse-blacklist slot slot-id command to
check blacklist entries for first reverse packets on a CPU of the dedicated
board.
----End
Usage Scenario
The following functions can be configured to maintain NAT:
● NAT logs: Flow logs record NAT user and translation information. They also
help a device monitor and record private network access to public networks.
● NAT alarms: A device can be configured to generate alarms if the number of
established NAT sessions or assigned NAT ports reaches a specified alarm
threshold. After obtaining alarm information, the device administrator can
add NAT service boards or modify service configurations.
Prerequisites
Before you configure NAT maintainability, complete the following tasks:
● Configure basic NAT functions.
● Configure NAT for traffic.
Context
NAT logs are generated by a NAT device during NAT operation. The information
contains basic user information and NAT operation information. NAT logs also
record private network users' access to public networks and public network users'
access to private network servers. When private network users access a public
network through the NAT device, they share an external network address. For this
reason, the users accessing the public network cannot be located. The log function
helps record users' access to external networks in real time, enhancing network
maintainability.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Configure an IPsec tunnel. For details, see Configuration > Security >
IPsec Configuration.
For the sake of security, you are advised to configure an IPsec tunnel to
implement encrypted NAT log transfer. Particularly, IPsec must be configured on
both the master and backup devices in inter-chassis backup scenarios.
Step 3 (Optional) Run nat log tcp start-port port
The start port number used by the NAT device to send user logs to a log host
through TCP is set.
To establish a TCP connection between a NAT device and a log host, run this
command to set the start port number used in the TCP connection.
Step 4 (Optional) Run nat log user keepalive enable syslog type3
The Keepalive log function is enabled for NAT users, with the type 3 log format.
The interval for sending Keepalive logs for NAT users is set.
Step 10 Run nat log session enable [ { elog | syslog | netstream } [ secondary { elog |
syslog | netstream } ] ]
Step 11 Run nat log user enable [ { syslog | netstream } [ secondary { syslog |
netstream } ] ]
Step 12 Choose one of the following methods to configure the NAT log host information
based on the log transmission mode (TCP or UDP).
● To enable the NAT device to send logs to a log host using UDP, run the nat
log host host-ip-address host-port source source-ip-address source-port
[ name name ] [ vpn-instance vpn-instance-name ] [ inside-vpn vpn-
instance-name ] [ secondary ] command.
● To enable the NAT device to send logs to a log host using TCP, run the nat
log tcp host host-ip-address host-port source source-ip-address [ name
name ] [ vpn-instance vpn-instance-name ] command.
NOTE
This command is mutually exclusive with the nat log host command. Only one user log
transmission mode can be configured in a NAT instance.
Only one log host can be configured in a NAT instance to receive user syslogs in TCP
format.
This command takes effect only on dedicated boards.
NOTE
The device is enabled to select a log server based on the user-side VPN instance in
the NAT instance.
1. Run quit
Return to the system view.
2. Run nat syslog flexible template { user | session }
A NAT syslog flexible template is created, and its view is displayed.
3. (Optional) Run nat time local
The local time is displayed in the logs.
4. (Optional) Run nat time { endtime-second-dec | endtime-second-hex |
starttime-second-dec | starttime-second-hex | starttime | timestamp-
second-dec | timestamp-second-hex | timestamp } { local | utc }
The end time, start time, or timestamp in the log is set to the local time or
UTC time.
NOTE
The time format configured using the nat time command takes precedence over that
configured using the nat time local command in the flexible log template view. If the nat
time command is run, the time format of the end time, start time, or timestamp takes
effect according to the configured format. If the nat time command is not run, the nat
time local command takes effect. If neither the nat time command nor the nat time
local command is run, the default UTC time takes effect.
5. Run nat position
A flexible NAT log template is configured.
6. Run quit
Return to the system view.
7. Run nat syslog descriptive format flexible template { session | user }
A flexible log template type is specified.
Step 18 (Optional) Run nat syslog descriptive format { cn | type2 | type3 [ local-
timezone ] | type4 | type5 | type6 [ dual-sequence ] }
NAT syslogs are configured to be in the extended format.
Step 19 Run commit
The configuration is committed.
----End
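A hedged sketch of enabling NAT session logs and sending them to a UDP log host. The instance name, IP addresses, and port numbers are illustrative:

```
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat log session enable syslog
[*HUAWEI-nat-instance-nat1] nat log host 10.1.1.100 514 source 10.1.1.1 9000
[*HUAWEI-nat-instance-nat1] commit
```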
Context
NAT sessions and ports available are important resources for NAT services. If these
resources are exhausted, NAT cannot be performed for traffic sent by newly
logged-in users. Therefore, the usage of these resources must be properly
monitored. The NAT alarm function enables a NAT device to generate an alarm
when resource usage reaches a specified alarm threshold, which instructs the
customer to implement capacity expansion or service adjustment.
Procedure
● Set a maximum number of alarm packets that a NAT service board sends
every second.
c. Run commit
The trap or log function for the number of NAT sessions is enabled.
c. Run nat alarm session-number threshold threshold-value
An alarm threshold is set for the number of NAT sessions.
d. Run nat alarm address-group port-number threshold threshold
An alarm threshold is set for the port usage based on the NAT address
pool.
NOTE
The PAT address pool on dedicated boards does not support alarms based on a
single IP address.
Port usage of a NAT PAT address pool = Number of ports used by the public network addresses in the PAT address pool/Total number of ports available for the public network addresses in the PAT address pool
● Set an alarm threshold for the user-based port usage.
a. Run system-view
The system view is displayed.
b. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
c. (Optional) Run nat alarm port-number [ log | trap ] user enable
The log or trap function for the port usage of NAT users is enabled.
d. Run nat alarm user port-number threshold threshold
An alarm threshold is set for the user-based port usage.
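For example, the steps above can be combined as follows (the instance name and threshold value are illustrative, and the threshold is assumed to be a percentage):
[~HUAWEI] nat instance nat1
[~HUAWEI-nat-instance-nat1] nat alarm port-number log user enable
[*HUAWEI-nat-instance-nat1] nat alarm user port-number threshold 80
[*HUAWEI-nat-instance-nat1] commit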
NOTE
The log or trap function takes effect for the pre-allocation rate of the reserved PCP ports in scenarios where a RADIUS authentication user with Huawei proprietary attributes 162 and 163 requests to go online in the NAT instance. The log and trap functions are enabled by default.
d. Run nat alarm pcp-reservation port-number threshold threshold
An alarm threshold is set for the pre-allocation rate of the reserved PCP ports.
● Set an alarm threshold for server-map entry usage.
The log and alarm functions for server-map entries are enabled by default. To disable the functions, run the nat alarm server-map disable command.
c. Run nat alarm server-map threshold threshold-value
The device is configured to generate an alarm when server-map entry
usage exceeds a specified threshold.
NOTE
Run the display nat memory-usage servermap command to query the number
of used server-map entries and the number of supported server-map entries.
Server-map entry usage is as follows:
Server-map entry usage = Number of used server-map entries/Number of server-map entries supported by the service board
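For example, if a service board supports 100,000 server-map entries (an illustrative figure) and 25,000 of them are in use, the server-map entry usage is 25,000/100,000 = 25%.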
d. Run commit
The configuration is committed.
----End
Context
Perform the following steps on the NetEngine 8100 M, NetEngine 8000E M,
NetEngine 8000 M:
Procedure
● Enable collection of NAT forwarding payload statistics in the system view.
a. Run system-view
The system view is displayed.
b. Run nat statistics payload slot slot-id enable
Collection of NAT forwarding payload statistics is enabled. After this
function is enabled, the average and maximum values of the outbound
and inbound forwarding payloads on the service board can be obtained.
After this function is enabled, the device samples data every 10 minutes
and records data collected in the last 72 hours. To check the statistics, run
the display nat statistics payload command.
c. Run commit
The configuration is committed.
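For example, the steps above can be combined as follows (the slot ID is illustrative):
[~HUAWEI] nat statistics payload slot 9 enable
[*HUAWEI] commit
[~HUAWEI] display nat statistics payload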
● Enable collection of NAT forwarding payload statistics in the NAT instance
view.
a. Run system-view
The system view is displayed.
----End
Context
The function of locking an address segment allows useless IP addresses to be reclaimed and prevents new users from using these IP addresses. The IP addresses in the locked address segment are reclaimed after all the existing users go offline. In distributed scenarios, after an address segment is locked, the CGN device no longer allocates IP addresses from the segment to users who go online. In centralized scenarios, after an address segment is locked, the CGN device no longer allocates IP addresses from the segment when a user table is created.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run section section-id lock in the address pool view.
IP addresses in the locked address segment cannot be allocated to users any more. To restore this address segment, run the undo section lock command.
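For example, to lock section 0 of a global address pool named pool1 (the pool name and section ID are illustrative):
[~HUAWEI] nat ip-pool pool1
[*HUAWEI-nat-ip-pool-pool1] section 0 lock
[*HUAWEI-nat-ip-pool-pool1] commit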
----End
Context
Before you collect NAT statistics within a specified period of time, delete existing
NAT statistics using the reset nat statistics command.
NOTICE
Once the statistics are deleted, they cannot be restored. Exercise caution when
using the command.
Procedure
● After confirming that the NAT service statistics can be deleted, run the reset nat statistics command.
----End
Context
To delete unnecessary NAT session entries, run the reset nat session table
command. This allows new NAT session entries to be created for fault location
and rectification.
NOTICE
Services will be interrupted after NAT session entries are cleared. Exercise caution
when performing this operation.
Procedure
● After determining the NAT session entries to be cleared, run the reset nat session table command.
----End
Context
When an address segment is locked, the users who have gone online from this
address segment need to be logged out so that the locked address segment can
be reclaimed.
NOTICE
After user information of a NAT address segment is cleared, users on the NAT
address segment will be logged out. Exercise caution when you perform this
operation.
Procedure
● Run the reset nat user nat-ip-pool section command to clear the user
information from the address segment in a global address pool.
● Run the reset nat user address-group section command to clear the user
information from the address segment in an address pool of a specified NAT
instance.
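For example, to clear the user information from section 0 of a global address pool named pool1 (the pool name and section ID are illustrative):
[~HUAWEI] reset nat user nat-ip-pool pool1 section 0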
----End
Context
In routine maintenance, you can run the following commands in any view to
check the NAT running status.
Procedure
● Run the display nat flow-defend command to check the configured rate at
which the first packet is sent to create sessions for a user.
● Run the display nat instance command to check the configuration of a NAT
instance.
● Run the display nat memory-usage command to check entry-specific
memory usage on a NAT service board.
● Run the display nat session aging-time command to check the configured
aging time for NAT session entries.
● Run the display nat session table command to check NAT session entry
information.
● Run the display nat statistics command to check NAT service statistics.
● Run the display nat user-information command to check NAT user
information.
● Run the display nat server-map command to check server-map entry
information about one or more internal servers.
● Run the display nat syslog flexible command in any view to check the log
format of a flexible log template.
● Run the display nat statistics port-usage distribute command in any view
to check statistics about users who are assigned a specified port range.
----End
Usage Scenario
You can set the following parameters to improve NAT performance:
● Aging time of NAT session entries: Reducing the aging time of NAT session entries speeds up the aging of NAT entries, so that session resources are released more quickly.
● TCP MSS: If the size of packets for NAT processing is larger than a link MTU,
the packets are fragmented. You can reduce the MSS value in TCP, which
prevents a NAT service board from fragmenting packets and helps improve
NAT efficiency.
Pre-configuration Tasks
Before adjusting NAT performance, complete the following tasks:
● Configure basic NAT functions.
● Configure NAT for traffic.
Context
Perform the following steps on the router:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Run nat session long-link [ inbound | outbound ] tcp { source-ip ip-address { ip-mask | mask-length } [ source-port port-number ] [ vpn-instance vpn-instance-name ] | { destination-ip ip-address { ip-mask | mask-length } [ destination-port port-number ] | destination-port port-number } [ vpn-instance vpn-instance-name ] } *
After this policy is configured, the aging time of the TCP connection of the
specified session is changed to 200 hours by default. To reset the aging time of
the persistent TCP connection, run the nat session aging-time command.
Step 3 Run nat session aging-time { tcp | udp | icmp | fin-rst | syn | fragment | dns | ftp | http | rtsp | sip | pptp | tcp long-link | ip } aging-time
The changed aging time takes effect on new rather than existing NAT sessions.
Step 4 (Optional) Set the fast aging time for DNS sessions.
You are advised to configure this function when DNS traffic is heavy. After the fast
aging function for DNS sessions is enabled, if the device receives DNS request and
response packets at the same time, the DNS sessions age according to the
configured fast aging time to save system resources.
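A minimal sketch of steps 1 through 3, assuming the commands are run in the system view as in the steps above (the port and time values are illustrative):
<HUAWEI> system-view
[~HUAWEI] nat session long-link outbound tcp destination-port 8080
[*HUAWEI] nat session aging-time tcp 300
[*HUAWEI] commit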
----End
Context
If the size of packets for NAT processing is larger than a link MTU, the packets are
fragmented. You can reduce the MSS value in TCP, which prevents a NAT service
board from fragmenting packets and helps improve NAT efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat tcp-mss mss-value
The MSS value carried in TCP SYN packets for NAT processing is set.
Step 3 Run commit
The configuration is committed.
NOTE
If the configured MSS value is smaller than the MSS value of TCP packets and the MSS
values are configured in both the NAT instance view and system view, the MSS value
configured in the NAT instance view takes precedence over that configured in the system
view. In other situations, the globally configured MSS value takes effect.
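For example, with an illustrative MSS value:
[~HUAWEI] nat tcp-mss 1200
[*HUAWEI] commit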
----End
Procedure
● Run the display nat session aging-time command to check the configured
aging time for NAT session entries.
● Run the display nat session-table size [ slot slot-id ] command to check the
session resources allocated to service boards.
----End
Networking Requirements
As shown in Figure 1-55, home users access the Internet through the BRAS using PPPoE, IPoE, web authentication, or other access methods. In addition to implementing user authentication, authorization, and accounting, the BRAS provides the NAT service to translate home users' private IP addresses into public ones.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
● Name of a NAT instance
● NAT address pool's number and start and end IP addresses
● User group name
● ACL number and UCL number
● Information about the NAT traffic diversion policy
Procedure
Step 1 Set the dedicated NAT working mode.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS_NAT
[*HUAWEI] commit
[~BRAS_NAT] vsm on-board-mode disable
[*BRAS_NAT] commit
Step 2 Set the maximum number of sessions that can be created on the service board in
slot 9 to 6M.
[~BRAS_NAT] license
[~BRAS_NAT-license] active nat session-table size 6 slot 9
[*BRAS_NAT-license] active nat bandwidth-enhance 40 slot 9
[*BRAS_NAT-license] commit
[~BRAS_NAT-license] quit
Step 3 Create a NAT instance named nat1 with ID 1 and bind the NAT instance to the
service board.
[~BRAS_NAT] service-location 1
[*BRAS_NAT-service-location-1] location slot 9
[*BRAS_NAT-service-location-1] commit
[~BRAS_NAT-service-location-1] quit
[~BRAS_NAT] service-instance-group group1
[*BRAS_NAT-service-instance-group-group1] service-location 1
[*BRAS_NAT-service-instance-group-group1] commit
[~BRAS_NAT-service-instance-group-group1] quit
[~BRAS_NAT] nat instance nat1 id 1
[*BRAS_NAT-nat-instance-nat1] service-instance-group group1
[*BRAS_NAT-nat-instance-nat1] commit
[~BRAS_NAT-nat-instance-nat1] quit
Step 4 Configure a NAT address pool with addresses ranging from 11.11.11.101 to
11.11.11.105.
[~BRAS_NAT] nat instance nat1
[~BRAS_NAT-nat-instance-nat1] nat address-group address-group1 group-id 1
[*BRAS_NAT-nat-instance-nat1-nat-address-group-address-group1] section 1 11.11.11.101 11.11.11.105
[*BRAS_NAT-nat-instance-nat1-nat-address-group-address-group1] commit
[~BRAS_NAT-nat-instance-nat1-nat-address-group-address-group1] quit
[~BRAS_NAT-nat-instance-nat1] quit
[~BRAS_NAT-aaa-domain-isp1] quit
[~BRAS_NAT-aaa] domain isp2
[*BRAS_NAT-aaa-domain-isp2] authentication-scheme auth1
[*BRAS_NAT-aaa-domain-isp2] accounting-scheme acct1
[*BRAS_NAT-aaa-domain-isp2] radius-server group rd1
[*BRAS_NAT-aaa-domain-isp2] commit
[~BRAS_NAT-aaa-domain-isp2] ip-pool pool2
[~BRAS_NAT-aaa-domain-isp2] quit
[~BRAS_NAT-aaa] quit
Step 6 Configure a traffic classification rule, a NAT behavior, and a NAT traffic diversion
policy. Then apply the policy.
1. Configure ACLs numbered 6001 and 6002 and ACL rules numbered 1 and 2.
[~BRAS_NAT] acl 6001
[*BRAS_NAT-acl-ucl-6001] rule 1 permit ip source user-group group1
[*BRAS_NAT-acl-ucl-6001] commit
[~BRAS_NAT-acl-ucl-6001] quit
[~BRAS_NAT] acl 6002
[*BRAS_NAT-acl-ucl-6002] rule 2 permit ip source user-group group2
[*BRAS_NAT-acl-ucl-6002] commit
[~BRAS_NAT-acl-ucl-6002] quit
3. Configure two traffic behaviors: b1 and b2. The action of b1 binds traffic to
the NAT instance named nat1, and the action of b2 is deny.
[~BRAS_NAT] traffic behavior b1
[*BRAS_NAT-behavior-b1] nat bind instance nat1
[*BRAS_NAT-behavior-b1] commit
[~BRAS_NAT-behavior-b1] quit
[~BRAS_NAT] traffic behavior b2
[*BRAS_NAT-behavior-b2] deny
[*BRAS_NAT-behavior-b2] commit
[~BRAS_NAT-behavior-b2] quit
4. Define a NAT policy to associate the ACL rule with the traffic behavior.
[~BRAS_NAT] traffic policy p1
[*BRAS_NAT-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS_NAT-trafficpolicy-p1] classifier c2 behavior b2
[*BRAS_NAT-trafficpolicy-p1] commit
[~BRAS_NAT-trafficpolicy-p1] quit
Step 9 Configure a static blackhole route for the NAT address pool and import it into the routing protocol. In this example, OSPF process 1 is used. (Assume that OSPF is used as the IGP to advertise routes on the enterprise's internal network.)
[~BRAS_NAT] ip route-static 11.11.11.0 24 null 0
[*BRAS_NAT] commit
[~BRAS_NAT] ospf 1
[*BRAS_NAT-ospf-1] import-route static
[*BRAS_NAT-ospf-1] commit
[~BRAS_NAT-ospf-1] quit
VPN Instance : -
Address Group : address-group1
NAT Instance : nat1
Public IP : 11.11.11.101
Start Port : -
Port Range : -
Port Total : -
Extend Port Alloc Times : 0
Extend Port Alloc Number : 0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 0/0/0/0
Total/TCP/UDP/ICMP Rev Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Rev Session Current: 0/0/0/0
Total/TCP/UDP/ICMP Port Limit : 0/0/0/0
Total/TCP/UDP/ICMP Port Current : 0/0/0/0
Nat ALG Enable : NULL
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports : 00000
Aging Time(s) : -
Left Time(s) : -
Port Limit Discard Count : 0
Session Limit Discard Count : 0
Fib Miss Discard Count : 0
-->Transmit Packets : 0
-->Transmit Bytes : 0
-->Drop Packets : 0
<--Transmit Packets : 0
<--Transmit Bytes : 0
<--Drop Packets : 0
---------------------------------------------------------------------------
----End
Configuration Files
● BRAS configuration file
#
sysname BRAS_NAT
#
vsm on-board-mode disable
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
interface Virtual-Template1
ppp authentication-mode auto
#
interface GigabitEthernet2/0/0.1
user-vlan 1
pppoe-server bind Virtual-Template 1
bas
access-type layer2-subscriber default-domain authentication isp1
authentication-method ppp
#
interface GigabitEthernet2/0/0.2
user-vlan 2
pppoe-server bind Virtual-Template 1
bas
access-type layer2-subscriber default-domain authentication isp2
authentication-method ppp
#
ip pool pool1 bas local
domain isp2
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool pool2
user-group group2
#
ip route-static 11.11.11.0 24 null 0
#
ospf 1
import-route static
#
return
Networking Requirements
In Figure 1-56, the router performs the NAT function to help PCs within the
enterprise network access the Internet. The router uses GE 0/2/0 to connect to the
enterprise network. The router is connected to the Internet through GE 0/2/1. The
company has five public IP addresses ranging from 11.11.11.101/32 to
11.11.11.105/32.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Service-location group index: 1
● Slot ID (9) of NATA's service board
● Name of a service-instance group (group1)
● NAT instance name (nat1) and index (1)
● NATA's NAT address pool name (address-group1), address pool number (1), a
range of public IP addresses (11.11.11.101 through 11.11.11.105)
● ACL number (3001)
● Traffic classifier (classifier1)
● Traffic behavior (behavior1)
● Traffic policy (policy1)
● Name and IP address of each interface to which a NAT traffic diversion policy
is applied
Procedure
Step 1 Configure basic NAT functions.
1. Configure session resources and bandwidth resources for the service board in
slot 9.
<HUAWEI> system-view
[~HUAWEI] sysname NATA
[*HUAWEI] commit
[~NATA] vsm on-board-mode disable
[*NATA] commit
[~NATA] license
[~NATA-license] active nat session-table size 6 slot 9
[*NATA-license] active nat bandwidth-enhance 40 slot 9
[*NATA-license] commit
[~NATA-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type.
As such, set parameters in the active nat bandwidth-enhance command as required
by the board type.
2. Create a NAT instance named nat1 and bind it to the service board.
[~NATA] service-location 1
[*NATA-service-location-1] location slot 9
[*NATA-service-location-1] commit
[~NATA-service-location-1] quit
[~NATA] service-instance-group group1
[*NATA-service-instance-group-group1] service-location 1
[*NATA-service-instance-group-group1] commit
[~NATA-service-instance-group-group1] quit
[~NATA] nat instance nat1 id 1
[*NATA-nat-instance-nat1] service-instance-group group1
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
A NAT traffic diversion policy on an inbound interface and that on an outbound interface
are mutually exclusive on a device.
● Apply the NAT traffic diversion policy to the inbound interface.
a. Configure an ACL numbered 3001 and an ACL rule numbered 1 to allow
hosts only within the network segment 192.168.10.0/24 to access the
Internet.
[~NATA] acl 3001
[*NATA-acl4-advance-3001] rule 1 permit ip source 192.168.10.0 0.0.0.255
[*NATA-acl4-advance-3001] commit
[~NATA-acl4-advance-3001] quit
b. Configure a traffic classifier.
[~NATA] traffic classifier classifier1
[*NATA-classifier-classifier1] if-match acl 3001
[*NATA-classifier-classifier1] commit
[~NATA-classifier-classifier1] quit
c. Configure a traffic behavior named behavior1, which binds traffic to the
NAT instance named nat1.
[~NATA] traffic behavior behavior1
[*NATA-behavior-behavior1] nat bind instance nat1
[*NATA-behavior-behavior1] commit
[~NATA-behavior-behavior1] quit
d. Configure a NAT traffic policy named policy1 to associate the ACL rule
with the traffic behavior.
[~NATA] traffic policy policy1
[*NATA-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*NATA-trafficpolicy-policy1] commit
[~NATA-trafficpolicy-policy1] quit
e. Apply the NAT traffic diversion policy in the view of GE 0/2/0.
[~NATA] interface gigabitEthernet 0/2/0
[~NATA-GigabitEthernet0/2/0] ip address 192.168.10.1 24
[*NATA-GigabitEthernet0/2/0] traffic-policy policy1 inbound
[*NATA-GigabitEthernet0/2/0] commit
[~NATA-GigabitEthernet0/2/0] quit
● Apply the NAT traffic diversion policy to the outbound interface.
a. Configure an ACL numbered 3001 and an ACL rule numbered 1 to allow
hosts only within the network segment 192.168.10.0/24 to access the
Internet.
[~NATA] acl 3001
[*NATA-acl4-advance-3001] rule 1 permit ip source 192.168.10.0 0.0.0.255
[*NATA-acl4-advance-3001] commit
[~NATA-acl4-advance-3001] quit
After centralized NAT is configured, routes to NAT public addresses need to be advertised
so that the Internet can learn such routes.
----End
Configuration Files
● NATA configuration file when a NAT traffic diversion policy is used on an
inbound interface
#
sysname NATA
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
port-range 1024
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.101 11.11.11.105
nat outbound 3001 address-group address-group1
nat filter mode full-cone
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 1.1.1.1 255.255.255.0
#
return
port-range 1024
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.101 11.11.11.105
nat outbound 3001 address-group address-group1
nat filter mode full-cone
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 1.1.1.1 255.255.255.0
nat bind acl 3001 instance nat1 precedence 0
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
#
return
Networking Requirements
In Figure 1-57, the router performs the NAT function to help PCs within the
enterprise network access the Internet. The router uses GE0/2/0 to connect to the
enterprise network and uses GE0/2/1 to connect to the Internet. The enterprise is
assigned five public IP addresses: 11.11.11.101/32 through 11.11.11.105/32.
Figure 1-57 shows IP addresses of interfaces. The configuration requirements are
as follows:
● Only PCs on the network segment of 192.168.10.0/24 can access the Internet.
● Many-to-many NAT needs to be performed for IP addresses between the
private and public networks.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● NAT instance name (nat1) and index (1)
● NATA's NAT address pool name (address-group1), address pool number (1), a
range of public IP addresses (11.11.11.101 through 11.11.11.105)
● ACL number: 3001
● Name and IP address of the interface to which a NAT traffic diversion policy is
applied
Procedure
Step 1 Configure basic NAT functions.
1. Create a NAT instance named nat1.
[~NATA] nat instance nat1 id 1 simple-configuration
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
Step 2 Configure a NAT traffic diversion policy. Simplified NAT supports only outbound
interface-based traffic diversion.
1. Configure an ACL numbered 3001 and an ACL rule numbered 1 to allow hosts
only within the network segment 192.168.10.0/24 to access the Internet.
[~NATA] acl 3001
[*NATA-acl4-advance-3001] rule 1 permit ip source 192.168.10.0 0.0.0.255
[*NATA-acl4-advance-3001] commit
[~NATA-acl4-advance-3001] quit
2. Apply the ACL-based traffic policy in the view of GE0/2/1. You can bind the
traffic diversion policy either to the instance or to the address pool on one
interface.
– Bind the policy to the address pool.
[~NATA] interface gigabitEthernet 0/2/1
[~NATA-GigabitEthernet0/2/1] ip address 1.1.1.1 24
[*NATA-GigabitEthernet0/2/1] nat bind acl 3001 address-group address-group1
[*NATA-GigabitEthernet0/2/1] commit
[~NATA-GigabitEthernet0/2/1] quit
----End
Configuration Files
NATA configuration file when a NAT traffic diversion policy is used on an
outbound interface (In this configuration file, the traffic diversion policy is bound
to the NAT address pool.)
#
sysname NATA
#
nat instance nat1 id 1 simple-configuration
#
nat address-group address-group1 group-id 1 11.11.11.101 11.11.11.105
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 1.1.1.1 255.255.255.0
nat bind acl 3001 address-group address-group1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
#
return
Networking Requirements
On the network shown in Figure 1-58, the router performs the NAT function to help PCs within the enterprise network access the Internet. The NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M uses GE 0/1/0 to connect to the enterprise network and GE 0/1/1 to connect to the Internet. The company has five public IP addresses ranging from 11.11.11.101/32 to 11.11.11.105/32.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Service-location group index: 1
● Service-instance group name: group1
● NAT instance name: nat1; NAT instance index: 1
● NAT address pool name for NAT device: address-group1; NAT address pool ID:
1; IP address segment: 11.11.11.101 to 11.11.11.105
● ACL number: 3001
● Name and IP address of the interface to which a NAT traffic diversion policy is
applied
Procedure
Step 1 Configure basic NAT functions.
1. Create a NAT instance named nat1 and bind it to the service board.
<HUAWEI> system-view
[~HUAWEI] sysname NAT
[*HUAWEI] commit
[~NAT] service-location 1
[*NAT-service-location-1] location follow-forwarding-mode
[*NAT-service-location-1] commit
[~NAT-service-location-1] quit
[~NAT] service-instance-group group1
[*NAT-service-instance-group-group1] service-location 1
[*NAT-service-instance-group-group1] commit
[~NAT-service-instance-group-group1] quit
[~NAT] nat instance nat1 id 1
[*NAT-nat-instance-nat1] service-instance-group group1
[*NAT-nat-instance-nat1] commit
[~NAT-nat-instance-nat1] quit
2. Configure a NAT address pool and specify a range of IP addresses of
11.11.11.101 through 11.11.11.105 in the pool.
[~NAT] nat instance nat1 id 1
[~NAT-nat-instance-nat1] nat address-group address-group1 group-id 1
[*NAT-nat-instance-nat1-nat-address-group-address-group1] section 1 11.11.11.101 11.11.11.105
[*NAT-nat-instance-nat1-nat-address-group-address-group1] commit
[~NAT-nat-instance-nat1-nat-address-group-address-group1] quit
[~NAT-nat-instance-nat1] quit
NOTE
A NAT traffic diversion policy on an inbound interface and that on an outbound interface
are mutually exclusive on a device.
● Apply the NAT traffic diversion policy to the inbound interface.
a. Configure an ACL numbered 3001 and an ACL rule numbered 1 to allow
hosts only within the network segment 192.168.10.0/24 to access the
Internet.
[~NAT] acl 3001
[*NAT-acl4-advance-3001] rule 1 permit ip source 192.168.10.0 0.0.0.255
[*NAT-acl4-advance-3001] commit
[~NAT-acl4-advance-3001] quit
d. Configure a NAT traffic policy named policy1 to associate the ACL rule
with the traffic behavior.
[~NAT] traffic policy policy1
[*NAT-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*NAT-trafficpolicy-policy1] commit
[~NAT-trafficpolicy-policy1] quit
e. Apply the NAT traffic diversion policy in the GE 0/1/0 interface view.
[~NAT] interface gigabitEthernet 0/1/0
[~NAT-GigabitEthernet0/1/0] ip address 192.168.10.1 24
[*NAT-GigabitEthernet0/1/0] traffic-policy policy1 inbound
[*NAT-GigabitEthernet0/1/0] commit
[~NAT-GigabitEthernet0/1/0] quit
----End
Configuration Files
● NAT device configuration file when a NAT traffic diversion policy is used on
an inbound interface
#
sysname NAT
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.101 11.11.11.105
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
interface GigabitEthernet 0/1/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet 0/1/1
undo shutdown
ip address 1.1.1.1 255.255.255.0
#
return
● NAT device configuration file when a NAT traffic diversion policy is used on
an outbound interface
#
sysname NAT
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.101 11.11.11.105
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
interface GigabitEthernet 0/1/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
#
interface GigabitEthernet 0/1/1
undo shutdown
ip address 1.1.1.1 255.255.255.0
nat bind acl 3001 instance nat1 precedence 0
#
return
Networking Requirements
In the distributed NAT scenario shown in Figure 1-59, home users access a BRAS through PPPoE, IPoE, web authentication, or other access methods. The BRAS implements user authentication, authorization, and accounting. It also provides NAT services that translate home users' private addresses into public addresses in CPU-based load balancing mode, allowing the home users to access the Internet.
It is required that the IP addresses of the home users (PC1 and PC2 in Figure 1-59) in the user group group1 be evenly distributed to CPUs 0 and 1 of the NAT service board for NAT, so that these users can access the Internet.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a NAT load balancing instance.
2. Configure a CGN global static address pool.
3. Bind the NAT instance to the global address pool.
4. Configure NAT user information and RADIUS authentication on the BRAS.
5. Configure a NAT traffic diversion policy.
6. Configure a NAT traffic conversion policy.
7. Configure a user-side interface.
Data Preparation
To complete the configuration, you need the following data:
● NAT load balancing instance
● Name and ID of the NAT address pool, and name of the global static address
pool to which the NAT address pool is bound
● User group name, ACL number, and UCL number
● Information about the NAT traffic diversion policy
Procedure
1. Create a NAT load balancing instance.
a. Set the maximum number of sessions that can be created on the CPU of
the NAT service board to 6M.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] license
[~BRAS-license] active nat session-table size 6 slot 9
[~BRAS-license] active nat session-table size 6 slot 10
[~BRAS-license] active nat bandwidth-enhance 40 slot 9
[~BRAS-license] active nat bandwidth-enhance 40 slot 10
[~BRAS-license] quit
b. Create a service-instance group named groupa and bind it to service-
location groups 1 and 2.
[~BRAS] service-location 1
[*BRAS-service-location-1] location slot 9
[*BRAS-service-location-1] commit
[~BRAS-service-location-1] quit
[~BRAS] service-location 2
[*BRAS-service-location-2] location slot 10
[*BRAS-service-location-2] commit
[~BRAS-service-location-2] quit
[~BRAS] service-instance-group groupa
[*BRAS-service-instance-group-groupa] service-location 1
[*BRAS-service-instance-group-groupa] service-location 2
[*BRAS-service-instance-group-groupa] commit
[~BRAS-service-instance-group-groupa] quit
c. Create a NAT instance named cpe1 and bind it to the service-instance
group groupa.
[~BRAS] nat instance cpe1 id 11
NOTE
When users are online, if you want to change the address segment of a global address
pool or the length of an assigned address segment, run the section lock command
first. For example:
[~BRAS] nat ip-pool pool1
[*BRAS-nat-ip-pool-pool1] section 0 11.11.11.1 mask 24
[*BRAS-nat-ip-pool-pool1] section 0 lock
[*BRAS-nat-ip-pool-pool1] commit
[~BRAS-nat-ip-pool-pool1] reset nat user nat-ip-pool pool1 section 0
[~BRAS-nat-ip-pool-pool1] undo section 0
[*BRAS-nat-ip-pool-pool1] commit
[~BRAS-nat-ip-pool-pool1] nat-instance subnet initial 24 extend 24
[*BRAS-nat-ip-pool-pool1] commit
[~BRAS-nat-ip-pool-pool1] section 0 1.2.3.4 mask 24
[*BRAS-nat-ip-pool-pool1] commit
[~BRAS-nat-ip-pool-pool1] quit
3. Bind a dynamic address pool in the NAT instance to the global static address
pool.
# In the view of the NAT instance cpe1, configure a dynamic address pool
named group1 and bind it to the global static address pool pool1.
[~BRAS] nat instance cpe1
[~BRAS-nat-instance-cpe1] nat address-group group1 group-id 1 bind-ip-pool pool1
[*BRAS-nat-instance-cpe1] commit
[~BRAS-nat-instance-cpe1] quit
4. Configure NAT user information.
a. Configure the BRAS service on the device so that users can go online. For
details, see AAA and User Management Configuration (Access Users) in
HUAWEI NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M
Configuration Guide-User Access.
[~BRAS] ip pool baspool1 bas local
[~BRAS-ip-pool-baspool1] gateway 10.110.10.101 255.255.255.0
[~BRAS-ip-pool-baspool1] section 1 10.110.10.1 10.110.10.100
[*BRAS-ip-pool-baspool1] dns-server 192.168.7.252
[*BRAS-ip-pool-baspool1] commit
[~BRAS-ip-pool-baspool1] quit
[~BRAS] radius-server group rd1
[*BRAS-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS-radius-rd1] radius-server shared-key huawei
[*BRAS-radius-rd1] commit
[~BRAS-radius-rd1] radius-server type plus11
[~BRAS-radius-rd1] radius-server traffic-unit kbyte
[~BRAS-radius-rd1] quit
[~BRAS] aaa
[~BRAS-aaa] authentication-scheme auth1
[*BRAS-aaa-authen-auth1] authentication-mode radius
[*BRAS-aaa-authen-auth1] commit
[~BRAS-aaa-authen-auth1] quit
[~BRAS-aaa] accounting-scheme acct1
[*BRAS-aaa-accounting-acct1] accounting-mode radius
[*BRAS-aaa-accounting-acct1] commit
[~BRAS-aaa-accounting-acct1] quit
[~BRAS-aaa] domain isp1
[*BRAS-aaa-domain-isp1] authentication-scheme auth1
[*BRAS-aaa-domain-isp1] accounting-scheme acct1
[*BRAS-aaa-domain-isp1] radius-server group rd1
[*BRAS-aaa-domain-isp1] commit
[~BRAS-aaa-domain-isp1] ip-pool baspool1
c. Configure a traffic behavior named b1, which binds traffic to the NAT
instance named cpe1.
[~BRAS] traffic behavior b1
[*BRAS-behavior-b1] nat bind instance cpe1
[*BRAS-behavior-b1] commit
[~BRAS-behavior-b1] quit
d. Define a NAT policy to associate the ACL rule with the traffic behavior.
[~BRAS] traffic policy p1
[*BRAS-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS-trafficpolicy-p1] commit
[~BRAS-trafficpolicy-p1] quit
[*BRAS-nat-instance-cpe1] commit
[~BRAS-nat-instance-cpe1] quit
7. Configure a user-side interface.
[~BRAS] interface Virtual-Template 1
[*BRAS-Virtual-Template1] ppp authentication-mode auto
[*BRAS-Virtual-Template1] commit
[~BRAS-Virtual-Template1] quit
[~BRAS] interface GigabitEthernet 0/2/0.1
[*BRAS-GigabitEthernet0/2/0.1] commit
[~BRAS-GigabitEthernet0/2/0.1] user-vlan 1
[~BRAS-GigabitEthernet0/2/0.1-vlan-1-1] quit
[~BRAS-GigabitEthernet0/2/0.1] pppoe-server bind Virtual-Template 1
[*BRAS-GigabitEthernet0/2/0.1] commit
[~BRAS-GigabitEthernet0/2/0.1] bas
[~BRAS-GigabitEthernet0/2/0.1-bas] access-type layer2-subscriber default-domain authentication
isp1
[*BRAS-GigabitEthernet0/2/0.1-bas] authentication-method ppp
[*BRAS-GigabitEthernet0/2/0.1-bas] commit
[~BRAS-GigabitEthernet0/2/0.1-bas] quit
[~BRAS-GigabitEthernet0/2/0.1] quit
8. Verify the configuration.
# Display detailed user information on CPU 0 of the service board in slot 9.
[~BRAS] display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Total number: 1.
---------------------------------------------------------------
User Type : NAT444
CPE IP : 10.110.10.100
User ID : 2
VPN Instance : -
Address Group : group1
NoPAT Address Group : -
NAT Instance : cpe1
Public IP : 11.11.11.1
Start Port : 1152
Port Range : 0
Port Total : 5
Extend Port Alloc Times : 0
Extend Port Alloc Number : 0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 5/5/0/0
Total/TCP/UDP/ICMP Rev Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Rev Session Current: 0/0/0/0
Total/TCP/UDP/ICMP Port Limit : 0/0/0/0
Total/TCP/UDP/ICMP Port Current : 5/5/0/0
Nat ALG Enable : NULL
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports : 00000
Aging Time(s) : -
Left Time(s) : -
Port Limit Discard Count : 0
Session Limit Discard Count : 0
Fib Miss Discard Count : 0
-->Transmit Packets : 15
-->Transmit Bytes : 660
-->Drop Packets : 0
<--Transmit Packets : 40
<--Transmit Bytes : 1740
<--Drop Packets : 0
---------------------------------------------------------------
# Display load balancing statistics of the NAT instance named cpe1 on the
service board in slot 9.
[~BRAS] display nat statistics global nat-instance cpe1 slot 9
Slot: 9
---------------------------------------------------------------------------
Session table number :10
User table number :1
Total setup sessions :10
Total teardown sessions :10
---------------------------------------------------------------------------
Slot: 10
---------------------------------------------------------------------------
Session table number :10
User table number :1
Total setup sessions :10
Total teardown sessions :10
---------------------------------------------------------------------------
Configuration Files
BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat session-table size 6 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
user-group group1
#
ip pool baspool1 bas local
gateway 10.110.10.101 255.255.255.0
section 1 10.110.10.1 10.110.10.100
dns-server 192.168.7.252
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
interface Virtual-Template1
ppp authentication-mode auto
#
service-location 1
location slot 9
#
service-location 2
location slot 10
#
service-instance-group groupa
service-location 1
service-location 2
#
nat ip-pool pool1
section 0 11.11.11.1 mask 24
nat-instance subnet length initial 25 extend 27
nat-instance ip used-threshold upper-limit 60 lower-limit 40
nat alarm ip threshold 60
#
acl number 3001
rule 10 permit ip source 10.110.10.0 0.0.0.255
#
acl number 6001
rule 1 permit ip source user-group group1
#
nat instance cpe1 id 11
service-instance-group groupa
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound 3001 address-group group1
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance cpe1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
aaa
authentication-scheme auth1
authentication-mode RADIUS
#
accounting-scheme acct1
accounting-mode RADIUS
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool baspool1
user-group group1 bind nat instance cpe1
#
interface GigabitEthernet0/2/0.1
user-vlan 1
pppoe-server bind Virtual-Template 1
bas
access-type layer2-subscriber default-domain authentication isp1
authentication-method ppp
#
return
Networking Requirements
On the network shown in Figure 1-60, in a centralized NAT scenario, the router
performs the NAT function to help PCs within an enterprise network access the
Internet. The router uses the Ethernet interface 0/2/0 to connect to the enterprise
network.
Figure 1-60 shows IP addresses of interfaces. The configuration requirements are
as follows:
● Only PCs on the network segment of 192.168.10.0/24 can access the Internet.
● Many-to-many NAT needs to be performed for IP addresses between the
private and public networks.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a NAT load balancing instance.
2. Configure a CGN global static address pool.
3. Bind the NAT instance to the global address pool.
4. Configure a NAT traffic diversion policy.
5. Configure a NAT traffic conversion policy.
Data Preparation
To complete the configuration, you need the following data:
● NAT load balancing instance
● ID of the NAT address pool and the name of the global static address pool to
be bound to the NAT instance
● Information about the NAT traffic diversion policy
NOTE
If the address pool usage is too high, perform the following operations. Otherwise,
address pool resources may be unevenly allocated, and some users may fail to access
the Internet.
● Before CPU capacity expansion, expand address pool resources so that new CPUs
can be assigned normal initial address segments.
● After capacity expansion, you are advised to run the reset nat session table
command to clear the sessions and perform load balancing again.
Procedure
1. Create a NAT load balancing instance.
a. Set the maximum number of sessions that can be created on the CPU of
the NAT service board to 6M.
<HUAWEI> system-view
c. Bind the NAT instance named cpe1 to the service-instance group named
groupa.
[~CGNA] nat instance cpe1 id 11
[*CGNA-nat-instance-cpe1] service-instance-group groupa
[*CGNA-nat-instance-cpe1] commit
[~CGNA-nat-instance-cpe1] quit
NOTE
When users are online, if you want to change the address segment of a global address
pool or the length of an assigned address segment, run the section lock command
first. For example:
[~CGNA] nat ip-pool pool1
[*CGNA-nat-ip-pool-pool1] section 0 11.11.11.1 mask 24
[*CGNA-nat-ip-pool-pool1] section 0 lock
[*CGNA-nat-ip-pool-pool1] commit
[~CGNA-nat-ip-pool-pool1] reset nat user nat-ip-pool pool1 section 0
[~CGNA-nat-ip-pool-pool1] undo section 0
[*CGNA-nat-ip-pool-pool1] commit
[~CGNA-nat-ip-pool-pool1] nat-instance subnet initial 24 extend 24
[*CGNA-nat-ip-pool-pool1] commit
[~CGNA-nat-ip-pool-pool1] section 0 1.2.3.4 mask 24
[*CGNA-nat-ip-pool-pool1] commit
[~CGNA-nat-ip-pool-pool1] quit
c. Configure a traffic behavior named b1, which binds traffic to the NAT
instance named cpe1.
[~CGNA] traffic behavior b1
[*CGNA-behavior-b1] nat bind instance cpe1
[*CGNA-behavior-b1] commit
[~CGNA-behavior-b1] quit
d. Define a NAT policy to associate the ACL rule with the traffic behavior.
[~CGNA] traffic policy p1
[*CGNA-trafficpolicy-p1] classifier c1 behavior b1
[*CGNA-trafficpolicy-p1] commit
[~CGNA-trafficpolicy-p1] quit
# Display load balancing statistics of the NAT instance named cpe1 on the
service board in slot 9.
[~CGNA] display nat statistics global nat-instance cpe1 slot 9
Slot: 9
---------------------------------------------------------------------------
Session table number :20
User table number :0
Total setup sessions :10
Total teardown sessions :10
---------------------------------------------------------------------------
Slot: 10
---------------------------------------------------------------------------
Session table number :30
User table number :0
Total setup sessions :15
Total teardown sessions :15
---------------------------------------------------------------------------
Configuration Files
CGNA configuration file
#
sysname CGNA
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat session-table size 6 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
service-location 1
location slot 9
#
service-location 2
location slot 10
#
service-instance-group groupa
service-location 1
service-location 2
#
Example for Configuring Centralized NAT Providing Backup for Distributed NAT
This section provides an example for configuring centralized NAT providing backup
for distributed NAT.
Networking Requirements
As shown in Figure 1-61, the BRAS is equipped with a dedicated board. The NAT
device is attached to the CR to back up data for the BRAS. In normal cases, the
BRAS implements NAT for user traffic. If the BRAS becomes faulty, user traffic is
switched to the NAT device for NAT implementation.
It is required that computers with IP addresses on the network segment
10.10.10.0/24 can access the Internet.
Interface 1, interface 2, and interface 3 in this example represent GE 0/2/1, GE 0/2/2, and
GE 0/2/0, respectively.
Device  Interface   IP Address
BRAS    GE 0/2/0.1  —
BRAS    GE 0/2/1    192.168.10.1/24
CR      GE 0/2/2    192.168.10.2/24
CR      GE 0/2/1    192.168.11.1/24
Configuration Roadmap
The configuration roadmap is as follows:
1. For the distributed NAT device, the configuration roadmap for centralized NAT
providing backup for distributed NAT is as follows:
a. Configure basic NAT functions.
b. Configure NAT user information and RADIUS authentication on the BRAS.
c. Configure a NAT traffic diversion policy.
d. Configure a NAT traffic conversion policy.
2. For the CR, the configuration roadmap for redirecting user traffic to the NAT
device is as follows:
a. Configure a traffic diversion policy.
b. Configure an inbound interface redirection policy.
3. For the centralized NAT device, the configuration roadmap for centralized
NAT is as follows:
a. Configure basic NAT functions.
b. Configure a NAT traffic diversion policy.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 On the BRAS, configure centralized NAT providing backup for distributed NAT.
1. Configure basic NAT functions.
a. Set the maximum number of sessions that can be created on the NAT
service board in slot 9 to 6M.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[~BRAS] license
[~BRAS-license] active nat session-table size 6 slot 9
[~BRAS-license] active nat bandwidth-enhance 40 slot 9
[~BRAS-license] quit
b. Create a NAT instance named nat1 and bind it to the service board.
[~BRAS] service-location 1
[*BRAS-service-location-1] location slot 9
[*BRAS-service-location-1] commit
[~BRAS-service-location-1] quit
[~BRAS] service-instance-group group1
[*BRAS-service-instance-group-group1] service-location 1
[*BRAS-service-instance-group-group1] commit
[~BRAS-service-instance-group-group1] quit
[~BRAS] nat instance nat1 id 1
[*BRAS-nat-instance-nat1] service-instance-group group1
[*BRAS-nat-instance-nat1] commit
[~BRAS-nat-instance-nat1] quit
a. Configure the BRAS service on the device so that users can go online. For
details, see AAA and User Management Configuration (Access Users) in
HUAWEI NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M
Configuration Guide-User Access.
[~BRAS] ip pool pool1 bas local
[~BRAS-ip-pool-pool1] gateway 10.10.10.1 255.255.255.0
[*BRAS-ip-pool-pool1] commit
[~BRAS-ip-pool-pool1] section 1 10.10.10.1 10.10.10.100
[~BRAS-ip-pool-pool1] dns-server 192.168.7.252
[*BRAS-ip-pool-pool1] commit
[~BRAS-ip-pool-pool1] quit
[~BRAS] radius-server group rd1
[*BRAS-radius-rd1] radius-server authentication 192.168.7.249 1812 weight 0
[*BRAS-radius-rd1] radius-server accounting 192.168.7.249 1813 weight 0
[*BRAS-radius-rd1] radius-server shared-key-cipher YsHsjx_202206
[*BRAS-radius-rd1] commit
[~BRAS-radius-rd1] quit
[~BRAS] aaa
[~BRAS-aaa] authentication-scheme auth1
[*BRAS-aaa-authen-auth1] authentication-mode radius
[*BRAS-aaa-authen-auth1] quit
[*BRAS-aaa] accounting-scheme acct1
[*BRAS-aaa-accounting-acct1] accounting-mode radius
[*BRAS-aaa-accounting-acct1] quit
[*BRAS-aaa] commit
[*BRAS-aaa] domain isp1
[~BRAS-aaa-domain-isp1] authentication-scheme auth1
[*BRAS-aaa-domain-isp1] accounting-scheme acct1
[*BRAS-aaa-domain-isp1] commit
[~BRAS-aaa-domain-isp1] ip-pool pool1
[*BRAS-aaa-domain-isp1] commit
[~BRAS-aaa-domain-isp1] quit
[~BRAS-aaa] quit
Step 2 Configure policy-based routing on the CR to redirect user traffic to the NAT device.
1. Configure a traffic diversion policy.
[~CR] acl 2001
[*CR-acl4-basic-2001] rule 10 permit source 10.10.10.0 0.0.0.255
[*CR-acl4-basic-2001] commit
[~CR-acl4-basic-2001] quit
[~CR] traffic classifier c1
[*CR-classifier-c1] if-match acl 2001
[*CR-classifier-c1] commit
[~CR-classifier-c1] quit
[~CR] traffic behavior b1
[*CR-behavior-b1] redirect ip-nexthop 192.168.11.2
[*CR-behavior-b1] commit
[~CR-behavior-b1] quit
[~CR] traffic policy p1
[*CR-policy-p1] classifier c1 behavior b1
[*CR-policy-p1] commit
[~CR-policy-p1] quit
2. Configure an inbound interface redirection policy.
[~CR] interface gigabitEthernet 0/2/2
[~CR-GigabitEthernet0/2/2] ip address 192.168.10.2 24
[*CR-GigabitEthernet0/2/2] traffic-policy p1 inbound
[*CR-GigabitEthernet0/2/2] commit
[~CR-GigabitEthernet0/2/2] quit
3. Configure a static route.
[~CR] ip route-static 11.11.11.0 255.255.255.0 192.168.10.1
[*CR] commit
----End
Configuration Files
● BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
radius-server group rd1
radius-server authentication 192.168.7.249 1812 weight 0
radius-server accounting 192.168.7.249 1813 weight 0
radius-server shared-key-cipher %^%#glhJ;yPG#$=tC&(Is%q!S_";(k.Ef$:978$$e:TY%^%
#
interface GigabitEthernet0/2/0.1
user-vlan 1
bas
access-type layer2-subscriber default-domain authentication isp1
authentication-method bind
#
ip pool pool1 bas local
gateway 10.10.10.1 255.255.255.0
section 1 10.10.10.1 10.10.10.100
dns-server 192.168.7.252
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.101 11.11.11.105
nat outbound 3001 address-group address-group1
Networking Requirements
On the network shown in Figure 1-62, the CPE performs NAT on the packets sent
by PCs on the intranet and sends the packets to the BRAS. The BRAS connects to
the RADIUS server and to the IPv4 network through the CR. The NAT device
connects to the CR in off-path mode. The NAT device is connected to the CR
through GE 0/2/0. The enterprise has 100 public IP addresses, ranging from
11.11.11.1 to 11.11.11.100, on the 11.11.11.0/24 segment.
It is required that only PCs with IP addresses in the range of 10.0.0.1 to
10.0.0.255 on the 10.0.0.0/24 segment can access the Internet.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
● Index of the service-location group: 1; name of the service-instance group:
group1; ID of the NAT instance named nat1: 1
● IDs of the private and public address pools for the static source tracing
algorithm
● Name and IP address of each interface to which a NAT traffic diversion policy
is applied
● Private network address segment for NAT static source tracing: 10.0.0.1 to
10.0.0.255; public network address segment for NAT static source tracing
11.11.11.1 to 11.11.11.100
● Port number range for the public address pool: 256 to 1023; port segment
size: 256
● ACL number: 3001; traffic classifier name: c1; traffic behavior name: b1; traffic
policy name: p1
Procedure
Step 1 Set the maximum number of sessions that can be created on the service board in
slot 9 to 6M.
<HUAWEI> system-view
[~HUAWEI] vsm on-board-mode disable
[*HUAWEI] commit
[~HUAWEI] license
[~HUAWEI-license] active nat session-table size 6 slot 9
[*HUAWEI-license] active nat bandwidth-enhance 40 slot 9
[*HUAWEI-license] commit
[~HUAWEI-license] quit
Step 3 Configure a NAT instance named nat1 and bind it to the NAT service board.
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] service-instance-group group1
[*HUAWEI-nat-instance-nat1] commit
[~HUAWEI-nat-instance-nat1] quit
Step 4 Configure a group of NAT static source tracing algorithm parameters, with the
private address pool containing IP addresses from 10.0.0.1 to 10.0.0.255, the public
address pool containing IP addresses from 11.11.11.1 to 11.11.11.100, the port
range from 256 to 1023, and port segment size as 256.
[~HUAWEI] nat static-mapping
[*HUAWEI-nat-static-mapping] inside-pool 1
[*HUAWEI-nat-static-mapping-inside-pool-1] section 1 10.0.0.1 10.0.0.255
[*HUAWEI-nat-static-mapping-inside-pool-1] quit
[*HUAWEI-nat-static-mapping] global-pool 1
[*HUAWEI-nat-static-mapping-global-pool-1] section 1 11.11.11.1 11.11.11.100
[*HUAWEI-nat-static-mapping-global-pool-1] quit
[*HUAWEI-nat-static-mapping] static-mapping 10 inside-pool 1 global-pool 1 port-range 256 1023 port-size 256
[*HUAWEI-nat-static-mapping] commit
[~HUAWEI-nat-static-mapping] quit
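Conceptually, a static mapping such as the one in Step 4 pairs each private address with a fixed public IP and port block, so translation requires no per-session state. The sketch below illustrates the arithmetic under the assumption that private addresses are paired with public IP/port blocks sequentially; the function name and the ordering are illustrative choices, not the device's documented internals.

```python
import ipaddress

def static_map(private_ip, inside_start, global_start,
               port_start=256, port_end=1023, port_size=256):
    """Return the (public IP, first port, last port) block for a private IP.

    Illustrative sketch only: assumes sequential pairing of private
    addresses with public IP/port blocks, which may differ from the
    device's internal ordering.
    """
    # Port blocks each public IP can offer: (1023 - 256 + 1) / 256 = 3 here.
    blocks_per_ip = (port_end - port_start + 1) // port_size
    # Offset of this private address within the inside pool.
    index = (int(ipaddress.ip_address(private_ip))
             - int(ipaddress.ip_address(inside_start)))
    public_ip = ipaddress.ip_address(global_start) + index // blocks_per_ip
    first_port = port_start + (index % blocks_per_ip) * port_size
    return str(public_ip), first_port, first_port + port_size - 1

# Step 4 parameters: inside pool from 10.0.0.1, global pool from 11.11.11.1.
print(static_map("10.0.0.1", "10.0.0.1", "11.11.11.1"))
# ('11.11.11.1', 256, 511) under the sequential-assignment assumption
print(static_map("10.0.0.4", "10.0.0.1", "11.11.11.1"))
# ('11.11.11.2', 256, 511)
```

With three 256-port blocks per public IP, the fourth private host rolls over to the second public address, so the 100-address global pool can cover up to 300 private hosts.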
Step 5 Enable the NAT static source tracing algorithm in the NAT instance named nat1
and specify the algorithm ID as 10.
[~HUAWEI] nat instance nat1
[~HUAWEI-nat-instance-nat1] nat bind static-mapping 10
[*HUAWEI-nat-instance-nat1] commit
[~HUAWEI-nat-instance-nat1] quit
3. Configure a traffic behavior, which binds traffic to the NAT instance named
nat1.
[~HUAWEI] traffic behavior b1
[*HUAWEI-behavior-b1] nat bind instance nat1
[*HUAWEI-behavior-b1] commit
[~HUAWEI-behavior-b1] quit
4. Define a NAT traffic policy to associate the ACL rule with the traffic behavior.
[~HUAWEI] traffic policy p1
[*HUAWEI-trafficpolicy-p1] classifier c1 behavior b1
[*HUAWEI-trafficpolicy-p1] commit
[~HUAWEI-trafficpolicy-p1] quit
----End
Configuration Files
NAT device configuration file
#
sysname HUAWEI
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
nat static-mapping
inside-pool 1
section 1 10.0.0.1 10.0.0.255
global-pool 1
section 1 11.11.11.1 11.11.11.100
static-mapping 10 inside-pool 1 global-pool 1 port-range 256 1023 port-size 256
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat bind static-mapping 10
#
acl number 3001
rule 1 permit ip source 10.0.0.0 0.0.0.255
#
traffic classifier c1 operator or
if-match acl 3001 precedence 1
#
traffic behavior b1
nat bind instance nat1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
traffic-policy p1 inbound
#
return
Example for Configuring the Centralized NAT Static Source Tracing Algorithm and
Load Balancing
This section provides an example for configuring the centralized NAT static source
tracing algorithm and load balancing, which implements many-to-many
translation between a company's private IP addresses and public IP addresses and
allows only PCs on a specified network segment to access the Internet.
Networking Requirements
In Figure 1-63, the router performs the NAT function to help PCs within the
enterprise network access the Internet. The router uses GE 0/2/0 to connect to the
enterprise network. The router is connected to the Internet through GE 0/2/1. The
enterprise is assigned 100 public IP addresses of 11.11.11.101/32 through
11.11.11.200/32. When NAT load balancing is not used, a NAT service can be
bound to only one CPU of a service board. As a result, the forwarding performance
of the service board in a NAT instance easily reaches its upper limit. In NAT load
balancing mode, multiple service board CPUs can be bound to one NAT instance
to increase the NAT bandwidth available to the same type of users. This reduces
the number of instance configurations as well as manual address pool allocation
and manual traffic intervention. In addition, because the static source tracing
algorithm derives the private IP address from the public IP address and port
number, the device does not need to send source tracing logs to record intranet
users' access to the external network, which enhances network security.
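The security benefit described above rests on the mapping being invertible: given a public IP address and port, the operator can compute the private source directly instead of searching logs. A minimal sketch of that reverse lookup using this example's parameters (global pool from 11.11.11.101, inside pool from 192.168.10.128, ports 256 to 1279 in 256-port blocks), again assuming sequential block assignment as an illustrative choice:

```python
import ipaddress

def trace_source(public_ip, port, global_start="11.11.11.101",
                 inside_start="192.168.10.128",
                 port_start=256, port_end=1279, port_size=256):
    """Recover the private source address from a public IP and port.

    Illustrative inverse of the static mapping; assumes sequential
    assignment of private hosts to public IP/port blocks.
    """
    # Four 256-port blocks per public IP: (1279 - 256 + 1) / 256 = 4.
    blocks_per_ip = (port_end - port_start + 1) // port_size
    ip_offset = (int(ipaddress.ip_address(public_ip))
                 - int(ipaddress.ip_address(global_start)))
    block = (port - port_start) // port_size
    index = ip_offset * blocks_per_ip + block
    return str(ipaddress.ip_address(inside_start) + index)

print(trace_source("11.11.11.101", 300))   # first block of the first public IP
print(trace_source("11.11.11.102", 900))   # third block of the second public IP
```

Because the computation is deterministic, no per-session source tracing log is needed to answer "which private host used this public IP and port".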
Figure 1-63 shows IP addresses of interfaces. The configuration requirements are
as follows:
● Only PCs on the network segment of 192.168.10.0/24 can access the Internet.
● Many-to-many NAT needs to be performed for IP addresses between the
private and public networks.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a NAT load balancing instance.
2. Configure a NAT traffic diversion policy.
Data Preparation
To complete the configuration, you need the following data:
● NAT load balancing instance
● Information about the NAT traffic diversion policy
● IDs of the private and public address pools for the static source tracing
algorithm
● Private and public address segments of the static source tracing algorithm
● Port number range and port range size of the public address pool in the static
source tracing algorithm
Procedure
1. Create a NAT load balancing instance.
a. Set the maximum number of sessions that can be created on the CPU of
the NAT service board to 6M.
<HUAWEI> system-view
[~HUAWEI] sysname CGNA
[*HUAWEI] commit
[~CGNA] vsm on-board-mode disable
[*CGNA] commit
[~CGNA] license
[~CGNA-license] active nat session-table size 6 slot 9
[*CGNA-license] active nat session-table size 6 slot 10
[*CGNA-license] active nat bandwidth-enhance 40 slot 9
[*CGNA-license] active nat bandwidth-enhance 40 slot 10
[*CGNA-license] commit
[~CGNA-license] quit
c. Configure a traffic behavior named b1, which binds traffic to the NAT
instance named cpe1.
[~CGNA] traffic behavior b1
[*CGNA-behavior-b1] nat bind instance cpe1
[*CGNA-behavior-b1] commit
[~CGNA-behavior-b1] quit
d. Configure a NAT traffic diversion policy and associate the ACL rule with
the traffic behavior.
[~CGNA] traffic policy p1
[*CGNA-trafficpolicy-p1] classifier c1 behavior b1
[*CGNA-trafficpolicy-p1] commit
[~CGNA-trafficpolicy-p1] quit
4. Enable the static source tracing algorithm in the NAT instance named cpe1
and set the algorithm parameter ID to 10.
[~CGNA] nat instance cpe1
[~CGNA-nat-instance-cpe1] nat bind static-mapping 10
[*CGNA-nat-instance-cpe1] commit
[~CGNA-nat-instance-cpe1] quit
Configuration Files
CGNA configuration file
#
sysname CGNA
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat session-table size 6 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
nat static-mapping
inside-pool 1
section 1 192.168.10.128 192.168.10.255
global-pool 1
section 1 11.11.11.101 11.11.11.200
static-mapping 10 inside-pool 1 global-pool 1 port-range 256 1279 port-size 256
#
service-location 1
location slot 9
#
service-location 2
location slot 10
#
service-instance-group groupa
service-location 1
service-location 2
#
nat instance cpe1 id 11
service-instance-group groupa
nat bind static-mapping 10
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier c1 operator or
if-match acl 3001 precedence 1
#
traffic behavior b1
nat bind instance cpe1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
interface GigabitEthernet 0/2/0
ip address 192.168.10.1 255.255.255.0
undo shutdown
traffic-policy p1 inbound
#
return
Example for Configuring the Distributed NAT Static Source Tracing Algorithm and
Load Balancing
This section provides an example for configuring the distributed NAT static source
tracing algorithm and load balancing. After the configuration is complete, the IP
addresses of multiple home users can be load-balanced across different CPUs for
NAT in multiple NAT service instances.
Networking Requirements
In the distributed NAT scenario shown in Figure 1-64, home users access the
Internet through the BRAS using PPPoE, IPoE, web authentication, or other access
methods. In addition to performing user authentication, authorization, and
accounting, the BRAS provides the NAT service to translate home users' private IP
addresses into public ones. To improve network reliability, load balancing is
implemented across different CPUs. In addition, the static source tracing
algorithm is used to derive private IP addresses from public IP addresses and port
numbers so that home users can access the Internet. With this algorithm, the
BRAS does not need to send source tracing logs to record intranet users' access to
the external network, thereby enhancing network security.
The configuration requirements are as follows:
● Load balancing can be implemented by configuring multiple user groups and
binding each user group to a NAT instance.
● Each home user (such as PC1 and PC2 in Figure 1-64) dials up and is
assigned to one of the user groups bound to the NAT instances.
● The IP addresses of multiple home users can be load-balanced across
different CPUs for NAT in multiple NAT service instances.
Figure 1-64 Distributed NAT static source tracing and load balancing
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
1. Create NAT load balancing instances.
2. Configure the NAT static source tracing algorithm mapping.
3. Bind a dynamic NAT address pool to a global address pool.
4. Configure NAT user information and RADIUS authentication on the BRAS.
5. Configure a NAT traffic diversion policy.
6. Configure a NAT traffic conversion policy.
7. Configure a user-side interface.
Data Preparation
To complete the configuration, you need the following data:
● NAT instance
● User group name and UCL number
● Information about the NAT traffic diversion policy
● IDs of the private and public address pools for the static source tracing
algorithm
● Private and public address segments of the static source tracing algorithm
● Port number range and port range size of the public address pool in the static
source tracing algorithm
Procedure
1. Create a NAT instance.
a. Configure licenses.
<HUAWEI>system-view
[~HUAWEI]sysname BRAS
[*HUAWEI]commit
[~BRAS]vsm on-board-mode disable
[*BRAS]commit
[~BRAS]license
[*BRAS-license]active nat session-table size 32 slot 9
[*BRAS-license]active nat session-table size 32 slot 10
[*BRAS-license]active nat bandwidth-enhance 40 slot 9
[*BRAS-license]active nat bandwidth-enhance 40 slot 10
[*BRAS-license]commit
[~BRAS-license]quit
b. Bind service-instance groups group1 and group2 to service-location
groups 1 and 2, respectively.
[~BRAS]service-location 1
[*BRAS-service-location-1]location slot 9
[*BRAS-service-location-1]commit
[~BRAS-service-location-1]quit
[~BRAS]service-location 2
[*BRAS-service-location-2]location slot 10
[*BRAS-service-location-2]commit
[~BRAS-service-location-2]quit
[~BRAS]service-instance-group group1
[*BRAS-service-instance-group-group1]service-location 1
[*BRAS-service-instance-group-group1]commit
[~BRAS-service-instance-group-group1]quit
[~BRAS]service-instance-group group2
[*BRAS-service-instance-group-group2]service-location 2
[*BRAS-service-instance-group-group2]commit
[~BRAS-service-instance-group-group2]quit
2. Configure the NAT static source tracing algorithm mapping.
[~BRAS]nat static-mapping
[*BRAS-nat-static-mapping]inside-pool 1
[*BRAS-nat-static-mapping-inside-pool-1]section 1 192.168.0.10 192.168.0.110
[*BRAS-nat-static-mapping-inside-pool-1]quit
[*BRAS-nat-static-mapping]inside-pool 2
[*BRAS-nat-static-mapping-inside-pool-2]section 1 192.168.0.130 192.168.0.230
[*BRAS-nat-static-mapping-inside-pool-2]quit
[*BRAS-nat-static-mapping]global-pool 1
[*BRAS-nat-static-mapping-global-pool-1]section 1 10.1.1.5 10.1.1.125
[*BRAS-nat-static-mapping-global-pool-1]quit
[*BRAS-nat-static-mapping]global-pool 2
[*BRAS-nat-static-mapping-global-pool-2]section 1 10.1.1.130 10.1.1.250
[*BRAS-nat-static-mapping-global-pool-2]quit
[*BRAS-nat-static-mapping]static-mapping 1 inside-pool 1 global-pool 1 port-range 256 1023 port-size 256
[*BRAS-nat-static-mapping]static-mapping 2 inside-pool 2 global-pool 2 port-range 256 1023 port-size 256
[*BRAS-nat-static-mapping]commit
[~BRAS-nat-static-mapping]quit
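Since each private address consumes one port block, it is worth verifying before committing that every global pool offers at least as many port blocks as its inside pool has addresses. A quick check of the two pool pairs configured above (the helper names are ours, not device commands):

```python
import ipaddress

def pool_size(start, end):
    """Number of addresses in an inclusive IPv4 range."""
    return (int(ipaddress.ip_address(end))
            - int(ipaddress.ip_address(start)) + 1)

def port_blocks(start, end, port_start, port_end, port_size):
    """Total port blocks a global pool can hand out."""
    return pool_size(start, end) * ((port_end - port_start + 1) // port_size)

# static-mapping 1: 101 private hosts vs 121 public IPs x 3 blocks each.
hosts1 = pool_size("192.168.0.10", "192.168.0.110")
blocks1 = port_blocks("10.1.1.5", "10.1.1.125", 256, 1023, 256)
print(hosts1, blocks1)  # 101 363
assert hosts1 <= blocks1

# static-mapping 2: the second pool pair has the same sizes.
hosts2 = pool_size("192.168.0.130", "192.168.0.230")
blocks2 = port_blocks("10.1.1.130", "10.1.1.250", 256, 1023, 256)
assert hosts2 <= blocks2
```

Both mappings leave headroom (363 blocks for 101 hosts), so every private address in each inside pool can receive a dedicated port block.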
3. Bind the static NAT and the service-instance groups to the NAT instances.
[~BRAS]nat instance nat1 id 1
[*BRAS-nat-instance-nat1]service-instance-group group1
[*BRAS-nat-instance-nat1]nat bind static-mapping 1
[*BRAS-nat-instance-nat1]commit
[~BRAS-nat-instance-nat1]quit
[~BRAS]nat instance nat2 id 2
[*BRAS-nat-instance-nat2]service-instance-group group2
[*BRAS-nat-instance-nat2]nat bind static-mapping 2
[*BRAS-nat-instance-nat2]commit
[~BRAS-nat-instance-nat2]quit
b. Configure an ACL numbered 6002 and set an ACL rule to match the
traffic from the user group group2 so that the traffic can be diverted to
the NAT service board.
[~BRAS]acl 6002
[*BRAS-acl-ucl-6002]rule 2 permit ip source user-group group2
[*BRAS-acl-ucl-6002]commit
[~BRAS-acl-ucl-6002]quit
e. Configure a traffic behavior named b1, which binds traffic to the NAT
instance.
[~BRAS]traffic behavior b1
[*BRAS-behavior-b1]nat bind instance nat1
[*BRAS-behavior-b1]commit
[~BRAS-behavior-b1]quit
f. Configure a traffic behavior named b2, which binds traffic to the NAT
instance.
[~BRAS]traffic behavior b2
[*BRAS-behavior-b2]nat bind instance nat2
[*BRAS-behavior-b2]commit
[~BRAS-behavior-b2]quit
g. Configure a NAT traffic policy named p1, and associate the classifier c1
(for group1) with the traffic behavior b1.
[~BRAS]traffic policy p1
[*BRAS-trafficpolicy-p1]classifier c1 behavior b1 precedence 1
[*BRAS-trafficpolicy-p1]commit
[~BRAS-trafficpolicy-p1]quit
h. In the traffic policy p1, associate the classifier c2 (for group2) with the
traffic behavior b2.
[~BRAS]traffic policy p1
[*BRAS-trafficpolicy-p1]classifier c2 behavior b2 precedence 2
[*BRAS-trafficpolicy-p1]commit
[~BRAS-trafficpolicy-p1]quit
Configuration Files
● BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active nat session-table size 32 slot 9
active nat session-table size 32 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
user-group group1
#
user-group group2
#
ip pool baspool1 bas local
gateway 192.168.0.2 255.255.255.128
section 1 192.168.0.10 192.168.0.110
dns-server 192.168.7.252
#
ip pool baspool2 bas local
gateway 192.168.0.129 255.255.255.128
section 1 192.168.0.130 192.168.0.230
dns-server 192.168.7.252
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
nat static-mapping
inside-pool 1
section 1 192.168.0.10 192.168.0.110
inside-pool 2
section 1 192.168.0.130 192.168.0.230
global-pool 1
section 1 10.1.1.5 10.1.1.125
global-pool 2
section 1 10.1.1.130 10.1.1.250
static-mapping 1 inside-pool 1 global-pool 1 port-range 256 1023 port-size 256
static-mapping 2 inside-pool 2 global-pool 2 port-range 256 1023 port-size 256
#
service-location 1
location slot 9
#
service-location 2
location slot 10
#
service-instance-group group1
service-location 1
#
service-instance-group group2
service-location 2
#
acl number 6001
rule 1 permit ip source user-group group1
#
acl number 6002
rule 2 permit ip source user-group group2
#
nat instance nat1 id 1
service-instance-group group1
nat bind static-mapping 1
#
nat instance nat2 id 2
service-instance-group group2
nat bind static-mapping 2
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic classifier c2 operator or
if-match acl 6002 precedence 1
#
traffic behavior b1
nat bind instance nat1
#
traffic behavior b2
nat bind instance nat2
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
classifier c2 behavior b2 precedence 2
#
traffic-policy p1 inbound
#
aaa
authentication-scheme auth1
authentication-mode RADIUS
#
accounting-scheme acct1
accounting-mode RADIUS
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool baspool1
ip-pool baspool2
user-group group1 bind nat instance nat1 ip-pool baspool1
user-group group2 bind nat instance nat2 ip-pool baspool2
#
interface GigabitEthernet0/2/0
undo shutdown
bas
access-type layer2-subscriber default-domain authentication isp1
authentication-method bind
#
return
Example for Configuring the NAT Server Function (Dedicated Board Scenario)
This section provides an example for configuring the NAT Server function. This
function enables a user to configure a mapping entry that contains the private
and public IP addresses and port numbers on a NAT device. The NAT device then
helps hosts on a public network access servers on a private network.
Networking Requirements
In Figure 1-65, the router performs the NAT function to help PCs within the
enterprise network access the Internet. A service board is installed in slot 9 of the
router. The router is connected to the enterprise network through GE 0/2/0 and to
the Internet through GE 0/3/0.
The internal network address of the enterprise network is 192.168.0.0/16. The
enterprise provides the FTP service. The internal FTP server address is
192.168.10.10/24. Only PCs on the network segment of 192.168.10.0/24 can access
the Internet. External PCs can access the internal server. Five public IP addresses
1.1.1.101/24 through 1.1.1.105/24 are assigned to the enterprise. In addition, the
internal server has an independent public IP address 1.1.1.100. Through 1:1 NAT,
the internal FTP server can be accessed from the external IP address 3.3.3.2.
The configurations in this example are mainly performed on NATA and DeviceB.
Interfaces 1 and 2 in this example represent GE 0/2/0 and GE 0/3/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT functions.
2. Configure a NAT traffic diversion policy.
3. Configure a NAT traffic conversion policy.
4. Configure an internal server.
Data Preparation
To complete the configuration, you need the following data:
● Service-location group index: 1
● Slot ID (9) of the NATA's NAT service board
● Name of a service-instance group (group1)
● NAT instance name (nat1) and index (1)
● NATA's NAT address pool name (address-group1), address pool number (1), a
range of public IP addresses (1.1.1.101 through 1.1.1.105)
● ACL number (3001)
● Traffic classifier (classifier1)
● Traffic behavior (behavior1)
● Traffic policy (policy1)
● Name (GE 0/2/0) and IP address (192.168.10.1/24) of an interface to which a
NAT traffic diversion policy is applied
● Internal server's private IP address (192.168.10.10) and public IP address
(1.1.1.100)
Procedure
Step 1 Configure basic NAT functions.
1. Set the maximum number of sessions that can be created on the service
board in slot 9 to 6M.
<HUAWEI> system-view
[~HUAWEI] sysname NATA
[*HUAWEI] commit
[~NATA] vsm on-board-mode disable
[*NATA] commit
[~NATA] license
[~NATA-license] active nat session-table size 6 slot 9
[~NATA-license] active nat bandwidth-enhance 40 slot 9
[~NATA-license] quit
2. Create a NAT instance named nat1 and bind it to the service board.
[~NATA] service-location 1
[*NATA-service-location-1] location slot 9
[*NATA-service-location-1] commit
[~NATA-service-location-1] quit
[~NATA] service-instance-group group1
[*NATA-service-instance-group-group1] service-location 1
[*NATA-service-instance-group-group1] commit
[~NATA-service-instance-group-group1] quit
[~NATA] nat instance nat1 id 1
[*NATA-nat-instance-nat1] service-instance-group group1
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
NAT traffic diversion policies on an inbound interface and on an outbound interface
are mutually exclusive on a device.
d. Configure a NAT traffic policy named policy1 to associate the ACL rule
with the traffic behavior.
[~NATA] traffic policy policy1
[*NATA-policy-policy1] classifier classifier1 behavior behavior1
[*NATA-policy-policy1] commit
[~NATA-policy-policy1] quit
e. Apply the NAT traffic diversion policy in the GE 0/2/0 interface view.
[~NATA] interface gigabitEthernet 0/2/0
[~NATA-GigabitEthernet0/2/0] ip address 192.168.10.1 24
[*NATA-GigabitEthernet0/2/0] traffic-policy policy1 inbound
[*NATA-GigabitEthernet0/2/0] commit
[~NATA-GigabitEthernet0/2/0] quit
Step 4 Configure an internal server with the private IP address 192.168.10.10 and the
public IP address 1.1.1.100.
[~NATA] nat instance nat1
[~NATA-nat-instance-nat1] nat server-mode enable
[*NATA-nat-instance-nat1] nat server global 1.1.1.100 inside 192.168.10.10
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
Step 5 Configure a static default route and set the next hop address of the
default route to 2.2.2.2.
[~NATA] ip route-static 0.0.0.0 0.0.0.0 2.2.2.2
[*NATA] commit
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Total number: 2.
NAT Instance: nat1
Protocol:ANY, VPN:--->-
Server:192.168.10.10[1.1.1.100]->ANY
Tag:0x0, TTL:-, Left-Time:-
CPE IP:
192.168.10.10
NAT Instance: nat1
Protocol:ANY, VPN:--->-
Server reverse:ANY->1.1.1.100[192.168.10.10]
Tag:0x0, TTL:-, Left-Time:-
CPE IP:192.168.10.10
----End
Configuration Files
● NATA configuration file (traffic diversion policy on the inbound interface)
#
sysname NATA
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 1.1.1.101 1.1.1.105
nat outbound 3001 address-group address-group1
nat server-mode enable
nat server global 1.1.1.100 inside 192.168.10.10
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
classifier classifier1 behavior behavior1 precedence 1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet 0/3/0
undo shutdown
ip address 2.2.2.1 255.255.255.0
#
ospf 1
area 0.0.0.0
network 2.2.2.0 0.0.0.255
#
ip route-static 0.0.0.0 0.0.0.0 2.2.2.2
#
return
Example for Configuring the NAT Server Function (On-Board Scenario)
Networking Requirements
In Figure 1-66, the router performs the NAT function to help PCs within the
enterprise network access the Internet. The router uses GE 0/1/1 to connect to an
internal network and GE 0/1/2 to connect to the Internet.
The internal network address of the enterprise network is 192.168.0.0/16. The
internal server address is 192.168.10.10/24. Only PCs on the network segment of
192.168.10.0/24 can access the Internet. External PCs can access the internal
server. The enterprise has five valid IP addresses ranging from 1.1.1.101/24 to
1.1.1.105/24. The internal server of the enterprise has an independent public
address 1.1.1.100. The internal server can be accessed from the external network
address 3.3.3.2 through 1:1 NAT.
The configurations in this example are mainly performed on NATA and DeviceB.
Interfaces 1 through 3 in this example represent GE 0/1/1, GE 0/1/2, and GE 0/1/3,
respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic functions of NAT.
2. Configure a NAT traffic diversion policy.
3. Configure an internal NAT server.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure basic functions of NAT.
1. Create a NAT instance named nat1 and bind it to the service board.
<HUAWEI> system-view
[~HUAWEI] sysname NATA
[*HUAWEI] commit
[~NATA] service-location 1
[*NATA-service-location-1] location follow-forwarding-mode
[*NATA-service-location-1] commit
[~NATA-service-location-1] quit
[~NATA] service-instance-group group1
[*NATA-service-instance-group-group1] service-location 1
[*NATA-service-instance-group-group1] commit
[~NATA-service-instance-group-group1] quit
[~NATA] nat instance nat1 id 1
[*NATA-nat-instance-nat1] service-instance-group group1
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
NAT traffic diversion policies on an inbound interface and on an outbound interface
are mutually exclusive on a device.
d. Configure a NAT traffic policy named policy1 to associate the ACL rule
with the traffic behavior.
[~NATA] traffic policy policy1
[*NATA-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*NATA-trafficpolicy-policy1] commit
[~NATA-trafficpolicy-policy1] quit
Step 3 Define the internal server address as 192.168.10.10 and the external address as
1.1.1.100. Use the address-level mode to ensure a 1:1 mapping between the
public and private IP addresses.
[~NATA] nat instance nat1
[~NATA-nat-instance-nat1] nat server-mode enable
[*NATA-nat-instance-nat1] nat server global 1.1.1.100 inside 192.168.10.10
[*NATA-nat-instance-nat1] commit
[~NATA-nat-instance-nat1] quit
Step 4 Configure a static default route and set the next hop address of the
default route to 2.2.2.2.
[~NATA] ip route-static 0.0.0.0 0.0.0.0 2.2.2.2
[*NATA] commit
----End
Configuration Files
● NATA configuration file (traffic diversion policy on the inbound interface)
#
sysname NATA
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 1.1.1.101 1.1.1.105
nat server-mode enable
nat server global 1.1.1.100 inside 192.168.10.10
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
classifier classifier1 behavior behavior1 precedence 1
#
interface GigabitEthernet 0/1/1
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet 0/1/2
undo shutdown
ip address 2.2.2.1 255.255.255.0
#
ospf 1
area 0.0.0.0
network 2.2.2.0 0.0.0.255
#
ip route-static 0.0.0.0 0.0.0.0 2.2.2.2
#
return
Networking Requirements
On the centralized NAT network shown in Figure 1-67, a NAT service board is
equipped in slot 9 on CGN1 and slot 9 on CGN2. CGN1 and CGN2 are standalone
CGN devices attached to CRs on the MAN.
If a CGN service board fails, BGP does not withdraw the default route advertised
to the downstream device. As a result, traffic is still forwarded to the faulty CGN
device over the static route and is interrupted. To prevent the traffic interruption,
inter-chassis cold backup can be associated with CGN service boards. A ServiceIf
interface is associated with an HA service status monitoring group to monitor the
CGN board status in real time. Once the ServiceIf interface detects an abnormal
CGN service board status, a master/backup CGN device switchover is triggered so
that traffic switches to the backup CGN device.
Figure 1-67 Inter-chassis cold backup associated with CGN service boards
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
No. Data
2 Slot ID and CPU ID of the master CPU on CGN1's service board (CPU 0
in slot 9 is used in this scenario)
3 Slot ID and CPU ID of the backup CPU on CGN2's service board (CPU
0 in slot 9 is used in this scenario)
Procedure
Step 1 On both the master and backup devices, set the maximum number of sessions
that can be created on the NAT service board in slot 9 to 6M.
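Based on the license configurations shown in the CGN1 and CGN2 configuration files in this section, the commands are similar to the following (run them on both CGN1 and CGN2; CGN1 is shown):
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[~CGN1-license] active nat session-table size 6 slot 9
[~CGN1-license] active nat bandwidth-enhance 40 slot 9
[~CGN1-license] quit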
Step 3 Create a service-instance group and bind it to the service-location group on each
CGN device.
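Based on the configuration files in this section, the commands on CGN1 are similar to the following (the CGN2 commands are the same):
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] service-location 1
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit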
Step 4 Create a service status monitoring group and bind it to the service-location group
on each CGN device.
# On CGN1, create a service status monitoring group named group1 and bind it
to service-location group 1.
[~CGN1] monitor-location-group group1
[*CGN1-monitor-location-group-group1] service-location 1
[*CGN1-monitor-location-group-group1] commit
[~CGN1-monitor-location-group-group1] quit
# On CGN2, create a service status monitoring group named group1 and bind it
to service-location group 1.
[~CGN2] monitor-location-group group1
[*CGN2-monitor-location-group-group1] service-location 1
[*CGN2-monitor-location-group-group1] commit
[~CGN2-monitor-location-group-group1] quit
Step 5 Create a ServiceIf interface and bind it to the HA service status monitoring group
on each CGN device.
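Based on the interface configurations shown in the configuration files in this section, the commands on CGN1 are similar to the following (the serviceif interface view prompt shown here is indicative; on CGN2, use the IP address 10.1.1.2):
[~CGN1] interface serviceif 1
[*CGN1-Serviceif1] ip address 10.1.1.1 255.255.255.0
[*CGN1-Serviceif1] track monitor-location-group group1
[*CGN1-Serviceif1] commit
[~CGN1-Serviceif1] quit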
Step 6 Configure a static route with the outbound interface set to the ServiceIf interface.
Use a routing protocol to advertise the static route on each CGN device to the
downstream BRAS to form the primary and backup routes.
NOTE
Configure a default route with the outbound interface set to serviceIf 1 on the master CGN
device, use BGP to advertise the route to downstream devices, and set the route cost to 1.
Configure the same route on the backup CGN device and set the route cost to 2, which is
greater than that on the master device. Based on the cost values, traffic is preferentially
sent over the route advertised by the master device.
# On CGN1, configure a static route with the outbound interface set to serviceIf
1.
[~CGN1] ip route-static 0.0.0.0 0.0.0.0 serviceif 1 preference 10
[*CGN1] commit
[~CGN1] quit
# On CGN2, configure a static route with the outbound interface set to serviceIf
1.
[~CGN2] ip route-static 0.0.0.0 0.0.0.0 serviceif 1 preference 20
[*CGN2] commit
[~CGN2] quit
Step 7 Create a NAT instance on each CGN device and bind it to the service-instance
group.
# On CGN1, create a NAT instance named nat and bind it to a service-instance
group named group1.
[~CGN1] nat instance nat id 1
[*CGN1-nat-instance-nat] service-instance-group group1
[*CGN1-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
[*CGN1-nat-instance-nat] commit
[~CGN1-nat-instance-nat] quit
# On CGN2, create a NAT instance named nat and bind it to a service-instance
group named group1.
[~CGN2] nat instance nat id 1
[*CGN2-nat-instance-nat] service-instance-group group1
[*CGN2-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.106 11.11.11.110
[*CGN2-nat-instance-nat] commit
[~CGN2-nat-instance-nat] quit
Step 8 Configure a NAT traffic diversion policy and a NAT traffic conversion policy on the
master and backup devices. For details, see Example for Configuring Centralized
NAT in IPv6 Transition Technology > NAT Configuration.
# Configure a NAT traffic diversion policy and a NAT traffic conversion policy on
CGN1.
d. Configure a NAT traffic policy named policy1 to associate the ACL rule
with the traffic behavior.
[~CGN1] traffic policy policy1
[*CGN1-policy-policy1] classifier classifier1 behavior behavior1
[*CGN1-policy-policy1] commit
[~CGN1-policy-policy1] quit
NOTE
The configuration of CGN2 is similar to that of CGN1. For configuration details, see CGN2
configuration file in this section.
# Run the display nat instance nat command on each CGN device to verify NAT
configurations.
[~CGN1] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
[~CGN2] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.106 11.11.11.110
nat outbound 3001 address-group address-group1
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat
#
traffic policy policy1
classifier classifier1 behavior behavior1 precedence 1
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
monitor-location-group group1
service-location 1
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface serviceif 1
ip address 10.1.1.1 255.255.255.0
track monitor-location-group group1
#
ip route-static 0.0.0.0 0.0.0.0 serviceif 1 preference 10
#
return
● CGN2 configuration file
#
sysname CGN2
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat
#
traffic policy policy1
classifier classifier1 behavior behavior1 precedence 1
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
monitor-location-group group1
service-location 1
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.106 11.11.11.110
nat outbound 3001 address-group address-group1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface serviceif 1
ip address 10.1.1.2 255.255.255.0
track monitor-location-group group1
#
ip route-static 0.0.0.0 0.0.0.0 serviceif 1 preference 20
#
return
Example for Configuring Syslog Source Tracing for NAT Flexible Flows
This section provides an example for configuring syslog source tracing for flexible
NAT flows. The log function can be used to record information about intranet
users' access to external networks in real time, improving network maintainability.
A networking diagram is provided to help you understand the configuration
procedure.
Networking Requirements
In Figure 1-68, the NAT device (NAT1) performs the NAT function to help PCs
within an enterprise network access the Internet. The NAT device is connected to
the enterprise network through GE 0/2/0 and to the Internet through GE 0/2/1.
The enterprise is assigned public IP addresses of 11.11.11.11/32 through
11.11.11.15/32. The router DeviceA is connected to the log server through GE
0/2/0 and to the external network through GE 0/2/1. An IPsec tunnel is
established between NAT1 and DeviceA to transfer syslogs for NAT flexible flows
to the log server.
The configuration requirements are as follows:
● Only PCs on the network segment of 192.168.10.0/24 can perform NAT and
access the external network.
● The syslog server can record the actions of users when they access Internet
applications.
● NAT1 and DeviceA support IPsec services.
Figure 1-68 Networking of syslog source tracing for NAT flexible flows
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT functions.
2. Configure a NAT traffic diversion policy.
3. Configure the syslog function for NAT flexible flows.
4. Configure an IPsec tunnel between NAT1 and DeviceA. Configure NAT1 and
DeviceA to encrypt packets using the HMAC-SHA256 and AES-256 algorithms
for authentication.
5. Configure a routing policy to ensure that the syslog server is reachable.
Data Preparation
To complete the configuration, you need the following data:
● Service-location group indexes: 1 and 2
● Service-instance group names: group1 and group2
● NAT instance name (nat1) and index (1)
● NAT1's NAT address pool name (address-group1), address pool number (1), a
range of public IP addresses (11.11.11.11 through 11.11.11.15)
● ACL numbers: 3000 and 3001
● Name (GE 0/2/1) and IP address (192.0.2.1/24) of an interface to which a
NAT traffic diversion policy is applied
● NAT syslog host address (198.51.100.1) and port number (514), and NAT
device's source IP address (192.0.2.1) and source port number (514)
Procedure
Step 1 Configure basic NAT functions.
1. Create a NAT instance named nat1 and bind it to the service board.
<HUAWEI> system-view
[~HUAWEI] sysname NAT1
[*HUAWEI] commit
[~NAT1] service-location 1
[*NAT1-service-location-1] location follow-forwarding-mode
[*NAT1-service-location-1] commit
[~NAT1-service-location-1] quit
2. Create a syslog template for NAT flexible flows, configure the template, and
specify a flexible log template type.
[~NAT1] nat syslog flexible template session
[*NAT1-nat-syslog-template-session] nat position 0 fixed-string "<134> 1 "
[*NAT1-nat-syslog-template-session] nat position 1 timestamp-year " "
[*NAT1-nat-syslog-template-session] nat position 2 timestamp-month-en " "
[*NAT1-nat-syslog-template-session] nat position 3 timestamp-date " "
[*NAT1-nat-syslog-template-session] nat position 4 timestamp-hour ":"
[*NAT1-nat-syslog-template-session] nat position 5 timestamp-minute ":"
[*NAT1-nat-syslog-template-session] nat position 6 timestamp-second " "
[*NAT1-nat-syslog-template-session] nat position 7 host-ip " "
[*NAT1-nat-syslog-template-session] nat position 8 app-name " - "
[*NAT1-nat-syslog-template-session] nat position 9 scene ":"
[*NAT1-nat-syslog-template-session] nat position 10 fixed-string "SessionbasedA [" create
[*NAT1-nat-syslog-template-session] nat position 10 fixed-string "SessionbasedW [" free
[*NAT1-nat-syslog-template-session] nat position 11 protocol " "
[*NAT1-nat-syslog-template-session] nat position 12 source-ip " - "
[*NAT1-nat-syslog-template-session] nat position 13 destination-ip " "
[*NAT1-nat-syslog-template-session] nat position 14 source-port " "
[*NAT1-nat-syslog-template-session] nat position 15 destination-port " -]"
[*NAT1-nat-syslog-template-session] commit
[~NAT1-nat-syslog-template-session] quit
[~NAT1] nat syslog descriptive format flexible template session
[*DeviceA-ipsec-policy-isakmp-map1-10] commit
[~DeviceA-ipsec-policy-isakmp-map1-10] quit
Step 6 Configure static routes to ensure that the log server is reachable. Set the next
hop address of the route from the NAT device to the external network to
192.0.2.2 and the next hop address of the route from DeviceA to the external
network to 10.0.1.1. (The routes need to be configured based on the actual
networking.)
# Configure NAT1.
[~NAT1] ip route-static 198.51.100.1 0.0.0.0 tunnel 10 192.168.1.2
[*NAT1] ip route-static 192.168.1.2 255.255.255.255 192.0.2.2
[*NAT1] commit
# Configure DeviceA.
[~DeviceA] ip route-static 192.0.2.1 0.0.0.0 tunnel 10 192.168.1.1
[*DeviceA] ip route-static 192.168.1.1 255.255.255.255 10.0.1.1
[*DeviceA] commit
# View the log format of the syslog template for NAT flexible flows.
[~NAT1] display nat syslog flexible session template
Create Log:
fixed_string<134> 1 timestamp_year timestamp_month_en timestamp_date
timestamp_hour:timestamp_minute:timestamp_second host_ip app_name - scene:fixed_stringSessionbasedA
[protocol source_ip - destination_ip source_port destination_port -]
Example:
<134> 1 2019 January 18 14:09:22 X.X.X.X cnelog - NAT444:SessionbasedA [17 X.X.X.X - X.X.X.X 1052 2000
-]
Free Log:
fixed_string<134> 1 timestamp_year timestamp_month_en timestamp_date
timestamp_hour:timestamp_minute:timestamp_second host_ip app_name -
scene:fixed_stringSessionbasedW [protocol source_ip - destination_ip source_port destination_port -]
Example:
<134> 1 2019 January 18 14:09:22 X.X.X.X cnelog - NAT444:SessionbasedW [17 X.X.X.X - X.X.X.X 1052
2000 -]
----End
Configuration Files
● NAT1 configuration file
#
sysname NAT1
#
active ipsec slot 9
#
ike dpd 100
service-location 1
location follow-forwarding-mode
#
service-location 2
location slot 9
#
service-instance-group group1
service-location 1
#
service-instance-group group2
service-location 2
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1
section 1 11.11.11.11 11.11.11.15
nat log host 198.51.100.1 514 source 192.0.2.1 514 name NAT1
nat log session enable syslog
#
acl number 3000
rule 5 permit ip source 192.0.2.1 0 destination 198.51.100.1 0
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
ike proposal 10
encryption-algorithm des-cbc
dh group14
authentication-algorithm sha2-256
integrity-algorithm hmac-sha2-256
#
ike peer b
pre-shared-key cipher %^%#aRY4K;`"G=G{$z:d)#X;Y0Q,%@K|FF1/D=6k<G>;%^%#
ike-proposal 10
remote-address 192.168.1.2
#
ipsec proposal tran1
esp authentication-algorithm sha2-256
esp encryption-algorithm aes 256
#
ipsec policy map1 10 isakmp
security acl 3000
ike-peer b
proposal tran1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 192.0.2.1 255.255.255.0
nat bind acl 3001 instance nat1
#
interface Tunnel10
ip address 192.168.1.1 255.255.255.255
tunnel-protocol ipsec
ipsec policy map1 service-instance-group 2
#
nat syslog flexible template session
nat position 0 fixed-string "<134> 1 "
nat position 1 timestamp-year " "
nat position 2 timestamp-month-en " "
nat position 3 timestamp-date " "
nat position 4 timestamp-hour ":"
nat position 5 timestamp-minute ":"
nat position 6 timestamp-second " "
nat position 7 host-ip " "
nat position 8 app-name " - "
nat position 9 scene ":"
nat position 10 fixed-string "SessionbasedA [" create
nat position 10 fixed-string "SessionbasedW [" free
nat position 11 protocol " "
nat position 12 source-ip " - "
nat position 13 destination-ip " "
nat position 14 source-port " "
nat position 15 destination-port " -]"
#
nat syslog descriptive format flexible template session
#
ip route-static 198.51.100.1 0.0.0.0 Tunnel10 192.168.1.2
ip route-static 192.168.1.2 255.255.255.255 192.0.2.2
#
return
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 198.51.100.2 255.255.255.0
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 10.0.1.1 255.255.255.0
#
interface Tunnel10
ip address 192.168.1.2 255.255.255.255
tunnel-protocol ipsec
ipsec policy map1 service-instance-group 2
#
ip route-static 192.0.2.1 0.0.0.0 Tunnel10 192.168.1.1
ip route-static 192.168.1.1 255.255.255.255 10.0.1.1
#
return
Example for Configuring NAT to Translate Both the Source and Destination IP
Addresses
This section provides an example for configuring NAT to translate both the source
and destination IP addresses when Internet users access an internal server.
Networking Requirements
Figure 1-69 Networking for configuring NAT to translate both the source and
destination IP addresses
Configuration Roadmap
1. Configure basic NAT functions.
2. Configure the mapping between the public and private IP addresses of the
internal NAT server.
3. Enable the NAT ALG function for FTP.
4. Configure a NAT traffic diversion policy.
5. Apply the NAT traffic diversion policy.
6. Configure a NAT traffic conversion policy. (This step is required only in
dedicated board mode.)
7. Configure a static route.
Data Preparation
● NAT instance names (nat1 and nat2) and indexes (1 and 2)
● Name (address-group1), number (1), and IP address range (11.11.11.10–
11.11.11.15) of an address pool in the NAT instance named nat1
● Name (address-group2), number (2), and IP address range (11.11.11.16–
11.11.11.20) of an address pool in the NAT instance named nat2
● ACL numbers: 3001 and 3002
● Private network-side interface (GE0/2/0 with IP address 192.168.1.1/24) and
public network-side interface (GE0/2/1 with IP address 11.11.11.1) to which a
NAT traffic diversion policy is applied
● External IP address 11.11.11.10 advertised by the internal server and internal
IP address 192.168.1.2
Procedure
Step 1 Configure basic NAT functions.
1. Create a service-location group and a service-instance group and bind the
NAT service board to the service-location group.
# The configuration is as follows in dedicated board mode.
<HUAWEI> system-view
[~HUAWEI] sysname NAT-Device
[*HUAWEI] commit
[~NAT-Device] vsm on-board-mode disable
[*NAT-Device] commit
[~NAT-Device] license
[~NAT-Device-license] active nat session-table size 6 slot 9
[*NAT-Device-license] active nat bandwidth-enhance 40 slot 9
[*NAT-Device-license] commit
[~NAT-Device-license] quit
[~NAT-Device] service-location 1
[*NAT-Device-service-location-1] location slot 9
[*NAT-Device-service-location-1] commit
[~NAT-Device-service-location-1] quit
[~NAT-Device] service-instance-group group1
[*NAT-Device-service-instance-group-group1] service-location 1
[*NAT-Device-service-instance-group-group1] commit
[~NAT-Device-service-instance-group-group1] quit
# The configuration is as follows in on-board mode.
[~NAT-Device] service-location 1
[*NAT-Device-service-location-1] location follow-forwarding-mode
[*NAT-Device-service-location-1] commit
[~NAT-Device-service-location-1] quit
[~NAT-Device] service-instance-group group1
[*NAT-Device-service-instance-group-group1] service-location 1
[*NAT-Device-service-instance-group-group1] commit
[~NAT-Device-service-instance-group-group1] quit
2. Create NAT instances named nat1 and nat2 and bind the service-instance
group to the NAT instances so that service traffic can be processed by the
NAT service board.
[~NAT-Device] nat instance nat1 id 1
[*NAT-Device-nat-instance-nat1] service-instance-group group1
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2 id 2
[*NAT-Device-nat-instance-nat2] service-instance-group group1
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
3. Configure a NAT address pool.
[~NAT-Device] nat instance nat1 id 1
[~NAT-Device-nat-instance-nat1] nat address-group address-group1 group-id 1 11.11.11.10 11.11.11.15
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2 id 2
[~NAT-Device-nat-instance-nat2] nat address-group address-group2 group-id 2 11.11.11.16 11.11.11.20
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
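Each address group defines a contiguous range of public addresses that the instance can draw on. A minimal sketch (plain Python, standard library only; the helper name is hypothetical) shows how such a range enumerates:

```python
import ipaddress

def pool_addresses(start, end):
    """Enumerate the public IPv4 addresses in a NAT address group range (inclusive)."""
    lo = int(ipaddress.IPv4Address(start))
    hi = int(ipaddress.IPv4Address(end))
    return [str(ipaddress.IPv4Address(i)) for i in range(lo, hi + 1)]

# address-group1 spans 11.11.11.10 through 11.11.11.15: six public addresses
group1 = pool_addresses("11.11.11.10", "11.11.11.15")
# address-group2 spans 11.11.11.16 through 11.11.11.20: five public addresses
group2 = pool_addresses("11.11.11.16", "11.11.11.20")
```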
Step 2 Configure the mapping between the public and private IP addresses of the NAT
internal server.
[~NAT-Device] nat instance nat1
[~NAT-Device-nat-instance-nat1] nat server-mode enable
[*NAT-Device-nat-instance-nat1] nat server global 11.11.11.10 inside 192.168.1.2
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
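The `nat server` command above installs a static, bidirectional mapping. A simplified model (plain Python; not the device's actual data structure) captures the two translation directions:

```python
# Static "nat server" mapping from this example: inbound packets addressed to
# the global address are redirected to the inside host, and replies from the
# inside host are translated back to the global address.
NAT_SERVER = {"11.11.11.10": "192.168.1.2"}       # global -> inside
REVERSE = {v: k for k, v in NAT_SERVER.items()}   # inside -> global

def translate_inbound(dst_ip):
    """Rewrite the destination of a public-side packet, if mapped."""
    return NAT_SERVER.get(dst_ip, dst_ip)

def translate_outbound(src_ip):
    """Rewrite the source of a private-side reply, if mapped."""
    return REVERSE.get(src_ip, src_ip)
```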
Step 3 Enable NAT ALG to translate the application-layer IP addresses and port numbers
of traffic in the NAT instance named nat1.
[~NAT-Device] nat instance nat1
[~NAT-Device-nat-instance-nat1] nat alg ftp
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
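FTP needs an ALG because the data-channel endpoint travels inside the application payload, which plain NAT never touches. A sketch of the rewrite an ALG must perform (plain Python; the function name and port value are illustrative, not the device's implementation):

```python
def rewrite_port_command(payload, new_ip, new_port):
    """Rewrite the endpoint carried in an FTP PORT command.

    FTP encodes the endpoint as h1,h2,h3,h4,p1,p2 where the port is
    p1*256 + p2. Without 'nat alg ftp', the private address would leak
    to the server and the data connection would fail."""
    if not payload.startswith("PORT "):
        return payload
    octets = new_ip.split(".")
    p1, p2 = divmod(new_port, 256)
    return "PORT " + ",".join(octets + [str(p1), str(p2)])

# A client behind NAT announces its private endpoint 192.168.1.2:1025 ...
private_cmd = "PORT 192,168,1,2,4,1"
# ... and the ALG rewrites it to the translated public endpoint.
public_cmd = rewrite_port_command(private_cmd, "11.11.11.10", 50001)
```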
Step 4 Configure an ACL numbered 3001 to match traffic sourced from the network segment 192.168.1.0/24.
[~NAT-Device] acl number 3001
[*NAT-Device-acl4-advance-3001] rule 1 permit ip source 192.168.1.0 0.0.0.255
[*NAT-Device-acl4-advance-3001] commit
[~NAT-Device-acl4-advance-3001] quit
Step 5 Configure an ACL numbered 3002 to match all other traffic.
[~NAT-Device] acl number 3002
[*NAT-Device-acl4-advance-3002] rule 2 permit ip
[*NAT-Device-acl4-advance-3002] commit
[~NAT-Device-acl4-advance-3002] quit
Step 6 Configure traffic classifiers named nat1 and nat2 and define ACL-based matching rules.
[~NAT-Device] traffic classifier nat1
[*NAT-Device-classifier-nat1] if-match acl 3001
[*NAT-Device-classifier-nat1] commit
[~NAT-Device-classifier-nat1] quit
[~NAT-Device] traffic classifier nat2
[*NAT-Device-classifier-nat2] if-match acl 3002
[*NAT-Device-classifier-nat2] commit
[~NAT-Device-classifier-nat2] quit
Step 7 Configure a traffic behavior named behavior1, which binds traffic to the NAT
instance named nat1.
[~NATA] traffic behavior nat1
[*NATA-behavior-nat1] nat bind instance nat1
[*NATA-behavior-nat1] commit
[~NATA-behavior-nat1] quit
Step 8 Configure a traffic behavior named behavior2, which binds traffic to the NAT
instance named nat2.
[~NATA] traffic behavior nat2
[*NATA-behavior-nat2] nat bind instance nat2
[*NATA-behavior-nat2] commit
[~NATA-behavior-nat2] quit
Step 9 Configure a NAT traffic policy named policy1 to associate the ACL rule with the
traffic behavior.
[~NATA] traffic policy nat1
[*NATA-trafficpolicy-nat1] classifier nat1 behavior nat1
[*NATA-trafficpolicy-nat1] commit
[~NATA-trafficpolicy-nat1] quit
Step 10 Configure a NAT traffic policy named policy2 to associate the ACL rule with the
traffic behavior.
[~NATA] traffic policy nat2
[*NATA-trafficpolicy-nat2] classifier nat2 behavior nat2
[*NATA-trafficpolicy-nat2] commit
[~NATA-trafficpolicy-nat2] quit
Step 11 Apply the NAT traffic diversion policies in the interface view.
[~NAT-Device] interface GigabitEthernet 0/2/0
[~NAT-Device-GigabitEthernet0/2/0] traffic-policy nat1 inbound
[*NAT-Device-GigabitEthernet0/2/0] commit
[~NAT-Device-GigabitEthernet0/2/0] quit
[~NAT-Device] interface GigabitEthernet 0/2/1
[~NAT-Device-GigabitEthernet0/2/1] traffic-policy nat2 inbound
[*NAT-Device-GigabitEthernet0/2/1] commit
[~NAT-Device-GigabitEthernet0/2/1] quit
Step 12 Configure a NAT traffic conversion policy. (This step is required only in dedicated
board mode.)
[~NAT-Device] nat instance nat1 id 1
[~NAT-Device-nat-instance-nat1] nat outbound 3001 address-group address-group1
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2 id 2
[~NAT-Device-nat-instance-nat2] nat outbound 3002 address-group address-group2
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
Step 13 Configure a default route as a static route and set the next hop address of the
default route to 11.11.11.2.
[~NAT-Device] ip route-static 0.0.0.0 0.0.0.0 11.11.11.2
[*NAT-Device] commit
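The static default route 0.0.0.0/0 via 11.11.11.2 catches any destination that no more specific route covers. A tiny longest-prefix-match sketch (plain Python using the standard ipaddress module; the route table is simplified from this example):

```python
import ipaddress

# Simplified routing table: two directly connected networks plus the
# static default route configured above.
ROUTES = [
    (ipaddress.ip_network("192.168.1.0/24"), "direct"),
    (ipaddress.ip_network("11.11.11.0/24"), "direct"),
    (ipaddress.ip_network("0.0.0.0/0"), "11.11.11.2"),  # the static default
]

def next_hop(dst):
    """Return the next hop of the longest-prefix route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, nh) for net, nh in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```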
----End
Configuration Files
#
sysname NAT-Device
#
vsm on-board-mode disable //Dedicated board configuration
#
license //Dedicated board configuration
active nat session-table size 6 slot 9 //Dedicated board configuration
active nat bandwidth-enhance 40 slot 9 //Dedicated board configuration
#
service-location 1
location slot 9 //Dedicated board configuration
location follow-forwarding-mode //On-board configuration
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat server-mode enable
nat address-group address-group1 group-id 1 11.11.11.10 11.11.11.15
nat server global 11.11.11.10 inside 192.168.1.2
nat outbound 3001 address-group address-group1 //Dedicated board configuration
nat alg ftp
#
nat instance nat2 id 2
service-instance-group group1
nat address-group address-group2 group-id 2 11.11.11.16 11.11.11.20
nat outbound 3002 address-group address-group2 //Dedicated board configuration
nat alg ftp
#
acl number 3001
rule 1 permit ip source 192.168.1.0 0.0.0.255
#
acl number 3002
rule 2 permit ip
#
traffic classifier nat1 operator or
if-match acl 3001 precedence 1
#
traffic classifier nat2 operator or
if-match acl 3002 precedence 1
#
traffic behavior nat1
nat bind instance nat1
#
traffic behavior nat2
nat bind instance nat2
#
traffic policy nat1
share-mode
classifier nat1 behavior nat1 precedence 1
#
traffic policy nat2
share-mode
classifier nat2 behavior nat2 precedence 1
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
traffic-policy nat1 inbound
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 11.11.11.1 255.255.255.0
traffic-policy nat2 inbound
#
ip route-static 0.0.0.0 0.0.0.0 11.11.11.2
#
return
Networking Requirements
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT functions.
2. Configure an internal server in Easy IP mode.
3. Configure a NAT traffic diversion policy.
4. Configure a NAT traffic conversion policy.
Data Preparation
To complete the configuration, you need the following data:
● NAT instance name (nat1) and index (1)
● NAT Device's NAT address pool name (address-group1), address pool number
(1), a range of public IP addresses (11.1.1.2 and 11.1.1.3)
● ACL number (3001) to match traffic that a private network host sends to the
Internet
● ACL number (3001) to match traffic that a private network host sends to the
private network server
● Name and IP address of each interface to which a NAT traffic diversion policy
is applied
Procedure
Step 1 Configure basic NAT functions.
1. Configure a NAT instance named nat1.
[~NAT Device] service-location 1
[*NAT Device-service-location-1] location follow-forwarding-mode
[*NAT Device-service-location-1] commit
[~NAT Device-service-location-1] quit
[~NAT Device] service-instance-group group1
[*NAT Device-service-instance-group-group1] service-location 1
2. Configure an internal server whose public IP address reuses the IP address of
GE 0/2/2. In this example, the internal server uses TCP port 80.
[~NAT Device] nat instance nat1
[~NAT Device-nat-instance-nat1] nat server protocol tcp global unnumbered interface GigabitEthernet0/2/2 80 inside 10.1.1.254 80
[*NAT Device-nat-instance-nat1] commit
[~NAT Device-nat-instance-nat1] quit
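In Easy IP mode, one interface address serves both the static server mapping and dynamic outbound NAPT, demultiplexed by destination port. A simplified model (plain Python; the interface address is an assumed value, since the example does not state GE 0/2/2's IP):

```python
# Hypothetical interface address of GE 0/2/2 (assumption for illustration).
INTERFACE_IP = "2.2.2.2"
# Static mapping from 'nat server ... 80 inside 10.1.1.254 80'.
SERVER_MAP = {("tcp", 80): ("10.1.1.254", 80)}

def classify_inbound(proto, dst_port):
    """Inbound traffic to the interface address: TCP port 80 reaches the
    internal server; anything else falls through to the dynamic NAPT
    session table (represented here by None)."""
    if (proto, dst_port) in SERVER_MAP:
        return SERVER_MAP[(proto, dst_port)]
    return None
```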
----End
Configuration Files
#
sysname NAT Device
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
Example for Configuring a NAT Easy IP Address Pool and a GRE Tunnel to Share an
Interface Address
Configure a NAT Easy IP address pool and a GRE tunnel to share an interface
address, which helps conserve public addresses.
Networking Requirements
NOTE
Figure 1-71 Configuring a NAT Easy IP address pool and a GRE tunnel to share an
interface address
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a GRE tunnel between DeviceA and DeviceB.
2. Configure basic NAT functions on DeviceA.
3. Configure a NAT traffic diversion policy on DeviceA.
4. Configure a NAT conversion policy on DeviceA.
5. Configure static routes on DeviceA and DeviceB to enable them to
communicate.
Data Preparation
To complete the configuration, you need the following data:
● Source and destination IP address on each end of a GRE tunnel, and IP
addresses of a tunnel interface
● Slot ID of a NAT service board, index of a service-location group, and name of
a service-instance group on DeviceA
● Name and index of a NAT instance on DeviceA
● Name and number of a public address pool and Easy IP mode for public
address segments in the NAT address pool on DeviceA
● ACL number, traffic classifier name, traffic behavior name, and traffic policy
name on DeviceA
● Number and IP address of an interface to which a NAT traffic diversion policy
is applied on DeviceA
Procedure
1. Create a GRE tunnel.
a. Configure a GRE tunnel on DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface GigabitEthernet 0/2/0
[~DeviceA-GigabitEthernet0/2/0] ip address 172.20.1.1 255.255.255.0
[*DeviceA-GigabitEthernet0/2/0] binding tunnel gre
[*DeviceA-GigabitEthernet0/2/0] commit
[~DeviceA-GigabitEthernet0/2/0] quit
[~DeviceA] interface tunnel 1
[*DeviceA-Tunnel1] tunnel-protocol gre
[*DeviceA-Tunnel1] ip address 172.22.1.1 255.255.255.0
[*DeviceA-Tunnel1] source GigabitEthernet 0/2/0
[*DeviceA-Tunnel1] destination 172.20.1.2
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
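The tunnel configured above encapsulates each IPv4 packet behind a small GRE header. A sketch of the base header layout per RFC 2784 (plain Python with struct; this is the wire format, not the device's code):

```python
import struct

def gre_header(protocol=0x0800):
    """Build the 4-byte base GRE header: 2 bytes of flags/version (all zero
    when no optional fields are present, version 0) followed by the payload
    protocol type in the EtherType space (0x0800 = IPv4)."""
    flags_and_version = 0x0000
    return struct.pack("!HH", flags_and_version, protocol)

hdr = gre_header()  # prepended to the inner IPv4 packet, inside the outer IP header
```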
2. Configure basic NAT functions on DeviceA.
[~DeviceA] service-location 1
[*DeviceA-service-location-1] location follow-forwarding-mode
[*DeviceA-service-location-1] commit
[~DeviceA-service-location-1] quit
[~DeviceA] service-instance-group group1
[*DeviceA-service-instance-group-group1] service-location 1
[*DeviceA-service-instance-group-group1] commit
[~DeviceA-service-instance-group-group1] quit
[~DeviceA] nat instance nat1 id 1
[*DeviceA-nat-instance-nat1] service-instance-group group1
[*DeviceA-nat-instance-nat1] nat address-group address-group1 group-id 1 unnumbered interface GigabitEthernet 0/2/0
[*DeviceA-nat-instance-nat1] commit
[~DeviceA-nat-instance-nat1] quit
3. Configure a NAT traffic diversion policy on the inbound interface of DeviceA.
a. Configure an ACL rule named rule1 so that traffic from the internal PC to
server 2 is diverted to the NAT service board for NAT processing.
[~DeviceA] acl 3001
[*DeviceA-acl4-advance-3001] rule 1 permit ip destination 2.1.1.0 0.0.0.255
[*DeviceA-acl4-advance-3001] commit
[~DeviceA-acl4-advance-3001] quit
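The wildcard 0.0.0.255 in the rule above is the inverse of a netmask: 0 bits must match the base address, 1 bits are "don't care". A small sketch of that matching rule (plain Python; the helper name is illustrative):

```python
import ipaddress

def acl_match(addr, base, wildcard):
    """ACL wildcard semantics: a 0 bit in the wildcard must equal the base
    address, a 1 bit is ignored (inverse of a subnet mask)."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    care = 0xFFFFFFFF ^ w          # bits the rule actually compares
    return (a & care) == (b & care)
```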
b. Configure a traffic classifier named classifier1 and define an ACL-based
matching rule.
[~DeviceA] traffic classifier classifier1
[*DeviceA-classifier-classifier1] if-match acl 3001
[*DeviceA-classifier-classifier1] commit
[~DeviceA-classifier-classifier1] quit
c. Configure a traffic behavior named behavior1 and bind it to the NAT
instance.
[~DeviceA] traffic behavior behavior1
[*DeviceA-behavior-behavior1] nat bind instance nat1
[*DeviceA-behavior-behavior1] commit
[~DeviceA-behavior-behavior1] quit
d. Define a NAT traffic policy named policy1 to associate the traffic
classifier with the traffic behavior.
[~DeviceA] traffic policy policy1
[*DeviceA-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*DeviceA-trafficpolicy-policy1] commit
[~DeviceA-trafficpolicy-policy1] quit
e. Apply the NAT traffic diversion policy in the interface view.
[~DeviceA] interface GigabitEthernet 0/2/1
[~DeviceA-GigabitEthernet0/2/1] traffic-policy policy1 inbound
[*DeviceA-GigabitEthernet0/2/1] commit
[~DeviceA-GigabitEthernet0/2/1] quit
4. Configure a NAT conversion policy on DeviceA. (This configuration is required
only by dedicated boards.)
[~DeviceA] nat instance nat1
[~DeviceA-nat-instance-nat1] nat outbound 3001 address-group address-group1
[*DeviceA-nat-instance-nat1] commit
[~DeviceA-nat-instance-nat1] quit
5. Configure static routes.
a. Configure a static route on DeviceA so that the internal PC can access
server 1 at 1.1.1.1 through the GRE tunnel.
[~DeviceA] ip route-static 1.1.1.0 255.255.255.0 Tunnel1
[*DeviceA] commit
b. Configure another static route on DeviceA so that the internal PC can
access server 2 at 2.1.1.1 through GE 0/2/0.
[~DeviceA] ip route-static 2.1.1.0 255.255.255.0 172.20.1.2
[*DeviceA] commit
c. Configure a static route on DeviceB so that server 1 can access the
internal PC at 10.1.1.1 through the GRE tunnel.
[~DeviceB] ip route-static 10.1.1.0 255.255.255.0 Tunnel1
[*DeviceB] commit
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 172.20.1.1 255.255.255.0
binding tunnel gre
#
interface Tunnel1
tunnel-protocol gre
ip address 172.22.1.1 255.255.255.0
source GigabitEthernet 0/2/0
destination 172.20.1.2
#
vsm on-board-mode disable //This configuration is generated in the dedicated board scenario.
#
license
active nat session-table size 6 slot 9 //This configuration is generated in the dedicated board scenario.
active nat bandwidth-enhance 40 slot 9 //This configuration is generated in the dedicated board scenario.
#
service-location 1
location slot 9 //This configuration is generated in the dedicated board scenario.
location follow-forwarding-mode //This configuration is generated in the on-board scenario.
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 unnumbered interface GigabitEthernet 0/2/0
nat outbound 3001 address-group address-group1 //This configuration is generated in the dedicated board scenario.
#
acl number 3001
rule 1 permit ip destination 2.1.1.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
traffic-policy policy1 inbound
#
ip route-static 1.1.1.0 255.255.255.0 Tunnel1
ip route-static 2.1.1.0 255.255.255.0 172.20.1.2
#
return
Networking Requirements
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure basic NAT functions.
Step 2 Configure the mapping between the public and private IP addresses of the internal
NAT server.
[~NAT-Device] nat instance nat1 id 1
[~NAT-Device-nat-instance-nat1] nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/1 ftp inside 192.168.2.1 ftp
[*NAT-Device-nat-instance-nat1] nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/1 www inside 192.168.4.1 www
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2 id 2
[~NAT-Device-nat-instance-nat2] nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/2 ftp inside 192.168.3.1 ftp
[*NAT-Device-nat-instance-nat2] nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/2 www inside 192.168.5.1 www
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
Step 3 Configure the NAT ALG function. Enable the NAT ALG function for FTP and DNS in
each NAT instance. Configure a DNS mapping entry that contains a domain name,
a public IP address, and a private IP address in each NAT instance for NAT
processing that is performed after the DNS server resolves the IP address of the
internal server.
[~NAT-Device] nat instance nat1
[~NAT-Device-nat-instance-nat1] nat alg ftp
[*NAT-Device-nat-instance-nat1] nat alg dns
[*NAT-Device-nat-instance-nat1] nat dns-mapping domain www.huawei.com global-address 10.11.1.1 inside-address 192.168.4.1
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2
[~NAT-Device-nat-instance-nat2] nat alg ftp
[*NAT-Device-nat-instance-nat2] nat alg dns
[*NAT-Device-nat-instance-nat2] nat dns-mapping domain www.huawei.com global-address 10.11.2.1 inside-address 192.168.5.1
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
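The `nat dns-mapping` entries let the DNS ALG substitute the inside address when it sees a response for a mapped domain carrying the global address. A simplified lookup model (plain Python; values taken from this example):

```python
# (domain, global address) -> inside address, from the nat dns-mapping
# commands configured above.
DNS_MAPPING = {
    ("www.huawei.com", "10.11.1.1"): "192.168.4.1",  # instance nat1
    ("www.huawei.com", "10.11.2.1"): "192.168.5.1",  # instance nat2
}

def rewrite_dns_answer(domain, answer_ip):
    """Rewrite a DNS A-record answer for private-side clients, if mapped;
    unmapped answers pass through unchanged."""
    return DNS_MAPPING.get((domain, answer_ip), answer_ip)
```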
8. Configure traffic behaviors for data that needs to be redirected. Set the
redirection next-hop IP address to 10.11.1.2 in a traffic behavior named
redirectover2 and 10.11.2.2 in a traffic behavior named redirectover3.
[~NAT-Device] traffic behavior redirectover1
[*NAT-Device-behavior-redirectover1] commit
[~NAT-Device-behavior-redirectover1] quit
[~NAT-Device] traffic behavior redirectover2
[*NAT-Device-behavior-redirectover2] redirect ip-nexthop 10.11.1.2
[*NAT-Device-behavior-redirectover2] commit
[~NAT-Device-behavior-redirectover2] quit
[~NAT-Device] traffic behavior redirectover3
[*NAT-Device-behavior-redirectover3] redirect ip-nexthop 10.11.2.2
[*NAT-Device-behavior-redirectover3] commit
[~NAT-Device-behavior-redirectover3] quit
9. Bind the traffic classifiers with the traffic behaviors in a traffic policy.
– Data flows exchanged by users on the network segment of
192.168.0.0/16 within the enterprise network are assigned a priority value
of 1 (higher) and are not processed by NAT.
– Data flows with the source IP address 192.168.2.1/32 pass through
outbound interface 2 and are assigned a priority value of 2.
– Data flows with the source IP address 192.168.3.1/32 pass through
outbound interface 3 and are assigned a priority value of 3.
----End
Configuration Files
#
sysname NAT-Device
#
load-balance hash-key ip source-ip slot all
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 unnumbered interface GigabitEthernet 0/2/1
nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/1 ftp inside 192.168.2.1 ftp
nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/1 www inside 192.168.4.1 www
nat alg ftp
nat alg dns
redirect ip-nexthop 10.11.1.2 outbound
nat dns-mapping domain www.huawei.com global-address 10.11.1.1 inside-address 192.168.4.1
#
nat instance nat2 id 2
service-instance-group group1
nat address-group address-group1 group-id 1 unnumbered interface GigabitEthernet 0/2/2
nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/2 ftp inside 192.168.3.1 ftp
nat server protocol tcp global unnumbered interface GigabitEthernet 0/2/2 www inside 192.168.5.1 www
nat alg ftp
nat alg dns
redirect ip-nexthop 10.11.2.2 outbound
nat dns-mapping domain www.huawei.com global-address 10.11.2.1 inside-address 192.168.5.1
#
acl number 3000
rule 1 permit ip
#
acl number 3001
rule 1 permit ip source 192.168.0.0 0.0.255.255
#
acl number 3002
rule 1 permit ip source 192.168.2.1 0.0.0.0
#
acl number 3003
rule 1 permit ip source 192.168.3.1 0.0.0.0
#
acl number 3004
rule 1 permit ip source 192.168.4.1 0.0.0.0
#
acl number 3005
rule 1 permit ip source 192.168.5.1 0.0.0.0
#
traffic classifier redirectover1 operator or
if-match acl 3001 precedence 1
#
traffic classifier redirectover2 operator or
if-match acl 3002 precedence 1
#
traffic classifier redirectover3 operator or
if-match acl 3003 precedence 1
#
traffic classifier redirectover4 operator or
if-match acl 3004 precedence 1
#
traffic classifier redirectover5 operator or
if-match acl 3005 precedence 1
#
traffic behavior redirectover1
#
traffic behavior redirectover2
redirect ip-nexthop 10.11.1.2
#
traffic behavior redirectover3
redirect ip-nexthop 10.11.2.2
#
traffic policy redirect
classifier redirectover1 behavior redirectover1 precedence 1
classifier redirectover2 behavior redirectover2 precedence 2
classifier redirectover3 behavior redirectover3 precedence 3
classifier redirectover4 behavior redirectover2 precedence 4
classifier redirectover5 behavior redirectover3 precedence 5
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.0.1 255.255.0.0
traffic-policy redirect inbound
#
interface GigabitEthernet 0/2/1
undo shutdown
ip address 10.11.1.1 255.255.255.0
nat bind acl 3000 instance nat1
#
interface GigabitEthernet 0/2/2
undo shutdown
ip address 10.11.2.1 255.255.255.0
nat bind acl 3000 instance nat2
#
Networking Requirements
NOTE
Figure 1-73 Networking for configuring dual-uplink NAT and an internal server on
a campus network
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● NAT instance names (nat1 and nat2), indexes (1 and 2), and public address
pool and education network address pool assigned to nat1 and nat2,
respectively
● NAT-Device's address pool names (address-group1 and address-group2) and
address pool numbers (1 and 2)
● ACL numbers (3001 through 3005)
● Names (GE 0/2/0, GE 0/2/2, and GE 0/2/1) and IP addresses (192.168.1.1/24,
2.1.1.1/24, and 1.1.1.1/24) of interfaces, respectively, to which a NAT traffic
diversion policy is applied
● Private IP address (192.168.4.1) and public IP address (2.1.1.3) of an internal
server within the campus network
Procedure
Step 1 Configure basic NAT functions.
1. Create NAT instances named nat1 and nat2.
<HUAWEI> system-view
[~HUAWEI] sysname NAT-Device
[*HUAWEI] commit
[~NAT-Device] service-location 1
[*NAT-Device-service-location-1] location follow-forwarding-mode
[*NAT-Device-service-location-1] commit
[~NAT-Device-service-location-1] quit
[~NAT-Device] service-instance-group group1
[*NAT-Device-service-instance-group-group1] service-location 1
[*NAT-Device-service-instance-group-group1] commit
[~NAT-Device-service-instance-group-group1] quit
[~NAT-Device] nat instance nat1 id 1
[*NAT-Device-nat-instance-nat1] service-instance-group group1
[*NAT-Device-nat-instance-nat1] commit
[~NAT-Device-nat-instance-nat1] quit
[~NAT-Device] nat instance nat2 id 2
[*NAT-Device-nat-instance-nat2] service-instance-group group1
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
Step 2 Configure an internal server in the NAT instance named nat2 and assign the
private and public IP addresses of 192.168.4.1 and 2.1.1.3, respectively.
[~NAT-Device] nat instance nat2 id 2
[~NAT-Device-nat-instance-nat2] nat server-mode enable
[*NAT-Device-nat-instance-nat2] nat server global 2.1.1.3 inside 192.168.4.1
[*NAT-Device-nat-instance-nat2] commit
[~NAT-Device-nat-instance-nat2] quit
8. Configure traffic behaviors for data that needs to be redirected. Set the
redirection next-hop IP address to 1.1.1.2 in a traffic behavior named
redirectover1 and 2.1.1.2 in a traffic behavior named redirectover2.
[~NAT-Device] traffic behavior redirectover1
[*NAT-Device-behavior-redirectover1] redirect ip-nexthop 1.1.1.2
[*NAT-Device-behavior-redirectover1] commit
[~NAT-Device-behavior-redirectover1] quit
[~NAT-Device] traffic behavior redirectover2
[*NAT-Device-behavior-redirectover2] redirect ip-nexthop 2.1.1.2
[*NAT-Device-behavior-redirectover2] commit
[~NAT-Device-behavior-redirectover2] quit
[~NAT-Device] traffic behavior redirectover3
[*NAT-Device-behavior-redirectover3] commit
[~NAT-Device-behavior-redirectover3] quit
9. Bind the traffic classifiers with the traffic behaviors in a traffic policy.
– Data flows destined for 1.1.1.2/32 pass through the outbound interface 2
and are assigned a priority value of 1 (higher).
– Data flows destined for 2.1.1.2/32 pass through the outbound interface 3
and are assigned a priority value of 2 (higher).
– Data flows exchanged by users on the network segment of
192.168.0.0/16 within the campus network are assigned a priority value
of 3 and are not processed by NAT.
– Data flows originating from the network segment 192.168.2.0/24 pass
through the outbound interface 2 and are assigned a priority value of 4
(lower).
– Data flows originating from the network segment 192.168.3.0/24 pass
through the outbound interface 3 and are assigned a priority value of 5
(lower).
[~NAT-Device] traffic policy redirect
[*NAT-Device-trafficpolicy-redirect] classifier redirectover1 behavior redirectover1 precedence 1
[*NAT-Device-trafficpolicy-redirect] classifier redirectover2 behavior redirectover2 precedence 2
[*NAT-Device-trafficpolicy-redirect] classifier redirectover3 behavior redirectover3 precedence 3
[*NAT-Device-trafficpolicy-redirect] classifier redirectover4 behavior redirectover1 precedence 4
[*NAT-Device-trafficpolicy-redirect] classifier redirectover5 behavior redirectover2 precedence 5
[*NAT-Device-trafficpolicy-redirect] commit
[~NAT-Device-trafficpolicy-redirect] quit
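A traffic policy evaluates its classifiers in precedence order, and the first match wins, which is why the "no NAT" classifier for intra-campus traffic sits above the source-based redirect classifiers. A simplified model of that evaluation (plain Python; the match predicates are coarse prefix checks, not real ACL logic):

```python
# (precedence, match predicate, action) entries mirroring the policy above.
POLICY = [
    (1, lambda pkt: pkt["dst"].startswith("1.1.1."),     "redirect 1.1.1.2"),
    (2, lambda pkt: pkt["dst"].startswith("2.1.1."),     "redirect 2.1.1.2"),
    (3, lambda pkt: pkt["dst"].startswith("192.168."),   "no action"),
    (4, lambda pkt: pkt["src"].startswith("192.168.2."), "redirect 1.1.1.2"),
    (5, lambda pkt: pkt["src"].startswith("192.168.3."), "redirect 2.1.1.2"),
]

def apply_policy(pkt):
    """Return the action of the lowest-precedence matching classifier."""
    for _prec, match, action in sorted(POLICY, key=lambda e: e[0]):
        if match(pkt):
            return action
    return "forward normally"
```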
----End
Configuration Files
#
sysname NAT-Device
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 1.1.1.50 1.1.1.100
redirect ip-nexthop 1.1.1.2 outbound
#
nat instance nat2 id 2
service-instance-group group1
nat address-group address-group2 group-id 2 2.1.1.50 2.1.1.100
nat server-mode enable
nat server global 2.1.1.3 inside 192.168.4.1
redirect ip-nexthop 2.1.1.2 outbound
#
acl number 3000
rule 1 permit ip
#
acl number 3001
rule 1 permit ip destination 1.1.1.0 0.0.0.255
#
acl number 3002
rule 1 permit ip destination 2.1.1.0 0.0.0.255
#
acl number 3003
rule 1 permit ip destination 192.168.0.0 0.0.255.255
#
acl number 3004
rule 1 permit ip source 192.168.2.0 0.0.0.255
#
acl number 3005
rule 1 permit ip source 192.168.3.0 0.0.0.255
#
traffic classifier redirectover1 operator or
if-match acl 3001 precedence 1
#
traffic classifier redirectover2 operator or
if-match acl 3002 precedence 1
#
traffic classifier redirectover3 operator or
if-match acl 3003 precedence 1
#
traffic classifier redirectover4 operator or
if-match acl 3004 precedence 1
#
traffic classifier redirectover5 operator or
if-match acl 3005 precedence 1
#
traffic behavior redirectover1
redirect ip-nexthop 1.1.1.2
#
traffic behavior redirectover2
redirect ip-nexthop 2.1.1.2
#
traffic behavior redirectover3
#
traffic policy redirect
classifier redirectover1 behavior redirectover1 precedence 1
classifier redirectover2 behavior redirectover2 precedence 2
classifier redirectover3 behavior redirectover3 precedence 3
classifier redirectover4 behavior redirectover1 precedence 4
classifier redirectover5 behavior redirectover2 precedence 5
#
interface GigabitEthernet 0/2/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
traffic-policy redirect inbound
#
Networking Requirements
NOTE
In Figure 1-74, home users access a BRAS using IPoE. The BRAS implements user
authentication, authorization, and accounting. It also provides the NAT service to
convert between the private and public IP addresses of home users, so that the
home users can access the Internet.
Home users of user group 1 can access the Internet.
Figure 1-74 Example for configuring IPoEoVLAN access together with NAT
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT functions.
2. Configure NAT user information and RADIUS authentication on the BRAS.
3. Configure a NAT diversion policy.
Data Preparation
To complete the configuration, you need the following data:
● Name of a NAT instance
● NAT address pool's number and start and end IP addresses
● User group name
● ACL and UCL numbers
● NAT traffic diversion policy information
Procedure
Step 1 Create a NAT instance named nat1.
<HUAWEI> system-view
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location follow-forwarding-mode
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] commit
[~HUAWEI-service-instance-group-group1] quit
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] service-instance-group group1
[*HUAWEI-nat-instance-nat1] commit
[~HUAWEI-nat-instance-nat1] quit
[~HUAWEI-aaa-accounting-acct1] quit
[~HUAWEI-aaa] domain isp1
[*HUAWEI-aaa-domain-isp1] authentication-scheme auth1
[*HUAWEI-aaa-domain-isp1] accounting-scheme acct1
[*HUAWEI-aaa-domain-isp1] radius-server group rd1
[*HUAWEI-aaa-domain-isp1] ip-pool pool1
[*HUAWEI-aaa-domain-isp1] user-group group1
[*HUAWEI-aaa-domain-isp1] commit
[~HUAWEI-aaa-domain-isp1] quit
[~HUAWEI-aaa] quit
Step 4 Configure a traffic classification rule, a NAT behavior, and a NAT traffic diversion
policy and apply the policy.
NOTE
3. Configure a traffic behavior named b1 and bind the traffic behavior to the
NAT instance named nat1.
[~HUAWEI] traffic behavior b1
[*HUAWEI-behavior-b1] nat bind instance nat1
[*HUAWEI-behavior-b1] commit
[~HUAWEI-behavior-b1] quit
4. Configure a NAT diversion policy and associate the ACL rule with the traffic
behavior.
[~HUAWEI] traffic policy p1
[*HUAWEI-trafficpolicy-p1] share-mode
[*HUAWEI-trafficpolicy-p1] classifier c1 behavior b1 precedence 1
[*HUAWEI-trafficpolicy-p1] commit
[~HUAWEI-trafficpolicy-p1] quit
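Unlike address-based ACLs, the UCL rule `permit ip source user-group group1` classifies by the group a BRAS session was authorized into, not by a fixed network segment. A simplified illustration (plain Python; the session table values are hypothetical):

```python
# Hypothetical BRAS session table: address assigned at login -> authorized
# user group (set by the domain's 'user-group group1' configuration).
SESSIONS = {"10.64.0.2": "group1", "10.64.0.3": "group2"}

def matches_ucl_6001(src_ip):
    """ACL 6001 matches only traffic from sessions authorized into group1,
    so those users are diverted to NAT instance nat1."""
    return SESSIONS.get(src_ip) == "group1"
```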
----End
Configuration Files
● BRAS configuration file
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group group1 group-id 1 11.1.1.1 mask 26
#
radius-server group rd1
radius-server shared-key-cipher %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server authentication 192.168.8.9 1812 weight 0
radius-server accounting 192.168.8.9 1813 weight 0
radius-server type standard
#
ip pool pool1 bas local
gateway 10.64.0.1 255.255.0.0
section 0 10.64.0.2 10.64.255.254
dns-server 192.168.8.2
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool pool1
user-group group1
#
user-group group1
#
acl number 6001
rule 1 permit ip source user-group group1
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
interface Eth-Trunk2.1
user-vlan 1 2
#
bas
access-type layer2-subscriber default-domain authentication isp1
client-option82
option82-relay-mode include allvalue
authentication-method bind
#
#
return
Networking Requirements
On the distributed NAT network shown in Figure 1-75, a BRAS provides NAT
functions. PC1 on VPN-A attempts to access PC2 on VPN-B through the BRAS.
Upon receipt of user-side traffic, the BRAS performs NAT translation, in addition to
user authentication, authorization, and accounting.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Name of a NAT instance
● Name, RD, and VPN-target extended community attributes of a VPN instance
● NAT address pool's number and start and end IP addresses
● User group name
● ACL number and UCL number
● Information about the NAT traffic diversion policy
Procedure
1. Configure basic NAT functions.
a. Set the maximum number of sessions that can be created on the CPU of
the NAT service board to 16M.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] license
[~BRAS-license] active nat session-table size 16 slot 9
[*BRAS-license] active nat bandwidth-enhance 40 slot 9
[*BRAS-license] commit
[~BRAS-license] quit
3. Configure a NAT address pool and a NAT traffic conversion policy so that NAT
is performed using addresses in the NAT address pool. This applies to all
packets that are diverted by an interface board to a NAT service board.
[~BRAS] nat instance nat1 id 1
[*BRAS-nat-instance-nat1] nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance vpnb
[*BRAS-nat-instance-nat1] nat outbound any address-group address-group1
[*BRAS-nat-instance-nat1] commit
[~BRAS-nat-instance-nat1] quit
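Because the address group is bound to `vpn-instance vpnb`, translations are effectively keyed per VPN, so overlapping private addresses in different VPN instances do not collide. A simplified model of that keying (plain Python; pool values from this example, allocation strategy assumed):

```python
# Public pool from 'nat address-group address-group1 ... 11.1.1.1 11.1.1.5'.
POOL = ["11.1.1.1", "11.1.1.2", "11.1.1.3", "11.1.1.4", "11.1.1.5"]
bindings = {}   # (vpn, private ip) -> public ip

def translate(vpn, src_ip):
    """Allocate a stable public address per (VPN, private address) pair;
    identical private addresses in different VPNs get distinct bindings."""
    key = (vpn, src_ip)
    if key not in bindings:
        bindings[key] = POOL[len(bindings) % len(POOL)]
    return bindings[key]
```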
[~BRAS-ip-pool-baspool1] quit
c. Configure a traffic behavior named b1, which binds traffic to the NAT
instance named nat1.
[~BRAS] traffic behavior b1
[*BRAS-behavior-b1] nat bind instance nat1
[*BRAS-behavior-b1] commit
[~BRAS-behavior-b1] quit
d. Define a NAT policy to associate the ACL rule with the traffic behavior.
[~BRAS] traffic policy p1
[*BRAS-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS-trafficpolicy-p1] commit
[~BRAS-trafficpolicy-p1] quit
Configuration Files
BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
user-group group1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 222:1 export-extcommunity
vpn-target 222:1 import-extcommunity
#
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance vpnb
nat outbound any address-group address-group1
#
ip pool baspool1 bas local
vpn-instance vpna
gateway 10.1.1.101 255.255.255.0
section 1 10.1.1.1 10.1.1.100
#
acl 6001
rule 1 permit ip source user-group group1
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
#
aaa
authentication-scheme default0
authentication-mode RADIUS
#
accounting-scheme default0
accounting-mode RADIUS
#
domain natbras
authentication-scheme default0
accounting-scheme default0
Example for Using VPN NAT to Implement User Access in an L3VPN Scenario
This section provides an example for configuring VPN NAT to allow VPN users on
the same network segment to access one another in an L3VPN scenario and to
allow the VPN users to access an internal server on the same network segment. A
networking diagram is provided to help you understand the configuration
procedure. The example provides the networking requirements, configuration
roadmap, configuration procedure, and configuration files.
Networking Requirements
On the L3VPN shown in Figure 1-76, CE1 and CE2 belong to VPN-A. VPN-A
connected to PE1 has IP addresses on the network segment 10.1.0.0/16, and VPN-
A connected to PE2 has IP addresses on the network segment 10.2.0.0/16. VPN
NAT is required for communication between VPN users connected to CE1 and CE2.
VPN NAT also needs to be configured on the internal FTP server so that the VPN
users connected to CE2 can access the internal FTP server connected to CE1.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces
● Service-location group's index (1), service-instance group's name (group1),
and service boards' slot IDs (1 and 3)
● NAT instance name (nat1) and index (1)
● NAT address pool name (address-group1), address pool number (1), and a
range of IP addresses
● Internal server's private IP address (10.1.1.226) and public IP address
(11.1.1.2)
● ACL number (3001), traffic classifier (c1), traffic behavior (b1), and traffic
policy (p1)
Procedure
Step 1 Configure an IGP on the MPLS backbone network to implement connectivity
between the PEs and P.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[~PE1] interface gigabitEthernet 0/2/3
[*PE1-GigabitEthernet0/2/3] undo shutdown
[*PE1-GigabitEthernet0/2/3] ip address 172.16.2.2 255.255.255.0
[*PE1-GigabitEthernet0/2/3] commit
[~PE1-GigabitEthernet0/2/3] quit
[~PE1] interface LoopBack0
[*PE1-LoopBack0] ip address 1.1.1.9 255.255.255.255
[*PE1-LoopBack0] commit
[~PE1-LoopBack0] quit
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure the P.
<HUAWEI> system-view
[HUAWEI] sysname P
[~P] interface gigabitEthernet 0/2/3
[*P-GigabitEthernet0/2/3] undo shutdown
[*P-GigabitEthernet0/2/3] ip address 172.16.2.1 255.255.255.0
[*P-GigabitEthernet0/2/3] commit
[~P-GigabitEthernet0/2/3] quit
[~P] interface gigabitEthernet 0/2/2
[*P-GigabitEthernet0/2/2] ip address 172.16.3.2 255.255.255.0
[*P-GigabitEthernet0/2/2] commit
[~P-GigabitEthernet0/2/2] quit
[~P] interface LoopBack0
[*P-LoopBack0] ip address 2.2.2.9 255.255.255.255
[*P-LoopBack0] commit
[~P-LoopBack0] quit
[~P] ospf 1
[~P-ospf-1] area 0
[*P-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] network 172.16.3.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P-ospf-1-area-0.0.0.0] commit
[~P-ospf-1-area-0.0.0.0] quit
[~P-ospf-1] quit
# Configure PE2.
<HUAWEI> system-view
[HUAWEI] sysname PE2
[~PE2] interface gigabitEthernet 0/2/2
[*PE2-GigabitEthernet0/2/2] undo shutdown
[*PE2-GigabitEthernet0/2/2] ip address 172.16.3.1 255.255.255.0
[*PE2-GigabitEthernet0/2/2] commit
[~PE2-GigabitEthernet0/2/2] quit
[~PE2] interface LoopBack0
[*PE2-LoopBack0] ip address 3.3.3.9 255.255.255.255
[*PE2-LoopBack0] commit
[~PE2-LoopBack0] quit
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 172.16.3.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
Step 2 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] lsp-trigger all
[*PE1-mpls] commit
[~PE1-mpls] quit
[~PE1] mpls ldp
[*PE1-mpls-ldp] commit
[~PE1-mpls-ldp] quit
[~PE1] interface gigabitEthernet 0/2/3
[*PE1-GigabitEthernet0/2/3] mpls
[*PE1-GigabitEthernet0/2/3] mpls ldp
[*PE1-GigabitEthernet0/2/3] commit
[~PE1-GigabitEthernet0/2/3] quit
# Configure the P.
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] lsp-trigger all
[*P-mpls] commit
[~P-mpls] quit
[~P] mpls ldp
[*P-mpls-ldp] commit
[~P-mpls-ldp] quit
[~P] interface gigabitEthernet 0/2/3
[*P-GigabitEthernet0/2/3] mpls
[*P-GigabitEthernet0/2/3] mpls ldp
[*P-GigabitEthernet0/2/3] commit
[~P-GigabitEthernet0/2/3] quit
[~P] interface gigabitEthernet 0/2/2
[*P-GigabitEthernet0/2/2] mpls
[*P-GigabitEthernet0/2/2] mpls ldp
[*P-GigabitEthernet0/2/2] commit
[~P-GigabitEthernet0/2/2] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] lsp-trigger all
[*PE2-mpls] commit
[~PE2-mpls] quit
[~PE2] mpls ldp
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
[~PE2] interface gigabitEthernet 0/2/2
[*PE2-GigabitEthernet0/2/2] mpls
[*PE2-GigabitEthernet0/2/2] mpls ldp
[*PE2-GigabitEthernet0/2/2] commit
[~PE2-GigabitEthernet0/2/2] quit
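Conceptually, each LSR along an LDP LSP swaps the incoming MPLS label according to its label forwarding table, and the egress pops the label. The following Python sketch is illustrative only: real labels are assigned dynamically by LDP, and the values and node names used here are made up.

```python
def traverse_lsp(ingress_label, swap_tables):
    """Follow a label-switched path: each listed node swaps the incoming label
    per its table; label 3 (implicit null) signals the egress to pop."""
    label = ingress_label
    path = []
    for node, table in swap_tables:
        label = table[label]          # label swap at this hop
        path.append((node, label))
    return path

# Hypothetical labels on the PE1 -> P -> PE2 path of this example.
print(traverse_lsp(1024, [("P", {1024: 1025}), ("PE2", {1025: 3})]))
```

The final label 3 at PE2 models the implicit-null label that tells the egress to pop the MPLS header and forward the packet natively.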
Step 3 Configure VPN instances on PEs so that CEs can access PEs.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] commit
[~PE1-vpn-instance-vpna-af-ipv4] quit
[~PE1] interface gigabitEthernet 0/2/4
[*PE1-GigabitEthernet0/2/4] undo shutdown
[*PE1-GigabitEthernet0/2/4] ip binding vpn-instance vpna
[*PE1-GigabitEthernet0/2/4] ip address 172.16.1.2 255.255.255.0
[*PE1-GigabitEthernet0/2/4] commit
[~PE1-GigabitEthernet0/2/4] quit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2] interface gigabitEthernet 0/2/5
[*PE2-GigabitEthernet0/2/5] undo shutdown
[*PE2-GigabitEthernet0/2/5] ip binding vpn-instance vpna
[*PE2-GigabitEthernet0/2/5] ip address 172.16.4.2 255.255.255.0
[*PE2-GigabitEthernet0/2/5] commit
[~PE2-GigabitEthernet0/2/5] quit
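The vpn-target 111:1 both setting means each VPN instance both exports routes tagged with route target 111:1 and imports VPNv4 routes carrying that value, which is why the two vpna instances exchange routes. A minimal Python sketch of the import decision (the helper name is hypothetical; the device's actual policy evaluation is more involved):

```python
def should_import(route_rts, import_rts):
    """A VPNv4 route is imported if any route target it carries
    matches one of the VPN instance's import route targets."""
    return bool(set(route_rts) & set(import_rts))

# A route exported by PE1's vpna (tagged 111:1) is accepted by PE2's vpna,
# which imports 111:1; a route tagged only 222:1 would be rejected.
print(should_import(["111:1"], ["111:1"]))  # True
print(should_import(["222:1"], ["111:1"]))  # False
```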
Step 4 Establish EBGP peer relationships between PEs and CEs and import VPN routes.
# Configure CE1.
[~CE1] bgp 200
[*CE1-bgp] peer 172.16.1.2 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] commit
[~CE1-bgp] quit
# Configure CE2.
[~CE2] bgp 300
[*CE2-bgp] peer 172.16.4.2 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] commit
[~CE2-bgp] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 172.16.1.1 as-number 200
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] import-route unr
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 172.16.4.1 as-number 300
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] import-route unr
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
Step 5 Establish an MP-IBGP peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface LoopBack0
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] policy vpn-target
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
After completing the configuration, run the display bgp peer command on a PE
to verify that the BGP peer relationship between PEs is in the Established state.
Step 6 Create a service-location group and a service-instance group and bind a NAT
service board to the service-location group. Create a NAT instance and bind it to
the service-instance group.
<PE1> system-view
[~PE1] service-location 1
[*PE1-service-location-1] location follow-forwarding-mode
[*PE1-service-location-1] commit
[~PE1-service-location-1] quit
[~PE1] service-instance-group group1
[*PE1-service-instance-group-group1] service-location 1
[*PE1-service-instance-group-group1] commit
[~PE1-service-instance-group-group1] quit
[~PE1] nat instance nat1
[*PE1-nat-instance-nat1] service-instance-group group1
[*PE1-nat-instance-nat1] commit
[~PE1-nat-instance-nat1] quit
Step 7 On PE1, configure an internal server mapping so that the VPN users connected
to CE2 can use the public IP address 11.1.1.2 to access the internal FTP server at
the private IP address 10.1.1.226.
[~PE1] nat instance nat1
[*PE1-nat-instance-nat1] nat server protocol tcp global 11.1.1.2 ftp vpn-instance vpna inside 10.1.1.226 ftp vpn-instance vpna
[*PE1-nat-instance-nat1] commit
[~PE1-nat-instance-nat1] quit
Step 8 Configure ACL-based traffic classification rules and a traffic behavior on PE1.
1. Configure traffic classification rules.
– Rule 1: Configure a NAT traffic diversion policy for diverting user traffic
from the network segment 10.1.0.0/16 to the service board for NAT
processing.
– Rule 2: Configure a NAT traffic translation policy so that addresses in the
NAT address pool can be assigned to user traffic during NAT processing.
[~PE1] acl 3001
[*PE1-acl-adv-3001] rule 1 permit ip source 10.1.0.0 0.0.255.255
[*PE1-acl-adv-3001] rule 2 permit ip vpn-instance vpna source 10.1.0.0 0.0.255.255
[*PE1-acl-adv-3001] commit
[~PE1-acl-adv-3001] quit
2. Configure a traffic classifier and reference the ACL in a matching rule.
[~PE1] traffic classifier c1 operator or
[*PE1-classifier-c1] if-match acl 3001 precedence 1
[*PE1-classifier-c1] commit
[~PE1-classifier-c1] quit
3. Configure a traffic behavior, which binds traffic to the NAT instance named
nat1.
[~PE1] traffic behavior b1
[*PE1-behavior-b1] nat bind instance nat1
[*PE1-behavior-b1] commit
[~PE1-behavior-b1] quit
4. Configure a traffic classification policy and associate the ACL rule with the
traffic behavior.
[~PE1] traffic policy p1
[*PE1-trafficpolicy-p1] classifier c1 behavior b1
[*PE1-trafficpolicy-p1] commit
[~PE1-trafficpolicy-p1] quit
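In the ACL rules above, the wildcard mask 0.0.255.255 works inversely to a netmask: a bit set to 1 means the corresponding address bit is ignored when matching. The following Python sketch of that matching logic is illustrative only and is not device code:

```python
import ipaddress

def acl_matches(source_ip, base, wildcard):
    """Return True if source_ip matches base/wildcard,
    where 1 bits in the wildcard are 'don't care' bits."""
    ip = int(ipaddress.IPv4Address(source_ip))
    net = int(ipaddress.IPv4Address(base))
    wc = int(ipaddress.IPv4Address(wildcard))
    # Bits covered by the wildcard are ignored; all others must equal the base.
    return (ip ^ net) & ~wc & 0xFFFFFFFF == 0

# rule 1 permit ip source 10.1.0.0 0.0.255.255
print(acl_matches("10.1.200.7", "10.1.0.0", "0.0.255.255"))  # True
print(acl_matches("10.2.0.1", "10.1.0.0", "0.0.255.255"))    # False
```

Any source in 10.1.0.0/16 therefore matches rule 1, regardless of its last two octets.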
Step 9 On PE1, configure a NAT address pool and a NAT traffic conversion policy so
that the interface board diverts matching packets to the NAT service board, which
performs NAT using addresses in the address pool.
[~PE1] nat instance nat1
[*PE1-nat-instance-nat1] nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance
vpna
[*PE1-nat-instance-nat1] commit
[~PE1-nat-instance-nat1] quit
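Conceptually, once traffic reaches the service board, NAPT allocates a public address and port from address-group1 (11.1.1.1 through 11.1.1.5) for each new private flow and reuses that binding for subsequent packets of the same flow. The Python sketch below only illustrates this idea; the class name and the round-robin allocation strategy are assumptions, not the device's actual algorithm:

```python
import ipaddress
from itertools import count

class NaptPool:
    """Toy NAPT allocator mapping (private_ip, private_port)
    to (public_ip, public_port). Allocation strategy is illustrative."""
    def __init__(self, first, last):
        lo = int(ipaddress.IPv4Address(first))
        hi = int(ipaddress.IPv4Address(last))
        self.addresses = [str(ipaddress.IPv4Address(i)) for i in range(lo, hi + 1)]
        self.next_port = count(1024)      # first translated port (assumed)
        self.bindings = {}                # flow -> (public_ip, public_port)

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.bindings:      # new flow: allocate a binding
            port = next(self.next_port)
            public_ip = self.addresses[port % len(self.addresses)]
            self.bindings[key] = (public_ip, port)
        return self.bindings[key]         # existing flow: reuse the binding

pool = NaptPool("11.1.1.1", "11.1.1.5")
print(pool.translate("10.1.1.2", 1234))   # same flow always gets the same binding
```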
----End
Configuration Files
PE1 configuration file
#
sysname PE1
#
service-location 1
location follow-forwarding-mode
#
service-instance-group group1
service-location 1
#
nat instance nat1
service-instance-group group1
nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance vpna
nat server protocol tcp global 11.1.1.2 ftp vpn-instance vpna inside 10.1.1.226 ftp vpn-instance vpna
#
acl 3001
rule 1 permit ip source 10.1.0.0 0.0.255.255
rule 2 permit ip vpn-instance vpna source 10.1.0.0 0.0.255.255
#
traffic classifier c1 operator or
if-match acl 3001 precedence 1
#
traffic behavior b1
nat bind instance nat1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
#
mpls lsr-id 1.1.1.9
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet 0/2/4
undo shutdown
P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet 0/2/3
ip address 172.16.2.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet 0/2/2
ip address 172.16.3.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
PE2 configuration file
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet 0/2/5
undo shutdown
ip binding vpn-instance vpna
ip address 172.16.4.2 255.255.255.0
#
interface GigabitEthernet 0/2/2
undo shutdown
ip address 172.16.3.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack0
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
peer 172.16.4.1 as-number 300
import-route direct
import-route unr
#
ospf 1
area 0.0.0.0
network 172.16.3.0 0.0.0.255
network 3.3.3.9 0.0.0.0
#
return
CE2 configuration file
#
bgp 300
peer 172.16.4.2 as-number 100
import-route direct
#
return
Networking Requirements
In an L3VPN scenario shown in Figure 1-77, CE1 and CE3 belong to VPN-A, and
CE2 and CE4 belong to VPN-B. Private network users PC1 and PC2 have the same
IP address. PE2 is a NAT device. PC1 attempts to access PC3 through the NAT
device. PC2 accesses PC4 without using the NAT device. A VPN traffic diversion
policy is configured on the inbound interface of the egress PE to identify traffic of
different VPNs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure MPLS/BGP IP VPN so that CEs can communicate with each other
and with PEs.
2. Create a service-location group and a service-instance group.
3. Create a NAT instance and bind it to the service-instance group.
4. Configure address and port mapping for a NAT internal server.
5. Configure a NAT traffic diversion policy.
6. Configure a NAT address pool and a NAT traffic conversion policy.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces
● Service-location group index (1), service-instance group name (group1), and
service board slot number (1)
● NAT instance name (nat1) and index (1)
● NAT address pool name (address-group1), address pool number (1), and
public IP address range
● ACL number (3001), traffic classifier (c1), traffic behavior name (b1), and
traffic policy name (p1)
Procedure
Step 1 Configure an IGP on the MPLS backbone network to implement connectivity
between the PEs and P.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[~PE1] interface gigabitEthernet 0/2/3
[*PE1-GigabitEthernet0/2/3] undo shutdown
[*PE1-GigabitEthernet0/2/3] ip address 172.16.3.2 255.255.255.0
[*PE1-GigabitEthernet0/2/3] commit
[~PE1-GigabitEthernet0/2/3] quit
[~PE1] interface LoopBack0
[*PE1-LoopBack0] ip address 1.1.1.9 255.255.255.255
[*PE1-LoopBack0] commit
[~PE1-LoopBack0] quit
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 172.16.3.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure the P.
<HUAWEI> system-view
[HUAWEI] sysname P
[~P] interface gigabitEthernet 0/2/3
[*P-GigabitEthernet0/2/3] undo shutdown
[*P-GigabitEthernet0/2/3] ip address 172.16.3.1 255.255.255.0
[*P-GigabitEthernet0/2/3] commit
[~P-GigabitEthernet0/2/3] quit
[~P] interface gigabitEthernet 0/2/2
[*P-GigabitEthernet0/2/2] ip address 172.16.4.2 255.255.255.0
[*P-GigabitEthernet0/2/2] commit
[~P-GigabitEthernet0/2/2] quit
[~P] interface LoopBack0
[*P-LoopBack0] ip address 2.2.2.9 255.255.255.255
[*P-LoopBack0] commit
[~P-LoopBack0] quit
[~P] ospf 1
[~P-ospf-1] area 0
[*P-ospf-1-area-0.0.0.0] network 172.16.3.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] network 172.16.4.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P-ospf-1-area-0.0.0.0] commit
[~P-ospf-1-area-0.0.0.0] quit
[~P-ospf-1] quit
# Configure PE2.
<HUAWEI> system-view
[HUAWEI] sysname PE2
[~PE2] interface gigabitEthernet 0/2/2
[*PE2-GigabitEthernet0/2/2] undo shutdown
[*PE2-GigabitEthernet0/2/2] ip address 172.16.4.1 255.255.255.0
[*PE2-GigabitEthernet0/2/2] commit
[~PE2-GigabitEthernet0/2/2] quit
[~PE2] interface LoopBack0
[*PE2-LoopBack0] ip address 3.3.3.9 255.255.255.255
[*PE2-LoopBack0] commit
[~PE2-LoopBack0] quit
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 172.16.4.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
Step 2 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] lsp-trigger all
[*PE1-mpls] commit
[~PE1-mpls] quit
[~PE1] mpls ldp
[*PE1-mpls-ldp] commit
[~PE1-mpls-ldp] quit
[~PE1] interface gigabitEthernet 0/2/3
[*PE1-GigabitEthernet0/2/3] mpls
[*PE1-GigabitEthernet0/2/3] mpls ldp
[*PE1-GigabitEthernet0/2/3] commit
[~PE1-GigabitEthernet0/2/3] quit
# Configure the P.
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] lsp-trigger all
[*P-mpls] commit
[~P-mpls] quit
[~P] mpls ldp
[*P-mpls-ldp] commit
[~P-mpls-ldp] quit
[~P] interface gigabitEthernet 0/2/3
[*P-GigabitEthernet0/2/3] mpls
[*P-GigabitEthernet0/2/3] mpls ldp
[*P-GigabitEthernet0/2/3] commit
[~P-GigabitEthernet0/2/3] quit
[~P] interface gigabitEthernet 0/2/2
[*P-GigabitEthernet0/2/2] mpls
[*P-GigabitEthernet0/2/2] mpls ldp
[*P-GigabitEthernet0/2/2] commit
[~P-GigabitEthernet0/2/2] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] lsp-trigger all
[*PE2-mpls] commit
[~PE2-mpls] quit
[~PE2] mpls ldp
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
[~PE2] interface gigabitEthernet 0/2/2
[*PE2-GigabitEthernet0/2/2] mpls
[*PE2-GigabitEthernet0/2/2] mpls ldp
[*PE2-GigabitEthernet0/2/2] commit
[~PE2-GigabitEthernet0/2/2] quit
Step 3 Configure VPN instances on PEs so that CEs can access PEs.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] commit
[~PE1-vpn-instance-vpna-af-ipv4] quit
[~PE1] ip vpn-instance vpnb
[*PE1-vpn-instance-vpnb] ipv4-family
[*PE1-vpn-instance-vpnb-af-ipv4] route-distinguisher 200:1
[*PE1-vpn-instance-vpnb-af-ipv4] vpn-target 222:1 both
[*PE1-vpn-instance-vpnb-af-ipv4] commit
[~PE1-vpn-instance-vpnb-af-ipv4] quit
[~PE1] interface gigabitEthernet 0/2/4
[*PE1-GigabitEthernet0/2/4] undo shutdown
[*PE1-GigabitEthernet0/2/4] ip binding vpn-instance vpna
[*PE1-GigabitEthernet0/2/4] ip address 172.16.1.2 255.255.255.0
[*PE1-GigabitEthernet0/2/4] commit
[~PE1-GigabitEthernet0/2/4] quit
[~PE1] interface gigabitEthernet 0/2/5
[*PE1-GigabitEthernet0/2/5] undo shutdown
[*PE1-GigabitEthernet0/2/5] ip binding vpn-instance vpnb
[*PE1-GigabitEthernet0/2/5] ip address 172.16.2.2 255.255.255.0
[*PE1-GigabitEthernet0/2/5] commit
[~PE1-GigabitEthernet0/2/5] quit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 300:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2] ip vpn-instance vpnb
[*PE2-vpn-instance-vpnb] ipv4-family
[*PE2-vpn-instance-vpnb-af-ipv4] route-distinguisher 400:1
[*PE2-vpn-instance-vpnb-af-ipv4] vpn-target 222:1 both
[*PE2-vpn-instance-vpnb-af-ipv4] commit
[~PE2-vpn-instance-vpnb-af-ipv4] quit
[~PE2] interface gigabitEthernet 0/2/4
[*PE2-GigabitEthernet0/2/4] undo shutdown
[*PE2-GigabitEthernet0/2/4] ip binding vpn-instance vpna
[*PE2-GigabitEthernet0/2/4] ip address 172.16.5.2 255.255.255.0
[*PE2-GigabitEthernet0/2/4] commit
[~PE2-GigabitEthernet0/2/4] quit
[~PE2] interface gigabitEthernet 0/2/5
[*PE2-GigabitEthernet0/2/5] undo shutdown
[*PE2-GigabitEthernet0/2/5] ip binding vpn-instance vpnb
[*PE2-GigabitEthernet0/2/5] ip address 172.16.6.2 255.255.255.0
[*PE2-GigabitEthernet0/2/5] commit
[~PE2-GigabitEthernet0/2/5] quit
Step 4 Create EBGP peer relationships between PEs and CEs and import VPN routes.
# Configure CE1.
[~CE1] bgp 200
[*CE1-bgp] peer 172.16.1.2 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] commit
[~CE1-bgp] quit
# Configure CE2.
[~CE2] bgp 300
[*CE2-bgp] peer 172.16.2.2 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] commit
[~CE2-bgp] quit
# Configure CE3.
[~CE3] bgp 300
[*CE3-bgp] peer 172.16.5.2 as-number 100
[*CE3-bgp] import-route direct
[*CE3-bgp] commit
[~CE3-bgp] quit
# Configure CE4.
[~CE4] bgp 500
[*CE4-bgp] peer 172.16.6.2 as-number 100
[*CE4-bgp] import-route direct
[*CE4-bgp] commit
[~CE4-bgp] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 172.16.1.1 as-number 200
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[*PE1-bgp] ipv4-family vpn-instance vpnb
[*PE1-bgp-vpnb] peer 172.16.2.1 as-number 200
[*PE1-bgp-vpnb] import-route direct
[*PE1-bgp-vpnb] commit
[~PE1-bgp-vpnb] quit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 172.16.5.1 as-number 300
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[*PE2-bgp] ipv4-family vpn-instance vpnb
[*PE2-bgp-vpnb] peer 172.16.6.1 as-number 300
[*PE2-bgp-vpnb] import-route direct
[*PE2-bgp-vpnb] import-route unr
[*PE2-bgp-vpnb] commit
[~PE2-bgp-vpnb] quit
Step 5 Establish an MP-IBGP peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface LoopBack0
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] policy vpn-target
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
After completing the configuration, run the display bgp peer command on each
PE to verify that the BGP peer relationship between the PEs is in the Established
state.
Step 6 On PE2, create a service-location group and a service-instance group and
bind a NAT service board to the service-location group. Create a NAT instance and
bind it to the service-instance group.
<PE2> system-view
[~PE2] service-location 1
[*PE2-service-location-1] location follow-forwarding-mode
[*PE2-service-location-1] commit
[~PE2-service-location-1] quit
[~PE2] service-instance-group group1
[*PE2-service-instance-group-group1] service-location 1
[*PE2-service-instance-group-group1] commit
[~PE2-service-instance-group-group1] quit
[~PE2] nat instance nat1 id 1
[*PE2-nat-instance-nat1] service-instance-group group1
[*PE2-nat-instance-nat1] commit
[~PE2-nat-instance-nat1] quit
Step 7 Configure an ACL-based traffic classification rule and a traffic behavior on PE2.
1. Configure a traffic classification rule.
[~PE2] acl 3001
[*PE2-acl-adv-3001] rule 1 permit ip source 10.1.1.0 0.0.0.255
[*PE2-acl-adv-3001] commit
[~PE2-acl-adv-3001] quit
2. Configure a traffic classifier and reference the ACL in a matching rule.
[~PE2] traffic classifier c1 operator or
[*PE2-classifier-c1] if-match acl 3001 precedence 1
[*PE2-classifier-c1] commit
[~PE2-classifier-c1] quit
3. Configure a traffic behavior, which binds traffic to the NAT instance named
nat1.
[~PE2] traffic behavior b1
[*PE2-behavior-b1] nat bind instance nat1
[*PE2-behavior-b1] commit
[~PE2-behavior-b1] quit
4. Configure a traffic classification policy to associate the ACL rule with the
traffic behavior.
[~PE2] traffic policy p1
[*PE2-trafficpolicy-p1] classifier c1 behavior b1
[*PE2-trafficpolicy-p1] commit
[~PE2-trafficpolicy-p1] quit
Step 8 On PE2, configure a NAT address pool and a NAT traffic conversion policy so
that all packets diverted by an interface board to a NAT service board are
translated using addresses in the NAT address pool.
[~PE2] nat instance nat1 id 1
[*PE2-nat-instance-nat1] nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance
vpna
[*PE2-nat-instance-nat1] commit
[~PE2-nat-instance-nat1] quit
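The key point of this scenario is that PC1 and PC2 share the same private IP address, so translation state must be distinguished by VPN instance as well as by address and port. A toy Python sketch of such a session key (names and values are illustrative, not device internals):

```python
def session_key(vpn, ip, port, proto="udp"):
    """Include the VPN instance in the key so that identical private
    addresses in different VPNs yield distinct session entries."""
    return (vpn, proto, ip, port)

table = {}
# PC1 (vpna) is translated by nat1; PC2 (vpnb) bypasses NAT in this example.
table[session_key("vpna", "10.1.1.2", 5000)] = ("11.1.1.1", 2048)
table[session_key("vpnb", "10.1.1.2", 5000)] = None

print(len(table))  # 2
```

Despite the identical private IP address and port, the two flows occupy separate entries because the VPN instance is part of the key.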
----End
Configuration Files
PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 222:1 export-extcommunity
vpn-target 222:1 import-extcommunity
#
#
mpls lsr-id 1.1.1.9
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet 0/2/4
undo shutdown
ip binding vpn-instance vpna
ip address 172.16.1.2 255.255.255.0
#
interface GigabitEthernet 0/2/5
undo shutdown
ip binding vpn-instance vpnb
ip address 172.16.2.2 255.255.255.0
#
interface GigabitEthernet 0/2/3
undo shutdown
ip address 172.16.3.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack0
#
ipv4-family vpn-instance vpna
peer 172.16.1.1 as-number 200
import-route direct
import-route unr
#
ipv4-family vpn-instance vpnb
peer 172.16.2.1 as-number 200
import-route direct
import-route unr
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ospf 1
area 0.0.0.0
network 172.16.3.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
return
P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet 0/2/3
ip address 172.16.3.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet 0/2/2
ip address 172.16.4.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 172.16.3.0 0.0.0.255
network 172.16.4.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
Networking Requirements
In the SRv6 scenario shown in Figure 1-78, CE1 and CE2 belong to VPN-A, and
PE2 is a NAT device. PC1 needs to be able to communicate with PC2 after address
translation on PE2, and NAT traffic diversion needs to be configured on the
network-side inbound interface of the egress PE (PE2).
NOTE
Interface 1 and interface 2 in this example represent GE 0/2/1 and GE 0/2/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IPv4 addresses of interfaces on the CEs, IPv6 addresses of interfaces on the
PEs and the P, IS-IS process ID and level, and VPN instance names on the PEs
Procedure
Step 1 Assign IPv4 addresses to the interfaces on the CEs and PEs according to Figure
1-78. For configuration details, see "Configuration Files" in this section.
Step 2 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
following uses PE1 as an example. The configuration procedures on PE2 and P are
the same.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface GigabitEthernet 0/2/2
[*PE1-GigabitEthernet0/2/2] ipv6 enable
[*PE1-GigabitEthernet0/2/2] ipv6 address 2001:db8:1::1 96
[*PE1-GigabitEthernet0/2/2] quit
[*PE1] interface LoopBack 0
[*PE1-LoopBack0] ipv6 enable
[*PE1-LoopBack0] ipv6 address 2001:db8:10::1 64
[*PE1-LoopBack0] quit
[*PE1] commit
Step 3 Configure IS-IS on each device to advertise IPv6 routes. The following uses
PE2 as an example. The configurations on PE1 and the P are similar.
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface GigabitEthernet 0/2/2
[*PE2-GigabitEthernet0/2/2] isis ipv6 enable 1
[*PE2-GigabitEthernet0/2/2] quit
[*PE2] interface LoopBack 0
[*PE2-LoopBack0] isis ipv6 enable 1
[*PE2-LoopBack0] quit
[*PE2] commit
Step 4 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects each PE to a CE to the VPN instance
on that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface GigabitEthernet 0/2/1
[*PE1-GigabitEthernet0/2/1] ip binding vpn-instance vpna
[*PE1-GigabitEthernet0/2/1] ip address 172.16.1.2 24
[*PE1-GigabitEthernet0/2/1] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface GigabitEthernet 0/2/1
[*PE2-GigabitEthernet0/2/1] ip binding vpn-instance vpna
[*PE2-GigabitEthernet0/2/1] ip address 172.16.5.2 24
[*PE2-GigabitEthernet0/2/1] quit
[*PE2] commit
Step 5 Establish EBGP peer relationships between the PEs and CEs and import VPN
routes.
# Configure CE2.
[~CE2] bgp 400
[*CE2-bgp] peer 172.16.5.2 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] commit
[~CE2-bgp] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 172.16.1.1 as-number 200
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 172.16.5.1 as-number 400
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] import-route unr
[*PE2-bgp-vpna] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs to check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
Step 6 Establish an MP-IBGP peer relationship between the PEs.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 2001:db8:30::1 as-number 100
[*PE1-bgp] peer 2001:db8:30::1 connect-interface LoopBack0
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 2001:db8:30::1 enable
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 2001:db8:10::1 as-number 100
[*PE2-bgp] peer 2001:db8:10::1 connect-interface LoopBack0
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 2001:db8:10::1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs to check whether a BGP peer relationship has been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationship has been established successfully.
Step 7 Establish SRv6 BE paths between the PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:db8:10::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 2001:db8:40:: 64 static 32
[*PE1-segment-routing-ipv6-locator] opcode ::20 end-dt4 vpn-instance vpna
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 2001:db8:30::1 prefix-sid
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] quit
[*PE1-bgp] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:db8:30::1
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 2001:db8:50:: 64 static 32
[*PE2-segment-routing-ipv6-locator] opcode ::20 end-dt4 vpn-instance vpna
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 2001:db8:10::1 prefix-sid
[*PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 best-effort
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] quit
[*PE2-bgp] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] quit
[*PE2] commit
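An SRv6 SID is formed by combining the locator prefix with the opcode, so PE2's locator 2001:db8:50::/64 with End.DT4 opcode ::20 yields the SID 2001:db8:50::20. A short Python check of that composition (the helper name is hypothetical; this only illustrates the bit layout):

```python
import ipaddress

def srv6_sid(locator_prefix, opcode):
    """Compose an SRv6 SID by OR-ing the opcode (function part)
    into the locator prefix (most-significant bits)."""
    loc = int(ipaddress.IPv6Address(locator_prefix))
    op = int(ipaddress.IPv6Address(opcode))
    return str(ipaddress.IPv6Address(loc | op))

# PE2: locator 2001:db8:50::/64 with End.DT4 opcode ::20
print(srv6_sid("2001:db8:50::", "::20"))  # 2001:db8:50::20
```

The resulting SID is what the remote PE encapsulates as the IPv6 destination address, and the End.DT4 behavior bound to it decapsulates the packet into VPN instance vpna.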
Step 8 On PE2, create a service-location group and a service-instance group. Create
NAT instance nat1 and bind it to the service-instance group.
[~PE2] service-location 1
[*PE2-service-location-1] location follow-forwarding-mode
[*PE2-service-location-1] commit
[~PE2-service-location-1] quit
[~PE2] service-instance-group group1
[*PE2-service-instance-group-group1] service-location 1
[*PE2-service-instance-group-group1] commit
[~PE2-service-instance-group-group1] quit
[~PE2] nat instance nat1 id 1
[*PE2-nat-instance-nat1] service-instance-group group1
[*PE2-nat-instance-nat1] commit
[~PE2-nat-instance-nat1] quit
Step 9 Configure an ACL-based traffic policy and apply the traffic policy to the VPN
instance on PE2.
1. Configure an ACL and specify an ACL rule.
[~PE2] acl 3001
[*PE2-acl-adv-3001] rule 1 permit ip source 10.1.1.0 0.0.0.255
[*PE2-acl-adv-3001] commit
[~PE2-acl-adv-3001] quit
2. Configure a traffic classifier and define an ACL-based matching rule.
[~PE2] traffic classifier c1
[*PE2-classifier-c1] if-match acl 3001
[*PE2-classifier-c1] commit
[~PE2-classifier-c1] quit
3. Define a traffic behavior, in which the action is binding NAT instance nat1.
[~PE2] traffic behavior b1
[*PE2-behavior-b1] nat bind instance nat1
[*PE2-behavior-b1] commit
[~PE2-behavior-b1] quit
4. Define a traffic policy to associate the configured traffic classifier with the
traffic behavior.
[~PE2] traffic policy p1
[*PE2-trafficpolicy-p1] classifier c1 behavior b1
[*PE2-trafficpolicy-p1] commit
[~PE2-trafficpolicy-p1] quit
Step 10 Configure a NAT address pool and NAT conversion policy on PE2 so that all the
packets diverted to the NAT service board from the interface board are directly
translated using the addresses in the NAT address pool.
[~PE2] nat instance nat1 id 1
[*PE2-nat-instance-nat1] nat address-group address-group1 group-id 1 11.1.1.1 11.1.1.5 vpn-instance
vpna
[*PE2-nat-instance-nat1] commit
[~PE2-nat-instance-nat1] quit
Run the display nat session table command on PE2 to check that the desired
NAT session entry is displayed.
<HUAWEI> display nat session table slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Current total sessions: 1.
udp: 10.1.1.2:1234[11.1.1.1:2234]--> 10.1.3.1:1024
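Each session entry reads as protocol: private address:port[translated address:port]--> destination address:port. The Python snippet below merely reproduces that notation to make the fields explicit (illustrative only; not part of the device output):

```python
def format_session(proto, private, public, dest):
    """Render a NAT session in the 'private[translated]--> destination'
    style used by the display nat session table command output above."""
    return (f"{proto}: {private[0]}:{private[1]}"
            f"[{public[0]}:{public[1]}]--> {dest[0]}:{dest[1]}")

print(format_session("udp", ("10.1.1.2", 1234),
                     ("11.1.1.1", 2234), ("10.1.3.1", 1024)))
```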
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 2001:db8:10::1
locator as1 ipv6-prefix 2001:db8:40:: 64 static 32
opcode ::20 end-dt4 vpn-instance vpna
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet0/2/1
undo shutdown
ip binding vpn-instance vpna
ip address 172.16.1.2 255.255.255.0
#
interface GigabitEthernet0/2/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8:1::1/96
isis ipv6 enable 1
#
interface LoopBack0
ipv6 enable
ipv6 address 2001:db8:10::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:db8:30::1 as-number 100
peer 2001:db8:30::1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:db8:30::1 enable
peer 2001:db8:30::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 172.16.1.1 as-number 200
#
return
● P configuration file
#
sysname P
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
ipv6 address 2001:db8:1::2/96
isis ipv6 enable 1
#
interface GigabitEthernet0/2/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8:2::1/96
isis ipv6 enable 1
#
interface LoopBack0
ipv6 enable
ipv6 address 2001:db8:20::1/64
isis ipv6 enable 1
#
return
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:db8:10::1 enable
peer 2001:db8:10::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 172.16.5.1 as-number 400
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet0/2/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
#
bgp 200
peer 172.16.1.2 as-number 100
import-route direct
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet0/2/1
undo shutdown
ip address 172.16.5.1 255.255.255.0
#
bgp 400
peer 172.16.5.2 as-number 100
import-route direct
#
return
NOTE
The NetEngine 8000 M4 and NetEngine 8000 M8 do not support this feature.
Definition
Whereas common NAT maps private IP addresses and port numbers to public IP
addresses and port numbers, L2-Aware NAT (L2NAT) is a special NAT technology
that performs this mapping based on user location information, such as the PPP
session ID, MAC address, and user VLAN ID.
Purpose
L2NAT, similar to NAT444, is designed to help alleviate the IPv4 address shortage.
Benefits
● L2NAT performs one less translation than NAT444 does, reducing both the
NAT delay and the number of NAT translations.
● L2NAT is mature and easy to deploy.
● CPE devices do not need to be upgraded, protecting existing investments.
1.1.2.1.2 Principles
L2NAT Principles
L2NAT is a special NAT technology. Unlike in NAT444, a CPE in L2NAT only
performs route forwarding and does not perform NAT. A CGN device translates
private IP addresses, port numbers, and subscriber identifiers on one side into
public IP addresses and port numbers on the other side. Figure 1 shows the
address translation of L2NAT. The IP addresses of PC1 and PC2 are assigned by
CPE1, and the IP addresses of PC3 and PC4 are assigned by CPE2. The IP addresses
assigned by the same CPE are different, but IP addresses assigned by different
CPEs can be the same. After receiving a packet, the CGN device must therefore use
both the MAC address of the CPE and the IP address of the PC to uniquely
identify the user when performing NAT.
NAT444
● Advantages: Compared with DS-Lite, NAT444 does not require CPE upgrade,
IP address family translation, or DNS modification. The NAT444 network
model is compatible with the existing IPv4 network. NAT444 does not use the
tunneling technology. Therefore, NAT444 does not fragment packets.
● Disadvantages: For protocols, such as the Session Initiation Protocol (SIP), IP
addresses are carried at the application layer, and NAT may be performed
twice. Universal Plug and Play (UPnP) will not work in scenarios where NATs
are performed twice.
● Usage scenario: Carriers' networks are IPv4 only. Home terminals support
only the IPv4 stack. Carriers allocate IPv4 addresses to home terminals. Home
terminals support NAT.
1.1.2.1.3 Terminology
Terms
Term Definition
Context
NOTE
The NetEngine 8000 M8K and NetEngine 8000 M14K do not support this configuration.
Only dedicated boards support L2NAT.
Three NAT technologies are available on a network: NAT444, NAT44, and L2NAT.
● NAT444: A CPE performs NAT for user traffic before forwarding it to a NAT
device, which then performs NAT again for the user traffic. NAT444 allows
users on different private networks to use the same private IP address.
● NAT44: A CPE does not perform NAT for user traffic. Instead, it directly
forwards user traffic to a NAT device for NAT. NAT44 does not allow users on
different private networks to use the same private IP address.
● L2NAT: A CPE does not perform NAT for user traffic. Instead, it adds user
location information into packets when forwarding them. A NAT device then
translates the private IP address carried in each packet into a public IP address
based on the user location information. L2NAT allows users on different
private networks to use the same private IP address.
After CPEs connected to an L2NAT device are assigned the same IP address,
parameters related to external hosts must be configured on the L2NAT device to
establish static mappings between the CPEs and external hosts. This allows
external hosts to maintain access to online CPEs.
Procedure
Step 1 Configure the L2NAT license function.
1. Run license
The license view is displayed.
2. Run active l2nat vsuf slot slot-id
The L2NAT license is activated for the service board.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 2 Configure the L2NAT function.
1. Run system-view
The system view is displayed.
2. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
3. Run l2-aware enable
The L2NAT function is enabled.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
Step 3 Bind the service board or the service board's CPU to the NAT instance.
1. Run system-view
The system view is displayed.
2. Run service-location service-location-id
A service-location group is created, and its view is displayed.
3. Run location slot slot-id [ backup slot slot-id ]
The service board's CPU is bound.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
6. Run service-instance-group service-instance-group-name
A service-instance group is created, and its view is displayed.
7. Run service-location service-location-id
The service-location group is bound.
8. Run commit
The configuration is committed.
9. Run quit
Return to the system view.
10. Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
11. Run service-instance-group service-instance-group-name
The service-instance group is bound.
----End
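Taken together, the steps above can be sketched as follows. The slot number, group names, and instance name are examples, not values from this guide:

[~HUAWEI] license
[*HUAWEI-license] active l2nat vsuf slot 9
[*HUAWEI-license] commit
[~HUAWEI-license] quit
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] commit
[~HUAWEI-service-instance-group-group1] quit
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] l2-aware enable
[*HUAWEI-nat-instance-nat1] service-instance-group group1
[*HUAWEI-nat-instance-nat1] commit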
Definition
Dual-Stack Lite (DS-Lite) uses IPv4-in-IPv6 tunneling and IPv4 Network Address
Translation (NAT) techniques to allow private IPv4 users to traverse IPv6
networks and access public IPv4 networks.
Purpose
Carriers have faced IPv4 address depletion since the Internet Assigned Numbers
Authority (IANA) assigned the last two IPv4 address blocks with a mask length of
8 bits in February 2011. Existing address resources can sustain carrier networks
only in the short term and cannot meet the demands of long-term service
development. Existing measures, such as IPv4 re-addressing or address reuse, can
only relieve address exhaustion. Introducing IPv6 was therefore considered the
ultimate and direct solution. IPv6, however, brought no immediate commercial
opportunities, and existing alternative solutions met carrier deployment
requirements. As a result, there was insufficient motivation to replace IPv4
networks with IPv6 networks.
A practical way to implement IPv4-to-IPv6 transition is to deploy NAT and build
IPv4/IPv6 dual-stack networks with dual-stack hosts. Maintaining both IPv4 and
IPv6 networks, however, increases the operating expense (OPEX), and existing
carriers benefit more from dual-stack networks than new carriers do. New carriers
prefer IPv6-only networks because they are assigned fewer IPv4 addresses. In
addition, in the later phase of IPv6 evolution, most new networks and services run
over IPv6, which makes dual-stack networks less significant. Carriers also face the
problem that existing IPv4 or dual-stack users must traverse IPv6-only networks
before accessing IPv4 applications. The DS-Lite technique is used to address this
problem.
Benefits
DS-Lite offers the following benefits to carriers:
● To relieve IPv4 address depletion, DS-Lite allows IPv4 networks to run over
IPv6 networks.
● DS-Lite provides a technical plan for the transition from IPv4 to IPv6 and
protects the investment of carriers.
Dual-Stack Lite (DS-Lite) does not translate between address families and is more
complex than IPv4-in-IPv4 Network Address Translation (NAT). In essence, DS-Lite
deploys IPv4-in-IPv6 tunnels on an IPv6 network to carry IPv4 services, while IPv6
services are transmitted directly over the IPv6 network.
Figure 1-80 shows a DS-Lite network with Point-to-Point Protocol over Ethernet
(PPPoE) enabled. The routed customer-premises equipment (CPE) functions as a
basic bridging broadband (B4) element, and a broadband remote access server
(BRAS) that is an IPv6-only node functions as an address family transition router
(AFTR) on a metropolitan area network (MAN). Either a standalone Carrier Grade
NAT (CGN) device or a CGN board installed on the BRAS can be used. An IPv6-
only network is between the CPE, BRAS, and CGN device. An IPv4/IPv6 dual-stack
network is between the CGN device and a core router (CR). The MAN only needs
to partially run the IPv4/IPv6 dual-stack, which is called a DS-Lite solution.
Related Concepts
● B4
B4 is short for basic bridging broadband. It is a network element configured
on a WAN interface of the CPE. The B4 element provides the following
functions:
– The B4 has the IPv4 and IPv6 dual-stack capability and is implemented
on a host or CPE. The CPE functions as a home gateway on a carrier's
network. The B4 element is primarily used on a CPE.
– The B4 element establishes an IPv4-in-IPv6 tunnel to connect to an AFTR.
● AFTR
AFTR is short for address family transition router. An AFTR is an ISP device to
remove IPv4-in-IPv6 tunnel information from packets. On a carrier's network,
an AFTR is used as a CGN device. The CGN device is either a standalone CGN-
capable device or a CGN board installed on a service control node. The AFTR
functions both as a headend of an IPv4-over-IPv6 tunnel and a NAT gateway.
The AFTR provides the following functions:
– Translates the source private IPv4 address in each decapsulated user
packet to a public address and sends the packet to a destination IPv4
host.
– Also translates the destination public IPv4 address in a reply packet sent
by the destination IPv4 host to a private IPv4 address and sends the
packet over an IPv6 tunnel to the CPE.
– When performing NAT, the AFTR records the NAT mappings together with
the IPv6 address of the CPE at the remote end of the IPv4-over-IPv6
tunnel.
Technical Mechanism
DS-Lite uses the following techniques to transmit IPv4 services:
● IPv4-in-IPv6 (IPinIP) tunnel: An IPv6-only network is between the B4 element
and AFTR. Residential users send IPv4 packets over an IPv4-in-IPv6 tunnel to
access IPv4 services. The IPv4-in-IPv6 tunnel originates from the B4 element
and terminates at the AFTR. An IPv6 header is added to every IPv4 packet
before the packet is transmitted along the IPv4-in-IPv6 tunnel over the IPv6-
only network.
● NAT: The IPv4 address that a residential user obtains is a private IPv4 address.
When a packet containing a private IPv4 address is transmitted to an AFTR
over an IPv4-in-IPv6 tunnel, the NAT device translates the private IPv4
address to a public IPv4 address.
The NAT binding table is extended with the source IPv6 address of incoming
traffic, which is set to the B4's IPv6 address. In actual deployment, the B4's
IPv6 address is the IP address of the CPE's WAN interface. Each CPE uses a
unique WAN IPv6 interface address, which resolves the IPv4 address
overlap problem among hosts on different home networks.
The AFTR inherits all NAT functions:
– Supports the DS-Lite application level gateway (ALG) function over
various application layer protocols, such as Internet Control Message
Protocol (ICMP), Session Initiation Protocol (SIP), and Real-Time
Streaming Protocol (RTSP).
Implementation
A CPE (B4) establishes a PPPoE connection with a BRAS. The CPE and CGN (AFTR)
establish a DS-Lite IPv4-in-IPv6 tunnel to transmit the PC's IPv4 traffic. The BRAS
assigns an IPv6 address and IPv6 Prefix Delegation (PD) information to the CPE,
and the CPE assigns a private IPv4 address or IPv6 global unicast address (GUA) to
the PC.
a. AFTR address acquisition
The B4 element must obtain the AFTR's IPv6 address before the IPv4-in-
IPv6 tunnel is established. The B4 element learns the AFTR name using
DHCPv6 and uses the DNS service to obtain the corresponding IPv6
address.
b. Tunnel establishment
This tunnel is established when a home network transmits IPv4 traffic.
c. Tunnel release
In the PPPoE access model, the tunnel is torn down after a PPPoE session
is disconnected. In the IPoE access model, the tunnel is torn down after a
peer relationship is deleted.
d. Packet encapsulation and decapsulation
Figure 1-81 shows how a packet is encapsulated and decapsulated.
DS-Lite Server
To address this problem, the DS-Lite server is introduced. An internal server can be
manually specified, and a static mapping between the private and public network
information can be configured, which allows public network users to access the
internal server through a DS-Lite device. The mapping can be:
● "private IP address of the internal server, port number, and private WAN
IPv6 interface address on the CPE" and "public IP address and port number"
● "private IP address of the internal server and private WAN IPv6 interface
address on the CPE" and "public IP address"
2. A public network host sends a request to access a private network server, and
the DS-Lite device receives the service request.
3. The DS-Lite device searches for a DS-Lite entry that matches the request
packet's destination IP address and port number, and converts the destination
IP address and port number to the private network IP address and port
number recorded in the matching entry. Then, the DS-Lite device sends the
packet to the target private network server through an IPv6 tunnel.
4. After receiving the response packet from the private network side, the DS-Lite
device decapsulates the packet, searches the forward static DS-Lite entry that
matches the response packet's source IP address and source port number, and
converts the source IP address+source port number to the public IP address
and public port number recorded in the matching entry. Then, the DS-Lite
device sends the packet to the public network.
DS-Lite ALG
The DS-Lite application level gateway (ALG) translates IP addresses and port
numbers for special application layer protocols. DS-Lite ALG is similar to NAT44
ALG. In the forward (user-to-network) direction, the DS-Lite ALG processes IPv4
over IPv6 packets by removing the IPv6 header and then processing information
in the IPv4 packet and payload. In the reverse (network-to-user) direction, the
DS-Lite ALG processes information in the IPv4 packet and payload, adds an IPv6
header to the packet, and then processes the IPv6 packet.
The DS-Lite ALG function supports application layer protocols such as ICMP, SIP,
RTSP, FTP, and PPTP. After the DS-Lite ALG removes the IPv6 header from a
forward packet, it processes the packet in the same way as the NAT ALG does.
Networking Description
Figure 1-83 shows a distributed Dual-Stack Lite (DS-Lite) scenario. The routed
customer premises equipment (CPE) runs Point-to-Point Protocol over Ethernet
IPv6 (PPPoEv6) or IPv6 over Ethernet (IPoEv6) to dial up to log in to a broadband
remote access server (BRAS) equipped with Carrier Grade NAT (CGN) boards. The
BRAS assigns an IPv6 address to the CPE's WAN interface, an IPv6 address prefix
to the CPE's LAN interface, a related public IP address, and a related public port
range. Each CPE assigns a private IPv4 address to a residential terminal and uses
the IPv6 address prefix to assign an IPv6 address to the residential terminal. The
CPE uses the IPv6 address of the WAN interface to establish a tunnel to the BRAS.
The CPE directly forwards user IPv6 packets over routes. The CPE encapsulates
user IPv4 packets into IPv4 over IPv6 packets before forwarding them, in which the
source IP address is set to the CPE's WAN interface address and the destination
address is set to the BRAS's IPv6 address.
Upon receipt of the IPv4 over IPv6 packets, the BRAS decapsulates them, replaces
the source IPv4 addresses and port numbers with public ones, and forwards them
to an IPv4 network. In such a scenario, DS-Lite functions are deployed on DS-Lite
service boards equipped on BRASs. Therefore, this scenario is a distributed DS-Lite
solution.
DS-Lite Translation
The CPE functions as a Dynamic Host Configuration Protocol version 4 (DHCPv4) server
to assign an IPv4 address (for example, 192.168.0.0/16) to each residential user,
and the BRAS assigns IPv6 addresses only. Before a residential user accesses IPv4
services, the user sends a packet with a private IPv4 address along an IPv4-in-IPv6
tunnel to a CGN device. The CGN device translates the private IPv4 address to a
public IPv4 address and forwards the packet to the IPv4 network. The CGN device
translates the public IPv4 address in each response packet back to the private
IPv4 address and forwards the response along the IPv4 over IPv6 tunnel to the
CPE. Upon receipt, the CPE forwards the response to the user.
Solution Advantages
● Seamless integration of user access and DS-Lite
DS-Lite pre-allocates ports to various access users, such as Point-to-Point
Protocol over Ethernet (PPPoE) users, IPoE users, and Layer 2 Tunneling
Networking Description
Figure 1-84 shows a centralized Dual-Stack Lite (DS-Lite) scenario. Each routed
customer premises equipment (CPE) runs Point-to-Point Protocol over Ethernet
IPv6 (PPPoEv6) or IPv6 over Ethernet (IPoEv6) to dial up to log in to a broadband
remote access server (BRAS). The BRASs are not equipped with Carrier Grade NAT
(CGN) boards. Each BRAS assigns an IPv6 address to the connected CPE's WAN
interface and an IPv6 address prefix to the CPE's LAN interface. Each CPE assigns a
private IPv4 address to a residential terminal and uses the IPv6 address prefix to
assign an IPv6 address to the residential terminal. The CPE uses the IPv6 address
of the WAN interface to establish a tunnel to a CGN device.
A CPE directly forwards user IPv6 packets over routes. The CPE encapsulates user
IPv4 packets into IPv4 over IPv6 packets before forwarding them, in which the
source IP address is set to the CPE's WAN interface address and the destination
address is set to the CGN device's IPv6 address specified in a DS-Lite instance.
Upon receipt of the user packets along the IPv4 over IPv6 tunnel, the BRAS
forwards them at Layer 3 to the CGN device. The CGN device decapsulates the
packets, replaces the source IPv4 address and port number in each packet, and
forwards the packets to the IPv4 network. Since DS-Lite functions are deployed on
a CGN device attached to the BRAS, this solution is called centralized DS-Lite.
DS-Lite Translation
The CPE functions as a Dynamic Host Configuration Protocol version 4 (DHCPv4) server
to assign an IPv4 address (for example, 192.168.0.0/16) to each residential user,
and the BRAS assigns IPv6 addresses only. Before a residential user accesses IPv4
services, the user sends a packet with a private IPv4 address along an IPv4-in-IPv6
tunnel to a CGN device. The CGN device translates the private IPv4 address to a
public IPv4 address and forwards the packet to the IPv4 network. The CGN device
translates the public IPv4 address in each response packet back to the private
IPv4 address and forwards the response along the IPv4 over IPv6 tunnel to the
CPE. Upon receipt, the CPE forwards the response to the user.
Background
In the IPv4-to-IPv6 transition, IPv6 networks are being scaled up, and plans to
establish IPv4 networks are being phased out. All backbone networks support IPv4
and IPv6 dual logical planes, and new devices on metropolitan area networks
(MANs) primarily run either the IPv4/IPv6 dual stack or IPv6 only (for application
service providers [ASPs]). IPv6 and IPv4 coexist and evolve in an orderly manner.
These network deployment methods help carriers achieve the following goals:
● Provide the same user experience when users access IPv6 and IPv4 services.
● Allow IPv6 users to properly access IPv4 networks.
● Ensure that services are not compromised after IPv4 users are migrated to
IPv6 networks.
Since mainstream operating systems support the IPv4 and IPv6 dual stack and
user preferences differ, network providers have to provide both IPv4 and IPv6
services during the transition. This, however, does not mean that networks must
support the IPv4 and IPv6 dual stack. In a smooth evolution from IPv4 to IPv6
networks, carriers use different evolution solutions and network models in each
evolution phase based on their infrastructure.
The tunneling technique is used in the early phase of IPv4-to-IPv6 network
evolution. In this phase, legacy IPv4 access networks cannot be upgraded to IPv6,
and only a few IPv6 users are scattered across the networks. The cost of a
network upgrade to IPv6 is therefore high, and the input-output ratio is low,
reducing the likelihood of an upgrade to IPv6; as a result, the churn rate of IPv6
users increases. To address this problem, the tunneling technique can be used to
allow IPv6 users to traverse IPv4 networks.
In addition to the tunneling technique, IPv4 and IPv6 dual-stack networks can be
deployed. Such networks apply when both IPv4 traffic and IPv6 traffic are in a
large volume during transition and when devices on new carrier networks are
capable of IPv6. DS-Lite is used on such networks.
DS-Lite Functions
DS-Lite functions are as follows:
● Supports inter-board hot and warm backup.
● Supports the application level gateway (ALG) function, internal server
function, and one-to-one and one-to-multiple translation between private
and public network information.
● Supports distributed and centralized DS-Lite scenarios.
● Uses a NAT policy template delivered by a Remote Authentication Dial In
User Service (RADIUS) server to implement ALG, adjust the aging time, and
assign ports.
Usage Scenario
DS-Lite bandwidth and session table resources are controlled using a license. A
device is assigned no DS-Lite bandwidth or session table resources by default.
Before you configure DS-Lite functions, adjust DS-Lite bandwidth and session
table resources.
Pre-configuration Tasks
Before configuring the license function, load a license to a device and ensure that
a service board is working properly.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The license view is displayed.
Step 3 Run active ds-lite vsuf slot slot-id
The DS-Lite function is enabled on a specified service board.
Step 4 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The license view is displayed.
Step 3 Run active nat session-table size table-size slot slot-id
The number of session resources on the CPU of the service board is configured.
Step 4 Run commit
The configuration is committed.
----End
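Taken together, the two license procedures above might look as follows. The slot number and the table-size value are placeholders, not values from this guide; choose them according to your device and license:

[~HUAWEI] license
[*HUAWEI-license] active ds-lite vsuf slot 9
[*HUAWEI-license] active nat session-table size 6 slot 9
[*HUAWEI-license] commit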
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
Bandwidth resources have been assigned, and the service board is working
properly.
Procedure
● Run the display nat session-table size [ slot slot-id ] command to check
information about session table resources assigned to each service board.
● Run the display nat bandwidth [ slot slot-id ] command to check the
configured license bandwidth.
● Run the display ds-lite vsuf status [ slot slot-id ] command to check DS-Lite
license status information.
----End
Usage Scenario
Basic DS-Lite functions are prerequisites for DS-Lite configurations, including
enhanced DS-Lite configurations. The configuration of basic DS-Lite functions
includes the following operations:
● Create a DS-Lite address pool: You can define a public IPv4 address range for
a DS-Lite address pool and assign the pool to a specified DS-Lite instance,
enabling translation between private IPv4 addresses and public IPv4
addresses.
● (Optional) Create DS-Lite policy templates: To enable a RADIUS server to
issue DS-Lite configuration policies, you can predefine a DS-Lite template on
a device, and define some NAT configuration policies, for example, a limit on
the number of DS-Lite sessions in the template. When the RADIUS server
issues a template name to the device, the device finds the configured DS-Lite
template and applies it to the specified DS-Lite instance.
Prerequisites
Before you configure basic DS-Lite functions, complete the following tasks:
Context
After a service-instance group is bound to a service-location group and a DS-Lite
instance is bound to the service-instance group, the DS-Lite instance is bound to
the service-location group. In this way, the DS-Lite service traffic can be processed
on the service board.
Procedure
Step 1 Run system-view
The CPU of the service board is bound in the service-location group view.
When inter-board warm or hot backup is used, run the location command with
backup slot configured, and ensure that at least two service boards are installed
to work in active/standby mode.
After the DS-Lite instance is created, you can enter its view by running the
ds-lite instance command without id specified.
----End
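By analogy with the NAT instance binding shown earlier in this guide, the DS-Lite binding might be sketched as follows. The slot numbers, group names, and instance name are examples, not values from this guide:

[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9 backup slot 10
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] commit
[~HUAWEI-service-instance-group-group1] quit
[~HUAWEI] ds-lite instance ds1 id 1
[*HUAWEI-ds-lite-instance-ds1] service-instance-group group1
[*HUAWEI-ds-lite-instance-ds1] commit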
Context
A DS-Lite device uses a remote IPv6 address and a local IPv6 address to establish
an IPv4-in-IPv6 tunnel to a CPE. Perform the following steps on a DS-Lite device:
Procedure
Step 1 Run system-view
NOTE
The local IPv6 address of a DS-Lite tunnel on the DS-Lite device must be the same as the IP
address mapped to the address family transition router (AFTR) name configured on a DNS
server.
A single local IPv6 address can be configured in each DS-Lite instance and must
be different from interface IP addresses; an identical address causes the DS-Lite
service to conflict with other services. A local IPv6 address can be configured in
multiple DS-Lite instances.
The length of a CPE IPv6 address prefix of packets transmitted on a DS-Lite tunnel
is set.
Upon receipt of the packet, the DS-Lite device identifies a tunnel based on a 128-
bit source IPv6 address. When a BRAS only assigns a prefix, not an IPv6 address, to
the CPE, the DS-Lite device can only use the IPv6 prefix to identify the tunnel. In
this situation, run the ds-lite tunnel prefix-length command to set the length of
an IPv6 address prefix on a CPE of a DS-Lite tunnel.
The MTU on the outbound interface of a DS-Lite tunnel to the CPE is set.
● If the size of packets is greater than the configured MTU value, the packets
are broken into a large number of fragments.
● If the MTU is set too large, packets may be transmitted at a low speed.
The IPv4 ToS is set for packets transmitted along a DS-Lite tunnel.
The device is enabled to copy an IPv6 DSCP value in packets to the IPv4 DSCP field
when the packets are sent out of a DS-Lite tunnel.
The traffic class of IPv6 packets that are sent to a DS-Lite tunnel is set.
----End
Context
DS-Lite only supports network address port translation (NAPT). In NAPT mode,
DS-Lite translates both source IP addresses and port numbers between public and
private networks. For packets with the same private source IP address and
different source port numbers, DS-Lite in NAPT mode translates the private source
IP address in each packet into the same public source IP address and each private
source port number into a specific public source port number. A DS-Lite address
pool in a DS-Lite instance can be configured and used to translate addresses
based on 5- or 3-tuple information in user packets.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
Step 3 Use either of the following methods to configure a DS-Lite address pool:
● To configure a single public address segment for a single DS-Lite address
pool, run the ds-lite address-group address-group-name group-id id start-
address { mask { mask-length | address-mask } | end-address } [ vpn-
instance vpn-instance-name ] command.
– Run the ds-lite address-group command with start-address mask
specified to configure a public IP address segment.
– Run the ds-lite address-group command with start-address and end-
address specified to configure a public IP address segment with the
specified start and end addresses.
The mask mode is recommended. With this mode enabled, the length of
routes to be advertised is the same as the mask length specified in the ds-lite
address-group command. If the start-address and end-address modes are
used, the mask length of routes to be advertised is 32 bits.
● To create multiple public network segments for a single DS-Lite address pool,
run the ds-lite address-group address-group-name [ group-id group-id
[ vpn-instance vpn-instance-name ] ] command to enter the DS-Lite address
pool view, and then run the section section-num start-ip-address { mask
{ mask-length | mask-ip } | end-ip-address } command.
– Run the section command with start-address mask specified to configure
a public IP address segment.
– Run the section command with start-address and end-address specified
to configure a public IP address segment with the specified start and end
addresses.
The mask mode is recommended. With this mode enabled, the length of
routes to be advertised is the same as the mask length specified in the ds-lite
address-group command. If the start-address and end-address modes are
used, the mask length of routes to be advertised is 32 bits. In the view of the
same DS-Lite address pool, if the section command is run multiple times, all
configurations take effect, and multiple address segments are created; if the
ds-lite address-group command is run multiple times, the latest
configuration overrides the previous one, and only a single address segment
takes effect.
NOTE
----End
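For example, based on the syntax above, a single-segment address pool in the recommended mask mode might be configured as follows. The instance name, pool name, and addresses are illustrative only:

[~HUAWEI] ds-lite instance ds1 id 1
[*HUAWEI-ds-lite-instance-ds1] ds-lite address-group group1 group-id 1 11.8.0.0 mask 24
[*HUAWEI-ds-lite-instance-ds1] commit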
Context
A DS-Lite instance is a platform used to configure DS-Lite attributes. After user
traffic enters a DS-Lite instance, DS-Lite translates information in the user traffic
and forwards traffic based on the redirect next-hop IP address.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name id id
A DS-Lite instance is created, and the DS-Lite instance view is displayed.
Step 3 Run redirect ip-nexthop ip-address outbound or redirect ip-nexthop ipv6-
address inbound
The IP address of a next hop to which a route is redirected is specified.
Although both the inbound- and outbound-related commands can be run
simultaneously in a DS-Lite instance, network-to-user traffic's next-hop address
can be set only to an IPv6 address, and user-to-network traffic's next-hop address
can be set only to an IPv4 address. Only a single address can be set for each of
network-to-user and user-to-network traffic.
Step 4 Run commit
The configuration is committed.
----End
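A sketch of the redirection configuration, with illustrative next-hop addresses: the outbound (user-to-network) next hop is an IPv4 address, and the inbound (network-to-user) next hop is an IPv6 address, as required above:

[~HUAWEI] ds-lite instance ds1 id 1
[*HUAWEI-ds-lite-instance-ds1] redirect ip-nexthop 10.1.1.2 outbound
[*HUAWEI-ds-lite-instance-ds1] redirect ip-nexthop 2001:db8::2 inbound
[*HUAWEI-ds-lite-instance-ds1] commit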
Context
DS-Lite supports the following port allocation modes:
● Dynamic port pre-allocation (Port Dynamic) enables a DS-Lite device to pre-
allocate a public IP address and a port range with 64 ports to a private IP
address. If the number of used ports exceeds the initial port range size, the
DS-Lite device assigns another port range with 64 ports to the user. The
allocation process repeats without a limit on the maximum number of
extended port ranges. This mode features high usage of public network IP
addresses, but involves a huge amount of log information. This requires a log
server to handle logs.
● The semi-dynamic port allocation (Semi-Dynamic) mode is an extension of
the port range mode. Semi-dynamic port allocation extends a single port
segment used in the port range mode to three parameters:
– Initial port range
– Extended port range
– Maximum number of times a port range can be extended
Before users go online, a DS-Lite device assigns an initial port segment and
ports in the initial segment to users. If the number of used ports exceeds the
initial port segment size, the device assigns an extended port segment, which
can repeat for a specified maximum number of times.
● Port pre-allocation: A port number range is pre-allocated on a DS-Lite device.
When the first flow of a specific user arrives, the DS-Lite device selects a
public IP address and associates the configured port number range with the
user. Then ports are selected from this port number range to perform address-
port replacement for all subsequent traffic of the user. The DS-Lite device
records log information during the allocation and reclamation of the port
number range. This mode involves a small amount of DS-Lite log information,
facilitating log checking.
During network deployment, a port allocation mode is configured based on the
service scale.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Select a port allocation mode as needed. Perform one of the following operations:
● Configure a port allocation mode in the DS-Lite instance view.
a. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
b. Configure the port allocation mode.
Run port-range initial-port-range [ extended-port-range extended-port-
range extended-times extended-times ]
The port pre-allocation mode is configured.
After this command is run, ports work in pre-allocation mode.
NOTE
If the port-range command is run in both the NAT policy template view and DS-Lite
instance view, the configuration in the NAT policy template view takes effect. If the
port-range command is not run in the NAT policy template view, the configuration in
the DS-Lite instance view takes effect.
If the port-single enable command is run in the NAT instance view, the per-port
allocation mode configured in the instance takes effect, regardless of whether the
port-range command is run in the NAT policy template view. If the port-single
enable command is not run in the NAT instance view, the port-range command run
in the NAT policy template view takes effect.
The configurations in a NAT policy template take effect only after the NAT policy
template is issued by a RADIUS server. If packets of users going online match the NAT
policy template, the configuration in the template takes effect on the users.
NOTE
● By default, sessions of different protocols using the same private IP address cannot
share the same public port during port pre-allocation. After port reuse is enabled, for
the same private IP address, TCP sessions can share a public port with sessions of other
protocols.
● The port-reuse enable and port-single enable commands are mutually exclusive.
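As an illustration of the semi-dynamic mode described above, the following sketch uses placeholder values (an initial range of 256 ports, extended ranges of 128 ports, and at most three extensions):

```
ds-lite instance ds-lite1 id 1
 port-range 256 extended-port-range 128 extended-times 3 //semi-dynamic allocation
 commit
```

Omitting the extended-port-range and extended-times parameters configures plain port pre-allocation.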
----End
Context
In a DS-Lite ECMP and trunk load balancing scenario, multiple traffic models are
used. If the default hash algorithm does not meet your expectations, configure
another hash algorithm or change the hash factor to achieve better load
balancing performance.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name [ id id ]
The view of a DS-Lite instance is displayed.
Step 3 Run ds-lite load-balance hash-arithmetic { hash-arithmetic1 | hash-
arithmetic2 | hash-arithmetic3 }
A hash algorithm is configured for DS-Lite load balancing.
----End
Prerequisites
Basic DS-Lite functions have been configured.
Procedure
● Run the display ds-lite instance [ instance-name ] command to check the
configuration of a DS-Lite instance.
● Run the display ds-lite address-usage instance instance-name address-
group address-group-name [ slot slot-id ] [ verbose ] command to check
public port usage of a DS-Lite address pool.
----End
Usage Scenario
In centralized scenarios, one DS-Lite service can be bound to only one service
board CPU, so the service board of a single DS-Lite instance may reach its
performance threshold as the number of users grows. With DS-Lite load
balancing, a DS-Lite instance can be bound to multiple service boards, which
increases the DS-Lite bandwidth for a specific type of users. Increasing the number
of service boards in a DS-Lite load balancing group reduces the workload of
configuring instances and assigning IP addresses and reduces manual intervention
in traffic forwarding.
Pre-configuration Tasks
Before configuring DS-Lite load balancing, complete the following tasks:
● Install a CGN license and wait for the service board to work properly.
● Configure link layer protocol parameters and IP addresses for interfaces so
that the interfaces can go Up.
Context
Different service-location groups are bound to different CPUs of service boards.
One service-instance group can be bound to multiple service-location groups, and
a DS-Lite instance must be bound to the service-instance group.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-location service-location-id
A service-location group is created, and the service-location group view is
displayed.
Step 3 Run location slot slot-id [ backup slot slot-id ]
The CPU of the service board is bound in the service-location group view.
Step 4 Run commit
The configuration is committed.
Step 5 Run quit
Return to the system view.
Step 6 Run service-instance-group service-instance-group-name
A service-instance group is created, and the service-instance group view is
displayed.
Step 7 Run service-location service-location-id [ weight weight-value ]
The service-location group is bound to the service-instance group.
NOTE
In the centralized load balancing and online upgrade and capacity expansion scenario,
some user traffic may be interrupted temporarily and then recover automatically.
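The binding steps above can be sketched as follows (slot numbers and group names are placeholders):

```
service-location 1
 location slot 9 backup slot 10 //bind service board CPUs
 commit
 quit
service-instance-group group1
 service-location 1 //bind the service-location group
 commit
```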
----End
Context
A DS-Lite device uses a remote IPv6 address and a local IPv6 address to establish
an IPv4-in-IPv6 tunnel to a CPE. Perform the following steps on a DS-Lite device:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
Step 3 Run local-ipv6 ipv6-address prefix-length prefix-length
A local IPv6 address of a DS-Lite tunnel is set.
NOTE
The local IPv6 address of a DS-Lite tunnel on the DS-Lite device must be the same as the IP
address mapped to the address family transition router (AFTR) name configured on a DNS
server.
Only one local IPv6 network address can be configured in each DS-Lite instance, and it must be
different from any interface IP address; otherwise, the DS-Lite service conflicts
with other services. A local IPv6 address can be configured in multiple DS-Lite
instances.
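For illustration, the local tunnel address might be configured as follows (the instance name and address are placeholders):

```
ds-lite instance ds-lite1 id 1
 local-ipv6 2001:db8:a::1 prefix-length 64 //must match the AFTR address on the DNS server
 commit
```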
Upon receipt of the packet, the DS-Lite device identifies a tunnel based on a 128-
bit source IPv6 address. When a BRAS only assigns a prefix, not an IPv6 address, to
the CPE, the DS-Lite device can only use the IPv6 prefix to identify the tunnel. In
this situation, run the ds-lite tunnel prefix-length command to set the length of
an IPv6 address prefix on a CPE of a DS-Lite tunnel.
The MTU on the outbound interface of a DS-Lite tunnel to the CPE is set.
● If the packet size is greater than the configured MTU, the packets are
fragmented into a large number of fragments.
● If the MTU is set too large, packets may be transmitted at a low speed.
The IPv4 ToS is set for packets transmitted along a DS-Lite tunnel.
The device is enabled to copy an IPv6 DSCP value in packets to the IPv4 DSCP field
when the packets are sent out of a DS-Lite tunnel.
The traffic class of IPv6 packets that are sent to a DS-Lite tunnel is set.
----End
Context
During load balancing, one DS-Lite instance can be bound to the CPUs of multiple
service boards. These CPUs share the same global static address pool, ensuring
flexible extension for a single DS-Lite user and the same public network address
for multi-core CPUs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat ip-pool pool-name
A global static address pool is created, and the global static address pool view is
displayed.
----End
Context
Because one DS-Lite service can be bound to only one service board CPU, the
service board of a single DS-Lite instance may reach its performance threshold as
the number of users grows. In this case, add service boards to support user traffic
forwarding. If a global static address pool is bound to a DS-Lite instance, multiple
boards can share the same address pool.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
DS-Lite load balancing has been configured.
Procedure
● Run the display ds-lite instance dynamic-address-group command to check
information about the dynamic address pool of the specified DS-Lite instance.
● Run the display nat ip-pool command to check information about the global
static address pool.
● Run the display nat statistics command to check service board system
statistics.
----End
Usage Scenario
After configuring basic DS-Lite functions and user group information, you can
apply a DS-Lite traffic diversion policy in the inbound direction of the user traffic
to distribute the user traffic to a DS-Lite service board. You can also apply a DS-
Lite translation policy to translate the private IPv4 address in a packet into a
public IPv4 address, allowing the user to access public network services. DS-Lite
translation for user traffic is used only in distributed DS-Lite when a service board
is installed on a BRAS.
Pre-configuration Tasks
Before configuring distributed DS-Lite translation, configure basic DS-Lite
functions.
Context
Multiple associations between user groups and DS-Lite instances can be
configured in a domain. After a user gets online, a DS-Lite device checks the
number of users in each user group and adds the user to the group that has the
least members, so that the user's traffic can be allocated to the least busy DS-Lite
instance to implement load sharing among DS-Lite instances.
Procedure
Step 1 Configure a user access mode.
For detailed configurations, see HUAWEI NetEngine 8100 M14/M8, NetEngine
8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8 series NetEngine 8100
M, NetEngine 8000E M, NetEngine 8000 M Configuration Guide - User Access.
Step 2 Bind a user group to the DS-Lite instance.
1. Run system-view
The system view is displayed.
2. Run user-group group-name
A new user group is created.
3. Run aaa
The AAA view is displayed.
4. (Optional) Run cut access-user cgn-pub-user [ dhcp | exclude-dhcp ]
CGN public users are manually logged out so that they can go back online
using private IP addresses.
5. Run domain domain-name
The AAA domain view is displayed.
6. Run commit
The configuration is committed.
7. Run user-group user-group-name bind ds-lite instance instance-name
A user group is bound to the DS-Lite instance.
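Putting the binding steps together, a sketch with placeholder names:

```
user-group group1
aaa
 domain isp1
  user-group group1 bind ds-lite instance ds-lite1
  commit
```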
----End
Context
A DS-Lite service board does not provide any interface. Therefore, an inbound
interface board must direct user traffic to a DS-Lite service board for DS-Lite
processing. You can configure a traffic diversion policy to direct packets matching
the configured traffic diversion policy to the DS-Lite service board.
Procedure
Step 1 Run system-view
NOTE
Generally, an ACL rule matches the source IP address. To specify multiple
ACL rules, repeat Step 2.2.
Rules in an ACL against which traffic is matched are applied in depth-first
order (with auto configured) or in configuration order (with config configured). By
default, rules are applied in configuration order (with config configured).
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 3 Configure a traffic classifier.
1. Run traffic classifier classifier-name [ operator { and | or } ]
A traffic classifier is configured, and the traffic classifier view is displayed.
2. Run if-match ipv6 acl acl-number
An ACL-based matching rule for MF traffic classification is configured.
To specify multiple ACL-based matching rules, repeat this step.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 4 Configure a traffic behavior.
1. Run traffic behavior behavior-name
A traffic behavior is configured, and the traffic behavior view is displayed.
2. Run ds-lite bind instance instance-name
The traffic behavior is bound to the DS-Lite instance.
NOTE
The flow that has been processed by DS-lite cannot be redirected. This command is
mutually exclusive with the redirect ip-nexthop command.
3. Run commit
The configuration is committed.
4. Run quit
Return to the system view.
Step 5 Configure a traffic policy.
1. Run traffic policy policy-name
A traffic policy is configured, and the traffic policy view is displayed.
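The diversion-policy steps can be combined as follows (the ACL number and names are placeholders; the classifier-behavior association inside the traffic policy follows the usual traffic policy configuration):

```
acl ipv6 3000
 rule 1 permit ipv6 source 2001:db8::/64
traffic classifier c1 operator or
 if-match ipv6 acl 3000
traffic behavior b1
 ds-lite bind instance ds-lite1 //divert matching traffic to the DS-Lite service board
traffic policy p1
 classifier c1 behavior b1
commit
```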
----End
Context
A DS-Lite translation policy for user traffic distributed to a service board can be
configured to:
● Match ACL rules:
An ACL and a DS-Lite address pool must be specified. If packets match the
ACL, and the action specified in the ACL rule is permit, DS-Lite translates the
packets using addresses in the DS-Lite address pool.
● Not match ACL rules:
User traffic distributed to a service board does not need to match any ACL
rule. By default, the addresses of user traffic are translated using addresses
from a specified DS-Lite address pool.
Procedure
● Configure a DS-Lite translation policy to match ACL rules.
a. Run system-view
The system view is displayed.
b. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
c. (Optional) Run the ds-lite filter { acl-number | acl-name acl-name }
outbound command to filter DS-Lite packets based on inner-layer IPv4
information before configuring DS-Lite translation.
d. Run ds-lite outbound acl-number address-group address-group-name
An ACL is bound to an address pool, and a DS-Lite translation policy is
configured to match ACL rules.
NOTE
When performing this step, ensure that the address segment of the ACL rule used
in the translation policy covers the address segment of the ACL rule used in the
traffic diversion policy. Otherwise, service interruptions may occur. To prevent
such interruptions, use either of the following measures:
● Keep acl-number in this step consistent with that in the traffic diversion
policy.
● Configure a default address pool in this step for translating the addresses
unmatched by the configured traffic translation policy. For example:
The ACL rule referenced by the traffic diversion policy is as follows:
#
acl ipv6 3000
rule 1 permit ipv6 source 2001:db8::1 64
#
acl ipv6 3999 //Default address pool configuration
rule 1 permit ipv6 //Default address pool configuration
The translation policy is as follows:
ds-lite instance ds-lite1 id 1
ds-lite address-group group1 group-id 1 10.10.1.0 10.10.1.254
ds-lite address-group group2 group-id 2 10.10.1.255 10.10.1.255 //Default address
pool configuration
ds-lite outbound 3000 address-group group1
ds-lite outbound 3999 address-group group2 //Default address pool configuration
e. Run commit
The configuration is committed.
● Configure a DS-Lite translation policy not to match ACL rules.
a. Run system-view
The system view is displayed.
b. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
c. (Optional) Run the ds-lite filter { acl-number | acl-name acl-name }
outbound command to filter DS-Lite packets based on inner-layer IPv4
information before configuring DS-Lite translation.
If you need to filter out invalid packets such as spam before DS-Lite
translation, perform this step to configure filtering rules. If the action
defined in the ACL rule is deny, the packet is discarded. If the action
defined in the ACL rule is permit, DS-Lite translation is performed. DS-
Lite translation can be performed for packets that do not match ACL
rules.
d. Run ds-lite outbound any address-group address-group-name
Addresses in user traffic are translated using a specified DS-Lite address
pool without ACL matching.
----End
Prerequisites
Distributed DS-Lite translation has been configured.
Procedure
● Run the display ds-lite instance [ instance-name ] command to check the
configuration of a DS-Lite instance.
● Run the display nat user-information command to check DS-Lite user
information.
----End
Usage Scenario
After configuring basic DS-Lite functions and user group information, you can
apply a DS-Lite traffic diversion policy in the inbound direction of the user traffic
to distribute the user traffic to a DS-Lite service board. You can also apply a DS-
Lite translation policy to translate the private IPv4 address in a data packet into a
public IPv4 address, allowing the user to access public network services.
Pre-configuration Tasks
Before configuring centralized DS-Lite for user traffic, configure basic DS-Lite
functions.
Context
A DS-Lite service board does not provide any interface. Therefore, an inbound
interface board must direct user traffic to a DS-Lite service board for DS-Lite
processing. You can configure a traffic diversion policy to direct packets matching
the configured traffic diversion policy to the DS-Lite service board.
Procedure
Step 1 Run system-view
Generally, an ACL rule matches the source IP address. To specify multiple
ACL rules, repeat Step 2.2.
3. Run commit
4. Run quit
NOTE
The ds-lite bind instance and redirect ip-nexthop commands are mutually exclusive.
3. Run commit
# Apply the traffic diversion policy to a user-side Layer 2 Ethernet interface that is
added to a VLAN.
----End
Context
A DS-Lite translation policy for user traffic distributed to a service board can be
configured to:
● Match ACL rules:
An ACL and a DS-Lite address pool must be specified. If packets match the
ACL, and the action specified in the ACL rule is permit, DS-Lite translates the
packets using addresses in the DS-Lite address pool.
● Not match ACL rules:
User traffic distributed to a service board does not need to match any ACL
rule. By default, the addresses of user traffic are translated using addresses
from a specified DS-Lite address pool.
Procedure
● Configure a DS-Lite translation policy to match ACL rules.
a. Run system-view
The system view is displayed.
b. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
c. (Optional) Run the ds-lite filter { acl-number | acl-name acl-name }
outbound command to filter DS-Lite packets based on inner-layer IPv4
information before configuring DS-Lite translation.
If you need to filter out invalid packets such as spam before DS-Lite
translation, perform this step to configure filtering rules. If the action
defined in the ACL rule is deny, the packet is discarded. If the action
defined in the ACL rule is permit, DS-Lite translation is performed. DS-
Lite translation can be performed for packets that do not match ACL
rules.
d. Run ds-lite outbound acl-number address-group address-group-name
An ACL is bound to an address pool, and a DS-Lite translation policy is
configured to match ACL rules.
NOTE
When performing this step, ensure that the address segment of the ACL rule used
in the translation policy covers the address segment of the ACL rule used in the
traffic diversion policy. Otherwise, service interruptions may occur. To prevent
such interruptions, use either of the following measures:
● Keep acl-number in this step consistent with that in the traffic diversion
policy.
● Configure a default address pool in this step for translating the addresses
unmatched by the configured traffic translation policy. For example:
The ACL rule referenced by the traffic diversion policy is as follows:
#
acl ipv6 3000
rule 1 permit ipv6 source 2001:db8::1 64
#
acl ipv6 3999 //Default address pool configuration
rule 1 permit ipv6 //Default address pool configuration
The translation policy is as follows:
ds-lite instance ds-lite1 id 1
ds-lite address-group group1 group-id 1 10.10.1.0 10.10.1.254
ds-lite address-group group2 group-id 2 10.10.1.255 10.10.1.255 //Default address
pool configuration
ds-lite outbound 3000 address-group group1
ds-lite outbound 3999 address-group group2 //Default address pool configuration
If you need to filter out invalid packets such as spam before DS-Lite
translation, perform this step to configure filtering rules. If the action
defined in the ACL rule is deny, the packet is discarded. If the action
defined in the ACL rule is permit, DS-Lite translation is performed. DS-
Lite translation can be performed for packets that do not match ACL
rules.
d. Run ds-lite outbound any address-group address-group-name
Addresses in user traffic are translated using a specified DS-Lite address
pool without ACL matching.
----End
Prerequisites
DS-Lite translation has been configured.
Procedure
● Run the display ds-lite instance [ instance-name ] command to check the
configuration of a DS-Lite instance.
● Run the display nat user-information command to check DS-Lite user
information.
----End
Usage Scenario
You can deploy DS-Lite to allow intranet users of a private network to initiate a
request for accessing public network services, whereas public network users
cannot obtain information about the users of the private network and cannot
access these users. The NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000
M supports the DS-Lite internal server function and the port forwarding function.
A user's private network IP address and port can be associated with a public
network IP address and port, respectively, so that external users can access the
public IP address and port of the user to access the private network user.
● DS-Lite internal server: Internal servers can be configured to associate a user's
private IP address and port with a public IP address and port, respectively.
● Port forwarding: A user's private IP address and port can be dynamically
associated with a public IP address and port, respectively.
Pre-configuration Tasks
Before configuring the access to internal hosts using public addresses, complete
the following tasks:
● Configure basic DS-Lite functions.
● Configure DS-Lite translation for user traffic.
● Configure a DS-Lite device to properly interwork with a RADIUS server.
● (Optional) Configure DS-Lite user information.
Context
DS-Lite can be configured to allow users on a private network to access public
network services, while hiding the structure of the private network and shielding
internal hosts. In this case, a user on an external network cannot communicate
with a private network user.
To solve this problem, the DS-Lite internal server is introduced. A DS-Lite
internal server statically maps a private IP address and port number to a public IP
address and port number (or a private IP address to a public IP address), enabling
reverse translation from public IP addresses to private IP addresses.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
Step 3 Run either of the following commands to configure a DS-Lite internal server:
● If multiple internal servers are assigned the same public IP address, run the
ds-lite server protocol { tcp | udp } global global-address [ global-protocol |
global-protocol-number ] [ vpn-instance vpn-instance-name ] inside host-
address [ host-port | host-port-number ] cpe cpe-address [ vpn-instance vpn-
instance-name ] [ outbound ] command to configure a DS-Lite internal
server for a specific type of packet.
● If each internal server is assigned a specific public IP address, run the ds-lite
server global global-address [ vpn-instance vpn-instance-name ] inside
host-address cpe cpe-address [ vpn-instance vpn-instance-name ]
[ outbound ] command to configure a DS-Lite internal server.
NOTE
The IP address of the DS-Lite internal server cannot be the same as the IP address of a
DHCP server. Otherwise, a message indicating a conflict is displayed.
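For illustration, mapping a public TCP port to a private host behind a CPE might look as follows (all addresses and ports are placeholders):

```
ds-lite instance ds-lite1 id 1
 //Map public 203.0.113.10:8080 to private host 192.168.1.10:80 behind CPE 2001:db8::2
 ds-lite server protocol tcp global 203.0.113.10 8080 inside 192.168.1.10 80 cpe 2001:db8::2
 commit
```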
----End
Context
The IP addresses of internal servers may frequently change. To prevent frequent
manual modification of the configurations of internal NAT servers, you can deploy
the port forwarding function to dynamically associate each internal server with a
public IP address and port.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
Step 3 Run port-forwarding address-group addr-grp-name port-scope start-port-value
end-port-value
The DS-Lite address pool and port range are configured to be dynamically
associated with internal servers.
NOTE
The port-forwarding address-group command configures only the port range of the port
forwarding service. Port forwarding rules, however, are issued by a RADIUS server when a
user gets online.
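A sketch of the port forwarding configuration (the pool name and port range are placeholders; the actual forwarding rules are still issued by the RADIUS server):

```
ds-lite instance ds-lite1 id 1
 port-forwarding address-group group1 port-scope 1024 2047
 commit
```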
----End
Prerequisites
The DS-Lite internal server function has been configured.
Procedure
Run the display nat server-map [ dynamic | static | port-forwarding ] [ ip ip-
address | port port-number | vpn-instance vpn-instance-name | slot slot-id ] *
command to check server-map entry information of an internal server.
----End
Usage Scenario
DS-Lite translates only the IP addresses contained in user data packets and the
port information in the TCP/UDP headers of data packets. For special protocols,
for example, FTP, the Data field in a packet contains IP address or port
information. If the IP address or port information in the data field is not
translated, inconsistency and errors occur. A good way to solve the DS-Lite
translation issue for these special protocols is to use the ALG function. Functioning
as a special conversion agent for application protocols, the ALG interacts with the
DS-Lite device to establish states. The ALG uses DS-Lite state information to
change the specific data in the Data field of IP packets and to complete other
necessary work, so that application protocols can run across internal and external
networks.
Pre-configuration Tasks
Before configuring DS-Lite ALG, complete the following tasks:
● Configure basic DS-Lite functions.
● Configure the distributed or centralized DS-Lite translation function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Enable the DS-Lite ALG function in the DS-Lite instance view or the NAT policy
template view:
1. Enable the DS-Lite ALG function in the DS-Lite instance view.
a. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
b. Run ds-lite alg { all | ftp [ rate-threshold rate-threshold-value ] | pptp |
rtsp | sip [ separate-translation ] }
The DS-Lite ALG function is enabled for one or more application layer
protocols.
To configure DS-Lite for the SIP control channel and data channel
separately, run the ds-lite alg sip separate-translation command.
c. Run commit
The configuration is committed.
2. Enable the DS-Lite ALG function in the NAT policy template view.
a. Run nat-policy template template-name
A NAT policy template is configured.
b. Run nat alg { all | ftp | pptp | rtsp | sip }
The DS-Lite ALG function is enabled for one or more application layer
protocols.
NOTE
The configurations in a NAT policy template take effect only after the NAT policy
template is issued by a RADIUS server. If packets of users going online match the NAT
policy template, the configuration in the template takes effect on the users.
If the DS-Lite ALG function is enabled in both the DS-Lite instance view and the NAT
policy template view, the configuration in the NAT policy template view takes effect.
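For example, enabling the ALG for FTP in the instance view can be sketched as (the instance name is a placeholder):

```
ds-lite instance ds-lite1 id 1
 ds-lite alg ftp //enable ALG for FTP; use "all" to cover all supported protocols
 commit
```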
----End
Usage Scenario
If a fault occurs on a DS-Lite device or service board during the DS-Lite operation,
DS-Lite cannot be performed. As a result, user services are interrupted. The
NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M supports inter-board
hot backup and inter-chassis hot backup:
● Inter-board hot backup: Two service boards installed on a CGN device can be
configured to work in master/backup mode to implement single-chassis inter-
board backup. The single-chassis inter-board backup mechanism helps
implement data consistency between the master and backup DS-Lite
boards. If a fault occurs, a master/backup DS-Lite board switchover is
performed so that services are properly transmitted and users are unaware
of the fault.
● Inter-chassis hot backup: This function enables the master device to back up
its service board CPU's mapping table to the service board CPU on the backup
device. If the service board on the master device or the master device fails,
the backup device takes over traffic without service interruptions, improving
DS-Lite reliability.
Pre-configuration Tasks
Before configuring DS-Lite reliability, complete the following tasks:
● Configure basic DS-Lite functions.
● Configure DS-Lite translation for user traffic.
Context
If two service boards are installed on a DS-Lite device, the service boards can be
configured to work in active/standby mode in the same chassis to implement
inter-board backup. The inter-board backup mechanism verifies that the data
stored on the active service board is consistent with that stored on the standby
service board. If the active service board fails, an active/standby service board
switchover is performed to ensure that services are running properly. In this
situation, services are properly transmitted, and users are unaware of the fault.
Procedure
Step 1 (Optional) Configure value-added service management (VSM) high availability
(HA) hot backup functions.
1. Run system-view
The system view is displayed.
2. Run service-ha hot-backup enable
The HA hot backup function is enabled.
3. Run service-ha delay-time delay-time
The delay time is set for VSM HA hot backup.
NOTE
On a device with the service-ha delay-time delay-time command run, session entries
can be backed up only if the active time of session traffic is longer than the delay time
configured for VSM HA hot backup.
4. (Optional) Run service-ha preempt-time preempt-time
A preemption delay is set.
NOTE
You can set a preemption delay for the former master service board to become the
master again after it recovers.
5. Run commit
The configuration is committed.
Step 2 Configure a service-location group that implements single-chassis inter-board VSM
HA hot backup.
1. Run system-view
The system view is displayed.
2. Run service-location service-location-id
A service-location group is created, and the service-location group view is
displayed.
NOTE
One DS-Lite instance can only be bound to one service-instance group. Different DS-
Lite instances can be bound to the same service-instance group.
3. Run commit
The configuration is committed.
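The inter-board hot backup steps can be sketched as follows (the delay value and slot numbers are placeholders):

```
service-ha hot-backup enable
service-ha delay-time 360 //back up sessions only after they stay active this long
service-location 1
 location slot 9 backup slot 10 //master and backup service board CPUs
 commit
```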
----End
Context
If multiple DS-Lite devices with service boards exist on a network, you can
configure a service board on the master device and a service board on the backup
device to work in master/backup mode, implementing inter-chassis hot backup.
Procedure
Step 1 Configure basic high availability (HA) hot backup functions.
1. Run system-view
The system view is displayed.
2. Run service-ha hot-backup enable
HA hot backup is enabled.
3. Run commit
The configuration is committed.
Step 2 Configure a Virtual Router Redundancy Protocol (VRRP) group for dual-device
inter-chassis HA hot backup.
NOTE
This section provides commonly used steps for configuring VRRP. For more information, see
"VRRP Configuration."
1. Run system-view
The system view is displayed.
7. Run quit
Return to the system view.
Step 4 Create a service instance group and associate the service instance group with the
HA backup group.
1. Run system-view
The system view is displayed.
2. Run service-instance-group service-instance-group-name
A service-instance group is created, and the service-instance group view is
displayed.
3. Run service-location service-location-id [ weight weight-value ]
The service-location group is bound to the service-instance group.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
Step 5 Bind a DS-Lite instance to a service-instance group to implement dual-device
inter-chassis HA hot backup for DS-Lite services.
1. Run system-view
The system view is displayed.
2. Run ds-lite instance instance-name id id
The DS-Lite instance view is displayed.
3. Run service-instance-group service-instance-group-name
The DS-Lite instance is bound to the service-instance group.
4. Run commit
The configuration is committed.
5. Run quit
Return to the system view.
Step 6 Associate a service-instance group with a VRRP group on an interface on which
mVRRP is configured.
1. Run system-view
The system view is displayed.
2. Run interface interface-type interface-number
The interface view is displayed.
3. Run vrrp vrid virtual-router-id track service-location service-location-id
[ reduced value-reduced ]
The VRRP group can track the status of a service-location group so that VRRP
priorities can be adjusted.
4. Run commit
The configuration is committed.
----End
Context
In a scenario where one centralized DS-Lite device backs up another, both the
master and backup DS-Lite devices are configured with the centralized DS-Lite
function. If a service board on the master DS-Lite device fails, the master DS-Lite
device distributes traffic to the backup DS-Lite device for DS-Lite processing. After
the service board on the master DS-Lite device recovers, user traffic switches back
to it for DS-Lite processing.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The view in which license resources are assigned is displayed.
Step 3 Run active nat session-table size table-size slot slot-id
The session resources are assigned to the specified service board and its CPU.
Step 4 Run active ds-lite vsuf slot slot-id
DS-Lite is enabled on the specified service board.
Step 5 Run quit
Return to the system view.
Step 6 Run service-location service-location-id
A service-location group is created, and the service-location group view is
displayed.
Step 7 Run location slot slot-id [ backup slot backup-slot-id ]
The CPU on the service board is bound.
Step 8 Run quit
Return to the system view.
Step 9 Run service-instance-group service-instance-group-name
A service-instance group is created, and the service-instance group view is
displayed.
Step 10 Run service-location service-location-id
The service-instance group is bound to the service-location group.
----End
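The procedure above can be sketched as follows. The slot IDs (9 and 10), session-table size (16), service-location group ID (1), and service-instance group name (group1) are hypothetical values used only for illustration.

```
<HUAWEI> system-view
[~HUAWEI] license
[~HUAWEI-license] active nat session-table size 16 slot 9
[*HUAWEI-license] active ds-lite vsuf slot 9
[*HUAWEI-license] commit
[~HUAWEI-license] quit
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9 backup slot 10
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] service-location 1
[*HUAWEI-service-instance-group-group1] commit
```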
Prerequisites
The single-chassis inter-board DS-Lite hot backup and inter-chassis DS-Lite hot
backup have been configured.
Procedure
● Run the display service-ha global-information command to check the inter-
board backup delay time of the VSM HA module, the inter-board switchback
time, and whether hot backup is enabled.
● Run the display service-location service-location-id command to check the
configuration of a service-location group.
● Run the display service-instance-group service-instance-group-name
command to check the configuration of a service-instance group.
● Run the display ds-lite instance [ instance-name ] command to check the
configuration of a DS-Lite instance.
----End
Usage Scenario
You can deploy the DS-Lite security function to guarantee secure operations of a
DS-Lite device and prevent attacks to the system.
Pre-configuration Tasks
Before you configure the DS-Lite security function, complete the following tasks:
● Configure basic DS-Lite functions.
● Configure DS-Lite translation for user traffic.
Context
If the number of established Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), or Internet Control Message Protocol (ICMP) DS-Lite sessions, or
the total number of DS-Lite sessions involving the same source or destination IP
address exceeds a configured threshold, a device stops establishing such sessions.
The limit helps prevent resource overconsumption from resulting in a failure to
establish connections for other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Perform one of the following operations:
1. Set the limit on the number of forward DS-Lite sessions that can be
established in the DS-Lite instance view.
a. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
b. (Optional) Run ds-lite session-limit enable
The user-based DS-Lite session number limit function is enabled.
c. Run ds-lite session-limit { tcp | udp | icmp | total } session-number
The limit on the number of forward DS-Lite sessions that can be
established is set.
d. Run commit
The configuration is committed.
2. Set the limit on the number of forward DS-Lite sessions that can be
established in the NAT policy template view.
a. Run nat-policy template template-name
A NAT policy template is created.
b. Run nat session-limit { tcp | udp | icmp | total } session-number
The limit on the number of forward DS-Lite sessions that can be
established is set.
c. Run commit
The configuration is committed.
NOTE
– The limit on the number of forward DS-Lite sessions configured in the NAT policy
template prevails. If a limit is set in the template, the setting takes effect. If no
limit is set in the DS-Lite instance or the NAT policy template, the default value
in the template takes effect. If a delivered NAT policy template is invalid, the
setting in the DS-Lite instance takes effect.
– The status of the limit on the number of forward DS-Lite sessions is determined
by the configuration in a DS-Lite instance. If the limit function is disabled in the
DS-Lite instance, forward DS-Lite sessions can be established without restriction.
----End
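Both configuration modes can be sketched as follows. The instance name ds-lite1, template name tplt1, and the limit of 2048 total sessions are hypothetical values.

```
<HUAWEI> system-view
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite session-limit enable
[*HUAWEI-ds-lite-instance-ds-lite1] commit
[~HUAWEI-ds-lite-instance-ds-lite1] quit
[~HUAWEI] nat-policy template tplt1
[*HUAWEI-nat-policy-template-tplt1] nat session-limit total 2048
[*HUAWEI-nat-policy-template-tplt1] commit
```

As the NOTE above explains, a limit set in the NAT policy template takes precedence over one set in the DS-Lite instance.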
Context
If the number of established Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), or Internet Control Message Protocol (ICMP) DS-Lite sessions, or
the total number of DS-Lite sessions involving the same source or destination IP
address exceeds a configured threshold, a device stops establishing such sessions.
The limit helps prevent resource overconsumption from resulting in a failure to
establish connections for other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Select a mode to configure the limit on the number of reverse DS-Lite sessions:
● Configure the limit on the number of reverse DS-Lite sessions in the DS-Lite
instance view.
a. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
b. (Optional) Run ds-lite reverse-session-limit enable
The DS-Lite reverse IP session monitoring function is enabled.
c. Run ds-lite reverse-session-limit { tcp | udp | icmp | total } session-
number
The maximum number of network-to-user sessions that can be
established is set.
d. Run commit
The configuration is committed.
● Configure the limit on the number of reverse DS-Lite sessions in the NAT
policy template view.
NOTE
– The limit on the number of reverse DS-Lite sessions configured in the NAT policy
template prevails. If a limit is set in the template, the setting takes effect. If no
limit is set in the DS-Lite instance or the NAT policy template, the default value
in the template takes effect. If a delivered NAT policy template is invalid, the
setting in the DS-Lite instance takes effect.
– The status of the limit on the number of reverse DS-Lite sessions is determined
by the configuration in a DS-Lite instance. If the limit function is disabled in the
DS-Lite instance, reverse DS-Lite sessions can be established without restriction.
----End
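The DS-Lite instance view variant can be sketched as follows. The instance name ds-lite1 and the limit of 1024 total reverse sessions are hypothetical values.

```
<HUAWEI> system-view
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite reverse-session-limit enable
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite reverse-session-limit total 1024
[*HUAWEI-ds-lite-instance-ds-lite1] commit
```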
Context
By checking whether the number of TCP, UDP, or ICMP ports, or the total number
of ports, used for connections involving the same source or destination address
exceeds a configured threshold, the system determines whether to restrict the
initiation of new connections in that direction. This prevents individual users from
consuming excessive port resources and causing connection failures for other
users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite port-limit port-number
The maximum number of user-based ports that can be assigned to each user is
set.
----End
Setting the Maximum Number of Private Network Users Sharing the Same Public
IP Address
In PAT mode, multiple users can use the same public IP address for access.
However, excessive users may lead to network congestion. To address this issue,
you can set the maximum number of private network users sharing the same
public IP address.
Context
This configuration is applicable to the dynamic port allocation and per-port
allocation modes where multiple users use the same public IP address for access.
In these two modes, a public IP address can be used by a large number of users. If
the number of sessions of these users is large, user traffic may fail to be
forwarded. Therefore, the number of online users using a single public IP address
needs to be limited.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name id id
The DS-Lite instance view is displayed.
Step 3 Run ds-lite ip access-user limit max-number
The maximum number of private network users sharing the same public IP
address is set.
Step 4 Run commit
The configuration is committed.
----End
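The procedure above can be sketched as follows. The instance name ds-lite1 and the limit of 100 users per public IP address are hypothetical values.

```
<HUAWEI> system-view
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite ip access-user limit 100
[*HUAWEI-ds-lite-instance-ds-lite1] commit
```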
Setting the Rate at Which Packets Are Sent to Create a Flow for a User
A device can be configured to dynamically detect the traffic forwarding rate and
limit the rate at which user sessions are created so that a certain proportion is
maintained between the forwarding rate and the session creation rate.
Context
A DS-Lite device with a multi-core structure allows flow construction and
forwarding processes to share CPU resources. To minimize or prevent DS-Lite
packet loss and a CPU usage increase, the device has to maintain a proper ratio of
the forwarding rate to the flow creation rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite user-session create-rate limit enable
The limit on the rate at which packets are sent to create a session is enabled.
To disable this function, run the undo ds-lite user-session create-rate limit
enable command.
----End
Setting the Limit on the Rate at Which the First Packet Is Sent to Create a Flow
Limiting the rate at which the first packet is sent to create a session prevents users
from occupying a large number of CPU resources through first packet attacks,
which would otherwise affect common traffic forwarding.
Context
You can flexibly set the limit on the rate at which the first packet is sent to create
a session, based on different types of packets.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 3 Run nat flow-defend { forward | fragment | reverse } rate rate-number slot
slot-id
The limit on the rate at which the first packet is sent to create a session on a
service board is set.
----End
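The procedure above can be sketched as follows. The rate of 100 and slot 9 are hypothetical values; the direction keyword (forward, fragment, or reverse) is chosen based on the type of first packet to be limited.

```
<HUAWEI> system-view
[~HUAWEI] nat flow-defend forward rate 100 slot 9
[*HUAWEI] commit
```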
Prerequisites
All DS-Lite security functions have been configured.
Procedure
● Run the display nat flow-defend command to check the configured rate at
which the first packet is sent to create a flow for a user.
● Run the display nat user-information command to check DS-Lite user
information.
● Run the display nat flow-defend reverse-blacklist command to check
entries in a reverse first-packet blacklist on the CPU of a service board.
----End
Usage Scenario
DS-Lite maintainability functions are as follows:
● DS-Lite logs: Flow logs record DS-Lite user and translation information. They
also help a device monitor and record private network access to public
networks.
● DS-Lite alarms: A device can be configured to generate alarms if the number
of established DS-Lite sessions or assigned DS-Lite ports reaches a specified
alarm threshold. After obtaining alarm information, the device administrator
can add DS-Lite boards or modify service configurations.
● DS-Lite statistics: A DS-Lite device collects statistics about DS-Lite packets
that are forwarded, which helps improve the operating performance of DS-
Lite.
Prerequisites
Before configuring DS-Lite maintainability, complete the following tasks:
● Configure basic DS-Lite functions.
● Configure DS-Lite translation for user traffic.
Context
DS-Lite logs are generated by a DS-Lite device during DS-Lite operation. The
information contains basic user information and DS-Lite operation information.
DS-Lite logs also record private network users' access to public networks and
public network users' access to private network servers. When users from an
intranet access an external network through a DS-Lite device, they share an
external network address. For this reason, the users accessing the external
network cannot be located. The log function helps trace and record intranet users'
access to external networks in real time, enhancing network maintainability.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat log rate rate-value
The rate at which a DS-Lite device sends logs is set.
Step 3 Run commit
The configuration is committed.
Step 4 Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
Step 5 Run ds-lite log session enable [ elog | syslog ]
The DS-Lite log function is enabled.
Step 6 Run ds-lite log user enable [ syslog | netstream ]
The DS-Lite user log function is enabled.
Step 7 Run ds-lite log host host-ip-address host-port source source-ip-address source-
port name host-name [ vpn-instance vpn-instance-name ]
The DS-Lite log host information is configured.
Step 8 (Optional) Run ds-lite log radius enable
The DS-Lite RADIUS log function is enabled.
In semi-dynamic port segment pre-allocation mode, after ports in the initially pre-
allocated port segment are used up, an available incremental port segment is
automatically used. In this situation, this function must be enabled when the
device needs to send RADIUS logs.
In VS mode, this command is supported only by the admin VS.
Step 9 (Optional) Run ds-lite log send-mode { session-start-only | session-end-only }
The mode of sending DS-Lite logs is configured.
Step 10 Run commit
The configuration is committed.
Step 11 (Optional) Configure a flexible DS-Lite log template.
1. Run quit
Return to the system view.
2. Run nat syslog flexible template { user | session }
A NAT flexible syslog template is created, and the template view is displayed.
The ds-lite time command takes precedence over the ds-lite time local command run in
the flexible log template view. If the ds-lite time command is run, the time format of the
end time, start time, or timestamp takes effect according to the configured format. If the
ds-lite time command is not run, the ds-lite time local command takes effect. If neither
the ds-lite time nor the ds-lite time local command is run, the default UTC time takes
effect.
5. In the DS-Lite flexible log template view, run the ds-lite position command
to configure either a DS-Lite flexible flow log template or a DS-Lite flexible
user log template.
6. Run commit
The configuration is committed.
7. Run quit
Return to the system view.
8. Run nat syslog descriptive format flexible template { session | user }
Logs are bound to the flexible log template.
Step 12 (Optional) Run nat syslog descriptive format { cn | type2 | type3 }
DS-Lite syslogs are configured to be in the extended format.
Step 13 Run commit
The configuration is committed.
----End
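The core of the procedure above can be sketched as follows. The log rate (1000), instance name (ds-lite1), log host address 192.168.1.10 with port 514, source address 192.168.1.1 with port 9000, and host name log1 are hypothetical values.

```
<HUAWEI> system-view
[~HUAWEI] nat log rate 1000
[*HUAWEI] commit
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite log session enable syslog
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite log user enable syslog
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite log host 192.168.1.10 514 source 192.168.1.1 9000 name log1
[*HUAWEI-ds-lite-instance-ds-lite1] commit
```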
Context
DS-Lite sessions and ports available are important resources for DS-Lite services. If
these resources are exhausted, DS-Lite cannot be performed for traffic sent by
newly logged-in users. Therefore, the usage of these resources must be properly
monitored. The DS-Lite alarm function generates an alarm when the resource
usage reaches a certain extent, notifying the customer of the necessity to
implement capacity expansion or service adjustment.
Procedure
● Configure the maximum number of alarm packets that a service board is
allowed to send every second.
a. Run system-view
The system view is displayed.
b. Run nat alarm rate threshold-value
The maximum number of alarm packets that the service board sends
every second is set.
c. Run commit
The configuration is committed.
● Configure a device to generate an alarm when the number of sessions on a
service board reaches a specified alarm threshold.
a. Run system-view
The system view is displayed.
b. (Optional) Run undo nat alarm session-number { log | trap } disable
The trap and log functions for the number of NAT sessions are disabled.
c. Run nat alarm session-number threshold threshold-value
An alarm threshold for the total number of sessions on a service board is
configured.
d. Run commit
The configuration is committed.
● Enable a device to generate an alarm when the number of used ports in a DS-
Lite PAT address pool exceeds a configured threshold.
a. Run system-view
The system view is displayed.
b. Run ds-lite instance instance-name [ id id ]
The DS-Lite instance view is displayed.
c. (Optional) Run ds-lite alarm port-number { log | trap } address-group
disable
The log and alarm functions for the port usage of a public IP address
pool are disabled.
d. Run ds-lite alarm address-group port-number threshold threshold-
value
An alarm threshold is set for the port usage rate of a DS-Lite address
pool.
NOTE
The PAT address pool does not support alarms based on a single IP address.
Port usage of a DS-Lite PAT address pool = Number of ports used by the public
network addresses in the PAT address pool/Total number of ports available for
the public network addresses in the PAT address pool
● Enable the device to generate an alarm when the number of ports used by a
DS-Lite user exceeds a configured threshold.
a. Run system-view
The system view is displayed.
NOTE
If the alarm function for the usage rate of the reserved PCP ports is
enabled and the usage rate reaches the threshold, an alarm is
reported. A proper threshold provides effective monitoring.
e. Run commit
The configuration is committed.
● Configure the device to generate an alarm when server mapping entry usage
exceeds a specified threshold.
a. Run system-view
The system view is displayed.
b. (Optional) Run nat alarm server-map [ log | trap ] disable
The log and trap functions for server mapping entries are disabled.
The log and trap functions are enabled by default. To disable the
functions, run the nat alarm server-map disable command.
c. Run nat alarm server-map threshold threshold-value
The alarm threshold for the usage of server mapping entries is set.
NOTE
Run the display nat memory-usage servermap command to query the number
of used server mapping entries and the number of supported server mapping
entries. Server mapping entry usage rate = Number of used server mapping
entries/Number of server mapping entries supported by the service board
----End
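Two of the alarm configurations above can be sketched as follows. The session alarm threshold of 80%, instance name ds-lite1, and address pool port-usage threshold of 90% are hypothetical values.

```
<HUAWEI> system-view
[~HUAWEI] nat alarm session-number threshold 80
[*HUAWEI] commit
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite alarm address-group port-number threshold 90
[*HUAWEI-ds-lite-instance-ds-lite1] commit
```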
Context
Perform the following steps on the NetEngine 8100 M, NetEngine 8000E M,
NetEngine 8000 M:
Procedure
● Enable the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M to
collect statistics about forwarded DS-Lite packets.
a. Run system-view
The system view is displayed.
b. Run nat statistics payload slot slot-id enable
The device is enabled to collect statistics about forwarded DS-Lite
packets. After this function is enabled, the average and maximum
numbers of sent and received packets on service boards can be counted.
----End
Context
NOTICE
Once the statistics are cleared, they cannot be restored. Confirm the action before
you use the command.
Procedure
● After you confirm the deletion of DS-Lite statistics, run the reset nat statistics
command in the user view.
----End
Context
NOTICE
After DS-Lite session entries are deleted, services are interrupted. Confirm your
operation before you run the following command.
Procedure
● After you confirm the deletion of DS-Lite session entries, run the reset nat
session table command in the user view.
----End
Context
In routine maintenance, you can run the following commands in any view to
check the running status of DS-Lite services.
Procedure
● Run the display nat flow-defend command to check the rate at which the
first packet is sent to create a flow.
● Run the display ds-lite instance command to check the configuration of a
DS-Lite instance.
● Run the display nat memory-usage command to check usage of each entry
in memory of a service board.
● Run the display nat session aging-time command to check the configured
aging time for DS-Lite session entries.
● Run the display nat session table command to check DS-Lite session entry
information.
● Run the display nat statistics command to check DS-Lite service statistics.
● Run the display nat user-information command to check DS-Lite user
information.
● Run the display nat server-map command to view server-map entry
information about internal servers.
----End
Usage Scenario
You can configure the following parameters to adjust DS-Lite performance:
● Aging time of session entries: The time of aging session entries for each
protocol can be set. After a specified aging time elapses, DS-Lite session
entries for a specific protocol automatically age so that a DS-Lite device can
release resources.
● TCP maximum segment size (MSS) adjustment: When the MTU of a link is
small, DS-Lite packets may be fragmented. You can reduce the TCP MSS value
so that packets for DS-Lite processing are not fragmented, improving DS-Lite
translation efficiency.
Pre-configuration Tasks
Before adjusting DS-Lite performance, complete the following tasks:
● Configure basic DS-Lite functions.
● Configure DS-Lite translation for user traffic.
Context
Perform the following steps on the router:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat session aging-time { tcp | udp | icmp | fin-rst | syn | fragment | dns |
ftp | http | rtsp | sip | pptp | tcp long-link | ip } aging-time
The aging time of DS-Lite session entries is set.
The changed aging time takes effect on new rather than existing DS-Lite sessions.
Step 3 (Optional) Set the fast aging time for DNS sessions.
You are advised to configure this function when DNS traffic is heavy. After the fast
aging function for DNS sessions is enabled, if the device receives DNS request and
response packets at the same time, the DNS sessions age according to the
configured fast aging time to save system resources.
Step 5 Run ds-lite session aging-time { tcp | udp | syn | fin-rst | icmp | ftp | rtsp | sip |
pptp | fragment } aging-time
The aging time of DS-Lite session entries of a specified protocol type is set.
----End
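The procedure above can be sketched as follows. The TCP aging time of 600 seconds is a hypothetical value chosen for illustration.

```
<HUAWEI> system-view
[~HUAWEI] nat session aging-time tcp 600
[*HUAWEI] commit
```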
Context
When the link MTU is small, DS-Lite packet fragments may be generated. You can
change the MTU value so that the packets for DS-Lite are not fragmented,
improving DS-Lite translation efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ds-lite instance instance-name id id
The DS-Lite instance view is displayed.
Step 3 Run ds-lite mtu mtu-value
The IPv6 MTU for packets of a DS-Lite instance is set.
Step 4 Run commit
The configuration is committed.
----End
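The procedure above can be sketched as follows. The instance name ds-lite1 and the MTU of 1400 bytes are hypothetical values; the MTU is chosen small enough that encapsulated IPv6 packets fit within the link MTU.

```
<HUAWEI> system-view
[~HUAWEI] ds-lite instance ds-lite1 id 1
[*HUAWEI-ds-lite-instance-ds-lite1] ds-lite mtu 1400
[*HUAWEI-ds-lite-instance-ds-lite1] commit
```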
Context
If the size of packets for DS-Lite processing is larger than a link MTU, the packets
are fragmented. You can reduce the TCP MSS value, which prevents a DS-Lite
board from fragmenting packets and helps improve DS-Lite efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat tcp-mss mss-value
The MSS value carried in TCP SYN packets for DS-Lite processing is set.
Step 3 Run commit
The configuration is committed.
NOTE
If an MSS value is set in both the DS-Lite instance view and system view, the setting in the
DS-Lite instance view takes effect. If no MSS value is set in the DS-Lite instance view, the
MSS value in the system view takes effect.
----End
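The procedure above can be sketched as follows. The MSS of 1200 bytes is a hypothetical value; it leaves headroom for the IPv6 tunnel and TCP/IP headers within a typical link MTU.

```
<HUAWEI> system-view
[~HUAWEI] nat tcp-mss 1200
[*HUAWEI] commit
```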
Prerequisites
Basic DS-Lite functions have been configured.
Procedure
● Run the display nat session aging-time command to check the configured
aging time for DS-Lite session entries.
● Run the display nat session-table size [ slot slot-id ] command to check
information about DS-Lite session entries assigned to each service board.
----End
Networking Requirements
In Figure 1-85, a home user's PC with a private IPv4 address accesses an IPv6
MAN through an IPv4 and IPv6 dual-stack-capable and DS-Lite-capable CPE. The
CPE and DS-Lite device establish a DS-Lite tunnel. The CPE transmits traffic with
the private IPv4 address along the DS-Lite tunnel to the DS-Lite device. The DS-
Lite device decapsulates traffic, uses a Network Address Translation (NAT)
technique to translate the private IPv4 address to a public IPv4 address, and
forwards traffic to the IPv4 Internet. The DS-Lite device's GE 0/2/1 is connected to
an IPv6 MAN, and GE 0/2/2 is connected to the Internet. IPv4 home users need to
access the IPv4 Internet through the IPv6 MAN. The carrier provides 11 public IPv4
addresses 11.11.11.100 through 11.11.11.110.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic license functions.
2. Configure a DS-Lite instance and bind it to a DS-Lite board.
3. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
4. Configure a DS-Lite address pool with addresses ranging from 11.11.11.100 to
11.11.11.110.
5. Configure DS-Lite user information and RADIUS authentication on the BRAS.
6. Configure a traffic diversion policy for the DS-Lite tunnel.
7. Bind the DS-Lite tunnel to the address pool.
8. Configure interfaces and a routing protocol.
9. Enable the device to advertise the local IP route to the IPv6 network and the
address pool route to the IPv4 network.
Data Preparation
● Name of a DS-Lite instance
● Slot IDs of DS-Lite boards to which a DS-Lite instance is bound
● Local IPv6 address and a remote IPv6 address for a DS-Lite tunnel
● DS-Lite address pool number and start and end IP addresses
● Name of a user group bound to a DS-Lite instance
● IPv6 ACL rule number used in DS-Lite traffic classification
● IPv6 ACL rule number used to be bound to a DS-Lite address pool
Procedure
Step 1 Configure basic license functions.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] license
[~BRAS-license] active ds-lite vsuf slot 9
[*BRAS-license] active nat session-table size 16 slot 9
[*BRAS-license] active nat bandwidth-enhance 40 slot 9
[*BRAS-license] commit
[~BRAS-license] quit
Step 3 Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*BRAS-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::2 prefix-length 96
[*BRAS-ds-lite-instance-ds-lite1] commit
[~BRAS-ds-lite-instance-ds-lite1] quit
Step 4 Configure a DS-Lite address pool with addresses ranging from 11.11.11.100 to
11.11.11.110.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
[*BRAS-ds-lite-instance-ds-lite1] commit
[~BRAS-ds-lite-instance-ds-lite1] quit
NOTICE
In the IPv6 address pool view of a BRAS, the DNS server address and AFTR
name must be specified. The AFTR name identifies a DS-Lite server. The
following conditions must be met so that the DS-Lite tunnel between the DS-
Lite device and CPE can be established:
– When the BRAS uses the ND mode to assign an IP address to a WAN
interface on the CPE, the AFTR name and DNS information must be
configured in the PD address pool. Otherwise, the DS-Lite tunnel between
the DS-Lite device and CPE cannot be established.
– After the AFTR name is configured on the BRAS, the mapping between
the AFTR name and the local IP address of the DS-Lite instance must be
configured on the DNS server. The DS-Lite device advertises its local IP
address to the CPE.
3. Configure a traffic behavior and bind the traffic behavior to the DS-Lite
instance named ds-lite1.
[~BRAS] traffic behavior b1
[*BRAS-behavior-b1] ds-lite bind instance ds-lite1
[*BRAS-behavior-b1] commit
[~BRAS-behavior-b1] quit
4. Configure a DS-Lite traffic diversion policy and associate the IPv6 ACL-based
traffic classification rule with the traffic behavior.
[~BRAS] traffic policy p1
[*BRAS-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS-trafficpolicy-p1] commit
[~BRAS-trafficpolicy-p1] quit
1. Configure the IPv6 ACL-based traffic classification rule and use the address
pool named group1 to translate the source addresses in the range of
2001:DB8:2::2/96 in tunnel packets.
[~BRAS] acl ipv6 3000
[*BRAS-acl6-adv-3000] rule permit ipv6 source 2001:DB8:2::2 96
[*BRAS-acl6-adv-3000] commit
[~BRAS-acl6-adv-3000] quit
2. Associate the ACL rule with the DS-Lite address pool. In the DS-Lite instance
named ds-lite1, bind the IPv6 ACL numbered 3000 to the address pool named
group1.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] ds-lite outbound 3000 address-group group1
[*BRAS-ds-lite-instance-ds-lite1] commit
[~BRAS-ds-lite-instance-ds-lite1] quit
Step 9 Enable the device to advertise the local IP route to the IPv6 network and the
address pool route to the IPv4 network. Import the local IP route and address pool
route to the IS-IS routing table. In this example, UNRs are used as the local IP
route and address pool route.
[~BRAS] isis 1000
[~BRAS-isis-1000] ipv6 import-route unr
[*BRAS-isis-1000] commit
[~BRAS-isis-1000] quit
[~BRAS] isis 100
[~BRAS-isis-100] import-route unr
[*BRAS-isis-100] commit
[~BRAS-isis-100] quit
Step 10 After the configuration is complete, the DS-Lite device can establish connections
with other devices. In addition, the CPE is routable to the local IP address and the
addresses in the address pool.
[~BRAS] display ipv6 routing-table 2001:DB8::1
Routing Table : _Public_
Summary Count : 1
----End
Configuration Files
DS-Lite device configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active nat session-table size 16 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
interface Virtual-Template1
ppp authentication-mode auto
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
isis ipv6 enable 1000
#
interface GigabitEthernet0/2/2
undo shutdown
ip address 1.1.1.1 255.255.255.0
isis enable 100
#
ipv6 prefix pre1 local
prefix 2001:DB8:2::/96
#
ipv6 pool pool1 bas local
prefix pre1
dns-server 2001:DB8::1:2 //An address is assigned to an IPv6 DNS server. In a DS-Lite scenario,
configuring a DNS server address in the local address pool is recommended. When the BRAS uses the ND
mode to assign an IP address to a WAN interface on the CPE, the AFTR name and DNS information must
be configured in the PD address pool. Otherwise, the DS-Lite tunnel cannot be established.
aftr-name www.huawei.com //An AFTR name is set. The AFTR name identifies a DS-Lite server. After
the AFTR name is configured on the DS-Lite device, the device must advertise the configuration to the CPE.
Otherwise, the DS-Lite tunnel between the CPE and DS-Lite server cannot be established. The
advertisement method is to configure a mapping between the AFTR name and the local IP address of the
DS-Lite tunnel on the DNS server.
#
service-location 1
location slot 9 backup slot 10
#
service-instance-group 1
service-location 1
#
ds-lite instance ds-lite1 id 1
service-instance-group 1
ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
ds-lite outbound 3000 address-group group1
local-ipv6 2001:DB8::1 prefix-length 128
remote-ipv6 2001:DB8:2::2 prefix-length 96
#
acl ipv6 6001
rule 1 permit ipv6 source user-group group1 destination ipv6-address 2001:DB8::1 128
#
acl ipv6 3000
rule 1 permit ipv6 source 2001:DB8:2::2 96
#
dhcpv6 duid 0001000125a7625df063f9761497
#
traffic classifier c1 operator or
if-match ipv6 acl 6001 precedence 1
#
traffic behavior b1
ds-lite bind instance ds-lite1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
isis 100
network-entity 10.1000.1000.1000.00
import-route unr
#
isis 1000
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
ipv6 import-route unr
#
#
user-group group1
#
aaa
authentication-scheme auth1
authentication-mode radius
#
accounting-scheme acct1
accounting-mode radius
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ipv6-pool pool1
user-group group1 bind ds-lite instance ds-lite1
#
return
Networking Requirements
In Figure 1-86, a home user's PC with a private IPv4 address accesses an IPv6
MAN through an IPv4 and IPv6 dual-stack-capable and DS-Lite-capable CPE. The
CPE and DS-Lite device establish a DS-Lite tunnel. The CPE transmits traffic with
the private IPv4 address along the DS-Lite tunnel to the DS-Lite device. The DS-
Lite device decapsulates traffic, uses a Network Address Translation (NAT)
technique to translate the private IPv4 address to a public IPv4 address, and
forwards traffic to the IPv4 Internet. The DS-Lite device is equipped with DS-Lite
boards in slots 9 and 10, respectively. The DS-Lite device's GE 0/2/1 is connected
to an IPv6 MAN, and GE 0/2/2 is connected to the Internet. IPv4 home users need
to access the IPv4 Internet through the IPv6 MAN. The carrier provides 11 public
IPv4 addresses from 11.11.11.100 to 11.11.11.110.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
● DS-Lite instance name (ds-lite1)
● Slot IDs (9 and 10) of DS-Lite boards to which a DS-Lite instance is bound
● Local IP address (2001:DB8::1) and remote IP address (2001:DB8:2::1) of a DS-
Lite tunnel
● DS-Lite address pool number (1) and address segment (11.11.11.100 to
11.11.11.110)
● ACL6 rule numbers used in DS-Lite traffic classification and used to be bound
to a DS-Lite address pool
Procedure
Step 1 Configure basic license functions.
<HUAWEI> system-view
[~HUAWEI] sysname DS-Lite
[*HUAWEI] commit
[~DS-Lite] vsm on-board-mode disable
[*DS-Lite] commit
[~DS-Lite] license
[~DS-Lite-license] active ds-lite vsuf slot 9
[*DS-Lite-license] active ds-lite vsuf slot 10
[*DS-Lite-license] active nat session-table size 16 slot 9
[*DS-Lite-license] active nat session-table size 16 slot 10
[*DS-Lite-license] active nat bandwidth-enhance 40 slot 9
[*DS-Lite-license] active nat bandwidth-enhance 40 slot 10
[*DS-Lite-license] commit
[~DS-Lite-license] quit
Step 3 Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~DS-Lite] ds-lite instance ds-lite1
[~DS-Lite-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*DS-Lite-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::1 prefix-length 64
[*DS-Lite-ds-lite-instance-ds-lite1] commit
[~DS-Lite-ds-lite-instance-ds-lite1] quit
Step 4 Configure a DS-Lite address pool with addresses ranging from 11.11.11.100 to
11.11.11.110.
[~DS-Lite] ds-lite instance ds-lite1
[*DS-Lite-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
[*DS-Lite-ds-lite-instance-ds-lite1] commit
[~DS-Lite-ds-lite-instance-ds-lite1] quit
Step 8 Enable the device to advertise the local IP route to the IPv6 network and the
address pool route to the IPv4 network. Import the local IP route and address pool
route to the IS-IS routing table. In this example, UNRs are used as the local IP
route and address pool route.
[~DS-Lite] isis 1000
[~DS-Lite-isis-1000] ipv6 import-route unr
[*DS-Lite-isis-1000] commit
[~DS-Lite-isis-1000] quit
[~DS-Lite] isis 100
[~DS-Lite-isis-100] import-route unr
[*DS-Lite-isis-100] commit
[~DS-Lite-isis-100] quit
# Display detailed information about users of CPU 0 on the service board in slot 9.
[~DS-Lite] display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Total number: 1.
---------------------------------------------------------------------------
User Type : DS-Lite
CPE IP : 2001:DB8:2::1/128
User ID : -
VPN Instance : -
Address Group : group1
DS-Lite Instance : ds-lite1
Public IP : 11.11.11.101
Start Port : 1024
Port Range : 256
Port Total : 256
Extend Port Alloc Times : 0
Extend Port Alloc Number : 0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 0/0/0/0
Total/TCP/UDP/ICMP Rev Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Rev Session Current: 0/0/0/0
Total/TCP/UDP/ICMP Port Limit : 0/0/0/0
Total/TCP/UDP/ICMP Port Current : 0/0/0/0
Nat ALG Enable : NULL
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports : 00000
Aging Time(s) : -
Left Time(s) : -
Port Limit Discard Count : 0
Session Limit Discard Count : 0
Fib Miss Discard Count : 0
-->Transmit Packets : 0
-->Transmit Bytes : 0
-->Drop Packets : 0
<--Transmit Packets : 0
<--Transmit Bytes : 0
<--Drop Packets : 0
---------------------------------------------------------------------------
----End
Configuration Files
DS-Lite device configuration file
#
sysname DS-Lite
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active ds-lite vsuf slot 10
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
service-location 1
location slot 9 backup slot 10
#
service-instance-group 1
service-location 1
#
ds-lite instance ds-lite1 id 1
service-instance-group 1
ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
ds-lite outbound 3000 address-group group1
local-ipv6 2001:DB8::1 prefix-length 128
remote-ipv6 2001:DB8:2::1 prefix-length 64
#
acl ipv6 number 3500
rule 5 permit ipv6 source 2001:DB8:2::1 64 destination 2001:DB8::1 128
#
acl ipv6 number 3000
rule 0 permit ipv6 source 2001:DB8:2::1/64
#
traffic classifier c1 operator or
if-match ipv6 acl 3500 precedence 1
#
traffic behavior b1
ds-lite bind instance ds-lite1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
isis 100
network-entity 10.1000.1000.1000.00
import-route unr
#
isis 1000
network-entity 10.0000.0000.0002.00
ipv6 enable topology ipv6
ipv6 import-route unr
#
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
Networking Requirements
As shown in Figure 1-87, a home user using a private IPv4 address accesses a CPE
through a VPN and accesses an IPv6 MAN through a PE. A DS-Lite tunnel is
established between the PE and DS-Lite device. The PE transmits traffic with the
private IPv4 address along the DS-Lite tunnel to the DS-Lite device. The DS-Lite
device decapsulates the traffic, uses NAT to translate the private IPv4 address
into a public IPv4 address, and forwards the traffic to the IPv4 Internet. The DS-
Lite device is equipped with DS-Lite boards in slots 9 and 10. The DS-Lite
device's GE 0/2/1 is connected to the IPv6 MAN, and GE 0/2/2 is connected to
the Internet. IPv4 home users need to access the IPv4 Internet through the IPv6
MAN. The carrier provides 11 public IPv4 addresses from 11.11.11.100 to
11.11.11.110.
The configuration requirements are as follows:
● The private IPv4 home users' PCs can access the IPv4 Internet through the
IPv6 MAN.
● The private IPv4 addresses of home users can be mapped to multiple public IP
addresses in many-to-many translation mode.
Interface 1, interface 2, and interface 3 in this example represent GE0/2/1, GE0/2/2, and
GE0/2/3, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN service.
2. Configure basic license functions.
3. Configure a DS-Lite instance and bind it to a DS-Lite board.
4. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
5. Configure a DS-Lite address pool. The IP addresses in the address pool range
from 11.11.11.100 to 11.11.11.110.
6. Configure a traffic policy for the DS-Lite tunnel.
7. Bind the DS-Lite tunnel to the address pool.
8. Configure interfaces and a routing protocol.
9. Enable the device to advertise the local IP route to the IPv6 network and the
address pool route to the IPv4 network.
Data Preparation
● VPN instance name (vpna)
● DS-Lite instance name (ds-lite1)
● Slot IDs (9 and 10) of DS-Lite boards to which a DS-Lite instance is bound
● Local IP address (2001:DB8::1) and remote IP address (2001:DB8:2::1) of a DS-
Lite tunnel
● DS-Lite address pool number (1) and address segment (11.11.11.100 to
11.11.11.110)
● ACL6 rule numbers used in DS-Lite traffic classification and used to be bound
to a DS-Lite address pool
Procedure
Step 1 Assign an IP address to each interface. For configuration details, see configuration
files.
Step 2 Configure an IGP on the IPv4 backbone network to implement interworking
between PEs and DS-Lite devices. (For configuration details, see configuration
files.)
Step 3 Enable MPLS LDP on PEs and DS-Lite devices and interfaces.
# Configure the PE.
<PE> system-view
[~PE] mpls lsr-id 1.1.1.1
[*PE] mpls
[*PE-mpls] quit
[*PE] mpls ldp
[*PE-mpls-ldp] quit
[*PE] interface gigabitEthernet0/2/3
[*PE-GigabitEthernet0/2/3] mpls
[*PE-GigabitEthernet0/2/3] mpls ldp
[*PE-GigabitEthernet0/2/3] quit
[*PE] commit
# Configure the DS-Lite device.
<DS-Lite> system-view
[~DS-Lite] mpls lsr-id 2.2.2.2
[*DS-Lite] mpls
[*DS-Lite-mpls] quit
[*DS-Lite] mpls ldp
[*DS-Lite-mpls-ldp] quit
[*DS-Lite] interface gigabitEthernet0/2/1
[*DS-Lite-GigabitEthernet0/2/1] mpls
[*DS-Lite-GigabitEthernet0/2/1] mpls ldp
[*DS-Lite-GigabitEthernet0/2/1] quit
[*DS-Lite] commit
Step 4 Configure a VPN instance that supports the IPv6 address family on each PE and
bind the PE interface connecting to the CPE to the VPN instance.
# On the PE, configure an IPv6-address-family-capable VPN instance named vpna.
[~PE] ip vpn-instance vpna
[*PE-vpn-instance-vpna] ipv6-family
[*PE-vpn-instance-vpna-af-ipv6] route-distinguisher 1000:1
[*PE-vpn-instance-vpna-af-ipv6] vpn-target 1000:1 export-extcommunity
[*PE-vpn-instance-vpna-af-ipv6] vpn-target 1000:1 import-extcommunity
[*PE-vpn-instance-vpna-af-ipv6] quit
[*PE-vpn-instance-vpna] quit
[*PE] commit
# Configure a VPN instance named vpna that supports the IPv6 address family on
the DS-Lite device.
[~DS-Lite] ip vpn-instance vpna
[*DS-Lite-vpn-instance-vpna] ipv6-family
[*DS-Lite-vpn-instance-vpna-af-ipv6] route-distinguisher 1000:1
[*DS-Lite-vpn-instance-vpna-af-ipv6] vpn-target 1000:1 export-extcommunity
[*DS-Lite-vpn-instance-vpna-af-ipv6] vpn-target 1000:1 import-extcommunity
[*DS-Lite-vpn-instance-vpna-af-ipv6] quit
[*DS-Lite-vpn-instance-vpna] quit
[*DS-Lite] commit
Step 5 Establish VPNv6 peer relationships between PEs and DS-Lite devices.
# Configure the PE.
[~PE] bgp 65000
[*PE-bgp] peer 3.3.3.3 as-number 65000
[*PE-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE-bgp] peer 2001:db8:3::3 as-number 65000
[*PE-bgp] peer 2001:db8:3::3 connect-interface loopback 1
[*PE-bgp] ipv4-family unicast
[*PE-bgp-af-ipv4] peer 3.3.3.3 enable
[*PE-bgp-af-ipv4] peer 2001:db8:3::3 enable
[*PE-bgp-af-ipv4] import-route direct
[*PE-bgp-af-ipv4] quit
[*PE-bgp] ipv6-family unicast
[*PE-bgp-af-ipv6] import-route direct
[*PE-bgp-af-ipv6] quit
[*PE-bgp] ipv6-family vpnv6
[*PE-bgp-af-vpnv6] peer 3.3.3.3 enable
[*PE-bgp-af-vpnv6] quit
[*PE-bgp] ipv6-family vpn-instance vpna
[*PE-bgp-6-vpna] import-route direct
[*PE-bgp-6-vpna] quit
[*PE-bgp] quit
[*PE] commit
Step 8 Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~DS-Lite] ds-lite instance ds-lite1
[~DS-Lite-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*DS-Lite-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::1 prefix-length 64
[*DS-Lite-ds-lite-instance-ds-lite1] commit
[~DS-Lite-ds-lite-instance-ds-lite1] quit
Step 9 Configure a DS-Lite address pool. The IP addresses in the address pool range from
11.11.11.100 to 11.11.11.110.
[~DS-Lite] ds-lite instance ds-lite1
[*DS-Lite-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
[*DS-Lite-ds-lite-instance-ds-lite1] commit
[~DS-Lite-ds-lite-instance-ds-lite1] quit
Step 10 Reconstruct the local IPv6 route of the DS-Lite instance and specify a public
network device as the next hop.
[~DS-Lite] ipv6 route-static vpn-instance vpna 2001:DB8::1 128 loopback
[*DS-Lite] commit
3. Configure a traffic behavior and bind it to the DS-Lite instance named ds-
lite1.
[~DS-Lite] traffic behavior b1
[*DS-Lite-behavior-b1] ds-lite bind instance ds-lite1
[*DS-Lite-behavior-b1] commit
[~DS-Lite-behavior-b1] quit
4. Configure a DS-Lite traffic policy and associate the traffic classifier with the traffic behavior.
[~DS-Lite] traffic policy p1
[*DS-Lite-trafficpolicy-p1] classifier c1 behavior b1
[*DS-Lite-trafficpolicy-p1] commit
[~DS-Lite-trafficpolicy-p1] quit
2. Associate the ACL rule with the DS-Lite address pool. In the DS-Lite instance
named ds-lite1, bind the IPv6 ACL numbered 3000 to the address pool named
group1.
[~DS-Lite] ds-lite instance ds-lite1
[~DS-Lite-ds-lite-instance-ds-lite1] ds-lite outbound 3000 address-group group1
[*DS-Lite-ds-lite-instance-ds-lite1] commit
[~DS-Lite-ds-lite-instance-ds-lite1] quit
# After the configuration is complete, the DS-Lite device can establish connections
with other devices, and the CPE has routes to the tunnel's local IP address and to the address pool.
[~DS-Lite] display ipv6 routing-table 2001:DB8::1
Routing Table : Public
Summary Count : 1
Destination : 2001:DB8::1 PrefixLength : 128
NextHop : FE80::218:82FF:FE84:CCF Preference : 15
Cost : 10 Protocol : Unr
RelayNextHop : :: TunnelID : 0x0
Interface : InLoopBack1 Flags : D
----End
Configuration Files
Configuration file of the PE.
#
sysname PE
#
isis 1
is-level level-2
network-entity 10.0000.0000.0001.00
traffic-eng level-2
ipv6 enable topology ipv6
#
#
interface LoopBack1
ipv6 enable
ip address 1.1.1.1 255.255.255.255
ipv6 address 2001:DB8:1::1/128
isis enable 1
isis ipv6 enable 1
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet0/2/3
undo shutdown
ipv6 enable
ip address 10.1.1.1 255.255.255.0
ipv6 address 2001:DB8:10::1/64
isis enable 1
isis ipv6 enable 1
mpls
mpls ldp
undo dcn
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 1000:1
apply-label per-instance
vpn-target 1000:1 export-extcommunity
vpn-target 1000:1 import-extcommunity
ipv6-family
route-distinguisher 1000:1
apply-label per-instance
vpn-target 1000:1 export-extcommunity
vpn-target 1000:1 import-extcommunity
#
#
bgp 65000
router-id 3.3.3.3
private-4-byte-as enable
peer 1.1.1.1 as-number 65000
peer 2001:DB8:1::1 as-number 65000
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.1 enable
peer 2001:DB8:1::1 enable
#
ipv6-family unicast
undo synchronization
import-route direct
peer 2001:DB8:1::1 enable
#
ipv6-family vpnv6
policy vpn-target
peer 1.1.1.1 enable
#
ipv6-family vpn-instance vpna
import-route direct
import-route static
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active ds-lite vsuf slot 10
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
#
service-location 1
location slot 9 backup slot 10
#
service-instance-group 1
service-location 1
#
ds-lite instance ds-lite1 id 1
service-instance-group 1
ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
ds-lite outbound 3000 address-group group1
local-ipv6 2001:DB8::1 prefix-length 128
remote-ipv6 2001:DB8:2::1 prefix-length 64
#
acl ipv6 number 3500
rule 5 permit ipv6 source 2001:DB8:2::1 64 destination 2001:DB8::1 128
#
acl ipv6 number 3000
rule 0 permit ipv6 source 2001:DB8:2::1/64
#
traffic classifier c1 operator or
if-match ipv6 acl 3500 precedence 1
#
traffic behavior b1
Example for Configuring Centralized DS-Lite Providing Backup for Centralized DS-
Lite
This section provides an example for configuring centralized DS-Lite providing
backup for centralized DS-Lite.
Networking Requirements
In Figure 1-88, DS-Lite device A is connected to device B, which backs up device A.
If no fault occurs, device A performs NAT for user traffic. If all service board
CPUs on device A fail, or the number of failed CPUs reaches the configured
down-number value, user traffic is switched to device B for NAT.
The configuration requirements are as follows:
● PCs on the private network segment of 10.110.10.1/24 can access the
Internet.
The configurations in this example are mainly performed on Device A and Device B.
Interfaces 1 and 2 in this example represent GE 0/2/1 and GE 0/2/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure device A:
a. Configure basic DS-Lite functions and enable centralized DS-Lite
providing backup for centralized DS-Lite.
b. Configure a DS-Lite traffic diversion policy.
c. Advertise the local IP route to the IPv6 network.
d. Configure a DS-Lite translation policy.
2. Configure device B:
a. Configure basic DS-Lite functions.
b. Configure a DS-Lite traffic diversion policy.
c. Configure a DS-Lite translation policy.
d. Configure static routes.
Data Preparation
To complete the configuration, you need the following data:
● DS-Lite instance name (ds-lite1)
● Slot ID (1) of the DS-Lite board to which a DS-Lite instance is bound
● Local IP address (2001:DB8::1) and remote IP address (2001:DB8:2::1) of a DS-
Lite tunnel
● DS-Lite address pool number (1) and group name (group1)
● Centralized DS-Lite providing backup for centralized DS-Lite in a DS-Lite
instance
● ACL6 rules used for DS-Lite traffic classification and policy-based traffic
diversion
● DS-Lite translation policy
Procedure
Step 1 Configure basic DS-Lite functions on the master device (device A).
1. Configure basic license functions.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] vsm on-board-mode disable
[*DeviceA] commit
[~DeviceA] license
[~DeviceA-license] active ds-lite vsuf slot 9
[*DeviceA-license] active nat session-table size 16 slot 9
[*DeviceA-license] active nat bandwidth-enhance 40 slot 9
[*DeviceA-license] commit
[~DeviceA-license] quit
3. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~DeviceA] ds-lite instance ds-lite1
[~DeviceA-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*DeviceA-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::1 prefix-length 64
[*DeviceA-ds-lite-instance-ds-lite1] commit
[~DeviceA-ds-lite-instance-ds-lite1] quit
4. Configure a DS-Lite address pool with addresses ranging from 10.38.160.100
to 10.38.160.110.
[~DeviceA] ds-lite instance ds-lite1
[*DeviceA-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 10.38.160.100 10.38.160.110
[*DeviceA-ds-lite-instance-ds-lite1] commit
[~DeviceA-ds-lite-instance-ds-lite1] quit
3. Configure a traffic behavior and bind the traffic behavior to the DS-Lite
instance named ds-lite1.
Step 4 Configure the device to import the local IP route to the IS-IS routing table so that
the route is advertised to the IPv6 network. In this example, UNRs are used as the
local IP route and address pool route.
[~DeviceA] isis 1000
[~DeviceA-isis-1000] ipv6 import-route unr
[*DeviceA-isis-1000] commit
[~DeviceA-isis-1000] quit
Step 6 After the configuration is complete, the DS-Lite device can establish connections
with other devices. In addition, the CPE has routes to the local IP address and to
the addresses in the address pool.
[~DeviceA] display ipv6 routing-table 2001:DB8::1
Routing Table : _Public_
Summary Count : 1
Destination : 2001:DB8::1 PrefixLength : 128
NextHop : FE80::218:82FF:FE84:CCF Preference : 15
Cost : 10 Protocol : Unr
RelayNextHop : :: TunnelID : 0x0
Interface : InLoopBack0 Flags : D
Step 7 Configure basic DS-Lite functions on the backup device (device B).
1. Configure basic license functions.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] vsm on-board-mode disable
[*DeviceB] commit
[~DeviceB] license
[~DeviceB-license] active ds-lite vsuf slot 9
[*DeviceB-license] active nat session-table size 16 slot 9
[*DeviceB-license] commit
[~DeviceB-license] quit
2. Configure a DS-Lite instance and bind it to a DS-Lite board.
[~DeviceB] service-location 1
[*DeviceB-service-location-1] location slot 9
[*DeviceB-service-location-1] commit
[~DeviceB-service-location-1] quit
[~DeviceB] service-instance-group 1
[*DeviceB-instance-group-1] service-location 1
[*DeviceB-instance-group-1] commit
[~DeviceB-instance-group-1] quit
[~DeviceB] ds-lite instance ds-lite1 id 1
[*DeviceB-ds-lite-instance-ds-lite1] service-instance-group 1
[*DeviceB-ds-lite-instance-ds-lite1] commit
[~DeviceB-ds-lite-instance-ds-lite1] quit
3. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~DeviceB] ds-lite instance ds-lite1
[~DeviceB-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*DeviceB-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::1 prefix-length 64
[*DeviceB-ds-lite-instance-ds-lite1] commit
[~DeviceB-ds-lite-instance-ds-lite1] quit
4. Configure a DS-Lite address pool and a NAT translation policy. The addresses
in the pool range from 10.38.160.10 to 10.38.160.20.
[~DeviceB] ds-lite instance ds-lite1
[*DeviceB-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 10.38.160.10 10.38.160.20
[*DeviceB-ds-lite-instance-ds-lite1] commit
[~DeviceB-ds-lite-instance-ds-lite1] quit
[*DeviceB-GigabitEthernet0/2/2] commit
[~DeviceB-GigabitEthernet0/2/2] quit
Step 9 Configure basic DS-Lite functions and a traffic diversion policy on the backup
device.
[~DeviceB] ds-lite instance ds-lite1
[*DeviceB-ds-lite-instance-ds-lite1] ds-lite outbound 3500 address-group group1
[*DeviceB-ds-lite-instance-ds-lite1] commit
[~DeviceB-ds-lite-instance-ds-lite1] quit
Step 11 Configure a static route to the local IPv6 address segment of the DS-Lite tunnel on
the master device so that user traffic can be switched to the backup device if the
service board on the master device fails.
[~DeviceA] ipv6 route-static 2001:DB8::1 128 2001:DB8:1::2:1
[*DeviceA] commit
----End
Configuration Files
● Master DS-Lite device configuration file
#
sysname DeviceA
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active nat session-table size 16 slot 9
active nat bandwidth-enhance 40 slot 9
#
acl ipv6 number 3500
rule permit ipv6 source 2001:DB8:2::1 64 destination 2001:DB8::1 128
#
service-location 1
location slot 9
#
service-instance-group 1
service-location 1
#
ds-lite instance ds-lite1 id 1
service-instance-group 1
local-ipv6 2001:DB8::1 prefix-length 128
remote-ipv6 2001:DB8:2::1 prefix-length 64
ds-lite address-group group1 group-id 1 10.38.160.100 10.38.160.110
ds-lite outbound 3500 address-group group1
ds-lite centralized-backup enable
#
traffic classifier c1 operator or
if-match acl 3500 precedence 1
#
traffic behavior b1
ds-lite bind instance ds-lite1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
isis 1000
is-level level-1
network-entity 10.1000.1000.1002.00
ipv6 enable topology ipv6
ipv6 import-route unr
#
interface gigabitEthernet 0/2/1
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:2::2:1 64
traffic-policy p1 inbound
isis ipv6 enable 1000
#
interface gigabitEthernet 0/2/2
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2:2 64
#
ipv6 route-static 2001:DB8::1 128 2001:DB8:1::2:1
#
return
● Backup DS-Lite device configuration file
#
sysname DeviceB
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active nat session-table size 16 slot 9
active nat bandwidth-enhance 40 slot 9
#
acl ipv6 number 3500
rule permit ipv6 source 2001:DB8:2::1 64 destination 2001:DB8::1 128
#
service-location 1
location slot 9
#
service-instance-group 1
service-location 1
#
ds-lite instance ds-lite1 id 1
service-instance-group 1
local-ipv6 2001:DB8::1 prefix-length 128
remote-ipv6 2001:DB8:2::1 prefix-length 64
ds-lite address-group group1 group-id 1 10.38.160.10 10.38.160.20
ds-lite outbound 3500 address-group group1
ds-lite centralized-backup enable
#
traffic classifier c1 operator or
if-match acl 3500 precedence 1
#
traffic behavior b1
ds-lite bind instance ds-lite1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
isis 1000
is-level level-1
network-entity 10.1000.1000.1002.00
ipv6 enable topology ipv6
ipv6 import-route unr
#
interface gigabitEthernet 0/2/2
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2:1 64
traffic-policy p1 inbound
isis ipv6 enable 1000
#
ipv6 route-static 2001:DB8:2:: 64 2001:DB8:1::2:2
#
return
Definition
The Port Control Protocol (PCP) establishes PCP connections between customer
premises equipment (CPE) and Carrier Grade NAT (CGN) devices to implement
point-to-point (P2P) applications between terminal users accessing different CGN
devices. PCP is used in two-level Network Address Translation (NAT444) and Dual-
Stack Lite (DS-Lite) scenarios. In a PCP application, a CGN device functions as a
PCP server, and a CPE functions as a PCP client. The CGN devices assign public IP
addresses and port numbers to the users on the CPEs. A P2P server uses the public
IP addresses and port numbers to establish P2P connections between terminal
users so that the users connected to the CGN devices can access P2P services, such
as teleconferences, online games, and P2P data transmission.
Purpose
During the IPv4-to-IPv6 transition, carriers use transition techniques, such as
NAT444 and DS-Lite, to translate between public and private address realms and
save IPv4 addresses. If symmetric NAT (5-tuple mode) is used, two terminal users
on different internal networks (in private address realms) can hardly
communicate with each other, which fails to meet the basic P2P application
requirement for node-to-node communication. A CGN device connected to a terminal user rejects any session
request initiated by the terminal user connected to the other CGN device,
hindering the P2P communication between the terminal users on different private
networks.
Therefore, the CGN device must support PCP. The CGN device
establishes a PCP connection to a CPE and assigns public network resources to
private network terminals accessing the CPE to implement P2P applications
between P2P terminal users.
Benefits
PCP offers the following benefits.
PCP Connection
1. Customer premises equipment (CPE) sends a Carrier Grade NAT (CGN) device
a PCP request for a public IP address and a public port number for user
access.
NOTE
a. If the CPE detects no traffic destined for the PC on the listening port after
the listening timeout period elapses, the CPE constructs a PCP request
packet for a connection teardown and sends it to the CGN device. The
packet carries a PCP connection lifetime of 0.
b. Upon receipt, the CGN device tears down the PCP connection, deletes the
NAT mapping entry for the PC, and reclaims the public network
resources.
● The CGN device triggers a teardown.
– If the PCP connection lifetime elapses on the CGN device without the
CGN device receiving a PCP connection teardown request (with a
lifetime of 0) from the CPE, the CGN device tears down the PCP
connection, deletes the NAT mapping entry for the PC, and reclaims the
public network resources.
NOTE
After the CGN device receives a connection teardown request, it takes one second to
process the request. If a connection setup request is received within this period, the
connection fails to be established.
The CPE also sends a PCP request packet carrying a PCP connection lifetime to the
CGN device. The lifetime value on the CGN device takes effect according to the
following rules:
● If the received lifetime is greater than or equal to the minimum lifetime and
less than or equal to the maximum lifetime, the received lifetime takes effect.
● If the received lifetime is less than the minimum lifetime, the minimum
lifetime takes effect.
● If the received lifetime is greater than the maximum lifetime, the maximum
lifetime takes effect.
You can set the minimum and maximum lifetime values based on the network
traffic volume and the number of allowable PCP connections and public network
resources on the CGN device.
● This rule helps prevent a PCP connection from being frequently torn down
and reestablished due to a small lifetime.
● This rule helps prevent a PCP connection from retaining and being unable to
release service resources due to a large lifetime.
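The three lifetime rules above amount to clamping the CPE-requested lifetime to the bounds configured on the CGN device. The following is a minimal Python sketch of that rule; the function name and the bound values in the examples are illustrative, not Huawei commands or defaults.

```python
def effective_lifetime(requested: int, min_lifetime: int, max_lifetime: int) -> int:
    """Return the PCP connection lifetime that takes effect on the CGN device.

    The lifetime requested by the CPE is clamped to the [minimum, maximum]
    range configured on the CGN device, per the three rules above.
    """
    if requested < min_lifetime:
        return min_lifetime   # too small: the minimum lifetime takes effect
    if requested > max_lifetime:
        return max_lifetime   # too large: the maximum lifetime takes effect
    return requested          # within range: the requested lifetime takes effect

# Illustrative bounds of 120 s and 86400 s:
print(effective_lifetime(60, 120, 86400))      # 120
print(effective_lifetime(3600, 120, 86400))    # 3600
print(effective_lifetime(100000, 120, 86400))  # 86400
```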
A CGN device (PCP server) must be assigned with an IP address so that a CPE
(PCP client) can send a request to the CGN device to establish a Port Control
Protocol (PCP) connection.
The following describes how to configure a PCP server address in centralized and
distributed deployment scenarios.
● In centralized deployment, a CGN device is deployed on a CR/SR and does not
provide user access and authentication services. In this case, it is
recommended that you statically specify a PCP server address on the CPE and
CGN device.
● In distributed deployment, a CGN device is deployed on a broadband remote
access server (BRAS) and can function as a DHCP or RADIUS server for user
login and authentication. In this case, in addition to statically specifying a PCP
server address on the CPE and CGN device, either of the following methods
can be used for the CPE to obtain the PCP server address:
– When the CGN device functions as a DHCP server, it encapsulates the PCP
server name into the DHCP Option field. When a user goes online, the
CPE requests the PCP server name from the DHCP server based on the
DHCP Option field. The DHCP server then sends the PCP server name to
the CPE through the Option field.
– When the CGN device functions as a RADIUS server, during user
authentication, the RADIUS server encapsulates the Huawei proprietary
hw-pcp-server-name attribute in the DHCP Option field of the Access-
Accept packet and sends the packet to the CPE.
Security
In distributed CGN, a CPE (PCP client) sends a PCP request packet to the BRAS
(PCP server). Upon receipt of the packet, the PCP server responds to the request
packet and allocates public network information to the PCP client. If an attacker
pretends to be a PCP client and initiates a session to the PCP server, security risks
may arise.
To prevent such risks, a RADIUS server needs to control PCP enabling for BRAS
users. The PCP function is available for users to get online only after the users are
authorized by the RADIUS server.
In the user authentication process, note the following:
● If the RADIUS authentication packet does not carry the hw-pcp-server-name
attribute, the user supports the PCP function.
● If the RADIUS authentication packet carries the hw-pcp-server-name attribute,
the CGN device checks the value of the attribute in the packet to determine
whether the user supports the PCP function.
– Value 1 indicates that the user supports the PCP function.
– Value 0 indicates that the user does not support the PCP function.
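The authorization check above can be sketched as follows. The dictionary of parsed RADIUS attributes is a hypothetical stand-in for the device's internal representation, used only to illustrate the decision rule.

```python
def user_supports_pcp(radius_attrs: dict) -> bool:
    """Decide whether a BRAS user supports PCP based on the RADIUS
    authentication packet, per the rules above."""
    value = radius_attrs.get("hw-pcp-server-name")
    if value is None:
        return True        # attribute absent: the user supports PCP
    return value == 1      # value 1 = supported, value 0 = not supported
```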
Allocation Rules
A PCP request originating from a CPE carries a private IP address and port number
and a recommended public IP address and port number.
● If the request does not contain a PREFER_FAILURE Option field, the CGN
device allocates the recommended public IP address and port number. If the
recommended public IP address and port number are unavailable, the CGN
device allocates another public IP address and port number.
● If the request contains a PREFER_FAILURE Option field, the CGN device (PCP
server) allocates the recommended public IP address and port number.
– By default, if the recommended public IP address and port number have
been allocated or are unavailable, the CGN device returns the error code
CANNOT_PROVIDE_EXTERNAL to the CPE. Upon receipt, the CPE has to
resend a PCP request to obtain public network resources.
– In a P2P application, after PCP port reservation is enabled, the CGN
device can reserve a range of ports used in PCP mapping for the CPE. If a
port carried in a PCP request sent by a CPE is not in a port range to be
allocated, the CGN device allocates a port within the reserved port range,
increasing the P2P connection success rate.
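The decision logic above can be sketched as follows. The pool class and its free-pair set are illustrative assumptions, not the CGN device's actual data structures.

```python
class PublicPool:
    """Hypothetical pool of free public (IP address, port) pairs on the CGN."""
    def __init__(self, free_pairs):
        self.free = set(free_pairs)

def allocate(pool, want_ip, want_port, prefer_failure):
    """Handle one PCP request, per the allocation rules above."""
    if (want_ip, want_port) in pool.free:
        pool.free.discard((want_ip, want_port))
        return (want_ip, want_port)        # recommended pair is available
    if prefer_failure:
        # PREFER_FAILURE present: return an error; the CPE must resend.
        return "CANNOT_PROVIDE_EXTERNAL"
    # No PREFER_FAILURE: fall back to any other free pair.
    ip, port = min(pool.free)              # deterministic pick for the sketch
    pool.free.discard((ip, port))
    return (ip, port)
```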
Quantity
A CGN device allocates the following public network resources to each user on a
CPE:
● One public IP address and one public port number if a PCP request packet
received by the CGN device carries no Port Reservation Option field.
● One public IP address and two consecutive public port numbers if a PCP
request packet carries the Port Reservation Option field. If two consecutive
public port numbers are unavailable, the CGN device allocates only one port
number to the user and informs the CPE that the other port number fails to
be allocated.
● One public IP address and a group of port numbers if a PCP request packet
carries the Port Set Option field.
NOTE
● This function applies only to PCP request packets of the MAP protocol type.
● This function applies only to the DS-Lite scenario.
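The three cases above reduce to a small dispatch on the option carried in the PCP request. The option names follow the text; the function itself is an illustrative sketch, not a device API.

```python
def ports_requested(options: set):
    """Return what the CGN allocates with the one public IP address for a
    single PCP request, per the rules above."""
    if "PORT_SET" in options:
        # DS-Lite scenario, MAP-type requests only: a group of port numbers.
        return "port-set"
    if "PORT_RESERVATION" in options:
        # Two consecutive ports; the CGN falls back to one port (and
        # informs the CPE) if no consecutive pair is available.
        return 2
    return 1               # default: a single public port number

print(ports_requested(set()))                 # 1
print(ports_requested({"PORT_RESERVATION"}))  # 2
print(ports_requested({"PORT_SET"}))          # port-set
```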
The following example demonstrates how PC1 establishes a TCP connection to the P2P
server. The process on PC2 is similar to that on PC1.
The PCs must support the Universal Plug and Play (UPnP) protocol and P2P software.
Using UPnP-capable P2P software is recommended.
a. The user on PC1 starts P2P software so that PC1 can automatically send a
UPnP request packet to instruct CPE1 to open a User Datagram Protocol
(UDP) listening port.
b. Upon receipt, CPE1 performs the following operations:
▪ Uses the UPnP proxy function to accept the UPnP request and
triggers the PCP client function.
▪ Uses the PCP client function to send CGN1 a PCP request for a public
IP address and a public port number.
c. As the PCP server, CGN1 responds to the request from the client. CGN1
selects a public IP address from the public address pool and generates a
NAT entry containing the mapping between the private and public
addresses and between the private and public port numbers. CGN1 then
sends to CPE1 a PCP response packet carrying the public IP address and
port number allocated to PC1.
d. CPE1 forwards the public IP address and port number to PC1. The public
IP address and port number can be displayed on the P2P software
interface.
e. PC1 uses the public IP address and port number to establish a TCP
connection to the P2P server.
2. PC1 sends TCP packets carrying file information and user status information
to the P2P server.
3. The P2P server parses the TCP packets sent by PC1 and obtains and saves
PC1's public IP address and port number.
4. When PC1 sends the P2P server a request to search for a file, the P2P
server replies with information about all the file owners, including their
public IP addresses and port numbers. This example uses PC2 as a file owner.
5. PC1 sends the P2P server a request to download the file saved on PC2.
6. The P2P server forwards the download request to PC2.
7. Upon receipt of the request, PC2 establishes a UDP connection to PC1 and
sends packets to PC1 over the UDP connection.
8. PC1 downloads the file from PC2. P2P transmission between PC1 and PC2 is
complete.
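The PCP request that CPE1 sends in step b is a MAP request. The following is a minimal sketch of its wire format per RFC 6887 (PCP version 2); the helper name and field values are illustrative and not part of any Huawei software:

```python
import os
import socket
import struct

def build_pcp_map_request(client_ipv6, internal_port,
                          lifetime=7200, protocol=17, suggested_port=0):
    """Build an RFC 6887 PCP MAP request (sketch).

    Common header (24 bytes): version, R bit + opcode, reserved,
    requested lifetime, PCP client IP address.
    MAP opcode data (36 bytes): 12-byte mapping nonce, protocol,
    reserved, internal port, suggested external port, suggested
    external IP address (all zeros = let the server choose).
    """
    header = struct.pack("!BBHI", 2, 1, 0, lifetime)          # version 2, MAP opcode = 1
    header += socket.inet_pton(socket.AF_INET6, client_ipv6)  # PCP client IP
    body = os.urandom(12)                                     # mapping nonce
    body += struct.pack("!B3xHH", protocol, internal_port, suggested_port)
    body += bytes(16)                                         # suggested external IP: any
    return header + body
```

The CGN device (PCP server) answers with a response carrying the allocated public IP address and port number, which CPE1 relays to PC1 as described in steps c and d.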
NOTE
CR: Core Router; SR: Service Router
PCP Deployment
PCP can be deployed in the integrated and centralized NAT444 and DS-Lite
scenarios. PCP can be smoothly implemented without changes in the existing
IPv4-to-IPv6 transition infrastructure or service deployment.
NOTE
If the internal server function is enabled on a CGN device (PCP server) in addition to
the PCP function, the CGN device uses the NAT mappings contained in NAT server
entries instead of generating new NAT mappings.
Context
PCP is enabled by default in a NAT or DS-Lite instance. A CGN device (PCP server)
must be assigned an IP address so that a CPE (PCP client) can send a request to
establish a PCP connection to the CGN device.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run vsm on-board-mode disable
The dedicated board working mode is specified.
Step 3 Run license
The license view is displayed.
Step 4 Run active pcp vsuf slot slot-id
PCP is enabled on a specified service board.
Step 5 Run commit
The configuration is committed.
Step 6 Run quit
Exit the license view.
Step 7 Perform the following operations based on whether the CPE and CGN devices are
deployed in a NAT444 or DS-Lite scenario:
● In a NAT444 scenario, perform the following steps:
a. Run nat instance instance-name id id
The NAT instance view is displayed.
b. Run pcp enable
The PCP function is enabled in the NAT instance.
c. Run pcp server ipv4 ipv4-address { mask | mask-len }
An IPv4 address is configured for the PCP server.
NOTE
▪ Only one IPv4 address can be configured for a PCP server in each NAT
instance, and the specified address cannot be used as a physical interface
address, a loopback interface address, or an address in an address pool.
When users go offline and then come online again, to allow them to obtain the
same ports or port range as before they went offline, configure the port
reservation function, and configure the RADIUS server to carry the
HW-NAT-Start-Port (162) and HW-NAT-End-Port (163) private attributes so that
the reserved ports or port range can be delivered to the users.
f. (Optional) Run nat log session port-reservation
The device is enabled to record PCP port reservation logs.
NOTE
▪ The flow and user log functions must be enabled in the NAT instance and the
log host information must be configured. For configuration details, see
Configuring the NAT Log Function.
▪ If the port preempted by the PCP client is within the reserved port range
and the device is enabled to send port reservation log messages in the
instance, the device sends flow log messages. In other situations, the
device does not send flow log messages. User log messages can be used for
source tracing.
g. Run commit
The configuration is committed.
h. Run quit
Exit the NAT instance view.
● In a DS-Lite scenario, perform the following steps:
a. Run ds-lite instance instance-name id id
The DS-Lite instance view is displayed.
b. Run pcp enable
The PCP function is enabled in the DS-Lite instance.
c. Run pcp server { ipv4 ipv4-address { mask | mask-len } | ipv6 ipv6-address
prefix-length }
The IP address of the PCP server is configured.
NOTE
A single PCP server IPv4 address and a single PCP server IPv6 address can be
specified in each DS-Lite instance, and the specified address cannot be used as a
physical interface address, a loopback interface address, or an address in an
address pool.
d. (Optional) Run pcp version1 ignore prefer-failure
The device is configured to send PCPv1 response packets without the
prefer_failure option.
A non-Huawei CPE may fail to identify PCPv1 response packets carrying
the prefer_failure option. As a result, the non-Huawei CPE cannot
communicate with the CGN device in this situation. To prevent this issue,
run the pcp version1 ignore prefer-failure command to enable the CGN
device to send PCPv1 response packets without the prefer_failure option.
e. (Optional) Run port-reservation start-port to end-port [ extend well-
known-port ]
The port reservation function is enabled.
If the port requested by the PCP client is not within the port range, the
PCP client can apply for port preemption from the reserved port range.
NOTE
When users go offline and then come online again, to allow them to obtain the
same ports or port range as before they went offline, configure the port
reservation function, and configure the RADIUS server to carry the
HW-NAT-Start-Port (162) and HW-NAT-End-Port (163) private attributes so that
the reserved ports or port range can be delivered to the users.
f. (Optional) Run pcp port-set max-size size-value
The maximum size of a port set is configured.
If a PCP request packet sent by a client carries the Port Set Option field,
you can run this command to apply for a group of public network ports.
NOTE
▪ This function applies only to PCP request packets of the MAP protocol type.
▪ The flow and user log functions must be enabled in the DS-Lite instance and
the log host information must be configured. For configuration details, see
Configuring the DS-Lite Log Function.
▪ If the port preempted by the PCP client is within the reserved port range
and the device is enabled to send port reservation log messages in the
instance, the device sends flow log messages. In other situations, the
device does not send flow log messages. User log messages can be used for
source tracing.
h. Run commit
The configuration is committed.
i. Run quit
Exit the DS-Lite instance view.
4. Run quit
For PCP MAP request packets carrying the prefer_failure option and non-0 public
network IP addresses, PCP prefer_failure enhancement is enabled to allow the PCP
server to randomly assign a public IP address.
The minimum and maximum lifetime values for PCP connections are set.
After the lifetime elapses, a CGN device terminates PCP connections to release
resources. You can set the minimum and maximum lifetime values based on the
network traffic volume and the number of resources.
----End
Prerequisites
All PCP functions have been configured.
Procedure
● Run the display pcp vsuf status [ slot slot-id ] command to check PCP
license status information.
● Run the display nat session table pcp command to check PCP session
information.
● Run the display nat statistics command to check statistics on a service
board.
----End
Networking Requirements
In Figure 1-91, a private IPv4 user accesses the Internet after a BRAS performs
NAT for the user. The CGN device must have a static PCP server address
configured so that the user can establish a PCP connection to it.
Figure 1-91 PCP server with a static IP address in a distributed NAT444 scenario
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT functions.
2. Configure NAT user information and RADIUS authentication on the BRAS.
3. Configure a NAT traffic diversion policy.
4. Configure a NAT traffic conversion policy.
5. Configure a static PCP server in a NAT444 instance.
6. Verify the configuration.
Data Preparation
To complete the configuration, you need the following data:
● Name of a NAT instance
● NAT address pool's number and start and end IP addresses
● User group name
● ACL and UCL numbers
● NAT traffic diversion policy information
● Static PCP server address
NOTE
The static PCP server address must differ from a physical interface address, loopback
interface address, or an address in the address pool of the NAT444 instance.
Procedure
Step 1 Set the maximum number of sessions that can be created on the service board in
slot 9 to 6M.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS_NAT
[*HUAWEI] commit
[~BRAS_NAT] vsm on-board-mode disable
[*BRAS_NAT] commit
[~BRAS_NAT] license
[~BRAS_NAT-license] active nat session-table size 6 slot 9
[*BRAS_NAT-license] active nat bandwidth-enhance 40 slot 9
● Configure a NAT instance named nat1 with the ID of 1 and bind it to a NAT
board.
[~BRAS_NAT] service-location 1
[*BRAS_NAT-service-location-1] location slot 9
[*BRAS_NAT-service-location-1] commit
[~BRAS_NAT-service-location-1] quit
[~BRAS_NAT] service-instance-group group1
[*BRAS_NAT-service-instance-group-group1] service-location 1
[*BRAS_NAT-service-instance-group-group1] commit
[~BRAS_NAT-service-instance-group-group1] quit
[~BRAS_NAT] nat instance nat1 id 1
[*BRAS_NAT-nat-instance-nat1] service-instance-group group1
[*BRAS_NAT-nat-instance-nat1] commit
[~BRAS_NAT-nat-instance-nat1] quit
● Configure an address pool and set the start IP address to 11.11.11.101 and
the end IP address to 11.11.11.105.
[~BRAS_NAT] nat instance nat1
[~BRAS_NAT-nat-instance-nat1] nat address-group address-group1 group-id 1 11.11.11.101
11.11.11.105
[*BRAS_NAT-nat-instance-nat1] commit
[~BRAS_NAT-nat-instance-nat1] quit
[~BRAS_NAT-ucl-6001] quit
2. Configure a traffic classifier.
[~BRAS_NAT] traffic classifier classifier1
[*BRAS_NAT-classifier-classifier1] if-match acl 6001
[*BRAS_NAT-classifier-classifier1] commit
[~BRAS_NAT-classifier-classifier1] quit
3. Define a traffic behavior behavior1 and bind it to the NAT instance nat1.
[~BRAS_NAT] traffic behavior behavior1
[*BRAS_NAT-behavior-behavior1] nat bind instance nat1
[*BRAS_NAT-behavior-behavior1] commit
[~BRAS_NAT-behavior-behavior1] quit
4. Configure a NAT traffic policy named policy1 and associate the ACL-based
traffic classification rules with the traffic behavior.
[~BRAS_NAT] traffic policy policy1
[*BRAS_NAT-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*BRAS_NAT-trafficpolicy-policy1] commit
[~BRAS_NAT-trafficpolicy-policy1] quit
5. Apply the NAT traffic diversion policy in the system view.
[~BRAS_NAT] traffic-policy policy1 inbound
[*BRAS_NAT] commit
[~BRAS_NAT] quit
----End
Configuration File
BRAS_NAT configuration file
#
sysname BRAS_NAT
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
interface Virtual-Template1
ppp authentication-mode auto
#
interface GigabitEthernet0/2/1.1
user-vlan 1
pppoe-server bind Virtual-Template 1
bas
access-type layer2-subscriber default-domain authentication isp1
authentication-method ppp
#
ip pool pool1 bas local
gateway 10.110.10.101 255.255.255.0
section 1 10.110.10.1 10.110.10.100
dns-server 192.168.7.252
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
active pcp vsuf slot 9
#
service-location 1
location slot 9
#
service-instance-group group1
service-location 1
#
nat instance nat1 id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.101 11.11.11.105
rule 1 permit ip source 10.110.10.0 0.0.0.255
nat outbound 3001 address-group address-group1
pcp server ipv4 10.1.1.1 255.255.255.255
#
user-group group1
#
acl 3001
rule 10 permit ip source 10.110.10.0 0.0.0.255
#
acl 6001
rule 1 permit ip source user-group group1
#
traffic classifier classifier1 operator or
if-match acl 6001 precedence 1
#
traffic behavior behavior1
nat bind instance nat1
#
traffic policy policy1
classifier classifier1 behavior behavior1 precedence 1
#
traffic-policy policy1 inbound
#
aaa
authentication-scheme auth1
authentication-mode RADIUS
#
accounting-scheme acct1
accounting-mode RADIUS
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool pool1
user-group group1 bind nat instance nat1
#
ip route-static 11.11.11.0 24 null 0
#
ospf 1
import-route static
#
return
Networking Requirements
In Figure 1-92, a private IPv4 user traverses an IPv6 network and accesses the
IPv4 Internet after a CGN device performs DS-Lite translation for the user. The
CGN device must have a static IPv4 PCP server address (10.1.1.1) configured so
that the private network user can establish a PCP connection.
Figure 1-92 PCP server with a static IP address in a distributed DS-Lite scenario
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic license functions.
2. Configure a DS-Lite instance and bind it to a DS-Lite board.
3. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
4. Configure a DS-Lite address pool with addresses ranging from 11.11.11.100 to
11.11.11.110.
5. Configure a static PCP server in the DS-Lite instance.
6. Configure DS-Lite user information and RADIUS authentication on the BRAS.
7. Configure a traffic diversion policy for the DS-Lite tunnel.
8. Bind the DS-Lite tunnel to the address pool.
9. Configure interfaces and a routing protocol.
10. Enable the DS-Lite device to advertise the local IP route to the IPv6 network
and the address pool route to the IPv4 network.
Data Preparation
To complete the configuration, you need the following data:
The static PCP server address must differ from a physical interface address, loopback
interface address, or an address in the address pool of the DS-Lite instance.
Procedure
Step 1 Configure basic license functions.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] license
[~BRAS-license] active ds-lite vsuf slot 9
[*BRAS-license] active nat session-table size 16 slot 9
[*BRAS-license] active nat bandwidth-enhance 40 slot 9
[*BRAS-license] active pcp vsuf slot 9
[*BRAS-license] commit
[~BRAS-license] quit
Step 3 Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] local-ipv6 2001:DB8::1 prefix-length 128
[*BRAS-ds-lite-instance-ds-lite1] remote-ipv6 2001:DB8:2::2 prefix-length 96
[*BRAS-ds-lite-instance-ds-lite1] commit
[~BRAS-ds-lite-instance-ds-lite1] quit
Step 4 Configure a DS-Lite address pool with addresses ranging from 11.11.11.100 to
11.11.11.110.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
NOTICE
In the IPv6 address pool view of a BRAS, the DNS server address and AFTR
name must be specified. The AFTR name identifies a DS-Lite server. The
following conditions must be met so that the DS-Lite tunnel between the DS-
Lite device and CPE can be established:
– When the BRAS uses the ND mode to assign an IP address to a WAN
interface on the CPE, the AFTR name and DNS information must be
configured in the PD address pool. Otherwise, the DS-Lite tunnel between
the DS-Lite device and CPE cannot be established.
– After the AFTR name is configured on the BRAS, the mapping between
the AFTR name and the local IP address of the DS-Lite instance must be
configured on the DNS server. The DS-Lite device advertises its local IP
address to the CPE.
3. Configure a traffic behavior and bind the traffic behavior to the DS-Lite
instance named ds-lite1.
[~BRAS] traffic behavior b1
[*BRAS-behavior-b1] ds-lite bind instance ds-lite1
[*BRAS-behavior-b1] commit
[~BRAS-behavior-b1] quit
4. Configure a DS-Lite traffic diversion policy and associate the IPv6 ACL-based
traffic classification rule with the traffic behavior.
[~BRAS] traffic policy p1
[*BRAS-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS-trafficpolicy-p1] commit
[~BRAS-trafficpolicy-p1] quit
2. Associate the ACL rule with the DS-Lite address pool. In the DS-Lite instance
named ds-lite1, bind the IPv6 ACL numbered 3000 to the address pool named
group1.
[~BRAS] ds-lite instance ds-lite1
[*BRAS-ds-lite-instance-ds-lite1] ds-lite outbound 3000 address-group group1
[*BRAS-ds-lite-instance-ds-lite1] commit
[~BRAS-ds-lite-instance-ds-lite1] quit
Step 10 Enable the DS-Lite device to advertise the local IP route to the IPv6 network and
the address pool route to the IPv4 network. Import the local IP route and address
pool route to the IS-IS routing table. In this example, UNRs are used as the local IP
route and address pool route.
[~BRAS] isis 1000
[~BRAS-isis-1000] ipv6 import-route unr
[*BRAS-isis-1000] commit
[~BRAS-isis-1000] quit
[~BRAS] isis 100
[~BRAS-isis-100] import-route unr
[*BRAS-isis-100] commit
[~BRAS-isis-100] quit
Step 11 After completing the preceding configurations, run the following command. The
command output shows that the DS-Lite device has established IS-IS neighbor
relationships with other devices, and the neighbor relationship status is up. In
addition, the CPE is routable to the local IP address and the addresses in the
address pool.
[~BRAS] display ipv6 routing-table 2001:DB8::1
Routing Table : Public
Summary Count : 1
----End
Configuration File
DS-Lite device configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active ds-lite vsuf slot 9
active pcp vsuf slot 9
active nat session-table size 16 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
interface Virtual-Template1
ppp authentication-mode auto
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
isis ipv6 enable 1000
#
interface GigabitEthernet0/2/2
undo shutdown
ip address 1.1.1.1 255.255.255.0
isis enable 100
#
ipv6 prefix pre1 local
prefix 2001:DB8:2::/96
#
ipv6 pool pool1 bas local
prefix pre1
dns-server 2001:DB8::1:2 //Configure the IPv6 DNS server address. In a DS-Lite scenario, setting this
parameter in the local address pool is recommended. When a BRAS assigns an IP address in ND mode to a
WAN interface on the CPE, the AFTR name and DNS information must be configured in the PD address
pool.
aftr-name www.huawei.com //Configure the AFTR name. The AFTR is the DS-Lite server. After the AFTR
name is configured on the DS-Lite server, the configuration must be advertised to the CPE. Otherwise, the
DS-Lite tunnel between the CPE and the DS-Lite server cannot be established. The notification sent by the
DNS server contains the mapping between the AFTR name and the local IP address configured in the
DS-Lite instance.
#
service-location 1
location slot 9 backup slot 10
#
service-instance-group 1
service-location 1
#
#
isis 1000
network-entity 10.0000.0000.0002.000
#
ipv6 enable topology ipv6
ipv6 import-route unr
#
#
user-group group1
#
aaa
authentication-scheme auth1
authentication-mode radius
#
accounting-scheme acct1
accounting-mode radius
#
domain isp1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ipv6-pool pool1
user-group group1 bind ds-lite instance ds-lite1
#
return
Definition
NAT64 is a network address translation technology that translates IPv6 addresses
into IPv4 addresses.
Purpose
NAT64 is mainly applicable to the later phase of IPv4-to-IPv6 transition. On the
networks where IPv6 is the mainstream, new IPv6 single-stack users can access
the remaining IPv4 services across the IPv6 network.
On the networks where IPv4 is the mainstream, carriers can also directly deploy
IPv6 networks. New IPv6 single-stack users can access IPv4 services using NAT64.
The main reasons include:
● Carrier networks are experiencing IPv4 address shortage or inappropriate IPv4
address allocation, and therefore fail to meet access requirements of a large
number of new users.
● Carriers want to gradually deploy IPv6 networks to implement a smooth
transition from IPv4 to IPv6. For example, carriers deploy IPv6 on access and
core networks and allow all new users to access IPv6 networks while allowing
existing IPv4 users to gradually transition to IPv6 networks.
● Emerging carriers directly deploy IPv6 networks to reduce costs.
● Mobile phones do not support the dual stack because of high power
consumption and poor heat dissipation. During the IPv4-to-IPv6 transition,
mobile phone users may need to access IPv4 services across IPv6 networks.
Benefits
Benefits to carriers
● NAT64 is future-proof and fundamentally solves deployment difficulties and
high maintenance cost issues stemming from IPv4 address shortage or
inappropriate IPv4 address allocation, which facilitates network evolution
towards IPv6-only networks.
● NAT64 is easy to deploy and use. Carriers assign IPv6 addresses, without
maintaining IPv4 addresses.
● NAT64 protects internal networks against external attacks, improving security.
In this case, the NAT64 device identifies destinations based on IPv6 prefixes that
have been defined for NAT64 processing.
– If the IPv6 prefix carried in a packet is the same as that defined on the
NAT64 device, the packet is destined for an IPv4 network. After the
NAT64 device processes the packet, the packet is forwarded to the IPv4
network.
– If the IPv6 prefix carried in a packet differs from that defined on the
NAT64 device, the packet is destined for an IPv6 network. This packet is
forwarded to the IPv6 network without being processed by NAT64.
The NAT64 device advertises the route with the defined IPv6 prefix. IPv4
packets that IPv6 terminals send and are destined for IPv4 networks are
directed to the NAT64 device over the advertised route.
● NAT64 translation policy
NAT64 uses ACLs to control the scope in which NAT64 address translation
takes effect. Only data packets matching ACL rules can be processed by
NAT64. Private network terminals whose packets match addresses specified in
an ACL can access IPv4 networks. A NAT64 translation policy can flexibly
control terminals' access to the IPv4 networks.
ACL rules are defined based on both IPv6 headers and upper layer protocol
headers in IPv6 packets. A NAT64 device permits or denies IPv6 data packets
matching the ACL rules. The NAT64 device can translate addresses only for
the packets that match ACL rules before the packets reach a public IPv4
network, which improves network security.
Translation association enables a NAT64 device to associate an address pool
with an ACL so that only IPv6 packets matching ACL rules are processed using
addresses in the address pool. Before forwarding data packets from a private
network to a public network, a NAT64 device matches the packets with the
ACL, searches for an IP address pool associated with the ACL, and translates
the source private addresses in the matching packets to public addresses in
the address pool.
For packets that are sent from a private network to a public network and are
subject to address translation, the translation association function enables a
NAT64 device to replace the private IPv6 address and port number in each packet
with a public IPv4 address and port number for each private network host. For
data sent from a public IPv4 network to a private IPv6 network, the NAT64
device translates the public IPv4 address and port number back to the private
IPv6 address and port number.
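A minimal model of such a translation table is sketched below, assuming a single public address and sequential port allocation; the class name and allocation policy are invented for illustration and do not reflect the device's implementation:

```python
import itertools

class Nat64Table:
    """Sketch of a NAT64 translation table: maps a (private IPv6
    address, port) pair to a public (IPv4 address, port) pair on first
    use and reuses the entry afterwards, in both directions."""

    def __init__(self, public_ips):
        self.free_ports = {ip: itertools.count(2000) for ip in public_ips}
        self.out = {}   # (v6 addr, v6 port) -> (v4 addr, v4 port)
        self.back = {}  # (v4 addr, v4 port) -> (v6 addr, v6 port)

    def translate_out(self, v6_addr, v6_port):
        key = (v6_addr, v6_port)
        if key not in self.out:
            v4 = next(iter(self.free_ports))          # pick a public IPv4 address
            mapped = (v4, next(self.free_ports[v4]))  # and its next free port
            self.out[key] = mapped
            self.back[mapped] = key
        return self.out[key]

    def translate_in(self, v4_addr, v4_port):
        return self.back.get((v4_addr, v4_port))      # None if no session exists
```

Echoing the NAT64 example in this chapter, translating (2001:DB8::1, 1500) outward yields a public mapping such as (192.168.113.1, 2000), and return traffic to that public pair is translated back to the private pair.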
The NAT64 device cannot connect to a DNS64 server that uses a non-0 suffix.
Relevant standards recommend a suffix of 0 for a DNS64 IPv6 prefix.
4. The PC sends packets with source address 2001:DB8::1 and source port
number 1500 to a destination with destination address 64:FF9B::0A0A:B and
destination port number 80.
5. Packets are forwarded to the NAT64 device.
NOTE
The NAT64 device advertises a route destined for 64:FF9B::/96 to direct traffic with the
same destination prefix to the NAT64 device.
6. The NAT64 device removes the IPv6 prefix (64:FF9B) from the destination
address of IPv6 packets and translates the source IP address and source port
to 192.168.113.1 and 2000, respectively, in IPv4 packets and forwards them to
the IPv4 network.
NOTE
When private network traffic is processed by NAT64 in the forward direction, the
NAT64 device creates an entry in the NAT64 mapping table. The entry contains the
following information:
● Address mapping: A private address of 2001:DB8::1 is mapped to a public address
of 192.168.113.1.
● Port mapping: A private port number of 1500 is mapped to a public port number
of 2000.
If public network traffic is sent to the private network, traffic hits the entry and NAT64
reversely translates IPv4 information to IPv6 information. Obtained IPv6 packets are
sent to the IPv6 network.
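The prefix removal in step 6 follows the RFC 6052 /96 embedding: the low-order 32 bits of the IPv6 destination carry the IPv4 address. A sketch using Python's standard ipaddress module:

```python
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # well-known NAT64 prefix

def extract_ipv4(ipv6_str):
    """Recover the embedded IPv4 destination from a NAT64 IPv6 address.

    Valid for a /96 prefix, where the IPv4 address occupies the low
    32 bits of the IPv6 address (RFC 6052).
    """
    addr = ipaddress.IPv6Address(ipv6_str)
    if addr not in NAT64_PREFIX:
        raise ValueError("not covered by the NAT64 prefix")
    return ipaddress.IPv4Address(int(addr) & 0xFFFFFFFF)

print(extract_ipv4("64:ff9b::0a0a:b"))  # 10.10.0.11
```

Addresses outside the defined prefix are rejected, mirroring the device's decision to forward such packets natively to the IPv6 network.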
NAT64 port allocation is triggered when a NAT64 device receives the first valid IP
packet from an IPv6 terminal.
Port Pre-allocation
In port pre-allocation (port range) mode, a NAT64 device pre-allocates an IPv4
address and a port range to an IPv6 address when it first maps the IPv6 address
to an IPv4 address. The IPv4 address and the ports in the range are then used in
NAT64 mappings for that IPv6 address. In this mode, the NAT64 device generates
a log only when it pre-allocates an IPv4 address and a port range to an IPv6
address. Therefore, the number of generated logs is small.
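The logging benefit can be seen in a small sketch: the first packet from an IPv6 address reserves a whole port block and produces a single log entry, and later sessions reuse the block without further logging. The block size, base port, pool address, and names here are all illustrative:

```python
def preallocate(ipv6_addr, alloc, logs, pool_ip="192.168.113.1",
                block_size=64, base=2048):
    """Pre-allocate a port block per IPv6 address (port-range mode sketch).

    alloc: dict mapping an IPv6 address to its (public IP, port range).
    logs: one entry is appended only when a new block is reserved,
    so N sessions from one address generate a single log.
    """
    if ipv6_addr not in alloc:
        start = base + len(alloc) * block_size   # naive block placement
        alloc[ipv6_addr] = (pool_ip, range(start, start + block_size))
        logs.append(f"{ipv6_addr} -> {pool_ip} ports {start}-{start + block_size - 1}")
    return alloc[ipv6_addr]
```

Repeated calls for the same IPv6 address return the same block and leave the log list unchanged, which is exactly why this mode keeps log volume low.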
NAT64 Server
The NAT64 function allows an IPv6 client to proactively access an IPv4 server. In
some usage scenarios, IPv4 users need to access an IPv6 server. IPv4 users,
however, cannot access IPv6 hosts in NAT64 mode.
The internal server function in a NAT64 instance solves this problem. After the
mapping between an IPv4 address and a pair of an IPv6 address and an IPv6
prefix is configured on a NAT64 device, the device translates the IPv4 address
into the IPv6 address, which enables IPv4 hosts to access servers on the IPv6
network.
In Figure 1-94, the internal server function is implemented for the NAT64 device
to map 2001:db8:cafe::101:101 of an IPv6 host to 1.1.1.1 so that an IPv4 host can
initiate a request to the IPv4 address of the IPv6 server.
NAT64 ALG
The NAT64 application level gateway (ALG) provides transparent translation for
some application layer protocols in NAT64.
Packets of some protocols such as ICMP and FTP carry IP addresses or port
numbers in their payload. After NAT is performed, the IP address and port number
in the header are different from those in the payload, causing communication
errors. For example, an FTP server using an internal IP address may be required to
send its IP address to an external network host when communicating with the
external network host. The internal IP address is encapsulated in the Data field of
IP packets, which cannot be translated by NAT64. The external network host then
uses the internal IP address carried in the IP packet payload and finds that the FTP
server is unreachable.
A good way to solve the NAT64 issue for these special protocols is to use the
Application Level Gateway (ALG) function. As a special conversion agent for
application protocols, the NAT64 ALG interacts with the NAT64 device to establish
states. It uses NAT64 state information to change the specific data in the Data
field of IP packets and complete other necessary work, so that application
protocols can run across internal and external networks.
For example, when an error occurs in packet A which is sent from a host on a
private network to a public network, an ICMP unreachable packet is returned. The
ICMP packet carries the header of the error packet A. Because the address is
translated by a NAT64 device before packet A is sent, the source address is not the
actual address of the host. If ICMP ALG is enabled, the ALG interacts with the
NAT64 device before the ICMP packet is forwarded. The ALG translates the
address in the Data field of packet A to the actual address of the host and
completes other necessary work, so the NAT64 device can send the ICMP packet
to the host.
NAT64 supports ALG for ICMP, FTP, HTTP, and UDP-based DNS.
1. An IPv6 terminal sends a DNS AAAA query to the IPv4 network.
2. After the query reaches the DNS ALG of the NAT64 device, the DNS ALG
converts the AAAA record into an A record and sends it to the DNS server on
the IPv4 network.
3. The DNS server on the IPv4 network resolves the domain name and returns
the result, that is, the IPv4 address mapped to the domain name.
4. When the result reaches the DNS ALG of the NAT64 device, the DNS ALG
converts the A-record response into an AAAA-record response, translates the
IPv4 address into an IPv6 address, and saves the mapping locally.
5. After receiving the IPv6 destination address resolution result from the NAT64
device, the IPv6 terminal can access the IPv4 destination address.
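Step 4's A-to-AAAA conversion amounts to embedding the resolved IPv4 address into the NAT64 prefix. A sketch for the /96 case, assuming the RFC 6052 well-known prefix 64:ff9b::/96 (deployments may use a network-specific prefix, and other prefix lengths embed the address at different bit positions):

```python
import ipaddress

def dns64_synthesize(ipv4_str, prefix="64:ff9b::/96"):
    """Synthesize an AAAA answer from an A answer by placing the IPv4
    address in the low 32 bits of the NAT64 /96 prefix (RFC 6052)."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) |
                                 int(ipaddress.IPv4Address(ipv4_str)))

print(dns64_synthesize("10.10.0.11"))  # 64:ff9b::a0a:b
```

This is the inverse of the prefix removal the NAT64 device performs on the data path, so traffic sent to the synthesized address is naturally routed to the NAT64 device.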
User sessions, which store users' NAT mapping relationships, are critical to a
NAT64 device when NAT64 is performed. Therefore, user session resources need to
be protected against attacks to ensure resource efficiency.
NAT64 Backup
In most cases, two boards are configured for a NAT64 instance for high reliability.
The CPU on one board is the active CPU, whereas the CPU on the other board is
the standby CPU. Hot standby is implemented. In this mechanism, the active
NAT64 service board processes NAT64 user services, whereas the standby board is
in the "listening" state. If the active board fails, the standby board takes over
NAT64 user services.
The hot standby mechanism ensures data consistency between the active and
standby boards. If a fault occurs on the active board, the system automatically
performs an active/standby switchover, and the standby board takes over services
from the active board. This ensures that services run properly and that users do
not detect the fault.
NAT64 Logs
Purpose
When users access an IPv4 network through a NAT64 device, the source IP
addresses of users are translated addresses. It is difficult to accurately locate the
hosts or users who access the network, which reduces network security.
NAT64 logs can address this problem. NAT64 logs record information about
NAT64 flows so that administrators can learn addresses before NAT64 translation
to query and trace network activities and operations. This improves network
availability and security.
The NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M support NAT64
flow logs.
Flow Logs
Flow logs are generated when a NAT64 device establishes and ages sessions. They
contain the source IP address, source port number, destination IP address, post-
translation source IP address, post-translation source port number, and protocol
type, and they are sent to a log server. Flow logs carry abundant information
and are generated in large volumes. They are used not only for source tracing
but also to provide information about the external networks that users access.
Flow logs support the binary format and are transmitted through a configured
UDP port.
Flow logs on the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M
support the syslog format and eLog format.
For details about the preceding log formats, see "NAT Logging" in HUAWEI
NetEngine 8100 M14/M8, NetEngine 8000 M14K/M14/M8K/M8/M4 & NetEngine
8000E M14/M8 series Router Feature Description – NAT.
NAT64 Deployment
NAT64 applies to the latter phase of IPv6 transition in which IPv6 is the
mainstream application. New users connected to an IPv6 network can access the
remaining IPv4 services across the IPv6 network.
On the network shown in Figure 1-96, the CGN device is deployed on the SR/CR
side, and the CPE, CR, and CGN device communicate with each other over an IPv6
network. The IPv6 addresses of terminal users who access the network over IPv6
are translated into public IPv4 addresses so that the users can access IPv4
services.
Terms
Term Description
CR Core Router
Currently, IPv4 takes the lead. Carriers can directly deploy IPv6 networks to allow
new users connected to an IPv6 network to access IPv4 services using NAT64.
NAT64 applies to the following situations:
● Carrier networks are experiencing IPv4 address shortages or inappropriate
IPv4 address allocations, and therefore fail to meet access requirements of a
large number of new users.
● Carriers want to gradually deploy IPv6 networks to implement a smooth
transition from IPv4 to IPv6. For example, carriers deploy IPv6 on access and
core networks and allow all new users to access IPv6 networks while allowing
existing IPv4 users to gradually transition to IPv6 networks.
● Emerging carriers directly deploy IPv6 networks to reduce costs.
● Mobile phones do not support the IPv4/IPv6 dual stack because of high power
consumption and poor heat dissipation. During the transition from IPv4 to
IPv6, mobile phone users may need to access IPv4 services across IPv6
networks.
Usage Scenario
NAT64 bandwidth and session table resources are controlled using a license. A
device is assigned no NAT64 bandwidth and session table resources by default.
Before you configure NAT64 functions, adjust NAT64 bandwidth and session table
resources.
Pre-configuration Tasks
Before configuring the license function, load a license file on the NAT64 device
and ensure that the involved service boards work properly.
Context
Before you configure basic NAT64 functions, apply for a NAT64 license for a
service board and run the active nat64 vsuf command in the license view.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
----End
Prerequisites
Bandwidth resources have been assigned, and the service board is working
properly.
Procedure
● Run the display nat session-table size [ slot slot-id ] command to check
information about NAT64 session entries assigned to each service board.
● Run the display nat bandwidth [ slot slot-id ] command to check
information about bandwidth resources configured in the license.
● Run the display nat64 vsuf status [ slot slot-id ] command to check NAT64
license status information.
----End
Usage Scenario
Basic NAT64 functions are prerequisites for NAT64 configurations, including
enhanced NAT64 configurations. The configuration of basic NAT64 functions
includes the following operations:
● A NAT64 IPv6 prefix is configured. The destination address of IPv6 service
data packets sent to a NAT64 Carrier Grade NAT (CGN) device is an IPv6
address, regardless of whether the packets are sent to the IPv4 or IPv6
network. To differentiate IPv4 and IPv6 packets, the NAT64 CGN device
processes the packets based on the IPv6 prefix.
Pre-configuration Tasks
Before you configure basic NAT64 functions, complete the following tasks:
● Ensure that service boards have been installed and working properly.
● Configure data link layer protocol parameters and IP addresses for interfaces
so that the data link layer protocol on each interface can go Up.
● (Optional) Configure a NAT64 device to properly interwork with a RADIUS
server.
Context
The creation of NAT64 instances is the basis of NAT64 configuration because
NAT64 instances are used in many subsequent configurations. For example, you
may configure the CPUs of the master and backup service boards in a NAT64
instance to work in inter-board hot backup mode; you may also configure the
CPUs of a service board on a master device and a service board on a slave device
in a NAT64 instance to work in inter-chassis hot backup mode.
The NAT64 instance needs to be bound to a service-instance group so that the
instance is indirectly bound to CPUs of service boards. To bind a NAT64 instance to
the CPU of a service board, create a service-location group, bind the group to the
CPU of the service board, bind the service-instance group to the service-location
group, and bind the NAT64 instance to the service-instance group in the NAT64
instance view.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-location service-location-id
A service-location group is created, and the service-location group view is
displayed.
Step 3 Run location { follow-forwarding-mode | slot slot-id }
The CPU of the service board is bound in the service-location group view.
The location follow-forwarding-mode and location slot slot-id commands vary
according to device types.
----End
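As a sketch of the full binding chain described above (the instance, group, and slot names are illustrative, and the view prompts for the service-instance group are assumptions), the three bindings can be configured as follows:

```
<HUAWEI> system-view
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9    //Bind the CPU of the service board in slot 9
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
[~HUAWEI] service-instance-group instance-group1
[*HUAWEI-instance-group-instance-group1] service-location 1    //Bind the service-instance group to the service-location group
[*HUAWEI-instance-group-instance-group1] commit
[~HUAWEI-instance-group-instance-group1] quit
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] service-instance-group instance-group1    //Bind the NAT64 instance to the service-instance group
[*HUAWEI-nat64-instance-nat1] commit
```

With this chain in place, the NAT64 instance is indirectly bound to the CPU of the service board in slot 9.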
Context
A NAT64 address pool is essential to NAT64 implementation. When IPv6 user data
packets are sent to a NAT64 CGN device, an IPv4 address must be allocated from
the NAT64 address pool to the packets so that the packets are transmitted over an
IPv4 network. NAT64 supports the following address translation methods:
● Port address translation (PAT): NAT64 translates both IP addresses and port
numbers between private and public networks. PAT implements more efficient
IP address sharing and is the most commonly used mode in address
translation.
● No-Port Address Translation (No-PAT): Only the IP address in a packet is
replaced.
The following address translation modes are supported based on NAT64 address
translation methods:
● Symmetric mode: also called the 5-tuple mode. A 5-tuple entry contains a
source IP address, source port number, protocol type, destination IP address,
and destination port number and is used to allocate addresses and filter
packets. If packets carrying the same source IP address and port number but
different destination IP addresses and port numbers are translated by a device
using NAT64, the source IP address and port number in the packets are
translated into different external IP addresses and port numbers. In addition,
the device allows only the external network hosts with these destination IP
addresses to use the translated IP addresses and port numbers to visit internal
network hosts.
● Full-cone mode: also known as the 3-tuple mode that is not concerned with
destination addresses or destination port numbers. In this mode, a device
assigns IP addresses and filters packets based on the source address, source
port number, and protocol type. If packets carrying the same source IP
address and port number are translated by a device using NAT64, the source
IP address and port number in the packets are translated into the same
external IP address and port number. In addition, the device allows external
network hosts to use the translated IP addresses and port numbers to visit
internal network hosts.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name [ id id ]
The NAT64 instance view is displayed.
Step 3 Create a NAT64 address pool and specify the mode of assigning address ranges.
Perform either of the following operations:
● Run the nat64 address-group address-group-name group-id group-id start-
address { mask { mask-length | mask-ip } | end-address } [ vpn-instance vpn-
instance-name ] [ no-pat ] command to create an address pool by specifying
an address range in the NAT64 instance view.
● Run the nat64 address-group address-group-name group-id group-id [ vpn-
instance vpn-instance-name ] [ no-pat ] command to enter the NAT64
address pool view and run the section section-id start-address { mask
{ mask-length | mask-ip } | end-address } command to specify an address
range in the NAT64 address pool view.
NOTE
A NAT64 address pool cannot contain a DHCP server address. You need to properly allocate
address segments. Otherwise, NAT traffic cannot be forwarded.
----End
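For example (addresses and names are illustrative), an address pool can be created in the NAT64 instance view by directly specifying an address range:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 address-group address-group1 group-id 1 11.11.11.100 11.11.11.105    //Pool of six public IPv4 addresses
[*HUAWEI-nat64-instance-nat1] commit
```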
Context
The destination address of IPv6 user data packets sent to a NAT64 CGN device is
an IPv6 address, regardless of whether the packets are sent to the IPv4 or IPv6
network. In this situation, the NAT64 CGN device identifies packets based on IPv6
prefixes that have been defined for NAT64 processing.
● If a device receives an IPv6 packet with a prefix defined on the NAT64 device,
the packet is destined for an IPv4 network. After NAT64 processes the packet,
the packet is forwarded to the IPv4 network.
● If a device receives an IPv6 packet with a prefix different from that defined on
the NAT64 device, the packet is destined for an IPv6 network. This packet is
forwarded to the IPv6 network without being processed by NAT64.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
Step 3 Run nat64 prefix ipv6-prefix prefix-length prefix-length prefix-id [ no-pat ]
A NAT64 IPv6 prefix length is set.
NOTE
The prefix configured on a DNS64 server must be the same as the NAT64 prefix, and the
suffix must be 0.
A NAT64 prefix in PAT mode can be associated with a NAT64 address pool only in PAT
mode, and a NAT64 prefix in No-PAT mode can be associated with a NAT64 address pool
only in No-PAT mode.
----End
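For example, using the well-known prefix 64:FF9B::/96 with prefix ID 1 (instance name illustrative):

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 prefix 64:FF9B:: prefix-length 96 1    //Must match the prefix configured on the DNS64 server
[*HUAWEI-nat64-instance-nat1] commit
```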
Context
NAT64 supports three port allocation modes:
● Port pre-allocation: A NAT64 device pre-allocates an IPv4 address and a port
range to an IPv6 address when mapping the IPv6 address to the IPv4 address.
The IPv4 address and ports in the port range are used in the NAT64 mapping
of the IPv6 address.
● Semi-dynamic port pre-allocation: When a host with an IPv6 address accesses
an IPv4 network, a NAT64 device allocates an initial port range and then
allocates the ports within the port range to the IPv6 address. If the number of
ports used by the terminal exceeds the initial port range, the device allocates
an extended port range. The maximum number of times the port range is
extended determines the number of times the extended port range can be
allocated.
● Dynamic port allocation: When mapping a private IP address to a public IP
address, a NAT64 device pre-allocates a public IP address and a port range
with a fixed size of 64 to the private IP address. If the number of used ports
exceeds the initial port range, the NAT64 device assigns another port range
with a fixed size of 64 to the user. The allocation process repeats without a
limit on the maximum number of extended port ranges.
Configure a port allocation mode based on the service scale during network
deployment.
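As a sketch only: the port-range parameter is referenced later in this section, but the exact keyword layout of the port allocation command is an assumption here and should be checked against the command reference. Port pre-allocation with a fixed per-user range might look like this:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] port-range 256    //Assumed syntax: pre-allocate a 256-port range per user
[*HUAWEI-nat64-instance-nat1] commit
```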
Procedure
Step 1 Run system-view
NOTE
If the port pre-allocation mode or incremental port allocation mode has been configured,
you can change the port-range parameter values in overriding mode. The change takes
effect only for new users.
NOTE
● By default, sessions of different protocols using the same private IP address cannot
share the same public port during port pre-allocation. After port reuse is enabled, for
the same private IP address, TCP sessions can share a public port with sessions of other
protocols.
● The port-reuse enable and port-single enable commands are mutually exclusive.
----End
Procedure
● Run the display service-location service-location-id command to check the
service-location configuration.
● Run the display service-instance-group service-instance-group-name
command to check configuration information of a service instance group.
● Run the display nat64 instance [ instance-name ] command to check the
configuration of a NAT64 instance.
----End
Usage Scenario
When multiple NAT64 instances are configured with the same prefix, you need to
use ACL rules in the inbound direction of user traffic to divert the traffic to the
correct service board, so that the private IPv6 addresses in user packets are
translated into public IPv4 addresses and users can access the IPv4 network.
Pre-configuration Tasks
Before configuring centralized NAT64 translation, complete the following task:
Context
A service board does not provide any interfaces. Therefore, an interface board
must distribute user traffic to a service board for NAT64 processing. You can
configure a traffic distribution policy so that packets matching the policy are
distributed to the NAT64 service board.
Procedure
Step 1 Configure a traffic classification rule.
1. Run system-view
4. Run commit
# In centralized NAT64 scenarios, apply the traffic policy to Layer 3 interfaces for
Layer 3 traffic sent by the network side.
1. Run system-view
1. Run system-view
----End
Context
A NAT64 conversion policy for user traffic distributed to a service board supports
either of the following modes:
● Match ACL rules:
An ACL and a NAT64 address pool must be specified. If packets match the
ACL, and the action specified in the ACL rule is permit, NAT64 translates the
packets using addresses in the NAT64 address pool.
● Not match ACL rules:
User traffic distributed to a service board does not need to match any ACL
rule. By default, NAT64 is performed for user traffic using addresses from a
specified NAT64 address pool.
Procedure
● Configure a NAT64 conversion policy in which ACL rules are used to filter
packets for translation.
a. Run system-view
A NAT64 conversion policy in which the ACL is bound to the address pool
is configured.
NOTE
When performing this step, ensure that the address range of the ACL rule used in
the conversion policy covers the address range of the ACL rule used in the traffic
diversion policy. Otherwise, service interruptions may occur. To prevent such
interruptions, use either of the following measures:
● Keep acl-number in this step consistent with that in the traffic diversion
policy.
● Configure a default address pool in this step for translating the addresses
unmatched by the configured traffic conversion policy. For example:
The ACL rule referenced by the traffic diversion policy is as follows:
#
acl ipv6 3000
rule 1 permit ipv6 source 2001:db8::1 64
#
acl ipv6 3999 //Default address pool configuration
rule 1 permit ipv6 //Default address pool configuration
The conversion policy is as follows:
nat64 instance nat1 id 1
nat64 address-group group1 group-id 1 10.10.1.0 10.10.1.254
nat64 address-group group2 group-id 2 10.10.1.255 10.10.1.255 //Default address pool
configuration
nat64 outbound 3000 address-group address-group1
nat64 outbound 3999 address-group address-group2 //Default address pool
configuration
d. Run commit
● Configure a NAT64 conversion policy in which ACL rules are not used to filter
packets for translation.
a. Run system-view
The system view is displayed.
b. Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
c. Run nat64 outbound any address-group address-group-name
A NAT64 conversion policy is configured so that packets are translated without
being matched against ACL rules.
d. Run commit
The configuration is committed.
----End
Prerequisites
NAT64 translation has been configured.
Procedure
● Run the display nat64 instance [ instance-name ] command to check the
configuration of a NAT64 instance.
● Run the display nat user-information command to check information about
online users.
----End
Usage Scenario
NAT64 can be configured to allow IPv6 users on a private network to access public
network IPv4 services and prevent IPv4 users from accessing IPv6 users because
the IPv4 users cannot obtain IPv6 user information. The NetEngine 8100 M,
NetEngine 8000E M, NetEngine 8000 M supports the internal server function in a
NAT instance, allowing external IPv4 users to communicate with IPv6 users by
accessing the public IPv4 address of a specified private IPv6 server.
Pre-configuration Tasks
Before configuring an IPv4 address used to access an internal IPv6 host, complete
the following tasks:
Usage Scenario
NAT64 allows IPv6 users to initiate access to IPv4 services. IPv4 users, however,
cannot access IPv6 users.
To address this problem, the internal server function can be configured on the
private network where a NAT64 device resides. The internal server function in a
NAT64 instance implements reverse translation from IPv4 addresses to IPv6
addresses based on statically configured mapping between IPv4 addresses and
pairs of IPv6 addresses and prefixes.
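As an illustrative sketch only: the addresses below are hypothetical, and the keyword layout of the internal server command is an assumption; see the command reference for the exact syntax. A static IPv4-to-IPv6 mapping in the instance view might look like this:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 server global 11.11.11.100 inside 2001:db8::10    //Assumed syntax: map the public IPv4 address to the internal IPv6 server
[*HUAWEI-nat64-instance-nat1] commit
```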
Procedure
Step 1 Run system-view
NOTE
The IP address of the internal server must be different from the IP address of a DHCP
server. Otherwise, a message about the address conflict is displayed.
----End
Procedure
● Run the display nat64 instance [ instance-name ] command to check the
configuration of a NAT64 instance.
Usage Scenario
ALG is short for application level gateway. Most application layer protocol packets
carry user IP addresses and port numbers. NAT64 translates only network layer
addresses and transport layer ports. Therefore, you need an ALG to translate the IP
addresses and port numbers carried in the Data field of application layer packets.
NAT64 ALG provides transparent translation for special application layer protocols.
NAT64 translates only the IP addresses contained in user data packets and the
port information in the Transmission Control Protocol (TCP)/User Datagram
Protocol (UDP) headers of data packets. For special protocols (for example, FTP),
the Data field in a packet contains IP address or port information. NAT64,
however, does not translate the IP address or port information in the Data field
of a packet. As a result, a protocol-specific connection fails to be established.
A good way to solve the NAT64 translation issue for these special protocols is to
use the ALG function. As a special translation agent for application layer protocols,
the ALG interacts with NAT64. The ALG uses NAT64 state information to change
the specific data in the Data field of IP packets so that application layer protocols
can run across internal and external networks.
Pre-configuration Tasks
Before configuring NAT64 ALG, complete the following tasks:
● Configure basic NAT64 functions.
● Configure centralized NAT64 translation.
Procedure
1. Run system-view
The system view is displayed.
2. Run nat64 alg server-address server-ip [ end-ip ] protocol { all | ftp | dns |
http }
An IPv4 server address list of NAT64 ALG is configured for application layer
protocols.
3. Run commit
The configuration is committed.
4. Run nat64 instance instance-name [ id id ]
The NAT64 instance view is displayed.
5. Run nat64 alg { all | dns | { ftp [ rate-threshold value ] } | http }
NAT64 ALG is enabled for an application layer protocol.
After an IPv4 server address or an IPv4 server address list of NAT64 ALG for a
specific or all application layer protocols is configured, the configuration can
take effect only if the ALG function for the application layer protocols is
enabled in the NAT64 instance view.
6. Run commit
The configuration is committed.
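Putting the steps above together (server address and instance name illustrative), enabling NAT64 ALG for FTP involves both the system view and the instance view:

```
<HUAWEI> system-view
[~HUAWEI] nat64 alg server-address 10.1.1.10 protocol ftp    //IPv4 FTP server address list for NAT64 ALG
[*HUAWEI] commit
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 alg ftp    //The server-address setting takes effect only after ALG is enabled here
[*HUAWEI-nat64-instance-nat1] commit
```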
Usage Scenario
You can deploy the NAT64 security function to guarantee secure operations of a
NAT64 device and prevent malicious attacks to the system.
Pre-configuration Tasks
Before you configure the NAT64 security function, complete the following tasks:
Context
If the number of established Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), Internet Control Message Protocol (ICMP) NAT64 sessions, or the
total number of NAT64 sessions of a user exceeds a configured threshold, a device
stops establishing such sessions. The limit helps prevent resource overconsumption
from resulting in a failure to establish connections for other users.
Procedure
Step 1 Run system-view
----End
Context
A NAT device checks whether the number of established Transmission Control
Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message
Protocol (ICMP) sessions or the total number of sessions involving the same
source or destination IP address exceeds the configured threshold. Then the NAT
device determines whether to restrict the initiation of new connections from the
source or destination IP address. This prevents individual users from consuming
excessive session table resources and causing the connection failure of other users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name [ id id ]
The NAT64 instance view is displayed.
Step 3 (Optional) Run nat64 reverse-session-limit enable
The limitation on the number of NAT64 reverse sessions that can be established is
enabled.
Step 4 Run nat64 reverse-session-limit { icmp | tcp | udp | total } limit-number
The maximum number of NAT64 sessions that can be established is set.
----End
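For example (the limit value and instance name are illustrative), to cap the number of reverse TCP sessions per user:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 reverse-session-limit enable
[*HUAWEI-nat64-instance-nat1] nat64 reverse-session-limit tcp 1024    //At most 1024 reverse TCP sessions
[*HUAWEI-nat64-instance-nat1] commit
```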
Setting the Rate at Which Packets Are Sent to Create a Flow for a User
A device can be configured to dynamically detect the traffic forwarding rate and
limit the rate at which packets are sent to create a flow for each user.
Context
A NAT64 device with a multi-core structure allows flow construction and
forwarding processes to share CPU resources. To minimize or prevent NAT64
packet loss and a CPU usage increase, the device has to maintain a proper ratio of
the forwarding rate to the flow creation rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name [ id id ]
The NAT64 instance view is displayed.
Step 3 (Optional) Run nat64 user-session create-rate limit enable
The limit on the rate at which packets are sent to create a user flow is set.
To disable this function, run the undo nat64 user-session create-rate limit
enable command.
Step 4 Run nat64 user-session create-rate rate
The rate at which packets are sent to create a flow on a NAT64 device is set.
Step 5 Run commit
The configuration is committed.
----End
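For example (the rate value is illustrative; its unit is as defined in the command reference), the steps above can be applied as follows:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 user-session create-rate limit enable
[*HUAWEI-nat64-instance-nat1] nat64 user-session create-rate 100    //Limit the rate of packets sent to create user flows
[*HUAWEI-nat64-instance-nat1] commit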
Setting the Maximum Number of Private Network Users Who Use Public IP
Addresses Translated by NAT64 to Get Online
If a device working in PAT mode creates a large number of NAT sessions for each
of the users sharing a public IP address, user traffic may fail to be forwarded. To
prevent this problem, set the maximum number of private network users who can
get online using the same public IP address.
Context
The limit on the number of private network users sharing a public IP address is
primarily used when dynamic port range allocation or per-port allocation is used
on a device working in PAT mode. When dynamic port range allocation or per-port
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
Step 3 Run nat64 ip access-user limit max-number
The maximum number of private network users who can get online using the
same NAT64 public IP address is set.
Step 4 Run commit
The configuration is committed.
----End
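For example (the limit value is illustrative), to allow at most eight private network users to share one public IPv4 address:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 ip access-user limit 8    //At most 8 users per public IP address
[*HUAWEI-nat64-instance-nat1] commit
```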
Context
The port filter function may cause the core router (CR) to discard returned
packets. A NAT64 service board translates a private source port into a filtered port
used to forward packets from a private network to a public network. After packets
are returned from the public network to the private network, the CR finds that the
packets' destination port is within a range of filtered ports and unexpectedly
discards the packets, which interrupts user services.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
Step 3 Run exclude-port { start-port [ to end-port ] } &<1-10>
A port number and a port range to be filtered are specified.
----End
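For example (the port range is illustrative), to keep the CR's filtered port range 4000-4095 out of NAT64 allocation:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] exclude-port 4000 to 4095    //These ports are never used as translated source ports
[*HUAWEI-nat64-instance-nat1] commit
```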
Procedure
● Run the display nat user-information command to check online user
information, including the configuration of NAT64 security.
----End
Usage Scenario
You can configure NAT64 maintainability to reinforce the device administrator's
capability to monitor NAT64 services in real time.
Pre-configuration Tasks
Before you configure NAT64 maintainability, complete the following tasks:
● Configure basic NAT64 functions.
● Configure centralized NAT64 translation.
Context
NAT64 logs are system information generated by a NAT64 device during NAT64
translation. The information contains basic user information and NAT64
translation information. The logs record internal users' access to external
networks. When internal users access an external network through a NAT64
device, they share a public IP address. For this reason, the users accessing the
external network cannot be located. The log function helps trace and record
internal users' access to external networks in real time, enhancing network
maintainability.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 4 Run nat64 log host host-ip-address host-port source source-ip-address source-
port [ name host-name ] [ vpn-instance vpn-instance-name ]
The time format configured using the nat64 time command takes precedence over that
configured using the nat64 time local command in the flexible log template view. If the
nat64 time command is run, the time format of the end time, start time, or timestamp
takes effect according to the configured format. If the nat64 time command is not
configured, the nat64 time local command takes effect. If neither the nat64 time nor
nat64 time local command is run, the default UTC time takes effect.
5. Enter the NAT64 flexible flow log template view and run the nat64 position
command to configure a NAT64 flexible flow log template.
6. Run quit
Return to the system view.
7. Run nat syslog descriptive format { type3 | flexible template session }
A flexible log template type is specified.
----End
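Assuming a log server at 192.168.100.2 listening on UDP port 9002 and a device source address of 192.168.100.1 (all values illustrative, and the instance view as the configuration view is an assumption), the log host from Step 4 might be configured as follows:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 log host 192.168.100.2 9002 source 192.168.100.1 9001
[*HUAWEI-nat64-instance-nat1] commit
```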
Context
The total number of addresses and the total number of ports available in the
system are important resources of NAT64 services. If these resources are
exhausted, NAT64 cannot be performed for the traffic of users just going online.
Therefore, the usage of these resources must be properly monitored. The NAT64
alarm function generates an alarm when resource usage reaches a specified
threshold, notifying the customer that capacity expansion or service adjustment
is required.
Procedure
● Set the maximum number of alarm packets that a service board sends every
second.
● Run nat alarm user-table threshold threshold-value
The log and trap functions of the user table are enabled.
● Run nat64 alarm address-group port-number threshold value
The log and alarm functions of port usage of a No-PAT NAT64 public IP
address pool are enabled.
● Run commit
The configuration is committed.
----End
Procedure
● Run the display nat64 instance [ instance-name ] command to check the
NAT64 instance configuration.
----End
Usage Scenario
You can improve the operational performance of NAT64 by adjusting NAT64
performance.
Pre-configuration Tasks
Before you adjust NAT64 performance, complete the following tasks:
Context
The aging time of NAT64 sessions for various protocols can be adjusted, so that
expired NAT64 sessions are deleted as soon as possible and system resources can
be released.
Perform the following steps on the router on which the aging time of NAT64
sessions is to be set:
Procedure
Step 1 Run system-view
Step 2 (Optional) Set the fast aging time for DNS sessions.
You are advised to configure this function when DNS traffic is heavy. After the fast
aging function for DNS sessions is enabled, if the device receives DNS request and
response packets at the same time, the DNS sessions age according to the
configured fast aging time to save system resources.
Step 4 (Optional) Run nat64 session aging-time { fin-rst | fragment | ftp | icmp | pptp |
rtsp | sip | syn | tcp | udp } aging-time
After an aging time is set for a NAT64 instance, this aging time is used when a
session is established in the instance. If no aging time is set for a NAT64 instance,
the aging time set in the system view is used when a session is established in the
instance.
----End
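For example (the aging time is illustrative), to age TCP sessions in instance nat1 after 600 seconds:

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 session aging-time tcp 600    //Instance-level setting overrides the system view setting
[*HUAWEI-nat64-instance-nat1] commit
```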
Context
When the link MTU is small, NAT64 packet fragments may be generated. You can
change the MTU value so that the packets for NAT64 are not fragmented,
improving NAT64 translation efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
Step 3 Run nat64 mtu value
The IPv6 MTU for packets of a NAT64 instance is set.
Step 4 Run commit
The configuration is committed.
----End
Context
If the size of packets for NAT64 processing is larger than a link MTU, IPv6 packets
are fragmented. You can reduce the MSS value in TCP, which prevents a NAT64
board from fragmenting packets and helps improve NAT64 efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat64 instance instance-name id id
The NAT64 instance view is displayed.
Step 3 Run nat64 tcp adjust-mss mss-value
An MSS value is set in TCP SYN and SYN ACK packets.
Step 4 Run commit
----End
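The two fragmentation-avoidance settings above can be combined in one instance (the MTU and MSS values are illustrative and should be chosen to fit the actual link MTU):

```
<HUAWEI> system-view
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 mtu 1400    //IPv6 MTU for packets of this NAT64 instance
[*HUAWEI-nat64-instance-nat1] nat64 tcp adjust-mss 1200    //MSS rewritten in TCP SYN and SYN ACK packets
[*HUAWEI-nat64-instance-nat1] commit
```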
Procedure
● Run the display nat64 instance command to check the NAT64 instance
configuration.
----End
Networking Requirements
NOTE
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
● Slot ID and CPU ID of a service board
● ID of a service-location group
● NAT64 instance name and ID
● NAT64 address pool number and start and end IP addresses
● NAT64 IPv6 prefix (64:FF9B::/96)
● ACL number and ACL rule
● Traffic classifier name, traffic behavior name, and traffic policy name
Procedure
Step 1 Configure the NAT64 license function on a service board, configure a service-
location group, and bind the CPU of the service board to the group.
2. Configure a service-location group and bind the CPU of the service board to
it.
[~HUAWEI] service-location 1
[*HUAWEI-service-location-1] location slot 9
[*HUAWEI-service-location-1] commit
[~HUAWEI-service-location-1] quit
Step 3 Configure a NAT64 instance and bind it to the service-instance group so that the
NAT64 instance is bound to the CPU of the service board.
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] service-instance-group instance-group1
Step 4 Configure a NAT64 public address pool ranging from 11.11.11.100 to 11.11.11.105.
[*HUAWEI-nat64-instance-nat1] nat64 address-group address-group1 group-id 1 11.11.11.100
11.11.11.105
Step 5 Configure a NAT64 IPv6 prefix of 64:FF9B::/96. This prefix must be the same as the
prefix of the DNS64 server.
[*HUAWEI-nat64-instance-nat1] nat64 prefix 64:FF9B:: prefix-length 96 1
NOTE
This IPv6 prefix is set according to a standard. The prefix of the DNS64 server must be the
same as the IPv6 prefix.
Step 6 Configure a traffic classification rule, a NAT64 traffic behavior, and a NAT64 traffic
diversion policy, and apply the NAT64 traffic diversion policy.
1. Configure an ACL traffic classification rule.
[~HUAWEI] acl ipv6 number 3003
[*HUAWEI-acl6-adv-3003] rule 5 permit ipv6 source 2001:db8::1:1112/126 destination 64:FF9B::/96
[*HUAWEI-acl6-adv-3003] commit
[~HUAWEI-acl6-adv-3003] quit
6. Apply the NAT64 traffic diversion policy in the user-side interface view.
[~HUAWEI] interface GigabitEthernet0/2/1
[*HUAWEI-GigabitEthernet0/2/1] traffic-policy p1 inbound
[*HUAWEI-GigabitEthernet0/2/1] commit
[~HUAWEI-GigabitEthernet0/2/1] quit
Step 7 Configure a NAT64 conversion policy so that the addresses of the packets that are
diverted by an interface board to the service board are converted using the
addresses in the NAT64 address pool.
[~HUAWEI] nat64 instance nat1 id 1
[*HUAWEI-nat64-instance-nat1] nat64 outbound any address-group address-group1
[*HUAWEI-nat64-instance-nat1] commit
[~HUAWEI-nat64-instance-nat1] quit
----End
Configuration Files
NAT64 device configuration file
#
sysname HUAWEI
#
#
vsm on-board-mode disable //Configuration in a dedicated board scenario
license //Configuration in a dedicated board scenario
active nat session-table size 16 slot 9 //Configuration in a dedicated board scenario
active nat bandwidth-enhance 40 slot 9 //Configuration in a dedicated board scenario
active nat64 vsuf slot 9 //Configuration in a dedicated board scenario
#
service-location 1
location slot 9 //Configuration in a dedicated board scenario
location follow-forwarding-mode //Configuration in an on-board scenario
#
service-instance-group instance-group1
service-location 1
#
nat64 instance nat1 id 1
service-instance-group instance-group1
nat64 address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat64 prefix 64:FF9B:: prefix-length 96 1
#
acl ipv6 number 3003
rule 5 permit ipv6 source 2001:db8::1:1112/126 destination 64:FF9B::/96
#
traffic classifier c1 operator or
if-match ipv6 acl 3003 precedence 1
#
traffic behavior b1
nat64 bind instance nat1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1:110e 126
traffic-policy p1 inbound
#
ipv6 route-static 2001:DB8::1:1112 126 2001:DB8::1:110F
#
return
Networking Requirements
NOTE
On the network shown in Figure 1-98, it is required that the internal NAT64
server function be deployed on the NAT64 device and a static mapping between
an internal IPv6 user's private IPv6 address+prefix and a public IPv4 address be
configured, so that the external IPv4 user can use the public IPv4 address of the
internal IPv6 user to access the internal IPv6 user.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic NAT64 functions.
2. Configure the internal NAT64 server function.
3. Configure an address for the interface on the private network side.
Data Preparation
● Slot ID of a service board
● ID of a service-location group
● NAT64 instance name and ID
● NAT64 address pool number and start and end IP addresses
● NAT64 IPv6 prefix (64:FF9B::/96)
Procedure
Step 1 Configure basic NAT64 functions.
1. Configure the NAT64 license function on a service board.
<HUAWEI> system-view
[~HUAWEI] sysname NAT64
[*HUAWEI] commit
[~NAT64] vsm on-board-mode disable
[*NAT64] commit
[~NAT64] license
[*NAT64-license] active nat64 vsuf slot 9
[*NAT64-license] active nat64 vsuf slot 10
[*NAT64-license] active nat session-table size 16 slot 9
[*NAT64-license] active nat session-table size 16 slot 10
[*NAT64-license] commit
[~NAT64-license] quit
5. Configure a NAT64 public address pool, with addresses ranging from 1.1.1.1
to 1.1.1.5.
[~NAT64] nat64 instance nat1 id 1
[*NAT64-nat64-instance-nat1] nat64 address-group address-group1 group-id 1
[*NAT64-nat64-instance-nat1-nat64-address-group-address-group1] section 1 1.1.1.1 1.1.1.5
[*NAT64-nat64-instance-nat1-nat64-address-group-address-group1] commit
[~NAT64-nat64-instance-nat1-nat64-address-group-address-group1] quit
6. Configure a NAT64 IPv6 prefix of 64:FF9B::/96. This prefix must be the same
as the prefix of the DNS64 server.
[~NAT64] nat64 instance nat1 id 1
[*NAT64-nat64-instance-nat1] nat64 prefix 64:FF9B:: prefix-length 96 1
[*NAT64-nat64-instance-nat1] commit
[~NAT64-nat64-instance-nat1] quit
NOTE
This IPv6 prefix is set according to a standard. The prefix of the DNS64 server must be
the same as the IPv6 prefix.
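The static server mapping itself does not appear in the procedure above. Based on the nat64 server line in the configuration file at the end of this example, it can be sketched as follows:
[~NAT64] nat64 instance nat1 id 1
[*NAT64-nat64-instance-nat1] nat64 server global 1.1.1.10 inside 2001:db8::1:1112 prefix-id 1
[*NAT64-nat64-instance-nat1] commit
[~NAT64-nat64-instance-nat1] quit
This maps the public IPv4 address 1.1.1.10 to the internal IPv6 user 2001:db8::1:1112 using NAT64 prefix ID 1.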
Step 3 Configure an address for the interface on the private network side.
[~NAT64] interface GigabitEthernet0/2/1
[*NAT64-GigabitEthernet0/2/1] ipv6 enable
[*NAT64-GigabitEthernet0/2/1] ipv6 address 2001:db8::1:110e 96
[*NAT64-GigabitEthernet0/2/1] commit
[~NAT64-GigabitEthernet0/2/1] quit
Total number: 2.
NAT64 Instance: nat1
Protocol:ANY, VPN:--->-
Server reverse:ANY->1.1.1.10[2001:DB8::1:1112]
Tag:0x0, TTL:-, Left-Time:-
CPE IP:2001:DB8::1:1111
outbound: false
prefixId: 1
----End
Configuration File
#
sysname NAT64
#
vsm on-board-mode disable
#
license
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active nat64 vsuf slot 9
active nat64 vsuf slot 10
#
service-location 1
location follow-forwarding-mode
#
service-instance-group instance-group1
service-location 1
#
nat64 instance nat1 id 1
service-instance-group instance-group1
nat64 address-group address-group1 group-id 1
section 1 1.1.1.1 1.1.1.5
nat64 prefix 64:FF9B:: prefix-length 96 1
nat64 server global 1.1.1.10 inside 2001:db8::1:1112 prefix-id 1
#
interface GigabitEthernet0/2/1
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1:110e 96
#
return
Definition
CGN reliability allows for inter-chassis and intra-chassis hot backup between the master and slave CGN boards, ensuring data consistency and enhancing service reliability and device availability.
Purpose
In the event of a fault on the master service board, master device, or a link, CGN
reliability allows services to be smoothly switched to the slave service board or
device, thereby ensuring service restoration within a short period.
Benefits
● This feature offers the following benefits to carriers:
Enhances network reliability by providing service continuity and provides
reliable support for IPv4-to-IPv6 transition networks with CGN devices
deployed.
● This feature offers the following benefits to users:
Ensures service continuity without letting users perceive faults.
Inter-Board Backup
Basic Concepts
From a micro perspective, inter-board hot backup implements CPU backup
between the active and standby service boards. From a macro perspective, inter-
board hot backup implements inter-board backup between active and standby
service boards on a NAT device equipped with multiple service boards. Inter-board
hot backup ensures data consistency between the active and standby service
boards. If the active service board fails, a single-chassis inter-board active/standby
switchover is triggered to ensure normal service running and prevent users from
being aware of the fault.
Backup Rules
In inter-board hot backup, the single-chassis inter-board active/standby status is
statically configured for service boards. The active service board establishes NAT
sessions, and service traffic passes through the active service board rather than the
standby service board. Once the active service board fails, the interface board
switches traffic to the standby service board after the chassis detects the fault.
Currently, three inter-board backup modes are available: cold backup, warm backup, and hot backup. From a micro perspective, hot backup differs from cold and warm backup in the following implementation:
● Hot backup
The standby service board automatically synchronizes NAT sessions from the active service board, so NAT sessions do not need to be re-established during traffic switching. Figure 1-99 shows the internal processing path: multi-core CPU (active) <-> TM (active) <-> SFU <-> TM (standby) <-> multi-core CPU (standby).
● Cold/Warm backup
In cold and warm backup, the standby service board does not synchronize NAT sessions from the active service board. When traffic is switched to the standby service board, NAT sessions need to be re-established; the re-establishment time varies with the number of NAT sessions. Figure 1-100 shows the internal processing path: multi-core CPU (active) <-> TM (active).
Troubleshooting Mechanism
As shown in Table 1-34, compared with inter-board cold/warm backup, inter-
board hot backup supports both centralized and distributed scenarios and ensures
fast service recovery.
Table 1-34 Comparison of inter-board backup modes

Inter-board cold backup (centralized NAT scenario)
● Backup content: The active service board processes services, and the standby service board does not back up any tables.
● Fault handling: If the active service board fails, user traffic is interrupted for a short period of time before being switched to the standby service board. Public network IP addresses are then re-allocated, and NAT sessions are re-established.
● Fault recovery: After the active service board recovers, traffic is switched back to the active service board after a delay, and NAT sessions are re-established.

Inter-board warm backup (distributed NAT scenario)
● Backup content: The active service board processes services, and the standby service board backs up the user table in real time.
● Fault handling: If the active service board fails, user traffic is interrupted for a short period of time before being switched to the standby service board. NAT sessions are re-established using the backup user table information.
● Fault recovery: When the active service board recovers from the fault, user table information is backed up to the active service board, and traffic is switched back to the active service board after a delay. NAT sessions are re-established using the backup user table information.

Inter-board hot backup (centralized and distributed NAT scenarios)
● Backup content: The active service board processes services, and the standby service board backs up the user table and NAT session information.
● Fault handling: If the active service board fails, user traffic is interrupted for a short period of time before being switched to the standby service board. The backup user table and NAT session information are used.
● Fault recovery: When the active service board recovers from the fault, the user table and NAT session information are backed up to the active service board, and traffic is switched back to the active service board after a delay. The backup user table and NAT session information are used.
NOTE
In a centralized NAT scenario, two service boards are deployed on a NAT device, and the
CPUs on the two service boards work in hot/cold backup mode. In a distributed NAT
scenario, two service boards are deployed on a BRAS, and the CPUs on the two service
boards work in hot/warm backup mode.
Inter-Chassis Backup
Basic Concepts
VSU service boards equipped on two devices can be configured to work in active/
standby mode to implement inter-chassis backup. Inter-chassis hot backup ensures
service data consistency between the master and backup devices. If the master
device, service board, public-network link, or private-network link fails, a master/
backup device switchover is performed to ensure service continuity.
Backup Rules
Inter-chassis hot backup involves the following features:
The Virtual Router Redundancy Protocol (VRRP) is a fault-tolerant protocol. This
protocol groups several routing devices into one virtual routing device and uses a
certain mechanism to switch services to another router if a fault occurs on the
next hop router of the host. This ensures communication continuity and reliability.
Generally, a routing device group consists of two routing devices: one master
chassis and one backup chassis. The master/backup status is determined by the
priorities of the routers. If VRRP detects a service board fault, service traffic will be
switched from the master device to the backup device.
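As a minimal illustration (the interface, VRID, virtual IP address, and priority below are example values, not part of any procedure in this document), a VRRP group can be created as follows:
[~HUAWEI] interface GigabitEthernet0/1/1
[*HUAWEI-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.100
[*HUAWEI-GigabitEthernet0/1/1] vrrp vrid 1 priority 120 //The device with the higher priority becomes the master
[*HUAWEI-GigabitEthernet0/1/1] commit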
NOTE
● You are advised to configure inter-board Eth-Trunk between master and backup devices to
ensure that direct links are not interrupted. Otherwise, services may be affected.
● In this document, VSM is short for value-added service management, and HA indicates high availability.
Inter-chassis backup supports three backup modes: cold backup, warm backup,
and hot backup. As shown in Table 1-35, inter-chassis hot backup is a common
inter-chassis backup mode. It ensures higher service reliability and lower impact of
faults on the network compared with the other two modes.
VRRP packets are sent to the multicast address 224.0.0.18, and their TTL value must be 255. As a result, VRRP packets cannot be transmitted over a Layer 3 network but can be transmitted over a Layer 2 network (such as a VLAN, VLL, or VPLS network). The master and backup devices must therefore be directly connected or interconnected at Layer 2, and inter-chassis direct connection backup is preferred.
When a user accesses a private VPN, the private VPN ingress translates private IP addresses into public IP addresses. The master device backs up NAT/DS-Lite/NAT64 data carrying VPN information to the backup device through the backup link. In this way, the master and backup devices share the same private VPN, and VPN over NAT/DS-Lite/NAT64 inter-chassis hot backup is implemented.
On the forwarding plane, each VPN instance is identified by a VPN index. In inter-
chassis hot backup scenarios, the VPN indexes generated based on the VPN
information may vary between the master and backup devices. The backup data
and packets reach the peer device through RBS and can be properly processed
only when valid VPN information is contained. Therefore, the master and backup
devices need to learn VPN information from each other.
On the network shown in Figure 1-106, when the master and backup devices
learn the peer VPN index of the same VPN instance, NAT device 1 (master device)
backs up the VPN-A information with the VPN index 100 carried in the NAT/DS-
Lite/NAT64 session information to NAT device 2 (backup device). Therefore, when
the private network traffic is switched from NAT device 1 to NAT device 2, packets
can be correctly forwarded according to the NAT/DS-Lite/NAT64 forwarding
entries backed up from the master device to the backup device.
As shown in Figure 1-107, if the BRAS and CGN functions cannot be integrated on
the same device, a CGN device needs to be connected to the CR or SR. In this
manner, CPEs can dial up through the BRAS, and the BRAS can allocate private IP
addresses for the CPEs. Upon receipt of a packet from a PC, the CPEs perform the
first NAT on the IP address of the PC, and the CGN device performs the second
NAT on the CPEs. The NATed packets are then transmitted to the ISP core network
through the CR. This solution is called centralized NAT444 inter-board hot backup because there are two NAT operations and the centralized CGN function is protected by inter-board hot backup.
Figure 1-107 Centralized NAT444 inter-board hot backup solution based on port
pre-allocation
On the network shown in Figure 1-108, a CPE dials up to a BRAS integrated with
a CGN board and gets online. The BRAS assigns an access IP address, a post-NAT
address, and a port range to the CPE. After receiving a packet from a PC, the
corresponding CPE performs the first NAT for the packet and then sends the
packet to the BRAS. Upon receipt of the packet, the BRAS performs the second
NAT. Taking into consideration the two NATs and CGN functions distributed on
various BRASs, 1:1 inter-board hot backup is deployed to achieve CGN service
reliability. This solution is called distributed NAT444 inter-board hot backup.
Figure 1-108 Distributed NAT444 inter-board hot backup based on port pre-
allocation
Service Description
Inter-chassis hot backup configured on a network ensures service data consistency
between the master and slave devices. If a master device, service board, link on
the public network, or link on the private network fails, a master/slave device
switchover is performed to ensure service continuity.
Networking Description
As shown in Figure 1-109, if the BRAS and CGN functions cannot be integrated on
the same device, a CGN device needs to be connected to the CR or SR. In this
manner, CPEs can dial up through the BRAS, and the BRAS can allocate private IP
addresses for the CPEs. Upon receipt of an access request packet from a PC, a CPE
performs the first NAT on the IP address of the PC, and a CGN device performs the
second NAT on the CPE's IP address. The NATed packets are then transmitted to
the ISP core network through CR1. This solution is called centralized NAT444 inter-
chassis hot backup because there are two NAT operations and the CGN function is
centralized on CR1 and CR2 where inter-chassis hot backup is implemented.
Figure 1-109 Centralized NAT444 inter-chassis hot backup solution based on port
pre-allocation
Service Description
Inter-chassis hot backup configured on a network ensures service data consistency
between the master and slave devices. If a master device, service board, link on
the public network, or link on the private network fails, a master/slave device
switchover is performed to ensure service continuity.
Networking Description
On the network shown in Figure 1-110, CPEs dial up through the BRAS (with CGN
integrated). The BRAS assigns private IP addresses, post-NAT IP addresses, and
port segments for CPEs. After receiving a packet from a PC, the corresponding CPE
performs the first NAT for the packet and then sends the packet to BRAS1. Upon
receipt of the packet, BRAS1 performs the second NAT. This solution is called
distributed NAT444 inter-chassis hot backup because there are two NAT
operations, the CGN function is deployed on different BRAS access points, and
inter-chassis hot backup is deployed between BRAS1 and BRAS2 to enhance CGN
service reliability.
Figure 1-110 Distributed NAT444 inter-chassis hot backup solution based on port
pre-allocation
Service Overview
Figure 1-111 shows a scenario where centralized backup is provided for
distributed NAT devices. In this scenario, two NAT service boards are installed on
the BRAS to implement inter-board hot backup. A NAT device equipped with a
NAT service board is attached to the CR. When both of the two NAT boards on the
BRAS become faulty, the BRAS does not perform distributed NAT on private
network traffic. Instead, the private network traffic is forwarded over routes to the
CR and then redirected to the NAT device for centralized NAT. After the private IP
address is translated to a public IP address, the traffic goes to the Internet.
NOTE
If the CGN service board on a distributed device fails, users are not logged out, and user
traffic for accessing the Internet is sent to the centralized device for NAT translation. A ping
to the gateway address (BRAS shown in Figure 1-111) fails. After the CGN service board
recovers, NAT services are switched back to the distributed device. The user can successfully
ping the gateway address.
Feature Deployment
● Deploy inter-board hot backup for distributed NAT on the BRAS.
● Deploy centralized NAT on the CR.
HA High Availability
CGN is used to implement address translation for a large number of users. This is
why CGN is also called large scale NAT (LSN). If a CGN fault occurs and no
appropriate protection measure is taken, a large number of users may be affected.
The impact is more severe in centralized deployment mode. Therefore, CGN
reliability needs to be deployed to achieve CGN hot backup (intra-chassis inter-
board backup), ultimately enhancing reliability of CGN devices.
Backup Features
● Warm backup
Warm backup allows user tables to be backed up between the master and
slave service boards. In load balancing scenarios, warm backup also requires
the backup of global address pool table entries. After a master/slave
switchover is performed, user sessions need to be re-established. Warm
backup is enabled by default, with no need for manual operation.
● Hot backup
Hot backup allows user tables and session tables to be backed up between
the master and slave service boards. In load balancing scenarios, hot backup
also requires the backup of global address pool table entries. After a master/
slave switchover is performed, user sessions are not interrupted. The hot
backup mode can work only after HA hot backup is enabled.
Backup Scenarios
● Inter-board backup
If a CGN device has multiple service boards, you can configure a master
service board and a slave service board on the CGN device to implement
inter-board backup. This mechanism ensures data consistency between the
master and slave service boards. Once a fault occurs on the master service
board, a master/slave switchover is triggered to ensure service continuity
without letting users perceive the fault.
● Inter-chassis backup
If multiple CGN devices equipped with service boards exist on a network, you
can configure a service board on a master device and a service board on a
backup device to implement inter-chassis backup. This mechanism ensures
data consistency between chassis. Once the master device, the service board
on it, or the link between the master and backup devices fails, a master/slave
switchover is triggered to ensure service continuity without letting users
perceive faults.
Usage Scenario
If multiple CGN devices equipped with service boards are deployed on a network,
the service boards on a CGN device can be configured to work in active/standby
mode to implement inter-board backup for the active and standby CGN boards.
● If a CGN device has slots for two or more dedicated boards and provides
access services for a small number of users, inter-board 1:1 backup can be
configured. In inter-board 1:1 backup, when the service board on the master
device processes services, the service board on the backup device does not
work. The service board on the master device backs up the user tables,
session tables, and address pool entries to the service board on the backup
device. Once the active service board fails, the standby service board takes
over services.
● If a CGN device has slots for two or more dedicated boards and provides
access services for a large number of users, inter-board 1+1 backup is
recommended. In inter-board 1+1 backup, both the active and standby service
boards process services and back up their user tables, session tables, and
address pool entries to each other. Once a service board fails, the other
service board processes all services.
Pre-configuration Tasks
Before configuring inter-board hot backup, complete the following tasks:
● Set the dedicated board working mode.
● Load the license to the device, and ensure that the service board is working
properly.
Context
When a NAT device is equipped with two service boards, you can configure the
active and standby service boards in the same chassis on the NAT device to
implement inter-board backup on the single device. The inter-board backup
mechanism verifies that the data stored on the active NAT service board is
consistent with that stored on the standby NAT service board. If the active NAT
service board fails, an active/standby NAT service board switchover is performed
to ensure that services are running properly. In this situation, users are unaware of
this fault.
Procedure
Step 1 (Optional) Configure value-added service management (VSM) high availability
(HA) hot backup functions.
1. Run system-view
The system view is displayed.
2. Run service-ha hot-backup enable
HA hot standby is enabled.
3. Run service-ha delay-time delay-time
The delay time is set for VSM HA hot backup.
NOTE
On a device with the service-ha delay-time command run, session entries can be
backed up only if the active time of session traffic is longer than the delay time
configured for VSM HA hot backup.
4. (Optional) Run service-ha preempt-time preempt-time
The delay for active/standby revertive switching is set.
NOTE
When the active service board recovers, it takes over services from the standby service
board after the specified delay elapses.
5. Run commit
The configuration is committed.
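The commands in Step 1 can be combined as follows; the delay-time and preempt-time values below are illustrative only, so choose values suitable for your network:
[~HUAWEI] service-ha hot-backup enable
[*HUAWEI] service-ha delay-time 40
[*HUAWEI] service-ha preempt-time 60
[*HUAWEI] commit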
Step 2 Configure a service-location group that implements single-chassis inter-board VSM
HA hot backup.
1. Run system-view
The system view is displayed.
2. Run service-location service-location-id
A service-location group is created, and its view is displayed.
3. For dedicated NAT, run the location slot slot-id backup slot backup-slot-id
command to bind the service-location group to the CPUs of the active and
standby service boards.
For on-board NAT:
– For the NetEngine 8000 M4, run the location slot command to bind the
CPUs of the service boards.
– For other products in the HUAWEI NetEngine 8100 M14/M8, NetEngine
8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8 series, run
the location follow-forwarding-mode command to bind service boards.
4. Run commit
NOTE
Each NAT instance can be bound to only one service-instance group.
Different NAT instances can be bound to the same service-instance group.
3. Run commit
----End
Context
If two service boards are installed on a DS-Lite device, the service boards can be
configured to work in active/standby mode in the same chassis to implement
inter-board backup. The inter-board backup mechanism verifies that the data
stored on the active service board is consistent with that stored on the standby
service board. If the active service board fails, an active/standby service board
switchover is performed to ensure that services are running properly. In this
situation, services are properly transmitted, and users are unaware of the fault.
Procedure
Step 1 (Optional) Configure value-added service management (VSM) high availability
(HA) hot backup functions.
1. Run system-view
NOTE
On a device with the service-ha delay-time delay-time command run, session entries
can be backed up only if the active time of session traffic is longer than the delay time
configured for VSM HA hot backup.
4. (Optional) Run service-ha preempt-time preempt-time
NOTE
You can set a preemption delay for the former master service board to become the
master again after it recovers.
5. Run commit
The CPUs of the active and standby service boards are bound to the service-
location group.
NOTE
One DS-Lite instance can only be bound to one service-instance group. Different DS-
Lite instances can be bound to the same service-instance group.
3. Run commit
----End
Prerequisites
Single-chassis inter-board hot backup has been configured.
Procedure
● Run the display service-ha global-information delay-time command to
check the delay time configured for inter-board VSM HA hot backup on a
single chassis.
● Run the display service-ha global-information preempt-time command to
check the switchback time for inter-board VSM HA hot backup on a single
chassis.
● Run the display service-location service-location-id command to check the
configuration of a service-location group.
● Run the display service-instance-group service-instance-group-name
command to check the configuration of a service-instance group.
----End
Usage Scenario
If multiple CGN devices equipped with service boards exist on a network, you can
configure a service board on a master device and a service board on a backup
device to implement inter-chassis backup. The inter-chassis hot backup
mechanism ensures that the data stored in CPUs of the service boards on the
master and backup devices is consistent. If the master device, the service board on
it, or the link between the master and backup devices fails, a master/slave
switchover is triggered to ensure service continuity. In this case, services are
properly transmitted, and users are unaware of the fault.
● If multiple CGN devices equipped with service boards exist on a network and
provide access services for a limited number of users, you can configure inter-
chassis 1:1 backup. In inter-chassis 1:1 backup, when the service board on the
master device processes services, the service board on the backup device does
not work. The service board on the master device backs up the user tables,
session tables, and address pool entries to the service board on the backup
device. Once the master device, the service board on it, or the link between
the master and backup devices fails, the backup device becomes the master
and processes services.
● If multiple CGN devices equipped with service boards exist on a network and
provide access services for a large number of users, you can configure inter-
chassis 1+1 backup. In inter-chassis 1+1 backup, the service boards on both
the master and backup devices process services and back up the user tables,
session tables, and address pool entries to each other. Once a service board, a
device, or the link between the master and backup devices fails, the service
board on the other device processes all services.
Pre-configuration Tasks
Before enabling inter-chassis hot backup, complete the following tasks:
● Load a license on devices, and ensure that the service boards are working
properly.
● Complete the basic NAT/DS-Lite/NAT64 function configuration. For details,
see NAT Configuration/DS-Lite Configuration/NAT64 Configuration in NAT
and IPv6 Transition.
Context
After HA hot backup is enabled, user tables and session tables are backed up
between chassis on the master and backup devices. In load balancing over inter-
chassis hot backup, global address pool entries are also backed up between the
master and backup devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-ha hot-backup enable
The HA hot backup function is enabled.
If this function is not enabled, centralized backup is cold backup, and distributed
backup is warm backup.
Step 3 Run commit
The configuration is committed.
----End
Context
A dual-device inter-chassis backup channel needs to be configured in the service-
location group view to transmit backup packets. Perform the following operations
on both the master and backup devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-location service-location-id
A service-location group is created, and the service-location group view is
displayed.
NOTE
Service-location IDs and the number of service-location groups configured on the master
and backup devices must be the same. Otherwise, backup may fail, affecting services.
NOTE
● VRRP is usually configured on the interfaces used for dual-device inter-chassis backup and determines the master/backup status of the members in the dual-device inter-chassis backup group. If the interface specified by interface-type is a virtual Ethernet interface, the performance of the interface board may be halved.
----End
Context
Before the master/backup relationship of VRRP group members is bound to
service-location group members, the service-location group must be bound to the
VRRP group. Perform the following operations on both the master and backup
devices.
Procedure
Step 1 Run system-view
A VRRP group is created, and a virtual IP address is configured for the VRRP group.
Different priorities must be configured for devices in a VRRP group. The device
with the highest priority is the master device.
To ensure that NAT information is completely backed up, you are advised to
perform this step to set the VRRP preemption delay to 1500s.
To ensure VRRP stability, you are advised to perform this step. The recovery delay
of 15s is recommended.
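With example values (VRID 1 on an illustrative interface; the exact command forms and views may vary by version), the recommended delays can be sketched as follows:
[~HUAWEI] interface GigabitEthernet0/1/1
[*HUAWEI-GigabitEthernet0/1/1] vrrp vrid 1 preempt-mode timer delay 1500
[*HUAWEI-GigabitEthernet0/1/1] quit
[*HUAWEI] vrrp recover-delay 15
[*HUAWEI] commit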
----End
Context
Configuring association between a service-location group and VRRP allows a VRRP
group to track the service-location group status in real time so that the involved
device can determine whether to perform a master/slave switchover. Perform the
following operations on both the master and backup devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run vrrp vrid virtual-router-id track service-location service-location-id
[ reduced value-reduced ]
The service-location group is associated with the VRRP group.
The virtual-router-id and service-location-id values configured on the master and
backup devices must be the same.
A VRRP group is associated with a service location (multi-core CPU) on a VSU
board. All NAT instances that use the service location use the associated VRRP
group. Generally, only one NAT instance is configured at each service location.
Therefore, the NAT instance and the VRRP group have a one-to-one relationship. In
special cases, however, multiple NAT instances can use the same service location
and VRRP group. In this case, you need to configure a VRRP group for each service
location and configure a VLAN for each VRRP group.
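For example (the interface, VRID, service-location ID, and reduced value below are illustrative):
[~HUAWEI] interface GigabitEthernet0/1/1
[*HUAWEI-GigabitEthernet0/1/1] vrrp vrid 1 track service-location 1 reduced 50
[*HUAWEI-GigabitEthernet0/1/1] commit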
----End
Context
In centralized NAT load balancing, when the master device becomes faulty,
services may be interrupted. To ensure normal service operation, configure inter-
chassis backup on the basis of load balancing.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nat instance instance-name [ id id ]
The NAT instance view is displayed.
Step 3 Run nat address-group address-group-name group-id group-id bind-ip-pool
pool-name
The NAT instance is bound to a global static address pool.
Step 4 Run nat ip-address ip-address mask-len [ vpn-instance vpn-instance-name ]
The IP address for distributing traffic between the master and slave chassis is
configured.
This command is mandatory for achieving NAT load balancing and inter-chassis
backup.
Step 5 Run commit
The configuration is committed.
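A minimal sketch of Steps 2 through 5 follows. The instance name, address group, bound IP pool, and traffic-distribution address are hypothetical placeholders:

```text
[~HUAWEI] nat instance nat1 id 1
[*HUAWEI-nat-instance-nat1] nat address-group group1 group-id 1 bind-ip-pool pool1
[*HUAWEI-nat-instance-nat1] nat ip-address 192.0.2.100 32
[*HUAWEI-nat-instance-nat1] commit
```

The nat ip-address command configures the address used to distribute traffic between the master and slave chassis.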
----End
Context
In VPN over NAT inter-chassis hot backup scenarios, a private IP address plus a
private network VPN instance can be translated into a public IP address plus a
public VPN instance, allowing private network users to access the public network
over a VPN.
Procedure
Step 1 Run system-view
Step 2 Configure VPN over NAT inter-chassis hot backup according to on-site
requirements.
● Perform the following operations to allow translation between private and
public VPN instances:
a. Run the service-instance-group service-instance-group-name command
to enter the service-instance group view.
b. Run the remote-backup-service service-name command to bind an RBS
group to the service-instance group.
c. Run the commit command to commit the configurations.
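As an illustration, with a service-instance group named group1 and an RBS named rbs1 (both names hypothetical, and the RBS must already exist):

```text
[~HUAWEI] service-instance-group group1
[*HUAWEI-service-instance-group-group1] remote-backup-service rbs1
[*HUAWEI-service-instance-group-group1] commit
[~HUAWEI-service-instance-group-group1] quit
```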
NOTE
----End
Context
In CGN inter-chassis backup scenarios, if a fault occurs, a master/backup device
switchover is triggered. However, if the address pools on the master and backup
devices are inconsistent, users cannot go online after the switchover. To ensure
data consistency between the master and backup devices and prevent user
experience from being affected by network faults, configure inter-chassis batch
backup between the master and backup devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Run batch-backup service-type nat enable
The inter-chassis batch backup function is enabled.
The batch backup function is enabled by default. If you need to enable this
function again after it is disabled, perform this step.
Step 3 Run batch-backup service-type nat timer-interval timer-interval
A batch backup interval is set.
Step 4 Run commit
The configuration is committed.
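For example, re-enabling batch backup after it has been disabled and setting an interval might look as follows; the interval value 30 is illustrative only, so check the command reference for the valid range and unit:

```text
[~HUAWEI] batch-backup service-type nat enable
[*HUAWEI] batch-backup service-type nat timer-interval 30
[*HUAWEI] commit
```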
----End
Prerequisites
All CGN inter-chassis hot backup configurations have been performed.
Procedure
● Run the display service-location service-location-id command on the master
and backup devices to view configurations of the HA backup group.
● Run the display vrrp virtual-router-id command on the master and backup
devices to view configurations of the VRRP group.
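For example, if the HA backup group ID and the VRRP group ID are both 1 (hypothetical values), run the following on each device:

```text
<HUAWEI> display service-location 1
<HUAWEI> display vrrp 1
```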
----End
Example for Configuring 2:1 Boards for Inter-Board Hot Backup in Distributed
NAT444
This section provides an example for configuring 2:1 boards for inter-board hot
backup in distributed NAT444.
Networking Requirements
In a distributed networking scenario shown in Figure 1-113, a BRAS is equipped
with three CGN boards in slots 9, 10, and 3. CPU 0 in slot 9 and CPU 0 in slot 10
perform inter-board hot backup for NAT444 services. Users get online using
PPPoE. The BRAS assigns a private IP address range to each CPE, and each CPE
assigns IP addresses from the range to terminal PCs. After the CPEs perform NAT
for user traffic, the BRAS performs NAT again. Because the user traffic volume is
large, capacity expansion is required. Based on the 1:1 master/backup NAT board
solution, a service board is added to slot 3, with its backup board in slot 10. In
the 1:1 solution, the service board in slot 9 is the master and the one in slot 10
is the backup.
Figure 1-113 2:1 board expansion for inter-board hot backup in distributed
NAT444
NOTE
Networking Solution
● Traffic diversion policy: The routes of the public address pool are advertised
by BGP using the network command.
● Port allocation policy: Ports are allocated in semi-dynamic mode, which
flexibly increases the number of user ports.
● User source tracing policy: RADIUS user logs are used to reduce the load on
the BRAS and RADIUS server.
● Backup policy: 2:1 board expansion for inter-board hot backup on a single
device is used to ensure the reliability of NAT services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Apply and load a GTL license that enables NAT and configure NAT sessions on
service boards.
2. Enable HA hot backup.
3. Configure a RADIUS service group named nat-pppoe-radius.
Procedure
Step 1 Apply and load a GTL license that enables NAT and configure NAT sessions on
service boards.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS
[*HUAWEI] commit
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] display license
Item name Item type Value Description
-------------------------------------------------------------
LME0CONN01 Resource 64 Concurrent Users(1k)
LME0NATDS00 Resource 16 2M NAT Session
Item name (View)Resource License Command-line
-------------------------------------------------------------
LME0NATDS00 (Sys)nat session-table size table-size slot slot-id
Master board license state: Normal.
[~BRAS] license
[*BRAS-license] active nat session-table size 6 slot 9
[*BRAS-license] active nat session-table size 6 slot 10
[*BRAS-license] active nat session-table size 6 slot 3
[*BRAS-license] active nat bandwidth-enhance 40 slot 9
[*BRAS-license] active nat bandwidth-enhance 40 slot 10
[*BRAS-license] active nat bandwidth-enhance 40 slot 3
[*BRAS-license] commit
[~BRAS-license] quit
Step 4 Configure a private network address pool named nat-pppoe-pool-1 and
specify DNS servers for it.
[~BRAS] ip pool nat-pppoe-pool-1 bas local
[*BRAS-ip-pool-nat-pppoe-pool-1] gateway 10.1.0.1 255.255.0.0
[*BRAS-ip-pool-nat-pppoe-pool-1] section 0 10.1.0.2 10.1.0.255
[*BRAS-ip-pool-nat-pppoe-pool-1] dns-server 192.168.224.68 192.168.225.68
[*BRAS-ip-pool-nat-pppoe-pool-1] commit
[~BRAS-ip-pool-nat-pppoe-pool-1] quit
NOTE
Step 5 Create a basic ACL used to match against the private address pool.
[~BRAS] acl number 2011
[*BRAS-acl-basic-2011] rule permit source 10.1.0.0 0.0.255.255
[*BRAS-acl-basic-2011] commit
[~BRAS-acl-basic-2011] quit
4. Enable semi-dynamic port allocation so that the device pre-allocates 4096
ports to each user initially and then allocates 1024 additional ports at a
time, at most three times (for a maximum of 4096 + 3 x 1024 = 7168 ports per user).
[*BRAS-nat-instance-nat444-1] port-range 4096 extended-port-range 1024 extended-times 3
NOTE
8. Enable the ALG function for all protocols, including FTP, PPTP, RTSP, and SIP.
[*BRAS-nat-instance-nat444-1] nat alg all
1. Create a service-location group.
[~BRAS] service-location 2
[*BRAS-service-location-2] location slot 3 backup slot 10 // Slot 3 houses the active service board,
and slot 10 houses the standby service board.
[*BRAS-service-location-2] commit
[~BRAS-service-location-2] quit
2. Create a service-instance group.
[~BRAS] service-instance-group nat444-group2
[*BRAS-service-instance-group-nat444-group2] service-location 2
[*BRAS-service-instance-group-nat444-group2] commit
[~BRAS-service-instance-group-nat444-group2] quit
3. Create a NAT instance and bind it to a service-instance group to specify
service board resources.
[~BRAS] nat instance nat444-2 id 1
[*BRAS-nat-instance-nat444-2] service-instance-group nat444-group2
4. Enable semi-dynamic port allocation so that the device pre-allocates 4096
ports to each user initially and then allocates 1024 additional ports at a
time, at most three times (for a maximum of 4096 + 3 x 1024 = 7168 ports per user).
[*BRAS-nat-instance-nat444-2] port-range 4096 extended-port-range 1024 extended-times 3
5. Configure a public address pool named pppoe-public-2.
[*BRAS-nat-instance-nat444-2] nat address-group pppoe-public-2 group-id 1
6. Configure a public IP address.
[*BRAS-nat-instance-nat444-2-nat-address-group-pppoe-public-2] section 0 1.1.2.0 mask 24
[*BRAS-nat-instance-nat444-2-nat-address-group-pppoe-public-2] quit
NOTE
If no traffic distribution policy is applied globally, create one and apply it in the
system view. If a traffic distribution policy is already applied globally, bind the
classifier-behavior (C-B) pair to it. If multiple C-B pairs are configured in a traffic
distribution policy, packets are matched against them in top-to-bottom order.
[~BRAS] traffic policy global-policy
[*BRAS-trafficpolicy-global-policy] classifier pppoe-nat-1 behavior pppoe-nat-1
[*BRAS-trafficpolicy-global-policy] classifier pppoe-nat-2 behavior pppoe-nat-2
[*BRAS-trafficpolicy-global-policy] commit
[~BRAS-trafficpolicy-global-policy] quit
Step 9 Enable the NAT device to advertise public routes. In the following example, BGP is
used.
[~BRAS] bgp 64640
[*BRAS-bgp-64640] network 1.1.1.0 24
[*BRAS-bgp-64640] commit
[~BRAS-bgp-64640] quit
NOTE
Select a proper routing protocol to advertise public network routes. In this example, BGP is
used to advertise the route to the public address pool in network mode. If a public address
pool is configured using a mask, run the network command to advertise the route. Do not
configure a black-hole summary route; otherwise, reverse traffic may fail to be forwarded.
Step 10 Configure a private network domain and bind a NAT instance to it.
[~BRAS] aaa
[~BRAS-aaa] authentication-scheme auth1
[*BRAS-aaa-authen-auth1] authentication-mode radius
[*BRAS-aaa-authen-auth1] commit
[~BRAS-aaa-authen-auth1] quit
[~BRAS-aaa] accounting-scheme acct1
[*BRAS-aaa-accounting-acct1] accounting-mode radius
[*BRAS-aaa-accounting-acct1] commit
[~BRAS-aaa-accounting-acct1] quit
[~BRAS-aaa] domain nat-pppoe
[*BRAS-aaa-domain-nat-pppoe] ip-pool nat-pppoe-pool-1
[*BRAS-aaa-domain-nat-pppoe] user-group pppoe-nat-1 bind nat instance nat444-1
[*BRAS-aaa-domain-nat-pppoe] user-group pppoe-nat-2 bind nat instance nat444-2
[*BRAS-aaa-domain-nat-pppoe] radius-server group nat-pppoe-radius
[*BRAS-aaa-domain-nat-pppoe] commit
[~BRAS-aaa-domain-nat-pppoe] quit
# Verify that user table information on the master and slave service boards is
correct.
<BRAS> display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Total number: 1.
----------------------------------------------------------------
User Type : NAT444
CPE IP : 10.1.0.253
User ID : 1132
VPN Instance :-
Address Group : pppoe-public-1
NAT Instance : nat444-1
Public IP : 192.0.2.0
Start Port : 1024
Port Range : 4096
Port Total : 4096
Extend Port Alloc Times :0
Extend Port Alloc Number :0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 709/0/709/0
Total/TCP/UDP/ICMP Rev Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Rev Session Current: 0/0/0/0
Total/TCP/UDP/ICMP Port Limit : 0/0/0/0
Total/TCP/UDP/ICMP Port Current : 709/0/709/0
Nat ALG Enable : ALL
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports :00000
Aging Time(s) :-
Left Time(s) :-
Port Limit Discard Count :0
Session Limit Discard Count :0
Fib Miss Discard Count :0
-->Transmit Packets : 5041
-->Transmit Bytes : 2272053
-->Drop Packets :0
<--Transmit Packets : 3330
<--Transmit Bytes : 1794897
<--Drop Packets :0
-----------------------------------------------------------------
<BRAS> display nat user-information slot 10 verbose
# Verify that session table information on the master and backup service boards is
correct.
<BRAS> display nat session table slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Current total sessions: 709.
udp: 10.1.0.253:28195[192.0.2.0:1723]-->*:*
udp: 10.1.0.253:20069[192.0.2.0:1727]--> *:*
udp: 10.1.0.253:59556[192.0.2.0:1085]--> *:*
udp: 10.1.0.253:28384[192.0.2.0:2047]--> *:*
<BRAS> display nat session table slot 10
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 10
Current total sessions: 709.
udp: 10.1.0.253:28195[192.0.2.0:1723]-->*:*
udp: 10.1.0.253:20069[192.0.2.0:1727]--> *:*
udp: 10.1.0.253:59556[192.0.2.0:1085]--> *:*
udp: 10.1.0.253:28384[192.0.2.0:2047]--> *:*
# Run the display nat statistics command to view the number of sent and
received packets on the master service board.
<BRAS> display nat statistics received slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
---------------------------------------------------------------------
Packets received from interface :632014772
Packets received from mainboard :29450
----End
Configuration Files
● BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat session-table size 6 slot 10
active nat session-table size 6 slot 3
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
active nat bandwidth-enhance 40 slot 3
#
radius-server group nat-pppoe-radius
radius-server authentication 192.168.10.10 1824 weight 0
radius-server accounting 192.168.10.10 1825 weight 0
radius-server shared-key-cipher %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
#
ip pool nat-pppoe-pool-1 bas local
gateway 10.1.0.1 255.255.0.0
section 0 10.1.0.2 10.1.0.255
dns-server 192.168.224.68 192.168.225.68
#
acl number 2011
rule permit source 10.1.0.0 0.0.255.255
#
acl number 7001
rule permit ip source user-group pppoe-nat-1
#
acl number 7002
rule permit ip source user-group pppoe-nat-2
#
traffic classifier pppoe-nat-1 operator or
if-match acl 7001 precedence 1
#
traffic classifier pppoe-nat-2 operator or
if-match acl 7002 precedence 1
#
traffic behavior pppoe-nat-1
nat bind instance nat444-1
#
traffic behavior pppoe-nat-2
nat bind instance nat444-2
#
user-vlan 2010
pppoe-server bind Virtual-Template 1
bas
access-type layer2-subscriber default-domain authentication nat-pppoe
authentication-method ppp
#
return
Networking Requirements
As shown in Figure 1-114, a terminal user assigned a private IPv4 address
accesses an IPv6 metropolitan area network (MAN) through a customer premises
equipment (CPE). The CPE establishes a Dual-Stack Lite (DS-Lite) tunnel to a DS-
Lite device. The CPE transmits traffic with the private IPv4 address along the DS-
Lite tunnel to the DS-Lite device. The DS-Lite device decapsulates traffic, uses a
Network Address Translation (NAT) technique to translate the private IPv4 address
to a public IPv4 address, and forwards traffic to the IPv4 Internet. The DS-Lite
device is equipped with DS-Lite boards in slots 1 and 2. The DS-Lite
device's GE 0/1/1 is connected to an IPv6 MAN, and GE 0/1/2 is connected to the
Internet. IPv4 residential users need to access the IPv4 Internet through the IPv6
MAN. The broadband remote access server (BRAS) performs DS-Lite translation
for user packets. The users log in to the BRAS using IPv6. The CGN boards on the
BRAS perform 1:1 inter-board hot backup.
Configuration Roadmap
The configuration roadmap is as follows:
1. Load a GTL license that enables DS-Lite and configure DS-Lite sessions on
service boards.
Procedure
Step 1 Load a GTL license that enables DS-Lite and configure DS-Lite sessions on service
boards.
<BRAS> system-view
[~BRAS] vsm on-board-mode disable
[*BRAS] commit
[~BRAS] display license
Item name Item type Value Description
-------------------------------------------------------------
LME0CONN01 Resource 64 Concurrent Users(1k)
LME0NATDS01 Resource 32 2M NAT Session
[~BRAS] license
[*BRAS-license] active nat session-table size 16 slot 9
[*BRAS-license] active nat session-table size 16 slot 10
[*BRAS-license] active nat bandwidth-enhance 40 slot 9
[*BRAS-license] active nat bandwidth-enhance 40 slot 10
[*BRAS-license] active ds-lite vsuf slot 9
[*BRAS-license] active ds-lite vsuf slot 10
[*BRAS-license] commit
[~BRAS-license] quit
4. Create an IPv6 PD address pool and specify an address family transition router
(AFTR) name.
[~BRAS] ipv6 pool ipv6-pppoe-pd-1 bas delegation
[*BRAS-ipv6-pool-ipv6-pppoe-pd-1] dns-server 2001:db8:1:1:1::10
[*BRAS-ipv6-pool-ipv6-pppoe-pd-1] aftr-name zj-hz-aftr1.dualstack-lite.com
[*BRAS-ipv6-pool-ipv6-pppoe-pd-1] prefix ipv6-pppoe-pd-1
[*BRAS-ipv6-pool-ipv6-pppoe-pd-1] commit
[~BRAS-ipv6-pool-ipv6-pppoe-pd-1] quit
Step 11 Enable the DS-Lite device to advertise public routes. In the following example,
Intermediate System to Intermediate System (IS-IS) is used.
1. Configure IS-IS.
[~BRAS] isis 100
[*BRAS-isis-100] network-entity 10.1000.1000.1000.00
[*BRAS-isis-100] commit
[~BRAS-isis-100] quit
[~BRAS] isis 1000
[*BRAS-isis-1000] network-entity 10.1000.1000.1002.00
[*BRAS-isis-1000] ipv6 enable
[*BRAS-isis-1000] commit
[~BRAS-isis-1000] quit
4. Enable the DS-Lite device to advertise local IP routes to the IPv6 network and
address pool routes to the IPv4 network. Import local IP routes and address
pool routes to the IS-IS routing table.
[~BRAS] isis 1000
[*BRAS-isis-1000] ipv6 import-route unr
[*BRAS-isis-1000] commit
[~BRAS-isis-1000] quit
[~BRAS] isis 100
[*BRAS-isis-100] import-route unr
[*BRAS-isis-100] commit
[~BRAS-isis-100] quit
NOTE
If a public address pool is configured using a mask, run the network command to advertise
the route. Do not configure a black-hole summary route; otherwise, reverse traffic may fail
to be forwarded.
# Run the display device command to view the status of the master and slave
service boards.
<BRAS> display device
NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M's Device status:
Slot # Type Online Register Status Primary
-----------------------------------
1 VSU Present Registered Normal NA
2 VSU Present Registered Normal NA
17 MPU Present NA Normal Master
18 MPU Present Registered Normal Slave
19 SFU Present Registered Normal NA
21 SFU Present Registered Normal NA
23 CLK Present Registered Normal Master
24 CLK Present Registered Normal Slave
25 PWR Present Registered Normal NA
26 PWR Present Registered Normal NA
27 FAN Present Registered Normal NA
28 FAN Present Registered Normal NA
29 FAN Present Registered Normal NA
30 FAN Present Registered Normal NA
# Verify that user table information on the master and slave service boards is
correct.
<BRAS> display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Total number: 1.
---------------------------------------------------------------------------
User Type : Ds-Lite
IPv6Address : 2001:db8:2:2:2::12/128
User ID :-
VPN Instance :-
Address Group : dt-addr-group
DS-Lite Instance : ds-lite-1
Public IP : 11.1.1.1
Start Port : 1024
Port Range : 4096
Port Total : 4096
MTU : 1500
Extend Port Alloc Times :0
Extend Port Alloc Number :0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 1/0/1/0
Total/TCP/UDP/ICMP Rev Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Rev Session Current : 0/0/0/0
Total/TCP/UDP/ICMP Port Limit : 0/0/0/0
Total/TCP/UDP/ICMP Port Current : 1/0/1/0
Nat ALG Enable : ALL
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports :00000
Aging Time(s) :-
Left Time(s) :-
Port Limit Discard Count :0
Session Limit Discard Count :0
Fib Miss Discard Count :0
-->Transmit Packets : 84302
-->Transmit Bytes : 11970884
-->Drop Packets :0
<--Transmit Packets :0
<--Transmit Bytes :0
<--Drop Packets :0
------------------------------------------------------
# Verify that session table information on the master and backup service boards is
correct.
<BRAS> display nat session table slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Current total sessions: 1.
udp: 192.168.1.2:1024[11.1.1.1:1024]--> *:*
*:* -->11.1.1.1:1024[192.168.1.2:1024]
DS-Lite Instance: ds-lite-1
VPN:--->-
Tag:0x2,FixedTag:0x1, Status:hit, NPFlag:0x6, Create:2015-9-30 11:37:04,TTL:00:04:00 ,Left:00:04:00 , Master
AppProID: 0x0, CPEIP:2001:db8:2:2:2::12, FwdType:DSLITE
Dest-ip:1.1.1.2,Dest-port:1024
<BRAS> display nat session table slot 10 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 10
Current total sessions: 1.
udp: 192.168.1.2:1024[11.1.1.1:1024]--> *:*
*:* -->11.1.1.1:1024[192.168.1.2:1024]
DS-Lite Instance: ds-lite-1
VPN:--->-
Tag:0x2,FixedTag:0x1, Status:hit, NPFlag:0x6, Create:2015-9-30 11:37:04,TTL:00:04:00 ,Left:00:04:00 , Master
AppProID: 0x0, CPEIP:2001:db8:2:2:2::12, FwdType:DSLITE
Dest-ip:1.1.1.2,Dest-port:1024
# Run the display nat statistics command to view the number of sent and
received packets on the master service board.
<BRAS> display nat statistics received slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
---------------------------------------------------------------------------
Packets received from interface :632014772
Packets received from mainboard :29450
Packets received by nat entry :255587842
receive hrp packets from peer device :0
receive boardhrp packets from peer board :0
---------------------------------------------------------------------------
<BRAS> display nat statistics transmitted slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
--------------------------------------------------------------------------
Packets transmitted to interface :159142427
Packets transmitted to mainboard :22219
Seclog packets transmitted :0
Syslog packets transmitted :0
Userinfo log msg transmitted to cp :0
Transparent packet with nat :65080312
Transparent packet without nat :0
sessions sent by hrp :0
UserTbl sent by hrp :0
UserTbl sent by Boardhrp :0
sessions sent by Boardhrp :0
---------------------------------------------------------------------------
----End
Configuration Files
● BRAS configuration file
#
sysname BRAS
#
vsm on-board-mode disable
#
license
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
active ds-lite vsuf slot 9
active ds-lite vsuf slot 10
#
service-ha hot-backup enable
#
#
user-group ds-lite-1
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
accounting start-fail online
domain ds-lite-domain
authentication-scheme auth1
accounting-scheme acct1
prefix-assign-mode unshared
ipv6-pool ipv6-pppoe-nd-1
ipv6-pool ipv6-pppoe-pd-1
radius-server group rad-ser
user-group ds-lite-1 bind ds-lite instance ds-lite-1
user-basic-service-ip-type ipv6
#
interface gigabitethernet0/1/1
ipv6 enable
isis ipv6 enable 1000
#
interface gigabitethernet0/1/2
ip address 2.2.2.1 24
isis enable 100
#
return
Networking Requirements
In the centralized deployment scenario shown in Figure 1-115, a CGN device is
deployed close to the CR on the MAN core as a standalone device and equipped
with two CGN boards to implement 1:1 inter-board backup. A terminal user
assigned a private IPv4 address accesses an IPv6 metropolitan area network
(MAN) through the customer premises equipment (CPE). The CPE establishes a
Dual-Stack Lite (DS-Lite) tunnel to the CGN device. The CPE transmits traffic with
the private IPv4 address along the DS-Lite tunnel to the CGN device. The CGN
device decapsulates traffic, uses a Network Address Translation (NAT) technique
to translate the private IPv4 address to a public IPv4 address, and forwards traffic
to the IPv4 Internet.
Configuration Roadmap
The configuration roadmap is as follows:
1. Apply for NAT session resources and configure DS-Lite session resources and
the DS-Lite GTL license.
2. Enable HA hot backup.
3. Add the CPU of the master service board to a DS-Lite instance.
4. Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
5. Configure a traffic policy for the DS-Lite tunnel.
6. Configure basic DS-Lite functions.
7. Enable the CGN device to advertise routes.
Procedure
Step 1 Apply for NAT session resources and configure DS-Lite session resources and the
DS-Lite GTL license.
<HUAWEI> system-view
[~HUAWEI] sysname CGN
[*HUAWEI] commit
[~CGN] vsm on-board-mode disable
[*CGN] commit
[~CGN] display license
Item name Item type Value Description
-------------------------------------------------------------
[~CGN] license
[*CGN-license] active nat session-table size 16 slot 9
[*CGN-license] active nat session-table size 16 slot 10
[*CGN-license] active nat bandwidth-enhance 40 slot 9
[*CGN-license] active nat bandwidth-enhance 40 slot 10
[*CGN-license] active ds-lite vsuf slot 9
[*CGN-license] active ds-lite vsuf slot 10
[*CGN-license] commit
[~CGN-license] quit
Step 3 Add the CPU of the master service board to a DS-Lite instance.
[~CGN] service-location 1
[*CGN-service-location-1] location slot 9 backup slot 10
[*CGN-service-location-1] commit
[~CGN-service-location-1] quit
[~CGN] service-instance-group group1
[*CGN-instance-group-group1] service-location 1
[*CGN-instance-group-group1] commit
[~CGN-instance-group-group1] quit
[~CGN] ds-lite instance ds-lite-1 id 1
[*CGN-ds-lite-instance-ds-lite-1] service-instance-group group1
[*CGN-ds-lite-instance-ds-lite-1] commit
[~CGN-ds-lite-instance-ds-lite-1] quit
Step 4 Configure a local IPv6 address and a remote IPv6 address for a DS-Lite tunnel.
[~CGN] ds-lite instance ds-lite-1
[*CGN-ds-lite-instance-ds-lite-1] local-ipv6 2001:db8:2::12 prefix-length 128
[*CGN-ds-lite-instance-ds-lite-1] remote-ipv6 2001:db8:1:: prefix-length 41
[*CGN-ds-lite-instance-ds-lite-1] commit
[~CGN-ds-lite-instance-ds-lite-1] quit
# Verify that user table information on the master and slave service boards is
correct.
<CGN> display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break ...
Slot: 9
Total number: 2.
---------------------------------------------------------------------------
User Type : Ds-Lite
IPv6Address : 2001:db8:1::0221:0:0200:0001/128
User ID : -
VPN Instance : -
Address Group : group1
DS-Lite Instance : ds-lite-1
Public IP : 11.1.1.2
Start Port : 1024
Port Range : 4096
Port Total : 4096
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 26/0/26/0
Total/TCP/UDP/ICMP Port Limit : 20992/10240/10240/512
Total/TCP/UDP/ICMP Port Current : 8192/0/8192/0
Nat ALG Enable : ALL
Rbp Index/Rbp Status : 0/0/0
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports : 00000
Aging Time(s) : -
Left Time(s) : -
Port Limit Discard Count : 0
Session Limit Discard Count : 0
Fib Miss Discard Count : 0
-->Transmit Packets : 8192
-->Transmit Bytes : 19252
-->Drop Packets : 0
<--Transmit Packets : 0
------------------------------------------------------------
# Verify that user session table information on the master and slave service
boards is correct.
<CGN> display nat session table slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Current total sessions: 8192.
udp: 192.168.1.2:28195[11.1.1.2:1723]--> *:*
udp: 192.168.1.2:20069[11.1.1.2:1727]--> *:*
udp: 192.168.1.2:59556[11.1.1.2:1085]--> *:*
<CGN> display nat session table slot 10
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 10
Current total sessions: 8192.
udp: 192.168.1.2:28195[11.1.1.2:1723]--> *:*
udp: 192.168.1.2:20069[11.1.1.2:1727]--> *:*
udp: 192.168.1.2:59556[11.1.1.2:1085]--> *:*
# Run the display nat statistics command to view the number of sent and
received packets on the master service board.
<CGN> display nat statistics received slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
---------------------------------------------------------------------
Packets received from interface :632014772
Packets received from mainboard :29450
Packets received by nat entry :255587842
receive hrp packets from peer device :0
receive boardhrp packets from peer board :0
-----------------------------------------------------------------------
<CGN> display nat statistics transmitted slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
-----------------------------------------------------------------------
----End
Configuration Files
● CGN configuration file
#
sysname CGN
#
vsm on-board-mode disable
#
license
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active ds-lite vsuf slot 9
active ds-lite vsuf slot 10
#
service-ha hot-backup enable
service-ha delay-time 10
#
service-location 1
location slot 9 backup slot 10
#
service-instance-group group1
service-location 1
#
ds-lite instance ds-lite-1 id 1
port-range 4096
local-ipv6 2001:db8:2::12 prefix-length 128
remote-ipv6 2001:db8:1:: prefix-length 41
ds-lite address-group group1 group-id 1
section 0 11.1.1.1 mask 24
ds-lite outbound 3500 address-group group1
ds-lite alg all
ds-lite filter mode full-cone
#
acl ipv6 3500
rule permit ipv6 source 2001:db8:1:: 41 destination 2001:db8:2::12 128
#
traffic classifier c1 operator or
if-match ipv6 acl 3500 precedence 1
#
traffic behavior b1
ds-lite bind instance ds-lite-1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
interface gigabitethernet 0/1/1
ip address 10.1.1.1 24
traffic-policy p1 inbound
#
return
Networking Requirements
In a centralized NAT64 networking scenario shown in Figure 1-116, a NAT64 CGN
device is deployed close to the CR on the MAN core and equipped with two CGN
boards. IPv6 users access the IPv4 network through a BRAS, CR, and NAT64 CGN
device in sequence. The NAT64 CGN device translates IPv6 addresses of enterprise
users to external IPv4 addresses so that the enterprise users can access the IPv4
Internet.
Configuration Roadmap
The configuration roadmap is as follows:
1. Apply for and configure NAT64 session resources and configure a NAT64 GTL
license file.
Procedure
Step 1 Apply for and configure NAT64 session resources and configure a NAT64 GTL
license file.
<HUAWEI> display license
Item name Item type Value Description
-------------------------------------------------------------
LME0CONN01 Resource 64 Concurrent Users(1k)
LME0NATDS00 Resource 16 2M NAT Session
..................................................
Item name (View)Resource License Command-line
-------------------------------------------------------------
LME0NATDS00 (Sys)nat session-table size table-size slot slot-id
Master board license state: Normal.
<HUAWEI> system-view
[~HUAWEI] sysname NAT64_CGN
[*HUAWEI] commit
[~NAT64_CGN] vsm on-board-mode disable
[*NAT64_CGN] commit
[~NAT64_CGN] license
[*NAT64_CGN-license] active nat session-table size 16 slot 9
[*NAT64_CGN-license] active nat session-table size 16 slot 10
[*NAT64_CGN-license] active nat bandwidth-enhance 40 slot 9
[*NAT64_CGN-license] active nat bandwidth-enhance 40 slot 10
[*NAT64_CGN-license] active nat64 vsuf slot 9
[*NAT64_CGN-license] active nat64 vsuf slot 10
[*NAT64_CGN-license] commit
[~NAT64_CGN-license] quit
Step 2 Bind the CPU of each service board to the NAT64 instance.
[~NAT64_CGN] service-location 1
[*NAT64_CGN-service-location-1] location slot 9 backup slot 10
[*NAT64_CGN-service-location-1] commit
[~NAT64_CGN-service-location-1] quit
[~NAT64_CGN] service-instance-group group1
[*NAT64_CGN-instance-group-group1] service-location 1
[*NAT64_CGN-instance-group-group1] commit
[~NAT64_CGN-instance-group-group1] quit
[~NAT64_CGN] nat64 instance nat64-1 id 1
[*NAT64_CGN-nat64-instance-nat64-1] service-instance-group group1
[*NAT64_CGN-nat64-instance-nat64-1] commit
[~NAT64_CGN-nat64-instance-nat64-1] quit
Step 4 Configure a traffic classification rule, a NAT64 traffic behavior, and a NAT64 traffic
distribution policy, and apply the NAT64 traffic distribution policy.
1. Configure an ACL-based traffic classification rule so that only hosts on the
internal network segment 2001:db8::1:1110/126 can access the IPv4 Internet
through the NAT64 prefix 64:FF9B::/96 (the IPv4 address 192.168.0.133 is
C0A8:0085 in hexadecimal, so it is represented as 64:FF9B::C0A8:85).
[~NAT64_CGN] acl ipv6 number 3003
[*NAT64_CGN-acl6-adv-3003] rule 5 permit ipv6 source 2001:db8::1:1112 126 destination
64:FF9B::C0A8:85 96
[*NAT64_CGN-acl6-adv-3003] commit
[~NAT64_CGN-acl6-adv-3003] quit
NOTE
If multiple behavior settings are configured, packets are matched against them in a
top-to-bottom order.
4. Configure a NAT64 traffic diversion policy to associate the traffic classifier
with the traffic behavior.
[~NAT64_CGN] traffic policy p1
[*NAT64_CGN-trafficpolicy-p1] classifier c1 behavior b1
[*NAT64_CGN-trafficpolicy-p1] commit
[~NAT64_CGN-trafficpolicy-p1] quit
5. Configure an IPv6 address in the user-side interface view.
[~NAT64_CGN] interface GigabitEthernet 0/1/1
[~NAT64_CGN-GigabitEthernet0/1/1] ipv6 enable
[*NAT64_CGN-GigabitEthernet0/1/1] ipv6 address 2001:db8::1:110e 126
6. Apply the NAT64 traffic diversion policy in the user-side interface view.
[*NAT64_CGN-GigabitEthernet0/1/1] traffic-policy p1 inbound
[*NAT64_CGN-GigabitEthernet0/1/1] commit
[~NAT64_CGN-GigabitEthernet0/1/1] quit
7. Enable the NAT64 device to advertise public routes.
The NAT64 device must advertise private prefix routes and public address
pool routes. Private network VPNs are not supported.
Step 5 Verify the configuration.
# Verify the status of the active and standby service boards.
# Verify that the public network routes are properly advertised.
[NAT64_CGN] display ip routing-table
Route Flags: R - relay, D - download to fib
-------------------------------------------------------------------
# Verify that user table information on the master and slave service boards is
correct.
[NAT64_CGN] display nat user-information slot 9 verbose
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Total number: 2.
--------------------------------------------------------------
User Type : NAT64
IPv6Address : 2001:db8:2068::0002/128
User ID :-
VPN Instance :-
Address Group : nat64-group1-1
NoPAT Address Group :-
NAT64 Instance : nat64-1
Public IP : 198.51.100.0
NoPAT Public IP : 0.0.0.0
Start Port : 1024
Port Range : 4096
Port Total : 4096
MTU : 1500
Extend Port Alloc Times :0
Extend Port Alloc Number :0
First/Second/Third Extend Port Start : 0/0/0
Total/TCP/UDP/ICMP Session Limit : 8192/10240/10240/512
Total/TCP/UDP/ICMP Session Current : 3/0/2/1
Total/TCP/UDP/ICMP Rev Session Limit : 0/0/0/0
Total/TCP/UDP/ICMP Rev Session Current: 0/0/0/0
Nat ALG Enable : FTP/HTTP
Token/TB/TP : 0/0/0
Port Forwarding Flag : Non Port Forwarding
Port Forwarding Ports :00000
Aging Time(s) :-
Left Time(s) :-
Port Limit Discard Count :0
Session Limit Discard Count :0
Fib Miss Discard Count :0
-->Transmit Packets : 230853
-->Transmit Bytes : 50325954
-->Drop Packets :0
<--Transmit Packets :0
<--Transmit Bytes :0
<--Drop Packets :0
--------------------------------------------------------------
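The Start Port and Port Range fields above mean that this user sources all translated traffic from public IP 198.51.100.0 using ports 1024 through 5119. A small sketch of that block arithmetic (illustrative only):

```python
def port_block(start: int, size: int) -> range:
    """Public source ports a CGN user may use on its assigned public IP."""
    return range(start, start + size)

# Matches the output above: Start Port 1024, Port Range 4096.
blk = port_block(1024, 4096)
print(blk.start, blk[-1], len(blk))  # → 1024 5119 4096
```

Allocating contiguous port blocks per user is what lets an operator trace a public IP/port pair in a log back to a single subscriber.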
# Verify that user session table information on the master and slave service
boards is correct.
[NAT64_CGN] display nat session table slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
Current total sessions:3.
icmp: [2001:db8:2068::0002]:200(198.51.100.0:200)--> *:32768
udp: [2001:db8:2068::0002]:-( 198.51.100.0:-)-->[2001:db8:2091::]:-(0.0.0.0:-), frag_ID:100
udp: [2001:db8:2068::0002]:1024(198.51.100.0:1024)--> *:*
# Run the display nat statistics command to view the number of sent and
received packets on the master service board.
[NAT64_CGN] display nat statistics received slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
------------------------------------------------------------------------
Packets received from interface :632014772
Packets received from mainboard :29450
Packets received by nat entry :255587842
-------------------------------------------------------------------------
[NAT64_CGN] display nat statistics transmitted slot 9
This operation will take a few minutes. Press 'Ctrl+C' to break
Slot: 9
--------------------------------------------------------------
Packets transmitted to interface :159142427
Packets transmitted to mainboard :22219
Seclog packets transmitted :0
SYSLOG packets transmitted :0
Userinfo log msg transmitted to cp :0
Transparent packet with nat :65080312
Transparent packet without nat :0
---------------------------------------------------------------
----End
Configuration Files
● NAT64 CGN device configuration file
#
sysname NAT64_CGN
#
vsm on-board-mode disable
#
license
active nat session-table size 16 slot 9
active nat session-table size 16 slot 10
active nat bandwidth-enhance 40 slot 9
active nat bandwidth-enhance 40 slot 10
active nat64 vsuf slot 9
active nat64 vsuf slot 10
#
acl ipv6 number 3003
rule 5 permit ipv6 source 2001:db8::1:1112 126 destination 64:FF9B::C0A8:85 96
#
traffic classifier c1 operator or
if-match ipv6 acl 3003 precedence 1
#
traffic behavior b1
nat64 bind instance nat64-1
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
Networking Requirements
In the centralized networking scenario shown in Figure 1-117, a NAT service
board is deployed in slot 9 on CGN1 and another NAT service board is deployed in
slot 9 on CGN2. CGN1 and CGN2, between which a VRRP channel is established
over GE interfaces, are deployed close to two SRs on the MAN core as standalone
devices. CPU0 of the NAT service board in slot 9 on CGN1 and CPU0 of the NAT
service board in slot 9 on CGN2 implement NAT inter-chassis hot backup. VRRP
enabled for the channel determines the master/backup status of the CGN devices,
and the service board status is associated with VRRP.
In this example, interface1, interface2, and interface3 represent GE0/1/1, GE0/1/2, and
GE0/1/3, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Set the number of sessions supported by the service board in slot 9 to 6M.
2. Enable HA hot backup.
3. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel.
4. Create and configure a VRRP group.
5. Associate HA with VRRP.
6. Bind the service-location group to the VRRP group.
7. Create a service-instance group and bind it to the service-location group.
8. Create a remote backup service (RBS) and bind the service-instance group to
the RBS.
9. Create a NAT instance and bind it to the service-instance group.
10. Configure a NAT traffic policy and a NAT conversion policy.
Data Preparation
To complete the configuration, you need the following data:
No. Data
2 CPU ID and slot ID of the active CPU on CGN1's service board (CPU 0
in slot 9 in this example)
3 CPU ID and slot ID of the standby CPU on CGN2's service board (CPU
0 in slot 9 in this example)
6 ID of a VRRP group
Procedure
Step 1 Configure interface IP addresses. For configuration details, see Configuration Files
in this section.
Step 2 Set the number of sessions supported by the service boards in slot 9 to 6M on
CGN1 and CGN2.
# Configure the master device CGN1.
<HUAWEI> system-view
[~HUAWEI] sysname CGN1
[*HUAWEI] commit
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[~CGN1-license] active nat session-table size 6 slot 9
[*CGN1-license] active nat bandwidth-enhance 40 slot 9
[*CGN1-license] commit
[~CGN1-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 4 Create a service-location group on CGN1 and CGN2, configure members for HA
dual-device inter-chassis backup, and configure a VRRP channel.
# Create service-location group 1 on CGN1, add CPU 0 in slot 9 as an HA dual-
device inter-chassis backup member, and set the local VRRP outbound interface to
GE 0/1/1 and the peer IP address to 10.1.1.2.
[~CGN1] service-location 1
[*CGN1-service-location-1] location slot 9
[*CGN1-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
# On CGN2, enter the view of GE0/1/1, create VRRP group 1, and set the virtual IP
address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, set CGN2's priority in the VRRP group to 150, and set the VRRP recovery
delay to 15s.
[~CGN2] interface GigabitEthernet0/1/1
[*CGN2-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*CGN2-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
Step 6 Associate the service-location group with the VRRP group on each device.
# On CGN1, enter the view of GE0/1/1, and associate service-location group 1,
user-side interface, and network-side interface with VRRP group 1.
[~CGN1] interface GigabitEthernet 0/1/1
[~CGN1-GigabitEthernet0/1/1] vrrp vrid 1 track service-location 1 reduced 60
[*CGN1-GigabitEthernet0/1/1] vrrp vrid 1 track interface GigabitEthernet 0/1/2 reduced 60
[*CGN1-GigabitEthernet0/1/1] vrrp vrid 1 track interface GigabitEthernet 0/1/3 reduced 60
[*CGN1-GigabitEthernet0/1/1] commit
[~CGN1-GigabitEthernet0/1/1] quit
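The reduced 60 values are deliberate relative to the configured priorities (200 on CGN1, 150 on CGN2): any single tracked failure must drop the master below the backup's priority so that a switchover occurs. The arithmetic can be sketched as follows (a model of the priority calculation only, not of the VRRP state machine):

```python
def effective_priority(configured: int, failed_reductions: list[int]) -> int:
    """Running VRRP priority after subtracting the 'reduced' value of each
    tracked object (interface, service-location, BFD session) that has failed."""
    return max(configured - sum(failed_reductions), 0)

# CGN1 runs at 200. One tracked failure (reduced 60) drops it to 140,
# below CGN2's configured 150, so CGN2 preempts and becomes master.
assert effective_priority(200, []) == 200
assert effective_priority(200, [60]) == 140
assert effective_priority(200, [60, 60]) == 80
```

Had the reduction been, say, 40, a single interface failure would leave CGN1 at 160 and no switchover would happen.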
# Run the display vrrp 1 command on CGN1 and CGN2 to view the master/
backup VRRP status, which reflects the master/backup status of the service-
location group. State in the command output indicates the CGN device's status.
[~CGN1] display vrrp 1
GigabitEthernet 0/1/1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier :3
TimerRun :1s
TimerConfig :1s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track IF : GigabitEthernet0/1/2 Priority Reduced : 60
IF State : UP
Track IF : GigabitEthernet0/1/3 Priority Reduced : 60
IF State : UP
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that CGN1 is the master device.
[~CGN2] display vrrp 1
GigabitEthernet0/1/1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier :3
TimerRun :1s
TimerConfig :1s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:26:40 UTC+08:00
Last Change Time : 2011-10-18 14:02:22 UTC+08:00
Step 7 Bind the service-location group to the VRRP group on CGN1 and CGN2.
# Bind service-location group 1 to VRRP group 1 on CGN1.
[~CGN1] service-location 1
[~CGN1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/1/1
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
Step 8 Create a service-instance group on each of CGN1 and CGN2 and bind it to the
service-location group.
# Create a service-instance group named group1 on CGN1 and bind it to service-
location group 1.
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] service-location 1
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit
Step 9 Create a remote backup service (RBS) and bind the service-instance group to the
RBS.
# Configure CGN1.
[~CGN1] remote-backup-service natrbs
[*CGN1-rm-backup-srv-natrbs] peer 10.1.1.2 source 10.1.1.1 port 1024
[*CGN1-rm-backup-srv-natrbs] commit
[~CGN1-rm-backup-srv-natrbs] quit
[~CGN1] service-instance-group group1
[~CGN1-service-instance-group-group1] remote-backup-service natrbs
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit
# Configure CGN2.
[~CGN2] remote-backup-service natrbs
[*CGN2-rm-backup-srv-natrbs] peer 10.1.1.1 source 10.1.1.2 port 1024
[*CGN2-rm-backup-srv-natrbs] commit
[~CGN2-rm-backup-srv-natrbs] quit
[~CGN2] service-instance-group group1
[~CGN2-service-instance-group-group1] remote-backup-service natrbs
[*CGN2-service-instance-group-group1] commit
[~CGN2-service-instance-group-group1] quit
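Note that the two RBS configurations are mirror images of each other: CGN1's peer is CGN2's source and vice versa, and the port must match on both ends. A tiny sketch of that consistency check (illustrative only, not a device feature):

```python
def rbs_mirrored(a: tuple[str, str, int], b: tuple[str, str, int]) -> bool:
    """True if two 'peer <ip> source <ip> port <n>' settings are mirror images."""
    a_peer, a_src, a_port = a
    b_peer, b_src, b_port = b
    return a_peer == b_src and a_src == b_peer and a_port == b_port

cgn1 = ("10.1.1.2", "10.1.1.1", 1024)  # peer, source, port configured on CGN1
cgn2 = ("10.1.1.1", "10.1.1.2", 1024)  # peer, source, port configured on CGN2
assert rbs_mirrored(cgn1, cgn2)
```

A mismatch in any of the three values prevents the backup channel from coming up.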
Step 10 Create a NAT instance on each of CGN1 and CGN2 and bind the instances to a
service-instance group.
# Create a NAT instance named nat on CGN1 and bind it to the service-instance
group named group1.
[~CGN1] nat instance nat id 1
[*CGN1-nat-instance-nat] service-instance-group group1
[*CGN1-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
[*CGN1-nat-instance-nat] commit
[~CGN1-nat-instance-nat] quit
# Create a NAT instance named nat on CGN2 and bind it to the service-instance
group named group1.
[~CGN2] nat instance nat id 1
[*CGN2-nat-instance-nat] service-instance-group group1
[*CGN2-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
[*CGN2-nat-instance-nat] commit
[~CGN2-nat-instance-nat] quit
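The address group 11.11.11.100 to 11.11.11.105 gives the NAT instance six public addresses. A rough capacity sketch, assuming ports 1024 through 65535 are usable per address (actual per-platform port allocation differs):

```python
import ipaddress

def pat_capacity(first: str, last: str, ports_per_ip: int = 64512) -> int:
    """Rough concurrent-session ceiling for a PAT address group.
    ports_per_ip assumes ports 1024-65535 are available on each public IP."""
    n_ips = int(ipaddress.IPv4Address(last)) - int(ipaddress.IPv4Address(first)) + 1
    return n_ips * ports_per_ip

# address-group1 spans six public addresses: 11.11.11.100-11.11.11.105.
print(pat_capacity("11.11.11.100", "11.11.11.105"))  # → 387072
```

Sizing the pool this way is how you check that the address group can carry the expected subscriber load before sessions start hitting port exhaustion.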
Step 11 Configure a NAT traffic diversion policy and a NAT conversion policy on CGN1 and
CGN2. For details, see "Example for Configuring Centralized NAT" in NAT and IPv6
Transition > NAT Configuration.
# Configure a NAT traffic diversion policy and a NAT conversion policy on CGN1.
c. Define a traffic behavior behavior1 and bind it to the NAT instance nat.
[~CGN1] traffic behavior behavior1
[*CGN1-behavior-behavior1] nat bind instance nat
[*CGN1-behavior-behavior1] commit
[~CGN1-behavior-behavior1] quit
d. Create a traffic policy policy1 to associate all ACL rules with the traffic
behaviors.
[~CGN1] traffic policy policy1
[*CGN1-policy-policy1] classifier classifier1 behavior behavior1
[*CGN1-policy-policy1] commit
[~CGN1-policy-policy1] quit
e. Apply the NAT traffic diversion policy in the GE 0/1/2 interface view.
[~CGN1] interface gigabitEthernet 0/1/2
[~CGN1-GigabitEthernet0/1/2] ip address 192.168.10.1 24
[*CGN1-GigabitEthernet0/1/2] traffic-policy policy1 inbound
[*CGN1-GigabitEthernet0/1/2] commit
[~CGN1-GigabitEthernet0/1/2] quit
NOTE
The configuration of CGN2 is similar to that of CGN1. For configuration details, see CGN2
configuration file in this section.
# Run the display nat instance nat command on CGN1 and CGN2 to view NAT
configurations.
[~CGN1] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100
11.11.11.105
nat outbound 3001 address-group address-group1
[~CGN2] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100
11.11.11.105
nat outbound 3001 address-group address-group1
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
acl number 3001
rule 1 permit ip source 192.168.10.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
service-ha hot-backup enable
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1
remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
#
service-instance-group group1
service-location 1
remote-backup-service natrbs
#
remote-backup-service natrbs
peer 10.1.1.2 source 10.1.1.1 port 1024
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 preempt-mode timer delay 1500
vrrp vrid 1 track interface GigabitEthernet 0/1/2 reduced 60
vrrp vrid 1 track interface GigabitEthernet 0/1/3 reduced 60
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet 0/1/2
undo shutdown
ip address 192.168.10.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet 0/1/3
ip address 11.2.1.1 255.255.0.0
#
ospf 10
default cost inherit-metric
import-route unr
opaque-capability enable
area 0.0.0.0
Networking Requirements
On the network shown in Figure 1-118, in distributed deployment mode, a NAT
service board is installed in slot 9 on BRAS1 and another NAT service board is
installed in slot 9 on BRAS2. A VRRP channel is configured on GE interfaces of
BRAS1 and BRAS2. NAT inter-chassis hot backup is implemented on CPU 0 of the
NAT service board in slot 9 on BRAS1 and CPU 0 of the NAT service board in slot 9
on BRAS2. The NAT service's master/backup status is determined by VRRP, and the
service board status is associated with VRRP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Set the number of sessions supported by the service board in slot 9 to 6M.
Data Preparation
To complete the configuration, you need the following data:
No. Data
6 ID of a VRRP group
No. Data
13 Names of the user group and user domain, and AAA schemes on the
devices at both ends
Procedure
Step 1 Set the number of sessions supported by the service boards in slot 9 on the master
and backup devices to 6M.
# Configure the master device (BRAS1).
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 3 Create a service-location group on BRAS1 and BRAS2, configure members for HA
dual-device inter-chassis backup, and configure a VRRP channel.
# Create service-location group 1 on BRAS1, add CPU 0 in slot 9 as an HA dual-
device inter-chassis backup member, and set the local VRRP outbound interface to
GE 0/1/1 and the peer IP address to 10.1.1.2.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] location slot 9
[*BRAS1-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
# On BRAS2, enter the view of GE 0/1/1, create VRRP group 1, and set the virtual
IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, set BRAS2's priority in the VRRP group to 150, and set the VRRP recovery
delay to 15s.
[~BRAS2] interface GigabitEthernet 0/1/1
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS2-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 priority 150
[*BRAS2-GigabitEthernet0/1/1] vrrp recover-delay 15
[*BRAS2-GigabitEthernet0/1/1] commit
[~BRAS2-GigabitEthernet0/1/1] quit
Step 5 Associate the service-location group with the VRRP group on each device.
# On BRAS1, enter the view of GE 0/1/1, and associate service-location group 1
with VRRP group 1.
[~BRAS1] interface GigabitEthernet 0/1/1
# Run the display vrrp 1 command on BRAS1 and BRAS2 to view the master/
backup VRRP status, which reflects the master/backup status of the service-
location group. State in the command output indicates the CGN device's status.
[~BRAS1] display vrrp 1
GigabitEthernet 0/1/1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 400 s
Hold Multiplier :3
TimerRun :1s
TimerConfig :1s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that BRAS1 is the master device.
[~BRAS2] display vrrp 1
GigabitEthernet0/1/1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier :3
TimerRun :1s
TimerConfig :1s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:26:40 UTC+08:00
Last Change Time : 2011-10-18 14:02:22 UTC+08:00
Step 6 Bind the service-location group to the VRRP group on BRAS1 and BRAS2.
Step 7 Create a service-instance group on each of BRAS1 and BRAS2 and bind it to the
service-location group.
# Create a service-instance group named group1 on BRAS1 and bind it to service-
location group 1.
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
[*BRAS1-nat-instance-nat] commit
[~BRAS1-nat-instance-nat] quit
Step 9 Configure user information (user group named natbras, IP address pool named
natbras, user domain named natbras, and AAA) and bind the user group to the
NAT instance named nat on each of the master and backup devices.
# Configure BRAS1.
[~BRAS1] user-group natbras
[~BRAS1] commit
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS1-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server traffic-unit kbyte
[~BRAS1-radius-rd1] quit
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] commit
[~BRAS1-aaa-authen-auth1] quit
[~BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[~BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
[*BRAS1-aaa-domain-natbras] radius-server group rd1
[*BRAS1-aaa-domain-natbras] commit
[~BRAS1-aaa-domain-natbras] ip-pool natbras
[~BRAS1-aaa-domain-natbras] user-group natbras bind nat instance nat
[*BRAS1-aaa-domain-natbras] quit
[~BRAS1-aaa] quit
# Configure BRAS2.
[~BRAS2] user-group natbras
[~BRAS2] commit
[~BRAS2] ip pool natbras bas local
[*BRAS2-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS2-ip-pool-natbras] commit
[~BRAS2-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS2-ip-pool-natbras] quit
[~BRAS2] radius-server group rd1
[*BRAS2-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS2-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS2-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS2-radius-rd1] commit
[~BRAS2-radius-rd1] radius-server type plus11
[~BRAS2-radius-rd1] radius-server traffic-unit kbyte
[~BRAS2-radius-rd1] quit
[~BRAS2] aaa
[~BRAS2-aaa] authentication-scheme auth1
[*BRAS2-aaa-authen-auth1] authentication-mode radius
[*BRAS2-aaa-authen-auth1] commit
[~BRAS2-aaa-authen-auth1] quit
[~BRAS2-aaa] accounting-scheme acct1
[*BRAS2-aaa-accounting-acct1] accounting-mode radius
[~BRAS2-aaa-accounting-acct1] commit
[~BRAS2-aaa-accounting-acct1] quit
[~BRAS2-aaa] domain natbras
[*BRAS2-aaa-domain-natbras] authentication-scheme auth1
[*BRAS2-aaa-domain-natbras] accounting-scheme acct1
[*BRAS2-aaa-domain-natbras] radius-server group rd1
[*BRAS2-aaa-domain-natbras] commit
[~BRAS2-aaa-domain-natbras] ip-pool natbras
[~BRAS2-aaa-domain-natbras] user-group natbras bind nat instance nat
[*BRAS2-aaa-domain-natbras] quit
[~BRAS2-aaa] quit
Step 10 Configure a traffic classification rule, a NAT behavior, and a NAT traffic policy and
apply the NAT traffic policy on each of the master and backup devices. For details,
see "Example for Configuring Distributed NAT" in NAT and IPv6 Transition > NAT
Configuration.
5. Define a traffic policy to associate the traffic classifier with the traffic
behavior.
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 11 On each of the master and backup devices, configure a user-side VRRP group
(between BRAS1/BRAS2 and SWITCH) and enable it to track the service-location
group. If the service-location group is not tracked, a CGN board failure cannot
trigger a master/backup BRAS switchover. As a result, new distributed NAT users
cannot go online.
# Configure BRAS1 (between BRAS1 and SWITCH).
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/2.2] ip address 192.168.2.10 255.255.255.0
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS1-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 150
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/2.2] vrrp recover-delay 20
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
# Configure BRAS2.
[~BRAS2] service-instance-group group1
[*BRAS2-service-instance-group-group1] remote-backup-service natbras
[*BRAS2-service-instance-group-group1] commit
[~BRAS2-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] nat instance nat id 1
[~BRAS2-nat-instance-nat] service-instance-group group1
[*BRAS2-nat-instance-nat] commit
[~BRAS2-nat-instance-nat] quit
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
acl number 3001
rule 10 permit ip source 192.168.0.0 0.0.255.255
#
acl number 6001
rule 1 permit ip source user-group natbras
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
service-ha hot-backup enable
service-ha delay-time 10
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1
remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
#
service-instance-group group1
service-location 1
remote-backup-service natbras
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
#
user-group natbras
#
ip pool natbras bas local
interface Virtual-Template1
ppp authentication-mode auto
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet0/1/2.10
user-vlan 2010
pppoe-server bind Virtual-Template 1
remote-backup-profile natbras
bas
access-type layer2-subscriber default-domain authentication natbras
authentication-method ppp
#
interface GigabitEthernet0/1/2.2
vlan-type dot1q 2002
ip address 192.168.2.100 255.255.255.0
vrrp vrid 2 virtual-ip 192.168.2.200
admin-vrrp vrid 2
vrrp vrid 2 priority 120
vrrp vrid 2 track service-location 1
#
return
Example for Configuring Distributed NAT444 Inter-Chassis Hot Backup (Global
VE Interface Scenario)
This section provides an example for configuring NAT inter-chassis hot backup
(global VE interface scenario) in distributed deployment mode.
Networking Requirements
In distributed deployment networking shown in Figure 1-119, NAT service boards
are installed in slot 9 on BRAS1 and slot 9 on BRAS2, respectively. A VRRP channel
is established between the two BRASs through global VE interfaces. CPU 0 on the
NAT service board in slot 9 of BRAS1 works with CPU 0 on the NAT service board in
slot 9 of BRAS2 to implement NAT inter-chassis hot backup. The NAT service's
master/backup status is determined by VRRP, and the service board status is
associated with VRRP.
Interface 1, interface 2.2, interface 3, and interface 4 in this example represent global VE
1.1, GE 0/1/2.2, GE 0/1/3, and GE 0/1/4, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Set the number of sessions supported by the service board in slot 9 to 6M.
2. Configure HA hot backup, with NAT service information backed up.
3. Assign IP addresses to interfaces, configure routes, and enable MPLS on
loopback interfaces.
4. Create global VE interfaces on the master and backup devices.
5. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel between BRAS1 and BRAS2.
6. Create and configure VRRP groups.
7. Configure BFD and configure the VRRP groups to track the BFD session status.
8. Associate HA with VRRP.
9. Bind the service-location group to the VRRP groups.
10. Create a service-instance group and bind it to the service-location group.
11. Create NAT instances.
12. Configure user information (user group, IP address pool, user domain, and
AAA), configure RADIUS authentication on the BRAS, and bind the user group
to the NAT instance.
13. Configure a NAT traffic diversion policy and a NAT conversion policy on
BRAS1 and BRAS2.
14. Configure a user-side VRRP group on BRAS1 and BRAS2 that are connected to
the switch.
15. Configure an RBS and bind a service-instance group to the RBS.
Data Preparation
To complete the configuration, you need the following data:
No. Data
6 ID of a VRRP group
13 Names of the user group and user domain, and AAA schemes on the
devices at both ends
Procedure
Step 1 Set the number of sessions supported by the service boards in slot 9 on the master
and backup devices to 6M.
# Configure BRAS1.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
# Configure BRAS2.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS2
[*HUAWEI] commit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 7 Create a service-location group on BRAS1 and BRAS2, configure members for HA
dual-device inter-chassis backup, and configure a VRRP channel.
# Create service-location group 1 on BRAS1, add CPU 0 in slot 9 as an HA dual-
device inter-chassis backup member, and set the local VRRP outbound interface to
global VE 1.1 and the peer IP address to 10.1.1.2.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] location slot 9
[*BRAS1-service-location-1] remote-backup interface Global-VE1.1 peer 10.1.1.2
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
# On BRAS2, enter the view of global VE 1.1 interface, create VRRP group 1, and
set the virtual IP address of the VRRP group to 10.1.1.3. Configure the VRRP group
as an mVRRP group, set BRAS2's priority in the VRRP group to 150, and set the
VRRP recovery delay to 15s.
[~BRAS2] interface Global-VE1.1
[*BRAS2-Global-VE1.1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS2-Global-VE1.1] admin-vrrp vrid 1 ignore-if-down
[*BRAS2-Global-VE1.1] vrrp vrid 1 priority 150
[*BRAS2-Global-VE1.1] vrrp recover-delay 15
[*BRAS2-Global-VE1.1] commit
[~BRAS2-Global-VE1.1] quit
Step 9 Enable BFD on the master and backup devices and configure VRRP groups to track
the BFD session status.
# Enable BFD on BRAS2 and configure the VRRP group to track the BFD session
status.
[~BRAS2] bfd
[*BRAS2-bfd] commit
[~BRAS2-bfd] quit
[~BRAS2] bfd peer1 bind peer-ip 10.1.1.1 source-ip 10.1.1.2 auto
[*BRAS2-bfd-session-peer1] commit
[~BRAS2-bfd-session-peer1] quit
[~BRAS2] interface Global-VE1.1
[*BRAS2-Global-VE1.1] vrrp vrid 1 timer advertise 20
[*BRAS2-Global-VE1.1] vrrp vrid 1 track bfd-session session-name peer1 peer
[*BRAS2-Global-VE1.1] commit
[~BRAS2-Global-VE1.1] quit
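Tracking the BFD session matters because VRRP's own failure detection scales with the advertisement timer. A simplified model (ignoring VRRP's skew time) shows why the 20 s advertisement interval configured above would be slow to fail over on its own:

```python
def vrrp_master_down_s(advertise_interval_s: float, hold_multiplier: int = 3) -> float:
    """Approximate time a VRRP backup waits before declaring the master down.
    Simplified: real VRRP adds a small priority-dependent skew time."""
    return hold_multiplier * advertise_interval_s

# With 'vrrp vrid 1 timer advertise 20' and the default hold multiplier of 3,
# the backup would wait roughly 60 s; tracking the BFD session lets it react
# as soon as BFD (millisecond-scale detection) declares the peer down.
assert vrrp_master_down_s(20) == 60
assert vrrp_master_down_s(1) == 3
```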
Step 10 Associate the service-location group with the VRRP group on each device.
# On BRAS1, enter the view of global VE 1.1 interface, and associate service-
location group 1 with VRRP group 1.
[~BRAS1] interface Global-VE1.1
[~BRAS1-Global-VE1.1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS1-Global-VE1.1] commit
[~BRAS1-Global-VE1.1] quit
# On BRAS2, enter the view of global VE 1.1 interface, and associate service-
location group 1 with VRRP group 1.
[~BRAS2] interface Global-VE1.1
[~BRAS2-Global-VE1.1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS2-Global-VE1.1] commit
[~BRAS2-Global-VE1.1] quit
# Run the display vrrp 1 command on BRAS1 and BRAS2 to view the master/
backup VRRP status, which reflects the master/backup status of the service-
location group. State in the command output indicates the BRAS status.
[~BRAS1] display vrrp 1
Global-VE1.1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 400 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that BRAS1 is the master device.
[~BRAS2] display vrrp 1
Global-VE1.1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:26:40 UTC+08:00
Last Change Time : 2011-10-18 14:02:22 UTC+08:00
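The reduced value in vrrp vrid 1 track service-location 1 reduced 60 works as a priority penalty: while the tracked service-location group is up, each BRAS runs at its configured priority (200 and 150 above); if BRAS1's CGN board fails, its running priority drops to 140 and BRAS2 takes over. The following Python sketch illustrates this election logic (a simplified model, not device firmware):

```python
# Simplified model of VRRP priority tracking: the running priority drops
# by 'reduced' when the tracked service-location group is down, and the
# router with the highest running priority becomes master.

def running_priority(config_priority: int, service_location_up: bool,
                     reduced: int) -> int:
    """Effective VRRP priority after applying service-location tracking."""
    return config_priority - (0 if service_location_up else reduced)

def elect_master(routers: dict[str, int]) -> str:
    """Return the router with the highest running priority."""
    return max(routers, key=routers.get)

# BRAS1 configured 200, BRAS2 configured 150, both track with 'reduced 60'.
normal = {
    "BRAS1": running_priority(200, True, 60),   # 200
    "BRAS2": running_priority(150, True, 60),   # 150
}
print(elect_master(normal))   # BRAS1 is master

# If BRAS1's CGN board fails, its service-location group goes down:
failure = {
    "BRAS1": running_priority(200, False, 60),  # 140
    "BRAS2": running_priority(150, True, 60),   # 150
}
print(elect_master(failure))  # BRAS2 becomes master
```
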
Step 11 Bind the service-location group to the VRRP group on BRAS1 and BRAS2.
# Bind service-location group 1 to VRRP group 1 on BRAS1.
[~BRAS1] service-location 1
[~BRAS1-service-location-1] vrrp vrid 1 interface Global-VE1.1
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
Step 12 Create a service-instance group on BRAS1 and BRAS2 and bind it to the
service-location group.
# Create a service-instance group named group1 on BRAS1 and bind it to service-
location group 1.
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
Step 14 Configure user information (user group named natbras, IP address pool named
natbras, user domain named natbras, and AAA) and bind the user group to the
NAT instance named nat on each of the master and backup devices.
# Configure BRAS1.
[~BRAS1] user-group natbras
[~BRAS1] commit
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.0.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
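As a quick sanity check on the pool definition above, the section addresses must fall within the gateway subnet (gateway 192.168.0.1 with mask 255.255.0.0, i.e. 192.168.0.0/16). A small Python check using the standard ipaddress module:

```python
# Verify that the BAS pool section (192.168.0.2-192.168.0.254) lies inside
# the gateway subnet 192.168.0.0/16 and count the assignable addresses.
import ipaddress

net = ipaddress.ip_network("192.168.0.1/16", strict=False)  # 192.168.0.0/16
start = ipaddress.ip_address("192.168.0.2")
end = ipaddress.ip_address("192.168.0.254")

print(start in net and end in net)   # True: section fits in the subnet
print(int(end) - int(start) + 1)     # 253 assignable addresses
```
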
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS1-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server traffic-unit kbyte
[~BRAS1-radius-rd1] quit
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] commit
[~BRAS1-aaa-authen-auth1] quit
[~BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[*BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
[*BRAS1-aaa-domain-natbras] radius-server group rd1
[*BRAS1-aaa-domain-natbras] commit
[~BRAS1-aaa-domain-natbras] ip-pool natbras
[~BRAS1-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS1-aaa-domain-natbras] quit
[~BRAS1-aaa] quit
# Configure BRAS2.
[~BRAS2] user-group natbras
[~BRAS2] commit
[~BRAS2] ip pool natbras bas local
[*BRAS2-ip-pool-natbras] gateway 192.168.0.1 255.255.0.0
[*BRAS2-ip-pool-natbras] commit
[~BRAS2-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS2-ip-pool-natbras] quit
[~BRAS2] radius-server group rd1
[*BRAS2-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS2-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS2-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS2-radius-rd1] commit
[~BRAS2-radius-rd1] radius-server type plus11
[~BRAS2-radius-rd1] radius-server traffic-unit kbyte
[~BRAS2-radius-rd1] quit
[~BRAS2] aaa
[~BRAS2-aaa] authentication-scheme auth1
[*BRAS2-aaa-authen-auth1] authentication-mode radius
[*BRAS2-aaa-authen-auth1] commit
[~BRAS2-aaa-authen-auth1] quit
[~BRAS2-aaa] accounting-scheme acct1
[*BRAS2-aaa-accounting-acct1] accounting-mode radius
[*BRAS2-aaa-accounting-acct1] commit
[~BRAS2-aaa-accounting-acct1] quit
[~BRAS2-aaa] domain natbras
[*BRAS2-aaa-domain-natbras] authentication-scheme auth1
[*BRAS2-aaa-domain-natbras] accounting-scheme acct1
[*BRAS2-aaa-domain-natbras] radius-server group rd1
[*BRAS2-aaa-domain-natbras] commit
[~BRAS2-aaa-domain-natbras] ip-pool natbras
[~BRAS2-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS2-aaa-domain-natbras] quit
[~BRAS2-aaa] quit
Step 15 Configure a traffic classification rule, a NAT behavior, and a NAT traffic policy on
BRAS1 and BRAS2, and apply the NAT traffic policy. For details, see "Example for
Configuring Distributed NAT" in IPv6 Transition > NAT Configuration.
5. Define a traffic policy to associate the traffic classifier with the traffic
behavior.
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see the
BRAS2 configuration file in this section.
Step 16 On each of the master and backup devices, configure a user-side VRRP group
(between BRAS1/BRAS2 and SWITCH) and enable it to track the service-location
group. If the service-location group is not tracked, a CGN board failure cannot
trigger a master/backup BRAS switchover. As a result, new distributed NAT users
cannot go online.
# Configure BRAS1 (between BRAS1 and SWITCH).
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/2.2] ip address 192.168.2.10 255.255.255.0
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS1-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 150
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/2.2] vrrp recover-delay 20
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
# Configure BRAS2.
[~BRAS2] service-instance-group group1
[*BRAS2-service-instance-group-group1] remote-backup-service natbras
[*BRAS2-service-instance-group-group1] commit
[~BRAS2-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] nat instance nat id 1
[~BRAS2-nat-instance-nat] service-instance-group group1
[*BRAS2-nat-instance-nat] commit
[~BRAS2-nat-instance-nat] quit
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
acl number 3001
rule 10 permit ip source 192.168.0.0 0.0.255.255
#
acl number 6001
rule 1 permit ip source user-group natbras
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
service-ha hot-backup enable
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
service-location 1
location slot 9
vrrp vrid 1 interface Global-VE1.1
remote-backup interface Global-VE1.1 peer 10.1.1.2
#
service-instance-group group1
service-location 1
remote-backup-service natbras
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
#
bfd
#
mpls lsr-id 1.1.1.2
#
mpls
#
mpls l2vpn
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer peer-bas
remote-ip 1.1.1.1
#
user-group natbras
#
ip pool natbras bas local
#
interface GigabitEthernet0/1/3
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
ospf 1
area 0.0.0.0
area 1
network 10.3.1.0 0.0.0.255
network 1.1.1.2 0.0.0.0
#
return
● CR configuration file
#
sysname CR
#
mpls lsr-id 1.1.1.3
#
mpls
#
mpls ldp
#
interface GigabitEthernet0/1/3
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/1/4
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
ospf 1
area 0.0.0.0
network 10.2.1.0 0.0.0.255
area 0.0.0.1
network 10.3.1.0 0.0.0.255
#
return
?.4. Example for Configuring Distributed NAT444 Inter-Chassis Hot Backup (SRv6
Access Through Global VE Interfaces)
This section provides an example for configuring distributed NAT inter-chassis hot
backup in scenarios where global VE interfaces are used for SRv6 access.
Networking Requirements
In distributed deployment networking shown in Figure 1-120, NAT service boards
are installed in slot 9 on BRAS1 and slot 9 on BRAS2, respectively. A VRRP channel
is established between the two BRASs through global VE interfaces. The master/
backup status of the BRASs is determined by the VRRP protocol.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
No. Data
13 Names of the user group and user domain, and AAA schemes on the
devices at both ends
14 Remote backup identifiers for RUI backup on the devices at both ends
Procedure
Step 1 Enable IPv6 and configure IP addresses for interfaces.
# Configure BRAS1. The configurations of the P and BRAS2 are similar to that
of BRAS1. For configuration details, see the BRAS2 and P configuration files
in this section.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
[~BRAS1] interface gigabitethernet 0/1/1
[*BRAS1-GigabitEthernet0/1/1] ipv6 enable
[*BRAS1-GigabitEthernet0/1/1] ipv6 address 2001:DB8:10::1 96
[*BRAS1-GigabitEthernet0/1/1] quit
[*BRAS1] interface LoopBack 1
[*BRAS1-LoopBack1] ipv6 enable
[*BRAS1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*BRAS1-LoopBack1] ip address 10.1.2.1 255.255.255.255
[*BRAS1-LoopBack1] quit
[*BRAS1] interface gigabitethernet 0/1/2
[*BRAS1-GigabitEthernet0/1/2] ip address 10.1.1.1 24
[*BRAS1-GigabitEthernet0/1/2] quit
[*BRAS1] commit
Step 5 Configure session resources for the service boards and enable HA hot backup on
the devices.
# Configure BRAS1.
[~BRAS1] vsm on-board-mode disable
[*BRAS1] commit
[~BRAS1] license
[~BRAS1-license] active nat session-table size 6 slot 9
[*BRAS1-license] active nat bandwidth-enhance 40 slot 9
[*BRAS1-license] commit
[~BRAS1-license] quit
[~BRAS1] service-ha hot-backup enable
[*BRAS1] commit
# Configure BRAS2.
[~BRAS2] vsm on-board-mode disable
[*BRAS2] commit
[~BRAS2] license
[~BRAS2-license] active nat session-table size 6 slot 9
[*BRAS2-license] active nat bandwidth-enhance 40 slot 9
[*BRAS2-license] commit
[~BRAS2-license] quit
[~BRAS2] service-ha hot-backup enable
[*BRAS2] commit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command, and with
which parameters, based on the board type.
# Configure BRAS2.
[~BRAS2] evpn vpn-instance cgnrui bd-mode
[*BRAS2-evpn-instance-cgnrui] route-distinguisher 1:9
[*BRAS2-evpn-instance-cgnrui] segment-routing ipv6 best-effort
[*BRAS2-evpn-instance-cgnrui] segment-routing ipv6 locator as1
[*BRAS2-evpn-instance-cgnrui] vpn-target 9:10 export-extcommunity
[*BRAS2-evpn-instance-cgnrui] vpn-target 9:10 import-extcommunity
[*BRAS2-evpn-instance-cgnrui] commit
[~BRAS2-evpn-instance-cgnrui] quit
[~BRAS2] bridge-domain 10
[*BRAS2-bd10] evpn binding vpn-instance cgnrui
[*BRAS2-bd10] commit
[~BRAS2-bd10] quit
# Configure BRAS2.
[~BRAS2] interface Global-VE0
Step 8 Create a service-location group on the master and backup devices, add members
to the group, and configure a VRRP channel.
# Configure BRAS1.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] location slot 9
[*BRAS1-service-location-1] remote-backup interface Global-VE0.1 peer 10.10.1.2
[*BRAS1-service-location-1] vrrp vrid 1 interface Global-VE0.1
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
# Configure BRAS2.
[~BRAS2] service-location 1
[*BRAS2-service-location-1] location slot 9
[*BRAS2-service-location-1] remote-backup interface Global-VE0.1 peer 10.10.1.1
[*BRAS2-service-location-1] vrrp vrid 1 interface Global-VE0.1
[*BRAS2-service-location-1] commit
[~BRAS2-service-location-1] quit
# Configure BRAS2.
[~BRAS2] interface Global-VE0.1
[~BRAS2-Global-VE0.1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS2-Global-VE0.1] commit
[~BRAS2-Global-VE0.1] quit
Step 10 Configure RBSs, create service-instance groups, and bind the service-instance
groups to the service-location groups.
# Configure BRAS1.
[~BRAS1] remote-backup-service rui
[*BRAS1-rm-backup-srv-rui] peer 10.1.2.2 source 10.1.2.1 port 6001
[*BRAS1-rm-backup-srv-rui] protect lsp-tunnel for-all-instance peer-ip 10.1.2.2
[*BRAS1-rm-backup-srv-rui] commit
[~BRAS1-rm-backup-srv-rui] quit
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] remote-backup-service rui
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] remote-backup-service rui
[*BRAS2-rm-backup-srv-rui] peer 10.1.2.1 source 10.1.2.2 port 6001
[*BRAS2-rm-backup-srv-rui] protect lsp-tunnel for-all-instance peer-ip 10.1.2.1
[*BRAS2-rm-backup-srv-rui] commit
[~BRAS2-rm-backup-srv-rui] quit
[~BRAS2] service-instance-group group1
[*BRAS2-service-instance-group-group1] service-location 1
[*BRAS2-service-instance-group-group1] remote-backup-service rui
[*BRAS2-service-instance-group-group1] commit
[~BRAS2-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] nat instance nat id 1
[*BRAS2-nat-instance-nat] commit
[~BRAS2-nat-instance-nat] quit
Step 12 Configure user information (user group named natbras, IP address pool named
natbras, user domain named natbras, and AAA) on BRAS1 and BRAS2 and bind
the user groups to the NAT instance named nat.
# Configure BRAS1.
[~BRAS1] user-group natbras
[~BRAS1] commit
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.0.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
[~BRAS1] remote-backup-service rui
[*BRAS1-rm-backup-srv-rui] ip-pool natbras
[*BRAS1-rm-backup-srv-rui] commit
[~BRAS1-rm-backup-srv-rui] quit
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS1-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server traffic-unit kbyte
[~BRAS1-radius-rd1] quit
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] commit
[~BRAS1-aaa-authen-auth1] quit
[~BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[*BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
# Configure BRAS2.
[~BRAS2] user-group natbras
[~BRAS2] commit
[~BRAS2] ip pool natbras bas local
[*BRAS2-ip-pool-natbras] gateway 192.168.0.1 255.255.0.0
[*BRAS2-ip-pool-natbras] commit
[~BRAS2-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS2-ip-pool-natbras] quit
[~BRAS2] remote-backup-service rui
[*BRAS2-rm-backup-srv-rui] ip-pool natbras
[*BRAS2-rm-backup-srv-rui] commit
[~BRAS2-rm-backup-srv-rui] quit
[~BRAS2] radius-server group rd1
[*BRAS2-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS2-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS2-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS2-radius-rd1] commit
[~BRAS2-radius-rd1] radius-server type plus11
[~BRAS2-radius-rd1] radius-server traffic-unit kbyte
[~BRAS2-radius-rd1] quit
[~BRAS2] aaa
[~BRAS2-aaa] authentication-scheme auth1
[*BRAS2-aaa-authen-auth1] authentication-mode radius
[*BRAS2-aaa-authen-auth1] commit
[~BRAS2-aaa-authen-auth1] quit
[~BRAS2-aaa] accounting-scheme acct1
[*BRAS2-aaa-accounting-acct1] accounting-mode radius
[*BRAS2-aaa-accounting-acct1] commit
[~BRAS2-aaa-accounting-acct1] quit
[~BRAS2-aaa] domain natbras
[*BRAS2-aaa-domain-natbras] authentication-scheme auth1
[*BRAS2-aaa-domain-natbras] accounting-scheme acct1
[*BRAS2-aaa-domain-natbras] radius-server group rd1
[*BRAS2-aaa-domain-natbras] commit
[~BRAS2-aaa-domain-natbras] ip-pool natbras
[~BRAS2-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS2-aaa-domain-natbras] quit
[~BRAS2-aaa] quit
Step 13 Configure traffic classification rules, NAT behaviors, and NAT traffic diversion
policies, and apply the NAT traffic diversion policies on the master and backup
devices.
# Configure a NAT traffic diversion policy on BRAS1.
1. Configure an ACL numbered 6001 and an ACL rule numbered 1.
[~BRAS1] acl 6001
[*BRAS1-acl-ucl-6001] rule 1 permit ip source user-group natbras
[*BRAS1-acl-ucl-6001] commit
[~BRAS1-acl-ucl-6001] quit
2. Configure ACL 3001 that is used for IP address assignment during user login.
[~BRAS1] acl 3001
[*BRAS1-acl4-advance-3001] rule 10 permit ip source 192.168.0.0 0.0.255.255
[*BRAS1-acl4-advance-3001] commit
[~BRAS1-acl4-advance-3001] quit
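ACL rules such as rule 10 permit ip source 192.168.0.0 0.0.255.255 use wildcard masks, where 0 bits must match exactly and 1 bits are ignored. The matching logic can be sketched in Python as follows (an illustrative model of the rule semantics, not device code):

```python
# Illustrative model of ACL wildcard-mask matching: a 0 bit in the wildcard
# must match the rule address exactly, a 1 bit is "don't care".
import ipaddress

def matches(addr: str, rule_addr: str, wildcard: str) -> bool:
    a = int(ipaddress.IPv4Address(addr))
    r = int(ipaddress.IPv4Address(rule_addr))
    w = int(ipaddress.IPv4Address(wildcard))
    m = ~w & 0xFFFFFFFF          # bits that must match (wildcard bit == 0)
    return (a & m) == (r & m)

# Rule 10 of ACL 3001: permit ip source 192.168.0.0 0.0.255.255
print(matches("192.168.0.2", "192.168.0.0", "0.0.255.255"))    # True
print(matches("192.168.200.7", "192.168.0.0", "0.0.255.255"))  # True
print(matches("10.1.1.1", "192.168.0.0", "0.0.255.255"))       # False
```
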
[*BRAS1-classifier-c1] commit
[~BRAS1-classifier-c1] quit
5. Define a traffic policy to associate the traffic classifier with the traffic
behavior.
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
NOTE
Step 14 On each of the master and backup devices, configure a user-side VRRP group
(between BRAS1/BRAS2 and SWITCH) and enable it to track the service-location
group. If the service-location group is not tracked, a CGN board failure cannot
trigger a master/backup BRAS switchover. As a result, new distributed NAT users
cannot go online.
Step 15 Configure an RBP for backing up BRAS information on each of the devices.
# Configure BRAS2.
[~BRAS2] nat instance nat id 1
[~BRAS2-nat-instance-nat] service-instance-group group1
[*BRAS2-nat-instance-nat] commit
[~BRAS2-nat-instance-nat] quit
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
service-ha hot-backup enable
#
evpn vpn-instance cgnrui bd-mode
route-distinguisher 1:10
segment-routing ipv6 best-effort
segment-routing ipv6 locator as1
vpn-target 9:10 export-extcommunity
vpn-target 9:10 import-extcommunity
#
ip vpn-instance VPN1
vpn-id 100:100
ipv4-family
route-distinguisher 65060:12006
apply-label per-route
vpn-target 65060:1 export-extcommunity evpn
vpn-target 65060:1 export-extcommunity
vpn-target 65060:1 import-extcommunity evpn
vpn-target 65060:1 import-extcommunity
ipv6-family
route-distinguisher 65060:12006
apply-label per-route
vpn-target 65060:1 export-extcommunity evpn
vpn-target 65060:1 import-extcommunity evpn
#
service-location 1
location slot 9
remote-backup interface Global-VE0.1 peer 10.10.1.2
vrrp vrid 1 interface Global-VE0.1
#
service-instance-group group1
service-location 1
remote-backup-service rui
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound 3001 address-group address-group1
#
bridge-domain 10
evpn binding vpn-instance cgnrui
#
ip pool natbras bas local
gateway 192.168.0.1 255.255.0.0
section 0 192.168.0.2 192.168.0.254
#
user-group natbras
#
remote-backup-service rui
peer 10.1.2.2 source 10.1.2.1 port 6001
protect lsp-tunnel for-all-instance peer-ip 10.1.2.2
ip-pool natbras
#
remote-backup-profile natbras
service-type bras
backup-id 10 remote-backup-service natbras
peer-backup hot
vrrp-id 2 interface GigabitEthernet0/1/2
#
acl number 3001
rule 10 permit ip source 192.168.0.0 0.0.255.255
#
acl number 6001
rule 1 permit ip source user-group natbras
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
domain natbras
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool natbras
#
bgp 100
router-id 10.10.1.1
peer 10.2.2.2 as-number 100
peer 10.2.2.2 connect-interface LoopBack1
peer 2001:DB8:3::1 as-number 100
peer 2001:DB8:3::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
import-route direct
import-route unr
unicast-route recursive-lookup tunnel-v6 tunnel-selector srv6
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer
peer 10.2.2.2 enable
peer 2001:DB8:3::1 enable
#
ipv6-family unicast
undo synchronization
import-route direct
import-route unr
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer
peer 2001:DB8:3::1 enable
#
ipv4-family vpn-instance VPN1
maximum load-balancing 16
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 best-effort evpn
#
ipv6-family vpn-instance VPN1
maximum load-balancing 16
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 best-effort evpn
#
l2vpn-family evpn
policy vpn-target
peer 2001:DB8:3::1 enable
peer 2001:DB8:3::1 advertise encap-type srv6
#
return
● BRAS2 configuration file
#
sysname BRAS2
#
vsm on-board-mode disable
#
service-ha hot-backup enable
#
evpn vpn-instance cgnrui bd-mode
route-distinguisher 1:9
segment-routing ipv6 best-effort
segment-routing ipv6 locator as1
vpn-target 9:10 export-extcommunity
vpn-target 9:10 import-extcommunity
#
ip vpn-instance VPN1
vpn-id 100:100
ipv4-family
route-distinguisher 65060:12006
apply-label per-route
vpn-target 65060:1 export-extcommunity evpn
vpn-target 65060:1 export-extcommunity
vpn-target 65060:1 import-extcommunity evpn
vpn-target 65060:1 import-extcommunity
ipv6-family
route-distinguisher 65060:12006
apply-label per-route
ip-pool natbras
user-group natbras bind nat instance nat
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::1
locator as1 ipv6-prefix 2001:DB8:30:: 64 static 6 args 3
locator as2 ipv6-prefix 2001:DB8:31:: 64
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
segment-routing ipv6 locator as2
#
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/96
isis enable 1
isis ipv6 enable 1
dcn
#
interface GigabitEthernet0/1/2
undo shutdown
ip address 10.1.1.2 255.255.255.0
vrrp vrid 2 virtual-ip 10.1.1.3
admin-vrrp vrid 2
vrrp vrid 2 priority 120
vrrp vrid 2 track service-location 1
dcn
#
interface LoopBack1
ipv6 enable
ip address 10.1.2.2 255.255.255.255
ipv6 address 2001:DB8:3::1/64
isis enable 1
isis ipv6 enable 1
#
interface Global-VE0
ve-group 2 l3-access
#
interface Global-VE0.1
vlan-type dot1q 11
ip binding vpn-instance VPN1
ip address 10.10.1.2 255.255.255.0
vrrp vrid 1 virtual-ip 10.10.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
#
interface Global-VE1
ve-group 2 l2-terminate
#
interface Global-VE1.1 mode l2
encapsulation dot1q vid 11
bridge-domain 10
#
bgp 100
router-id 10.10.10.10
peer 10.2.2.3 as-number 100
peer 10.2.2.3 connect-interface LoopBack1
● P configuration file
#
sysname P
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator as1 ipv6-prefix 2001:DB8:20:: 64 static 6 args 3
locator as2 ipv6-prefix 2001:DB8:21:: 64 static 6
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/96
isis enable 1
isis ipv6 enable 1
dcn
#
interface GigabitEthernet0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/96
isis enable 1
isis ipv6 enable 1
dcn
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::1/64
isis ipv6 enable 1
#
return
?.5. Example for Configuring Inter-Chassis Hot Backup in a Distributed NAT Load
Balancing Scenario
This section provides an example for configuring inter-chassis hot backup in a
distributed NAT load balancing scenario.
Networking Requirements
In distributed networking, NAT service boards are installed in slot 9 of BRAS1 and
slot 9 of BRAS2 and each of them provides two CPUs to balance NAT traffic. A
VRRP channel is established between BRAS1 and BRAS2 through GE interfaces.
CPU0 and CPU1 of the service board in slot 9 on BRAS1 work together with CPU0
and CPU1 of the service board in slot 9 on BRAS2 to implement inter-chassis hot
backup. The NAT service's master/backup status is determined by VRRP, and the
service board status is associated with VRRP.
Context
The configuration roadmap is as follows:
1. Enable the license function for the NAT service boards on BRAS1 and BRAS2
and configure NAT session resources.
2. Create a NAT load balancing instance.
3. Configure HA hot backup, with NAT service information backed up between
BRAS1 and BRAS2.
4. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel between BRAS1 and BRAS2.
5. Create and configure a VRRP group.
6. Associate HA with VRRP.
7. Bind the service-location group to the VRRP group.
8. Create a service-instance group and an RBS, and bind the service-instance
group to the service-location group.
9. Bind the NAT instance to the service-instance group.
10. Configure a user-side VRRP group on BRAS1 and BRAS2 that are connected to
the switch.
11. Configure RUI to back up BRAS information.
12. Configure user information (user group, IP address pool, user domain, and
AAA), configure RADIUS authentication on the BRAS, and bind the user group
to the NAT instance.
13. Configure a user-side sub-interface.
14. Configure a NAT traffic diversion policy.
Data Preparation
To complete the configuration, you need the following data:
No. Data
2 Slot ID and CPU ID of the active CPU on the service board on BRAS1
3 Slot ID and CPU ID of the standby CPU on the service board on BRAS2
6 ID of a VRRP group
No. Data
13 User group names, user domain names and AAA schemes on BRAS1
and BRAS2
22 ID of the NAT address pool and name of the global static address pool
bound to the NAT address pool on BRAS1 and BRAS2
Procedure
Step 1 Configure interface IP addresses and basic routes on each device to ensure
reachability between the devices and the CR. For configuration details, see
"Configuration Files" in this section.
Step 2 Enable the license function for the NAT service boards on BRAS1 and BRAS2 and
configure NAT session resources.
# Configure BRAS1.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
[~BRAS1] vsm on-board-mode disable
[*BRAS1] commit
[~BRAS1] license
[~BRAS1-license] active nat session-table size 6 slot 9
[*BRAS1-license] active nat bandwidth-enhance 40 slot 9
[*BRAS1-license] commit
[~BRAS1-license] quit
# Configure BRAS2.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS2
[*HUAWEI] commit
[~BRAS2] vsm on-board-mode disable
[*BRAS2] commit
[~BRAS2] license
[~BRAS2-license] active nat session-table size 6 slot 9
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command, and with
which parameters, based on the board type.
# Configure BRAS1.
1. Configure a CGN global static address pool. (You must configure the slave
parameter. Otherwise, services will be affected.)
[~BRAS1] nat ip-pool pool1 slave
[*BRAS1-nat-ip-pool-pool1] section 0 11.11.11.1 mask 24
[*BRAS1-nat-ip-pool-pool1] nat-instance subnet length initial 25 extend 27
[*BRAS1-nat-ip-pool-pool1] nat-instance ip used-threshold upper-limit 60 lower-limit 40
[*BRAS1-nat-ip-pool-pool1] nat alarm ip threshold 60
[*BRAS1-nat-ip-pool-pool1] commit
[~BRAS1-nat-ip-pool-pool1] quit
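The subnet length initial 25 extend 27 parameters above can be read as block sizes: assuming the usual interpretation, a NAT instance first receives a /25 block (128 addresses) from the pool and later grows in /27 blocks (32 addresses), with the used-threshold values deciding when an extension is requested. A rough Python sketch of that arithmetic (the extension trigger is an assumption based on the configured thresholds):

```python
# Rough sketch of the address math behind
# "nat-instance subnet length initial 25 extend 27" (interpretation
# assumed from the command parameters, not vendor-confirmed behavior).

def block_size(prefix_len: int) -> int:
    """Number of IPv4 addresses in a block of the given prefix length."""
    return 2 ** (32 - prefix_len)

initial = block_size(25)   # 128 addresses in the initial /25 block
extend = block_size(27)    # 32 addresses per /27 extension block
print(initial, extend)     # 128 32

# With 'ip used-threshold upper-limit 60 lower-limit 40', an extension
# would be requested once more than 60% of the allocation is in use:
def needs_extension(used: int, allocated: int, upper_pct: int = 60) -> bool:
    return used * 100 > allocated * upper_pct

print(needs_extension(80, 128))  # True: 80/128 = 62.5% > 60%
print(needs_extension(70, 128))  # False: 70/128 = 54.7% <= 60%
```
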
# Configure BRAS2.
[~BRAS2] service-ha hot-backup enable
[*BRAS2] commit
# On BRAS1, enter the view of GE 0/1/1.1, create VRRP group 1, and set the
virtual IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an
mVRRP group, set BRAS1's priority in the VRRP group to 200, and set the VRRP
preemption delay to 1500s.
[~BRAS1] interface GigabitEthernet 0/1/1.1
[~BRAS1-GigabitEthernet0/1/1.1] vlan-type dot1q 2001
[*BRAS1-GigabitEthernet0/1/1.1] ip address 10.1.1.1 255.255.255.0
[*BRAS1-GigabitEthernet0/1/1.1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS1-GigabitEthernet0/1/1.1] admin-vrrp vrid 1 ignore-if-down
[*BRAS1-GigabitEthernet0/1/1.1] vrrp vrid 1 priority 200
[*BRAS1-GigabitEthernet0/1/1.1] vrrp vrid 1 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/1.1] vrrp recover-delay 20
[*BRAS1-GigabitEthernet0/1/1.1] commit
[~BRAS1-GigabitEthernet0/1/1.1] quit
# On BRAS1, enter the view of GE 0/1/1.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 10.1.2.3. Configure the VRRP group as an
mVRRP group, set BRAS1's priority in the VRRP group to 200, and set the VRRP
preemption delay to 1500s.
[~BRAS1] interface GigabitEthernet 0/1/1.2
[~BRAS1-GigabitEthernet0/1/1.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/1.2] ip address 10.1.2.1 255.255.255.0
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 virtual-ip 10.1.2.3
[*BRAS1-GigabitEthernet0/1/1.2] admin-vrrp vrid 2 ignore-if-down
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 priority 200
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/1.2] vrrp recover-delay 20
[*BRAS1-GigabitEthernet0/1/1.2] commit
[~BRAS1-GigabitEthernet0/1/1.2] quit
# On BRAS2, enter the view of GE 0/1/1.1, create VRRP group 1, and set the
virtual IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an
mVRRP group, and set BRAS2's priority in the VRRP group to 150.
[~BRAS2] interface GigabitEthernet 0/1/1.1
[~BRAS2-GigabitEthernet0/1/1.1] vlan-type dot1q 2001
[*BRAS2-GigabitEthernet0/1/1.1] ip address 10.1.1.2 255.255.255.0
[*BRAS2-GigabitEthernet0/1/1.1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS2-GigabitEthernet0/1/1.1] admin-vrrp vrid 1 ignore-if-down
[*BRAS2-GigabitEthernet0/1/1.1] vrrp vrid 1 priority 150
[*BRAS2-GigabitEthernet0/1/1.1] commit
[~BRAS2-GigabitEthernet0/1/1.1] quit
# On BRAS2, enter the view of GE 0/1/1.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 10.1.2.3. Configure the VRRP group as an
mVRRP group, and set BRAS2's priority in the VRRP group to 150.
[~BRAS2] interface GigabitEthernet 0/1/1.2
[~BRAS2-GigabitEthernet0/1/1.2] vlan-type dot1q 2002
[*BRAS2-GigabitEthernet0/1/1.2] ip address 10.1.2.2 255.255.255.0
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 virtual-ip 10.1.2.3
[*BRAS2-GigabitEthernet0/1/1.2] admin-vrrp vrid 2 ignore-if-down
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 priority 150
[*BRAS2-GigabitEthernet0/1/1.2] commit
[~BRAS2-GigabitEthernet0/1/1.2] quit
Step 6 Create a service-location group on BRAS1 and BRAS2, configure members for HA
dual-device inter-chassis backup, and configure a VRRP channel. Ensure that the
direct link between the master and backup devices is not interrupted. Otherwise, the
backup channel cannot be established.
NOTE
Service-location IDs and the number of service-location groups configured on the master
and backup devices must be the same. Otherwise, backup may fail, affecting services.
Step 7 Configure the VRRP groups on BRAS1 and BRAS2 to track the service-location groups.
# On BRAS1, enter the view of GE 0/1/1.1, and associate service-location group 1
with VRRP group 1.
[~BRAS1] interface GigabitEthernet 0/1/1.1
[~BRAS1-GigabitEthernet0/1/1.1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS1-GigabitEthernet0/1/1.1] commit
[~BRAS1-GigabitEthernet0/1/1.1] quit
# Run the display vrrp 1 and display vrrp 2 commands on BRAS1 and BRAS2,
respectively, to view the master/backup VRRP status, which reflects the master/
backup status of the service-location groups. State in the command output
indicates the BRAS status.
[~BRAS1] display vrrp 1
GigabitEthernet 0/1/1.1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
[~BRAS1] display vrrp 2
GigabitEthernet 0/1/1.2 | Virtual Router 2
State : Master
Virtual IP : 10.1.2.3
Master IP : 10.1.2.1
Local IP : 10.1.2.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 2 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that BRAS1 is the master device.
Step 8 Bind the service-location group to the VRRP group on BRAS1 and BRAS2.
# Bind service-location group 1 to VRRP group 1 on BRAS1.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/1/1.1
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
Step 9 Create a service-instance group on BRAS1 and BRAS2 and bind the service-
location groups to the service-instance groups.
# Configure BRAS1.
Create a service-instance group named group1 and bind it to service-location
groups 1 and 2.
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] service-location 2
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
# Configure BRAS2.
Create a service-instance group named group1 and bind it to service-location
groups 1 and 2.
[~BRAS2] service-instance-group group1
[*BRAS2-service-instance-group-group1] service-location 1
[*BRAS2-service-instance-group-group1] service-location 2
[*BRAS2-service-instance-group-group1] commit
[~BRAS2-service-instance-group-group1] quit
# Run the display service-location command on BRAS1 and BRAS2 to check HA information.
[~BRAS2] display service-location 1
service-location 1
Backup scene type: inter-box
Location slot ID: 9
Remote-backup interface: GigabitEthernet0/1/1.1
Peer: 10.1.1.1
Vrrp ID: 1
Vrrp bind interface: GigabitEthernet0/1/1.1
Vrrp state: slave
Bound service-instance-group number: 1
Batch-backup state: NA
[~BRAS2] display service-location 2
service-location 2
Backup scene type: inter-box
Location slot ID: 9
Remote-backup interface: GigabitEthernet0/1/1.2
Peer: 10.1.2.1
Vrrp ID: 2
Vrrp bind interface: GigabitEthernet0/1/1.2
Vrrp state: slave
Bound service-instance-group number: 1
Batch-backup state: NA
Step 10 Create a NAT instance on each of BRAS1 and BRAS2 and bind the instances to a
service-instance group.
# Create a NAT instance named nat on BRAS1 and bind it to the service-instance
group named group1.
[~BRAS1] nat instance nat id 1
[*BRAS1-nat-instance-nat] service-instance-group group1
[*BRAS1-nat-instance-nat] commit
[~BRAS1-nat-instance-nat] quit
# Create a NAT instance named nat on BRAS2 and bind it to the service-instance
group named group1.
[~BRAS2] nat instance nat id 1
[*BRAS2-nat-instance-nat] service-instance-group group1
[*BRAS2-nat-instance-nat] commit
[~BRAS2-nat-instance-nat] quit
# Run the display nat instance nat command on BRAS1 and BRAS2 to view NAT
configurations.
[~BRAS1] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
[~BRAS2] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
Step 11 On each of the master and backup devices, configure a user-side VRRP group and
configure it to track the service-location group. If the service-location group is not
tracked, a CGN board failure cannot trigger a master/backup BRAS switchover. As
a result, new distributed NAT users cannot go online.
# Configure BRAS1.
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/2.2] ip address 192.168.2.10 255.255.255.0
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 3 virtual-ip 192.168.2.200
[*BRAS1-GigabitEthernet0/1/2.2] admin-vrrp vrid 3
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 3 priority 150
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 3 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/2.2] vrrp recover-delay 20
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 3 track service-location 1 reduced 50
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
# Configure BRAS2.
[~BRAS2] interface GigabitEthernet0/1/2.2
[~BRAS2-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS2-GigabitEthernet0/1/2.2] ip address 192.168.2.100 255.255.255.0
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 3 virtual-ip 192.168.2.200
[*BRAS2-GigabitEthernet0/1/2.2] admin-vrrp vrid 3
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 3 priority 120
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 3 track service-location 1 reduced 50
[*BRAS2-GigabitEthernet0/1/2.2] commit
[~BRAS2-GigabitEthernet0/1/2.2] quit
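The switchover arithmetic that Step 11 relies on can be checked with a short sketch. This is plain Python for illustration only, not device code; the priorities (150/120) and the reduced value (50) are the ones configured above:

```python
# Illustrative sketch: how "vrrp vrid 3 track service-location 1 reduced 50"
# changes the effective VRRP priority and triggers a master/backup switchover.

def effective_priority(configured: int, track_ok: bool, reduced: int) -> int:
    """Priority used in the election: lowered only while the tracked
    service-location group (i.e. the CGN board) is down."""
    return configured if track_ok else configured - reduced

# User-side VRRP group 3: BRAS1 priority 150, BRAS2 priority 120, reduced 50.
# BRAS1 stays master while its CGN board is healthy (150 > 120).
assert effective_priority(150, True, 50) > effective_priority(120, True, 50)

# CGN board failure on BRAS1: service-location 1 goes down, 150 - 50 = 100.
# BRAS2 (120) now wins the election, so new NAT users come online on BRAS2.
assert effective_priority(150, False, 50) < effective_priority(120, True, 50)
```

This also shows why omitting the track command prevents the switchover: without the reduction, BRAS1 would keep priority 150 even with a failed CGN board.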
Step 12 Configure user information (user group named natbras, IP address pool named
natbras, user domain named natbras, and AAA) and bind the user group to the
NAT instance named nat on each of the master and backup devices.
# Configure BRAS1.
[~BRAS1] user-group natbras
[~BRAS1] commit
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS1-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server traffic-unit kbyte
[~BRAS1-radius-rd1] quit
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] commit
[~BRAS1-aaa-authen-auth1] quit
[~BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[*BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
[*BRAS1-aaa-domain-natbras] radius-server group rd1
[*BRAS1-aaa-domain-natbras] commit
[~BRAS1-aaa-domain-natbras] ip-pool natbras
[~BRAS1-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS1-aaa-domain-natbras] quit
[~BRAS1-aaa] quit
# Configure BRAS2.
[~BRAS2] user-group natbras
[~BRAS2] commit
[~BRAS2] ip pool natbras bas local
[*BRAS2-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS2-ip-pool-natbras] commit
[~BRAS2-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS2-ip-pool-natbras] quit
[~BRAS2] radius-server group rd1
[*BRAS2-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS2-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS2-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS2-radius-rd1] commit
[~BRAS2-radius-rd1] radius-server type plus11
[~BRAS2-radius-rd1] radius-server traffic-unit kbyte
[~BRAS2-radius-rd1] quit
[~BRAS2] aaa
[~BRAS2-aaa] authentication-scheme auth1
[*BRAS2-aaa-authen-auth1] authentication-mode radius
[*BRAS2-aaa-authen-auth1] commit
[~BRAS2-aaa-authen-auth1] quit
[~BRAS2-aaa] accounting-scheme acct1
[*BRAS2-aaa-accounting-acct1] accounting-mode radius
[*BRAS2-aaa-accounting-acct1] commit
[~BRAS2-aaa-accounting-acct1] quit
[~BRAS2-aaa] domain natbras
[*BRAS2-aaa-domain-natbras] authentication-scheme auth1
[*BRAS2-aaa-domain-natbras] accounting-scheme acct1
[*BRAS2-aaa-domain-natbras] radius-server group rd1
[*BRAS2-aaa-domain-natbras] commit
[~BRAS2-aaa-domain-natbras] ip-pool natbras
[~BRAS2-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS2-aaa-domain-natbras] quit
[~BRAS2-aaa] quit
Step 13 Configure an RBS and bind the RBS to the service-instance group on each device.
1. Configure an RBS on BRAS1 and BRAS2.
# Configure an RBS on BRAS1.
[~BRAS1] remote-backup-service natbras
[*BRAS1-rm-backup-srv-natbras] peer 10.1.1.2 source 10.1.1.1 port 7000
[*BRAS1-rm-backup-srv-natbras] protect redirect ip-nexthop 10.1.1.2 interface
GigabitEthernet0/1/1.1
[*BRAS1-rm-backup-srv-natbras] ip-pool natbras
[*BRAS1-rm-backup-srv-natbras] commit
[~BRAS1-rm-backup-srv-natbras] quit
# Configure an RBS on BRAS2.
[~BRAS2] remote-backup-service natbras
[*BRAS2-rm-backup-srv-natbras] peer 10.1.1.1 source 10.1.1.2 port 7000
[*BRAS2-rm-backup-srv-natbras] protect redirect ip-nexthop 10.1.1.1 interface
GigabitEthernet0/1/1.1
[*BRAS2-rm-backup-srv-natbras] ip-pool natbras
[*BRAS2-rm-backup-srv-natbras] commit
[~BRAS2-rm-backup-srv-natbras] quit
2. Bind the RBS to the service-instance group on each device. (The RBS must be
bound to the service-instance group. Otherwise, traffic is transmitted over the
inter-chassis backup channel during a master/backup service switchover,
affecting services.)
# Configure BRAS1.
[~BRAS1] service-instance-group group1
[~BRAS1-service-instance-group-group1] remote-backup-service natbras
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] service-instance-group group1
[~BRAS2-service-instance-group-group1] remote-backup-service natbras
[*BRAS2-service-instance-group-group1] commit
[~BRAS2-service-instance-group-group1] quit
# Configure BRAS2.
[~BRAS2] interface GigabitEthernet0/1/2.10
[*BRAS2-GigabitEthernet0/1/2.10] commit
[~BRAS2-GigabitEthernet0/1/2.10] user-vlan 2010
[~BRAS2-GigabitEthernet0/1/2.10-user-vlan-2010] quit
[~BRAS2-GigabitEthernet0/1/2.10] remote-backup-profile natbras
[*BRAS2-GigabitEthernet0/1/2.10] commit
[~BRAS2-GigabitEthernet0/1/2.10] bas
[~BRAS2-GigabitEthernet0/1/2.10-bas] access-type layer2-subscriber default-domain authentication
natbras
[*BRAS2-GigabitEthernet0/1/2.10-bas] commit
[~BRAS2-GigabitEthernet0/1/2.10-bas] quit
[~BRAS2-GigabitEthernet0/1/2.10] quit
Step 16 Configure a traffic classification rule, a traffic behavior, and a NAT traffic
diversion policy, and then apply the policy.
1. Configure a NAT traffic diversion policy.
a. Configure an ACL numbered 6001 and an ACL rule numbered 1.
[~BRAS1] acl 6001
[*BRAS1-acl-ucl-6001] rule 1 permit ip source user-group natbras
[*BRAS1-acl-ucl-6001] commit
[~BRAS1-acl-ucl-6001] quit
b. Configure an ACL numbered 3001 and an ACL rule numbered 1.
[~BRAS1] acl 3001
[*BRAS1-acl4-advance-3001] rule 1 permit ip source 192.168.0.0 0.0.255.255
[*BRAS1-acl4-advance-3001] commit
[~BRAS1-acl4-advance-3001] quit
c. Configure a traffic classifier.
[~BRAS1] traffic classifier c1
[*BRAS1-classifier-c1] if-match acl 6001
[*BRAS1-classifier-c1] commit
[~BRAS1-classifier-c1] quit
d. Configure a traffic behavior.
[~BRAS1] traffic behavior b1
[*BRAS1-behavior-b1] nat bind instance nat
[*BRAS1-behavior-b1] commit
[~BRAS1-behavior-b1] quit
e. Define a traffic policy to associate the traffic classifier with the traffic
behavior.
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
f. Apply the NAT traffic diversion policy in the system view. In VS mode, the
traffic-policy command is supported only by the admin VS.
[~BRAS1] traffic-policy p1 inbound
[*BRAS1] commit
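Functionally, the ACL/classifier/behavior/policy chain above amounts to a match-and-divert rule. A minimal sketch in Python (a hypothetical model, not device behavior; the packet fields are invented for illustration):

```python
# Illustrative model of the chain: acl 6001 -> classifier c1 -> behavior b1 -> policy p1.

def matches_acl_6001(pkt: dict) -> bool:
    # rule 1 permit ip source user-group natbras
    return pkt.get("user_group") == "natbras"

def apply_policy_p1(pkt: dict) -> str:
    # classifier c1 (if-match acl 6001) bound to behavior b1 (nat bind instance nat)
    if matches_acl_6001(pkt):
        return "divert to NAT instance 'nat'"
    return "forward normally"

print(apply_policy_p1({"user_group": "natbras", "src": "192.168.0.5"}))
print(apply_policy_p1({"user_group": "other", "src": "10.9.9.9"}))
```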
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
service-ha hot-backup enable
service-ha delay-time 10
#
user-group natbras
#
acl number 6001
rule 1 permit ip source user-group natbras
#
acl number 3001
rule 1 permit ip source 192.168.0.0 0.0.255.255
#
nat ip-pool pool1
section 0 11.11.11.1 mask 24
nat-instance subnet length initial 25 extend 27
nat-instance ip used-threshold upper-limit 60 lower-limit 40
nat alarm ip threshold 60
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1.1
remote-backup interface GigabitEthernet 0/1/1.1 peer 10.1.1.2
#
service-location 2
location slot 9
vrrp vrid 2 interface GigabitEthernet 0/1/1.2
remote-backup interface GigabitEthernet 0/1/1.2 peer 10.1.2.2
#
service-instance-group group1
service-location 1
service-location 2
remote-backup-service natbras
#
nat instance nat id 1
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound 3001 address-group group1
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
ip pool natbras bas local
gateway 192.168.0.1 255.255.255.0
section 0 192.168.0.2 192.168.0.254
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
domain natbras
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool natbras
user-group natbras bind nat instance nat
#
remote-backup-service natbras
peer 10.1.1.2 source 10.1.1.1 port 7000
protect redirect ip-nexthop 10.1.1.2 interface GigabitEthernet0/1/1.1
ip-pool natbras
#
remote-backup-profile natbras
service-type bras
backup-id 10 remote-backup-service natbras
peer-backup hot
vrrp-id 3 interface GigabitEthernet0/1/2.2
#
interface GigabitEthernet0/1/1.1
undo shutdown
vlan-type dot1q 2001
ip address 10.1.1.1 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 track service-location 1 reduced 60
vrrp vrid 1 preempt-mode timer delay 1500
vrrp recover-delay 20
#
interface GigabitEthernet0/1/1.2
undo shutdown
vlan-type dot1q 2002
ip address 10.1.2.1 255.255.255.0
vrrp vrid 2 virtual-ip 10.1.2.3
admin-vrrp vrid 2 ignore-if-down
vrrp vrid 2 priority 200
vrrp vrid 2 track service-location 2 reduced 60
vrrp vrid 2 preempt-mode timer delay 1500
vrrp recover-delay 20
#
● BRAS2 configuration file
#
sysname BRAS2
#
service-instance-group group1
service-location 1
service-location 2
remote-backup-service natbras
#
nat instance nat id 1
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound 3001 address-group group1
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
ip pool natbras bas local
gateway 192.168.0.1 255.255.255.0
section 0 192.168.0.2 192.168.0.254
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
domain natbras
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool natbras
user-group natbras bind nat instance nat
#
remote-backup-service natbras
peer 10.1.1.1 source 10.1.1.2 port 7000
protect redirect ip-nexthop 10.1.1.1 interface GigabitEthernet0/1/1.1
ip-pool natbras
#
remote-backup-profile natbras
service-type bras
backup-id 10 remote-backup-service natbras
peer-backup hot
vrrp-id 3 interface GigabitEthernet0/1/2.2
#
interface GigabitEthernet0/1/1.1
undo shutdown
vlan-type dot1q 2001
ip address 10.1.1.2 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
#
interface GigabitEthernet0/1/1.2
undo shutdown
vlan-type dot1q 2002
ip address 10.1.2.2 255.255.255.0
admin-vrrp vrid 2 ignore-if-down
vrrp vrid 2 virtual-ip 10.1.2.3
vrrp vrid 2 priority 150
vrrp vrid 2 track service-location 2 reduced 60
#
interface GigabitEthernet0/1/2.10
user-vlan 2010
remote-backup-profile natbras
bas
access-type layer2-subscriber default-domain authentication natbras
#
interface GigabitEthernet0/1/2.2
vlan-type dot1q 2002
ip address 192.168.2.100 255.255.255.0
vrrp vrid 3 virtual-ip 192.168.2.200
admin-vrrp vrid 3
vrrp vrid 3 priority 120
vrrp vrid 3 track service-location 1 reduced 50
#
return
Example for Configuring Centralized NAT Load Balancing Plus HA Inter-Chassis
Hot Backup
This section provides an example for configuring centralized NAT load balancing
plus HA inter-chassis hot backup.
Networking Requirements
In the centralized networking scenario shown in Figure 1-122, CGN1 and CGN2 are
deployed close to the CR on the MAN core as standalone devices, and a NAT
service board is installed in slot 9 on each CGN device. Load balancing needs to be
implemented between the different CPUs on the NAT service boards on both CGN
devices. A VRRP channel also needs to be established between the two devices
through GE interfaces. In addition, the master/backup states of the CGN devices
need to be determined through VRRP, and the states of service CPUs need to be
consistent with the VRRP states.
Figure 1-122 Networking for configuring centralized NAT load balancing plus HA
inter-chassis hot backup
Configuration Roadmap
The configuration roadmap is as follows:
1. Set the number of sessions supported by the service board in slot 9 to 6M.
2. Enable HA hot backup.
3. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel.
4. Create and configure VRRP groups.
5. Associate HA with the VRRP groups.
6. Bind the service-location groups to the VRRP groups.
7. Create service-instance groups and bind them to the service-location groups.
8. Create NAT instances and bind them to the service-instance groups.
9. Configure NAT traffic diversion policies and NAT conversion policies.
10. Import the desired default route.
11. Configure a route-policy on the user-side device.
Data Preparation
To complete the configuration, you need the following data.
No. Data
12 IDs of the NAT address pools and names of the global static address
pools to be bound to the NAT address pools on the master and
backup devices
14 User-side route-policy
Procedure
Step 1 Configure IP addresses for device interfaces. For configuration details, see
Configuration Files in this section.
Step 2 Set the number of sessions supported by the service boards in slot 9 to 6M on the
master and backup devices.
# Configure CGN1.
<HUAWEI> system-view
[~HUAWEI] sysname CGN1
[*HUAWEI] commit
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[~CGN1-license] active nat session-table size 6 slot 9
[*CGN1-license] active nat bandwidth-enhance 100 slot 9
[*CGN1-license] commit
[~CGN1-license] quit
# Configure CGN2.
<HUAWEI> system-view
[~HUAWEI] sysname CGN2
[*HUAWEI] commit
[~CGN2] vsm on-board-mode disable
[*CGN2] commit
[~CGN2] license
[~CGN2-license] active nat session-table size 6 slot 9
[*CGN2-license] active nat bandwidth-enhance 100 slot 9
[*CGN2-license] commit
[~CGN2-license] quit
Step 3 Enable HA hot backup on the master and backup devices.
# Configure CGN1.
[~CGN1] service-ha hot-backup enable
[*CGN1] commit
# Configure CGN2.
[~CGN2] service-ha hot-backup enable
[*CGN2] commit
Step 4 Create and configure VRRP groups on the master and backup devices.
# On CGN1, enter the GE 0/2/1.1 interface view, create VRRP group 1, and set the
virtual IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an
mVRRP group, set CGN1's priority in the VRRP group to 200, and set the VRRP
preemption delay to 1500s.
[~CGN1] interface GigabitEthernet 0/2/1.1
[~CGN1-GigabitEthernet0/2/1.1] vlan-type dot1q 2001
[*CGN1-GigabitEthernet0/2/1.1] ip address 10.1.1.1 255.255.255.0
[*CGN1-GigabitEthernet0/2/1.1] vrrp vrid 1 virtual-ip 10.1.1.3
[*CGN1-GigabitEthernet0/2/1.1] admin-vrrp vrid 1 ignore-if-down
[*CGN1-GigabitEthernet0/2/1.1] vrrp vrid 1 priority 200
[*CGN1-GigabitEthernet0/2/1.1] vrrp vrid 1 preempt-mode timer delay 1500
[*CGN1-GigabitEthernet0/2/1.1] vrrp recover-delay 20
[*CGN1-GigabitEthernet0/2/1.1] commit
[~CGN1-GigabitEthernet0/2/1.1] quit
# On CGN1, enter the GE 0/2/1.2 interface view, create VRRP group 2, and set the
virtual IP address of the VRRP group to 10.1.2.3. Configure the VRRP group as an
mVRRP group, set CGN1's priority in the VRRP group to 200, and set the VRRP
preemption delay to 1500s.
[~CGN1] interface GigabitEthernet 0/2/1.2
[~CGN1-GigabitEthernet0/2/1.2] vlan-type dot1q 2002
[*CGN1-GigabitEthernet0/2/1.2] ip address 10.1.2.1 255.255.255.0
[*CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 virtual-ip 10.1.2.3
[*CGN1-GigabitEthernet0/2/1.2] admin-vrrp vrid 2 ignore-if-down
[*CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 priority 200
[*CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 preempt-mode timer delay 1500
[*CGN1-GigabitEthernet0/2/1.2] vrrp recover-delay 20
[*CGN1-GigabitEthernet0/2/1.2] commit
[~CGN1-GigabitEthernet0/2/1.2] quit
# On CGN2, enter the GE 0/2/1.1 interface view, create VRRP group 1, and set the
virtual IP address of the VRRP group to 10.1.1.3. In addition, configure the VRRP
group as an mVRRP group and set CGN2's priority in the VRRP group to 150.
[~CGN2] interface GigabitEthernet 0/2/1.1
[~CGN2-GigabitEthernet0/2/1.1] vlan-type dot1q 2001
[*CGN2-GigabitEthernet0/2/1.1] ip address 10.1.1.2 255.255.255.0
[*CGN2-GigabitEthernet0/2/1.1] vrrp vrid 1 virtual-ip 10.1.1.3
[*CGN2-GigabitEthernet0/2/1.1] admin-vrrp vrid 1 ignore-if-down
[*CGN2-GigabitEthernet0/2/1.1] vrrp vrid 1 priority 150
[*CGN2-GigabitEthernet0/2/1.1] commit
[~CGN2-GigabitEthernet0/2/1.1] quit
# On CGN2, enter the GE 0/2/1.2 interface view, create VRRP group 2, and set the
virtual IP address of the VRRP group to 10.1.2.3. In addition, configure the VRRP
group as an mVRRP group and set CGN2's priority in the VRRP group to 150.
[~CGN2] interface GigabitEthernet 0/2/1.2
[~CGN2-GigabitEthernet0/2/1.2] vlan-type dot1q 2002
[*CGN2-GigabitEthernet0/2/1.2] ip address 10.1.2.2 255.255.255.0
[*CGN2-GigabitEthernet0/2/1.2] vrrp vrid 2 virtual-ip 10.1.2.3
[*CGN2-GigabitEthernet0/2/1.2] admin-vrrp vrid 2 ignore-if-down
[*CGN2-GigabitEthernet0/2/1.2] vrrp vrid 2 priority 150
[*CGN2-GigabitEthernet0/2/1.2] commit
[~CGN2-GigabitEthernet0/2/1.2] quit
Step 5 On the master and backup devices, create a service-location group, configure
members for HA dual-device inter-chassis backup, and configure a VRRP channel.
Ensure that the direct link between the master and backup devices is not interrupted.
Otherwise, the backup channel cannot be established.
NOTE
Service-location IDs and the number of service-location groups configured on the master
and backup devices must be the same. Otherwise, backup may fail, affecting services.
Step 6 Associate HA with the VRRP groups on the master and backup devices.
# On CGN1, enter the GE 0/2/1.2 interface view and associate HA with VRRP group 2.
[~CGN1] interface GigabitEthernet 0/2/1.2
[~CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 track service-location 2 reduced 60
[*CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 track interface GigabitEthernet 0/2/2 reduced 60
[*CGN1-GigabitEthernet0/2/1.2] vrrp vrid 2 track interface GigabitEthernet 0/2/3 reduced 60
[*CGN1-GigabitEthernet0/2/1.2] commit
[~CGN1-GigabitEthernet0/2/1.2] quit
# Run the display vrrp 1 and display vrrp 2 commands on both CGN1 and CGN2
to check the VRRP master/backup state. This state reflects the master/backup
state of the service-location groups on the CGN devices.
[~CGN1] display vrrp 1
GigabitEthernet 0/2/1.1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
[~CGN1] display vrrp 2
GigabitEthernet 0/2/1.2 | Virtual Router 2
State : Master
Virtual IP : 10.1.2.3
Master IP : 10.1.2.1
Local IP : 10.1.2.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
[~CGN2] display vrrp 1
GigabitEthernet 0/2/1.1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
[~CGN2] display vrrp 2
GigabitEthernet 0/2/1.2 | Virtual Router 2
State : Backup
Virtual IP : 10.1.2.3
Master IP : 10.1.2.1
Local IP : 10.1.2.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
Step 7 Bind the service-location groups to the corresponding VRRP groups on the master
and backup devices.
# Bind service-location group 2 to VRRP group 2 on CGN2.
[~CGN2] service-location 2
[*CGN2-service-location-2] vrrp vrid 2 interface GigabitEthernet 0/2/1.2
[*CGN2-service-location-2] commit
[~CGN2-service-location-2] quit
Step 8 Create a service-instance group on each of the master and backup devices and
bind the service-instance group to the service-location group. (You must bind a
service-instance group to an RBS. Otherwise, traffic is transmitted over the backup
channel during a master/backup device switchover, affecting services.)
# Configure CGN1.
1. Configure an RBS named rbs.
[~CGN1] remote-backup-service rbs
[*CGN1-rm-backup-srv-rbs] peer 10.1.1.2 source 10.1.1.1 port 7000
[*CGN1-rm-backup-srv-rbs] commit
[~CGN1-rm-backup-srv-rbs] quit
2. Create a service-instance group named group1, and bind it to service-location
groups 1 and 2, and to RBS rbs.
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] service-location 1
[*CGN1-service-instance-group-group1] service-location 2
[*CGN1-service-instance-group-group1] remote-backup-service rbs
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit
# Configure CGN2.
1. Configure an RBS named rbs.
[~CGN2] remote-backup-service rbs
[*CGN2-rm-backup-srv-rbs] peer 10.1.1.1 source 10.1.1.2 port 7000
[*CGN2-rm-backup-srv-rbs] commit
[~CGN2-rm-backup-srv-rbs] quit
2. Create a service-instance group named group1, and bind the service-instance
group to service-location groups 1 and 2, and to RBS rbs.
[~CGN2] service-instance-group group1
[*CGN2-service-instance-group-group1] service-location 1
[*CGN2-service-instance-group-group1] service-location 2
[*CGN2-service-instance-group-group1] remote-backup-service rbs
[*CGN2-service-instance-group-group1] commit
[~CGN2-service-instance-group-group1] quit
# Run the display service-location command on the two CGN devices to check
HA information. In the command output, Vrrp state must be consistent with the
HA state, and Batch-backup state indicates whether batch backup is finished.
[~CGN1] display service-location 1
service-location 1
Backup scene type: inter-box
Location slot ID: 9
Remote-backup interface: GigabitEthernet0/2/1.1
Peer: 10.1.1.2
Vrrp ID: 1
Vrrp bind interface: GigabitEthernet0/2/1.1
Vrrp state: master
Bound service-instance-group number: 1
Batch-backup state: finished
[~CGN1] display service-location 2
service-location 2
Backup scene type: inter-box
Location slot ID: 9
Remote-backup interface: GigabitEthernet0/2/1.2
Peer: 10.1.2.2
Vrrp ID: 2
Vrrp bind interface: GigabitEthernet0/2/1.2
Vrrp state: master
Bound service-instance-group number: 1
Batch-backup state: finished
Step 9 Configure a CGN global static address pool on each of the master and backup devices.
# Configure CGN1.
[~CGN1] nat ip-pool pool1
[*CGN1-nat-ip-pool-pool1] section 0 11.11.11.1 mask 24
[*CGN1-nat-ip-pool-pool1] nat-instance subnet length initial 25 extend 27
[*CGN1-nat-ip-pool-pool1] commit
[~CGN1-nat-ip-pool-pool1] quit
# Configure CGN2.
Configure a CGN global static address pool. (You must configure the slave
parameter. Otherwise, services will be affected.)
[~CGN2] nat ip-pool pool1 slave
[*CGN2-nat-ip-pool-pool1] section 0 11.11.11.1 mask 24
[*CGN2-nat-ip-pool-pool1] nat-instance subnet length initial 25 extend 27
[*CGN2-nat-ip-pool-pool1] commit
[~CGN2-nat-ip-pool-pool1] quit
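One plausible reading of the subnet parameters (an assumption based on the command names, not a statement of the device's exact algorithm): each NAT instance initially draws a /25 block from the /24 pool and, once that is exhausted, grows in /27 increments. The arithmetic can be checked with Python's ipaddress module:

```python
# Sketch of the address carving implied by "section 0 11.11.11.1 mask 24" with
# "nat-instance subnet length initial 25 extend 27". Interpretation is an assumption.
import ipaddress

pool = ipaddress.ip_network("11.11.11.0/24")
initial = list(pool.subnets(new_prefix=25))       # possible /25 initial allocations
extensions = list(pool.subnets(new_prefix=27))    # possible /27 extension blocks

print([str(n) for n in initial])                  # two /25 blocks of 128 addresses
print(len(extensions), "x /27, each with", extensions[0].num_addresses, "addresses")
```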
Step 10 Bind the NAT instance to the service-instance group on the master and backup
devices.
# On CGN1, bind the NAT instance named nat to the service-instance group
named group1.
[~CGN1] nat instance nat id 1
[*CGN1-nat-instance-nat] port-range 4096
[*CGN1-nat-instance-nat] service-instance-group group1
[*CGN1-nat-instance-nat] commit
[~CGN1-nat-instance-nat] quit
# On CGN2, bind the NAT instance named nat to the service-instance group
named group1.
[~CGN2] nat instance nat id 1
[*CGN2-nat-instance-nat] port-range 4096
[*CGN2-nat-instance-nat] service-instance-group group1
[*CGN2-nat-instance-nat] commit
[~CGN2-nat-instance-nat] quit
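The port-range 4096 setting bounds how many subscribers one public address can serve. A back-of-envelope check (assuming, purely for illustration, that the 1024 well-known ports are excluded from allocation):

```python
# Rough capacity estimate for "port-range 4096": one port block per subscriber.
PORTS_PER_IP = 65536 - 1024   # assumed usable ports per public IPv4 address
BLOCK = 4096                  # port-range configured in the NAT instance

users_per_ip = PORTS_PER_IP // BLOCK
print(users_per_ip)  # 15 subscribers per public address under this assumption
```

A smaller port-range serves more subscribers per public address at the cost of fewer concurrent sessions per subscriber.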
Step 11 Configure a traffic classification rule, a traffic behavior, and a NAT traffic diversion
policy on the master and backup devices, and apply the NAT traffic diversion
policy.
# On CGN1, configure a NAT traffic diversion policy and a NAT conversion policy.
1. Configure an ACL traffic classification rule.
[~CGN1] acl 3001
[*CGN1-acl4-advance-3001] rule 1 permit ip source 10.0.0.0 0.0.0.255
[*CGN1-acl4-advance-3001] commit
[~CGN1-acl4-advance-3001] quit
2. Configure a traffic classifier.
[~CGN1] traffic classifier classifier1
[*CGN1-classifier-classifier1] if-match acl 3001
[*CGN1-classifier-classifier1] commit
[~CGN1-classifier-classifier1] quit
3. Configure a traffic behavior.
[~CGN1] traffic behavior behavior1
[*CGN1-behavior-behavior1] nat bind instance nat
[*CGN1-behavior-behavior1] commit
[~CGN1-behavior-behavior1] quit
4. Define a NAT traffic diversion policy to associate the traffic classifier with the
traffic behavior.
[~CGN1] traffic policy policy1
[*CGN1-trafficpolicy-policy1] classifier classifier1 behavior behavior1
[*CGN1-trafficpolicy-policy1] commit
[~CGN1-trafficpolicy-policy1] quit
5. Apply the NAT traffic diversion policy in the GE 0/2/2 interface view. In VS
mode, the traffic-policy command is supported only by the admin VS.
[~CGN1] interface GigabitEthernet 0/2/2
[*CGN1-GigabitEthernet0/2/2] ip address 10.1.6.1 24
[*CGN1-GigabitEthernet0/2/2] traffic-policy policy1 inbound
[*CGN1-GigabitEthernet0/2/2] commit
[~CGN1-GigabitEthernet0/2/2] quit
Run the display nat instance nat command on CGN1 and CGN2 to verify the NAT instance configuration.
[~CGN1] display nat instance nat
nat instance nat id 1
port-range 4096
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound any address-group group1
[~CGN2] display nat instance nat
nat instance nat id 1
port-range 4096
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound any address-group group1
Step 12 Configure the master and backup devices to establish a BGP peer relationship with
the user-side CE and import the default route advertised by the CR. Configure a
local-preference-based route-policy on the user-side CE to allow the CE to
preferentially select the default route from CGN1.
# Configure CGN1.
[~CGN1] bgp 200
[*CGN1-bgp] peer 10.2.2.2 as-number 200
[*CGN1-bgp] peer 10.2.2.2 connect-interface LoopBack0
[*CGN1-bgp] ipv4-family unicast
[*CGN1-bgp-af-ipv4] peer 10.2.2.2 default-route-advertise
[*CGN1-bgp-af-ipv4] commit
[~CGN1-bgp-af-ipv4] quit
[~CGN1-bgp] import-route unr
[*CGN1-bgp] commit
[~CGN1-bgp] quit
# Configure CGN2.
[~CGN2] bgp 200
[*CGN2-bgp] peer 10.2.2.2 as-number 200
[*CGN2-bgp] peer 10.2.2.2 connect-interface LoopBack0
[*CGN2-bgp] ipv4-family unicast
[*CGN2-bgp-af-ipv4] peer 10.2.2.2 default-route-advertise
[*CGN2-bgp-af-ipv4] commit
[~CGN2-bgp-af-ipv4] quit
[~CGN2-bgp] import-route unr
[*CGN2-bgp] commit
[~CGN2-bgp] quit
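The CE configuration file below applies a route-policy named local_pre on import from one CGN peer but does not show the policy's definition. A minimal sketch of such a local-preference-based route-policy follows; the local-preference value 200 is an assumption (any value above the default of 100 makes the routes learned from the corresponding CGN preferred):
[~CE] route-policy local_pre permit node 10
[*CE-route-policy] apply local-preference 200
[*CE-route-policy] commit
[~CE-route-policy] quit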
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
service-ha hot-backup enable
#
service-location 1
location slot 9
remote-backup interface GigabitEthernet 0/2/1.1 peer 10.1.1.2
vrrp vrid 1 interface GigabitEthernet 0/2/1.1
#
service-location 2
location slot 9
remote-backup interface GigabitEthernet 0/2/1.2 peer 10.1.2.2
vrrp vrid 2 interface GigabitEthernet 0/2/1.2
#
service-instance-group group1
service-location 1
service-location 2
remote-backup-service rbs
#
nat ip-pool pool1
nat-instance subnet length initial 25 extend 27
section 0 11.11.11.1 mask 24
#
nat instance nat id 1
port-range 4096
service-instance-group group1
nat address-group group1 group-id 1 bind-ip-pool pool1
nat outbound any address-group group1
#
remote-backup-service rbs
peer 10.1.1.2 source 10.1.1.1 port 7000
#
acl number 3001
rule 1 permit ip source 10.0.0.0 0.0.0.255
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 100 slot 9
#
interface GigabitEthernet0/2/1.1
undo shutdown
vlan-type dot1q 2001
ip address 10.1.1.1 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 track service-location 1 reduced 60
vrrp vrid 1 track interface GigabitEthernet0/2/2 reduced 60
vrrp vrid 1 track interface GigabitEthernet0/2/3 reduced 60
vrrp vrid 1 preempt-mode timer delay 1500
vrrp recover-delay 20
#
interface GigabitEthernet0/2/1.2
undo shutdown
vlan-type dot1q 2002
ip address 10.1.2.1 255.255.255.0
vrrp vrid 2 virtual-ip 10.1.2.3
#
● CGN2 configuration file
#
sysname CGN2
#
traffic classifier classifier1 operator or
if-match acl 3001 precedence 1
#
traffic behavior behavior1
nat bind instance nat
#
traffic policy policy1
share-mode
classifier classifier1 behavior behavior1 precedence 1
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 100 slot 9
#
interface GigabitEthernet0/2/1.1
undo shutdown
vlan-type dot1q 2001
ip address 10.1.1.2 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
vrrp vrid 1 track interface GigabitEthernet0/2/2 reduced 60
vrrp vrid 1 track interface GigabitEthernet0/2/3 reduced 60
#
interface GigabitEthernet0/2/1.2
undo shutdown
vlan-type dot1q 2002
ip address 10.1.2.2 255.255.255.0
vrrp vrid 2 virtual-ip 10.1.2.3
admin-vrrp vrid 2 ignore-if-down
vrrp vrid 2 priority 150
vrrp vrid 2 track service-location 2 reduced 60
vrrp vrid 2 track interface GigabitEthernet0/2/2 reduced 60
vrrp vrid 2 track interface GigabitEthernet0/2/3 reduced 60
#
interface GigabitEthernet0/2/2
undo shutdown
ip address 10.1.4.1 255.255.255.0
traffic-policy policy1 inbound
#
interface GigabitEthernet0/2/3
undo shutdown
ip address 10.1.5.1 255.255.255.0
#
interface LoopBack0
ip address 10.4.4.4 255.255.255.255
#
bgp 200
peer 10.2.2.2 as-number 200
peer 10.2.2.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route unr
peer 10.2.2.2 enable
peer 10.2.2.2 default-route-advertise
#
return
● CE configuration file
#
sysname CE
#
interface GigabitEthernet0/2/1
undo shutdown
ip address 10.11.11.2 255.255.255.0
#
interface GigabitEthernet0/2/2
undo shutdown
ip address 10.11.12.2 255.255.255.0
#
interface LoopBack0
ip address 10.2.2.2 255.255.255.255
#
bgp 200
peer 10.3.3.3 as-number 200
peer 10.3.3.3 connect-interface LoopBack0
peer 10.4.4.4 as-number 200
peer 10.4.4.4 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route unr
peer 10.3.3.3 enable
peer 10.3.3.3 route-policy local_pre import
peer 10.4.4.4 enable
#
return
?.7. Example for Configuring VPN over Centralized NAT Inter-chassis Hot Backup
(One to Many)
This section provides an example for configuring VPN over centralized NAT inter-
chassis HA backup in centralized deployment mode.
Networking Requirements
NOTE
Interface1, interface2, and interface3 in this example represent GE0/2/0, GE0/2/1, and GE0/3/0,
respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the license function on CGN devices and configure NAT session table
resources.
2. Configure a user-side VPN instance and a network-side VPN instance on each
CGN device and bind the VPN instances.
3. Enable HA hot backup.
4. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel.
5. Bind an RBS group to a service-instance group to implement NAT on the user-
side VPN.
6. Create a VRRP group and associate HA with VRRP.
7. Bind the service-location group to the VRRP group.
8. Create a service-instance group and bind the service-location group to the
service-instance group.
9. Create a NAT instance, enable the VPN NAT function, bind the NAT instance
to the service-instance group and allowed VPN instance, and configure an
address pool and a traffic conversion policy.
10. Configure a NAT traffic diversion policy.
Data Preparation
No. Data
4 Name of an RBS
7 ID of a VRRP group
Procedure
Step 1 Enable the license function on CGN devices and configure NAT session table
resources.
# Configure CGN1.
<HUAWEI> system-view
[~HUAWEI] sysname CGN1
[*HUAWEI] commit
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[~CGN1-license] active nat session-table size 6 slot 9
[*CGN1-license] active nat bandwidth-enhance 40 slot 9
[*CGN1-license] commit
[~CGN1-license] quit
# Configure CGN2.
<HUAWEI> system-view
[~HUAWEI] sysname CGN2
[*HUAWEI] commit
[~CGN2] vsm on-board-mode disable
[*CGN2] commit
[~CGN2] license
[~CGN2-license] active nat session-table size 6 slot 9
[*CGN2-license] active nat bandwidth-enhance 40 slot 9
[*CGN2-license] commit
[~CGN2-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 2 Configure a user-side VPN instance and a network-side VPN instance on each CGN
device and bind the VPN instances.
# Configure CGN1.
1. Configure user-side VPN instances named inside-vpn1 and inside-vpn2 on
CGN1.
[~CGN1] ip vpn-instance inside-vpn1
[*CGN1-vpn-instance-inside-vpn1] ipv4-family
[*CGN1-vpn-instance-inside-vpn1-af-ipv4] route-distinguisher 200:1
[*CGN1-vpn-instance-inside-vpn1-af-ipv4] vpn-target 101:101 export-extcommunity
[*CGN1-vpn-instance-inside-vpn1-af-ipv4] vpn-target 101:101 import-extcommunity
[*CGN1-vpn-instance-inside-vpn1-af-ipv4] commit
[~CGN1-vpn-instance-inside-vpn1-af-ipv4] quit
[~CGN1-vpn-instance-inside-vpn1] quit
[~CGN1] ip vpn-instance inside-vpn2
[*CGN1-vpn-instance-inside-vpn2] ipv4-family
[*CGN1-vpn-instance-inside-vpn2-af-ipv4] route-distinguisher 200:2
[*CGN1-vpn-instance-inside-vpn2-af-ipv4] vpn-target 102:102 export-extcommunity
[*CGN1-vpn-instance-inside-vpn2-af-ipv4] vpn-target 102:102 import-extcommunity
[*CGN1-vpn-instance-inside-vpn2-af-ipv4] commit
[~CGN1-vpn-instance-inside-vpn2-af-ipv4] quit
[~CGN1-vpn-instance-inside-vpn2] quit
2. Configure a network-side VPN instance named outside-vpn on CGN1.
[~CGN1] ip vpn-instance outside-vpn
[*CGN1-vpn-instance-outside-vpn] ipv4-family
[*CGN1-vpn-instance-outside-vpn-af-ipv4] route-distinguisher 200:3
[*CGN1-vpn-instance-outside-vpn-af-ipv4] vpn-target 103:103 export-extcommunity
[*CGN1-vpn-instance-outside-vpn-af-ipv4] vpn-target 103:103 import-extcommunity
[*CGN1-vpn-instance-outside-vpn-af-ipv4] commit
[~CGN1-vpn-instance-outside-vpn-af-ipv4] quit
[~CGN1-vpn-instance-outside-vpn] quit
3. On CGN1, bind the private VPN instances on the user side.
[~CGN1] interface GigabitEthernet 0/2/1.1
[*CGN1-GigabitEthernet0/2/1.1] vlan-type dot1q 200
[*CGN1-GigabitEthernet0/2/1.1] ip binding vpn-instance inside-vpn1
[*CGN1-GigabitEthernet0/2/1.1] ip address 172.16.10.1 24
[*CGN1-GigabitEthernet0/2/1.1] commit
[~CGN1-GigabitEthernet0/2/1.1] quit
[~CGN1] interface GigabitEthernet 0/2/1.2
[*CGN1-GigabitEthernet0/2/1.2] vlan-type dot1q 202
[*CGN1-GigabitEthernet0/2/1.2] ip binding vpn-instance inside-vpn2
[*CGN1-GigabitEthernet0/2/1.2] ip address 172.16.11.1 24
[*CGN1-GigabitEthernet0/2/1.2] commit
[~CGN1-GigabitEthernet0/2/1.2] quit
# Configure CGN2.
1. Configure user-side VPN instances named inside-vpn1 and inside-vpn2 on
CGN2.
[~CGN2] ip vpn-instance inside-vpn1
[*CGN2-vpn-instance-inside-vpn1] ipv4-family
[*CGN2-vpn-instance-inside-vpn1-af-ipv4] route-distinguisher 200:1
[*CGN2-vpn-instance-inside-vpn1-af-ipv4] vpn-target 101:101 export-extcommunity
[*CGN2-vpn-instance-inside-vpn1-af-ipv4] vpn-target 101:101 import-extcommunity
[*CGN2-vpn-instance-inside-vpn1-af-ipv4] commit
[~CGN2-vpn-instance-inside-vpn1-af-ipv4] quit
[~CGN2-vpn-instance-inside-vpn1] quit
[~CGN2] ip vpn-instance inside-vpn2
[*CGN2-vpn-instance-inside-vpn2] ipv4-family
[*CGN2-vpn-instance-inside-vpn2-af-ipv4] route-distinguisher 200:2
[*CGN2-vpn-instance-inside-vpn2-af-ipv4] vpn-target 102:102 export-extcommunity
[*CGN2-vpn-instance-inside-vpn2-af-ipv4] vpn-target 102:102 import-extcommunity
[*CGN2-vpn-instance-inside-vpn2-af-ipv4] commit
[~CGN2-vpn-instance-inside-vpn2-af-ipv4] quit
[~CGN2-vpn-instance-inside-vpn2] quit
NOTE
# Create service-location group 1 on CGN1, add CPU 0 of the service board in slot
9 as an HA dual-device inter-chassis backup member, and set the local VRRP
outbound interface to GE0/2/0 and the peer IP address to 172.16.5.2.
[~CGN1] service-location 1
[*CGN1-service-location-1] location slot 9
[*CGN1-service-location-1] remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.2
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
# Create service-location group 1 on CGN2, add CPU 0 of the service board in slot
9 as an HA dual-device inter-chassis backup member, and set the local VRRP
outbound interface to GE 0/2/0 and the peer IP address to 172.16.5.1.
[~CGN2] service-location 1
[*CGN2-service-location-1] location slot 9
[*CGN2-service-location-1] remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.1
[*CGN2-service-location-1] commit
[~CGN2-service-location-1] quit
Step 5 Bind an RBS group to a service-instance group to implement NAT on the user-side
VPN.
NOTE
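The commands for this step are not shown above. Based on the CGN2 configuration file at the end of this example (RBS name vpn_nat, port 1024), the CGN1 side can be sketched as follows; the prompts are indicative, and the service-instance group group1 is the one created in Step 8:
[~CGN1] remote-backup-service vpn_nat
[*CGN1-rm-backup-srv-vpn_nat] peer 172.16.5.2 source 172.16.5.1 port 1024
[*CGN1-rm-backup-srv-vpn_nat] commit
[~CGN1-rm-backup-srv-vpn_nat] quit
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] remote-backup-service vpn_nat
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit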
Step 6 Create a VRRP group and configure HA and VRRP association. Set the VRRP
preemption delay to 1500s to ensure that the NAT information is backed up
completely. Set the VRRP recovery delay to 15s.
# Configure CGN1.
[~CGN1] interface GigabitEthernet 0/2/0
[~CGN1-GigabitEthernet0/2/0] ip address 172.16.5.1 24
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 virtual-ip 172.16.5.100
[*CGN1-GigabitEthernet0/2/0] admin-vrrp vrid 1 ignore-if-down
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 priority 200
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 preempt-mode timer delay 1500
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track service-location 1 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track interface GigabitEthernet 0/2/1.1 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track interface GigabitEthernet 0/2/1.2 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track interface GigabitEthernet 0/3/0 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp recover-delay 15
[*CGN1-GigabitEthernet0/2/0] commit
[~CGN1-GigabitEthernet0/2/0] quit
# Configure CGN2.
[~CGN2] interface GigabitEthernet 0/2/0
[~CGN2-GigabitEthernet0/2/0] ip address 172.16.5.2 24
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 virtual-ip 172.16.5.100
[*CGN2-GigabitEthernet0/2/0] admin-vrrp vrid 1 ignore-if-down
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 priority 150
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 track service-location 1 reduced 60
[*CGN2-GigabitEthernet0/2/0] vrrp recover-delay 15
[*CGN2-GigabitEthernet0/2/0] commit
[~CGN2-GigabitEthernet0/2/0] quit
After the configuration, the master/backup status of CGN1 and CGN2 can be
determined by VRRP. You can run the display vrrp 1 command to check the
master/backup VRRP status, which reflects the master/backup status of the
service-location groups on CGN1 and CGN2. State in the command output
indicates the CGN device's status.
# Configure CGN1.
[~CGN1] service-location 1
[~CGN1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/2/0
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
NOTE
Step 8 Create a service-instance group and bind the service-location group to the service-
instance group.
# Configure CGN1.
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] service-location 1
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit
NOTE
Run the display service-location 1 command on the CGN devices to check the HA
information. VRRP State in the command output indicates the status of the HA
backup group, which must be consistent with the CGN devices' VRRP status.
Batch-backup state in the command output indicates whether batch backup has
completed.
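A representative output on CGN1 (field values mirror the sample shown in the one-to-one example later in this document; actual values depend on your deployment):
[~CGN1] display service-location 1
service-location 1
Backup scene type : inter-box
Location slot ID : 9
Remote-backup interface: GigabitEthernet0/2/0
Peer: 172.16.5.2
Vrrp ID: 1
Vrrp bind interface: GigabitEthernet0/2/0
Vrrp state: master
Bound service-instance-group number: 1
Batch-backup state: finished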
Step 9 Create a NAT instance, enable the VPN NAT function, bind the NAT instance to
the service-instance group and allowed VPN instance, and configure an address
pool and a traffic conversion policy.
# Configure CGN1.
[~CGN1] nat instance nat id 1
[*CGN1-nat-instance-nat] port-range 64
[*CGN1-nat-instance-nat] service-instance-group group1
[*CGN1-nat-instance-nat] nat address-group group1 group-id 1 vpn-instance outside-vpn
[*CGN1-nat-instance-nat-nat-address-group-group1] section 0 11.11.11.0 mask 24
[*CGN1-nat-instance-nat-nat-address-group-group1] commit
[~CGN1-nat-instance-nat-nat-address-group-group1] quit
[~CGN1-nat-instance-nat] nat outbound any address-group group1
[*CGN1-nat-instance-nat] commit
[~CGN1-nat-instance-nat] quit
NOTE
The configuration on CGN2 is similar to that on CGN1. Pay attention to the following
points:
● Ensure that the VPN instance before NAT is the same as that bound to the user-side
inbound interface (GE0/2/1 on PE1).
● Ensure that the VPN instance after NAT is the same as that bound to the network-side
outbound interface (GE0/3/0 on PE2).
1. Configure an ACL traffic classification rule.
[~CGN1] acl 3001
[*CGN1-acl4-advance-3001] rule 5 permit ip source 10.1.1.0 0.0.0.255
[*CGN1-acl4-advance-3001] commit
[~CGN1-acl4-advance-3001] quit
2. Configure a traffic classifier named vpn_nat1 and reference ACL 3001.
[~CGN1] traffic classifier vpn_nat1 operator or
[*CGN1-classifier-vpn_nat1] if-match acl 3001 precedence 1
[*CGN1-classifier-vpn_nat1] commit
[~CGN1-classifier-vpn_nat1] quit
3. Configure a traffic behavior named vpn_nat1 and bind it to the NAT instance named
nat.
[~CGN1] traffic behavior vpn_nat1
[*CGN1-behavior-vpn_nat1] nat bind instance nat
[*CGN1-behavior-vpn_nat1] commit
[~CGN1-behavior-vpn_nat1] quit
4. Define a NAT traffic policy named vpn_nat1 to associate all ACL rules with
the traffic behaviors.
[~CGN1] traffic policy vpn_nat1
[*CGN1-trafficpolicy-vpn_nat1] classifier vpn_nat1 behavior vpn_nat1
[*CGN1-trafficpolicy-vpn_nat1] commit
[~CGN1-trafficpolicy-vpn_nat1] quit
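The application of the traffic policy is not shown in the steps above. Per the CGN1 configuration file below, the policy is applied in the inbound direction on the user-side subinterfaces; a sketch for GE0/2/1.1 follows (GE0/2/1.2 is configured in the same way):
[~CGN1] interface GigabitEthernet 0/2/1.1
[~CGN1-GigabitEthernet0/2/1.1] traffic-policy vpn_nat1 inbound
[*CGN1-GigabitEthernet0/2/1.1] commit
[~CGN1-GigabitEthernet0/2/1.1] quit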
NOTE
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
nat instance nat id 1
port-range 64
service-instance-group group1
nat address-group group1 group-id 1 vpn-instance outside-vpn
section 0 11.11.11.0 mask 24
nat outbound any address-group group1
#
acl number 3001
rule 5 permit ip source 10.1.1.0 0.0.0.255
#
traffic classifier vpn_nat1 operator or
if-match acl 3001 precedence 1
#
traffic behavior vpn_nat1
nat bind instance nat
#
traffic policy vpn_nat1
share-mode
classifier vpn_nat1 behavior vpn_nat1 precedence 1
#
interface GigabitEthernet0/2/0
undo shutdown
ip address 172.16.5.1 255.255.255.0
vrrp vrid 1 virtual-ip 172.16.5.100
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 preempt-mode timer delay 1500
vrrp vrid 1 track service-location 1 reduced 60
vrrp vrid 1 track interface GigabitEthernet 0/2/1.1 reduced 60
vrrp vrid 1 track interface GigabitEthernet 0/2/1.2 reduced 60
vrrp vrid 1 track interface GigabitEthernet 0/3/0 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet0/3/0
ip binding vpn-instance outside-vpn
ip address 172.16.9.1 255.255.255.0
#
interface GigabitEthernet0/2/1.1
vlan-type dot1q 200
ip binding vpn-instance inside-vpn1
ip address 172.16.10.1 255.255.255.0
traffic-policy vpn_nat1 inbound
#
interface GigabitEthernet0/2/1.2
vlan-type dot1q 202
ip binding vpn-instance inside-vpn2
ip address 172.16.11.1 255.255.255.0
traffic-policy vpn_nat1 inbound
#
return
● CGN2 configuration file
#
sysname CGN2
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
ip vpn-instance inside-vpn1
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 101:101 export-extcommunity
vpn-target 101:101 import-extcommunity
#
ip vpn-instance inside-vpn2
ipv4-family
route-distinguisher 200:2
apply-label per-instance
vpn-target 102:102 export-extcommunity
vpn-target 102:102 import-extcommunity
#
ip vpn-instance outside-vpn
ipv4-family
route-distinguisher 200:3
apply-label per-instance
vpn-target 103:103 export-extcommunity
vpn-target 103:103 import-extcommunity
#
ospf 101 vpn-instance inside-vpn1
default cost inherit-metric
import-route unr
opaque-capability enable
area 0.0.0.0
network 172.16.6.0 0.0.0.255
#
ospf 102 vpn-instance inside-vpn2
default cost inherit-metric
import-route unr
opaque-capability enable
area 0.0.0.0
network 172.16.7.0 0.0.0.255
#
bgp 300
private-4-byte-as enable
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance outside-vpn
import-route direct
import-route unr
peer 172.16.8.2 as-number 400
peer 172.16.8.2 connect-interface GigabitEthernet0/3/0
#
remote-backup-service vpn_nat
peer 172.16.5.1 source 172.16.5.2 port 1024
#
service-ha hot-backup enable
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/2/0
remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.1
#
service-instance-group group1
service-location 1
remote-backup-service vpn_nat
#
nat instance nat id 1
port-range 64
service-instance-group group1
nat address-group group1 group-id 1 vpn-instance outside-vpn
section 0 11.11.11.0 mask 24
nat outbound any address-group group1
#
acl number 3001
rule 5 permit ip source 10.1.1.0 0.0.0.255
#
traffic classifier vpn_nat1 operator or
if-match acl 3001 precedence 1
#
traffic policy vpn_nat1
share-mode
classifier vpn_nat1 behavior vpn_nat1 precedence 1
#
interface GigabitEthernet0/2/0
undo shutdown
ip address 172.16.5.2 255.255.255.0
vrrp vrid 1 virtual-ip 172.16.5.100
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet0/3/0
ip binding vpn-instance outside-vpn
ip address 172.16.8.1 255.255.255.0
#
interface GigabitEthernet0/2/1.1
vlan-type dot1q 200
ip binding vpn-instance inside-vpn1
ip address 172.16.6.1 255.255.255.0
traffic-policy vpn_nat1 inbound
#
interface GigabitEthernet0/2/1.2
vlan-type dot1q 202
ip binding vpn-instance inside-vpn2
ip address 172.16.7.1 255.255.255.0
traffic-policy vpn_nat1 inbound
#
return
?.8. Example for Configuring VPN over Centralized NAT Inter-chassis Hot Backup
(One to One)
This section provides an example for configuring VPN over centralized NAT inter-
chassis hot backup.
Networking Requirements
On the network shown in Figure 1-124, one-to-one address translation is
performed on user traffic between one network-side VPN and one user-side VPN.
A VRRP channel is established between CGN1 and CGN2 over GE interfaces. CGN1
and CGN2 work in master/backup mode and their master/backup status is
determined by VRRP. HA is associated with VRRP. CGN1 is equipped with a service
board in slot 9, and CGN2 is equipped with a service board in slot 9. It is required
that VPN over inter-chassis hot backup be implemented between CPU 0 of the
service board in slot 9 on CGN1 and CPU 0 of the service board in slot 9 on CGN2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the license function on CGN devices and configure NAT session table
resources.
2. Configure a user-side VPN instance and a network-side VPN instance on each
CGN device and bind the VPN instances.
3. Enable HA hot backup.
4. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel.
5. Create a VRRP group and associate HA with VRRP.
6. Bind the service-location group to the VRRP group.
7. Create a service-instance group and bind the service-location group to the
service-instance group.
8. Create a NAT instance, enable the VPN NAT function, bind the NAT instance
to the service-instance group and allowed VPN instance, and configure an
address pool and a traffic conversion policy.
9. Configure a NAT traffic diversion policy.
Data Preparation
No. Data
6 ID of a VRRP group
Procedure
Step 1 Enable the license function on CGN devices and configure NAT session table
resources.
# Configure CGN1.
<HUAWEI> system-view
[~HUAWEI] sysname CGN1
[*HUAWEI] commit
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[~CGN1-license] active nat session-table size 6 slot 9
[*CGN1-license] active nat bandwidth-enhance 40 slot 9
[*CGN1-license] commit
[~CGN1-license] quit
# Configure CGN2.
<HUAWEI> system-view
[~HUAWEI] sysname CGN2
[*HUAWEI] commit
[~CGN2] vsm on-board-mode disable
[*CGN2] commit
[~CGN2] license
[~CGN2-license] active nat session-table size 6 slot 9
[*CGN2-license] active nat bandwidth-enhance 40 slot 9
[*CGN2-license] commit
[~CGN2-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 2 Configure a user-side VPN instance and a network-side VPN instance on each CGN
device and bind the VPN instances.
# Configure CGN1.
1. Configure a user-side VPN instance named inside-vpn on CGN1.
[~CGN1] ip vpn-instance inside-vpn
[*CGN1-vpn-instance-inside-vpn] ipv4-family
[*CGN1-vpn-instance-inside-vpn-af-ipv4] route-distinguisher 200:1
[*CGN1-vpn-instance-inside-vpn-af-ipv4] vpn-target 101:101 export-extcommunity
[*CGN1-vpn-instance-inside-vpn-af-ipv4] vpn-target 101:101 import-extcommunity
[*CGN1-vpn-instance-inside-vpn-af-ipv4] commit
[~CGN1-vpn-instance-inside-vpn-af-ipv4] quit
[~CGN1-vpn-instance-inside-vpn] quit
3. On CGN1, bind the VPN instance named inside-vpn on the user side.
[~CGN1] interface GigabitEthernet 0/2/1
[~CGN1-GigabitEthernet0/2/1] ip binding vpn-instance inside-vpn
[*CGN1-GigabitEthernet0/2/1] ip address 172.16.10.1 24
[*CGN1-GigabitEthernet0/2/1] commit
[~CGN1-GigabitEthernet0/2/1] quit
4. On CGN1, bind the VPN instance named global-vpn on the network side.
[~CGN1] interface GigabitEthernet 0/3/0
[~CGN1-GigabitEthernet0/3/0] ip binding vpn-instance global-vpn
[*CGN1-GigabitEthernet0/3/0] ip address 172.16.9.1 24
[*CGN1-GigabitEthernet0/3/0] commit
[~CGN1-GigabitEthernet0/3/0] quit
# Configure CGN2.
1. Configure a user-side VPN instance named inside-vpn on CGN2.
[~CGN2] ip vpn-instance inside-vpn
[*CGN2-vpn-instance-inside-vpn] ipv4-family
[*CGN2-vpn-instance-inside-vpn-af-ipv4] route-distinguisher 200:1
[*CGN2-vpn-instance-inside-vpn-af-ipv4] vpn-target 101:101 export-extcommunity
[*CGN2-vpn-instance-inside-vpn-af-ipv4] vpn-target 101:101 import-extcommunity
[*CGN2-vpn-instance-inside-vpn-af-ipv4] commit
[~CGN2-vpn-instance-inside-vpn-af-ipv4] quit
[~CGN2-vpn-instance-inside-vpn] quit
3. On CGN2, bind the VPN instance named inside-vpn on the user side.
[~CGN2] interface GigabitEthernet 0/2/1
[~CGN2-GigabitEthernet0/2/1] ip binding vpn-instance inside-vpn
[*CGN2-GigabitEthernet0/2/1] ip address 172.16.7.1 24
[*CGN2-GigabitEthernet0/2/1] commit
[~CGN2-GigabitEthernet0/2/1] quit
4. On CGN2, bind the VPN instance named global-vpn on the network side.
[~CGN2] interface GigabitEthernet 0/3/0
[~CGN2-GigabitEthernet0/3/0] ip binding vpn-instance global-vpn
[*CGN2-GigabitEthernet0/3/0] ip address 172.16.8.1 24
[*CGN2-GigabitEthernet0/3/0] commit
[~CGN2-GigabitEthernet0/3/0] quit
# Configure CGN1.
[~CGN1] service-ha hot-backup enable
[*CGN1] commit
NOTE
# Create service-location group 1 on CGN1, add CPU 0 of the service board in slot
9 as an HA dual-device inter-chassis backup member, and set the local VRRP
outbound interface to GE 0/2/0 and the peer IP address to 172.16.5.2.
[~CGN1] service-location 1
[*CGN1-service-location-1] location slot 9
[*CGN1-service-location-1] remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.2
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
# Create service-location group 1 on CGN2, add CPU 0 of the service board in slot
9 as an HA dual-device inter-chassis backup member, and set the local VRRP
outbound interface to GE 0/2/0 and the peer IP address to 172.16.5.1.
[~CGN2] service-location 1
[*CGN2-service-location-1] location slot 9
[*CGN2-service-location-1] remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.1
[*CGN2-service-location-1] commit
[~CGN2-service-location-1] quit
Step 5 Create a VRRP group and configure HA and VRRP association. Set the VRRP
preemption delay to 1500s to ensure that the NAT information is backed up
completely. Set the VRRP recovery delay to 15s.
# Configure CGN1.
[~CGN1] interface GigabitEthernet 0/2/0
[~CGN1-GigabitEthernet0/2/0] ip address 172.16.5.1 24
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 virtual-ip 172.16.5.100
[*CGN1-GigabitEthernet0/2/0] admin-vrrp vrid 1 ignore-if-down
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 priority 200
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 preempt-mode timer delay 1500
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track service-location 1 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track interface GigabitEthernet 0/2/1 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp vrid 1 track interface GigabitEthernet 0/3/0 reduced 60
[*CGN1-GigabitEthernet0/2/0] vrrp recover-delay 15
[*CGN1-GigabitEthernet0/2/0] commit
[~CGN1-GigabitEthernet0/2/0] quit
# Configure CGN2.
[~CGN2] interface GigabitEthernet 0/2/0
[~CGN2-GigabitEthernet0/2/0] ip address 172.16.5.2 24
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 virtual-ip 172.16.5.100
[*CGN2-GigabitEthernet0/2/0] admin-vrrp vrid 1 ignore-if-down
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 priority 150
[*CGN2-GigabitEthernet0/2/0] vrrp vrid 1 track service-location 1 reduced 60
[*CGN2-GigabitEthernet0/2/0] vrrp recover-delay 15
[*CGN2-GigabitEthernet0/2/0] commit
[~CGN2-GigabitEthernet0/2/0] quit
After the configuration, the master/backup status of CGN1 and CGN2 can be
determined by VRRP. You can run the display vrrp 1 command to check the
master/backup VRRP status, which reflects the master/backup status of the
service-location groups on CGN1 and CGN2. State in the command output
indicates the CGN device's status.
The following example uses the command output on CGN1:
[~CGN1] display vrrp 1
GigabitEthernet 0/2/0 | Virtual Router 1
State : Master
Virtual IP : 172.16.5.100
Master IP : 172.16.5.1
Local IP : 172.16.5.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 300 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track IF : GigabitEthernet0/2/1 Priority Reduced : 60
IF State : UP
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2016-10-18 11:14:48 UTC+10:59
Last Change Time : 2016-10-18 14:02:46 UTC+10:59
[~CGN1] service-location 1
[~CGN1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/2/0
[*CGN1-service-location-1] commit
[~CGN1-service-location-1] quit
NOTE
Step 7 Create a service-instance group and bind the service-location group to the service-
instance group.
# Configure CGN1.
[~CGN1] service-instance-group group1
[*CGN1-service-instance-group-group1] service-location 1
[*CGN1-service-instance-group-group1] commit
[~CGN1-service-instance-group-group1] quit
NOTE
Run the display service-location 1 command on the CGN devices to check the HA
information. VRRP State in the command output indicates the status of the HA
backup group, which must be consistent with the CGN devices' VRRP status.
Batch-backup state in the command output indicates whether batch backup has
completed.
The following example uses the command output on CGN1:
[~CGN1] display service-location 1
service-location 1
Backup scene type : inter-box
Location slot ID :9
Remote-backup interface: GigabitEthernet0/2/0
Peer: 172.16.5.2
Vrrp ID: 1
Vrrp bind interface: GigabitEthernet0/2/0
Vrrp state: master
Bound service-instance-group number: 1
Batch-backup state: finished
Step 8 Create a NAT instance, enable the VPN NAT function, bind the NAT instance to
the service-instance group and allowed VPN instance, and configure an address
pool and a traffic conversion policy.
# Configure CGN1.
[~CGN1] nat instance nat id 1
[*CGN1-nat-instance-nat] port-range 64
[*CGN1-nat-instance-nat] service-instance-group group1
[*CGN1-nat-instance-nat] nat address-group group1 group-id 1 vpn-instance global-vpn
[*CGN1-nat-instance-nat-nat-address-group-group1] section 0 11.11.11.0 mask 24
[*CGN1-nat-instance-nat-nat-address-group-group1] commit
[~CGN1-nat-instance-nat-nat-address-group-group1] quit
[~CGN1-nat-instance-nat] nat allow-access inside vpn-instance inside-vpn global vpn-instance global-vpn
[*CGN1-nat-instance-nat] nat outbound any address-group group1
[*CGN1-nat-instance-nat] commit
[~CGN1-nat-instance-nat] quit
NOTE
The configuration on CGN2 is similar to that on CGN1. Pay attention to the following
points:
● Ensure that the VPN instance before NAT is the same as that bound to the user-side
inbound interface (GE 0/2/1 on PE1).
● Ensure that the VPN instance after NAT is the same as that bound to the network-side
outbound interface (GE 0/3/0 on PE2).
1. Configure an ACL traffic classification rule.
[~CGN1] acl 3001
[*CGN1-acl4-advance-3001] rule 5 permit ip source 10.1.1.0 0.0.0.255
[*CGN1-acl4-advance-3001] commit
[~CGN1-acl4-advance-3001] quit
2. Configure a traffic classifier named vpn_nat1 and reference ACL 3001.
[~CGN1] traffic classifier vpn_nat1 operator or
[*CGN1-classifier-vpn_nat1] if-match acl 3001 precedence 1
[*CGN1-classifier-vpn_nat1] commit
[~CGN1-classifier-vpn_nat1] quit
3. Define a traffic behavior named vpn_nat1 and bind it to the NAT instance
named nat.
[~CGN1] traffic behavior vpn_nat1
[*CGN1-behavior-vpn_nat1] nat bind instance nat
[*CGN1-behavior-vpn_nat1] commit
[~CGN1-behavior-vpn_nat1] quit
4. Define a NAT traffic policy named vpn_nat1 to associate the traffic classifier
with the traffic behavior.
[~CGN1] traffic policy vpn_nat1
[*CGN1-policy-vpn_nat1] classifier vpn_nat1 behavior vpn_nat1
[*CGN1-policy-vpn_nat1] commit
[~CGN1-policy-vpn_nat1] quit
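The application of the traffic policy is not shown in the steps above. Per the CGN1 configuration file below, the policy is applied in the inbound direction on the user-side interface GE0/2/1:
[~CGN1] interface GigabitEthernet 0/2/1
[~CGN1-GigabitEthernet0/2/1] traffic-policy vpn_nat1 inbound
[*CGN1-GigabitEthernet0/2/1] commit
[~CGN1-GigabitEthernet0/2/1] quit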
NOTE
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
license
active nat session-table size 6 slot 9
#
● CGN2 configuration file
#
sysname CGN2
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/2/0
remote-backup interface GigabitEthernet 0/2/0 peer 172.16.5.1
#
service-instance-group group1
service-location 1
#
nat instance nat id 1
port-range 64
service-instance-group group1
nat address-group group1 group-id 1 vpn-instance global-vpn
section 0 11.11.11.0 mask 24
nat allow-access inside vpn-instance inside-vpn global vpn-instance global-vpn
nat outbound any address-group group1
#
acl number 3001
rule 5 permit ip source 10.1.1.0 0.0.0.255
#
traffic classifier vpn_nat1 operator or
if-match acl 3001 precedence 1
#
traffic behavior vpn_nat1
nat bind instance nat
#
traffic policy vpn_nat1
share-mode
classifier vpn_nat1 behavior vpn_nat1 precedence 1
#
interface GigabitEthernet0/2/0
undo shutdown
ip address 172.16.5.2 255.255.255.0
vrrp vrid 1 virtual-ip 172.16.5.100
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet0/3/0
ip binding vpn-instance global-vpn
ip address 172.16.8.1 255.255.255.0
#
interface GigabitEthernet0/2/1
ip binding vpn-instance inside-vpn
ip address 172.16.7.1 255.255.255.0
traffic-policy vpn_nat1 inbound
#
return
Networking Requirements
On the network shown in Figure 1-125, in distributed deployment mode, a
terminal user dials up through PPPoE, and BRAS1 and BRAS2 are interconnected
through a switch. RUI is used to achieve user information backup in exclusive
address pool mode. A NAT service board is deployed in slot 9 on BRAS1 and
another NAT service board is deployed in slot 9 on BRAS2. A VRRP channel is
established on the GE interfaces of BRAS1 and BRAS2. NAT inter-chassis hot
backup is implemented on CPU 0 of the NAT service board in slot 9 on BRAS1 and
CPU 0 of the NAT service board in slot 9 on BRAS2. The NAT service's master/
backup status is determined by VRRP, and the service board status is associated
with VRRP.
Interface addressing shown in Figure 1-125: BRAS1: GE0/1/2.2 192.168.2.10/24, GE0/1/3 10.12.1.0/24, Loopback0 10.10.10.100/32. BRAS2: GE0/1/2.2 192.168.2.100/24, GE0/1/3 10.12.2.0/24, Loopback0 99.99.99.99/32.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable HA hot backup on the master and backup devices.
2. Configure a session table size for the NAT service board's CPU.
3. Create a service-location group, add members to it, and configure a VRRP channel.
4. Configure a VRRP group on the direct link and associate HA with VRRP.
5. Create a service-instance group and a NAT instance, and bind the NAT instance to the service-instance group and NAT address pool.
6. Configure PPPoE user access and bind the user group to the NAT instance.
7. Configure a NAT traffic diversion policy.
8. Configure a static blackhole route for the NAT address pool and advertise it through OSPF.
9. Configure a user-side VRRP group and a BFD session.
10. Configure RUI backup in exclusive address pool mode.
Data Preparation
To complete the configuration, you need the following data:
● Index of a service-location group
● CPU IDs and slot IDs of the service boards on BRAS1 and BRAS2 (CPU 0 in
slot 9 in this example)
● Interface and its IP address of a VRRP channel over the direct link between
BRAS1 and BRAS2
● Index, virtual IP address, member priorities, and preemption delay of a VRRP
group over the direct link
● Name of a service-instance group
● Index and name of a NAT instance, NAT address pool name, and address
range
● ACL number, traffic classifier name, traffic behavior name, and traffic policy
name of the NAT traffic conversion policy
● IPv4 address pool, IP address of the address pool gateway, and IP address
segment allocated for user access
● User group name, user domain name, and AAA authentication scheme and
accounting scheme
● Name and IP address of a RADIUS server group
● Remote backup identifier for RUI backup
● User-side interfaces and their IP addresses on BRAS1 and BRAS2
● Interface and its IP address of a user-side VRRP channel
● Index, virtual IP address, member priorities, and preemption delay of the user-
side VRRP group
Procedure
Step 1 Enable HA hot backup on the master and backup devices.
# Configure BRAS1.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
[~BRAS1] vsm on-board-mode disable
[*BRAS1] commit
[~BRAS1] service-ha hot-backup enable
[*BRAS1] commit
# Configure BRAS2.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS2
[*HUAWEI] commit
[~BRAS2] vsm on-board-mode disable
[*BRAS2] commit
[~BRAS2] service-ha hot-backup enable
[*BRAS2] commit
Step 2 Configure a session table size for the NAT service board's CPU on BRAS1 and
BRAS2.
# Set the number of session tables supported by the NAT service board's CPU in
slot 9 on BRAS1 to 6M.
[~BRAS1] license
[~BRAS1-license] active nat session-table size 6 slot 9
[*BRAS1-license] active nat bandwidth-enhance 40 slot 9
[*BRAS1-license] commit
[~BRAS1-license] quit
# Set the number of session tables supported by the NAT service board's CPU in
slot 9 on BRAS2 to 6M.
[~BRAS2] license
[~BRAS2-license] active nat session-table size 6 slot 9
[*BRAS2-license] active nat bandwidth-enhance 40 slot 9
[*BRAS2-license] commit
[~BRAS2-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 3 Create a service-location group on the master and backup devices, add members
to the group, and configure a VRRP channel.
# Create service-location group 1 on BRAS1, add CPU 0 in slot 9 to the group, and
set the VRRP outbound interface to GE 0/1/1 and the peer IP address to 10.1.1.2.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] location slot 9
[*BRAS1-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
# Create service-location group 1 on BRAS2, add CPU 0 in slot 9 to the group, and
set the VRRP outbound interface to GE 0/1/1 and the peer IP address to 10.1.1.1.
[~BRAS2] service-location 1
[*BRAS2-service-location-1] location slot 9
[*BRAS2-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.1
[*BRAS2-service-location-1] commit
[~BRAS2-service-location-1] quit
# On BRAS1, enter the view of GE 0/1/1, create VRRP group 1, and set the virtual
IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, and set BRAS1's priority in the VRRP group to 200, the VRRP preemption
delay to 1500s, and the VRRP recovery delay to 15s.
[~BRAS1] interface GigabitEthernet 0/1/1
[~BRAS1-GigabitEthernet0/1/1] ip address 10.1.1.1 24
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS1-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 priority 200
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/1] vrrp recover-delay 15
[*BRAS1-GigabitEthernet0/1/1] commit
[~BRAS1-GigabitEthernet0/1/1] quit
# On BRAS2, enter the view of GE 0/1/1, create VRRP group 1, and set the virtual
IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, set BRAS2's priority in the VRRP group to 150, and set the VRRP recovery
delay to 15s.
[~BRAS2] interface GigabitEthernet 0/1/1
[~BRAS2-GigabitEthernet0/1/1] ip address 10.1.1.2 24
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS2-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 priority 150
[*BRAS2-GigabitEthernet0/1/1] vrrp recover-delay 15
[*BRAS2-GigabitEthernet0/1/1] commit
[~BRAS2-GigabitEthernet0/1/1] quit
Step 5 Associate HA with VRRP on the direct-connect interfaces on the master and
backup devices.
# On BRAS1, enter the view of GE 0/1/1 and associate service-location group 1
with VRRP group 1.
[~BRAS1] interface GigabitEthernet 0/1/1
[~BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS1-GigabitEthernet0/1/1] commit
[~BRAS1-GigabitEthernet0/1/1] quit
# On BRAS2, enter the view of GE 0/1/1 and associate service-location group 1
with VRRP group 1.
[~BRAS2] interface GigabitEthernet 0/1/1
[~BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS2-GigabitEthernet0/1/1] commit
[~BRAS2-GigabitEthernet0/1/1] quit
# Run the display vrrp 1 command on BRAS1 and BRAS2 to view the master/
backup VRRP status, which reflects the master/backup status of the service-
location group. State in the command output indicates the BRAS status.
[~BRAS1] display vrrp 1
GigabitEthernet 0/1/1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that BRAS1 is the master device.
[~BRAS2] display vrrp 1
GigabitEthernet0/1/1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Preempt : YES Delay Time : 0 s
Hold Multiplier : 4
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:26:40 UTC+08:00
Last Change Time : 2011-10-18 14:02:22 UTC+08:00
Step 6 Bind the service-location group to the VRRP group on BRAS1 and BRAS2.
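The binding commands are not shown in this step. Based on the service-location configuration in the BRAS1 configuration file in this section, the binding can be sketched as follows (a minimal sketch; the interface name follows this example's VRRP channel on GE 0/1/1):
# On BRAS1, bind service-location group 1 to VRRP group 1 on GE 0/1/1.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/1/1
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
The configuration on BRAS2 is similar.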
Step 7 Create a service-instance group on BRAS1 and BRAS2 and bind it to the
service-location group.
# Create a service-instance group named group1 on BRAS1 and bind it to service-
location group 1.
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
Step 8 Create a NAT instance on BRAS1 and BRAS2 and bind it to the service-instance
group and NAT address pool.
# Create a NAT instance named nat on BRAS1, bind it to the service-instance
group named group1, and configure the IP addresses in the NAT address pool to
range from 11.11.11.100 to 11.11.11.105.
[~BRAS1] nat instance nat id 1
[*BRAS1-nat-instance-nat] service-instance-group group1
[*BRAS1-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
[*BRAS1-nat-instance-nat] nat outbound any address-group address-group1
[*BRAS1-nat-instance-nat] commit
[~BRAS1-nat-instance-nat] quit
# Run the display nat instance nat command on BRAS1 and BRAS2 to view NAT
configurations.
[~BRAS1] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
[~BRAS2] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
Step 9 Configure PPPoE user access on BRAS1 and BRAS2 and bind the user group to the
NAT instance.
For details about how to configure a user group and IP address pool in a domain,
see Configuring PPPoE Access in HUAWEI NetEngine 8100 M14/M8, NetEngine
8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8 series User Access-
PPPoE Access Configuration.
# On BRAS1, configure user information (user group named natbras, IP address
pool named natbras, user domain named natbras, and AAA) and bind the user
group to the NAT instance named nat.
1. Configure VT 1.
[~BRAS1] interface virtual-template 1
[*BRAS1-Virtual-Template1] ppp authentication-mode chap
[*BRAS1-Virtual-Template1] commit
[~BRAS1-Virtual-Template1] quit
2. Configure an authentication scheme named auth1 and an accounting
scheme named acct1.
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] quit
[*BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[*BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] quit
3. Configure a RADIUS server group named rd1.
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server shared-key-cipher YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] quit
4. Configure an address pool named natbras for users to go online.
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
5. Configure a user group named natbras.
[~BRAS1] user-group natbras
[*BRAS1] commit
6. Configure a user domain named natbras and bind it to the NAT instance
named nat.
[~BRAS1] aaa
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
[*BRAS1-aaa-domain-natbras] radius-server group rd1
[*BRAS1-aaa-domain-natbras] ip-pool natbras
[*BRAS1-aaa-domain-natbras] user-group natbras bind nat instance nat
[*BRAS1-aaa-domain-natbras] quit
[*BRAS1-aaa] commit
[~BRAS1-aaa] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 10 Configure a NAT traffic diversion policy on BRAS1 and BRAS2.
# Configure BRAS1.
1. Configure ACL 6001 to match the traffic of the user group named natbras.
[~BRAS1] acl number 6001
[*BRAS1-acl-ucl-6001] rule 1 permit ip source user-group natbras
[*BRAS1-acl-ucl-6001] commit
[~BRAS1-acl-ucl-6001] quit
2. Define a traffic classifier named c1 and reference ACL 6001.
[~BRAS1] traffic classifier c1 operator or
[*BRAS1-classifier-c1] if-match acl 6001
[*BRAS1-classifier-c1] commit
[~BRAS1-classifier-c1] quit
3. Define a traffic behavior named b1 and bind it to the NAT instance
named nat.
[~BRAS1] traffic behavior b1
[*BRAS1-behavior-b1] nat bind instance nat
[*BRAS1-behavior-b1] commit
[~BRAS1-behavior-b1] quit
4. Define a traffic policy to associate the traffic classifier with the traffic
behavior.
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
5. Apply the NAT traffic diversion policy in the system view.
[~BRAS1] traffic-policy p1 inbound
[*BRAS1] commit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 11 Configure the route of the NAT address pool as a static blackhole route and
advertise it to a routing protocol. Set the ID of an OSPF process to 1. OSPF is used
as an IGP to advertise routes.
# Configure BRAS1.
[~BRAS1] ip route-static 11.11.11.96 27 null 0
[*BRAS1] commit
[~BRAS1] ospf 1
[*BRAS1-ospf-1] import-route static
[*BRAS1-ospf-1] commit
[~BRAS1-ospf-1] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 12 On each of the master and backup devices, configure a user-side VRRP group
(between BRAS1/BRAS2 and SWITCH) and enable it to track the service-location
group. If the service-location group is not tracked, a CGN board failure cannot
trigger a master/backup BRAS switchover. As a result, new distributed NAT users
cannot go online.
# On BRAS1, enter the view of GE 0/1/2.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 192.168.2.200. Configure the VRRP group
as an mVRRP group, set BRAS1's priority in the VRRP group to 150 and the VRRP
preemption delay to 1500s, and associate the VRRP group with service-location
group 1.
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/2.2] ip address 192.168.2.10 255.255.255.0
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS1-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 150
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
# On BRAS2, enter the view of GE 0/1/2.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 192.168.2.200. Configure the VRRP group
as an mVRRP group, set BRAS2's priority in the VRRP group to 120, and associate
the VRRP group with service-location group 1.
[~BRAS2] interface GigabitEthernet0/1/2.2
[~BRAS2-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS2-GigabitEthernet0/1/2.2] ip address 192.168.2.100 255.255.255.0
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS2-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 120
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS2-GigabitEthernet0/1/2.2] commit
[~BRAS2-GigabitEthernet0/1/2.2] quit
Step 13 Establish a BFD session named bfd on BRAS1 and BRAS2 to quickly detect
interface or link exceptions and trigger a master/backup VRRP switchover.
# Configure a BFD session on BRAS1 and associate a VRRP group with the BFD
session.
1. Establish a BFD session named bfd at the access side to rapidly detect
interface or link faults. After detecting an interface fault or a link fault, BFD
triggers a master/backup VRRP switchover. The IP address of GE 0/1/2.2 on
BRAS2 is 192.168.2.100.
[~BRAS1] bfd
[*BRAS1-bfd] quit
[*BRAS1] bfd bfd bind peer-ip 192.168.2.100
[*BRAS1-bfd-session-bfd] discriminator local 100
[*BRAS1-bfd-session-bfd] discriminator remote 200
[*BRAS1-bfd-session-bfd] commit
[~BRAS1-bfd-session-bfd] quit
# Configure a BFD session on BRAS2 and associate a VRRP group with the BFD
session.
1. Establish a BFD session named bfd at the access side to rapidly detect
interface or link faults. After detecting an interface fault or a link fault, BFD
triggers a master/backup VRRP switchover. The IP address of GE 0/1/2.2 on
BRAS1 is 192.168.2.10.
[~BRAS2] bfd
[*BRAS2-bfd] quit
[*BRAS2] bfd bfd bind peer-ip 192.168.2.10
[*BRAS2-bfd-session-bfd] discriminator local 200
[*BRAS2-bfd-session-bfd] discriminator remote 100
[*BRAS2-bfd-session-bfd] commit
[~BRAS2-bfd-session-bfd] quit
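After both endpoints are configured, you can confirm that the BFD session has come up before relying on it for switchover. A minimal verification sketch (the display bfd session all command is the standard session query on this platform; the session state should be Up once both peers are reachable):
[~BRAS1] display bfd session all
You can also run the display vrrp 2 command, as with VRRP group 1 earlier in this example, to check that the user-side VRRP group tracks the session.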
Step 14 Configure RUI backup on BRAS1 and BRAS2 and back up BRAS user information in
exclusive address pool mode.
1. Configure an RBS on BRAS1 and BRAS2.
# Configure an RBS named natbras on BRAS1.
[~BRAS1] remote-backup-service natbras
[*BRAS1-rm-backup-srv-natbras] peer 10.1.1.2 source 10.1.1.1 port 7000
[*BRAS1-rm-backup-srv-natbras] track interface gigabitethernet0/1/1
[*BRAS1-rm-backup-srv-natbras] commit
[~BRAS1-rm-backup-srv-natbras] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see
BRAS2 configuration file in this section.
2. Configure an RBP on BRAS1 and BRAS2.
# Configure an RBP named natbras on BRAS1, set the backup mode to hot
backup, and bind the RBP to the RBS named natbras, VRRP group 2, and the
IP address pool named natbras.
[~BRAS1] remote-backup-profile natbras
[*BRAS1-rm-backup-prf-natbras] service-type bras
[*BRAS1-rm-backup-prf-natbras] backup-id 10 remote-backup-service natbras
[*BRAS1-rm-backup-prf-natbras] peer-backup hot
[*BRAS1-rm-backup-prf-natbras] vrrp-id 2 interface GigabitEthernet0/1/2.2
[*BRAS1-rm-backup-prf-natbras] ip-pool natbras
3. On BRAS1 and BRAS2, configure NAS parameters and the traffic backup
interval in the RBP view.
# On BRAS1, configure the logical IP address of NAS as 1.2.3.4, the logical
interface as GE 0/1/2, logical host name as huawei, and the user traffic
backup interval as 10 minutes.
[*BRAS1-rm-backup-prf-natbras] nas logic-ip 1.2.3.4
[*BRAS1-rm-backup-prf-natbras] nas logic-port GigabitEthernet 0/1/2
[*BRAS1-rm-backup-prf-natbras] nas logic-sysname huawei
[*BRAS1-rm-backup-prf-natbras] traffic backup interval 10
[*BRAS1-rm-backup-prf-natbras] commit
[~BRAS1-rm-backup-prf-natbras] quit
4. Configure the user-side sub-interface on BRAS1 and BRAS2 and bind the sub-
interface to an RBP.
# On BRAS1, configure GE 0/1/2.10 as the user-side sub-interface and bind
the sub-interface to the RBP named natbras.
[~BRAS1] interface GigabitEthernet0/1/2.10
[*BRAS1-GigabitEthernet0/1/2.10] remote-backup-profile natbras
[*BRAS1-GigabitEthernet0/1/2.10] commit
[~BRAS1-GigabitEthernet0/1/2.10] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see
BRAS2 configuration file in this section.
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
ip route-static 11.11.11.96 27 null 0
#
peer-backup route-cost auto-advertising
#
ospf 1
import-route static
default cost 10 tag 100 type 2
import-route unr
#
bfd
#
bfd bfd bind peer-ip 192.168.2.100
discriminator local 100
discriminator remote 200
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645
radius-server accounting 192.168.7.249 1646
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
#
interface Virtual-Template1
ppp authentication-mode chap
#
service-ha hot-backup enable
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1
remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
#
service-instance-group group1
service-location 1
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 preempt-mode timer delay 1500
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
acl number 6001
rule 1 permit ip source user-group natbras
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
ip pool natbras bas local
gateway 192.168.0.1 255.255.255.0
section 0 192.168.0.2 192.168.0.254
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
domain natbras
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool natbras
user-group natbras bind nat instance nat
#
remote-backup-service natbras
peer 10.1.1.2 source 10.1.1.1 port 7000
track interface GigabitEthernet0/1/1
#
remote-backup-profile natbras
service-type bras
backup-id 10 remote-backup-service natbras
peer-backup hot
vrrp-id 2 interface GigabitEthernet0/1/2.2
ip-pool natbras
nas logic-ip 1.2.3.4
nas logic-port GigabitEthernet0/1/2
nas logic-sysname huawei
traffic backup interval 10
#
interface GigabitEthernet0/1/2.10
user-vlan 2010
pppoe-server bind virtual-template 1
remote-backup-profile natbras
bas
access-type layer2-subscriber default-domain authentication natbras
authentication-method ppp
#
interface GigabitEthernet0/1/2.2
vlan-type dot1q 2002
ip address 192.168.2.10 255.255.255.0
vrrp vrid 2 virtual-ip 192.168.2.200
admin-vrrp vrid 2
vrrp vrid 2 priority 150
vrrp vrid 2 preempt-mode timer delay 1500
vrrp vrid 2 track service-location 1 reduced 50
#
interface loopback0
ip address 10.10.10.100 255.255.255.255
#
return
Networking Requirements
On the network shown in Figure 1-126, in distributed deployment mode, a
terminal user dials up through PPPoE, and BRAS1 and BRAS2 are interconnected
through a switch. Dual-device RUI is used to achieve user information backup in
shared address pool mode. A NAT service board is deployed in slot 9 on BRAS1
and another NAT service board is deployed in slot 9 on BRAS2. A VRRP channel is
established between BRAS1 and BRAS2 through GE interfaces. CPU 0 of the NAT
service board in slot 9 on BRAS1 and CPU 0 of the NAT service board in slot 9 on
BRAS2 implement NAT inter-chassis hot backup. The NAT service's master/backup
status is determined by VRRP, and the service board status is associated with VRRP.
Interface addressing shown in Figure 1-126: BRAS1: GE0/1/2.2 192.168.2.10/24, GE0/1/3 10.12.1.0/24, Loopback0 10.10.10.100/32. BRAS2: GE0/1/2.2 192.168.2.100/24, GE0/1/3 10.12.2.0/24, Loopback0 10.10.10.99/32.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable HA hot backup on the master and backup devices.
2. Configure a session table size for the NAT service board's CPU.
3. Create a service-location group, add members to it, and configure a VRRP channel.
4. Configure a VRRP group on the direct link and associate HA with VRRP.
5. Create a service-instance group and a NAT instance, and bind the NAT instance to the service-instance group and NAT address pool.
6. Configure PPPoE user access and bind the user group to the NAT instance.
7. Configure a NAT traffic diversion policy.
8. Configure a static blackhole route for the NAT address pool and advertise it through OSPF.
9. Configure a user-side VRRP group and a BFD session.
10. Configure RUI backup in shared address pool mode.
Data Preparation
To complete the configuration, you need the following data:
● Index of a service-location group
● Active CPU IDs and slot IDs of the service boards on BRAS1 and BRAS2 (CPU 0
in slot 9 in this example)
● Interface and its IP address of a VRRP channel over the direct link between
BRAS1 and BRAS2
● Index, virtual IP address, member priorities, and preemption delay of a VRRP
group over the direct link
● Name of a service-instance group
● Index and name of a NAT instance, NAT address pool name, and address
range
● ACL number, traffic classifier name, traffic behavior name, and traffic policy
name of the NAT traffic conversion policy
● IPv4 address pool, IP address of the address pool gateway, and IP address
segment allocated for user access
● User group name, user domain name, and AAA authentication scheme and
accounting scheme
● Name and IP address of a RADIUS server group
● Remote backup identifier for RUI backup
● User-side interfaces and their IP addresses on BRAS1 and BRAS2
● Interface and its IP address of a user-side VRRP channel
● Index, virtual IP address, member priorities, and preemption delay of the user-
side VRRP group
Procedure
Step 1 Enable HA hot backup on the master and backup devices.
# Configure BRAS1.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
[~BRAS1] vsm on-board-mode disable
[*BRAS1] commit
[~BRAS1] service-ha hot-backup enable
[*BRAS1] commit
# Configure BRAS2.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS2
[*HUAWEI] commit
[~BRAS2] vsm on-board-mode disable
[*BRAS2] commit
[~BRAS2] service-ha hot-backup enable
[*BRAS2] commit
Step 2 Configure a session table size for the NAT service board's CPU on BRAS1 and
BRAS2.
# Set the number of session tables supported by the NAT service board's CPU in
slot 9 on BRAS1 to 6M.
[~BRAS1] license
[~BRAS1-license] active nat session-table size 6 slot 9
[*BRAS1-license] active nat bandwidth-enhance 40 slot 9
[*BRAS1-license] commit
[~BRAS1-license] quit
# Set the number of session tables supported by the NAT service board's CPU in
slot 9 on BRAS2 to 6M.
[~BRAS2] license
[~BRAS2-license] active nat session-table size 6 slot 9
[*BRAS2-license] active nat bandwidth-enhance 40 slot 9
[*BRAS2-license] commit
[~BRAS2-license] quit
NOTE
The method for configuring bandwidth resources varies according to the board type. As
such, determine whether to run the active nat bandwidth-enhance command and the
corresponding parameters based on the board type.
Step 3 Create a service-location group on the master and backup devices, add members
to the group, and configure a VRRP channel.
# Create service-location group 1 on BRAS1, add CPU 0 in slot 9 to the group, and
set the VRRP outbound interface to GE 0/1/1 and the peer IP address to 10.1.1.2.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] location slot 9
[*BRAS1-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
# Create service-location group 1 on BRAS2, add CPU 0 in slot 9 to the group, and
set the VRRP outbound interface to GE 0/1/1 and the peer IP address to 10.1.1.1.
[~BRAS2] service-location 1
[*BRAS2-service-location-1] location slot 9
[*BRAS2-service-location-1] remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.1
[*BRAS2-service-location-1] commit
[~BRAS2-service-location-1] quit
# On BRAS1, enter the view of GE 0/1/1, create VRRP group 1, and set the virtual
IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, and set BRAS1's priority in the VRRP group to 200, the VRRP preemption
delay to 1500s, and the VRRP recovery delay to 15s.
[~BRAS1] interface GigabitEthernet 0/1/1
[~BRAS1-GigabitEthernet0/1/1] ip address 10.1.1.1 24
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS1-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 priority 200
[*BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/1] vrrp recover-delay 15
[*BRAS1-GigabitEthernet0/1/1] commit
[~BRAS1-GigabitEthernet0/1/1] quit
# On BRAS2, enter the view of GE 0/1/1, create VRRP group 1, and set the virtual
IP address of the VRRP group to 10.1.1.3. Configure the VRRP group as an mVRRP
group, set BRAS2's priority in the VRRP group to 150, and set the VRRP recovery
delay to 15s.
[~BRAS2] interface GigabitEthernet 0/1/1
[~BRAS2-GigabitEthernet0/1/1] ip address 10.1.1.2 24
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*BRAS2-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
[*BRAS2-GigabitEthernet0/1/1] vrrp vrid 1 priority 150
[*BRAS2-GigabitEthernet0/1/1] vrrp recover-delay 15
[*BRAS2-GigabitEthernet0/1/1] commit
[~BRAS2-GigabitEthernet0/1/1] quit
Step 5 Associate HA with VRRP on the direct-connect interfaces on the master and
backup devices.
# On BRAS1, enter the view of GE 0/1/1, and associate service-location group 1
with VRRP group 1.
[~BRAS1] interface GigabitEthernet 0/1/1
[~BRAS1-GigabitEthernet0/1/1] vrrp vrid 1 track service-location 1 reduced 60
[*BRAS1-GigabitEthernet0/1/1] commit
[~BRAS1-GigabitEthernet0/1/1] quit
# Run the display vrrp 1 command on BRAS1 and BRAS2 to view the master/
backup VRRP status, which reflects the master/backup status of the service-
location group. State in the command output indicates the BRAS status.
[~BRAS1] display vrrp 1
GigabitEthernet 0/1/1 | Virtual Router 1
State : Master
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.1
PriorityRun : 200
PriorityConfig : 200
MasterPriority : 200
Preempt : YES Delay Time : 1500 s
Hold Multiplier : 3
TimerRun : 1 s
TimerConfig : 1 s
Auth Type : NONE
Virtual MAC : 00e0-fc12-3456
Check TTL : YES
Config Type : admin-vrrp
Backup-forward : disabled
Fast-resume : disabled
Track Service-location : 1 Priority Reduced : 60
Service-location State : UP
Create Time : 2011-10-18 11:14:48 UTC+10:59
Last Change Time : 2011-10-18 14:02:46 UTC+10:59
NOTE
Master in the command output indicates that BRAS1 is the master device.
[~BRAS2] display vrrp 1
GigabitEthernet0/1/1 | Virtual Router 1
State : Backup
Virtual IP : 10.1.1.3
Master IP : 10.1.1.1
Local IP : 10.1.1.2
PriorityRun : 150
PriorityConfig : 150
MasterPriority : 200
Step 6 Bind the service-location group to the VRRP group on BRAS1 and BRAS2.
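The binding commands are omitted here. Based on the service-location configuration shown in the configuration files for these examples, the binding can be sketched as follows (a minimal sketch; the interface name follows this example's VRRP channel on GE 0/1/1):
# On BRAS1, bind service-location group 1 to VRRP group 1 on GE 0/1/1.
[~BRAS1] service-location 1
[*BRAS1-service-location-1] vrrp vrid 1 interface GigabitEthernet 0/1/1
[*BRAS1-service-location-1] commit
[~BRAS1-service-location-1] quit
The configuration on BRAS2 is similar.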
Step 7 Create a service-instance group on BRAS1 and BRAS2 and bind it to the
service-location group.
# Create a service-instance group named group1 on BRAS1 and bind it to service-
location group 1.
[~BRAS1] service-instance-group group1
[*BRAS1-service-instance-group-group1] service-location 1
[*BRAS1-service-instance-group-group1] commit
[~BRAS1-service-instance-group-group1] quit
Step 8 Create a NAT instance on BRAS1 and BRAS2 and bind it to the service-instance
group and NAT address pool.
# Create a NAT instance named nat on BRAS1, bind it to the service-instance
group named group1, and configure the IP addresses in the NAT address pool to
range from 11.11.11.100 to 11.11.11.105.
[~BRAS1] nat instance nat id 1
[*BRAS1-nat-instance-nat] service-instance-group group1
[*BRAS1-nat-instance-nat] nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
[*BRAS1-nat-instance-nat] nat outbound any address-group address-group1
[*BRAS1-nat-instance-nat] commit
[~BRAS1-nat-instance-nat] quit
# Run the display nat instance nat command on BRAS1 and BRAS2 to view NAT
configurations.
[~BRAS1] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
[~BRAS2] display nat instance nat
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
Step 9 Configure PPPoE user access on BRAS1 and BRAS2 and bind the user group to the
NAT instance.
For details about how to configure a user group and IP address pool in a domain,
see Configuring PPPoE Access in User Access-PPPoE Access Configuration.
# On BRAS1, configure user information (user group named natbras, IP address
pool named natbras, user domain named natbras, and AAA) and bind the user
group to the NAT instance named nat.
1. Configure VT 1.
[~BRAS1] interface virtual-template 1
[*BRAS1-Virtual-Template1] ppp authentication-mode chap
[*BRAS1-Virtual-Template1] commit
[~BRAS1-Virtual-Template1] quit
2. Configure the authentication scheme named auth1 and the accounting
scheme named acct1.
[~BRAS1] aaa
[~BRAS1-aaa] authentication-scheme auth1
[*BRAS1-aaa-authen-auth1] authentication-mode radius
[*BRAS1-aaa-authen-auth1] quit
[*BRAS1-aaa] accounting-scheme acct1
[*BRAS1-aaa-accounting-acct1] accounting-mode radius
[*BRAS1-aaa-accounting-acct1] commit
[~BRAS1-aaa-accounting-acct1] quit
[~BRAS1-aaa] quit
3. Configure a RADIUS server group named rd1.
[~BRAS1] radius-server group rd1
[*BRAS1-radius-rd1] radius-server authentication 192.168.7.249 1645
[*BRAS1-radius-rd1] radius-server accounting 192.168.7.249 1646
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] radius-server type plus11
[~BRAS1-radius-rd1] radius-server shared-key-cipher YsHsjx_202206
[*BRAS1-radius-rd1] commit
[~BRAS1-radius-rd1] quit
4. Configure an address pool named natbras for users to go online.
[~BRAS1] ip pool natbras bas local
[*BRAS1-ip-pool-natbras] gateway 192.168.0.1 255.255.255.0
[*BRAS1-ip-pool-natbras] commit
[~BRAS1-ip-pool-natbras] section 0 192.168.0.2 192.168.0.254
[~BRAS1-ip-pool-natbras] quit
5. Configure a user group named natbras.
[~BRAS1] user-group natbras
[*BRAS1] commit
6. Configure a user domain named natbras and bind it to the NAT instance
named nat.
[~BRAS1] aaa
[~BRAS1-aaa] domain natbras
[*BRAS1-aaa-domain-natbras] authentication-scheme auth1
[*BRAS1-aaa-domain-natbras] accounting-scheme acct1
[*BRAS1-aaa-domain-natbras] radius-server group rd1
[*BRAS1-aaa-domain-natbras] commit
[~BRAS1-aaa-domain-natbras] ip-pool natbras
[~BRAS1-aaa-domain-natbras] user-group natbras bind nat instance nat
[~BRAS1-aaa-domain-natbras] quit
[~BRAS1-aaa] quit
7. Configure a VLAN for the user-side sub-interface GE 0/1/2.10, specify a VT
interface for the sub-interface, and configure the BAS interface.
[~BRAS1] interface GigabitEthernet0/1/2.10
[*BRAS1-GigabitEthernet0/1/2.10] commit
[~BRAS1-GigabitEthernet0/1/2.10] user-vlan 2010
[~BRAS1-GigabitEthernet0/1/2.10-user-vlan-2010] quit
[~BRAS1-GigabitEthernet0/1/2.10] pppoe-server bind virtual-template 1
[*BRAS1-GigabitEthernet0/1/2.10] commit
[~BRAS1-GigabitEthernet0/1/2.10] bas
[~BRAS1-GigabitEthernet0/1/2.10-bas] access-type layer2-subscriber default-domain
authentication natbras
[*BRAS1-GigabitEthernet0/1/2.10-bas] authentication-method ppp
[*BRAS1-GigabitEthernet0/1/2.10-bas] commit
[~BRAS1-GigabitEthernet0/1/2.10-bas] quit
[~BRAS1-GigabitEthernet0/1/2.10] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 11 Configure the route of the NAT address pool as a static blackhole route and
advertise it to a routing protocol. Set the ID of an OSPF process to 1. OSPF is used
as an IGP to advertise routes.
# Configure BRAS1.
[~BRAS1] ip route-static 11.11.11.96 27 null 0
[*BRAS1] commit
[~BRAS1] ospf 1
[*BRAS1-ospf-1] import-route static
[*BRAS1-ospf-1] commit
[~BRAS1-ospf-1] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see BRAS2
configuration file in this section.
Step 12 On each of the master and backup devices, configure a user-side VRRP group
(between BRAS1/BRAS2 and SWITCH) and enable it to track the service-location
group. If the service-location group is not tracked, a CGN board failure cannot
trigger a master/backup BRAS switchover. As a result, new distributed NAT users
cannot go online.
# On BRAS1, access the view of GE 0/1/2.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 192.168.2.200. Configure the VRRP group
as an mVRRP group, set BRAS1's priority in the VRRP group to 150 and the VRRP
preemption delay to 1500s, and associate the VRRP group with service-location
group 1.
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS1-GigabitEthernet0/1/2.2] ip address 192.168.2.10 255.255.255.0
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS1-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 150
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
# On BRAS2, access the view of GE 0/1/2.2, create VRRP group 2, and set the
virtual IP address of the VRRP group to 192.168.2.200. Configure the VRRP group
as an mVRRP group, set BRAS2's priority in the VRRP group to 120, and associate
the VRRP group with service-location group 1.
[~BRAS2] interface GigabitEthernet0/1/2.2
[~BRAS2-GigabitEthernet0/1/2.2] vlan-type dot1q 2002
[*BRAS2-GigabitEthernet0/1/2.2] ip address 192.168.2.100 255.255.255.0
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 virtual-ip 192.168.2.200
[*BRAS2-GigabitEthernet0/1/2.2] admin-vrrp vrid 2
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 priority 120
[*BRAS2-GigabitEthernet0/1/2.2] vrrp vrid 2 track service-location 1 reduced 50
[*BRAS2-GigabitEthernet0/1/2.2] commit
[~BRAS2-GigabitEthernet0/1/2.2] quit
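The failover logic that the track service-location 1 reduced 50 commands above implement can be sketched in ordinary code. The following is an illustrative Python sketch (not Huawei VRP code); the priorities 150/120 and the reduction value 50 are taken from this example, and the helper names are invented for illustration.

```python
# Illustrative sketch (not Huawei VRP code) of VRRP priority tracking:
# the configured priority is reduced while the tracked object (here the
# CGN board's service-location group) is down, which triggers preemption.

def effective_priority(configured: int, tracked_up: bool, reduced: int) -> int:
    """Priority advertised in VRRP packets: lowered by 'reduced' on failure."""
    return configured - (0 if tracked_up else reduced)

def elect_master(priorities: dict) -> str:
    """The router advertising the highest effective priority becomes master."""
    return max(priorities, key=priorities.get)

# Normal operation: BRAS1 (priority 150) is master, BRAS2 (120) is backup.
normal = {"BRAS1": effective_priority(150, True, 50),
          "BRAS2": effective_priority(120, True, 50)}
assert elect_master(normal) == "BRAS1"

# BRAS1's CGN board fails: 150 - 50 = 100 < 120, so BRAS2 takes over and
# new distributed NAT users can still go online.
failed = {"BRAS1": effective_priority(150, False, 50),
          "BRAS2": effective_priority(120, True, 50)}
assert elect_master(failed) == "BRAS2"
```

This also shows why omitting the track command is harmful: without the reduction, BRAS1 would keep priority 150 after a board failure and remain master.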
Step 13 Establish a BFD session named bfd on BRAS1 and BRAS2 to quickly detect
interface or link exceptions and trigger a master/backup VRRP switchover.
# Configure a BFD session on BRAS1 and associate a VRRP group with the BFD
session.
1. Establish a BFD session named bfd at the access side to rapidly detect
interface or link faults. After detecting an interface fault or a link fault, BFD
triggers a master/backup VRRP switchover. The IP address of GE 0/1/2.2 on
BRAS2 is 192.168.2.100.
[~BRAS1] bfd
[*BRAS1-bfd] quit
[*BRAS1] bfd bfd bind peer-ip 192.168.2.100
[*BRAS1-bfd-session-bfd] discriminator local 100
[*BRAS1-bfd-session-bfd] discriminator remote 200
[*BRAS1-bfd-session-bfd] commit
[~BRAS1-bfd-session-bfd] quit
2. On GE 0/1/2.2, associate the VRRP group with the BFD session.
[~BRAS1] interface GigabitEthernet0/1/2.2
[~BRAS1-GigabitEthernet0/1/2.2] vrrp vrid 2 track bfd-session 100 peer
[*BRAS1-GigabitEthernet0/1/2.2] commit
[~BRAS1-GigabitEthernet0/1/2.2] quit
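The discriminator values configured above must be mirrored between the two ends. The following is an illustrative Python sketch (not Huawei VRP code) of that pairing rule; the values 100/200 come from this example.

```python
# Illustrative sketch (not Huawei VRP code): a static BFD session only
# comes up when each side's remote discriminator names the peer's local
# discriminator, i.e. the (local, remote) pairs are mirror images.

def discriminators_match(side_a: tuple, side_b: tuple) -> bool:
    """side_x = (local_discriminator, remote_discriminator)."""
    return side_a[0] == side_b[1] and side_a[1] == side_b[0]

bras1 = (100, 200)   # discriminator local 100 / discriminator remote 200
bras2 = (200, 100)   # the mirrored values expected on BRAS2
assert discriminators_match(bras1, bras2)
assert not discriminators_match(bras1, (100, 200))  # misconfigured peer
```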
# Configure a BFD session on BRAS2 and associate a VRRP group with the BFD
session.
1. Establish a BFD session named bfd at the access side to rapidly detect
interface or link faults. After detecting an interface fault or a link fault, BFD
triggers a master/backup VRRP switchover. The IP address of GE 0/1/2.2 on
BRAS1 is 192.168.2.10.
[~BRAS2] bfd
[*BRAS2-bfd] quit
[*BRAS2] bfd bfd bind peer-ip 192.168.2.10
[*BRAS2-bfd-session-bfd] discriminator local 200
[*BRAS2-bfd-session-bfd] discriminator remote 100
[*BRAS2-bfd-session-bfd] commit
[~BRAS2-bfd-session-bfd] quit
Step 14 Configure RUI backup on BRAS1 and BRAS2 and back up BRAS information in
shared address pool mode.
1. Configure an RBS on BRAS1 and BRAS2.
# Configure an RBS named natbras on BRAS1.
[~BRAS1] remote-backup-service natbras
[*BRAS1-rm-backup-srv-natbras] peer 10.1.1.1 source 10.1.1.2 port 7000
[*BRAS1-rm-backup-srv-natbras] protect redirect ip-nexthop 10.1.1.1 interface GigabitEthernet0/1/1
[*BRAS1-rm-backup-srv-natbras] ip-pool natbras
[*BRAS1-rm-backup-srv-natbras] commit
[~BRAS1-rm-backup-srv-natbras] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see
the BRAS2 configuration file in this section.
3. On BRAS1 and BRAS2, configure NAS parameters and the traffic backup
interval in the RBP view.
# On BRAS1, set the NAS logical IP address to 1.2.3.4, the logical interface
to GE 0/1/2, the logical hostname to huawei, and the user traffic backup
interval to 10 minutes.
[*BRAS1-rm-backup-prf-natbras] nas logic-ip 1.2.3.4
[*BRAS1-rm-backup-prf-natbras] nas logic-port GigabitEthernet 0/1/2
[*BRAS1-rm-backup-prf-natbras] nas logic-sysname huawei
[*BRAS1-rm-backup-prf-natbras] traffic backup interval 10
[*BRAS1-rm-backup-prf-natbras] commit
[~BRAS1-rm-backup-prf-natbras] quit
4. Configure the user-side sub-interface on BRAS1 and BRAS2 and bind the sub-
interface to an RBP.
# On BRAS1, configure GE 0/1/2.10 as the user-side sub-interface and bind
the sub-interface to the RBP named natbras.
[~BRAS1] interface GigabitEthernet0/1/2.10
[*BRAS1-GigabitEthernet0/1/2.10] remote-backup-profile natbras
[*BRAS1-GigabitEthernet0/1/2.10] commit
[~BRAS1-GigabitEthernet0/1/2.10] quit
NOTE
The configuration of BRAS2 is similar to that of BRAS1. For configuration details, see
the BRAS2 configuration file in this section.
Track-interface0 : GigabitEthernet0/1/1
Track-interface1 : --
Last up time : 2016-06-02 16:15:8
Last down time : 2016-06-02 16:3:36
Last down reason : TCP closed for packet error.
--------------------------------------------------------
----End
Configuration Files
● BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
ip route-static 11.11.11.0 27 null 0
#
peer-backup route-cost auto-advertising
#
ospf 1
import-route static
default cost 10 tag 100 type 2
import-route unr
#
bfd
#
bfd bfd bind peer-ip 192.168.2.100
discriminator local 100
discriminator remote 200
#
license
active nat session-table size 6 slot 9
active nat bandwidth-enhance 40 slot 9
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646
#
service-ha hot-backup enable
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1
remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.1
#
service-instance-group group1
service-location 1
#
nat instance nat id 1
service-instance-group group1
nat address-group address-group1 group-id 1 11.11.11.100 11.11.11.105
nat outbound any address-group address-group1
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 150
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
user-group natbras
#
acl number 6001
rule 1 permit ip source user-group natbras
#
traffic classifier c1 operator or
if-match acl 6001 precedence 1
#
traffic behavior b1
nat bind instance nat
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
traffic-policy p1 inbound
#
ip pool natbras bas local
gateway 192.168.0.1 255.255.255.0
section 0 192.168.0.2 192.168.0.254
#
aaa
#
authentication-scheme auth1
authentication-mode radius
#
accounting-scheme acct1
accounting-mode radius
#
domain natbras
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
ip-pool natbras
nas logic-ip 1.2.3.4
nas logic-port GigabitEthernet0/1/2
nas logic-sysname huawei
traffic backup interval 10
user-group natbras bind nat instance nat
#
remote-backup-service natbras
peer 10.1.1.1 source 10.1.1.2 port 7000
protect redirect ip-nexthop 10.1.1.1 interface GigabitEthernet0/1/1
ip-pool natbras
#
remote-backup-profile natbras
service-type bras
backup-id 10 remote-backup-service natbras
peer-backup hot
vrrp-id 2 interface GigabitEthernet0/1/2.2
#
interface GigabitEthernet0/1/2.10
user-vlan 2010
pppoe-server bind virtual-template 1
remote-backup-profile natbras
bas
access-type layer2-subscriber default-domain authentication natbras
authentication-method ppp
#
interface GigabitEthernet0/1/2.2
vlan-type dot1q 2002
ip address 192.168.2.10 255.255.255.0
vrrp vrid 2 virtual-ip 192.168.2.200
admin-vrrp vrid 2
vrrp vrid 2 priority 150
vrrp vrid 2 preempt-mode timer delay 1500
vrrp vrid 2 track service-location 1 reduced 50
vrrp vrid 2 track bfd-session 100 peer
#
interface loopback0
ip address 10.10.10.99 255.255.255.255
#
return
Networking Requirements
In centralized deployment scenarios, CGN1 and CGN2 are deployed as standalone
devices close to two CRs on the MAN core. CGN boards are installed on the
CGN devices. The two CGNs use GE interfaces to establish a VRRP channel. To
implement inter-chassis HA backup for DS-Lite services, configure VRRP to
determine the master/backup CGN status, and associate HA with VRRP. As shown
in Figure 1-127, home users using private IPv4 addresses access an IPv6 MAN
through the CPE that supports dual stack and DS-Lite. A DS-Lite tunnel is
established between the CPE and DS-Lite device. The CPE transmits traffic with the
private IPv4 address along the DS-Lite tunnel to the DS-Lite device. The DS-Lite
device decapsulates traffic, uses NAT to translate the private IPv4 address to a
public IPv4 address, and forwards the traffic to the IPv4 Internet.
Interface1, interface2, and interface3 in this example represent GE0/1/1, GE0/1/2, and
GE0/1/3, respectively.
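The encapsulation described above (a private IPv4 packet carried inside an IPv6 packet over the DS-Lite softwire, per RFC 6333) can be sketched with plain byte manipulation. The following is an illustrative Python sketch, not device code; the packet contents are stand-in example values.

```python
import struct

# Illustrative sketch (not device code): DS-Lite carries the subscriber's
# private IPv4 packet as the payload of an IPv6 packet (4-in-6), using
# IPv6 Next Header = 4 (IPv4). The AFTR strips the IPv6 header before NAT44.

def encapsulate_4in6(ipv4_packet: bytes, src6: bytes, dst6: bytes) -> bytes:
    """Prepend a 40-byte IPv6 header; the IPv4 packet becomes the payload."""
    ver_tc_flow = 6 << 28                       # version 6, zero TC/flow label
    header = struct.pack("!IHBB16s16s",
                         ver_tc_flow,
                         len(ipv4_packet),      # payload length
                         4,                     # next header: IPv4 (4-in-6)
                         64,                    # hop limit
                         src6, dst6)
    return header + ipv4_packet

def decapsulate(packet: bytes) -> bytes:
    """What the DS-Lite device does on receipt: strip the IPv6 header."""
    assert packet[6] == 4, "not a 4-in-6 packet"
    return packet[40:]

inner = b"\x45\x00" + b"\x00" * 18              # stand-in private IPv4 packet
tunneled = encapsulate_4in6(inner, bytes(16), bytes(16))
assert len(tunneled) == 40 + len(inner)
assert decapsulate(tunneled) == inner
```

After decapsulation, the DS-Lite device applies NAT44 to the recovered IPv4 packet and forwards it to the IPv4 Internet, as described above.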
Scenario Requirements
A service board is installed on CGN1 and is located in slot 9. A service board is
installed on CGN2 and is located in slot 10. Configure the two CGNs to implement
HA backup for DS-Lite services between CPU 0 in CGN1's slot 9 and CPU 0 in
CGN2's slot 10.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
No. Data
1 ID of an HA backup group
3 Slot ID and CPU ID of CGN2's standby service board (CPU 0 in slot 10)
Procedure
Step 1 Enable HA hot backup in the system view of master and backup devices.
# Enable HA hot backup on CGN1.
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] service-ha hot-backup enable
[*CGN1] commit
# Enter the GE 0/1/3 interface view on CGN2, create VRRP group 10, and set the
virtual IP address of the VRRP group to 10.0.0.3. Configure the VRRP group as an
mVRRP group and enable the VRRP group to ignore an interface Down event. Set
CGN2's priority in the VRRP group to 120.
[~CGN2] interface GigabitEthernet 0/1/3
[~CGN2-GigabitEthernet0/1/3] ip address 10.0.0.2 255.255.255.0
[*CGN2-GigabitEthernet0/1/3] vrrp vrid 10 virtual-ip 10.0.0.3
[*CGN2-GigabitEthernet0/1/3] admin-vrrp vrid 10 ignore-if-down
[*CGN2-GigabitEthernet0/1/3] vrrp vrid 10 priority 120
[*CGN2-GigabitEthernet0/1/3] quit
[*CGN2] commit
Step 3 Create an HA backup group on CGN1 and CGN2, add members to the group, and
configure a VRRP channel.
NOTE
Service-location IDs and the number of service-location groups configured on the master
and backup devices must be the same. Otherwise, backup may fail, affecting services.
# Create HA backup group 21 on CGN1, add CPU 0 in slot 9 to the group, and set
the local VRRP outbound interface to GE 0/1/3 and the peer IP address to 10.0.0.2.
[~CGN1] service-location 21
[*CGN1-service-location-21] location slot 9
[*CGN1-service-location-21] remote-backup interface GigabitEthernet 0/1/3 peer 10.0.0.2
[*CGN1-service-location-21] quit
[*CGN1] commit
# Create HA backup group 21 on CGN2, add CPU 0 in slot 10 to the group, and
set the local VRRP outbound interface to GE 0/1/3 and the peer IP address to
10.0.0.1.
[~CGN2] service-location 21
[*CGN2-service-location-21] location slot 10
[*CGN2-service-location-21] remote-backup interface GigabitEthernet 0/1/3 peer 10.0.0.1
[*CGN2-service-location-21] quit
[*CGN2] commit
# Enter the GE 0/1/3 interface view on CGN2, and associate HA backup group 21
with VRRP group 10.
[~CGN2] interface GigabitEthernet 0/1/3
[~CGN2-GigabitEthernet0/1/3] vrrp vrid 10 track service-location 21 reduced 50
[*CGN2-GigabitEthernet0/1/3] quit
[*CGN2] commit
NOTE
Run the display vrrp 10 command on CGN1 and CGN2 to view the master/backup VRRP
status, which reflects the master/backup status of the HA backup group. Master in the
command output indicates that CGN1 is the master device.
Step 5 Bind the HA backup group to the VRRP group on CGN1 and CGN2.
# Bind HA backup group 21 to VRRP group 10 on CGN1, and specify the interface
for establishing a VRRP connection with CGN2 as GE 0/1/3.
[~CGN1] service-location 21
[~CGN1-service-location-21] vrrp vrid 10 interface GigabitEthernet 0/1/3
[*CGN1-service-location-21] quit
[*CGN1] commit
# Bind HA backup group 21 to VRRP group 10 on CGN2, and specify the interface
for establishing a VRRP connection with CGN1 as GE 0/1/3.
[~CGN2] service-location 21
[~CGN2-service-location-21] vrrp vrid 10 interface GigabitEthernet 0/1/3
[*CGN2-service-location-21] quit
[*CGN2] commit
Step 6 Create a service-instance group on CGN1 and CGN2 and bind it to the HA backup
group.
# Create service-instance group 21 on CGN1 and bind it to HA backup group 21.
[~CGN1] service-instance-group 21
[*CGN1-service-instance-group-21] service-location 21
[*CGN1-service-instance-group-21] quit
[*CGN1] commit
Step 7 Create a DS-Lite instance on CGN1 and CGN2, bind the DS-Lite instance to the
service-instance group, and configure an address pool and an address translation
policy.
# Create a DS-Lite instance named dslite1 on CGN1, bind the DS-Lite instance to
service-instance group 21, and set the port range to 4096. Configure a DS-Lite
address pool named group1 in dslite1, and configure an address segment for the
DS-Lite address pool. Set the DS-Lite action to any so that traffic does not need to
match any ACL rule and DS-Lite translation is performed for traffic using
addresses from the DS-Lite address pool. Configure local and remote IP addresses
for the DS-Lite tunnel, enable DS-Lite ALG for all protocols, and set the DS-Lite
address translation mode to full-cone.
[~CGN1] ds-lite instance dslite1 id 21
[*CGN1-ds-lite-instance-dslite1] port-range 4096
[*CGN1-ds-lite-instance-dslite1] service-instance-group 21
[*CGN1-ds-lite-instance-dslite1] ds-lite address-group group1 group-id 1
[*CGN1-ds-lite-instance-dslite1-ds-lite-address-group-group1] section 0 1.1.1.1 mask 24
[*CGN1-ds-lite-instance-dslite1-ds-lite-address-group-group1] quit
[*CGN1-ds-lite-instance-dslite1] ds-lite outbound any address-group group1
[*CGN1-ds-lite-instance-dslite1] local-ipv6 2001:db8:2::12 prefix-length 128
[*CGN1-ds-lite-instance-dslite1] remote-ipv6 2001:db8:1:: prefix-length 41
[*CGN1-ds-lite-instance-dslite1] ds-lite alg all
[*CGN1-ds-lite-instance-dslite1] ds-lite filter mode full-cone
[*CGN1-ds-lite-instance-dslite1] quit
[*CGN1] commit
# Create a DS-Lite instance named dslite1 on CGN2, bind the DS-Lite instance to
service-instance group 21, and set the port range to 4096. Configure a DS-Lite
address pool named group1 in dslite1, and configure an address segment for the
DS-Lite address pool. Set the DS-Lite action to any so that traffic does not need to
match any ACL rule and DS-Lite translation is performed for traffic using
addresses from the DS-Lite address pool. Configure local and remote IP addresses
for the DS-Lite tunnel, enable DS-Lite ALG for all protocols, and set the DS-Lite
address translation mode to full-cone.
[~CGN2] ds-lite instance dslite1 id 21
[*CGN2-ds-lite-instance-dslite1] port-range 4096
[*CGN2-ds-lite-instance-dslite1] service-instance-group 21
[*CGN2-ds-lite-instance-dslite1] ds-lite address-group group1 group-id 1
[*CGN2-ds-lite-instance-dslite1-ds-lite-address-group-group1] section 0 1.1.1.1 mask 24
[*CGN2-ds-lite-instance-dslite1-ds-lite-address-group-group1] quit
[*CGN2-ds-lite-instance-dslite1] ds-lite outbound any address-group group1
[*CGN2-ds-lite-instance-dslite1] local-ipv6 2001:db8:2::12 prefix-length 128
[*CGN2-ds-lite-instance-dslite1] remote-ipv6 2001:db8:1:: prefix-length 41
[*CGN2-ds-lite-instance-dslite1] ds-lite alg all
[*CGN2-ds-lite-instance-dslite1] ds-lite filter mode full-cone
[*CGN2-ds-lite-instance-dslite1] quit
[*CGN2] commit
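The ds-lite filter mode full-cone behavior configured above can be sketched in ordinary code. The following is an illustrative Python sketch (not device code): the class name and addresses are invented for illustration, and the port handling is simplified (real CGN devices allocate ports in blocks, here sized per the port-range 4096 setting).

```python
# Illustrative sketch (not device code) of full-cone NAT: one mapping per
# (private IP, private port) is reused for every destination, and once the
# mapping exists, any external host may send traffic back through it.

class FullConeNat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 4096          # simplified stand-in for port-block allocation
        self.mappings = {}             # (priv_ip, priv_port) -> public port

    def outbound(self, priv_ip: str, priv_port: int, dst: tuple) -> tuple:
        """Translate an outbound flow; the mapping ignores the destination."""
        key = (priv_ip, priv_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

    def inbound_allowed(self, pub_port: int, src: tuple) -> bool:
        """Full cone: any external source may use an existing mapping."""
        return pub_port in self.mappings.values()

nat = FullConeNat("1.1.1.1")
pub = nat.outbound("10.0.0.5", 5000, ("8.8.8.8", 53))
assert pub == nat.outbound("10.0.0.5", 5000, ("9.9.9.9", 443))  # mapping reused
assert nat.inbound_allowed(pub[1], ("203.0.113.7", 12345))      # any host allowed
```

A stricter filter mode would additionally check the inbound source against the destinations the subscriber actually contacted; full-cone deliberately skips that check.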
NOTE
For details about how to configure basic DS-Lite services, see DS-Lite Configuration.
----End
Configuration Files
CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
interface GigabitEthernet0/1/3
ip address 10.0.0.1 255.255.255.0
vrrp vrid 10 virtual-ip 10.0.0.3
admin-vrrp vrid 10 ignore-if-down
vrrp vrid 10 priority 150
vrrp vrid 10 preempt-mode timer delay 1500
vrrp recover-delay 20
vrrp vrid 10 track interface GigabitEthernet0/1/2 reduced 50
vrrp vrid 10 track interface GigabitEthernet0/1/1 reduced 50
vrrp vrid 10 track service-location 21 reduced 50
#
service-ha hot-backup enable
#
service-location 21
location slot 9
remote-backup interface GigabitEthernet0/1/3 peer 10.0.0.2
Networking Requirements
In distributed deployment, a CGN board is installed on BRAS1 and BRAS2. The two
BRASs use GE interfaces to establish a VRRP channel. To implement inter-chassis
HA backup for DS-Lite services, configure VRRP to determine the master/backup
CGN status, and associate HA with VRRP. As shown in Figure 1-129, home users
using private IPv4 addresses access an IPv6 MAN through the CPE that supports
dual stack and DS-Lite. A DS-Lite tunnel is established between the CPE and DS-
Lite device. The CPE transmits traffic with the private IPv4 address along the DS-
Lite tunnel to the DS-Lite device. The DS-Lite device decapsulates traffic, uses NAT
to translate the private IPv4 address to a public IPv4 address, and forwards the
traffic to the IPv4 Internet.
Interface1, interface2, and interface3 in this example represent GE0/1/1, GE0/1/2, and
GE0/1/3, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a license.
2. Enable HA inter-chassis hot backup.
3. Create an HA backup group, add members to the group, and configure a
VRRP channel.
4. Create and configure a VRRP group.
5. Associate HA with VRRP.
6. Bind the HA backup group to the VRRP group.
7. Create a service-instance group and bind it to the HA backup group.
8. Create a DS-Lite instance, bind the DS-Lite instance to the service-instance
group, and configure an address pool.
9. Configure DS-Lite user information and RADIUS authentication on the BRAS.
10. Configure a traffic diversion policy for the DS-Lite tunnel.
11. Configure a DS-Lite conversion policy.
12. Configure user-side VRRP.
13. Configure a remote backup service (RBS) and a remote backup profile (RBP),
and bind the service-instance group to the RBS.
Data Preparation
To complete the configuration, you need the following data:
No. Data
4 VRRP channel's interface and its IP address on the master and backup
devices
10 RBS name
11 RBP name
12 User-side interface
Procedure
Step 1 Configure a license.
# Configure BRAS1.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS1
[*HUAWEI] commit
[~BRAS1] vsm on-board-mode disable
[*BRAS1] commit
[~BRAS1] license
[~BRAS1-license] active ds-lite vsuf slot 10
[*BRAS1-license] active nat session-table size 16 slot 10
[*BRAS1-license] active nat bandwidth-enhance 40 slot 10
[*BRAS1-license] commit
[~BRAS1-license] quit
# Configure BRAS2.
<HUAWEI> system-view
[~HUAWEI] sysname BRAS2
[*HUAWEI] commit
[~BRAS2] vsm on-board-mode disable
[*BRAS2] commit
[~BRAS2] license
[~BRAS2-license] active ds-lite vsuf slot 10
[*BRAS2-license] active nat session-table size 16 slot 10
[*BRAS2-license] active nat bandwidth-enhance 40 slot 10
[*BRAS2-license] commit
[~BRAS2-license] quit
# Configure BRAS2.
[~BRAS2] service-ha hot-backup enable
[*BRAS2] commit
Step 3 Create an HA backup group on the two devices, add members to the group, and
configure a VRRP channel.
NOTE
Service-location IDs and the number of service-location groups configured on the master
and backup devices must be the same. Otherwise, backup may fail, affecting services.
# Create HA backup group 21 on BRAS1, add CPU 0 in slot 9 to the group, and set
the local VRRP outbound interface to GE 0/1/3 and the peer IP address to 10.0.0.2.
[~BRAS1] service-location 21
[*BRAS1-service-location-21] location slot 9
[*BRAS1-service-location-21] remote-backup interface GigabitEthernet 0/1/3 peer 10.0.0.2
[*BRAS1-service-location-21] quit
[*BRAS1] commit
# Create HA backup group 21 on BRAS2, add CPU 0 in slot 10 to the group, and
set the local VRRP outbound interface to GE 0/1/3 and the peer IP address to
10.0.0.1.
[~BRAS2] service-location 21
[*BRAS2-service-location-21] location slot 10
[*BRAS2-service-location-21] remote-backup interface GigabitEthernet 0/1/3 peer 10.0.0.1
[*BRAS2-service-location-21] quit
[*BRAS2] commit
# On BRAS2, enter the GE0/1/3 interface view, create VRRP group 10, and set the
virtual IP address of the VRRP group to 10.0.0.3. Configure the VRRP group as an
mVRRP group and enable the VRRP group to ignore an interface Down event. Set
BRAS2's priority in the VRRP group to 120.
[~BRAS2] interface GigabitEthernet 0/1/3
[~BRAS2-GigabitEthernet0/1/3] ip address 10.0.0.2 255.255.255.0
[*BRAS2-GigabitEthernet0/1/3] vrrp vrid 10 virtual-ip 10.0.0.3
[*BRAS2-GigabitEthernet0/1/3] admin-vrrp vrid 10 ignore-if-down
[*BRAS2-GigabitEthernet0/1/3] vrrp vrid 10 priority 120
[*BRAS2-GigabitEthernet0/1/3] quit
[*BRAS2] commit
# On BRAS2, enter the GE0/1/3 interface view and associate HA backup group 21
with VRRP group 10.
[~BRAS2] interface GigabitEthernet 0/1/3
[~BRAS2-GigabitEthernet0/1/3] vrrp vrid 10 track service-location 21 reduced 50
[*BRAS2-GigabitEthernet0/1/3] quit
[*BRAS2] commit
NOTE
Run the display vrrp 10 command on the two BRAS devices to check the master/backup
VRRP status, which reflects the master/backup status of the HA backup group. Master in
the command output indicates that the BRAS is the master device.
Step 6 Bind the HA backup group to the VRRP group on the two devices.
# On BRAS1, bind HA backup group 21 to VRRP group 10 and specify the interface
for establishing a VRRP channel with the peer device as GE0/1/3.
[~BRAS1] service-location 21
[~BRAS1-service-location-21] vrrp vrid 10 interface GigabitEthernet 0/1/3
[*BRAS1-service-location-21] quit
[*BRAS1] commit
# On BRAS2, bind HA backup group 21 to VRRP group 10, and specify the
interface for establishing a VRRP channel with the peer device as GE0/1/3.
[~BRAS2] service-location 21
[~BRAS2-service-location-21] vrrp vrid 10 interface GigabitEthernet 0/1/3
[*BRAS2-service-location-21] quit
[*BRAS2] commit
Step 7 Create a service-instance group on the two devices and bind it to the HA backup
group.
# On BRAS1, create service-instance group 21 and bind it to HA backup group 21.
[~BRAS1] service-instance-group 21
[*BRAS1-service-instance-group-21] service-location 21
[*BRAS1-service-instance-group-21] quit
[*BRAS1] commit
# On BRAS2, create service-instance group 21 and bind it to HA backup group 21.
[~BRAS2] service-instance-group 21
[*BRAS2-service-instance-group-21] service-location 21
[*BRAS2-service-instance-group-21] quit
[*BRAS2] commit
NOTE
Run the display service-location 21 command on the two BRAS devices to check HA
information. Vrrp state in the command output reflects the status of HA backup group 21,
which must be consistent with the VRRP status. Batch-backup state in the command
output indicates whether batch backup has completed.
Step 8 Create a DS-Lite instance on the two devices, bind the instance to the service-
instance group, and configure an address pool.
# Create a DS-Lite instance named dslite1 on BRAS1.
[~BRAS1] ds-lite instance dslite1 id 21
[*BRAS1-ds-lite-instance-dslite1] port-range 4096
[*BRAS1-ds-lite-instance-dslite1] service-instance-group 21
[*BRAS1-ds-lite-instance-dslite1] ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
[*BRAS1-ds-lite-instance-dslite1] commit
[~BRAS1-ds-lite-instance-dslite1] local-ipv6 2001:db8:2::12 prefix-length 128
[*BRAS1-ds-lite-instance-dslite1] remote-ipv6 2001:db8:1:: prefix-length 41
[*BRAS1-ds-lite-instance-dslite1] ds-lite alg all
[*BRAS1-ds-lite-instance-dslite1] ds-lite filter mode full-cone
[*BRAS1-ds-lite-instance-dslite1] quit
[*BRAS1] commit
NOTE
For details about how to configure basic DS-Lite services, see "DS-Lite Configuration".
# Configure BRAS2.
[~BRAS2] user-group group1
[*BRAS2] commit
3. Specify the RADIUS server and the domain to which the user belongs.
# Configure BRAS1.
# Configure BRAS2.
[~BRAS2] radius-server group rd1
[*BRAS2-radius-rd1] radius-server authentication 192.168.7.249 1645 weight 0
[*BRAS2-radius-rd1] radius-server accounting 192.168.7.249 1646 weight 0
[*BRAS2-radius-rd1] radius-server shared-key YsHsjx_202206
[*BRAS2-radius-rd1] commit
[~BRAS2-radius-rd1] radius-server type plus11
[~BRAS2-radius-rd1] radius-server traffic-unit kbyte
[~BRAS2-radius-rd1] quit
[~BRAS2] aaa
[~BRAS2-aaa] authentication-scheme auth1
[*BRAS2-aaa-authen-auth1] authentication-mode radius
[*BRAS2-aaa-authen-auth1] commit
[~BRAS2-aaa-authen-auth1] quit
[~BRAS2-aaa] accounting-scheme acct1
[*BRAS2-aaa-accounting-acct1] accounting-mode radius
[*BRAS2-aaa-accounting-acct1] commit
[~BRAS2-aaa-accounting-acct1] quit
[~BRAS2-aaa] domain dslite1
[*BRAS2-aaa-domain-dslite1] authentication-scheme auth1
[*BRAS2-aaa-domain-dslite1] accounting-scheme acct1
[*BRAS2-aaa-domain-dslite1] radius-server group rd1
[*BRAS2-aaa-domain-dslite1] commit
[~BRAS2-aaa-domain-dslite1] user-group group1 bind ds-lite instance dslite1
[*BRAS2-aaa-domain-dslite1] commit
[~BRAS2-aaa-domain-dslite1] quit
[~BRAS2-aaa] quit
[*BRAS1-behavior-b1] commit
[~BRAS1-behavior-b1] quit
[~BRAS1] traffic policy p1
[*BRAS1-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS1-trafficpolicy-p1] commit
[~BRAS1-trafficpolicy-p1] quit
[~BRAS1] traffic-policy p1 inbound
[*BRAS1] commit
# Configure BRAS2.
[~BRAS2] acl ipv6 6001
[*BRAS2-acl6-ucl-6001] rule 1 permit ipv6 source user-group group1
[*BRAS2-acl6-ucl-6001] commit
[~BRAS2-acl6-ucl-6001] quit
[~BRAS2] traffic classifier c1
[*BRAS2-classifier-c1] if-match ipv6 acl 6001
[*BRAS2-classifier-c1] commit
[~BRAS2-classifier-c1] quit
[~BRAS2] traffic behavior b1
[*BRAS2-behavior-b1] ds-lite bind instance dslite1
[*BRAS2-behavior-b1] commit
[~BRAS2-behavior-b1] quit
[~BRAS2] traffic policy p1
[*BRAS2-trafficpolicy-p1] classifier c1 behavior b1
[*BRAS2-trafficpolicy-p1] commit
[~BRAS2-trafficpolicy-p1] quit
[~BRAS2] traffic-policy p1 inbound
[*BRAS2] commit
# Configure BRAS1.
[~BRAS1] acl ipv6 3000
[*BRAS1-acl6-adv-3000] rule 1 permit ipv6 source 2001:db8:1:: 41
[*BRAS1-acl6-adv-3000] commit
[~BRAS1-acl6-adv-3000] quit
[~BRAS1] ds-lite instance dslite1
[*BRAS1-ds-lite-instance-dslite1] ds-lite outbound 3000 address-group group1
[*BRAS1-ds-lite-instance-dslite1] commit
[~BRAS1-ds-lite-instance-dslite1] quit
# Configure BRAS2.
[~BRAS2] acl ipv6 3000
[*BRAS2-acl6-adv-3000] rule 1 permit ipv6 source 2001:db8:1:: 41
[*BRAS2-acl6-adv-3000] commit
[~BRAS2-acl6-adv-3000] quit
[~BRAS2] ds-lite instance dslite1
[*BRAS2-ds-lite-instance-dslite1] ds-lite outbound 3000 address-group group1
[*BRAS2-ds-lite-instance-dslite1] commit
[~BRAS2-ds-lite-instance-dslite1] quit
Step 12 Configure a user-side VRRP group on the master and backup devices and
configure the VRRP group to track the service-location group.
# Configure BRAS1.
[~BRAS1] bfd
[*BRAS1-bfd] quit
[*BRAS1] bfd ma bind peer-ip 192.168.1.1
[*BRAS1-bfd-session-ma] commit
[~BRAS1-bfd-session-ma] quit
[~BRAS1] interface GigabitEthernet0/1/1.2
[~BRAS1-GigabitEthernet0/1/1.2] vlan-type dot1q 101
[*BRAS1-GigabitEthernet0/1/1.2] ip address 192.168.1.2 255.255.0.0
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 virtual-ip 192.168.1.10
[*BRAS1-GigabitEthernet0/1/1.2] admin-vrrp vrid 2
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 priority 150
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 track interface GigabitEthernet 0/1/2 reduced 50
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 track service-location 21 reduced 50
[*BRAS1-GigabitEthernet0/1/1.2] vrrp vrid 2 track bfd-session session-name ma
[*BRAS1-GigabitEthernet0/1/1.2] commit
[~BRAS1-GigabitEthernet0/1/1.2] quit
# Configure BRAS2.
[~BRAS2] bfd
[*BRAS2-bfd] quit
[*BRAS2] bfd ma bind peer-ip 192.168.1.2
[*BRAS2-bfd-session-ma] commit
[~BRAS2-bfd-session-ma] quit
[~BRAS2] interface GigabitEthernet0/1/1.2
[~BRAS2-GigabitEthernet0/1/1.2] vlan-type dot1q 101
[*BRAS2-GigabitEthernet0/1/1.2] ip address 192.168.1.1 255.255.0.0
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 virtual-ip 192.168.1.10
[*BRAS2-GigabitEthernet0/1/1.2] admin-vrrp vrid 2
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 priority 150
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 preempt-mode timer delay 1500
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 track interface GigabitEthernet 0/1/2 reduced 50
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 track service-location 21 reduced 50
[*BRAS2-GigabitEthernet0/1/1.2] vrrp vrid 2 track bfd-session session-name ma
[*BRAS2-GigabitEthernet0/1/1.2] commit
[~BRAS2-GigabitEthernet0/1/1.2] quit
Step 13 Configure RUI, including an RBS and an RBP, on the master and backup devices.
# Configure BRAS1.
[~BRAS1] remote-backup-service cgn
[*BRAS1-rm-backup-srv-cgn] peer 10.0.0.2 source 10.0.0.1 port 7000
[*BRAS1-rm-backup-srv-cgn] protect redirect ip-nexthop 10.0.0.2 interface GigabitEthernet0/1/3
[*BRAS1-rm-backup-srv-cgn] ipv6-pool group1
[*BRAS1-rm-backup-srv-cgn] commit
[~BRAS1-rm-backup-srv-cgn] quit
[~BRAS1] remote-backup-profile cgn
[*BRAS1-rm-backup-prf-cgn] service-type bras
[*BRAS1-rm-backup-prf-cgn] backup-id 10 remote-backup-service cgn
[*BRAS1-rm-backup-prf-cgn] peer-backup hot
[*BRAS1-rm-backup-prf-cgn] vrrp-id 2 interface GigabitEthernet0/1/1.2
[*BRAS1-rm-backup-prf-cgn] commit
[~BRAS1-rm-backup-prf-cgn] quit
# Configure BRAS2.
[~BRAS2] remote-backup-service cgn
[*BRAS2-rm-backup-srv-cgn] peer 10.0.0.1 source 10.0.0.2 port 7000
[*BRAS2-rm-backup-srv-cgn] protect redirect ip-nexthop 10.0.0.1 interface GigabitEthernet0/1/3
[*BRAS2-rm-backup-srv-cgn] ipv6-pool group1
[*BRAS2-rm-backup-srv-cgn] commit
[~BRAS2-rm-backup-srv-cgn] quit
[~BRAS2] remote-backup-profile cgn
[*BRAS2-rm-backup-prf-cgn] service-type bras
[*BRAS2-rm-backup-prf-cgn] backup-id 10 remote-backup-service cgn
[*BRAS2-rm-backup-prf-cgn] peer-backup hot
[*BRAS2-rm-backup-prf-cgn] vrrp-id 2 interface GigabitEthernet0/1/1.2
[*BRAS2-rm-backup-prf-cgn] commit
[~BRAS2-rm-backup-prf-cgn] quit
# Configure BRAS2.
[~BRAS2] service-instance-group 21
[*BRAS2-service-instance-group-21] remote-backup-service cgn
[*BRAS2-service-instance-group-21] commit
[~BRAS2-service-instance-group-21] quit
# Configure BRAS2.
[~BRAS2] interface GigabitEthernet0/1/1
[~BRAS2-GigabitEthernet0/1/1] ipv6 enable
[*BRAS2-GigabitEthernet0/1/1] ipv6 address auto link-local
[*BRAS2-GigabitEthernet0/1/1] remote-backup-profile cgn
[*BRAS2-GigabitEthernet0/1/1] commit
[~BRAS2-GigabitEthernet0/1/1] bas
[~BRAS2-GigabitEthernet0/1/1-bas] access-type layer2-subscriber default-domain authentication dslite1
[*BRAS2-GigabitEthernet0/1/1-bas] authentication-method-ipv6 bind
[*BRAS2-GigabitEthernet0/1/1-bas] commit
[~BRAS2-GigabitEthernet0/1/1-bas] quit
[~BRAS2-GigabitEthernet0/1/1] quit
----End
Configuration Files
BRAS1 configuration file
#
sysname BRAS1
#
vsm on-board-mode disable
#
license
active nat session-table size 16 slot 10
active ds-lite vsuf slot 10
active nat bandwidth-enhance 40 slot 10
active bas slot 10
#
radius-server group rd1
radius-server authentication 192.168.7.249 1645 weight 0
radius-server accounting 192.168.7.249 1646 weight 0
radius-server shared-key %^%#x*CgITP4C~;q,*+DEW'JBWe#)"Q&|7bX]b:Y<{w'%^%#
radius-server type plus11
radius-server traffic-unit kbyte
#
service-ha hot-backup enable
#
service-location 21
location slot 9
remote-backup interface GigabitEthernet0/1/3 peer 10.0.0.2
vrrp vrid 10 interface GigabitEthernet0/1/3
#
service-instance-group 21
service-location 21
remote-backup-service cgn
#
ds-lite instance dslite1 id 21
port-range 4096
service-instance-group 21
ds-lite address-group group1 group-id 1 11.11.11.100 11.11.11.110
ds-lite outbound 3000 address-group group1
local-ipv6 2001:db8:2::12 prefix-length 128
remote-ipv6 2001:db8:1:: prefix-length 41
ds-lite alg all
ds-lite filter mode full-cone
#
bfd
#
ipv6 prefix group1 delegation
prefix 2001:db8:1::/32
#
ipv6 pool group1 bas delegation
prefix group1
#
user-group group1
#
remote-backup-service cgn
peer 10.0.0.2 source 10.0.0.1 port 7000
protect redirect ip-nexthop 10.0.0.2 interface GigabitEthernet0/1/3
ipv6-pool group1
#
remote-backup-profile cgn
service-type bras
backup-id 10 remote-backup-service cgn
peer-backup hot
vrrp-id 2 interface GigabitEthernet0/1/1.2
#
acl ipv6 number 3000
rule 1 permit ipv6 source 2001:db8:1:: 41
#
acl ipv6 number 6001
rule 1 permit ipv6 source user-group group1
#
dhcpv6 duid llt
#
traffic classifier c1 operator or
if-match ipv6 acl 6001 precedence 1
#
traffic behavior b1
ds-lite bind instance dslite1
#
traffic policy p1
share-mode
classifier c1 behavior b1 precedence 1
#
aaa
authentication-scheme auth1
authentication-mode radius
accounting-scheme acct1
accounting-mode radius
domain dslite1
authentication-scheme auth1
accounting-scheme acct1
radius-server group rd1
prefix-assign-mode unshared
ipv6-pool group1
ppp address-release separate
user-group group1 bind ds-lite instance dslite1
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address auto link-local
undo dcn
ipv6 nd autoconfig managed-address-flag
remote-backup-profile cgn
bas
#
access-type layer2-subscriber default-domain authentication dslite1
authentication-method-ipv6 bind
#
#
interface GigabitEthernet0/1/2
undo shutdown
ip address 10.4.1.1 255.255.0.0
#
interface GigabitEthernet0/1/3
undo shutdown
ip address 10.0.0.1 255.255.255.0
vrrp vrid 10 virtual-ip 10.0.0.3
admin-vrrp vrid 10 ignore-if-down
vrrp vrid 10 priority 150
vrrp vrid 10 preempt-mode timer delay 1500
vrrp recover-delay 20
vrrp vrid 10 track interface GigabitEthernet0/1/2 reduced 50
vrrp vrid 10 track service-location 21 reduced 50
#
interface GigabitEthernet0/1/1.2
vlan-type dot1q 101
ip address 192.168.1.2 255.255.0.0
vrrp vrid 2 virtual-ip 192.168.1.10
admin-vrrp vrid 2
vrrp vrid 2 priority 150
vrrp vrid 2 preempt-mode timer delay 1500
vrrp vrid 2 track interface GigabitEthernet0/1/2 reduced 50
vrrp vrid 2 track bfd-session session-name ma
vrrp vrid 2 track service-location 21 reduced 50
#
bfd ma bind peer-ip 192.168.1.1
#
ospf 10
import-route unr
opaque-capability enable
area 0.0.0.0
network 10.0.0.0 0.0.255.255
network 10.4.1.0 0.0.255.255
#
traffic-policy p1 inbound
#
return
prefix-assign-mode unshared
ipv6-pool group1
ppp address-release separate
user-group group1 bind ds-lite instance dslite1
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address auto link-local
undo dcn
ipv6 nd autoconfig managed-address-flag
remote-backup-profile cgn
bas
#
access-type layer2-subscriber default-domain authentication dslite1
authentication-method-ipv6 bind
#
#
interface GigabitEthernet0/1/2
undo shutdown
ip address 10.5.1.1 255.255.0.0
#
interface GigabitEthernet0/1/3
undo shutdown
ip address 10.0.0.2 255.255.0.0
vrrp vrid 10 virtual-ip 10.0.0.3
admin-vrrp vrid 10
vrrp vrid 10 priority 150
vrrp vrid 10 track interface GigabitEthernet0/1/2 reduced 50
vrrp vrid 10 track service-location 21 reduced 50
#
interface GigabitEthernet0/1/1.2
vlan-type dot1q 101
ip address 192.168.1.1 255.255.0.0
vrrp vrid 2 virtual-ip 192.168.1.10
admin-vrrp vrid 2
vrrp vrid 2 priority 150
vrrp vrid 2 track interface GigabitEthernet0/1/2 reduced 50
vrrp vrid 2 track bfd-session session-name ma
vrrp vrid 2 track service-location 21 reduced 50
#
bfd ma bind peer-ip 192.168.1.2
#
ospf 10
import-route unr
opaque-capability enable
area 0.0.0.0
network 10.0.0.0 0.0.255.255
network 10.5.1.0 0.0.255.255
#
traffic-policy p1 inbound
#
return
Networking Requirements
In the centralized networking scenario shown in Figure 1-130, a NAT service
board is deployed in slot 9 on CGN1 and another NAT service board is deployed in
slot 9 on CGN2. CGN1 and CGN2, between which a VRRP channel is established
over GE interfaces, are deployed close to the CR on the MAN core as standalone
devices. CPU0 of the NAT service board in slot 9 on CGN1 and CPU0 of the NAT
service board in slot 9 on CGN2 implement NAT64 inter-chassis hot backup. VRRP
enabled for the channel determines the master/backup status of the CGN devices,
and the service board status is associated with VRRP.
Interface addresses in Figure 1-130: GE0/1/2 2001:db8::1:110e/126 on CGN1 and GE0/1/2 2001:db8::1:110f/126 on CGN2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable NAT64 and configure NAT64 session resources.
2. Enable HA hot backup.
3. Create a service-location group, configure members for HA dual-device inter-
chassis backup, and configure a VRRP channel.
4. Create and configure a VRRP group.
5. Associate HA with VRRP.
6. Bind the service-location group to the VRRP group.
7. Create a service-instance group and bind it to the service-location group.
8. Create a NAT64 instance and bind it to the service-instance group.
9. Configure a NAT64 traffic diversion policy and a NAT64 conversion policy.
Data Preparation
To complete the configuration, you need the following data:
● Index of the service-location group
● Slot ID and CPU ID (9 and 0, respectively) of the service board on each
device
● Interfaces of the inter-chassis backup channel and IP addresses of the peer
devices
● Index, virtual IP address, member priorities, and preemption delay of a VRRP
group
● Name of a service-instance group
● Name and index of a NAT64 instance
● Name, index, and IP address range of a NAT64 address pool
● NAT64 IPv6 prefix of 64:FF9B::/96
● ACL number and ACL rule
● Traffic classifier name, traffic behavior name, and traffic diversion policy
name
Procedure
Step 1 Enable NAT64 and configure NAT64 session resources.
# Configure the master device CGN1.
<HUAWEI> system-view
[~HUAWEI] sysname CGN1
[*HUAWEI] commit
[~CGN1] vsm on-board-mode disable
[*CGN1] commit
[~CGN1] license
[*CGN1-license] active nat64 vsuf slot 9
[*CGN1-license] active nat session-table size 16 slot 9
[*CGN1-license] commit
[~CGN1-license] quit
NOTE
The configuration of the NAT64 GTL license on CGN2 is the same as that on CGN1. For
configuration details, see CGN2 configuration file.
Step 2 Enable HA hot backup in the system view of master and backup devices.
# Enable HA hot backup on CGN1.
[~CGN1] service-ha hot-backup enable
[*CGN1] commit
NOTE
The configuration of CGN2 is similar to that of CGN1. For configuration details, see CGN2
configuration file in this section.
# On CGN2, enter the view of GE 0/1/1, create VRRP group 1, and set the VRRP
virtual IP address to 10.1.1.3. Configure the VRRP group as an mVRRP group, set
CGN2's priority in the VRRP group to 150, and set the VRRP recovery delay to 15s.
[~CGN2] interface GigabitEthernet0/1/1
[~CGN2-GigabitEthernet0/1/1] vrrp vrid 1 virtual-ip 10.1.1.3
[*CGN2-GigabitEthernet0/1/1] admin-vrrp vrid 1 ignore-if-down
NOTE
The configuration of CGN2 is the same as that of CGN1. For configuration details, see
CGN2 configuration file in this section.
Step 6 Bind the service-location group to the VRRP group on each CGN device.
NOTE
The configuration of CGN2 is the same as that of CGN1. For configuration details, see
CGN2 configuration file in this section.
Step 7 Create a service-instance group and bind it to the service-location group on each
CGN device.
NOTE
The configuration of CGN2 is the same as that of CGN1. For configuration details, see
CGN2 configuration file in this section.
Step 8 Create a NAT64 instance on each CGN device and bind it to the service-instance
group.
# On CGN1, create a NAT64 instance named nat64 and bind it to the service-
instance group named group1. Configure a NAT64 IPv6 prefix of 64:FF9B::/96. This
prefix must be the same as the prefix allocated by the DNS64 server.
[~CGN1] nat64 instance nat64 id 1
[*CGN1-nat64-instance-nat64] service-instance-group group1
[*CGN1-nat64-instance-nat64] nat64 address-group nat64-group1 group-id 1
[*CGN1-nat64-instance-nat64-nat64-address-group-nat64-group1] section 0 11.1.1.1 mask 24
[*CGN1-nat64-instance-nat64-nat64-address-group-nat64-group1] quit
[*CGN1-nat64-instance-nat64] nat64 prefix 64:FF9B:: prefix-length 96 1
[*CGN1-nat64-instance-nat64] commit
[~CGN1-nat64-instance-nat64] quit
NOTE
The configuration of CGN2 is the same as that of CGN1. For configuration details, see
CGN2 configuration file in this section.
Step 9 Configure a NAT64 traffic diversion policy and a NAT64 conversion policy. For
configuration details, see "Example for Configuring Centralized NAT64" in NAT
and IPv6 Transition > NAT64 Configuration.
# Configure a NAT64 traffic diversion policy and a NAT64 conversion policy on
CGN1.
1. Configure a NAT64 traffic diversion policy.
a. Configure an ACL-based traffic classification rule so that only hosts on
internal network segment 2001:db8::1:1112/126 can access the IPv4
Internet.
[~CGN1] acl ipv6 number 3001
[*CGN1-acl6-adv-3001] rule 5 permit ipv6 source 2001:db8::1:1112/126 destination
64:FF9B::/96
[*CGN1-acl6-adv-3001] commit
[~CGN1-acl6-adv-3001] quit
f. Apply the NAT64 traffic diversion policy in the user-side interface view.
[~CGN1] interface GigabitEthernet0/1/2
[*CGN1-GigabitEthernet0/1/2] traffic-policy p1 inbound
[*CGN1-GigabitEthernet0/1/2] commit
[~CGN1-GigabitEthernet0/1/2] quit
NOTE
The configuration of CGN2 is similar to that of CGN1. For configuration details, see CGN2
configuration file in this section.
----End
Configuration Files
● CGN1 configuration file
#
sysname CGN1
#
vsm on-board-mode disable
#
license
active nat64 vsuf slot 9
active nat session-table size 16 slot 9
#
acl ipv6 number 3001
rule 5 permit ipv6 source 2001:db8::1:1112/126 destination 64:FF9B::/96
#
traffic classifier c1 operator or
if-match ipv6 acl 3001 precedence 1
#
traffic behavior b1
nat64 bind instance nat64
#
traffic policy p1
classifier c1 behavior b1 precedence 1
#
service-ha hot-backup enable
#
service-location 1
location slot 9
vrrp vrid 1 interface GigabitEthernet 0/1/1
remote-backup interface GigabitEthernet 0/1/1 peer 10.1.1.2
#
service-instance-group group1
service-location 1
#
nat64 instance nat64 id 1
service-instance-group group1
nat64 address-group nat64-group1 group-id 1
section 0 11.1.1.1 mask 24
#
nat64 outbound any address-group nat64-group1
nat64 prefix 64:FF9B:: prefix-length 96 1
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
vrrp vrid 1 virtual-ip 10.1.1.3
admin-vrrp vrid 1 ignore-if-down
vrrp vrid 1 priority 200
vrrp vrid 1 preempt-mode timer delay 1500
vrrp vrid 1 track service-location 1 reduced 60
vrrp recover-delay 15
#
interface GigabitEthernet 0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1:110e 126
traffic-policy p1 inbound
#
return
Definition
● Mapping of Address and Port using Translation (MAP-T) is a 4over6 IPv6
transition technique that implements stateless mapping and dual translation.
● Mapping of Address and Port with Encapsulation (MAP-E) is also a 4over6
IPv6 transition technique that implements stateless mapping and dual
encapsulation.
Purpose
Two major contradictions exist during the evolution from IPv4 to IPv6:
● IPv4 address shortage holds back the rapid IPv4 service development.
● Only a few IPv6 applications are implemented despite the vast address space
IPv6 offers.
IPv4 address reuse (A+P) overcomes the limited IPv4 address space to some
extent, but the vast number of deployed devices affects various services and
applications. During the transition to IPv6, IPv4 address exhaustion affects
users, ICPs, ISPs, and carriers to different degrees because their concerns
differ, creating an imbalance in the development of the IPv6 industry chain. In
addition, the IPv4 address sharing mechanism slows the development of the IPv6
industry chain, while the ongoing development of that chain in turn limits the
deployment scale of the sharing mechanism.
To ensure that existing IPv4 services can continue to run in addition to facilitating
the transition toward IPv6 services, the long-term evolution solution focuses on a
4over6 scenario that accommodates both IPv4 and IPv6 service characteristics.
There are various types of transition techniques in the 4over6 scenario. The
stateless and dual translation/encapsulation techniques used in the MAP
technology make this the IETF's preferred solution.
Benefits
Benefits to carriers:
The Mapping of Address and Port (MAP) technology combines stateless mapping
with dual translation and encapsulation techniques, allowing stateless reuse of
addresses and ports. It consists of the MAP-E and MAP-T techniques.
MAP defines stateless address encapsulation and translation mechanisms when
IPv4 and IPv6 services are carried over IPv6-only networks. MAP-CEs and a MAP-
BR function as edge devices to define a MAP domain. IPv4 service flows exist
outside the MAP domain.
Port Mapping
Concept of A+P
From the perspective of IPv4 addresses and transport-layer ports, the number of
32-bit IPv4 addresses is limited, while only a small fraction of the 65536
values of a 16-bit transport-layer port number are actually used. Therefore, the
IPv4 address space can be extended by also using the transport-layer ports.
MAP mapping also uses the A+P concept, which maps a pair of a public IPv4
address and a port number to a private IPv4 address. In MAP, the 16 bits of a
transport-layer port number are divided into three fields: A, port-set ID, and M.
About A and a
Ports 0 through 1023 are well-known port numbers. In MAP, it is recommended
that this excluded range be extended to ports 0 through 4095 (2^12 ports in
total); that is, the recommended default value of a is 4 (16 bits - 12 bits). a
is generally set to a non-zero value. If a is set to 0, ports in all port ranges
can be allocated.
About port-set ID and k
Size k of a port-set ID (PSID) determines a sharing ratio R of 2^k. That is, the
transport layer ports can be divided into 2^k sets, where each set is used by a CPE.
Each of the CPEs that share an IPv4 address obtains a unique PSID that identifies
a unique port set.
About m
Size m of the M field determines the contiguous blocks of ports in a port set.
Each contiguous block contains 2^m ports.
Consequently, a total of 2^k port sets are obtained. Each port set is identified by a
unique PSID. A port set identified by each PSID contains a number of [(2^a) - 1] x
(2^m) ports.
The following is an example of a MAP mapping rule.
Given a sharing ratio of R = 1024 (so k = 10) and a = 4, m = 16 - 4 - 10 = 2.
Table 1 shows mappings between PSIDs and port sets.
Ports in a port set are not all continuous, so the table displays them as segments
of continuous ports. If the PSID is 0, when A is 0001 (that is, port-set-1 in the
table, also the first segment of continuous ports), the corresponding ports are
0001000000000000 (4096), 0001000000000001 (4097), 0001000000000010
(4098) and 0001000000000011 (4099); when A is 0010 (that is, port-set-2 in the
table, also the second segment of continuous ports), the corresponding ports are
0010000000000000 (8192), 0010000000000001 (8193), 0010000000000010
(8194) and 0010000000000011 (8195). The same rule is applied to obtain the
remaining mappings between PSIDs and port-sets.
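The port-set arithmetic above can be sketched in Python (a minimal illustration, not device code; the function name is hypothetical, while `a`, `k`, and the derived `m` follow the definitions in this section):

```python
# Minimal sketch of MAP port-set derivation for a = 4 and sharing ratio
# R = 2^k = 1024 (so k = 10 and m = 16 - a - k = 2).
# Port layout: | A (a bits) | PSID (k bits) | M (m bits) |

def port_ranges(psid, a=4, k=10):
    """Return the contiguous port ranges that belong to one PSID."""
    m = 16 - a - k                       # each contiguous block holds 2^m ports
    ranges = []
    for A in range(1, 2 ** a):           # A = 0 is excluded: ports 0..4095 stay reserved
        base = (A << (k + m)) | (psid << m)
        ranges.append(range(base, base + 2 ** m))
    return ranges

# PSID 0 owns 4096-4099, 8192-8195, ...: (2^a - 1) * 2^m = 15 * 4 = 60 ports in total.
```

With psid set to 0, this reproduces the port values worked out above: 4096 through 4099 for A = 0001 and 8192 through 8195 for A = 0010.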
Address Mapping
The mapping between IPv4 and IPv6 addresses is implemented by embedding a
part of IPv4 addresses and a part of ports into IPv6 addresses. The part of IPv4
addresses is IPv4-Addr-suffix, and that of a port set is a PSID. The IPv4 address
and port information is closely associated with IPv6 addresses to create stateless
mapping.
The mapping between IPv4+port and IPv6 addresses shows that the public IPv4
address and port sequence can be derived based on the attributes, including the
End-user IPv6-prefix, Rule-IPv6-prefix, EA-bits, Rule-IPv4, and PSID offset,
regardless of the MAP-CE or MAP-BR.
In Figure 3, the MAP technology combines an IPv4 address and a PSID to create an
interface ID for identifying a MAP-CE. The interface ID is combined with the End-
user IPv6-prefix to form an IPv6 address, which uniquely identifies the MAP-CE in
a MAP domain.
An interface ID can be formed in two ways: One way uses the mapping rule
defined in an RFC. In this rule, the most significant 16 bits are 0, and are
combined with the IPv4 address and PSID fields to form an interface ID. The other
way uses the mapping rule defined by an IETF draft. In this rule, the most
significant 8 bits and the least significant 8 bits are all 0s, and are combined with
the IPv4 address and PSID fields to form an interface ID. Huawei devices use the
former rule by default. However, if an interworking device uses the mapping rule
defined by the IETF draft, you can configure Huawei devices to use the latter
rule. This applies only to centralized MAP-E scenarios.
IPv4 address field: If a public IPv4 address is allocated, the IPv4 address field is
set to the allocated IPv4 address, with a length of 32 bits. If an IP prefix is
assigned, that is, an IP address segment is allocated to an IPv4 user (possibly an
enterprise user), the IPv4 address field needs to be filled with 0 on the right. For
example, if an IPv4 prefix of 1.1.1.0/29 is allocated to a user, the IPv4 address
field must be set to 0x01010100 (hexadecimal).
PSID field: If the Port-Set ID extracted from the EA-bits field is less than 16 bits,
pad 0 on the right. For example, if the PSID field value is 0xAC, 0s are padded to
generate value 0xAC00. If an IPv4 prefix or an exclusive IPv4 address is assigned,
no port set ID is extracted, and the PSID field value is 0x0000.
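As a cross-check of the field layout just described, the default-rule interface ID can be sketched as follows (an illustrative sketch only; the function name and inputs are hypothetical):

```python
# Sketch of the default mapping rule for the 64-bit interface ID:
# 16 zero bits | 32-bit IPv4 address field | 16-bit PSID field.

def interface_id(ipv4, psid, psid_len):
    """Build the interface ID as an integer (top 16 bits are implicitly 0)."""
    a, b, c, d = (int(x) for x in ipv4.split("."))
    v4_field = (a << 24) | (b << 16) | (c << 8) | d    # e.g. 1.1.1.0 -> 0x01010100
    # A PSID shorter than 16 bits is padded with 0s on the right (0xAC -> 0xAC00);
    # with no PSID (prefix or exclusive address) the field is 0x0000.
    psid_field = psid << (16 - psid_len) if psid_len else 0
    return (v4_field << 16) | psid_field

# interface_id("1.1.1.0", 0xAC, 8) yields 0x01010100AC00, matching the two
# padding examples given above.
```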
BMR (mandatory)
A BMR is used to compute an IPv4 address, a port set, and an IPv6 address for a
MAP-CE. The BMR must be configured on MAP-CEs and the MAP-BR. MAP-CEs
use this rule to perform NAT44 translation, and IPv6 translation and encapsulation
for IPv4 user packets. The MAP-BR uses this rule to perform the following
operations:
● Decapsulates IPv6 packets or removes their tunnel information to restore
the original IPv4 packets.
● Performs IPv6 translation and encapsulation based on IPv4 addresses and
ports for return traffic, and forwards the traffic to MAP-CEs over IPv6 routes
in the MAP domain.
The basic BMR parameters include the Rule-IPv6-prefix, Rule-IPv4-prefix, EA-bits-
length, and Port-Set ID-offset. These parameters can be used on a MAP-CE to
calculate the shared IPv4 address and port sequence and the IPv6 address of the
MAP-CE.
In the MAP domain, multiple sub-domains can be divided based on the IPv4
subnet logic. Each IPv4 subnet serves as a sub-domain. Mapping rules on all MAP-
CEs in each sub-domain can be summarized to a single rule. Each MAP-CE is
assigned a particular End-user IPv6-prefix and the same BMR.
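As an illustration of how the BMR parameters determine the shared IPv4 address and PSID, the following sketch reproduces the sample mapping from RFC 7597 (the helper name `derive_map_params` is hypothetical; this is not the device's implementation):

```python
# Sketch: derive a MAP-CE's shared IPv4 address and PSID from BMR parameters.
# EA bits are the bits of the End-user IPv6 prefix that follow the Rule-IPv6-prefix.
import ipaddress

def derive_map_params(end_user_prefix, rule_v6_len, rule_v4, prefix4_len, ea_len):
    net = ipaddress.IPv6Network(end_user_prefix)
    v6 = int(net.network_address)
    # extract ea_len bits starting right after the Rule-IPv6-prefix
    ea = (v6 >> (128 - rule_v6_len - ea_len)) & ((1 << ea_len) - 1)
    v4_suffix_len = 32 - prefix4_len               # leading EA bits extend the IPv4 prefix
    psid_len = ea_len - v4_suffix_len              # remaining EA bits form the PSID
    v4_suffix = ea >> psid_len
    psid = ea & ((1 << psid_len) - 1)
    v4 = int(ipaddress.IPv4Address(rule_v4)) | v4_suffix
    return str(ipaddress.IPv4Address(v4)), psid, psid_len

# Rule-IPv6-prefix 2001:db8::/40, Rule-IPv4-prefix 192.0.2.0/24, ea-len 16,
# End-user prefix 2001:db8:12:3400::/56 -> IPv4 192.0.2.18, PSID 0x34 (8 bits)
```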
FMR (optional)
An FMR is used to allow MAP-CEs to communicate with one another on a mesh
network, without passing through the MAP-BR. When a local MAP-CE accesses
another MAP-CE, the destination IPv6 address must be set to the address of the
peer MAP-CE. The source IPv6 address in packets is generated based on the BMR,
and the IPv6 address of the destination MAP-CE is to be converted based on the
FMR. The MAP-CEs within a given range use the same BMR. Therefore, the FMR
can be the same as the BMR, and a BMR can be configured as an FMR.
For example, assume a BMR is the same as an FMR in a MAP domain. When the IPv4 service
of a MAP-CE2 user accesses the IPv4 service of a MAP-CE1 user, MAP-CE2 uses
NAT44 and the BMR to generate a source IPv6 address and uses the FMR to
generate the IPv6 address of the destination MAP-CE1.
DMR (optional)
Different from the BMR and FMR that apply to both MAP-E and MAP-T scenarios,
DMRs differ between MAP-E and MAP-T scenarios:
● MAP-T: Based on the DMR, the destination IPv4 address in packets is an
address outside a MAP domain, and packets carrying such a destination IP
address are forwarded outside the MAP domain through the MAP-BR. The
DMR defines the Rule-IPv6-prefix and Rule-IPv4-prefix. The Rule-IPv6-prefix
value is set to the IPv6 prefix of the MAP-BR. The Rule-IPv4-prefix value is set
to 0.0.0.0/0. A MAP-CE uses the DMR as a default mapping rule for matching
IPv4 routes. In the following figure, after the DMR is used, the destination
IPv6 address is generated using the Rule-IPv6-prefix configured based on the
DMR and destination IPv4 address.
● MAP-E: The DMR refers to destinations outside the MAP domain. In MAP-E,
an IPv6 header has an IPv4 address embedded. Therefore, if the destination
IPv4 address in packets is an address outside a MAP domain, the MAP-BR's
IPv6 address merely needs to be encapsulated. When the packet reaches the
MAP-BR, the MAP-BR removes the IPv6 header to restore the destination IPv4
address. In MAP-E, applying the DMR therefore amounts to configuring the MAP-BR's IPv6 address.
Option 94 (OPTION_S46_CONT_MAPE)
Option 94 needs to be encapsulated in packets for only MAP-E users. Its format is
as follows:
● The option code is 94, and the option length is variable. Packets are
encapsulated based on the actual packet length.
● encapsulated-options: other options of MAP-E users. Such options are sub-
options of Option 94, including Option 89, Option 93, and Option 90.
Option 95 (OPTION_S46_CONT_MAPT)
Option 95 needs to be encapsulated in packets for only MAP-T users. Its format is
as follows:
● The option code is 95, and the option length is variable. Packets are
encapsulated based on the actual packet length.
● encapsulated-options: other options of MAP-T users. Such options are sub-
options of Option 95, including Option 89, Option 93, and Option 91.
Option 89 (OPTION_S46_RULE)
Option 89 needs to be encapsulated in packets for both MAP-E and MAP-T users.
It is a sub-option of Options 94 and 95. Its format is as follows:
● The option code is 89, and the option length is variable. Packets are
encapsulated based on the actual packet length.
● flags: The format is as follows:
The reserved bits are all 0s and the default F bit is 1. The F bit can be 0 if the
option-s46 fmr-flag disable command is run in the address pool.
● ea-len: It is obtained from map-rule referenced by the prefix pool and is
configured using a command.
● prefix4-len and ipv4-prefix: They are obtained from map-rule referenced by
the prefix pool and are configured using a command. The ipv4-prefix field is
fixed at 4 bytes, and the prefix4-len field is padded with 0s.
● prefix6-len and ipv6-prefix: They are obtained from map-rule referenced by
the prefix pool and are configured using a command. If the prefix6-len value
is not a multiple of eight, the ipv6-prefix field is padded with 0s up to the
next multiple of eight bits, while the prefix6-len value remains unchanged.
● S46_RULE-options: It indicates Option 93, which is a sub-option of Option 89.
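The octet-padding rule for the prefix fields above can be sketched as follows (a minimal illustration; `padded_prefix_field` is a hypothetical helper, not a documented API):

```python
# Sketch of the Option 89 padding rule: the ipv6-prefix field is padded with
# zero bits up to a whole number of octets, while prefix6-len keeps the exact
# bit count.
import math

def padded_prefix_field(prefix_bits, prefix6_len):
    nbytes = math.ceil(prefix6_len / 8)                # on-the-wire field length
    value = prefix_bits << (nbytes * 8 - prefix6_len)  # left-align, pad right with 0s
    return value.to_bytes(nbytes, "big"), prefix6_len

# A 5-bit prefix 0b10110 is carried as the single octet 0xB0; prefix6-len stays 5.
```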
Option 93 (OPTION_S46_PORTPARAMS)
Option 93 needs to be encapsulated in packets for both MAP-E and MAP-T users.
It is a sub-option of Option 89. Its format is as follows:
● The option code is 93, and the option length is fixed at 4 bytes.
● offset: It is obtained from map-rule referenced by the address pool and is
configured using a command.
● PSID-len and PSID: used in the following scenarios:
– ea-len + prefix4-len < 32: Both PSID-len and PSID values are 0, and an
IPv4 prefix is assigned to a MAP user.
– ea-len + prefix4-len = 32: Both PSID-len and PSID values are 0, and an
exclusive IP address is assigned to a MAP user.
– ea-len + prefix4-len > 32: The PSID-len field value is (ea-len + prefix4-len
- 32). The valid PSID is the last PSID-len bits in a PD prefix of a MAP user
(16 bits are reserved for the PSID and the PSID is suffixed with zeros if
less than 16 bits). A shared IP address is assigned to a MAP user, and the
IPv4 port is computed based on the PSID.
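The three cases above can be summarized in a short sketch (illustrative only; the function and parameter names are hypothetical):

```python
# Sketch of the Option 93 PSID cases: psid-len follows from ea-len + prefix4-len
# relative to the 32 bits of an IPv4 address.

def psid_params(ea_len, prefix4_len, pd_prefix_bits):
    """Return (psid_len, psid_field); psid_field is 16 bits, zero-padded on the right."""
    total = ea_len + prefix4_len
    if total <= 32:                     # IPv4 prefix (<) or exclusive address (=)
        return 0, 0
    psid_len = total - 32
    psid = pd_prefix_bits & ((1 << psid_len) - 1)   # last psid_len bits of the PD prefix
    return psid_len, psid << (16 - psid_len)        # 16 bits reserved, suffixed with zeros

# ea-len 16 + prefix4-len 24 = 40 -> psid-len 8; a PSID of 0x34 encodes as 0x3400.
```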
Option 90 (OPTION_S46_BR)
Option 90 needs to be encapsulated in packets for only MAP-E users. It is a sub-
option of Option 94. Its format is as follows:
● The option code is 90, and the option length is fixed at 16 bytes.
● br-ipv6-address: It is encapsulated based on the option-s46 br-ipv6-address
command run in the address pool. It is fixed at 16 bytes. If a mask length is
128 bits, a unicast address is used. If a mask length is less than 128 bits, an
anycast address is used (in this case, 0s are padded on the right of the mask).
Option 91 (OPTION_S46_DMR)
Option 91 needs to be encapsulated in packets for only MAP-T users. It is a sub-
option of Option 95. Its format is as follows:
● The option code is 91, and the option length is variable. Packets are
encapsulated based on the actual packet length.
● dmr-prefix6-len and dmr-ipv6-prefix: They are encapsulated based on the
option-s46 dmr-prefix command run in the address pool. If the dmr-prefix6-len
value is not a multiple of eight, the dmr-ipv6-prefix field is padded with 0s
up to the next multiple of eight bits, while the dmr-prefix6-len value remains
unchanged.
Background
IPv4-to-IPv6 evolution faces the following challenges:
● IPv4 address shortage holds back the rapid IPv4 service development.
● An immense number of IPv6 addresses are available, whereas a few IPv6
applications are implemented.
IPv4 address reuse (address plus port) relieves the pressure of rapid IPv4
address consumption to some extent. However, the great number of deployed
devices affects various services and applications. In IPv6 development, users,
ICPs, ISPs, and carriers show different levels of sensitivity to IPv4 address
exhaustion, and each party has its own considerations when promoting IPv6,
resulting in an imbalance in the IPv6 industry chain. In addition, the IPv4
address sharing mechanism slows the development of the IPv6 industry chain,
while the ongoing development of that chain in turn limits the deployment scale
of the sharing mechanism.
To ensure IPv4 service continuity and IPv6 industry development, the 4over6
scenario that is compatible with both IPv4 service and IPv6 development becomes
the focus in long term evolution.
In the 4over6 scenario, various types of transition techniques have also
emerged. MAP technology combines stateless mapping with dual translation and
encapsulation techniques and has become the IETF's preferred solution.
Technical Characteristics
MAP is a stateless, distributed, and tunneling transition technique. MAP allows
stateless reuse of addresses and ports. MAP is classified as MAP-T or MAP-E based
on packet formats.
Centralized Scenario
In a centralized scenario, the MAP-BR and BRAS reside on different devices. The
BRAS delivers MAP addresses and mapping rules to MAP-CEs in DHCPv6
IA_PD mode. Router A functioning as the MAP-BR resides on the edge of a MAP
domain and accesses the public IPv4 network through the IPv6 network that is
within the MAP domain. The MAP-CEs use each other's public IPv4 address to
communicate through the MAP-BR.
Configuration Roadmap
In the centralized scenario, MAP-T must be configured on the BRAS and MAP-BR.
Configuring a BRAS
Context
BMR parameters are configured using multiple commands in an instance. The
parameters include the IPv4 prefix and length of a MAP domain, the IPv6 prefix
and length, the EA-bits length, and the PSID offset. The PSID offset can be
used to reserve ports for public IP addresses.
Procedure
Step 1 Run system-view
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 pool pool-name bas delegation
An IPv6 delegation address pool is created, and the IPv6 delegation address pool
view is displayed.
Step 3 Run prefix prefix-name
An IPv6 prefix pool is bound to the IPv6 address pool.
Step 4 Run option-s46 dmr-prefix dmr-prefix-name
A DMR prefix name is bound to the IPv6 address pool. The NetEngine 8100 M,
NetEngine 8000E M, and NetEngine 8000 M encapsulate the IPv6 prefix as Option 91
information (OPTION_S46_DMR) into the DHCPv6 Response message sent to
MAP-T users.
Step 5 (Optional) Run option-s46 fmr-flag disable
The F-flag bit is set to 0 in DHCPv6 OPTION_S46_RULE (option 89).
Step 6 Run commit
The configuration is committed.
----End
Configuring a MAP-BR
Context
BMR parameters are set in an instance. The parameters include the IPv4 prefix
and length, IPv6 prefix and length, embedded address length, and PSID offset. The
PSID offset is used to reserve ports based on public IP addresses.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map rule rule-name
The MAP rule view is displayed.
Step 3 Run rule-prefix ipv6-prefix prefix-length v6prefix-length ipv4-prefix ipv4-prefix
prefix-length v4prefix-length [ vpn-instance vpn-instance-name ] ea-length
ea-length [ psid-offset offset-length ]
Parameters are configured to verify and encapsulate MAP packets.
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run dmr-prefix dmr-name ipv6-prefix ipv6-prefix prefix-length prefix-length
The IPv6 prefix address and length are set in a DMR.
Step 3 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view of the PE is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run dmr-prefix dmr-name
A DMR is bound to the MAP-T instance.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
A BMR has been configured.
Procedure
Step 1 Run system-view
The system view of the PE is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map-rule rule-name
A BMR is bound to the MAP-T instance.
Step 4 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
After the MSS value is set in the MAP instance view, a device changes the MSS
value in all TCP packets that belong to the MAP service. If the MSS value
negotiated between the two ends is greater than the configured MSS value, the
device uses the configured value; if the negotiated value is smaller, the
negotiated value is retained. The maximum length of IP packets that an
interface can send is calculated as MSS + 20-byte IP header + 20-byte TCP
header, and is limited by the interface MTU. If this calculated length is
greater than the MTU, the packets are fragmented. You are therefore advised to
set the TCP MSS value as large as possible without causing packet
fragmentation, to improve packet transmission efficiency. If the size of
packets for MAP processing is larger than a link MTU, the packets are
fragmented; reducing the TCP MSS value prevents a service board from
fragmenting packets and helps improve MAP efficiency.
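The sizing rule above reduces to simple arithmetic, sketched here under the stated 20-byte IP and 20-byte TCP header assumption:

```python
# MSS sizing per the rule above: packet length = MSS + 20-byte IP header
# + 20-byte TCP header, and fragmentation occurs once this exceeds the MTU.

def max_mss(mtu):
    """Largest TCP MSS that avoids fragmentation on a link with this MTU."""
    return mtu - 20 - 20

def will_fragment(mss, mtu):
    return mss + 20 + 20 > mtu

# For a 1500-byte MTU the largest safe MSS is 1460 bytes.
```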
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map tcp adjust-mss mss-value
An MSS value is set in TCP SYN and SYN ACK packets.
Step 4 Run commit
The configuration is committed.
----End
Context
● If the size of packets is greater than the configured MTU value, the packets
are broken into a great number of fragments.
● If the MTU is set too large, packets may be transmitted at a low speed.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map mtu mtu-value
An MTU value is set for IPv6 packets in a MAP instance.
Step 4 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map traffic-class class-value
The Traffic-Class field is set for public network-to-private network IPv6 traffic.
Step 4 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map ipv4-tos tos-value
An IPv4 ToS value is set for IPv4 traffic after private network-to-public network
IPv6 traffic is converted to IPv4 traffic.
Step 4 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map ipv6 df-override enable
The device is enabled to set the DF field to 0 for IPv4 packets after IPv6 packets
are converted to IPv4 packets.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
The configurations of basic MAP-T functions are complete.
Procedure
● Run the display map instance [ instance-name ] [ verbose ] command to
check the configurations of a MAP-T instance.
● Run the display map rule [ rule-name ] command to check the basic
mapping rule (BMR) configurations of a MAP-T instance.
----End
Distributed Scenario
In a distributed scenario, the MAP-BR and BRAS reside on the same device. The
device functions as the BRAS to deliver MAP addresses and mapping rules to MAP-
CEs in DHCPv6 IA_PD mode. (In this example, only key configurations of IPv6
address assignment are provided. For details, see HUAWEI NetEngine 8100
M14/M8, NetEngine 8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8
series Router Configuration Guide - User Access - IPv6 Address Management
Configuration.) The device also functions as a MAP-BR and resides on the edge of
a MAP domain. The device allows the MAP-CEs to access the public IPv4 network
through the IPv6 network that is within the MAP domain. In addition, the MAP-
CEs can use each other's public IPv4 address to communicate through the MAP-
BR.
Configuration Roadmap
In the distributed scenario, MAP-T is configured on the MAP-BR (BRAS).
Configuring a BMR
This section describes how to configure a basic mapping rule (BMR). Configure
BMR rules on the BRAS to instruct the BRAS to assign IPv6 and IPv4 addresses to
MAP-CEs.
Context
BMR parameters are configured using multiple commands in an instance. The
parameters include the IPv4 prefix and length of a MAP domain, the IPv6 prefix
and length, the EA-bits length, and the PSID offset. The PSID offset can be
used to reserve ports for public IP addresses.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map rule rule-name
The MAP rule view is displayed.
Step 3 Run rule-prefix ipv6-prefix prefix-length v6prefix-length ipv4-prefix ipv4-prefix
prefix-length v4prefix-length [ vpn-instance vpn-instance-name ] ea-length ea-
length [ psid-offset offset-length ]
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
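As a sketch, the following transcript uses the sample values that appear in the
configuration examples later in this guide (IPv6 prefix 2001:db8:1::1/48, IPv4
prefix 1.1.1.0/24, EA length 16, PSID offset 4):
<HUAWEI> system-view
[~HUAWEI] map rule bmr_name
[*HUAWEI-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-length
24 ea-length 16 psid-offset 4
[*HUAWEI-map-rule-bmr_name] commit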
Configuring a DMR
A default mapping rule (DMR) can be created. A MAP-CE encapsulates an IPv6
prefix defined in the DMR into packets and directs the packets to a service board.
The service board converts the packets in the MAP-T instance to which the DMR
is bound.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
An IPv6 delegation prefix pool is created, and the IPv6 delegation prefix pool view
is displayed.
----End
Procedure
Step 1 Run system-view
An IPv6 delegation address pool is created, and the IPv6 delegation address pool
view is displayed.
A DMR prefix name is bound to the IPv6 address pool. The NetEngine 8100 M,
NetEngine 8000E M, NetEngine 8000 M encapsulates the IPv6 prefix as Option 91
information (OPTION_S46_DMR) into the DHCPv6 Response message sent to
MAP-T users.
----End
Procedure
Step 1 Run system-view
----End
Prerequisites
A BMR has been configured.
Procedure
Step 1 Run system-view
The system view of the PE is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map-rule rule-name
A BMR is bound to the MAP-T instance.
Step 4 Run commit
The configuration is committed.
----End
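For example, to bind a BMR named bmr_name to a MAP-T instance named 1
(both names are taken from the configuration examples later in this guide):
<HUAWEI> system-view
[~HUAWEI] map-t instance 1 id 1
[*HUAWEI-map-t-instance-1] map-rule bmr_name
[*HUAWEI-map-t-instance-1] commit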
Context
By default, the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M
preferentially assigns MAP-E prefixes. If the allocation attempt fails, the NetEngine
8100 M, NetEngine 8000E M, NetEngine 8000 M assigns MAP-T prefixes to users.
Perform the following steps if you want the NetEngine 8100 M, NetEngine 8000E
M, or NetEngine 8000 M to preferentially assign MAP-T prefixes to users:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run aaa
The AAA view is displayed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map icmp-tranfer enable
The MAP conversion function is enabled for ICMP/ICMPv6 error packets.
Step 4 Run commit
The configuration is committed.
----End
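A minimal transcript for these steps (the instance name mapt1 and ID 1 are
illustrative):
<HUAWEI> system-view
[~HUAWEI] map-t instance mapt1 id 1
[*HUAWEI-map-t-instance-mapt1] map icmp-tranfer enable
[*HUAWEI-map-t-instance-mapt1] commit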
Context
After the MSS value is set in the MAP instance view, a device changes the MSS
value in all TCP packets that belong to the MAP service. If the MSS value in the
negotiation process is greater than the configured MSS value, the device uses the
MSS value configured by the user. If the MSS value during the negotiation is
smaller than the configured MSS value, the MSS value in the negotiation process
is retained. When the maximum packet length is set, the maximum length of IP
packets that can be sent by the interface is calculated using a formula (Maximum
packet length + 20-byte IP header + 20-byte TCP header). The maximum length of
IP packets that can be sent by the interface is limited by the MTU value. Therefore,
when the value calculated using the formula (Maximum packet length + 20-byte
IP header + 20-byte TCP header) is greater than the MTU value, the packets are
fragmented. Therefore, you are advised to set the TCP MSS value as large as
possible without causing packet fragmentation to improve packet transmission
efficiency. If the size of packets for MAP processing is larger than a link MTU, the
packets are fragmented. You can reduce the MSS value in TCP, which prevents a
service board from fragmenting packets and helps improve MAP efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map tcp adjust-mss mss-value
An MSS value is set in TCP SYN and SYN ACK packets.
Step 4 Run commit
The configuration is committed.
----End
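For example, assuming an interface MTU of 1500 bytes, an MSS of 1460 bytes
(the MTU minus the 20-byte IP header and 20-byte TCP header) avoids
fragmentation. The instance name mapt1 and the MSS value 1460 below are
illustrative:
<HUAWEI> system-view
[~HUAWEI] map-t instance mapt1 id 1
[*HUAWEI-map-t-instance-mapt1] map tcp adjust-mss 1460
[*HUAWEI-map-t-instance-mapt1] commit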
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-t instance map-t-instance-name [ id id ]
The MAP-T instance view is displayed.
Step 3 Run map traffic-class class-value
The Traffic-Class field is set for public network-to-private network IPv6 traffic.
Step 4 Run commit
The configuration is committed.
----End
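As a sketch (the instance name mapt1 and the Traffic-Class value 5 are
illustrative):
<HUAWEI> system-view
[~HUAWEI] map-t instance mapt1 id 1
[*HUAWEI-map-t-instance-mapt1] map traffic-class 5
[*HUAWEI-map-t-instance-mapt1] commit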
Procedure
Step 1 Run system-view
An IPv4 ToS value is set for IPv4 traffic after private network-to-public network
IPv6 traffic is converted to IPv4 traffic.
----End
Procedure
Step 1 Run system-view
The device is enabled to set the DF field to 0 for IPv4 packets after IPv6 packets
are converted to IPv4 packets.
----End
Prerequisites
The configurations of basic MAP-T functions are complete.
Procedure
● Run the display map instance [ instance-name ] [ verbose ] command to
check the configurations of a MAP-T instance.
● Run the display map rule [ rule-name ] command to check the basic
mapping rule (BMR) configurations of a MAP-T instance.
----End
Centralized Scenario
In a centralized scenario, the MAP-BR and BRAS reside on different devices. The
BRAS delivers MAP addresses and mapping rules to MAP-CEs in DHCPv6
IA_PD mode. Router A, functioning as the MAP-BR, resides on the edge of a MAP
domain and allows MAP-CEs to access the public IPv4 network through the IPv6
network that is within the MAP domain. The MAP-CEs use each other's public IPv4
address to communicate through the MAP-BR.
Configuration Roadmap
In the centralized scenario, MAP-E must be configured on the BRAS and MAP-BR.
Configuring a BRAS
NOTE
Context
BMR parameters are configured using multiple commands in an instance. In
addition to the IPv4 prefix and length of a MAP domain, the IPv6 prefix and
length, the EA length, and the PSID offset can also be configured. The PSID offset
can be used to reserve ports for public IP addresses.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map rule rule-name
The MAP rule view is displayed.
Step 3 Run rule-prefix ipv6-prefix prefix-length v6prefix-length ipv4-prefix ipv4-prefix
prefix-length v4prefix-length [ vpn-instance vpn-instance-name ] ea-length ea-
length [ psid-offset offset-length ]
The device is enabled to verify and encapsulate MAP packets.
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
Configuring a BR
A border relay (BR) is created. The MAP-CE encapsulates the IPv6 prefix defined
for the BR into traffic and directs the traffic to an interface board. The interface
board then selects a MAP-E instance to convert the traffic.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Procedure
Step 1 Run system-view
An IPv6 delegation prefix pool is created, and the IPv6 delegation prefix pool view
is displayed.
----End
Procedure
Step 1 Run system-view
An IPv6 delegation address pool is created, and the IPv6 delegation address pool
view is displayed.
----End
Configuring a MAP-BR
Context
BMR parameters are set in an instance. The parameters include the IPv4 prefix
and length, IPv6 prefix and length, embedded address length, and PSID offset. The
PSID offset is used to reserve ports based on public IP addresses.
Procedure
Step 1 Run system-view
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
Configuring a BR
A border relay (BR) is created. The MAP-CE encapsulates the IPv6 prefix defined
for the BR into traffic and directs the traffic to an interface board. The interface
board then selects a MAP-E instance to convert the traffic.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
----End
Prerequisites
A BMR has been configured.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
The mapping rule that complies with the IETF draft is used for MAP-E conversion.
----End
Prerequisites
If the size of packets for processing is larger than a link MTU, IPv6 packets are
fragmented. You can reduce the MSS value in TCP, which prevents a service board
from fragmenting packets and helps improve MAP efficiency.
Procedure
Step 1 Run system-view
----End
Context
● If the size of packets is greater than the configured MTU value, the packets
are fragmented into a large number of fragments.
● If the MTU is set too large, packets may be transmitted at a low speed.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map-e instance map-e-instance-name [ id id ]
The MAP-E instance view is displayed.
Step 3 Run map traffic-class class-value
The Traffic-Class field is set for public network-to-private network IPv6 traffic.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
The configurations of basic MAP-E functions are complete.
Procedure
● Run the display map instance [ instance-name ] [ verbose ] command to
check the configurations of a MAP-E instance.
● Run the display map rule [ rule-name ] command to check the basic
mapping rule (BMR) configurations of a MAP-E instance.
----End
Distributed Scenario
In a distributed scenario, the MAP-BR and BRAS reside on the same device. The
device functions as the BRAS to deliver MAP addresses and mapping rules to MAP-
CEs in DHCPv6 IA_PD mode. (In this example, only key configurations of IPv6
address assignment are provided. For details, see HUAWEI NetEngine 8100
M14/M8, NetEngine 8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8
series Router Configuration Guide - User Access - IPv6 Address Management
Configuration.) The device also functions as a MAP-BR and resides on the edge of
a MAP domain. The device allows the MAP-CEs to access the public IPv4 network
through the IPv6 network that is within the MAP domain. In addition, the MAP-
CEs can use each other's public IPv4 address to communicate through the MAP-
BR.
Configuration Roadmap
In the distributed scenario, MAP-E is configured on the MAP-BR (BRAS).
Configuring a BMR
This section describes how to configure a basic mapping rule (BMR). A BMR is
used to convert user-side IPv6 addresses into IPv4 addresses and network-side IPv4
addresses into IPv6 addresses.
Context
BMR parameters are set in an instance. The parameters include the IPv4 prefix
and length, IPv6 prefix and length, embedded address length, and PSID offset. The
PSID offset is used to reserve ports based on public IP addresses.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run map rule rule-name
The MAP rule view is displayed.
NOTE
A unique IPv6 address must be set in each BMR. The IPv6 addresses of BMRs cannot be
identical.
----End
Configuring a BR
A border relay (BR) is created. The MAP-CE encapsulates the IPv6 prefix defined
for the BR into traffic and directs the traffic to an interface board. The interface
board then selects a MAP-E instance to convert the traffic.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run br-ipv6-address br-name ipv6-address ipv6-address prefix-length prefix-
length
The IPv6 prefix address and length are set on the BR.
Step 3 Run commit
The configuration is committed.
----End
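For example, using the BR name and address from the MAP-E configuration
examples later in this guide (br_name, 2001:db8:1124::1/96):
<HUAWEI> system-view
[~HUAWEI] br-ipv6-address br_name ipv6-address 2001:db8:1124::1 prefix-length 96
[*HUAWEI] commit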
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 prefix prefix-name delegation
An IPv6 delegation prefix pool is created, and the IPv6 delegation prefix pool view
is displayed.
Step 3 Run map-rule rule-name
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 pool pool-name bas delegation
An IPv6 delegation address pool is created, and the IPv6 delegation address pool
view is displayed.
Step 3 Run prefix prefix-name
An IPv6 prefix pool is bound to the IPv6 address pool.
Step 4 Run option-s46 br-ipv6-address br-address-name
A BR device name is bound to an IPv6 address pool. The NetEngine 8100 M,
NetEngine 8000E M, NetEngine 8000 M adds OPTION_S46_BR (option 90) to an
IPv6 prefix in a DHCPv6 Response message to be sent to MAP-E users.
Step 5 (Optional) Run option-s46 fmr-flag disable
The F-flag bit is set to 0 in DHCPv6 OPTION_S46_RULE (option 89).
Step 6 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view of the PE is displayed.
Step 2 Run map-e instance map-e-instance-name [ id id ]
The MAP-E instance view is displayed.
Step 3 Run br-ipv6-address br-name
----End
Prerequisites
A BMR has been configured.
Procedure
Step 1 Run system-view
The system view of the PE is displayed.
Step 2 Run map-e instance map-e-instance-name [ id id ]
The MAP-E instance view is displayed.
Step 3 Run map-rule rule-name
A BMR is bound to the MAP-E instance.
Step 4 Run commit
The configuration is committed.
----End
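For example, to bind a BMR named bmr_name to a MAP-E instance named 2
(both names are taken from the configuration examples later in this guide):
<HUAWEI> system-view
[~HUAWEI] map-e instance 2 id 2
[*HUAWEI-map-e-instance-2] map-rule bmr_name
[*HUAWEI-map-e-instance-2] commit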
Prerequisites
If the size of packets for processing is larger than a link MTU, IPv6 packets are
fragmented. You can reduce the MSS value in TCP, which prevents a service board
from fragmenting packets and helps improve MAP efficiency.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Procedure
Step 1 Run system-view
The Traffic-Class field is set for public network-to-private network IPv6 traffic.
----End
Prerequisites
The configurations of basic MAP-E functions are complete.
Procedure
● Run the display map instance [ instance-name ] [ verbose ] command to
check the configurations of a MAP-E instance.
● Run the display map rule [ rule-name ] command to check the basic
mapping rule (BMR) configurations of a MAP-E instance.
----End
Context
NOTICE
MAP statistics cannot be restored after being deleted. Exercise caution when
running the following command.
Procedure
● After you confirm that the MAP statistics are to be deleted, run the reset map
statistics [ rule rule-name ] [ slot slot-id ] command in the user view.
----End
Context
NOTICE
Resetting such an interface board causes all functions on the board to fail. Exercise
caution when running the following command.
Procedure
● After you confirm that the interface board is to be reset, run the reset slot
slot-id command in the user view to make the new configuration take effect.
----End
Context
Run the following command in any view to monitor the MAP operating status
during routine maintenance.
Procedure
Step 1 Run the display map statistics { transmitted | payload } rule rule-name [ slot
slot-id ] command to check the MAP operating status.
----End
Networking Requirements
In a centralized scenario shown in Figure 1-150, the MAP-BR and BRAS reside on
different devices. The BRAS delivers MAP addresses and mapping rules to MAP-
CEs in DHCPv6 IA_PD mode. (In this example, only key BRAS configurations are
provided. For details about IPv6 address assignment configuration, see HUAWEI
NetEngine 8100 M14/M8, NetEngine 8000 M14K/M14/M8K/M8/M4 & NetEngine
8000E M14/M8 series Router Configuration Guide - User Access - IPv6 Address
Management Configuration.) RouterA (MAP BR) resides on the edge of a MAP
domain and allows MAP-CEs to access the public IPv4 network through the IPv6
network that is within the MAP domain. The MAP-CEs can also communicate with
each other through the MAP-BR based on each other's public IPv4 address.
In this example, interface 1 and interface 2 are GE 0/2/0 and GE 0/2/1, respectively.
Prerequisites
● The MAP function has been activated.
● The MAP-BR has at least one interface board that supports the MAP function.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a BMR.
2. Configure a DMR prefix address and a prefix length.
3. Configure an IPv6 prefix pool and address pool on the BRAS.
4. Configure a MAP-T instance and bind the DMR and BMR rules to the MAP-T
instance.
Data Preparation
To complete the configuration, you need the following data:
● BMR name (bmr_name), IPv6 address (2001:db8:1::1), prefix length (48 bits),
IPv4 address (1.1.1.0), mask (24), EA length (16 bits), and PSID offset (4)
● DMR name (dmr_name), IPv6 address (2001:db8:3::1), and prefix length (96
bits)
● MAP-T instance name (1) and ID (1)
● Interface1's IPv6 address (2001:db8:2::1) and mask length (64 bits).
● Interface2's IPv4 address (11.1.1.1) and mask length (24 bits)
Procedure
Step 1 Configure a BMR on the BRAS to instruct the BRAS to assign IPv6 and IPv4
addresses to the MAP-CEs. In this example, the IPv6 prefix address assigned to the
MAP-CE is 2001:db8:1::1, the prefix length is 48, and the length of the EA-bits is
16. The public IPv4 prefix address allocated to the MAP-CE is 1.1.1.0 and the
prefix length is 24. The offset length of the PSID field is 4, which means that ports
0 to 4095 are reserved.
<BRAS> system-view
[~BRAS] map rule bmr_name
[*BRAS-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-length
24 ea-length 16 psid-offset 4
[*BRAS-map-rule-bmr_name] commit
[~BRAS-map-rule-bmr_name] quit
Step 2 Configure a DMR rule on the BRAS and combine the IPv6 prefix configured in the
DMR rule with the destination IPv4 address on a MAP-CE to form a destination
IPv6 address.
[~BRAS] dmr-prefix dmr_name ipv6-prefix 2001:db8:3::1 prefix-length 96
[*BRAS] commit
Step 3 Configure an IPv6 prefix pool and an address pool on the BRAS.
● Configure an IPv6 prefix pool named pre1, bind it to the MAP rule named
bmr_name, and assign IPv4 and IPv6 addresses that comply with BMR rules
to MAP-CEs.
[~BRAS] ipv6 prefix pre1 delegation
[*BRAS-ipv6-prefix-pre1] map-rule bmr_name
[*BRAS-ipv6-prefix-pre1] commit
[~BRAS-ipv6-prefix-pre1] quit
● Configure an IPv6 address pool named pool1 and bind it to the prefix pool
named pre1. Bind the prefix dmr_name to the IPv6 address pool so that the
BRAS encapsulates the IPv6 prefix as Option 91 information
(OPTION_S46_DMR) into the DHCPv6 Response message sent to MAP-CEs.
[~BRAS] ipv6 pool pool1 bas delegation
[*BRAS-ipv6-pool-pool1] prefix pre1
[*BRAS-ipv6-pool-pool1] option-s46 dmr-prefix dmr_name
[*BRAS-ipv6-pool-pool1] commit
[~BRAS-ipv6-pool-pool1] quit
Step 4 Configure BMRs on the MAP-BR. Configure the BMR to remove the user-side IPv4
address from the IPv6 address and encapsulate IPv6 information into the IPv4
address and port number of the network-side traffic.
<RouterA> system-view
[~RouterA] map rule bmr_name
[*RouterA-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-
length 24 ea-length 16 psid-offset 4
[*RouterA-map-rule-bmr_name] commit
[~RouterA-map-rule-bmr_name] quit
Step 6 Configure a MAP-T instance on the MAP-BR and bind the configured DMR and
BMR rules to the MAP-T instance. The DMR rule directs the IPv6 packets imported
from the MAP-CE to the MAP-T instance for address translation, and the BMR
rule is used to encapsulate and verify the packets in the instance.
[~RouterA] map-t instance 1 id 1
[*RouterA-map-t-instance-1] dmr-prefix dmr_name
[*RouterA-map-t-instance-1] map-rule bmr_name
[*RouterA-map-t-instance-1] commit
[~RouterA-map-t-instance-1] quit
Step 7 Configure IP addresses of the user- and network-side interfaces on the MAP-BR.
[~RouterA] interface GigabitEthernet0/2/0
[~RouterA-GigabitEthernet0/2/0] ipv6 enable
[*RouterA-GigabitEthernet0/2/0] ipv6 address 2001:db8:2::1 64
[*RouterA-GigabitEthernet0/2/0] commit
[~RouterA-GigabitEthernet0/2/0] quit
[~RouterA] interface GigabitEthernet0/2/1
[~RouterA-GigabitEthernet0/2/1] ip address 11.1.1.1 24
[*RouterA-GigabitEthernet0/2/1] commit
[~RouterA-GigabitEthernet0/2/1] quit
[~RouterA] ipv6 route-static 2001:db8:1::1 48 2001:db8:2::2
[*RouterA] commit
Step 8 Configure an address for the interface connecting the BRAS to the MAP-BR device.
[~BRAS] interface GigabitEthernet0/2/0
[*BRAS-GigabitEthernet0/2/0] ipv6 enable
[*BRAS-GigabitEthernet0/2/0] ipv6 address 2001:db8:2::2 64
[*BRAS-GigabitEthernet0/2/0] undo shutdown
[*BRAS-GigabitEthernet0/2/0] commit
[~BRAS-GigabitEthernet0/2/0] quit
[~BRAS] ipv6 route-static 2001:db8:3::1 48 2001:db8:2::1
[*BRAS] commit
----End
#
interface GigabitEthernet0/2/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8:2::2/64
#
ipv6 route-static 2001:db8:3::1 48 2001:db8:2::1
#
Networking Requirements
In a distributed scenario shown in Figure 1-151, both the MAP-BR and BRAS
reside on Router A. The BRAS functions as a DHCPv6 server to deliver MAP
addresses and mapping rules to MAP-CEs in DHCPv6 IA_PD mode. Router A also
functions as a MAP-BR and resides on the edge of a MAP domain. Router A allows
the MAP-CEs to access the public IPv4 network through the IPv6 network that is
within the MAP domain. In addition, the MAP-CEs can use each other's public IPv4
address to communicate through the MAP-BR.
Prerequisites
● The MAP license has been loaded to the MAP-BR, and the MAP function has
been activated.
● The MAP-BR has at least one interface board that supports the MAP function.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure BMR rules.
2. Configure the prefix address and length of a DMR.
3. Configure an IPv6 prefix pool and an IPv6 address pool.
4. Configure a MAP-T instance and bind the DMR and BMR rules to the MAP-T
instance.
5. Configure user access and RADIUS authentication on the BRAS.
Data Preparation
To complete the configuration, you need the following data:
● BMR name (bmr_name), IPv6 address (2001:db8:1::1), prefix length (48), IPv4
address (1.1.1.0), mask length (24), EA-length (16), and PSID-offset length
(4)
● DMR name (dmr_name), IPv6 address (2001:db8:3::1), and prefix length (96
bits)
● MAP-T instance name (1) and ID (1)
Procedure
Step 1 Configure a BMR on the BRAS to instruct the BRAS to assign IPv6 and IPv4
addresses to the MAP-CEs. In this example, the IPv6 prefix address assigned to the
MAP-CE is 2001:db8:1::1, the prefix length is 48, and the length of the EA-bits is
16. The public IPv4 prefix address allocated to the MAP-CE is 1.1.1.0 and the
prefix length is 24. The offset length of the PSID field is 4, which means that ports
0 to 4095 are reserved.
<RouterA> system-view
[~RouterA] map rule bmr_name
[*RouterA-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-
length 24 ea-length 16 psid-offset 4
[*RouterA-map-rule-bmr_name] commit
[~RouterA-map-rule-bmr_name] quit
Step 2 Configure a DMR rule on the device and combine the IPv6 prefix configured in the
DMR rule with the destination IPv4 address on the MAP-CE to form a destination
IPv6 address.
[~RouterA] dmr-prefix dmr_name ipv6-prefix 2001:db8:3::1 prefix-length 96
[*RouterA] commit
Step 3 Configure an IPv6 prefix pool and an address pool on the device.
● Configure an IPv6 prefix pool named pre1, bind it to the MAP rule named
bmr_name, and assign IPv4 and IPv6 addresses that comply with BMR rules
to MAP-CEs.
[~RouterA] ipv6 prefix pre1 delegation
[*RouterA-ipv6-prefix-pre1] map-rule bmr_name
[*RouterA-ipv6-prefix-pre1] commit
[~RouterA-ipv6-prefix-pre1] quit
● Configure an IPv6 address pool named pool1 and bind it to the prefix pool
named pre1. Bind the prefix dmr_name to the IPv6 address pool so that the
BRAS encapsulates the IPv6 prefix as Option 91 information
(OPTION_S46_DMR) into the DHCPv6 Response message sent to MAP-CEs.
[~RouterA] ipv6 pool pool1 bas delegation
[*RouterA-ipv6-pool-pool1] prefix pre1
[*RouterA-ipv6-pool-pool1] option-s46 dmr-prefix dmr_name
[*RouterA-ipv6-pool-pool1] commit
[~RouterA-ipv6-pool-pool1] quit
Step 4 Configure a MAP-T instance on the device and bind the configured DMR and BMR
rules to the MAP-T instance. The DMR rule directs the IPv6 packets imported
from the MAP-CE to the MAP-T instance for address translation, and the BMR
rule is used to encapsulate and verify the packets in the instance.
[~RouterA] map-t instance 1 id 1
[*RouterA-map-t-instance-1] dmr-prefix dmr_name
[*RouterA-map-t-instance-1] map-rule bmr_name
[*RouterA-map-t-instance-1] commit
[~RouterA-map-t-instance-1] quit
Step 5 Configure IP addresses of the user- and network-side interfaces on the MAP-BR.
[~RouterA] interface GigabitEthernet0/2/0
[~RouterA-GigabitEthernet0/2/0] ipv6 enable
[*RouterA-GigabitEthernet0/2/0] ipv6 address auto link-local
[*RouterA-GigabitEthernet0/2/0] commit
[~RouterA-GigabitEthernet0/2/0] quit
----End
Networking Requirements
In a centralized scenario shown in Figure 1-152, the MAP-BR (Router A) and BRAS
reside on different devices. The BRAS delivers MAP IPv6 addresses and mapping
rules to MAP-CEs in DHCPv6 IA_PD mode. Router A resides on the edge of a MAP
domain and allows MAP-CEs to access the public IPv4 network through the IPv6
network that is within the MAP domain. The MAP-CEs use each other's public IPv4
address to communicate through the MAP-BR.
In this example, interface 1 and interface 2 are GE 0/2/0 and GE 0/2/1, respectively.
Prerequisites
● The MAP function has been activated.
● The MAP-BR has at least one interface board that supports the MAP function.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a BR.
2. Configure BMR rules.
3. Configure a MAP-E instance and bind the BMR and BR rules to the MAP-E
instance.
4. Configure an IPv6 prefix pool and address pool on the BRAS.
5. Configure the IP address for each interface and a static route.
Data Preparation
To complete the configuration, you need the following data:
● BR name (br_name), IPv6 address (2001:db8:1124::1), and prefix length (96
bits)
● BMR name (bmr_name), IPv6 address (2001:db8:1::1), prefix length (48), IPv4
address (1.1.1.0), mask length (24), EA-length (16), and PSID-offset length
(4)
● MAP-E instance name (2) and ID (2)
● Interface1's IPv6 address (2001:db8:2::1) and mask length (64 bits).
● Interface2's IPv4 address (11.1.1.1) and mask length (24 bits)
Procedure
Step 1 On the BRAS, configure a BR named br_name, with the IPv6 address set to
2001:db8:1124::1 and the prefix length to 96. The BR's IPv6 address is used as the
destination address carried in packets sent by MAP-CEs.
<BRAS> system-view
[~BRAS] br-ipv6-address br_name ipv6-address 2001:db8:1124::1 prefix-length 96
[*BRAS] commit
Step 2 Configure a BMR on the BRAS to instruct the BRAS to assign IPv6 and IPv4
addresses to the MAP-CEs. In this example, the IPv6 prefix address assigned to the
MAP-CE is 2001:db8:1::1, the prefix length is 48, and the length of the EA-bits is
16. The public IPv4 prefix address allocated to the MAP-CE is 1.1.1.0 and the
prefix length is 24. The offset length of the PSID field is 4, which means that ports
0 to 4095 are reserved.
[~BRAS] map rule bmr_name
[*BRAS-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-length
24 ea-length 16 psid-offset 4
[*BRAS-map-rule-bmr_name] commit
[~BRAS-map-rule-bmr_name] quit
Step 3 Configure an IPv6 prefix pool and an address pool on the BRAS.
● Configure an IPv6 prefix pool named pre1 and bind it to the BMR rule named
bmr_name to allocate PD prefixes to MAP-CEs.
[~BRAS] ipv6 prefix pre1 delegation
[*BRAS-ipv6-prefix-pre1] map-rule bmr_name
[*BRAS-ipv6-prefix-pre1] commit
[~BRAS-ipv6-prefix-pre1] quit
● Configure an IPv6 address pool named pool1 and bind it to the prefix pool
named pre1. Bind the BR name br_name to the IPv6 address pool. Then, the
BRAS encapsulates the IPv6 prefix as Option 90 information
(OPTION_S46_BR) into the DHCPv6 Response message sent to the MAP-E
users.
<BRAS> system-view
[~BRAS] ipv6 pool pool1 bas delegation
[*BRAS-ipv6-pool-pool1] prefix pre1
[*BRAS-ipv6-pool-pool1] option-s46 br-ipv6-address br_name
[*BRAS-ipv6-pool-pool1] commit
[~BRAS-ipv6-pool-pool1] quit
Step 4 On the MAP-BR, configure a BR whose address is the local IPv6 address used as
the destination address of IPv6 packets sent by MAP-CEs. Set the BR name to
br_name, the IPv6 address to 2001:db8:1124::1, and the prefix length to 96.
<RouterA> system-view
[~RouterA] br-ipv6-address br_name ipv6-address 2001:db8:1124::1 prefix-length 96
[*RouterA] commit
Step 5 Configure BMRs on the MAP-BR. Configure the BMR to remove the user-side IPv4
address from the IPv6 address and encapsulate IPv6 information into the IPv4
address and port number of the network-side traffic.
Step 6 Configure a MAP-E instance on the MAP-BR and bind the configured BR and BMR
rules to the MAP-E instance. The BR is used to import the encapsulated traffic of
MAP-CEs to an interface board and select the MAP-E instance for conversion, and
the BMR rule is used to encapsulate and verify the packets in the instance.
[~RouterA] map-e instance 2 id 2
[*RouterA-map-e-instance-2] br-ipv6-address br_name
[*RouterA-map-e-instance-2] map-rule bmr_name
[*RouterA-map-e-instance-2] commit
[~RouterA-map-e-instance-2] quit
Step 7 Configure IP addresses of the user- and network-side interfaces on the MAP-BR.
[~RouterA] interface GigabitEthernet0/2/0
[~RouterA-GigabitEthernet0/2/0] ipv6 enable
[*RouterA-GigabitEthernet0/2/0] ipv6 address 2001:db8:2::1 64
[*RouterA-GigabitEthernet0/2/0] commit
[~RouterA-GigabitEthernet0/2/0] quit
[~RouterA] interface GigabitEthernet0/2/1
[~RouterA-GigabitEthernet0/2/1] ip address 11.1.1.1 24
[*RouterA-GigabitEthernet0/2/1] commit
[~RouterA-GigabitEthernet0/2/1] quit
[~RouterA] ipv6 route-static 2001:db8:1::1 48 2001:db8:2::2
[*RouterA] commit
Step 8 Configure an address for the interface connecting the BRAS to the MAP-BR device.
[~BRAS] interface GigabitEthernet0/2/0
[*BRAS-GigabitEthernet0/2/0] ipv6 enable
[*BRAS-GigabitEthernet0/2/0] ipv6 address 2001:db8:2::2 64
[*BRAS-GigabitEthernet0/2/0] undo shutdown
[*BRAS-GigabitEthernet0/2/0] commit
[~BRAS] ipv6 route-static 2001:db8:1124::1 64 2001:db8:2::1
[*BRAS] commit
----End
Networking Requirements
In a distributed scenario shown in Figure 1-153, both the MAP-BR and BRAS
reside on Router A. The BRAS functions as a DHCPv6 server to deliver MAP IPv6
addresses and mapping rules to MAP-CEs in DHCPv6 IA_PD mode. Router A also
functions as a MAP-BR and resides on the edge of a MAP domain. Router A allows
the MAP-CEs to access the public IPv4 network through the IPv6 network that is
within the MAP domain. In addition, the MAP-CEs can use each other's public IPv4
address to communicate through the MAP-BR.
In this example, interface 1 and interface 2 are GE 0/2/0 and GE 0/2/1, respectively.
Interface 1 is used in this example.
Prerequisites
● The MAP license has been loaded to the MAP-BR, and the MAP function has
been activated.
● The MAP-BR has at least one interface board that supports the MAP function.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a BR.
2. Configure BMR rules.
3. Configure an IPv6 prefix pool and an IPv6 address pool.
4. Configure a MAP-E instance and bind the BMR and BR rules to the MAP-E
instance.
5. Configure user access and RADIUS authentication on the BRAS.
Data Preparation
To complete the configuration, you need the following data:
● BR name (br_name), IPv6 address (2001:db8:1124::1), and prefix length (96
bits)
● BMR name (bmr_name), IPv6 address (2001:db8:1::1), prefix length (48), IPv4
address (1.1.1.0), mask length (24), EA-length (16), and PSID-offset length
(4)
● MAP-E instance name (2) and ID (2)
● Interface1's IPv6 address (2001:db8:2::1), IPv6 mask length (64 bits), IPv4
address (10.1.1.1), and IPv4 mask length (24 bits)
● Interface2's IPv4 address (11.1.1.1) and mask length (24 bits)
Procedure
Step 1 On the MAP-BR, configure a BR whose address is the local IPv6 address used as
the destination address of IPv6 packets sent by MAP-CEs. Set the BR name to
br_name, the IPv6 address to 2001:db8:1124::1, and the prefix length to 96.
<RouterA> system-view
[~RouterA] br-ipv6-address br_name ipv6-address 2001:db8:1124::1 prefix-length 96
[*RouterA] commit
Step 2 Configure a BMR rule on the MAP-BR to instruct the BRAS to assign IPv6 and IPv4
addresses to MAP-CEs. Configure the BMR to remove the user-side IPv4 address
from the IPv6 address and encapsulate IPv6 information into the IPv4 address and
port number of the network-side traffic.
[~RouterA] map rule bmr_name
[*RouterA-map-rule-bmr_name] rule-prefix 2001:db8:1::1 prefix-length 48 ipv4-prefix 1.1.1.0 prefix-
length 24 ea-length 16 psid-offset 4
[*RouterA-map-rule-bmr_name] commit
[~RouterA-map-rule-bmr_name] quit
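As a rough cross-check of how the BMR above splits addresses, the following Python sketch derives a MAP-CE's shared IPv4 address, port-set ID (PSID), and a few ports of its port set from the CE's delegated /64, following the generic MAP mapping rules (RFC 7597). The helper names and the example CE prefix 2001:db8:1:203::/64 are our illustrative assumptions, not device output.

```python
import ipaddress

# Mirrors the BMR configured above:
# rule-prefix 2001:db8:1::/48, ipv4-prefix 1.1.1.0/24, ea-length 16, psid-offset 4
RULE_V6 = ipaddress.ip_network("2001:db8:1::/48")
RULE_V4 = ipaddress.ip_network("1.1.1.0/24")
EA_LEN = 16
PSID_OFFSET = 4

def map_e_params(end_user_prefix: str):
    """Derive the shared IPv4 address and PSID from a CE's delegated /64."""
    pfx = ipaddress.ip_network(end_user_prefix)
    addr_int = int(pfx.network_address)
    # The EA bits sit immediately after the /48 rule prefix
    ea = (addr_int >> (128 - RULE_V6.prefixlen - EA_LEN)) & ((1 << EA_LEN) - 1)
    v4_suffix_len = 32 - RULE_V4.prefixlen        # 8 bits of IPv4 suffix
    psid_len = EA_LEN - v4_suffix_len             # remaining 8 bits are the PSID
    v4_suffix = ea >> psid_len
    psid = ea & ((1 << psid_len) - 1)
    ipv4 = ipaddress.ip_address(int(RULE_V4.network_address) + v4_suffix)
    return str(ipv4), psid, psid_len

def ports_for(psid: int, psid_len: int, a: int = PSID_OFFSET):
    """First ports of the CE's port set: A | PSID | j, with A > 0 skipping 0-1023."""
    m = 16 - a - psid_len
    return [(A << (16 - a)) | (psid << m) | j
            for A in range(1, 3) for j in range(2)]

# Example: CE delegated prefix 2001:db8:1:203::/64 -> EA bits 0x0203
ipv4, psid, plen = map_e_params("2001:db8:1:203::/64")
print(ipv4, psid)             # 1.1.1.2 3
print(ports_for(psid, plen))  # [4144, 4145, 8240, 8241]
```

With ea-length 16 and an IPv4 /24, eight EA bits extend the IPv4 prefix and eight form the PSID, so 256 CEs share each public IPv4 address.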
Step 3 Configure an IPv6 prefix pool and an address pool on the BRAS.
● Configure an IPv6 prefix pool named pre1 and bind it to the BMR rule named
bmr_name to allocate PD prefixes to MAP-CEs.
[~RouterA] ipv6 prefix pre1 delegation
[*RouterA-ipv6-prefix-pre1] map-rule bmr_name
[*RouterA-ipv6-prefix-pre1] commit
[~RouterA-ipv6-prefix-pre1] quit
● Configure an IPv6 address pool named pool1 and bind it to the prefix pool
named pre1. Bind the BR name br_name to the IPv6 address pool. Then, the
BRAS encapsulates the IPv6 prefix as Option 90 information
(OPTION_S46_BR) into the DHCPv6 Response message sent to the MAP-E
users.
[~RouterA] ipv6 pool pool1 bas delegation
[*RouterA-ipv6-pool-pool1] prefix pre1
[*RouterA-ipv6-pool-pool1] option-s46 br-ipv6-address br_name
[*RouterA-ipv6-pool-pool1] commit
[~RouterA-ipv6-pool-pool1] quit
Step 4 Configure a MAP-E instance on the device, and bind the configured BR address and
BMR rule to the instance. The BR address is used to direct the encapsulated traffic
of MAP-CEs to an interface board and to select the MAP-E instance for conversion;
the bound BMR rule is used to encapsulate and verify packets in the instance.
[~RouterA] map-e instance 2 id 2
[*RouterA-map-e-instance-2] br-ipv6-address br_name
[*RouterA-map-e-instance-2] map-rule bmr_name
[*RouterA-map-e-instance-2] commit
[~RouterA-map-e-instance-2] quit
Step 5 Configure IP addresses of the user- and network-side interfaces on the device.
[~RouterA] interface GigabitEthernet0/2/0
[~RouterA-GigabitEthernet0/2/0] ipv6 enable
[*RouterA-GigabitEthernet0/2/0] ipv6 address auto link-local
[*RouterA-GigabitEthernet0/2/0] commit
[~RouterA-GigabitEthernet0/2/0] quit
[~RouterA] interface GigabitEthernet0/2/1
[~RouterA-GigabitEthernet0/2/1] ip address 11.1.1.1 24
[*RouterA-GigabitEthernet0/2/1] commit
[~RouterA-GigabitEthernet0/2/1] quit
[~RouterA] quit
----End
Definition
An IPv4 over IPv6 tunnel connects isolated IPv4 sites over the IPv6 network.
Objective
During the later transition phase from IPv4 to IPv6, IPv6 networks have been
widely deployed, and IPv4 sites are scattered across IPv6 networks. It is not
economical to connect these isolated IPv4 sites with private lines. The common
solution is the tunneling technology. With this technology, IPv4 over IPv6 tunnels
can be created on IPv6 networks to enable communication between isolated IPv4
sites through IPv6 public networks.
Benefits
Using IPv6 tunnels as virtual links for IPv4 networks allows carriers to fully utilize
existing networks without upgrading internal devices of their backbone networks.
Background
During the later transition phase from IPv4 to IPv6, IPv6 networks have been
widely deployed, and IPv4 sites are scattered across IPv6 networks. It is not
economical to connect these isolated IPv4 sites with private lines. The common
solution is the tunneling technology. With this technology, IPv4 over IPv6 tunnels
can be created on IPv6 networks to enable communication between isolated IPv4
sites through IPv6 public networks.
The IPv6 header fields relevant to IPv4 over IPv6 tunnels are as follows:
● Version: a 4-bit field indicating the version number of the Internet Protocol.
The value is 6 for an IPv6 header.
● Traffic Class: an 8-bit field indicating the traffic class of an IPv4 over IPv6
tunnel, used to identify the service class of packets; it is similar to the ToS
field in IPv4. The value is an integer ranging from 0 to 255. The default value
is 0.
● Flow Label: a 20-bit field used to mark the packets of a specified service flow
so that a device can recognize and provide special handling of packets in the
flow. The value is an integer ranging from 0 to 1048575. The default value is 0.
● Next Header: an 8-bit field indicating the type of header immediately
following the IPv6 header. The value is 4 in IPv4 over IPv6 tunnel scenarios.
● Hop Limit: an 8-bit field indicating the maximum number of hops along a
tunnel, allowing packet transmission to be terminated when routing loops
occur on an IPv4 over IPv6 tunnel. The value is an integer ranging from 1 to
255. The default value is 64.
● Source Address: a 128-bit field indicating the source IPv6 address of an IPv6
packet. The address is a 32-digit hexadecimal number, in the format
X:X:X:X:X:X:X:X.
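As an illustration of the bit widths listed above, the following Python sketch (ours, not device code) packs the Version, Traffic Class, and Flow Label fields into the first 32 bits of an IPv6 header and enforces the value ranges from the field descriptions.

```python
def vtf_word(version: int = 6, traffic_class: int = 0, flow_label: int = 0) -> int:
    """Pack the first 32 bits of an IPv6 header:
    4-bit Version | 8-bit Traffic Class | 20-bit Flow Label."""
    assert version == 6                      # always 6 for an IPv6 header
    assert 0 <= traffic_class <= 255         # 8-bit field
    assert 0 <= flow_label <= 1048575        # 20-bit field (2**20 - 1)
    return (version << 28) | (traffic_class << 20) | flow_label

w = vtf_word(traffic_class=0, flow_label=5)
print(hex(w))   # 0x60000005
```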
Implementation Principle
An IPv4 over IPv6 tunnel is manually configured between two border routers. You
must manually specify the source address/source interface and the destination
address/destination domain name of the tunnel.
As shown in Figure 1-155, packets passing through the IPv4 over IPv6 tunnel are
processed on border nodes (B and C), and the other nodes (A, D, and intermediate
nodes between B and C) are unaware of the tunnel. IPv4 packets are transmitted
between A, B, C, and D, whereas IPv6 packets are transmitted between B and C.
Therefore, border routers B and C must be able to process both IPv4 and IPv6
packets, that is, IPv4/IPv6 dual protocol stack must be supported and enabled on
B and C.
Figure 1-155 shows the processing of IPv4 packets along an IPv4 over IPv6 tunnel.
1. IPv4 packet forwarding: Node A sends an IPv4 packet to node B in which the
destination address is the IPv4 address of node D.
2. Tunnel encapsulation: After B receives the IPv4 packet from A on the IPv4
network, B finds that the destination address of the IPv4 packet is not itself
and the outbound interface to the next hop is a tunnel interface. B then adds
an IPv6 header to the packet. Specifically, node B encapsulates its own IPv6
address and that of node C into the Source Address and Destination Address
fields, respectively, sets the value of the Version field to 6 and that of the Next
Header field to 4, and encapsulates other fields that ensure the transmission
of the packet along the tunnel as required.
3. Tunnel forwarding: Node B searches the IPv6 routing table based on the
Destination Address field carried in the IPv6 packet header and forwards the
encapsulated IPv6 packet to node C. Other nodes on the IPv6 network are
unaware of the tunnel and process the encapsulated packet as an ordinary
IPv6 packet.
4. Tunnel decapsulation: Upon receipt of the IPv6 packet in which the
destination address is its own IPv6 address, node C decapsulates the packet by
removing its IPv6 header based on the Version field and determines the
encapsulated packet is an IPv4 packet based on the Next Header field.
5. IPv4 packet forwarding: Node C searches the IPv4 routing table based on the
Destination Address field of the IPv4 packet and forwards the packet to Node
D.
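Steps 2 and 4 above can be sketched as a simple encapsulate/decapsulate round trip. This is an illustration of the header handling only; the function names are ours, and the code is not what the router actually runs.

```python
import socket
import struct

def encapsulate(inner_ipv4: bytes, src6: str, dst6: str) -> bytes:
    """Step 2 on node B: prepend a 40-byte IPv6 header.
    Version=6, Payload Length=len(inner), Next Header=4, Hop Limit=64."""
    hdr = struct.pack("!IHBB", 6 << 28, len(inner_ipv4), 4, 64)
    return (hdr + socket.inet_pton(socket.AF_INET6, src6)
                + socket.inet_pton(socket.AF_INET6, dst6) + inner_ipv4)

def decapsulate(pkt: bytes) -> bytes:
    """Step 4 on node C: check Version and Next Header, then strip the
    outer 40-byte IPv6 header to recover the inner IPv4 packet."""
    version, next_header = pkt[0] >> 4, pkt[6]
    assert version == 6 and next_header == 4
    return pkt[40:]

inner = bytes(20)  # stand-in for an IPv4 packet from node A
pkt = encapsulate(inner, "2001:db8::b", "2001:db8::c")
assert decapsulate(pkt) == inner
print(len(pkt))    # 60: 40-byte outer IPv6 header + 20-byte inner packet
```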
Usage Scenario
To enable IPv4 networks to communicate with each other through an IPv6
network, configure an IPv4 over IPv6 tunnel on the devices where IPv4 networks
border an IPv6 network.
On the network shown in Figure 1-156, an IPv4 over IPv6 tunnel can be
established between two border devices to provide a stable connection for isolated
IPv4 networks, or between a terminal and a border device to allow the terminal to
access the remote IPv4 network. You can configure multiple IPv4 over IPv6 tunnels
on a border device to communicate with multiple IPv4 networks.
Pre-configuration Tasks
Before configuring an IPv4 over IPv6 tunnel, connect interfaces and configure IPv6
addresses for the interfaces so that the route between the interfaces is reachable.
Procedure
Step 1 Run system-view
In VPN scenarios, the loopback interface must be bound to the VPN instance to
which the device belongs.
----End
Context
An IPv4 over IPv6 tunnel is established between tunnel interfaces on two devices.
As such, you need to configure a tunnel interface on both devices. For an IPv4
over IPv6 tunnel interface, you must specify a protocol type, source IPv6 address/
source interface, and destination IPv6 address/destination domain name.
A tunnel interface is a logical interface and its status is Down in the following
situations:
● The destination address configured for the tunnel interface is unreachable or
is the address of the tunnel interface.
● The status of the source interface configured for the tunnel interface is Down.
● The IP address configured on the tunnel interface is invalid.
Perform the following steps on the devices at both ends of a tunnel:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run tunnel-protocol ipv4-ipv6
The tunnel protocol is set to IPv4 over IPv6.
Step 4 Run source { ipv6-address | interface-type interface-number }
A source IPv6 address or source interface is specified for the tunnel interface.
You can only specify a loopback interface as the source interface for a tunnel
interface. Likewise, you can only specify the IPv6 address of a loopback interface
as the source IPv6 address for a tunnel interface.
Step 5 Configure a destination IPv6 address or destination domain name for the tunnel
as required:
● Configure a destination IPv6 address.
Run the destination [ vpn-instance vpn-instance-name ] ipv6-address
command to specify a destination IPv6 address for the tunnel.
The destination IPv6 address specified for the tunnel can only be the IPv6
address of the loopback interface on the peer end.
● Configure a destination domain name.
Run the dns resolve command to enable dynamic DNS resolution.
Run the destination [ vpn-instance vpn-instance-name ] domain domain-
name command to specify a destination domain name for the tunnel.
In VPN scenarios, the VPN instance to which the peer device is bound must be
specified.
Step 6 Run ip address ip-address { mask | mask-length } [ sub ]
An IPv4 address is configured for the tunnel interface.
Step 7 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Choose one of the following methods to configure the route with the outgoing
interface as the tunnel interface:
● Run the ip route-static ip-address { mask | mask-length } tunnel interface-
number command to configure a static route.
When configuring a static route, you must configure both ends of the tunnel.
Note that the destination address is the destination IPv4 address of the
packet before IPv4 over IPv6 encapsulation is performed; the outbound
interface is the local tunnel interface.
In VPN scenarios, you must also specify the VPN instance to which the next-
hop address belongs.
● Configure dynamic routes. You can use the Border Gateway Protocol (BGP) or
the Interior Gateway Protocol (IGP), excluding Intermediate System-to-
Intermediate System (IS-IS). Detailed configurations are not mentioned here.
When configuring a dynamic routing protocol, you must enable the dynamic
routing protocol on tunnel interfaces and the IPv4 network-side interfaces.
Step 3 Run commit
----End
Procedure
Step 1 Run system-view
NOTE
The configuration in the tunnel interface view is similar to that in the system view, and
therefore is not described here.
The configurations in the system view take effect for all IPv4 over IPv6 tunnels configured on a
node. The configurations in the tunnel interface view take effect only for the current tunnel
interface and override the configurations in the system view.
A flow label value is set for the IPv4 over IPv6 tunnel so that the device can
recognize and provide special handling of packets in a specified flow.
The maximum number of hops along the IPv4 over IPv6 tunnel is configured so
that packet transmission can be terminated when routing loops occur on the
tunnel.
----End
Prerequisites
All configurations of an IPv4 over IPv6 tunnel are complete.
Procedure
● Run the display interface tunnel [ interface-number ] command to check the
status of tunnel interfaces.
● Run the display ip routing-table command to check IPv4 routing table
information.
● Run the ping { -a source-ip-address dest-ip-address | -vpn-instance vpn-
instance-name } command to check whether the two ends of the tunnel are
reachable.
----End
Networking Requirements
On the network shown in Figure 1-157, two IPv4 networks are connected with an
IPv6 network through DeviceA and DeviceB. To interconnect the hosts on the two
IPv4 networks, configure an IPv4 over IPv6 tunnel between DeviceA and DeviceB.
Interface 1 and interface 2 in this example represent GE 0/1/0 and GE 0/1/1, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IPv4 over IPv6 tunnel on the edge devices.
Data Preparation
To complete the configuration, you need the following data:
● Source and destination IPv6 addresses of the tunnel interfaces
● IPv4 addresses of tunnel interfaces
Procedure
Step 1 Configure IPv6 addresses for interfaces.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface gigabitethernet 0/1/0
[~DeviceA-GigabitEthernet0/1/0] ipv6 enable
[*DeviceA-GigabitEthernet0/1/0] ipv6 address 2001:db8:100::1 64
[*DeviceA-GigabitEthernet0/1/0] undo shutdown
[*DeviceA-GigabitEthernet0/1/0] commit
[~DeviceA-GigabitEthernet0/1/0] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface gigabitethernet 0/1/0
[~DeviceB-GigabitEthernet0/1/0] ipv6 enable
[*DeviceB-GigabitEthernet0/1/0] ipv6 address 2001:db8:100::2 64
[*DeviceB-GigabitEthernet0/1/0] undo shutdown
[*DeviceB-GigabitEthernet0/1/0] commit
[~DeviceB-GigabitEthernet0/1/0] quit
# Configure DeviceB.
[~DeviceB] interface gigabitethernet 0/1/1
[~DeviceB-GigabitEthernet0/1/1] ip address 1.1.1.2 255.255.255.0
[*DeviceB-GigabitEthernet0/1/1] undo shutdown
[*DeviceB-GigabitEthernet0/1/1] commit
[~DeviceB-GigabitEthernet0/1/1] quit
Step 4 Configure IPv4 addresses, source interfaces, and destination IPv6 addresses for
tunnel interfaces.
# Configure DeviceA.
[~DeviceA] interface tunnel 1
[~DeviceA-Tunnel1] tunnel-protocol ipv4-ipv6
[~DeviceA-Tunnel1] ip address 192.168.1.1 255.255.255.0
[*DeviceA-Tunnel1] source LoopBack 1
[*DeviceA-Tunnel1] destination 2001:db8:300::2
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
# Configure DeviceB.
[~DeviceB] interface tunnel 1
[~DeviceB-Tunnel1] tunnel-protocol ipv4-ipv6
[~DeviceB-Tunnel1] ip address 192.168.1.2 255.255.255.0
[*DeviceB-Tunnel1] source LoopBack 1
[*DeviceB-Tunnel1] destination 2001:db8:200::1
[*DeviceB-Tunnel1] commit
[~DeviceB-Tunnel1] quit
Step 5 Configure the route with the outgoing interface as the tunnel interface.
# Configure DeviceA.
[~DeviceA] ipv6 route-static 2001:db8:300:: 64 gigabitethernet 0/1/0 2001:db8:100::2
[*DeviceA] commit
[~DeviceA] ospf 1
[*DeviceA-ospf-1] area 0
[*DeviceA-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[*DeviceA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*DeviceA-ospf-1-area-0.0.0.0] commit
# Configure DeviceB.
[~DeviceB] ipv6 route-static 2001:db8:200:: 64 gigabitethernet 0/1/0 2001:db8:100::1
[*DeviceB] commit
[~DeviceB] ospf 1
[*DeviceB-ospf-1] area 0
[*DeviceB-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[*DeviceB-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*DeviceB-ospf-1-area-0.0.0.0] commit
# Configure PC1.
Configure IP address 10.1.1.1/24 for PC1. This IP address must be on the same
network segment as that of GE 0/1/1 on DeviceA. The method of configuring an
IP address is determined by the operating system of PC1, and therefore is not
described.
# Configure PC2.
Configure IP address 172.16.1.1/24 for PC2. This IP address must be on the same
network segment as that of GE 0/1/1 on DeviceB. The method of configuring an IP
address is determined by the operating system of PC2, and therefore is not
described.
# Ping the IPv4 address of the tunnel interface on DeviceB from DeviceA. The
command output shows that the ping operation is successful.
[~DeviceA] ping 192.168.1.2
PING 192.168.1.2: 56 data bytes, press CTRL_C to break
Reply from 192.168.1.2: bytes=56 Sequence=1 ttl=255 time=108 ms
Reply from 192.168.1.2: bytes=56 Sequence=2 ttl=255 time=2 ms
Reply from 192.168.1.2: bytes=56 Sequence=3 ttl=255 time=3 ms
Reply from 192.168.1.2: bytes=56 Sequence=4 ttl=255 time=3 ms
Reply from 192.168.1.2: bytes=56 Sequence=5 ttl=255 time=2 ms
# On PC1, ping the IP address of PC2. The following command output shows that
the ping operation succeeds.
C:\> ping 172.16.1.1
Pinging 172.16.1.1 with 32 bytes of data:
Reply from 172.16.1.1: time<1ms
Reply from 172.16.1.1: time<1ms
Reply from 172.16.1.1: time<1ms
Reply from 172.16.1.1: time<1ms
Ping statistics for 172.16.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
----End
Configuration Files
● Configuration file of DeviceA
#
sysname DeviceA
#
interface GigabitEthernet0/1/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:100::1/64
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:200::1/64
binding tunnel ipv4-ipv6
#
interface Tunnel1
ip address 192.168.1.1 255.255.255.0
tunnel-protocol ipv4-ipv6
source LoopBack1
destination 2001:DB8:300::2
#
ospf 1
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 10.1.1.0 0.0.0.255
#
ipv6 route-static 2001:DB8:300:: 64 GigabitEthernet0/1/0 2001:DB8:100::2
#
return
Definition
An IPv6 over IPv4 tunnel connects isolated IPv6 sites over the IPv4 network.
Objective
During the earlier transition phase from IPv4 to IPv6, IPv4 networks have been
widely deployed, and IPv6 sites are scattered across IPv4 networks. It is not
economical to connect these isolated IPv6 sites with private lines. The common
solution is the tunneling technology. With this technology, IPv6 over IPv4 tunnels
can be created on IPv4 networks to enable communication between isolated IPv6
sites through IPv4 public networks.
Benefits
IPv6 over IPv4 tunnels fully utilize existing networks; devices on an IPv4 backbone
network do not need to be upgraded to support IPv6.
1. On the border router, IPv4/IPv6 dual stack is enabled, and an IPv6 over IPv4
tunnel is configured.
2. After the border router receives a packet from the IPv6 network, if the
destination address of the packet is not the border router and the outbound
interface is a tunnel interface, the border router appends an IPv4 header to
the IPv6 packet to encapsulate it as an IPv4 packet.
3. On the IPv4 network, the encapsulated packet is transmitted to the remote
border router.
4. The remote border router receives the packet, removes the IPv4 header, and
then sends the decapsulated IPv6 packet to the remote IPv6 network.
IPv6 over IPv4 tunnels are classified into IPv6 over IPv4 manual tunnels and
IPv6-to-IPv4 (6to4) tunnels depending on the application scenarios.
The following describes the characteristics and applications of each.
IPv6-to-IPv4 Tunnel
A 6to4 tunnel can connect multiple isolated IPv6 sites through an IPv4 network. A
6to4 tunnel can be a P2MP connection, whereas a manual tunnel is a P2P
connection. Therefore, routers on both ends of the 6to4 tunnel are not configured
in pairs.
A 6to4 tunnel uses a special IPv6 address, a 6to4 address, in the format of
2002:IPv4 address:subnet ID:interface ID. A 6to4 address has a 48-bit prefix
composed of 2002 and the IPv4 address. The IPv4 address is the globally unique
IPv4 address applied for by an isolated IPv6 site. It must be configured on
the physical interfaces connecting the border routers between IPv6 and IPv4
networks to the IPv4 network. The IPv6 address has a 16-bit subnet ID and a 64-
bit interface ID, which are assigned by users in the isolated IPv6 site.
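The derivation of a 6to4 prefix from a site's IPv4 address can be sketched as follows (illustrative Python; the helper name is ours):

```python
import ipaddress

def to_6to4_prefix(ipv4: str) -> str:
    """Derive the 48-bit 6to4 prefix 2002:<IPv4>::/48 from a site's
    globally unique IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 16 bits of 2002, then the 32 IPv4 bits; the remaining 80 bits are zero
    prefix = (0x2002 << 112) | (v4 << 80)
    return str(ipaddress.ip_network((prefix, 48)))

# 192.88.99.1 is the well-known 6to4 relay anycast IPv4 address
print(to_6to4_prefix("192.88.99.1"))   # 2002:c058:6301::/48
```

The 192.88.99.1 example shows why the relay anycast prefix mentioned below is 2002:c058:6301::/48: c058:6301 is simply 192.88.99.1 in hexadecimal.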
When the 6to4 tunnel is used for communication between the 6to4 network and
the native IPv6 network, you can configure an anycast address with the prefix
2002:c058:6301::/48 on the tunnel interface of the 6to4 relay router.
The difference between a 6to4 address and anycast address is as follows:
● If a 6to4 address is used, you must configure different addresses for tunnel
interfaces of all devices.
● If an anycast address is used, you must configure the same address for the
tunnel interfaces of all devices, effectively reducing the number of addresses.
A 6to4 network refers to a network on which all nodes are configured with 6to4
addresses. A native IPv6 network refers to a network on which nodes do not need
to be configured with 6to4 addresses. A 6to4 relay is required for communication
between 6to4 networks and native IPv6 networks.
6RD Tunneling
IPv6 rapid deployment (6RD) tunneling allows rapid deployment of IPv6 services
over an existing IPv4 network.
As an enhancement to the 6to4 solution, 6RD tunneling allows service providers to
use one of their own IPv6 prefixes instead of the well-known 2002::/16 prefix
standardized for 6to4. 6RD tunneling provides more flexible network planning,
allowing different service providers to deploy 6RD tunnels using different prefixes.
Therefore, 6RD tunneling is the most widely used IPv6 over IPv4 tunneling
technology.
Basic Concepts
Figure 1-160 introduces the basic concepts of 6RD tunneling and 6RD relay.
● 6RD domain
A 6RD domain is a special IPv6 network. The IPv6 address prefixes of devices
or hosts within a 6RD domain share the same 6RD delegated prefix. A 6RD
domain consists of 6RD customer edge (CE) devices and 6RD border relays
(BRs). Each 6RD domain uses a unique 6RD prefix.
● 6RD CE
A 6RD CE is an edge node connecting a 6RD network to an IPv4 network. An
IPv4 address needs to be configured for the interface connecting the 6RD CE
to the IPv4 network. An IPv6 address needs to be configured for the interface
connecting the 6RD CE to the 6RD network, and the IPv6 prefix is a 6RD
delegated prefix.
● 6RD BR
A 6RD BR is used to connect a 6RD network to an IPv6 network. At least one
IPv4 interface needs to be configured for the 6RD BR. Each 6RD domain has
only one 6RD BR.
● 6RD prefix
A 6RD prefix is an IPv6 prefix used by a service provider. It is part of a 6RD
delegated prefix.
● IPv4 prefix length
The IPv4 prefix length specifies the number of high-order bits to be removed
from the source tunnel address (an IPv4 address). The remaining bits of the
IPv4 address become part of the 6RD delegated prefix.
● 6RD delegated prefix
A 6RD delegated prefix is an IPv6 prefix assigned to a host or a device in a
6RD domain. The 6RD delegated prefix is created by combining a 6RD prefix
and all or part of an IPv4 address.
6RD Address Format
As shown in Figure 1-161, a 6RD address is composed of a 6RD prefix (IPv6 prefix
selected by a service provider for use by a 6RD domain), an IPv4 address, a subnet
ID, and an interface identifier.
The first 64 bits of a 6RD address consist of the 6RD delegated prefix and a
customized subnet ID. The 6RD delegated prefix is a combination of a 6RD
prefix and all or part of an IPv4 address. The length of the IPv4 address is
determined by the IPv4 prefix length configured for the 6RD tunnel. That is, after
subtracting specified high-order bits from the IPv4 address, the rest of the IPv4
address becomes part of the 6RD delegated prefix.
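The prefix combination described above can be sketched in Python. The function name and the example values (6RD prefix 2001:db8::/32, CE tunnel address 10.1.1.1, IPv4 prefix length 8) are illustrative assumptions, not values from this guide's examples.

```python
import ipaddress

def srd_delegated_prefix(srd_prefix: str, ce_ipv4: str,
                         ipv4_prefix_len: int) -> str:
    """6RD delegated prefix = 6RD prefix followed by the IPv4 address with its
    high-order ipv4_prefix_len bits removed."""
    net = ipaddress.ip_network(srd_prefix)
    v4 = int(ipaddress.IPv4Address(ce_ipv4))
    kept = 32 - ipv4_prefix_len                   # low-order IPv4 bits embedded
    suffix = v4 & ((1 << kept) - 1)
    dp_len = net.prefixlen + kept
    dp = int(net.network_address) | (suffix << (128 - dp_len))
    return str(ipaddress.ip_network((dp, dp_len)))

# Carrier 6RD prefix 2001:db8::/32; the common 10.0.0.0/8 part is removed
print(srd_delegated_prefix("2001:db8::/32", "10.1.1.1", 8))
# -> 2001:db8:101:100::/56
```

Note how a larger IPv4 prefix length shortens the delegated prefix, leaving more room for subnet IDs; with length 0 the whole IPv4 address is embedded and the delegated prefix is /64.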
Service Scenarios
A 6RD tunnel can be used in two scenarios: interworking between 6RD domains
and interworking between a 6RD domain and an IPv6 network.
● As shown in Figure 1-162, two 6RD domains interwork over a 6RD tunnel.
d. Upon receiving the IPv4 packet, 6RD CE B decapsulates the IPv4 packet,
searches for the destination address contained in the IPv6 packet header,
and routes the IPv6 packet to host B.
e. After receiving the packet, host B responds to the packet. The returned
packet is processed in a similar way.
● As shown in Figure 1-163, a 6RD domain and an IPv6 network interwork over
a 6RD tunnel.
With the tunneling technology, IPv6 over IPv4 tunnels can be created on
the IPv4 networks to connect the isolated IPv6 sites. To establish IPv6 over IPv4
tunnels, the IPv4/IPv6 dual stack must be enabled on the routers at the borders of
the IPv4 and IPv6 networks.
Usage Scenario
To enable IPv6 networks to communicate with each other through an IPv4
network, configure IPv6 over IPv4 tunnels on the routers where IPv6 networks
border an IPv4 network.
A manual IPv6 over IPv4 tunnel can be established between two border routers to
provide a stable connection for separated IPv6 networks. It can also be established
between a terminal system and a border router to provide a connection for the
terminal system to access an IPv6 network. The devices at the two ends of an IPv6
over IPv4 tunnel must support the IPv4/IPv6 dual stack. The intermediate devices
do not need to support the IPv4/IPv6 dual stack. You can configure multiple
manual IPv6 over IPv4 tunnels on the border router to communicate with multiple
IPv6 networks.
Pre-configuration Tasks
Before configuring a manual IPv6 over IPv4 tunnel, complete the following tasks:
Procedure
Step 1 Run system-view
NOTE
The destination address of an IPv6 over IPv4 tunnel can be a physical or loopback interface
address.
----End
Usage Scenario
To enable IPv6 networks to communicate with each other through an IPv4
network, configure IPv6 over IPv4 tunnels on the routers where IPv6 networks
border the IPv4 network.
6to4 tunnels use special 6to4 addresses that are in the format of 2002:a.b.c.d::/48,
in which a.b.c.d represents the source address of the tunnel interface. During
communication, the IPv4 address in a 6to4 address is used to encapsulate packets.
The 6to4 tunnel does not need to be configured with a destination address.
Pre-configuration Tasks
Before configuring a 6to4 tunnel, complete the following tasks:
● Connect interfaces and configure physical parameters for the interfaces to
ensure that the physical status of the interfaces is Up.
● Configure link layer protocol parameters for the interfaces to ensure that the
link layer protocol status of the interfaces is Up.
● Configure the IPv4/IPv6 dual stack.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
A tunnel interface is created.
Step 3 Run tunnel-protocol ipv6-ipv4 6to4
The tunnel mode is set to 6to4 tunnel mode.
Step 4 Run source { ip-address | interface-type interface-number }
The source address or source interface of the tunnel is specified.
Step 5 Run ipv6 enable
IPv6 is configured on the interface.
Step 6 Run ipv6 address { ipv6-address prefix-length | ipv6-address-mask }
An IPv6 address is configured for the tunnel interface.
NOTE
The IPv6 address prefix specified in this command must be the same as the prefix of the
address of the 6to4 network where the border router resides.
----End
Usage Scenario
A 6RD tunnel is a P2MP tunnel and is an enhancement to the 6to4 solution. 6RD
tunneling allows carriers to use one of their own IPv6 prefixes as the 6RD prefix,
mitigating the IPv4 address shortage of the 6to4 solution.
Pre-configuration Tasks
Before configuring a 6RD tunnel, complete the following tasks:
● Connect interfaces and set their physical parameters to ensure that the
physical status of the interfaces is Up.
● Configure link-layer protocol parameters for interfaces to go Up at the link
layer.
● Ensure reachable routes between the source and destination interfaces.
● Enable IPv4/IPv6 dual stack on the interfaces.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run tunnel-protocol ipv6-ipv4 6rd
The tunnel encapsulation mode is set to 6RD.
Step 4 Run source { ip-address | interface-type interface-number }
The source address or source interface is specified for the 6RD tunnel interface.
Step 5 Run ipv6-prefix { ipv6-address prefix-length | ipv6-prefix-mask }
The 6RD prefix and 6RD prefix length are configured.
A 6RD prefix is the IPv6 prefix assigned by a carrier for a 6RD address. The 6RD
delegated prefix is calculated by combining the 6RD prefix and part or all bits of
an IPv4 address. The 6RD delegated prefix is used to assign IPv6 address prefixes
to devices or hosts in a 6RD domain.
The 6RD prefix for each 6RD domain must be unique.
Step 6 Run ipv4-prefix length ipv4-prefix-length
The IPv4 prefix length of the 6RD tunnel is configured.
The IPv4 prefix length indicates the number of high-order bits to be deleted from
the source tunnel address (an IPv4 address). The remaining bits of the IPv4
address and the 6RD prefix together form the 6RD delegated prefix.
You are advised to set ipv4-prefix-length to 0 when the carrier network is an IPv4
network. On an IPv4 carrier network, the source tunnel address is entirely
embedded in the 6RD delegated prefix, which is used to search for the destination
tunnel address.
The IPv4 prefix length of the 6RD tunnel in each 6RD domain must be unique.
The 6RD delegated prefix can be automatically calculated based on the configured
source tunnel address or source interface, 6RD prefix, IPv6 prefix length, and IPv4
prefix length. You can run the display this interface command in the tunnel
interface view to check the calculated 6RD delegated prefix.
Step 7 Run ipv6 enable
IPv6 is enabled on the interface.
Step 8 Run ipv6 address { ipv6-address prefix-length | ipv6-address-mask }
An IPv6 address is configured for the tunnel interface, and the address prefix is the
6RD delegated prefix.
Step 9 (Optional) Run border-relay address br-ipv4-address
The IPv4 address of the 6RD BR is configured.
The IPv4 address of the 6RD BR needs to be configured on a 6RD CE only in a
relay scenario where a 6RD domain interworks with a native IPv6 network.
----End
Networking Requirements
As shown in Figure 1-164, two IPv6 networks are connected to DeviceB on the
IPv4 backbone network through DeviceA and DeviceC. To interconnect the two
IPv6 networks, configure a manual IPv6 over IPv4 tunnel between DeviceA and
DeviceC.
Figure 1-164 Networking diagram for configuring a manual IPv6 over IPv4 tunnel
NOTE
Precautions
When configuring a manual IPv6 over IPv4 tunnel, note the following points:
● Create a tunnel interface and set parameters for the tunnel interface.
● Perform the following configuration on the routers at both ends of the tunnel.
The source IP address of the local end is the destination IP address of the
remote end of the tunnel. Similarly, the destination IP address of the local end
is the source IP address of the remote end.
● Configure an IP address for the tunnel interface to support routing protocols.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each physical interface.
2. Configure IPv6 addresses for tunnel interfaces and source and destination
addresses for the involved tunnel.
3. Set the protocol type to IPv6-IPv4.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces
● IPv6 addresses of the tunnel interfaces, and source and destination addresses
of the tunnel
Procedure
Step 1 Configure IPv6 addresses for interfaces on DeviceA, DeviceB, and DeviceC.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface gigabitethernet 0/1/0
[~DeviceA-GigabitEthernet0/1/0] ip address 192.168.50.2 255.255.255.0
[*DeviceA-GigabitEthernet0/1/0] undo shutdown
[*DeviceA-GigabitEthernet0/1/0] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface gigabitethernet 0/1/0
[~DeviceB-GigabitEthernet0/1/0] ip address 192.168.50.1 255.255.255.0
[*DeviceB-GigabitEthernet0/1/0] undo shutdown
[*DeviceB-GigabitEthernet0/1/0] quit
[*DeviceB] interface gigabitethernet 0/2/0
[*DeviceB-GigabitEthernet0/2/0] ip address 192.168.51.1 255.255.255.0
[*DeviceB-GigabitEthernet0/2/0] undo shutdown
[*DeviceB-GigabitEthernet0/2/0] commit
[~DeviceB-GigabitEthernet0/2/0] quit
# Configure DeviceC.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceC
[*HUAWEI] commit
[~DeviceC] interface gigabitethernet 0/1/0
[~DeviceC-GigabitEthernet0/1/0] ip address 192.168.51.2 255.255.255.0
[*DeviceC-GigabitEthernet0/1/0] undo shutdown
[*DeviceC-GigabitEthernet0/1/0] quit
Step 2 Create tunnel interfaces, and configure a tunnel mode, IPv6 addresses for the
tunnel interfaces, and source and destination addresses for the tunnel.
# Configure DeviceA.
[*DeviceA] interface Tunnel 10
[*DeviceA-Tunnel10] tunnel-protocol ipv6-ipv4
[*DeviceA-Tunnel10] ipv6 enable
[*DeviceA-Tunnel10] ipv6 address 2001:db8::1 64
[*DeviceA-Tunnel10] source 192.168.50.2
[*DeviceA-Tunnel10] destination 192.168.51.2
[*DeviceA-Tunnel10] quit
# Configure DeviceC.
[*DeviceC] interface Tunnel 10
[*DeviceC-Tunnel10] tunnel-protocol ipv6-ipv4
[*DeviceC-Tunnel10] ipv6 enable
[*DeviceC-Tunnel10] ipv6 address 2001:db8::2 64
[*DeviceC-Tunnel10] source 192.168.51.2
[*DeviceC-Tunnel10] destination 192.168.50.2
[*DeviceC-Tunnel10] quit
Step 3 Configure static routes to ensure that DeviceA and DeviceC are reachable.
# Configure DeviceA.
[*DeviceA] ip route-static 192.168.51.0 255.255.255.0 192.168.50.1
[*DeviceA] commit
# Configure DeviceC.
[*DeviceC] ip route-static 192.168.50.0 255.255.255.0 192.168.51.1
[*DeviceC] commit
# Ping the IPv6 address of Tunnel 10 on DeviceA from DeviceC. DeviceC can
receive return packets from DeviceA.
[~DeviceC] ping ipv6 2001:db8::1
PING 2001:db8::1 : 56 data bytes, press CTRL_C to break
Reply from 2001:db8::1
bytes=56 Sequence=1 hop limit=64 time = 28 ms
Reply from 2001:db8::1
bytes=56 Sequence=2 hop limit=64 time = 27 ms
Reply from 2001:db8::1
bytes=56 Sequence=3 hop limit=64 time = 26 ms
Reply from 2001:db8::1
bytes=56 Sequence=4 hop limit=64 time = 27 ms
Reply from 2001:db8::1
bytes=56 Sequence=5 hop limit=64 time = 26 ms
--- 2001:db8::1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 26/26/28 ms
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 192.168.50.2 255.255.255.0
#
interface Tunnel 10
ipv6 enable
ipv6 address 2001:db8::1/64
tunnel-protocol ipv6-ipv4
source 192.168.50.2
destination 192.168.51.2
#
ip route-static 192.168.51.0 255.255.255.0 192.168.50.1
#
return
● DeviceC configuration file
#
sysname DeviceC
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 192.168.51.2 255.255.255.0
#
interface Tunnel 10
ipv6 enable
ipv6 address 2001:db8::2/64
tunnel-protocol ipv6-ipv4
source 192.168.51.2
destination 192.168.50.2
#
ip route-static 192.168.50.0 255.255.255.0 192.168.51.1
#
return
Networking Requirements
As shown in Figure 1-165, two IPv6 networks are 6to4 networks. DeviceA and
DeviceB each connect a 6to4 network to the IPv4 backbone network. To
interconnect the two 6to4 networks, a 6to4 tunnel needs to be configured
between DeviceA and DeviceB.
To allow interworking of 6to4 networks, a 6to4 address needs to be configured
for each host. A 6to4 address uses the prefix 2002:IPv4-address::/48, where the
IPv4 address is written in hexadecimal. As shown in Figure 1-165, the IPv4
address of the interface connecting DeviceA to the IPv4 network is 1.1.1.1
(0101:0101 in hexadecimal). The 6to4 network where DeviceA resides therefore
uses the prefix 2002:101:101::/48, and its subnets use 64-bit prefixes such as
2002:101:101:1::/64.
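The 2002::/16 mapping described above is mechanical: the 32-bit IPv4 address is written in hexadecimal and appended to 2002::/16, giving a 48-bit prefix. A small Python sketch of the derivation:

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the /48 6to4 prefix 2002:aabb:ccdd::/48 from an IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))   # the address as a 32-bit integer
    # Place 0x2002 in the first 16 bits and the IPv4 bits directly after it.
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("1.1.1.1"))   # the prefix used by DeviceA in this example
```

For 1.1.1.1 this yields 2002:101:101::/48, matching the tunnel interface address 2002:101:101::1 configured below.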
Precautions
When configuring a 6to4 tunnel, note the following points:
● Create a tunnel interface and set parameters for the tunnel interface.
● Configure only the source IPv4 address of the tunnel. The destination IPv4
address does not need to be configured: it is derived from the IPv4 address
embedded in the destination 6to4 address of each IPv6 packet. The source IP
address of a 6to4 tunnel must be unique.
● Assign a 6to4 address to the interface that connects a border router to a 6to4
network and an IPv4 address to the interface that connects a border router to
an IPv4 network.
● Configure an IP address for the tunnel interface to support routing protocols.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Create 6to4 tunnel interfaces on DeviceA and DeviceB, set the tunnel mode
to 6to4, and configure IPv6 addresses and source addresses for the tunnel
interfaces.
3. Configure a static route to 2002::/16 through the tunnel interface on each
device.
Data Preparation
To complete the configuration, you need the following data:
● IPv4 and IPv6 addresses of the interfaces
● IPv6 addresses and source IPv4 addresses of the tunnel interfaces
Procedure
Step 1 Configure IP addresses for interfaces on DeviceA and DeviceB.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface GigabitEthernet 0/1/0
[~DeviceA-GigabitEthernet0/1/0] ip address 1.1.1.1 8
[*DeviceA-GigabitEthernet0/1/0] undo shutdown
[*DeviceA-GigabitEthernet0/1/0] quit
[*DeviceA] interface gigabitethernet 0/1/1
[*DeviceA-GigabitEthernet0/1/1] ipv6 enable
[*DeviceA-GigabitEthernet0/1/1] ipv6 address 2002:101:101:1::1 64
[*DeviceA-GigabitEthernet0/1/1] undo shutdown
[*DeviceA-GigabitEthernet0/1/1] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface GigabitEthernet 0/1/0
[~DeviceB-GigabitEthernet0/1/0] ip address 1.1.1.2 8
[*DeviceB-GigabitEthernet0/1/0] undo shutdown
[*DeviceB-GigabitEthernet0/1/0] quit
[*DeviceB] interface gigabitethernet 0/1/1
[*DeviceB-GigabitEthernet0/1/1] ipv6 enable
[*DeviceB-GigabitEthernet0/1/1] ipv6 address 2002:101:102:1::1 64
[*DeviceB-GigabitEthernet0/1/1] undo shutdown
[*DeviceB-GigabitEthernet0/1/1] quit
Step 2 Create 6to4 tunnel interfaces, set the tunnel mode to 6to4, and configure
IPv6 addresses and source addresses for the tunnel interfaces.
# Configure DeviceA.
[*DeviceA] interface Tunnel 10
[*DeviceA-Tunnel10] tunnel-protocol ipv6-ipv4 6to4
[*DeviceA-Tunnel10] ipv6 enable
[*DeviceA-Tunnel10] ipv6 address 2002:101:101::1 64
[*DeviceA-Tunnel10] source 1.1.1.1
[*DeviceA-Tunnel10] quit
# Configure DeviceB.
[*DeviceB] interface Tunnel 10
[*DeviceB-Tunnel10] tunnel-protocol ipv6-ipv4 6to4
[*DeviceB-Tunnel10] ipv6 enable
[*DeviceB-Tunnel10] ipv6 address 2002:101:102::1 64
[*DeviceB-Tunnel10] source 1.1.1.2
[*DeviceB-Tunnel10] quit
NOTE
A reachable route must exist between DeviceA and DeviceB. In this example, the two
routers are directly connected. Therefore, no routing protocol is configured.
Step 3 Configure static routes to 2002::/16 through the tunnel interfaces.
# Configure DeviceA.
[*DeviceA] ipv6 route-static 2002:: 16 Tunnel 10
[*DeviceA] commit
# Configure DeviceB.
[*DeviceB] ipv6 route-static 2002:: 16 Tunnel 10
[*DeviceB] commit
# Ping the 6to4 address of GE 0/1/1 on DeviceB from DeviceA. The command
output shows that the ping is successful.
[~DeviceA] ping ipv6 2002:101:102:1::1
PING 2002:101:102:1::1 : 56 data bytes, press CTRL_C to break
Reply from 2002:101:102:1::1
bytes=56 Sequence=1 hop limit=64 time=37 ms
Reply from 2002:101:102:1::1
bytes=56 Sequence=2 hop limit=64 time=2 ms
Reply from 2002:101:102:1::1
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 1.1.1.1 255.0.0.0
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2002:101:101:1::1/64
#
interface Tunnel 10
ipv6 enable
ipv6 address 2002:101:101::1/64
tunnel-protocol ipv6-ipv4 6to4
source 1.1.1.1
#
ipv6 route-static 2002:: 16 Tunnel 10
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 1.1.1.2 255.0.0.0
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2002:101:102:1::1/64
#
interface Tunnel 10
ipv6 enable
ipv6 address 2002:101:102::1/64
tunnel-protocol ipv6-ipv4 6to4
source 1.1.1.2
#
ipv6 route-static 2002:: 16 Tunnel 10
#
return
Networking Requirements
As shown in Figure 1-166, DeviceA functions as a 6to4 router and connects to the
6to4 network; DeviceB is a 6to4 relay router and connects to the IPv6 network
(2001:db8::/64); DeviceA connects to DeviceB through the IPv4 backbone network.
To interconnect the hosts on the 6to4 and IPv6 networks, a 6to4 tunnel between
DeviceA and DeviceB needs to be established.
The method of configuring a tunnel between a 6to4 relay router and a 6to4 router
is the same as the method of configuring a tunnel between 6to4 routers. To
interconnect a 6to4 network with an IPv6 network, configure a static route to the
IPv6 network on each 6to4 router.
Precautions
When configuring a 6to4 tunnel, note the following points:
● Create a tunnel interface and set parameters for the tunnel interface.
● Configure only the source IPv4 address of the tunnel. The destination IPv4
address of the tunnel is the same as the destination IPv4 address contained in
the original IPv6 packet. The source address of a 6to4 tunnel must be unique.
● Assign a 6to4 address to the interface that connects a border router to a 6to4
network and an IPv4 address to the interface that connects a border router to
an IPv4 network.
● Configure an IP address for the tunnel interface to support routing protocols.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Create 6to4 tunnel interfaces on DeviceA and DeviceB, set the tunnel mode
to 6to4, and configure IPv6 addresses and source addresses for the tunnel
interfaces.
3. Configure static routes to direct traffic destined for the remote networks to
the tunnel interfaces.
Data Preparation
To complete the configuration, you need the following data:
● IPv4 and IPv6 addresses of the interfaces
● Source IP address of the tunnel interface
● Static route to the indirectly connected router
Procedure
Step 1 Configure IP addresses for interfaces on DeviceA and DeviceB.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface GigabitEthernet 0/1/0
[~DeviceA-GigabitEthernet0/1/0] ip address 1.1.1.1 255.0.0.0
[*DeviceA-GigabitEthernet0/1/0] undo shutdown
[*DeviceA-GigabitEthernet0/1/0] quit
[*DeviceA] interface gigabitethernet 0/1/1
[*DeviceA-GigabitEthernet0/1/1] ipv6 enable
[*DeviceA-GigabitEthernet0/1/1] ipv6 address 2002:101:101:1::1 64
[*DeviceA-GigabitEthernet0/1/1] undo shutdown
[*DeviceA-GigabitEthernet0/1/1] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface GigabitEthernet 0/1/0
[~DeviceB-GigabitEthernet0/1/0] ip address 2.2.2.2 255.0.0.0
[*DeviceB-GigabitEthernet0/1/0] undo shutdown
[*DeviceB-GigabitEthernet0/1/0] quit
[*DeviceB] interface gigabitethernet 0/1/1
[*DeviceB-GigabitEthernet0/1/1] ipv6 enable
[*DeviceB-GigabitEthernet0/1/1] ipv6 address 2001:db8::1 64
[*DeviceB-GigabitEthernet0/1/1] undo shutdown
[*DeviceB-GigabitEthernet0/1/1] quit
Step 2 Create 6to4 tunnel interfaces, set the tunnel mode to 6to4, and configure
IPv6 addresses and source addresses for the tunnel interfaces.
# Configure DeviceA.
[*DeviceA] interface Tunnel 10
[*DeviceA-Tunnel10] tunnel-protocol ipv6-ipv4 6to4
[*DeviceA-Tunnel10] ipv6 enable
[*DeviceA-Tunnel10] ipv6 address 2002:101:101::1 64
[*DeviceA-Tunnel10] source 1.1.1.1
[*DeviceA-Tunnel10] quit
# Configure DeviceB.
[*DeviceB] interface Tunnel 10
[*DeviceB-Tunnel10] tunnel-protocol ipv6-ipv4 6to4
[*DeviceB-Tunnel10] ipv6 enable
[*DeviceB-Tunnel10] ipv6 address 2002:202:202::1 64
[*DeviceB-Tunnel10] source 2.2.2.2
[*DeviceB-Tunnel10] quit
Step 3 Configure static routes.
# Configure DeviceA. Direct traffic destined for 2002::/16 to the tunnel
interface, and use the 6to4 address of DeviceB's tunnel interface as the next
hop of the route to the IPv6 network.
[*DeviceA] ipv6 route-static 2002:: 16 Tunnel 10
[*DeviceA] ipv6 route-static :: 0 2002:202:202::1
[*DeviceA] commit
# Ping the IPv6 address of GE 0/1/1 on DeviceB from DeviceA. The command
output shows that the ping is successful.
[~DeviceA] ping ipv6 2001:db8::1
PING 2001:db8::1 : 56 data bytes, press CTRL_C to break
Reply from 2001:db8::1
bytes=56 Sequence=1 hop limit=64 time=10 ms
Reply from 2001:db8::1
bytes=56 Sequence=2 hop limit=64 time=2 ms
Reply from 2001:db8::1
bytes=56 Sequence=3 hop limit=64 time=2 ms
Reply from 2001:db8::1
bytes=56 Sequence=4 hop limit=64 time=2 ms
Reply from 2001:db8::1
bytes=56 Sequence=5 hop limit=64 time=2 ms
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 1.1.1.1 255.0.0.0
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2002:101:101:1::1/64
#
interface Tunnel 10
ipv6 enable
ipv6 address 2002:101:101::1/64
tunnel-protocol ipv6-ipv4 6to4
source 1.1.1.1
#
ipv6 route-static :: 0 2002:202:202::1
#
ipv6 route-static 2002:: 16 Tunnel 10
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 2.2.2.2 255.0.0.0
#
interface GigabitEthernet0/1/1
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1/64
#
interface Tunnel 10
ipv6 enable
ipv6 address 2002:202:202::1/64
tunnel-protocol ipv6-ipv4 6to4
source 2.2.2.2
#
ipv6 route-static 2002:: 16 Tunnel 10
#
return
Networking Requirements
As shown in Figure 1-167, both DeviceA and DeviceB support the IPv4/IPv6 dual
stack, and they are connected to an IPv6 network and an IPv4 network. Both
DeviceA and DeviceB are 6RD CEs. The IPv6 networks are 6RD networks. A 6RD
tunnel needs to be established between DeviceA and DeviceB so that the hosts on
the two IPv6 networks can communicate.
Figure 1-167 Networking diagram for configuring a 6RD tunnel to allow the
clients and devices in different 6RD domains to communicate
Configuration Roadmap
The configuration roadmap is as follows:
1. On DeviceA and DeviceB, configure IPv4 addresses for the physical interfaces
connected to the IPv4 network and enable IPv6 packet forwarding.
2. On DeviceA and DeviceB, configure the source IPv4 address, 6RD prefix, IPv6
prefix length, and IPv4 prefix length for a 6RD tunnel so that the devices can
calculate the 6RD delegated prefix based on a combination of these
parameters.
3. On DeviceA and DeviceB, configure IPv6 addresses for the physical interfaces
connected to the 6RD domains based on the 6RD delegated prefix.
4. Configure the IPv6 addresses on PC1 and PC2, with the IPv6 prefix set to a 64-
bit address prefix that contains the 6RD prefix and subnet ID.
5. On DeviceA, configure a static route destined for the 6RD domain in which
DeviceB resides. On DeviceB, configure a static route destined for the 6RD
domain in which DeviceA resides.
Data Preparation
To complete the configuration, you need the following data:
● IPv4 addresses of interfaces
● Source IPv4 addresses of 6RD tunnel interfaces
● 6RD prefix of the 6RD tunnel
● IPv4 prefix length of the 6RD tunnel
Procedure
Step 1 Configure IPv4 addresses for interfaces that connect devices to the IPv4 network.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface gigabitethernet 0/1/1
[*DeviceA-GigabitEthernet0/1/1] ip address 10.1.1.1 24
[*DeviceA-GigabitEthernet0/1/1] commit
[~DeviceA-GigabitEthernet0/1/1] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface gigabitethernet 0/1/1
[*DeviceB-GigabitEthernet0/1/1] ip address 10.1.1.2 24
[*DeviceB-GigabitEthernet0/1/1] commit
[~DeviceB-GigabitEthernet0/1/1] quit
Step 2 Create a 6RD tunnel interface on DeviceA, set the tunnel mode to 6RD,
and configure tunnel attributes.
[~DeviceA] interface Tunnel 1
[*DeviceA-Tunnel1] tunnel-protocol ipv6-ipv4 6rd
[*DeviceA-Tunnel1] ipv6 enable
[*DeviceA-Tunnel1] source GigabitEthernet 0/1/1
[*DeviceA-Tunnel1] ipv6-prefix 2001:db8::/32
[*DeviceA-Tunnel1] ipv4-prefix length 8
[*DeviceA-Tunnel1] commit
Step 3 Create a 6RD tunnel interface on DeviceB, set the tunnel mode to 6RD,
and configure tunnel attributes.
# Configure DeviceB.
[~DeviceB] interface Tunnel 1
[*DeviceB-Tunnel1] tunnel-protocol ipv6-ipv4 6rd
[*DeviceB-Tunnel1] ipv6 enable
[*DeviceB-Tunnel1] source GigabitEthernet 0/1/1
[*DeviceB-Tunnel1] ipv6-prefix 2001:db8::/32
[*DeviceB-Tunnel1] ipv4-prefix length 8
[*DeviceB-Tunnel1] commit
NOTE
The 6RD delegated prefix can be automatically calculated based on the configured source
tunnel IPv4 address or source interface name, 6RD prefix, IPv6 prefix length, and IPv4 prefix
length. You can run the display this interface command to view the 6RD delegated prefix
and then configure an IPv6 address for the tunnel interface.
In this example, the IPv4 prefix length of the 6RD tunnel is set to 8 bits. To generate a 56-
bit 6RD delegated prefix, the device removes the left-most 8 bits from the tunnel source
address (IPv4 address of GE 0/1/1) and adds the 6RD prefix 2001:db8::/32 before the
remaining 24 bits (in hexadecimal notation) of the tunnel source address.
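The delegated-prefix calculation in the note above can be reproduced with the figures of this example (an illustrative Python sketch, not device behavior):

```python
import ipaddress

def sixrd_delegated_prefix(sixrd_prefix: str, tunnel_source: str,
                           ipv4_prefix_length: int) -> ipaddress.IPv6Network:
    """Combine a 6RD prefix with the low-order bits of the tunnel source
    IPv4 address to form the 6RD delegated prefix."""
    net = ipaddress.IPv6Network(sixrd_prefix)
    v4 = int(ipaddress.IPv4Address(tunnel_source))
    kept_bits = 32 - ipv4_prefix_length            # IPv4 bits that get embedded
    kept = v4 & ((1 << kept_bits) - 1)             # drop the leading IPv4 bits
    new_len = net.prefixlen + kept_bits            # e.g. 32 + 24 = 56
    base = int(net.network_address) | (kept << (128 - new_len))
    return ipaddress.IPv6Network((base, new_len))

print(sixrd_delegated_prefix("2001:db8::/32", "10.1.1.1", 8))
# 2001:db8:101:100::/56
```

Dropping the leading 8 bits of 10.1.1.1 leaves 0x010101, which, appended to 2001:db8::/32, gives the /56 delegated prefix 2001:db8:101:100:: used for the tunnel interface address below.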
Step 4 Configure IPv6 addresses for the tunnel interfaces based on the 6RD delegated
prefix.
# Configure DeviceA.
[~DeviceA-Tunnel1] ipv6 address 2001:db8:101:100::1 56
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
# Configure DeviceB.
[~DeviceB-Tunnel1] ipv6 address 2001:db8:101:200::1 56
[*DeviceB-Tunnel1] commit
[~DeviceB-Tunnel1] quit
Step 5 Configure IPv6 addresses for the physical interfaces connected to the 6RD
domains based on the 6RD delegated prefix.
# Configure DeviceA.
[~DeviceA] interface gigabitethernet 0/1/2
[*DeviceA-GigabitEthernet0/1/2] ipv6 enable
[*DeviceA-GigabitEthernet0/1/2] ipv6 address 2001:db8:101:101::1 64
[*DeviceA-GigabitEthernet0/1/2] commit
[~DeviceA-GigabitEthernet0/1/2] quit
# Configure DeviceB.
[~DeviceB] interface gigabitethernet 0/1/2
[*DeviceB-GigabitEthernet0/1/2] ipv6 enable
[*DeviceB-GigabitEthernet0/1/2] ipv6 address 2001:db8:101:201::1 64
[*DeviceB-GigabitEthernet0/1/2] commit
[~DeviceB-GigabitEthernet0/1/2] quit
Step 6 Configure static routes destined for the 6RD domains connected to DeviceA and
DeviceB.
# Configure a static route destined for the 6RD domain connected to DeviceB.
[~DeviceA] ipv6 route-static 2001:db8:: 32 Tunnel 1
[*DeviceA] commit
# Configure a static route destined for the 6RD domain connected to DeviceA.
[~DeviceB] ipv6 route-static 2001:db8:: 32 Tunnel 1
[*DeviceB] commit
NOTE
The leftmost 56 bits of the IPv6 address of GE 0/1/2 must be the same as the 6RD
delegated prefix.
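This consistency requirement can be checked mechanically. A small sketch (Python, using the addresses from this example) that tests whether an interface address falls inside the delegated prefix:

```python
import ipaddress

def shares_delegated_prefix(addr: str, delegated: str) -> bool:
    """Return True if the interface address lies inside the 6RD delegated
    prefix, i.e. its leading bits match the delegated prefix."""
    return ipaddress.IPv6Address(addr) in ipaddress.IPv6Network(delegated)

# GE 0/1/2 on DeviceA against DeviceA's /56 delegated prefix
print(shares_delegated_prefix("2001:db8:101:101::1", "2001:db8:101:100::/56"))
# An address outside the delegated prefix would fail the check
print(shares_delegated_prefix("2001:db8:202:101::1", "2001:db8:101:100::/56"))
```

An address that fails this check would be unreachable through the 6RD tunnel, because remote 6RD devices derive the tunnel destination from those leading bits.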
Step 7 Verify the configuration.
# On PC1, ping the IPv6 address of PC2. The following command output shows
that the ping is successful.
C:\> ping 2001:db8:101:201::2
Pinging 2001:db8:101:201::2 with 32 bytes of data:
Reply from 2001:db8:101:201::2: time<1ms
Reply from 2001:db8:101:201::2: time<1ms
Reply from 2001:db8:101:201::2: time<1ms
Reply from 2001:db8:101:201::2: time<1ms
Ping statistics for 2001:db8:101:201::2:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8:101:101::1/64
#
interface Tunnel 1
ipv6 enable
ipv6 address 2001:db8:101:100::1/56
tunnel-protocol ipv6-ipv4 6rd
source GigabitEthernet0/1/1
ipv6-prefix 2001:db8::/32
ipv4-prefix length 8
#
ipv6 route-static 2001:db8:: 32 Tunnel1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8:101:201::1/64
#
interface Tunnel 1
ipv6 enable
ipv6 address 2001:db8:101:200::1/56
tunnel-protocol ipv6-ipv4 6rd
source GigabitEthernet0/1/1
ipv6-prefix 2001:db8::/32
ipv4-prefix length 8
#
ipv6 route-static 2001:db8:: 32 Tunnel1
#
return
Networking Requirements
As shown in Figure 1-168, both DeviceA and DeviceB support the IPv4/IPv6 dual
stack and are connected to an IPv6 network and an IPv4 network. DeviceA
functions as a 6RD CE and is connected to an IPv6 6RD network. DeviceB
functions as a 6RD BR and is connected to a common IPv6 network outside the
6RD domain. A 6RD tunnel needs to be established between DeviceA and DeviceB
so that the hosts on the two IPv6 networks can communicate with one another.
Figure 1-168 Networking diagram for configuring a 6RD tunnel to enable a 6RD
domain and a common IPv6 network to communicate
Configuration Roadmap
The configuration roadmap is as follows:
1. On DeviceA and DeviceB, configure IPv4 addresses for the physical interfaces
connected to the IPv4 network and enable IPv6 packet forwarding.
2. On DeviceA and DeviceB, configure the source IPv4 address, 6RD prefix, IPv6
prefix length, and IPv4 prefix length for a 6RD tunnel. Then calculate the 6RD
delegated prefix based on a combination of these parameters.
3. On DeviceA and DeviceB, configure IPv6 addresses for the physical interfaces
connected to the 6RD domain and IPv6 network based on the 6RD delegated
prefix.
4. Configure the IPv6 addresses on PC1 and PC2 and set the IPv6 prefix of PC1
to a 64-bit address prefix that contains the 6RD prefix and subnet ID and the
IPv6 prefix of PC2 to a common IPv6 address prefix.
5. On DeviceA, configure a static route destined for the IPv6 network on which
DeviceB resides. On DeviceB, configure a static route destined for the 6RD
domain in which DeviceA resides.
Data Preparation
To complete the configuration, you need the following data:
● IPv4 addresses of interfaces
● Source IPv4 address of a 6RD tunnel interface
● 6RD prefix and length of a 6RD tunnel
● IPv4 prefix length of a 6RD tunnel
Procedure
Step 1 Configure IPv4 addresses for interfaces that connect devices to the IPv4 network.
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface gigabitethernet 0/1/1
[*DeviceA-GigabitEthernet0/1/1] ip address 10.1.1.1 24
[*DeviceA-GigabitEthernet0/1/1] commit
[~DeviceA-GigabitEthernet0/1/1] quit
# Configure DeviceB.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceB
[*HUAWEI] commit
[~DeviceB] interface gigabitethernet 0/1/1
[*DeviceB-GigabitEthernet0/1/1] ip address 10.1.1.2 24
[*DeviceB-GigabitEthernet0/1/1] commit
[~DeviceB-GigabitEthernet0/1/1] quit
Step 2 Create a 6RD tunnel interface on DeviceA, set the tunnel mode to 6RD,
and configure tunnel attributes. Specify the IPv4 address of the 6RD BR
(DeviceB) as the border relay address.
[~DeviceA] interface Tunnel 1
[*DeviceA-Tunnel1] tunnel-protocol ipv6-ipv4 6rd
[*DeviceA-Tunnel1] ipv6 enable
[*DeviceA-Tunnel1] source GigabitEthernet 0/1/1
[*DeviceA-Tunnel1] ipv6-prefix 2001:db8::/32
[*DeviceA-Tunnel1] ipv4-prefix length 8
[*DeviceA-Tunnel1] border-relay address 10.1.1.2
[*DeviceA-Tunnel1] commit
Step 3 Create a 6RD tunnel interface on DeviceB, set the tunnel mode to 6RD,
and configure tunnel attributes.
# Configure DeviceB.
[~DeviceB] interface Tunnel 1
[*DeviceB-Tunnel1] tunnel-protocol ipv6-ipv4 6rd
[*DeviceB-Tunnel1] ipv6 enable
[*DeviceB-Tunnel1] source GigabitEthernet 0/1/1
[*DeviceB-Tunnel1] ipv6-prefix 2001:db8::/32
[*DeviceB-Tunnel1] ipv4-prefix length 8
[*DeviceB-Tunnel1] commit
NOTE
The 6RD delegated prefix can be automatically calculated based on the configured source
tunnel address or source interface, 6RD prefix, IPv6 prefix length, and IPv4 prefix length.
You can run the display this interface command to view the 6RD delegated prefix and
then configure an IPv6 address for the tunnel interface.
Step 4 Configure IPv6 addresses for the tunnel interfaces based on the 6RD delegated
prefix.
# Configure DeviceA.
[~DeviceA-Tunnel1] ipv6 address 2001:db8:101:100::1 56
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
# Configure DeviceB.
[~DeviceB-Tunnel1] ipv6 address 2001:db8:101:200::1 56
[*DeviceB-Tunnel1] commit
[~DeviceB-Tunnel1] quit
Step 5 Configure IPv6 addresses for the physical interfaces connected to the 6RD
domain and the IPv6 network.
# Configure DeviceA.
[~DeviceA] interface gigabitethernet 0/1/2
[*DeviceA-GigabitEthernet0/1/2] ipv6 enable
[*DeviceA-GigabitEthernet0/1/2] ipv6 address 2001:db8:101:101::1 64
[*DeviceA-GigabitEthernet0/1/2] commit
[~DeviceA-GigabitEthernet0/1/2] quit
# Configure DeviceB.
[~DeviceB] interface gigabitethernet 0/1/2
[*DeviceB-GigabitEthernet0/1/2] ipv6 enable
[*DeviceB-GigabitEthernet0/1/2] ipv6 address 2001:db8::1 64
[*DeviceB-GigabitEthernet0/1/2] commit
[~DeviceB-GigabitEthernet0/1/2] quit
Step 6 Configure static routes.
# Configure a static route from DeviceA to the IPv6 network connected to
DeviceB.
[~DeviceA] ipv6 route-static 2001:db8:: 32 Tunnel 1
[*DeviceA] commit
# Configure a static route destined for the 6RD domain connected to DeviceA.
[~DeviceB] ipv6 route-static 2001:db8:: 32 Tunnel 1
[*DeviceB] commit
Step 7 Configure PC2.
Configure the IPv6 address 2001:db8::2/64 for PC2. This address must be on the
same network segment as GE 0/1/2 of DeviceB. Then configure a static route to
2001:db8:: 32 from PC2 through DeviceB. The method of configuring the IPv6
address and static route depends on the operating system running on PC2; the
configuration details are not provided.
Step 8 Verify the configuration.
# After completing the configuration, check the IPv6 status of Tunnel 1 on
DeviceA or DeviceB. The following command output shows that the IPv6 status of
Tunnel 1 is UP.
[~DeviceA] display ipv6 interface Tunnel 1
Tunnel1 current state : UP
IPv6 protocol current state : UP
IPv6 is enabled, link-local address is FE80::3ABA:9A00:9DC:D303
Global unicast address(es):
2001:DB8:101:100::1, subnet is 2001:DB8:101:100::/56
Joined group address(es):
FF02::1:FF00:1
FF02::1:FFDC:D303
FF02::2
FF02::1
MTU is 1500 bytes
ND DAD is enabled, number of DAD attempts: 1
ND reachable time is 1200000 milliseconds
ND retransmit interval is 1000 milliseconds
Hosts use stateless autoconfig for addresses
# On PC1, ping the IPv6 address of PC2. The following command output shows
that the ping is successful.
C:\> ping 2001:db8::2
Pinging 2001:db8::2 with 32 bytes of data:
Reply from 2001:db8::2: time<1ms
Reply from 2001:db8::2: time<1ms
Reply from 2001:db8::2: time<1ms
Reply from 2001:db8::2: time<1ms
Ping statistics for 2001:db8::2:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8:101:101::1/64
#
interface Tunnel 1
ipv6 enable
ipv6 address 2001:db8:101:100::1/56
tunnel-protocol ipv6-ipv4 6rd
source GigabitEthernet0/1/1
ipv6-prefix 2001:db8::/32
ipv4-prefix length 8
border-relay address 10.1.1.2
#
ipv6 route-static 2001:db8:: 32 Tunnel1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet0/1/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet0/1/2
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1/64
#
interface Tunnel 1
ipv6 enable
ipv6 address 2001:db8:101:200::1/56
tunnel-protocol ipv6-ipv4 6rd
source GigabitEthernet0/1/1
ipv6-prefix 2001:db8::/32
ipv4-prefix length 8
#
ipv6 route-static 2001:db8:: 32 Tunnel1
#
return