FortiGate-VM HA Deployment Guide for AVX Series Network Functions Platform
May-2019 rev. a
1. Introduction
2. Prerequisites
1. Introduction
Array's AVX Series network functions platform hosts up to 32 fully independent virtual appliances (VAs),
including Array load balancing, SSL VPN and WAF as well as third-party VAs from leading networking and
security vendors. Designed with managed service providers and enterprises in mind, the AVX Series
enables data center consolidation without sacrificing the agility of cloud and virtualization or the
performance of dedicated appliances. Uniquely capable of assigning dedicated CPU, SSL, memory and
interface resources per VA, the AVX Series network functions platform is the only solution to deliver
guaranteed performance in shared environments.
A firewall is a network security device that monitors incoming and outgoing network traffic and determines
whether to allow or block specific traffic based on a defined set of security rules. A firewall sandwich is a
deployment in which multiple firewalls are sandwiched between a pair of load balancers to improve
availability, scalability, and manageability across the IT infrastructure.
The following sections describe the steps required to deploy a Fortinet FortiGate-VM HA (High
Availability) cluster on the AVX Series network functions platform.
The Fortinet FortiGate-VM (Virtual Machine) is a Next-Generation Firewall that offers flexible deployment
from the network edge to the core, data center, internal segments, and the cloud. FortiGate-VM firewalls
deliver scalable performance for advanced security services such as threat protection and SSL inspection,
with ultra-low latency for protecting internal segments and mission-critical environments. For the purposes
of this deployment guide, the FortiGate-VM will be deployed on the AVX as a VA instance.
2. Prerequisites
This deployment guide requires the following hardware and software products.
The AVX appliance can be purchased from Array Networks or authorized resellers. For more information
on deploying the AVX appliance, please refer to the AVX Web UI Guide, which is accessible through the
product's Web User Interface.
The FortiGate-VM instances may be purchased from Fortinet or a reseller. For more information on
deploying the FortiGate-VM instances for KVM, please visit https://www.fortinet.com.
Note: Assuming you have all these components, the entire configuration in this deployment guide should
take roughly 90-120 minutes to complete.
In this deployment, there is one FortiGate-VM instance running on each of the two AVX platforms. The
FortiGate-VM instances have identical IP configurations on the ingress (port3) and egress (port4)
interfaces. Port3 and port4 are the traffic ports, and both use SR-IOV ports. The HA heartbeat uses
port1, a virtual port on the virtual switch; the virtual switch is bound to an SR-IOV port for
external communication.
The client machine and the Web server both run CentOS and are external to the AVX platforms. The two
CentOS machines are required only to validate the design.
The client will send Web requests to the CentOS Web server through the active FortiGate-VM instance. If
the master/active FortiGate-VM instance fails, the standby FortiGate-VM instance will take over as the
master/active FortiGate-VM. If the original master/active FortiGate-VM instance becomes available again,
it will resume ownership as the master/active instance.
1. Log in to the AVX console with the following credentials:
Login: array
Password: admin
2. Type “enable” and hit the <Enter> key twice to enter enable mode. No password is required.
AN>enable
Enable password:
AN#
3. Enter configuration mode.
AN#config terminal
AN(config)#
4. Set the hostname to AVX1.
AN(config)#hostname AVX1
AVX1(config)#
5. Configure the IP address and default gateway for the management port.
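The exact ArrayOS syntax may vary by version; the following is an illustrative sketch only, assuming
APV-style ArrayOS commands, with the management interface name and default gateway as placeholders
(the management IP matches the example used later in this guide):
AVX1(config)#ip address <management-interface> 10.10.152.171 255.255.255.0
AVX1(config)#ip route default <default-gateway-IP>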
6. Enable WebUI.
AVX1(config)#webui on
7. Save changes.
AVX1(config)#write memory
You may now access the AVX1 appliance using the WebUI at https://<IP>:8888. In this example,
https://10.10.152.171:8888.
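Optionally, you can first verify that the WebUI port is reachable from your workstation, for example
with curl (the -k flag skips verification of the appliance's certificate, which is typically self-signed):
curl -k https://10.10.152.171:8888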
Licenses are required for each VA instance. Please contact Fortinet to obtain licenses.
1. Create a FortiGate-VM VA instance named FG-1. Select the VA size to match the FortiGate-VM
instance size you are installing (see table below).
4. Click on Save. Navigate to VA Management > VA to view your newly created VA instance.
1. Create a Virtual Switch named vsw1 and attach the FG-1 VA instance. Assign the Virtual Port Name
to vport1.
3. Click on the General Settings tab and toggle the Binding Interface. Select an available SR-IOV port
for binding. In our example, port3 is selected.
5. Confirm the interfaces are correct for FG-1 by navigating to VA Management > VA and selecting FG-
1.
You should see one management port, two SR-IOV traffic ports and a virtual port.
1. Locate the VA instance named FG-1 and click on the start symbol under the Action column to start the
VA instance.
1. Log in to the FG-1 console with the username “admin”. You can use the AVX WebUI VA console or
the AVX VA console option. By default, there is no password; just press Enter.
2. The AVX ports do not map one-to-one to the ports on the FortiGate-VM instance. Confirm the correct
port numbering and MAC addresses using the “get hardware nic portX” command on the FG-1 console, as
shown in the example below.
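For example, run the command once per port and compare the Current_HWaddr value in the output with the
MAC addresses listed for the FG-1 ports in the AVX WebUI (field name as observed on recent FortiOS
releases):
get hardware nic port2
get hardware nic port3
get hardware nic port4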
3. Configure the network settings (management = port2, ingress = port3, egress = port4) and a default
route as follows. The addresses shown are placeholders to replace with values for your environment; the
"set ip" and static route lines are reconstructed here, assuming the default route points out the WAN
interface (port3):
config system interface
    edit "port2"
        set ip <management-IP> <netmask>
    next
    edit "port3"
        set ip <ingress-IP> <netmask>
    next
    edit "port4"
        set ip <egress-IP> <netmask>
end
config router static
    edit 1
        set gateway <default-gateway-IP>
        set device "port3"
end
7. Log in again to the FG-1 WebUI and confirm the network settings for port2, port3 and port4.
9. Configure the IPv4 policy for port3 (WAN) to port4 (LAN) traffic as follows:
10. Configure the IPv4 policy for port4 (LAN) to port3 (WAN) traffic as follows:
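If you prefer the CLI over the WebUI for steps 9 and 10, equivalent policies can be created along the
following lines; the policy IDs and names are illustrative, and the "all"/"ALL" address and service
objects simply permit all traffic for this validation setup:
config firewall policy
    edit 1
        set name "WAN-to-LAN"
        set srcintf "port3"
        set dstintf "port4"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
    next
    edit 2
        set name "LAN-to-WAN"
        set srcintf "port4"
        set dstintf "port3"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end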
11. Configure HA under System > HA as follows:
a. Mode = Active-Passive
12. Click on OK. FG-1 on AVX1 is the first member and Master of the HA cluster.
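The same HA settings can also be applied from the FG-1 CLI; a minimal sketch, assuming port1 as the
heartbeat device as described earlier (the group name, password and heartbeat priority are placeholders
you should set for your environment):
config system ha
    set group-name "AVX-HA"
    set mode a-p
    set password <HA-password>
    set hbdev "port1" 50
end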
Repeat steps 1 through 7 of the AVX1 setup on the second AVX appliance, setting the hostname to AVX2:
AN(config)#hostname AVX2
AVX2(config)#
After step 7 is completed, you may now access the AVX2 appliance using the WebUI at https://<IP>:8888.
For our example, https://10.10.152.172:8888.
Note that the AVX2 WebUI will have a different management IP address than AVX1.
3. Configure the network settings (management = port2, ingress = port3, egress = port4) as on FG-1,
using a unique management IP address on port2 (placeholder shown below; the ingress and egress settings
match the first instance):
config system interface
    edit "port2"
        set ip <AVX2-management-IP> <netmask>
end
Configure HA as on FG-1, so that this unit joins the existing cluster:
a. Mode = Active-Passive
4. Now force a failure by rebooting the Master on AVX1 and checking the status on AVX2.
5. When the FGVM08TM19000909 instance on AVX1 boots back up successfully, it will resume ownership as
the Master.
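To watch the failover from the CLI, reboot the master and then check the cluster state from the
surviving unit; both are standard FortiOS commands:
On the master FortiGate-VM console:
execute reboot
On the surviving unit:
get system ha status
The output of "get system ha status" lists the cluster members and indicates which unit is currently
the master.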
Corporate Headquarters: [email protected] | 408-240-8700 | 1-866-MY-ARRAY | www.arraynetworks.com
EMEA: [email protected] | +32 2 6336382
China: [email protected] | +010-84446688
France and North Africa: [email protected] | +33 6 07 511 868
India: [email protected] | +91-080-41329296
Japan: [email protected] | +81-44-589-8315
To purchase Array Networks Solutions, please contact your Array Networks representative at
1-866-MY-ARRAY (692-7729) or an authorized reseller.
May-2017 rev. a
© 2017 Array Networks, Inc. All rights reserved. Array Networks and the Array Networks logo are trademarks of Array Networks, Inc. in
the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the
property of their respective owners. Array Networks assumes no responsibility for any inaccuracies in this document. Array Networks
reserves the right to change, modify, transfer, or otherwise revise this publication without notice.