Azure Networking Basics Course Content

This document provides an overview of a course on implementing Microsoft Azure networking. The course covers topics like Azure virtual networks, network security groups, load balancing, hybrid cloud networking, and monitoring Azure networks. It includes learning objectives, module outlines, and descriptions of course content.


Implementing Microsoft Azure Networking

By Tim Warner

Software-defined networking is a foundational element in the Microsoft Azure infrastructure-as-a-service (IaaS) scenario. This course teaches you the basics of Azure networking, including virtual networks, hybrid cloud connectivity, and more.

Most newcomers to the Microsoft Azure public cloud face their biggest learning curve with software-defined networking. This course, Implementing Microsoft Azure Networking, teaches you all about the building blocks of Azure virtual networks, including planning, deployment, configuration, monitoring, and security. You will learn how to design virtual networks that support your workloads with the highest security and performance. You'll also learn how to configure hybrid connectivity between your on-premises environment and Azure. Additionally, you will learn how to closely monitor your network performance to keep in compliance with your service-level agreements (SLAs). By the end of this course, you will be able to design and deploy Azure infrastructure-as-a-service (IaaS) networking with confidence and accuracy.

1. Azure Networking Basics
2. Azure Virtual Networks
3. Network Security Groups
4. Azure Load Balancers
5. Network Virtual Appliances
6. Hybrid Cloud Networking
7. Monitoring and Securing Azure Virtual Networks

Course Overview

1.1 Azure Networking Basics

Overview

IaaS in Azure Service Management

ARM Networking Architecture

Demo: Deploy a Virtual Network with an ARM Template

Demo: Tour Azure Networking Components

Microsoft Azure Hybrid Cloud Architecture

Virtual IP Addresses and Name Resolution

Demo: Azure VM Networking Configuration


Summary

2.1 Azure Virtual Networks

Overview

Virtual Network Capabilities and Components

VNet-to-VNet VPN and VNet Peering

App Service Environment (ASE)

Demo: Create an Azure Virtual Network

Public and Private IPv4 Addresses in Azure

Azure Networking Resource Limits

IPv6 and Name Resolution in Azure

Demo: Configuring IPv6 in Azure

Summary

3.1 Network Security Groups

Overview

About Network Security Groups (NSGs)

Demo: Create an NSG in the Azure Portal

Demo: Work with NSGs

NSG Logging and Network Watcher

Demo: Enable NSG Logging

Demo: Use Network Watcher

Multi-NIC Virtual Machines

User-defined Routes and IP Forwarding

Demo: Deploy a Network Virtual Appliance

Summary

4.1 Azure Load Balancers

Overview

Preliminary Definitions

Azure Load Balancers

Load Distribution Methods

Azure External and Internal Load Balancers


Demo: Deploy an External Load Balancer

Azure Application Gateway

Demo: Deploy an Internal Load Balancer

Demo: Getting Started with Application Gateway

Azure Traffic Manager

Traffic Manager Routing Methods

Demo: Configure Traffic Manager

Summary

5.1 Network Virtual Appliances

Overview

Network Virtual Appliance Use Cases

Microsoft Azure Defense in Depth

Network Virtual Appliance Topologies

Azure Marketplace Partners

Demo: Deploy a Network Virtual Appliance

Demo: Configure a Network Virtual Appliance

Summary

6.1 Hybrid Cloud Networking

Overview

Our Case Study Environment

Azure VPN Facts

Site-to-Site VPN

VNet and Point-to-Site VPNs

Azure VPN Gateway Types

Forced Tunneling

ExpressRoute

Demo: Deploy a Site-to-Site VPN

Demo: Site Project – Azure Cloud Shell

Demo: Complete and Test the VPN

Summary

7.1 Monitoring and Securing Azure Virtual Networks

Overview

Azure Network Monitoring Workflow

Open-Source Tools

Demo: Enable Diagnostics and Use Azure Monitor

Demo: Investigate Metrics and Alerts

Demo: Network Performance Monitor and Network Watcher

Azure Network Security Center

Summary

Course Overview

Hi everyone, my name is Tim Warner, and welcome to my course, Implementing Microsoft Azure Networking. I'm a Microsoft MVP in Cloud and Datacenter Management, as well as a Pluralsight staff author. Azure software-defined networking normally represents the first significant learning curve for cloud architects and IT ops professionals. By the end of the course, you'll be able to design, deploy, configure, secure, and monitor Azure virtual networks and other network-related resources with confidence and accuracy. I hope you'll join me on this
journey to learn Cloud networking with the Implementing Microsoft Azure Networking
course at Pluralsight.

Azure Networking Basics


Overview

Hello there, and welcome to Pluralsight. My name is Tim Warner, and this course is entitled
Implementing Microsoft Azure Networking. Welcome to this first module in the course. Its title is Azure Networking Basics. What are you going to learn during this first module of the
networking course? Well, the first thing we're going to do is take care of a common point of
confusion that newcomers to Azure have and that is the differences between Azure
infrastructure as a service v1, also called Service Management, versus IaaS in Azure v2,
which is called Azure Resource Manager, or ARM. Now, ASM is really a legacy technology,
and we're not going to spend much time on it, but I want to make sure to cover it enough such
that you know the difference. Now that point bears repeating. Although you can manage
many classic Azure resources in the Resource Manager portal, you really need to make sure
that all of your current and future deployments use ARM and not the classic model. The
classic model is still around for backward compatibility with legacy deployments, that haven't
yet been migrated. We're going to spend all the rest of our time in ARM, visualizing network
communications flow. This can take some getting used to if you haven't done software
defined networking, or SDN, yet. You'll also learn how to grasp IaaS virtual IP addresses and
name resolution. Understand that these objectives in this module are largely theoretical. I'll
be throwing a lot of information at you simply because, let's face it, Azure is a huge
cornucopia of services, there are plenty of moving parts and plates to spin. Insert your
metaphor here I guess. But what we're doing in this module is laying the foundation for
building practical skills. To wit, you'll find, if you look at the overall course outline for this
networking track, in module 2, for instance, we're going to dive deeply into the virtual
network object itself. So in this module, my goal for you as an instructor is to give you a
high-level overview. With that, enough introduction. Let's get into the content and have some
fun. Now, a subject as broad as Azure networking presupposes quite a bit of knowledge. So
let's take a look at what you should know coming into this course. First of all, TCP/IP
internetworking. If you don't understand things like IPv4 addressing and subnetting, the
purpose and function of network devices like routers, load balancers, firewalls, how to do
network troubleshooting, and how to secure networks, if you don't have those fundamental
skills, then you have a gap there, don't you, because you're not only going to have to learn all of this background material before you can get started with Azure, but then there's also the whole
Azure piece that you need to put together as well. So if you do identify a content gap here, I
would recommend you take a look at the skill set defined in the Microsoft Technology Associate, or
MTA, certification exam for networking. There's one exam involved in earning this
credential. It's Exam 98-366, and it's called Networking Fundamentals. You can not only skill up to grasp Azure networking, but you can also have an extra Microsoft certification credential to put on your LinkedIn profile or your resume. Another site, preliminarily, that I
want you to be aware of is the Azure Architecture Center. In the exercise files that always
accompany my courses, you'll find copious hyperlinks. Of course, you could always run a
Bing search for Azure Architecture Center if you just can't wait. But what I want to draw
your attention to, there's a lot at this site, but the Azure Reference Architectures are a series
of detailed annotated diagrams that describe different ways to deploy Azure services. And
when we're talking about networking, there's lots of reference architectures that involve
networking just by definition. Anything with IaaS, for instance, is going to, by definition,
have to involve networking. So I'll be borrowing diagrams from this site, and we'll be
working through them together to enhance your understanding.

IaaS in Azure Service Management

First though, Azure Service Management Cloud Services. If you've ever looked at the old Azure portal, at manage.windowsazure.com, you're in the world of IaaS v1 (there's also platform as a service there), but we can just call it Azure Service Management, or
ASM. This is a collection of APIs that was the original deployment model used in Azure. It's
called Classic in the new Azure Resource Manager portal, but Microsoft and I strongly
suggest, that for any and all future deployments, you use Resource Manager. What's going on
with Service Management, as you can see in this diagram, is, first of all, we have a container
called a cloud service. This is a logical object that is capable of scaling in some of the
traditional features we would expect from any cloud. You can pack VMs into a cloud service.
You can also pack platform as a service web applications or web/worker roles. You'll notice
that the cloud service also contains other IaaS-related features, like disk storage. You'll need
BLOB storage, for instance, to store your virtual machine, OS, and data disks, as well as your
virtual network, which gives your VM the ability to communicate with other machines. There
is the VM, actually, that has one or more IP addresses associated with it, and, potentially, you
can do load balancing among multiple VMs, web apps, or worker roles. Well, you might be
thinking, "Well, this sounds logical Tim. "What's wrong with IaaS v1?" It's not so much an
issue with what's wrong with it but the fact that the Azure development teams have given us a
successor to this model, called Resource Manager, that fixes many of the inconveniences that
are in IaaS v1. For instance, you notice all these pieces, like virtual networks, storage accounts, and VMs, are all bound to a cloud service, and it's really difficult to disaggregate them and to move, for instance, a virtual network to another cloud service, or to move a VM to
another cloud service. Each component sort of stands on its own. It's not really a modular
architecture like we have with Resource Manager. Speaking of which, let's take a look at
Resource Manager because that forms the centerpiece of all of our Azure readiness training
with Microsoft and Pluralsight.

ARM Networking Architecture

Azure Public Cloud - Network Communications Flow in Resource Manager. Okay, here we
go. Now you'll notice that where I borrow an image from the Azure Architecture Center or
the Microsoft Documentation, I give you a hyperlink in the lower-left part of the slide. Azure
Resource Manager, or ARM, is also called IaaS v2, and there's a lot to it that we really,
frankly, have to presuppose. I have to presuppose so much knowledge on your part, but you
can easily pick up your basic Azure terminology by studying some of the other more basic
readiness training that we've developed in this Pluralsight-Microsoft partnership. But, suffice
it to say, that, among other things, the Resource Manager APIs modularize each component
in your platform as a service or infrastructure as a service stack, not only making them easier
to manage separately but also, with the resource group, you can manage all related resources
and their lifecycle together. It's just a really great model. The web portal for ARM is
portal.azure.com, and that's the one that you should be using. Now, what do we have as far as
networking in Resource Manager from a high level? You don't need to have a load balancer,
but you should understand that Azure gives you software-based load balancers for use with
your virtual machines and your web apps. We're going to focus on the virtual machine IaaS
scenario here. You may have a web server farm placed in an object called a virtual network,
and you have multiple identically configured instances of those VMs. The load balancer in
this case serves to equitably distribute loads. So you would expect that that Azure load
balancer is equipped with at least one public IPv4 address. You can actually do IPv6 at the
load balancer if you want to. And then that traffic gets routed into a member of this object, as
I said, called a virtual network. In Azure, a virtual network is a container object in an
isolation boundary, okay? That's really something important right off the top, that if you have
two VMs inside a single virtual network, Azure will allow them to communicate with each
other immediately, barring anything like user-defined routes or network security group
restrictions, but I get ahead of myself. If, by contrast, your VMs are in separate virtual
networks, even in the same region, even in the same subscription, those VMs have no default
connectivity between them. The virtual network, like I said, is an isolation boundary. Now,
there are plenty of ways to do inter-VNet communications (VNets being the short name for virtual networks). We'll
get to that in time. Within a virtual network, you define one or more logical subnetworks.
IPv4 subnets, just like you've probably configured on-prem. VMs are also oftentimes deployed into what are called availability sets. This is especially true for VMs that are clones of
each other, like web servers, and the idea here is that, under the hood where you don't see the
abstraction, in the Azure data centers, Microsoft will place the VMs on different hardware
hosts in different racks to protect against hardware failures in the Azure hardware fabric.
Okay, you still with me? Each VM is going to need some kind of storage repository. It could
be a traditional Azure storage account. It could be using managed disks, but, if you're going
to use a storage account, the VHDs, or virtual hard disks, are stored as binary large objects, or
BLOBs. You want to make sure that your resources are all in the same region to reduce
latency as much as possible. But that's the basic network flow. I guess the one thing I missed
in this diagram is the NICs. Notice that each virtual machine has one or more virtual
network interfaces. And it's those NICs that wind up attaching to other objects like Azure
load balancers or, potentially, network security groups, or NSGs. You may be thinking, "Tim,
you haven't defined NSG yet." Hang on, we're getting there. This is a more complex drawing
that shows multi-region IaaS. For instance, you may have an n-tier application that you've
decided to deploy as virtual machines. So you have a primary region with your n-tier
applications, your web tier, your business logic tier, your data storage tier. You might have a
jumpbox. You may have cloud-based active directory running in cloud-based virtual
machines. And you've replicated that entire environment into a secondary region. This could
be an Active/Passive failover situation, or you could have them Active/Active in service and
use what is known in Azure as traffic manager to perform DNS-level load balancing between
one or more, yes, Azure software load balancer objects. I want you to get comfortable with
this stuff as soon as possible, and, yes, I'm going to be doing copious demos so you can see it.
Now, this is a potentially expensive deployment because, of course, you're mirroring your
entire IaaS infrastructure across multiple regions, but think about how expensive it would be
to deploy applications in multiple on-premises data centers around the world and keep super
high speed links between them. You see what I mean? In this example, our two regions are
connected by a virtual private network, a VPN, that's defined entirely in the Azure cloud. By
the end of this training course, you'll understand how all of these parts and pieces work in
great detail. For now, don't feel too overwhelmed. Once again, I'm just attempting to give you
the big picture in this module and to show you some of the major possibilities. Final point
before we move on, I can't forget about this, is that software load balancers in Azure can
actually be external to the virtual network, which was those first two that I showed you next
to traffic manager, but you can also load balance within a virtual network. So in this example,
you may have, for instance, a SharePoint server farm running in a virtual network where you
have your web front ends communicating with the SharePoint business logic servers and that
traffic being equitably distributed by an Azure internal load balancer, and your business logic
servers, that all need to communicate to, say, SQL server, could also take advantage of an
internal load balancer. So, the load balancer gives you not only performance and high
availability, it gives you flexibility as well and also some security that we'll get into later.
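
To make the multi-region picture a little more concrete, here is a hedged Azure CLI sketch of the DNS-level piece just described: a Traffic Manager profile with one endpoint per regional load balancer public IP. All of the names (resource group, profile, public IPs) are hypothetical placeholders rather than resources from the course demos, and each public IP needs a DNS name label before Traffic Manager will accept it as an Azure endpoint.

    # Hypothetical names throughout; DNS-level load balancing with Traffic Manager.
    az network traffic-manager profile create \
        --resource-group multiregion-rg \
        --name webapp-tm \
        --routing-method Performance \
        --unique-dns-name webapp-tm-demo

    # Register the primary region's load balancer public IP as an endpoint.
    az network traffic-manager endpoint create \
        --resource-group multiregion-rg --profile-name webapp-tm \
        --name primary --type azureEndpoints \
        --target-resource-id $(az network public-ip show \
            --resource-group multiregion-rg --name lb-pip-primary --query id -o tsv)

    # And the secondary region's load balancer public IP.
    az network traffic-manager endpoint create \
        --resource-group multiregion-rg --profile-name webapp-tm \
        --name secondary --type azureEndpoints \
        --target-resource-id $(az network public-ip show \
            --resource-group multiregion-rg --name lb-pip-secondary --query id -o tsv)

Clients then resolve webapp-tm-demo.trafficmanager.net and are steered, via DNS, to whichever regional load balancer the chosen routing method favors.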

Demo: Deploy a Virtual Network with an ARM Template

Hi there. In this demo, I'd like to show you how easy it is to deploy a network without
necessarily clicking through the Azure portal on several screens. Now, if you're thinking,
"Well, Tim, that's exactly what I need to know how to do," Hang with me. I'm going to show
you several ways to create and manage networking resources in Azure. I want to start
showing you the ARM template sooner rather than later, because it needs to be a part of your
Azure professional life. Anyway, here we are in a Windows 10 Enterprise Edition
workstation, and I'll fire up the Edge browser, and I'm already logged in to my subscription in
Resource Manager. It's portal.azure.com. But, instead, where I'm going to take us is to the
Azure.com public page, called Azure Quickstart Templates. This link you can find in the
exercise files, or you could just Bing it. You need to understand that in Azure Resource
Manager, every deployment is defined in an underlying JavaScript object notation, or JSON,
template. In fact, look at our Azure readiness training because my friend and colleague James
Bannon did a whole course on Azure Resource Manager templates. But I've gone ahead and
run a search for virtual network, and many of these templates are given to us by Microsoft
employees but many of them are from community members. And we have one here called 2
VMs in a Vnet- Internal Load Balancer with LB rules. If you're thinking, "Wow, you mean
you can do a big parallel deployment in Resource Manager, including stuff like networking?"
The answer to that question is absolutely yes you can, one of the great powers and beauties of
Resource Manager. Now, the Quickstart Templates gallery is a starting point. You can always
download the source for these files, open them up in, say, Visual Studio with the Azure SDK
installed, and you're off and running. They're documented, they show you the parameters
used, oftentimes there'll be a runtime cost estimate based on the assets that are included in the
template. You can browse the underlying source at GitHub or, check this out, this deploy to
Azure button will log us directly into the custom deployment screen in the Resource Manager
portal. Now, you have to already have logged into your subscription for this to work, but it's
brought up the custom deployment blade, as you can see here, and we're prompted to fill in
the parameters that are specified in the template. Most of these should be pretty logical. You
choose which subscription you want, whether it's a new resource group container or an
existing. I'll call this New Vnet, press tab. When you see a green check, that means that your
choice has passed validation. In addition to being declaratively described in these ARM
templates, Resource Manager will also validate the templates to make sure that your input is
valid. Make sure that the location is correct, and then we have a bunch of required and
optional settings here. Now there's quite a bit to this template. And then, ultimately, you
agree and then purchase to actually submit the deployment. If you want to edit the parameters
in here to a more granular degree, you can click edit template, and what we'll see here is a
JSON outline of what's in the template, and you can actually set default values for
parameters, or, if we go down to variables and open this up, we can override what was in the
default. So, the availability set in the template's defined as AV set. I'm going to change this to
availability set 1. Isn't that cool? And notice that you can download your work. Let's click
save to save our change. Once you've filled everything out, you'll have to agree to the terms
and conditions, because, after all, this is going to incur runtime costs. You click purchase, and
it submits the deployment to Resource Manager. First thing that happens is a validation, and
mine passed, obviously, because now it's in process. And we can always get a handle on how
your deployment's running by clicking the notification bell. And you can actually click the
deployment to bring it out as a separate, what's called a, blade. And these blades can often be
maximized to fill the screen. We can periodically refresh to see if anything new has taken
place. And, if we scroll down to the very bottom, we get a running list of assets as they're
created. Isn't that cool?
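
If you would rather not click the Deploy to Azure button, the same Quickstart template can be submitted from the command line. The sketch below uses the Azure CLI; the resource group name matches the demo, but the template URI and parameter names are placeholders you would substitute from whichever Quickstart template you picked (older CLI builds spell the command az group deployment create).

    # Create (or reuse) the target resource group.
    az group create --name NewVnet --location eastus

    # Submit the template; <template-raw-url> is a placeholder for the template's
    # raw azuredeploy.json URL on GitHub, and the parameters depend on the template.
    az deployment group create \
        --resource-group NewVnet \
        --name vnetdeployment \
        --template-uri <template-raw-url> \
        --parameters adminUsername=azureadmin adminPassword='<strong-password>'

    # The CLI equivalent of watching the portal's notification bell.
    az deployment group show \
        --resource-group NewVnet --name vnetdeployment \
        --query properties.provisioningState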

Demo: Tour Azure Networking Components

While that's cooking, let me show you what some networking components look like in the
Azure portal. Let me open up the favorites list. I have a bunch of favorites already pre-
populated that are related to networking. Favorites are simply shortcuts to different Azure
services. You can come up here to the search button and just do a search for anything you
want. For instance, it dropped down and showed me my AD VNet, which is an existing
virtual network that I already have. I can give that a click. You'll notice a consistent user
interface in Azure Resource Manager, where you have a searchable list of settings, and you
can search inside each of these settings. In other words, it doesn't have to be just looking for,
for instance, sub for subnets. It will actually show you any settings that include that string
match. So it's a very intelligent search. You'll find on most resources, there'll be an essentials
area that you can expand or contract that is hyperlinked, and you can sometimes make
changes. You may just be able to copy using a little JavaScript. And, sometimes, you can
actually change directly from here. And then what's in the dashboard area is going to depend
on what kind of object it is. This is showing us devices that are currently installed on the
virtual network. If we go under settings, we can see address space, connected devices, subnets, and DNS servers. Remember, I mentioned that the default is to use Azure-provided DNS servers, but you can populate that with additional addresses. So I have two domain controllers here, and these are obviously private IP addresses; I want to make sure any VM on this virtual network has these IP addresses in addition to being able to rely on the Azure ones. Isn't that
neat? Now there's lots of other moving parts here when it comes to Azure networking. We
have network interfaces which are, in turn, bound to individual virtual machines. Public IP
addresses are actually tracked separately as separate objects. You see I've created one called
adl bp ip, and this is associated with a load balancer for a deployment that I already have.
And if you're wondering, "Okay, what's the IP address, and is it static or dynamic?" You can
always find that stuff here by looking through the settings. In the essentials area, it shows us
the public IP address that's currently assigned to it, but if we go under Settings -
Configuration, we can see with a simple control, we can go between dynamic and static
assignment as well as give it a DNS name label. Finally, let's take a look at our load
balancers. We have that, again, as a separate object, and these require a little more heavy
lifting to set up, because more knowledge is required at the outset. You have to understand
what they do. Let me give this one a click. Basically, to set up a load balancer in Azure,
you create what's called your front end IP pool. This is going to be your public IP address that
is listening for incoming internet connections. And then the load balancer needs a back end
pool, which is a collection of, in this case, virtual machines that it's going to route traffic
equitably to. And then, if you've ever worked with load balancers, you know that you have to
have a health probe to periodically check to see if your end points are up or down so the load
balancer knows not to route traffic to a downed node. It looks like I have some homework to
do here. I have to create a health probe. The load balancing rules, again, it looks like I have
some more work to do here. You can route to particular back end pools based on the traffic,
and you can even do network address translation, or NAT, that allows you to direct traffic to
specific virtual machines. For instance, in this example, I have incoming connections that are
called for remote desktop protocol, RDP. You know that's 3389. It's going to pass 3389 into
my adpdc virtual machine, and, if I come in trying to connect to bdc, I will do a connection
attempt at 33389, so that's a way to provide some security through obscurity, if you've ever
heard that term. Well, looks like our deployment succeeded, so let's go back here to the
notifications area. And it takes us to the new VNet that we've created, and, actually, it looks
like it's taken us to the new VNet resource group that it created. I forgot to mention that
all of your deployments are ultimately placed in a top-level container called a resource group.
And we can see under essentials here that this resource group that was defined by a template,
this template here that we looked at in the Quickstart templates area includes quite a few
assets here. I mean, look at all that: two network interface cards, a virtual network, a storage
account, a load balancer, disks in virtual machines. We did that just by deploying a simple
template and modifying a few parameters. I hope I have you excited, not only for Azure
networking but also about administrative automation with Azure Resource Manager
templates.
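
The homework items called out above (a health probe, plus the RDP NAT rules) can also be finished from the command line. This is only a hedged Azure CLI sketch; the resource group, load balancer, and NIC names (twodcs, adlb, adpdc-nic) are assumptions for illustration, not names confirmed in the demo.

    # A TCP health probe so the balancer stops routing to a downed node.
    az network lb probe create \
        --resource-group twodcs --lb-name adlb \
        --name httpProbe --protocol Tcp --port 80

    # Inbound NAT rules like the ones shown: frontend 3389 maps to one DC's RDP port,
    # frontend 33389 maps to the other DC's RDP port (security through obscurity).
    az network lb inbound-nat-rule create \
        --resource-group twodcs --lb-name adlb \
        --name rdp-pdc --protocol Tcp --frontend-port 3389 --backend-port 3389
    az network lb inbound-nat-rule create \
        --resource-group twodcs --lb-name adlb \
        --name rdp-bdc --protocol Tcp --frontend-port 33389 --backend-port 3389

    # Each NAT rule is then bound to the target VM's NIC IP configuration.
    az network nic ip-config inbound-nat-rule add \
        --resource-group twodcs --nic-name adpdc-nic --ip-config-name ipconfig1 \
        --lb-name adlb --inbound-nat-rule rdp-pdc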

Microsoft Azure Hybrid Cloud Architecture

Next, let's look at the Microsoft hybrid cloud and the network communications flow within it.
So, thus far, I've shown you the basic lay of the land for Azure IaaS within a single virtual
network, and then I showed a multi-region load balanced deployment that obviously includes
more than one virtual network and a virtual private network, or VPN, connecting them. This
is a very common scenario, where you want to connect your on-premises network to the
Azure cloud. Specifically, you're connecting an on-premises network to an Azure virtual
network. Once again, we have a VPN gateway that's created. Again, there's flexibility. You
could have a high-speed connection that bypasses the internet altogether, that's called
ExpressRoute. We'll get to those options later, but, here, we're just looking at a garden variety
site-to-site VPN where we're connecting our on-premises VPN concentrator hardware to an
associated software VPN gateway in the Azure cloud that you pay for as you use it, just like
any other Azure resource. Now that VPN gateway opens up a lot of bidirectional
communication possibilities. For instance, you could have your cloud VMs, your Windows
server VMs running in Azure, join an on-premises active directory domain and vice versa.
You can place on-premises domain controllers into the virtual network depending upon your
link speed. That's really going to be your weakest link in a hybrid cloud scenario. That's
going to determine how much traffic you push through that hybrid cloud link, but just simply
the ability to make this kind of connection is really inspiring, to me anyway, and I hope it is
for you as well.
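
The hybrid cloud networking module covers this end to end, but as a rough preview, the Azure side of a garden-variety site-to-site VPN boils down to four objects. This is a hedged Azure CLI sketch with made-up names and addresses; it assumes the virtual network already contains a subnet named GatewaySubnet, and gateway provisioning can take half an hour or more.

    # Public IP for the software VPN gateway.
    az network public-ip create --resource-group hybrid-rg --name vpngw-ip

    # The VPN gateway itself, placed in the VNet's GatewaySubnet.
    az network vnet-gateway create \
        --resource-group hybrid-rg --name vpngw \
        --vnet corpvnet --public-ip-address vpngw-ip \
        --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait

    # A local network gateway representing the on-premises VPN device and address space.
    az network local-gateway create \
        --resource-group hybrid-rg --name onprem-gw \
        --gateway-ip-address 203.0.113.10 \
        --local-address-prefixes 10.10.0.0/16

    # The IPsec connection between the two, secured with a shared key.
    az network vpn-connection create \
        --resource-group hybrid-rg --name onprem-to-azure \
        --vnet-gateway1 vpngw --local-gateway2 onprem-gw \
        --shared-key '<shared-secret>'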

Virtual IP Addresses and Name Resolution

Virtual IP Addresses and DNS Name Resolution. Now, if you've worked with TCP/IP
addresses, you know that every host in a TCP/IP network needs an address, and it needs to be
unique for the network that it's on. Your Azure load balancer, I think logic would tell you, is
going to need to have at least one public IPv4 or IPv6 address, because it serves as an internet
connected endpoint. And that's true, it does. Even at the VM level in Resource Manager, you
can give your VMs public IP addresses, and you can even do static IP addresses. Although
you don't get to request specific public IPs, you can associate a reserved public IP in Azure
for your VM. That's a special case though, and we'll get to that. But you notice the lock icons
next to the VMs? Those are network security groups or what we could call software firewalls.
And you may not want your VMs equipped with a public IP, you may, instead, want to force
all ingress or inbound traffic through a load balancer or, maybe, a network security appliance.
So that brings up the question of private IP addresses that are from the Request for Comments
RFC 1918 range: your 10-dot, your 172.16, your 192.168. Of course, those are free, and you
would design those as you create the subnets in your virtual network. Okay? With regard to
public IPs, we'll get into this much more in the next module, but the first five reservations are
free. Microsoft will give you those, and then you pay a small amount, a couple dollars a
month, to do reserved public IPs after that. Mainly, it's your load balancer. There's some other
objects that will need, or for which it's at least strongly recommended that you use, a reserved public IP
for, but the privates, like I said, are going to be private only within your virtual network. You
also want to make sure, from a planning perspective, to not overlap address ranges. You may
have a 192.168/16 network on-premises, and, if you use that same range on your virtual
network in the cloud and have overlapping IP subnets, that's going to be a bad day. So, you
always want to think from day one about co-existence, network co-existence, with IPv4
addressing. Alright? DNS, or domain name system. Azure takes care of name resolution
within the virtual network. It's just automagically done for you. You can add your own
custom DNS server IP addresses. If you have a hybrid cloud, you could specify on-premises
DNS servers. If you have a domain controller or a DNS server in the virtual network, you can
plug that in. The big trend that I want to drill into your minds is that you're not going to do much network configuration at all in the VM. You and I are probably accustomed to going into
your network connections properties. You don't do that with Azure. In fact, you can run into
some big problems if you try to do that. Instead, your network configuration needs to be done
at the API level, using the Azure portal. You can use the Azure command line interface, or
CLI. You can use Azure PowerShell or the software development kits.
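
Here is a hedged Azure CLI sketch of the two configuration points just mentioned, reserving a public IP and supplying custom DNS servers, both done at the API level rather than inside any VM. The resource and network names are hypothetical; 10.0.0.4 and 10.0.0.5 stand in for domain controllers you might run in the VNet.

    # Reserve a public IPv4 address: Azure still picks the address, but it stays yours.
    az network public-ip create \
        --resource-group net-rg --name lb-public-ip --allocation-method Static

    # Point a virtual network at your own DNS server addresses
    # (handed to the VMs via DHCP at their next lease renewal).
    az network vnet update \
        --resource-group net-rg --name corpvnet \
        --dns-servers 10.0.0.4 10.0.0.5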

Demo: Azure VM Networking Configuration

Whereas in the previous demos we looked at Azure networking in the portal from a fairly
high level, here I want to go down to the virtual machine level, and let's look at some
practical Azure network overview stuff. To begin, we'll go to the virtual machines node, and
you'll see I have a few of them running here. I'm going to focus on these top two that are part
of the twodcs resource group. I have a pdc and a bdc, and if you give a click to one of those
and take a look at its settings, we can review all of its network configuration, again, a few
different ways. We can see, right in the essentials page, its public IP address, which, I'll give you a hint, is the IP address that we saw earlier in the previous demo; it belongs to the load balancer. And we can take a look at network settings more granularly by looking at the virtual network interface object that's associated with it; that'll tell you the private IP
address. If the VM has a public IP itself, it'll show up on this list, and, if you've attached a
network security group to the NIC, again, it'll show up. If you give that a click, it'll drill into
the virtual network interface's own essentials page where you can modify the properties of
the VNIC. IP configuration, for instance, is where you go to set IP addressing. Let's select the
IP config 1, which is the default configuration for a VNIC, and, essentially, you've got public
IP address and private IP address settings. As I've told you, this pdc and the bdc VM are both
on a load balancer, so they don't need public IP, but they do need private IP. And you can do
dynamic, where Azure will automagically give you, via DHCP, a private address that's from your
virtual network subnet, or you can override and do static. I mentioned earlier on the slide
portion of this module that you want to get accustomed to doing virtual machine network
config in the portal or programmatically and not at the VM level. To underscore that, let's
come back to the VM's overview page. This is called a journey, by the way, the fact that we
can scroll horizontally through a whole bunch of blades that we've drilled into. Let's come
back and select our original pdc virtual machine. I'll close out of this network interface's
screen, and we'll select pdc. To connect to the VM, we'll click this connect button here. This
is going to deploy an RDP connectoid that I'll just go ahead and open. I'll suppress any
messages that I see. It's a domain controller in a domain called company, so I'll authenticate as Company\Tim, and we should get a full-screen session to that virtual machine, just like you're
used to doing RDP on-premises. It's the very same idea. Now, technically, what's happening
is that from my Windows 10 workstation, I'm making a call to this public IP address, 231,
that is an Azure load balancer, and that load balancer is going into the virtual network into the
virtual machine and being captured by the VM itself. Let me minimize server manager here.
Now you can add a couple other layers there, like I've mentioned, for instance, using the
network security group to more granularly filter traffic. That's always a really good idea, but
as long as the NSG has a rule that allows this RDP access, you have instant connectivity. If
you've never deployed a VM and you've wondered, you create your administrative credential
when you deploy the VM. Now this is a domain controller, so I can set it up however I want
to. I'm going to right click the start button, go to run, and type ncpa.cpl. This is a shortcut that
I've used for years to modify the network properties of a VM. Let me double left click this
virtual network adaptor, and we'll go to details. And the text is probably too small for you to
see, so I'm going to recite it. The IP address here is exactly what we saw in the portal,
10.0.0.4. The IPv4 default gateway is 10.0.0.1. You might be thinking, "Huh, where is that?"
Well, that's part of the Azure software-defined networking stack. The IPv4 DHCP server here shows a public IP; again, that's the Azure platform, and DNS is taken care of for us as well. We have immediate name resolution
between this and any other machine that's on the virtual network, barring things like firewalls
and remote access. Let me try, for instance, to do a ping of adbdc, and, sure enough, it easily
resolved that host name and the private 10.0.0.5 address that my other virtual machine is
listening on. And I'm here to tell you in closing that if we had a site-to-site VPN with our on-
premises network, we could do pings and have transparent network connectivity between
Azure VMs and on-premises physical machines or VMs just as easily as can be. So there it is, and we
close the connection, a little bit more on the VM aspect of software defined networking in
Azure.
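
To underline the point about leaving ncpa.cpl alone, the same addressing details can be read and changed from outside the guest through the Resource Manager API. A hedged Azure CLI sketch follows; the twodcs resource group and adpdc VM come from the demo, while the NIC name is an assumption.

    # Every public and private IP associated with the VM, as Resource Manager sees them.
    az vm list-ip-addresses --resource-group twodcs --name adpdc --output table

    # Pin the NIC's primary IP configuration to a static private address at the API level;
    # supplying an explicit address switches the allocation to static. Inside the guest,
    # the adapter keeps using DHCP and simply receives this address.
    az network nic ip-config update \
        --resource-group twodcs --nic-name adpdc-nic --name ipconfig1 \
        --private-ip-address 10.0.0.4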
Summary

What have we learned in this module? First, I hope I've convinced you that Azure software
defined networking, or SDN, allows you to extend your on-premises network seamlessly.
Actually, even if you're not doing on-premises to cloud hybrid networking, just the fact that
you can deploy networks as rich, robust, and secured as you can on-prem, all with software,
that's what software defined networking is. Now, I've been in IT for a long time, so I've spent
a lot of time in server rooms, in racks with my hands on networking equipment. As much as I love the cloud, I sometimes feel a little sad, and I hope that newcomers to IT will still spend time with physical Ethernet, so they have a better picture in their minds when they're using Azure software-defined networking. But I'm getting on a soapbox and ranting.
So let me stop and move to the next point, which is to repeat the thesis of this module that
planning is everything with Azure, and plan your Azure virtual network topology to align
neatly with on-prem. If you're working with a customer, and they say, "Well, we don't have an
on-premises need for the near future," say, "That's great, I honor that, but we're still going to
build our virtual network such that there's never going to be a conflict with the addressing
scheme and the topology on-premises." We haven't talked about Azure Stack in this module,
because, really, Azure Stack isn't on the menu for this course. We cover it more thoroughly in
other readiness training. That having been said, if you're familiar with Azure Stack, which is
sometimes defined as the Azure cloud brought on-premises, this gives you another layer of
consistency between your on-premises environment and your Azure virtual networks in the
cloud. It's something worth looking into, and I have some documentation links for you in the
exercise files. With that, I want to thank you very much for hanging in. I know this is a lot of
new material, potentially, and, in the next module, as promised, we're going to start to dive in
to each segment of the Azure networking space, starting with the Azure virtual network. I
look forward to seeing you then.

Azure Virtual Networks


Overview

Hi there, welcome to Pluralsight. This is Tim, welcoming you to the module entitled Azure
Virtual Networks. What are you going to learn over the course of this second module in our
trolley ride through Azure networking? I have three waypoints on our journey today. First is understanding virtual network capabilities. If you came out of the previous module feeling a little bit overwhelmed, don't worry; we're going to take a step back and look at the virtual
network in relative isolation. We'll also cover the basics of IP addressing in Azure both IPv4
and IPv6 as well as covering name resolution. Let's have some fun and let's learn.

Virtual Network Capabilities and Components

Virtual Network Capabilities. In this slide I have a laundry list of six points that I think sum
up the major elements of the Azure Virtual Network. First a virtual network, as I said in the
previous module, represents an isolation boundary for your Azure Virtual Machines. These
could be Windows Server Virtual Machines or Linux VMs. The virtual network by definition
has internet connectivity, otherwise, you'd never be able to connect to your cloud VMs.
Although VNets are isolated and have no interconnectivity by default, there are a number of ways that you can establish VNet-to-VNet connectivity, and we'll cover those a little bit later in
this module. Why would you want to do that you might ask? Well I've worked with some
customers who didn't know at the outset that VNets were isolation boundaries, and they
figured they would put one tier of their web application in one VNet and another tier in
another, not realizing the extent of the isolation. We could get around something like that
pretty easily by creating a VNet peering relationship. On-premises connectivity is the link,
the hybrid cloud link, between your on-premises network infrastructure and one or more
Azure virtual networks; that normally requires a virtual private network (VPN) gateway. There's also ExpressRoute available. Now you'll find that Azure itself, that is the Azure
service fabric itself, will take care of stuff like name resolution within a virtual network as
well as routing between subnets. Now if you have experience with Ethernet networking, you're familiar with a layer three boundary in IP, where you have multiple virtual local area networks on-prem and you need a layer three switch or a router to route that traffic.
In Azure what are called system routes take care of that, but you still can do traffic filtering
and routing yourself as an administrator. The filtering normally takes place with the network
security group, which is a software firewall, and user-defined routes and IP forwarding, as I
said, allow you to take more granular control of your traffic flow in a virtual network. Here's
another diagram from the Azure architecture center, and as usual I have the attribution in the
lower left. Check also the exercise files that I always make for my modules for additional
hyperlinks. I want to draw your attention, first of all, to the VNet object, the Azure virtual network object. When you create one of these, you define an overarching IPv4 address space. In this
example the text is really small. It's 10.0.0.0/16. Now once you have that overall address
space you can subdivide it. If you've ever done IPv4 subnetting it's the same basic idea. Now
this example has several subnets, not only three for a three-tier application, but there's a
separate subnet for the VPN gateway. The virtual network includes a DMZ that may include
virtual network appliances where you can do things like load balancing and firewalling using
enterprise-class virtual hardware. One thing that I like about this particular deployment is that
whoever designed it specified a separate /25 subnet for management. A common way to
securely administer your VMs within a virtual network is to use what's called a jump box
where you have, say a Windows 10 or Windows Server VM in its own subnet, that you can
VPN into either through a site-to-site gateway or a point-to-site gateway. And then once
you've connected to your jump box, because the jump box is on the same VNet as your Azure
resources, you would then administer those resources from the jump box. That's a common
design pattern with Azure IaaS networking.
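
As a small illustration of overriding the default system routes mentioned above, a user-defined route table can force a subnet's outbound traffic through a network virtual appliance. This is a hedged Azure CLI sketch with hypothetical names and addresses (the NVA is assumed to sit at 10.0.100.4 in the DMZ subnet).

    # A route table containing a default route whose next hop is the firewall NVA.
    az network route-table create --resource-group net-rg --name web-rt

    az network route-table route create \
        --resource-group net-rg --route-table-name web-rt \
        --name default-via-nva \
        --address-prefix 0.0.0.0/0 \
        --next-hop-type VirtualAppliance \
        --next-hop-ip-address 10.0.100.4

    # Associate the route table with the web tier's subnet so its traffic follows the new route.
    az network vnet subnet update \
        --resource-group net-rg --vnet-name corpvnet --name web \
        --route-table web-rt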

Vnet-to-Vnet VPN and Vnet Peering

I've mentioned several times now that VNets are isolation boundaries; between each other, that is, with multiple VNets defined in a subscription or even across subscriptions, there's no default route at all. So you have to override that. One way to link VNets is to create a VNet-to-VNet VPN. This is an IPsec/IKE tunnel that you can define not only between
VNets within a single subscription, but you can also do a VNet-to-VNet VPN across different
subscriptions. There's many use cases for this. You may have two business units that each run
their own VNet and you need to combine logically the two VNets and the resources within
them to make them easier to communicate with each other. You could be as an architect
correcting bad initial design where the initial topology is broken out into too many VNets and
this is a way to collapse them. Because you are using an IPsec tunnel, this is going to provide
a secure connection between those VNets. Note that the VNets can be in different Azure
regions. So you might have in this example, on the slide you see a West US VNet4 and an
East US VNet1 that by virtue of a VNet-to-VNet connection you have the ability to link
resources with those two VNets. And understand that we're talking about software-defined
networking, so those gateway icons you see represent software VPN gateways defined in
your subscription. Believe it or not there's another way to link VNets. This is something that
the Azure product groups have clearly had to deal with for their customers: customers needing to link multiple VNets together. For that, you can do what's called a peering relationship. This is easier to configure inasmuch as you do not need a VPN gateway. You can literally go into
the properties of each virtual network to create a peer link and then away you go. You've got
transparent connectivity. Now remember things that I've told you in the previous module.
They still stand here in module two about designing your IP address ranges so they do not
conflict. In this example the resource manager VNet at left is 10.1/16, and the right side
virtual network is 10.2/16. Now historically VNet peering had a significant limitation. And
that was your virtual networks needed to exist in the same Azure region. However, that
situation began to change in fall of 2017 when Microsoft put global VNet peering into public
preview. Now by the time you're watching this video the global VNet peering feature may be
generally available, or it may remain in public preview. But this is really great news because
it allows you to peer virtual networks in different geographical locations without any public
internet involvement, extra router hops, or, as I mentioned previously, the need for a VNet-to-VNet virtual private network gateway.
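
Creating the peering itself is a two-step handshake, one peering object from each side, since peerings are directional. Here is a hedged Azure CLI sketch with the hypothetical VNets vnet1 and vnet2 (older CLI builds use --remote-vnet-id instead of --remote-vnet).

    # Grab each VNet's resource ID; required when the VNets live in different
    # resource groups or subscriptions.
    vnet1Id=$(az network vnet show --resource-group rg1 --name vnet1 --query id -o tsv)
    vnet2Id=$(az network vnet show --resource-group rg2 --name vnet2 --query id -o tsv)

    # Peer vnet1 -> vnet2 ...
    az network vnet peering create \
        --resource-group rg1 --vnet-name vnet1 \
        --name vnet1-to-vnet2 --remote-vnet "$vnet2Id" --allow-vnet-access

    # ... and vnet2 -> vnet1 to complete the relationship.
    az network vnet peering create \
        --resource-group rg2 --vnet-name vnet2 \
        --name vnet2-to-vnet1 --remote-vnet "$vnet1Id" --allow-vnet-access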

App Service Environment (ASE)

Now, in this course we're dealing pretty much exclusively with Azure infrastructure as a service, which is our Windows Server and Linux VMs running in the Azure public cloud, but over in the PaaS world we have Azure App Services, and in our Microsoft-Pluralsight shared
content we have plenty of coverage for the App Services situation. Well, normally if you
know anything about platform as a service the idea there is you get increased agility and the
ability to focus on your web app and its code and functionality without having to worry about
the day-to-day care and feeding of the underlying virtual machines. Well, for businesses that
have a need for PaaS agility but also the security that IaaS offers, you can do something called an
App Service Environment, or ASE. An ASE is an Azure App Services app that's actually
linked to an honest to goodness Azure virtual network. So that unlocks the ability to do traffic
filtering with network security groups as well as deploying network security appliances like
Azure's own web application firewall. Now the WAF, as it's known, is really awesome. It
does full stack firewalling for your web apps, including screening traffic along the lines of,
say, the OWASP top 10 vulnerabilities, so it has some intelligence behind it that you
normally cannot get with just pure Azure App Services platform as a service. With App
Service Environment you're also given dedicated hardware, not just the VMs that your web
apps run on, but you actually get dedicated server hardware in your Azure data centers.
You're going to be required to choose high-end VM classes, multiple instances of those VMs
as well as premium storage, which is solid state disk. So the App Service Environment gives
you high performance as well as high security and isolation. You'll be required, as I said, to
have at least two tiers, frontend and backend, and multiple instances of each tier. If you're
thinking, well boy this could be more money, it is. It's going to involve a bigger financial
investment than using App Services alone, okay? And that's simply because of all the
aforementioned points. But, for businesses that have special security and performance
requirements, ASEs are something to look at, and they fit into this module because they deal
with the same virtual networks that we have with IaaS. Yet another option. It seems there are
so many options in Azure. You can take an Azure App Services application and integrate it
directly with an Azure virtual network. That does go outside our scope, so I'm not going to
say too much more other than that, number one, you can look in the exercise files for some
documentation links, and number two, it's not the same thing as App Service Environment.
Virtual network integration with App Services would be where you have your web app
running in App Services as a hosted platform as a service solution, but you have a
requirement to communicate, say, with a backend SQL server or database server that's
running in a virtual network on a virtual machine. That's the use case intended for this Azure
virtual network App Services integration. Now let's go ahead and build a virtual network. It's
time for us to get our hands dirty. We're going to use the Azure portal, but know that the steps
that we do are perfectly applicable in Azure CLI or Azure PowerShell or even using an SDK.
Specifically, we're going to create a resource group called azcourse. It's going to have a
virtual network called azvnet1 with the IP range of 192.168/16. We're going to define a two-
tier web application with two subnets: one called frontend, that's 192.168.10/24, and our backend data tier will be 192.168.20/24. Ultimately we're going to have two web frontends on the frontend subnet and two database servers on the backend subnet. You'll notice we
have network security groups in this example as well as internal and external load balancers.
Now we're not going to deploy everything all at once. At this point I just want to lay down
the basic virtual network and define the subnets, okay?
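
For the Azure CLI route I just mentioned, here is a hedged sketch that lays down the same case-study skeleton before we do it in the portal; the location is an arbitrary assumption, and flag names vary slightly between CLI versions.

    # Resource group plus the case-study virtual network and its frontend subnet.
    az group create --name azcourse --location eastus

    az network vnet create \
        --resource-group azcourse --name azvnet1 \
        --address-prefixes 192.168.0.0/16 \
        --subnet-name frontend --subnet-prefixes 192.168.10.0/24

    # The backend data-tier subnet.
    az network vnet subnet create \
        --resource-group azcourse --vnet-name azvnet1 \
        --name backend --address-prefixes 192.168.20.0/24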

Demo: Create an Azure Virtual Network

We're going to deploy a virtual network that maps to the Visio diagram that I showed you a
moment ago. Now you'll notice I'm here in the Azure portal, and I have that case study
network image showing up on my dashboard. Just out of interest, if you want to know how to
do this you can go to edit dashboard, and in the tile gallery there's a widget called markdown.
If you're familiar with markdown, it's a simple markup language; the name markdown is a pun on markup, and it does support some basic HTML. All I've done here is add an image tag and pathed out to a PNG file that I have on my web server, and it displays it right here. It's
very convenient for us 'cause it reflects what we're about to build. To create the virtual
network you can either come up to search resources and type virtual network or if you have it
on your favorites bar you can click it, which is what I just did. And then in the virtual
network's blade we'll click add to begin the deployment process. Now all of the names are
coming from my diagram. Azvnet1 is the name and the address range is going to be
192.168.0.0/16. The little green check means that it's a valid option. You can do one subnet
here in the portal, but if you use Azure CLI or Azure PowerShell you can define all your
subnets. I'm going to do the frontend subnet now. And I'm going to do 192.168.10.0/24. I'll
make sure that my proper subscription is selected and I may have a resource group with that
name. Yes I do, azcourse. But if I didn't have that I would choose create new and define it
there. Let me verify my settings, make sure everything looks good. If you click automation
options over on the right, it will show you the underlying ARM template that defines it. And
you can actually add it to a persistent library in the Azure Cloud or you could just go to
deploy to deploy it. Let me close that out and it'll bring us back to where we were. And we'll
click create to run a quick validation and let Azure Resource Manager complete the
deployment. It should take only a moment. There we go, deployment succeeded. Let's open
our notification. Deployment succeeded. So that's good. And we'll refresh the view so we can
actually see our new object, azvnet1. Click that and reveal its properties. Let's go to address
space and verify that, 192.168/16. Now notice we can add additional address ranges here.
This allows your VNets to scale big time. We don't have any connected devices yet so we'll
go to subnets and we're going to need to create an additional subnet here for the backend. So
let me do backend is the name, 192.168.20/24 is the address. And notice that you can choose
a network security group, or NSG. I'm going to leave that at none right now because in the
next module when we cover NSGs we'll come back and bind it. You can bind NSGs either to
subnets or to individual VNICs on VMs. It's best practice to do it at the subnet level. It's a
simpler approach. Route table is where you can take care of things like user-defined
routes and IP forwarding. We don't need that yet either. So let's just click OK. Nice thing
about this is that in most cases, if you make a mistake or want to come back and tweak
something Azure will let you do it. That's always nice. Let's go to DNS servers. And I'm
going to leave the Azure provided as the default. Go to peerings. This is where you can create
a peer relationship, let's click add, with another VNet. You provide a friendly name for the
peer, the subscription, choose the virtual network, whether you're going to allow or not allow
virtual network access. You'll want to enable that in most cases. And then there's some
advanced options here. I just wanted to make sure to show you that peering because we
talked about it in the lecture part of this activity. You know, just to complete the initial
configuration here, I'm going to come over to virtual machines, and I'm going to deploy one
of those web servers. So let's go through this. Now if you've been through our Azure
readiness training on managing Azure infrastructure, you already know how to do this. I'm
going to click through pretty quickly here, so if you're finding that you're a little lost, I'd
recommend that you look at our getting started with Azure infrastructure so you can really
slow down and master this. I'm in the Azure Marketplace here. I'm going to search for
Windows Server 2016 data center. And I'll give that a click and I'll select the first entry in the
list, and click create. When you deploy a VM we give it a friendly name. Again, referring to
my Visio drawing, this is going to be a web frontend, I'm going to name it web1. I'll use
traditional HDD disks instead of SSDs. I'll create a local administrator with a strong
password. If your password's not strong enough you'll get a red exclamation mark. It needs to
be between 12 and 123 characters long, and I'll confirm it. Verify my subscription. I want to make
sure to put this in our existing azcourse resource group in my location. This save money piece
is pretty interesting. If you have a software assurance licensing agreement with Microsoft on
premises, you can get a savings of up to 40% on your VM runtime costs by applying your on-
premises licenses in Azure. Isn't that cool? That's called the hybrid use benefit. I'm going to
choose no for that now and click OK. And very quickly I'll choose a size, an instance size for
the VM. I'm going to choose D1 V2 Standard, which is fine for our purposes. And this step
three is really the important piece from a networking perspective. We can either use managed disks or a traditional storage account. I'm going to use a traditional storage account. Let me create it. You can actually create it right inline here. It needs to be a unique name under core.windows.net, although later you can add your own DNS name. Azcstorage1. Let's see if that's globally unique. It is. Standard level, and I'll use LRS redundancy, which is the basic level of
redundancy. Now here we get to the meat and potatoes, the network options. Now notice that
the template's going to default to the most recently created. So it actually selected my azvnet,
but you can always override it or even create a new one if you want to. And as far as the
subnet goes, it's the same option. We can choose frontend or backend. This is a web server so
it does need to be frontend. Do we need a public IP address? Well, it's going to give us one, a
dynamic public IP if we need it. And because I'm not using a load balancer yet we do in fact
need it or we'll never be able to connect to the VM. I will go for dynamic here rather than
static. It's not that big of a deal. There's the network security group. The other place where
you can assign an NSG is the NIC on the VM configuration. You can extend a VM by adding
different extensions that do different things, maybe an operations management suite agent
and so on. For high availability, there's no cost to using an availability set, so I think you
should always use one. I'll name it asfe, which stands for availability set, frontend. And I'm just going to have a
maximum of two fault domains and two update domains. If you don't know what those are, look at our
managing infrastructure training. I'll enable boot diagnostics and keep guest OS diagnostics
disabled right now. We don't need it and click OK. It's going to run a validation, and if it
passes we can submit this deployment. All right, we're back through the magic of editing. Let
me open the notifications. Deployment succeeded. That's a good thing. Now you'll notice that
all my previous notifications are gone. You can dismiss just informational messages, just
stuff that's been completed, or everything. The completed is really nice because it used to be
in Azure that it would default to dismiss all and you would lose even active deployment
notifications, so completed works really well here. Let's go to virtual networks and let's load
up our azvnet1 and on the essentials pane under connected devices we see an entry. Now
notice it doesn't say web1. Instead it says web1218, which is a network interface that the
deployment template gave us. It didn't give us a chance to name it, unfortunately. But it
deployed it for us. You'll notice that we have a private IP address ending in .4 bound to this virtual
network interface. It's important for you to remember that in an Azure virtual network the
Azure platform itself reserves some IP addresses within each subnet. Specifically, the first
and last IP addresses of each subnet are reserved for protocol conformance, along with three
more addresses used for Azure services. So it looks like this VM is on the proper subnet, but
we'll need to make sure to bind a network security group as soon as possible.
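If you'd rather confirm that assignment from the command line, here is a minimal Azure PowerShell sketch; the resource group and NIC names come from this demo, so adjust them for your own deployment.

# Inspect the private IP that Azure DHCP handed to the VM's network interface
$nic = Get-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Name 'web1218'

# Each IP configuration shows its private address and allocation method
$nic.IpConfigurations |
    Select-Object Name, PrivateIpAddress, PrivateIpAllocationMethod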

Public and Private IPv4 Addresses in Azure

IP addressing and name resolution. One of the big points I mentioned in module one that,
again, I want to repeat here, is that the vast majority of your IP configuration and your
network configuration for your Azure virtual machines needs to be done through the API,
through resource manager API calls either in the Azure portal or programmatically. You're
not going to monkey with the network settings in the VMs. Here we're going to compare and
contrast public IPv4 addresses versus private. Now public addresses can be assigned directly
to the virtual machine either dynamically by Azure or statically by Azure. You can also
assign and should assign a public IP address to your internet-facing external load balancer.
VPN gateway will also need a public IP address and also the application gateway, which is a
special purpose gateway that gives you some additional filtering intelligence. You get your
first five reserved, or static, addresses for free. I say static in quotes because you do not get to
choose your public IPv4 address, but you can reserve the one Azure gives you. Anything beyond the first five is
charged at roughly $0.004 an hour, so it's not overly expensive, but there is a charge. And as I
already mentioned, you can't request a specific public IP address. Private IP addresses are defined in the
RFC 1918 ranges, so they're free for use; they were developed to
allow businesses to extend the life of the limited IPv4 public address space. They're derived from the subnet network ID. What I mean by that is you define your
VNet level address pool, and then you partition your subnets based on that. Now you'll notice
when you create, say, a slash 24 subnet you would expect to receive 254 usable addresses, .1
through .254. But you'll find that the dynamic assignments Azure DHCP hands out start
with .4, which means that Azure itself takes the first three IP addresses for its own
internal use. You can't use those as a tenant. And you should define static IP addresses
programmatically and never in the VM itself, right? Well, I've mentioned that too often now.
I'll stop repeating it.
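To tie those two ideas together, here is a minimal Azure PowerShell sketch of reserving a public IP and pinning a private IP, done through the Resource Manager API rather than inside the VM. The public IP name and the 10.x address are illustrative; the NIC name comes from the earlier demo.

# Reserve ("static") a public IPv4 address; you keep it, but Azure picks the value
New-AzureRmPublicIpAddress -ResourceGroupName 'azcourse' -Name 'web1-pip' `
    -Location 'southcentralus' -AllocationMethod Static

# Pin a VM's private IP by editing its NIC, never inside the guest OS
$nic = Get-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Name 'web1218'
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
$nic.IpConfigurations[0].PrivateIpAddress = '10.0.1.10'   # illustrative address within the subnet
Set-AzureRmNetworkInterface -NetworkInterface $nic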

Azure Networking Resource Limits

This is an important slide. It's important to know your resource limits in Azure Resource
Manager. Now these on this slide are defaults per region per subscription. I give you a link in
the exercise files to the page where I got this information from. It's a page that you'll
definitely want to bookmark. Actually, let me give you that URL right now because it's that
important that you have it. I have a bitly short link for you. It's timw.info/limits. That will
take you to a Microsoft Azure documentation article called Azure Subscription and Services
Limits Quotas and Constraints. It's really important that you look at that document regularly
because the truth of the matter is those maximums are oftentimes flexible, and you can
actually open an online customer support request at no charge if your situation calls for
raising a limit or quota above the default. Know that there are some hard maximums
that can't be raised, but otherwise there is flexibility. So per region, per subscription, you're
limited to 50 VNets, 1000 subnets per VNet, 9 DNS servers per VNet, 4,096 private IP
addresses per VNet, 20 static public IP addresses, and 60 dynamic public IP addresses. It's
important and constructive to know what numbers you're dealing with. You really want to
check the documentation. On one hand some of these numbers may seem absurdly high and
you'd never reach that. I mean 1000 subnets per VNet, but then others you might find whoa,
50 VNets. I could potentially run into a problem if I'm maintaining assets for different
business groups or different customers. Now there are ways around these limits. Most of them
are soft limits where you can work with Microsoft to have the limit raised. And there are also
other approaches. For instance, you could have an Enterprise Agreement, or EA, subscription,
rather than pay as you go. With enterprise agreement you pay in advance on a yearly basis.
You make a financial commitment to Azure and then you burn off those funds over the year.
But what's neat with enterprise agreement, EA, is that you get access to a separate portal
where you can divide your spend into multiple subscriptions. And once you're dealing with
multiple subscriptions, these limits become a lot easier to work with and less scary.
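If you want to see where you stand against these limits without leaving PowerShell, newer versions of the AzureRM.Network module include a usage cmdlet. This is just a sketch; verify the cmdlet is present in your installed module version.

# List current network resource usage versus limits for a region (assumes a recent
# AzureRM.Network module that ships Get-AzureRmNetworkUsage)
Get-AzureRmNetworkUsage -Location 'southcentralus' | Format-Table -AutoSize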

IPv6 and Name Resolution in Azure

Now speaking of scary, what about IPv6 in Azure? We know that IPv6 is here and it's used by
many internet service providers, probably most or all of them I would say at this point. But in
the business world I haven't seen it all that much yet, unfortunately. Know that as of late 2016
Azure made IPv6 generally available in most public regions. As of this recording it might be
all, but I don't think so. You'll want to check the documentation to get the final word from
Microsoft. As with many Azure technologies, you want to keep your eye on Microsoft's
roadmap, because the state of IPv6 and dual stack in Azure is under development. That
having been said, as I've just mentioned actually, most public Azure regions can host dual
stack VMs and IPv6 for Azure Virtual Machines has been in general availability, or GA,
status, for a while. The link you want to get to for the IPv6 support in Microsoft Technology's
roadmap is timw.info/rm. Most public Azure regions can host dual stack, which is where
you're running IPv6 and IPv4 concurrently. This is for VMs and for external load balancers
only, all right? Just those two resources. You have to configure IPv6 programmatically.
There is no in portal experience, at least as of this recording in summer 2017. You could do it
with an Azure Resource Manager template, Azure PowerShell, or the Azure CLI v2. How
does name resolution work in Azure? Well, out of the box you have Azure-provided DNS,
which is automatic name resolution within a virtual network. Custom DNS servers are IP
addresses that you add to the virtual network; those addresses, I should say, will be
auto-assigned to any VMs within the virtual network. You can add public DNS server
addresses, or you can add the private IP address of a DNS server that's running in that virtual
network. If you have VNet peering or a VNet-to-VNet VPN you can specify a DNS server in
another VNet, a private address that is. Or if you're doing hybrid cloud, hybrid networking
you could specify an on-premises DNS server. Note that you always have Azure provided
DNS. Your custom DNS servers ride on top and do not replace the Azure provided ones.
Azure DNS, this is a little confusing, is a separate service in Azure, that allows you to stand
up your own public and private DNS zones. Yes you can purchase a public domain name and
manage the zone within Azure, but you can also create your own private DNS zones and
benefit from Azure's global replication, which is pretty convenient. What's the
communication flow for name resolution in Azure? It's pretty straightforward. If you've
configured enterprise DNS then you already know about stuff like root hints and configuring
forwarding, and in Windows Server there's conditional forwarding. That's what we have here.
On-premises, in this example, we have what looks like an ExpressRoute icon there, so we
have a high-speed link between our on-premises environment and a virtual network in the
Azure cloud, and our local DNS servers are configured to forward queries, maybe
conditionally, to the cloud-based DNS server that's running on an
Azure virtual network. And notice that that Azure-based DNS server is using a private,
non-routable IP that is reachable through your site-to-site VPN or your ExpressRoute
link. Connections between virtual networks also involve forwarders; it's as simple as that. You
would have the DNS configuration for each virtual network include an entry for the DNS
server addresses of the other network. Nothing too different from what you see on-prem, I
would say.
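As a quick illustration, here is a hedged Azure PowerShell sketch for adding custom DNS servers to a VNet; the VNet name comes from the demo, the DNS addresses are hypothetical, and the Azure-provided resolver remains available underneath.

# Add custom DNS servers to an existing virtual network
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName 'azcourse' -Name 'azvnet1'

# Overwrite the DNS server list (if DhcpOptions comes back empty, create it first)
$vnet.DhcpOptions.DnsServers = @('10.0.2.10', '192.168.1.10')

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet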

Demo: Configuring IPv6 in Azure

In this brief demo we're going to deal with, believe it or not, internet protocol version six. We
haven't seen any reference to it here in the portal. That's because it really isn't in the portal. I
want to draw your attention to this documentation article in the Azure docs. It's called Get
started creating an internet facing load balancer with IPv6 using PowerShell for Resource
Manager. Now that's one of the longest article titles I've ever seen. (laughing) Anyway, the
URL for this is in the exercise files or you could just find it on your own. Let's look at the
deployment scenario here. In this example deployment scenario we have an Azure external
load balancer that has a public IPv4 and an IPv6 address that are going to map incoming
traffic into our VM end points inside a virtual network. Now in this example I wouldn't say
that the VMs necessarily need public IPv6 addresses. But they have them anyway just for
grins, okay? And the rest of this page is a whole bunch of PowerShell that I've actually pasted
into a script file called Azure IPv6.ps1 that sure enough you have in your exercise files.
Aren't I a nice guy? My wife says so on most days anyway. But anyway, the reference, the
page ref here is on line two and the first part is setting the environment. Now let me go back
to step zero actually, because you may not have used Azure PowerShell before. In order to do
this you're going to need to install Azure PowerShell on your development workstation. As
long as you're running a recent version of PowerShell, and by recent I'm talking current
version, like 5, on a Windows 10 machine, let's say, you can use package management and the
PowerShellGet module to install it. What you'll want to do first is make sure you can reach
the PowerShell Gallery and see the AzureRM module, so we can run Find-Module -Name
AzureRM. Yeah, sure enough it's there. Then we'll up arrow and pipe to Install-Module
-Verbose, and if you have a previous version and want to get the latest one, you could add
-Force. I don't need to do any of that because I already have the modules installed. And once
you do have them installed you run Login-AzureRmAccount to authenticate to your
subscription. I'll put in my password and authenticate, and Azure's going to tell me what it
thinks is my default subscription. Let me scroll down a bit. Pluralsight 2017, that's correct.
But if it wasn't, we could run Select-AzureRmSubscription to pass in the subscription name
or ID that we need. And the first
step normally when we're doing a deployment we'll create a new resource group. And that
resource group is a container object that will host my related assets here, and it needs to be
associated with a valid region. Now there's a lot of code here, so we don't have the time or
scope to go line by line describing everything. But suffice it to say that the PowerShell's
pretty bread and butter, which means if you have a basic or intermediate level of PowerShell
skill you'll find no big surprises here. It should make sense. We have a bunch of variables that
are being built up and then passed into subsequent cmdlets. In this example we're first
creating a VNet, an Azure virtual network, with a subnet. On line 10 we're creating a subnet
with New-AzureRmVirtualNetworkSubnetConfig that has an address prefix of 10.0.2.0/24, and
then we're creating a new virtual network that is going to involve a location and an overall
address prefix. In this case it's 10.0.0.0/16, and we're putting our backend subnet onto this
virtual network. This is a nice feature of programmatic Azure Resource Manager that you're
not limited to the user interface elements in the Azure portal. You can do everything at once
if you want to. We have to create some public IP addresses for the frontend pool. There's
going to be a public IP for IPv4 as well as IPv6 as you would expect. You choose parameters
and values that map to your preferences. Allocation method static, allocation method
dynamic. IP address version could be IPv4 or IPv6 and then you provide a unique domain
label. I don't think I actually need to run the rest of this PowerShell. I'm just going to walk
you through it for speed. You'll see down below that the last call to
New-AzureRmVirtualNetwork resulted in a warning that said the output object type of this
cmdlet will be modified in a future release. You want to make a note of these things
because Azure runs at the speed of Cloud, which means that new features are introduced
every day and sometimes breaking changes happen. So it's important to be plugged into the
community. You should watch the Azure related GitHub repositories, check the exercise files
for some heads up on that, and to stay in touch with Microsoft wherever possible. The
backend address pools are going to be your VMs basically, your VM IPv6 and IPv4
configurations. The network address translation, or NAT, rules are where calls to the load
balancer for particular IP configurations are going to be passed to a virtual machine in your
backend pool on the VNet. In this case we're creating v6 and v4 rules that look for incoming
TCP 443, which as you know is TLS or HTTPS, and we're mapping it to port 4443 in the
virtual network, okay? That's often done for security reasons. You've heard of least privilege
and minimizing your service footprint? You want to avoid default ports wherever humanly possible. On the
public-facing endpoint 443 is a given, but inside a virtual network I would suggest you not
run services on default ports. Load balancers can tell which nodes are up and which ones are
down by doing either an HTTP probe where it looks at a particular path. In this case it's
looking for an aspx file that would have to exist on those servers. So this isn't going to run as
such unless I actually have this working. And it's going to try that call on port 80 periodically.
If you're not running a web server you would want to use a TCP probe where you simply
select a port that you want to test or probe to detect whether the host is up or down. Here we
have our load balancing rules for IPv4 and IPv6. Notice that you're passing in a frontend IP, a
backend address pool, a probe, a protocol, so you're really doing the full load balancing logic
here on lines 37 and 38. I guess there's a third rule here. So it's really 37 through 39, and then
we create the actual load balancer on line 42. Again, passing in a location, a name, but then
you pass in frontend IP configurations, your NAT rules, your probe. So again, you're tying
together a bunch of steps that you've done earlier in the script. Once you do that it's time to
get a reference to the VNet and the subnet for use in creating your VMs. You'll need to create
an IP configuration for your network interfaces for the VMs; there's quite a bit of code on
that. You can create an availability set with New-AzureRmAvailabilitySet and a storage
account with New-AzureRmStorageAccount. And then ultimately you can create your VM
by stepping through the importation, if that's a word, of your configuration, your host
operating system, what OS image you're going to grab, associating a network interface with
the image, associating disks with the image, and then ultimately creating the VM. Well, it's
only 70-something lines of code, so it's not crazy, but it can be intimidating if you're not fully
up to speed with PowerShell. That, in a nutshell, is how you can do IPv6 on VMs and
load balancers in Azure Resource Manager.
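To give you a feel for the key dual-stack pieces without reproducing all 70-some lines, here is a condensed sketch using the same AzureRM cmdlets. The resource names are hypothetical, and a real deployment would also create the VNet, NICs, NAT rules, and VMs just as the full script does.

$rg  = 'myRG'            # hypothetical resource group
$loc = 'southcentralus'

# One public IP per address family for the load balancer frontend
$pipV4 = New-AzureRmPublicIpAddress -Name 'lbPipV4' -ResourceGroupName $rg -Location $loc `
    -AllocationMethod Dynamic -IpAddressVersion IPv4
$pipV6 = New-AzureRmPublicIpAddress -Name 'lbPipV6' -ResourceGroupName $rg -Location $loc `
    -AllocationMethod Dynamic -IpAddressVersion IPv6

# Frontend configurations and backend pools, again one per address family
$feV4 = New-AzureRmLoadBalancerFrontendIpConfig -Name 'feV4' -PublicIpAddress $pipV4
$feV6 = New-AzureRmLoadBalancerFrontendIpConfig -Name 'feV6' -PublicIpAddress $pipV6
$beV4 = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name 'beV4'
$beV6 = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name 'beV6'

# TCP health probe (use an HTTP probe instead if the backend VMs run a web server)
$probe = New-AzureRmLoadBalancerProbeConfig -Name 'tcpProbe' -Protocol Tcp -Port 80 `
    -IntervalInSeconds 15 -ProbeCount 2

# Load balancing rules tying the frontends, pools, and probe together
$ruleV4 = New-AzureRmLoadBalancerRuleConfig -Name 'webV4' -FrontendIpConfiguration $feV4 `
    -BackendAddressPool $beV4 -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80
$ruleV6 = New-AzureRmLoadBalancerRuleConfig -Name 'webV6' -FrontendIpConfiguration $feV6 `
    -BackendAddressPool $beV6 -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80

New-AzureRmLoadBalancer -Name 'dsLb' -ResourceGroupName $rg -Location $loc `
    -FrontendIpConfiguration $feV4, $feV6 -BackendAddressPool $beV4, $beV6 `
    -Probe $probe -LoadBalancingRule $ruleV4, $ruleV6
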
Summary

All right, well there you have it. What have we learned in this module? First I think that you
would agree with me that the VNet, the virtual network, is the foundational building block of
Azure infrastructure as a service. I know if you talk to five Azure administrators and ask
them what the foundational building block is, maybe one person'll say network and another
person'll say storage, and another person will say compute. But ultimately, unless your VNet
is configured properly, your VMs are not going to communicate with anything. And unless
you secure your VNet, which is a subject that we have yet to cover in this course, you could
be exposing your business to some big and bad consequences. Now you can find a route, pun
certainly intended, to meet any cloud connection need. For example, I had a customer ask me
not too long ago, we use F5 load balancers on premises, is there anything similar in
functionality in the Azure cloud? And I was happy to say, actually, F5 Networks does have
network virtual appliances in Azure, so you can upload your configurations, maybe tweak
them a little bit, and use the very same F5 administration workflow in the Azure cloud as you
do on premises. So it's a question of, A, performing due diligence and
skilling up so you're aware that these Azure services exist, and then it's a question of learning
how to use them and seeing where they'll apply in your work each day. So with that I'll let
you go. In the next module we will finally home in on security. We've been flirting with it
for a while, but we'll finally dig in and deep dive into firewalling with network
security groups. Thanks very much for your attention and participation. I look forward to
seeing you in module three.

Network Security Groups


Overview

Hello there and welcome to Pluralsight. My name is Tim Warner and this module is entitled
Network Security Groups. I have three learning goals for you in this module, and they're all
related to information security in the Azure cloud. We need to understand exactly what NSGs
are and how you implement them in an Azure IaaS scenario. We'll also cover multi-NIC virtual
machines, which do have a place in certain scenarios. And we'll also cover, to a degree,
custom VNet routing. Let's get started.

About Network Security Groups (NSGs)

The network security group in Azure, called an NSG for short, is a stateful software firewall
object. Software because it's part of software-defined networking in Azure, and stateful
because the NSG takes into account more than just a simple IP address and port combination;
it can actually watch conversations. Moreover, NSGs can protect both inbound, called
ingress, as well as outbound, called egress, network traffic. The rules that you define in NSGs
are based on 5-tuples. These 5-tuples serve to identify conversations coming into your virtual
network and to your hosts, and just the opposite for egress. The source and destination IP
addresses, the source and destination ports, and then the protocol, either TCP or UDP, form
your rules. Let's talk more
about that. The rules in an NSG are based on priority with lower numbers having higher
priority. If you've ever worked with firewall configuration, firewall hardware, then you're
familiar with access control lists or ACLs. It's a similar deal with NSGs. One NSG can
consist of multiple rules, and the first rule that applies to a traffic stream is the one that
becomes the effective rule. Now you can attach NSGs to either the subnet object or to the
VM object. I've mentioned this previously in this course, that I suggest you go for the subnet
because number one, a single network security group can govern ingress and egress traffic for
all VMs on that subnet that makes for a more convenient management experience and
because of that it's easier to maintain over time. Now there are cases perhaps where your
subnet rules are more relaxed than what a particular VM needs. You can, in fact, scope NSGs
such that you have one bound to a subnet and another bound to the VM's NIC at the same
time, and in this case both NSGs are evaluated: for inbound traffic the subnet NSG is
processed first and then the NIC NSG, so you may allow a particular traffic stream at the
subnet level, but when it hits a particular VM that has a conflicting or opposite rule at the
NIC level, that NIC-level rule decides whether the traffic reaches the VM. Now
NSGs are cool because they're modular, that is, you can create a single NSG and bind it to
multiple subnets or VM NICs. On the other hand, you can bind only one NSG at a time to a
subnet or a NIC, alright? Now another question that many administrators have is, what about
Windows Firewall on Windows Server VMs, and what about, say, iptables or another
host OS software firewall on your Linux VMs? The idea there is that the NSG does not
necessarily replace the host level firewall. Technically it could, but personally, I would
recommend against it. It's always a good idea to minimize attack surface by applying multiple
levels of control. If you're wondering, you can, in fact, override the NSG for inbound traffic
at the Windows firewall level for instance, because ultimately the Windows firewall exists in
the VMs own operating system environment. Just be careful. This is a screenshot from the
Azure documentation. As usual we have the attribution link in the lower left that shows a
flowchart for traffic, the way that NSGs handle the logic of traffic, so starting at the left, the
Azure host receives traffic. That host, let's assume, is a virtual machine. Is it inbound traffic?
Let's assume yes. In that case we load, or Azure loads, inbound NSG rules by priority. We get
the first rule on the list and decide, does this match the traffic? If no, is it matched by the next
rule, and ultimately your final rule, and then depending upon where the match is it's either
going to be an allow or a deny rule, and an allow will allow the packet and a deny will drop
the packet. It's as simple as that. I want to take a moment to describe a couple of the more
recent developments in network security groups in Azure. One that I find very interesting and
some of my consulting customers are already using this to great effect is the service tag
assignment. What we see here in the slide is a screenshot of the inbound security rule form in
the network security group's blade in the Azure portal, and you know that the source property
has several choices associated with it. For instance, you can scope the source of an inbound
rule to a particular IP address range and so forth. One of the options now is service tag. And
if you choose service tag you can then choose from a number of pre-determined options
across several resource providers. You see the first three options: internet, virtual network,
and Azure load balancer. This makes it a lot easier to create rules for instance, from VNET to
VNET in your subscription, or inbound rules where the internet is the source point. Now, you
can also add into your NSG rules what's called the application security group or ASG. An
application security group is analogous to a network security group. You know now that an
NSG can be linked to a virtual network and or a virtual machine's virtual network interface
card. And I tend to advise my consulting clients to link the NSGs at the subnet level, because
it can be more difficult to troubleshoot when you have numerous NSGs applied at the VNIC
layer. Well application security groups represent a different way to look at organizing your
VMs. Let me describe the workflow. This is a feature in public preview as of this recording
in late February 2018. By the time you're watching this video it could be in general
availability. First thing you do is create your application security groups based on arbitrary
VM collections. When I say arbitrary, I'm normally thinking of application tiers like web
servers, app servers and database servers. As of this recording, you need to use the Azure CLI
or the Azure PowerShell cmdlets to create the ASGs. You then create your network
security groups as usual, and you create an inbound rule making use of service tags. For
instance, you could say an inbound rule, coming in from the internet, and check this out, the
destination would be the application security group. That means that any virtual network
interface that corresponds to a VM that needs to be part of that web server's tier will be linked
to the ASG. So you create your ASGs and the ASGs become references in your NSG rules,
and they also are linked to the virtual network interface cards of your virtual machines. What
a great degree of flexibility this offers us.
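To make that workflow a little more concrete, here is a minimal Azure PowerShell sketch. It assumes a version of the AzureRM.Network module that exposes the ASG cmdlets and parameters, and the names (webAsg, webNsg) are illustrative.

# Create an application security group for the web tier
$asg = New-AzureRmApplicationSecurityGroup -ResourceGroupName 'azcourse' `
    -Name 'webAsg' -Location 'southcentralus'

# Inbound rule: allow HTTP from the Internet service tag to the web-tier ASG
$rule = New-AzureRmNetworkSecurityRuleConfig -Name 'Allow-HTTP-Web' `
    -Direction Inbound -Access Allow -Priority 120 -Protocol Tcp `
    -SourceAddressPrefix Internet -SourcePortRange '*' `
    -DestinationApplicationSecurityGroup $asg -DestinationPortRange 80

# Bake the rule into a new NSG; VNICs that join the web tier get linked to the ASG
New-AzureRmNetworkSecurityGroup -ResourceGroupName 'azcourse' -Name 'webNsg' `
    -Location 'southcentralus' -SecurityRules $rule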

Demo: Create an NSG in the Azure Portal

Alrighty then. Let's get this party started. In this demo, I'm going to create a network security
group and we'll define some rules and bind it to a subnet so you understand exactly how this
is accomplished. We'll use the Azure portal here. So here we are in the Azure portal, and we'll
navigate to network security groups. Just in case I haven't shown you, or if you haven't seen it
in our other readiness training, the way that you can add to and modify the favorites bar is to
scroll down to the very bottom and click more services. In the more services list, you can
filter, and when you highlight the little star it places that entry in the list, and then there's a
grab handle at the edge of each of these entries so you can reorder it. This favorites list is
going to be per user, not per subscription. Let's go to network security groups as I said before,
and to create one of these guys is really easy. We'll click add, and I'm going to call this front
end NSG, because this will be our front end network security group. I'm going to make sure
to place this in the resource group that we've created for the course azcourse, and I'll click
create, and that's all there is to it. The network security group object is pretty simple. It's all in
how you configure and scope that NSG that counts. That deployment succeeded. Let's refresh
our view to bring it out. Front end NSG. Here we go. We'll click this, and we'll scroll
down the settings list to what I want to show you because it's all here under settings. We have
separate entries for inbound and outbound security rules. Remember, once again, I keep
repeating myself that the NSG can be scoped to the network interface or the subnet. So these
are our main options. Let's go to inbound security rules, and you'll see an Add button and a
Default rules button. What are default rules, you might wonder? Well, if you give that a click,
you're going to see three pre-created rules with very high priority numbers, which means
very low priority. These can be useful for you. Two are allow rules, one is a deny. They're
intentionally given those high numbers so they don't conflict with any administrator-defined
rules, which are going to have much lower priority numbers, I'm sure.
Basically, the first rule here allows inbound VNet traffic: source virtual network, destination
virtual network. It makes sense, because we don't want to accidentally hobble the VMs
within a virtual network from communicating with each other. If you're
using a load balancer, the second rule is cool because its source is Azure load balancer and its
destination is any, which would be any VMs within the subnet. This allows your load balancer
to perform HTTP or TCP probes against your nodes, which can be very useful if you're using
a load balancer. The final rule is what's called a blanket deny all, and normally when I've built
firewall rule sets in the enterprise, you'll have a catch-all rule at the end here to deny
anything, so you see the source is any and the destination is any. So the only hazard here is
that you want to be absolutely certain to catch and allow any traffic that you do desire
inbound to your VMs, otherwise you're going to get caught by this deny all inbound, alright?
So let's click add and for the sake of simplicity, let's say we'll begin with RDP management
traffic to our web front ends. Over time we may want to delete this rule and have only web
traffic; namely TCP 80 and 443, or we may want to do RDP but on an off port. But I'll just
call this management RDP. For the priority, you don't want to number your rules
consecutively, one right after the other, because then later you're not going to be able to shim
any rules in between. I would suggest you start at 100 and move by 10s or 20s. We'll say source
can be any, a CIDR block or an Azure resource manager tag. Now CIDR block is useful
'cause you're able to white-list or black-list entire IP address ranges. Any is pretty darn broad.
I'm going to choose any here only because we're in a test dev environment, and we're getting
started. For the service, if you want to specify your own ports or port range, you'll choose
custom and then you choose TCP, UDP or any and then you specify the range. It can be a
single port or it can be a hyphenated range of ports. Now, this does not accept a comma- or
semicolon-delimited list. If you want to allow or block more than one port or range, you'll
need to create separate security rules, okay? Now let me get rid of that and decide to use a
predefined rule. You'll notice that the service list consists of several popular services; HTTP is
going to default to TCP 80 for instance. If you're doing Linux VMs you almost always want
to make sure you're allowing SSH, that is if the VM will be listening on the default port
number that is. But for RDP remote desktop, let's go down and find that entry. That will
default to TCP 3389 which, in this case, will work just fine. We'll scroll down, make sure that
the action is allow, and click OK, and we're going to have our rule deployed in just a
moment. And you see that the list of rules is being updated. It just created my management
RDP rule.
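By the way, if you prefer scripting, the same NSG and rule can be sketched in Azure PowerShell like this; the names are based on the demo's (hyphenated here for a valid resource name), so adjust them and the location for your environment.

# Build the management RDP rule, then create the NSG with that rule attached
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name 'management-rdp' `
    -Description 'Allow RDP for management' -Direction Inbound -Access Allow `
    -Priority 100 -Protocol Tcp -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 3389

New-AzureRmNetworkSecurityGroup -ResourceGroupName 'azcourse' -Name 'frontend-nsg' `
    -Location 'southcentralus' -SecurityRules $rdpRule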

Demo: Work with NSGs

Excellent. So now we can bind this NSG to a network interface or a subnet. As I've said
several times, I'm going to make sure to do a subnet. I'm going to choose a subnet here, and
we'll click associate, and we're asked, okay, you want to associate the NSG with a subnet?
First, let's choose the appropriate virtual network. We'll choose my azvnet1 that's part of the
same resource group, and now we need to select which subnet. I'm going to choose front end.
We've already created these pieces, so notice that they fit together very neatly like Legos if
you're familiar with Legos. We can see this association in many different spots. We can, for
instance, go over to our virtual machines node now and find our web1 VM that we deployed
in the previous module that exists on that particular subnet. If we go to network interfaces,
which are where the VMs network configuration is managed, we can see that this virtual
network interface, currently has this public IP address, this private IP address that's within
that dot 10 front end subnet range, and we can see in this example that this VNIC has a
network security group already bound to it. Ouch, we don't want that. So let me click the
virtual network interface. I'm glad we checked this. Let's come down to network security
group; it says web1-nsg, which must have been included in the template that we used when
we deployed this. We'll go to edit, set network security group to none, and save the change. I'm
doing this because this VM and its network interface card are going to be governed by traffic
that's been associated at the subnet level, okay? As my old mentor Bernie Carr used to say,
the proof is in the puddin'. So let's come back to that web1-vm and see if we can successfully
connect to it now via remote desktop protocol. We'll click connect, and I'm going to save this
RDP connectoid on the desktop, because we're going to work with it specifically. First, we'll
right-click and go to edit and take a look that we're verifying a connection to port 3389, and
that's part of the NSG rule. Let me suppress this publisher message. I'll authenticate as my
tim account. Company is the name of the active directory domain that this VM is associated
with, and it looks like we're taken right in. Good deal. Alright, let's kill this connection to
verify that the NSG is doing what it should in blocking any other traffic. Let me right-click
my PowerShell icon and open an administrative PowerShell console, and let's do an Enter-
PSSession, and the computer name is going to be the IP address, so let's come back to the
essentials page for the VM, and let me click to copy that public IP address from essentials,
and then paste in the public IP address, and then we'll add -Credential (Get-Credential), if I
spelled that correctly. Let's authenticate and press enter. Well, after a long wait, we'll see that
we cannot connect to the destination specified in the request, and just because I have to
show you this, it's really cool technology, let's come back and create a quick allowance for
remote PowerShell. We'll go to front end NSG, inbound security rules, add, I'll call this
WinRM Allow. Probably should have used that naming convention earlier. It's a good idea to
put the service name and allow or deny directly in the name. I'll use 110 as my priority and
this time I'll choose WinRM which defaults to TCP 5986. Let's commit that change and wait
for the deployment to succeed. Okay, so that created that. So now that we've made that
configuration change, let's return to our PowerShell console. I cleared the screen. We'll up
arrow and get back to our Enter-PSSession statement. I misspoke earlier; this is not a domain
controller, so I'll just authenticate with my user name and the password, and sure enough,
there we go. We have our remote connection. We'll run hostname. It is web1. Anything we do now
is going to be in the context of that remote machine. I've mentioned that you can view NSGs
from a number of different levels. Here we're looking at the front end NSG's subnets pane, and
we can verify that we're connected to the azvnet1 virtual network and address range. And the
fact that the associate button still exists here reinforces what I said earlier: you can associate
an NSG with multiple subnets or multiple network interface cards, if that's your need. They're
reusable.
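For reference, here is a hedged sketch of that remoting connection as you might script it. The public IP is a placeholder, and the -UseSSL and -Port 5986 parameters match the WinRM (HTTPS) port the NSG rule opens; that assumes the VM actually has a WinRM HTTPS listener configured. A plain Enter-PSSession defaults to TCP 5985, so open that port instead if you're relying on the HTTP listener. The session option skips certificate checks, which is only appropriate for a lab VM with a self-signed certificate.

$cred = Get-Credential
$opt  = New-PSSessionOption -SkipCACheck -SkipCNCheck   # lab only: self-signed certificate

# Placeholder public IP; copy the real one from the VM's essentials pane
Enter-PSSession -ComputerName '40.84.0.10' -Credential $cred `
    -UseSSL -Port 5986 -SessionOption $opt

hostname   # once connected, commands run on the remote VM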

NSG Logging and Network Watcher

Managing NSGs. Deploying the network security group requires quite a bit of planning. We
saw that in the previous demo. You have to design the scope or the level at which you're
going to put the NSG, and then, of course, define the rules in the NSG and test them. That's
what we're focusing on here: logging your network security groups to watch their behavior
over time, as well as testing your rules. First we have diagnostic settings. This screenshot
shows the blade for enabling diagnostics logging on your network security groups, and you'll
see that you can archive the logs to a traditional Azure
storage account. That's what I've done in this example. You can stream the data live to an
event hub if you want to display the data as a live ticker feed, or you can send it to log
analytics. Now if you're wondering, Tim, what is log analytics? Log analytics is part of the
Azure management solutions frameworks, also called operations management suite or OMS.
We cover that in quite a bit of the other partner training in the Azure readiness program. So
check that out. We're not going to get into OMS here. We're going to stay with just native
Azure. And then down at the bottom, you can enable two types of logs: network security
group events, which track where the NSG is applied to VMs and subnets, in other words
control-plane behavior, and network security group rule counter, which tracks the number of
times each NSG rule fires and traffic is either allowed or denied. You can also customize the
retention period. Oh, by the way, one more thing I do want to say about log
analytics. The idea there is that you get a basic level of log parsing and analysis using the
Azure monitor, but if you decide to step up to log analytics in OMS, you get a much deeper
analysis with predictive analytics and additional value-add features. If you're interested,
check the exercise files, 'cause I give you some links there. Another feature that I'm a huge
fan of is called Network Watcher. This is now generally available. It was in preview for quite
a long time. There's several tools in Network Watcher, but it provides you a way to, among
other things, test your network security group rules to make sure that they're working
properly, alright? When first released, NSGs did not have a way, other than actually enabling
them and running real, live connections to your resources, of validating that it was catching
traffic that you wanted it to block, or allowing traffic that you wanted the NSG to allow. In
this Network Watcher interface, notice you select a protocol and a direction, inbound or
outbound, the local IP address of your virtual machine, and the local port it's going to be
listening on, and then you specify a remote IP address and a remote port. Network Watcher
runs a synthetic check against that and comes back with either access allowed or access
denied, and if the traffic is blocked or allowed by a specific NSG rule, you see that rule at the
bottom of the screenshot and can get additional data on it. Very handy tool. Another thing
Network Watcher does is allow you to view the effective rule set for your NSGs. And the text is a
little small up on the top here, and you'll see this in the demo though, but you can look at
your rule sets both at the subnet level, the network interface level, as well as, like I said, there
may be an NSG at two different scopes. You could look at the inbound and outbound rules
that apply to a particular VM and its network interface card. Again, this is very useful
metadata for being able to monitor, troubleshoot, and optimize your network security groups
in Azure IaaS. I'd like to go into a little bit more detail on Network Watcher. This is a
screenshot from the Azure portal. You can see first of all in the settings pane, there's an entire
suite of network diagnostic tools. The IP flow verify for instance, is fantastic for analyzing
your network security groups and testing them and troubleshooting them. But what I want to
draw your attention to in this slide specifically is a Network Watcher feature called the
connectivity checker or also called connection troubleshoot as it appears in the screenshot.
The use case here is that you need to test connectivity between two virtual machines, so you
specify a source VM and a port and a destination address and a port, and then run that check
and then you will come back with well, either the connection was allowed or blocked, based
on what's happening with each VMs virtual NIC and network security group and potentially
application security group rules. In my business as a consultant, I find that networking in
Azure and in particular working with NSGs and ASGs can cause some confusion for those
who don't have a strong background in Ethernet and TCP/IP networking. So Network
Watcher can be a great help with that. I've done an entire course in the Pluralsight library on
Network Watcher, so I encourage you to take a look at that. I love Network Watcher. It's a
fantastic feature. Have you ever heard of the defense-in-depth security model? What this
model does is look at system security in a layered approach, and we can do the same
thing in Microsoft Azure. Let's take a look, from the outside in, at the various security layers
that comprise the Azure software-defined networking stack. Connections coming
inbound to your deployments from the internet are first going to be captured by the Azure
platform. Recall that we have the shared responsibility model in cloud computing. Microsoft
takes care of distributed denial of service or DDOS protection and manages the security and
safety of your endpoints to make sure that they're not being flooded with rogue connections.
Then your responsibility as the tenant or as the customer comes into play, you need to
leverage and deploy and maintain virtual networks that provide strong isolation, not only for
infrastructure as a service virtual machines, but also for other services like storage accounts
and Azure SQL databases. NSG and UDR of course, stands for network security group and
user-defined routes. This is what you're going to use on one hand to firewall inbound and
outbound traffic in that virtual network and UDRs allow you to customize traffic flows. For
instance, you would use UDRs to work with network virtual appliances. You may have a
software VM that you've installed from the Azure marketplace that gives you strong
firewalling or load balancing or web application firewalling and UDRs will allow your VMs
to use that appliance as their exit from the virtual network back out onto the internet. So at the
center, I know this isn't a center as such, and they're ellipses, not circles, but you can see that
at the quote, unquote core, let's just use the term core, of this model you have your deployments. And of
course, there's additional layers inside your deployments, but in this slide, I want to limit our
discussion to the Azure networking security layers. I will say quickly, that inside the
deployment, you're probably thinking, at least I hope you are, well sure the Azure VM has a
network interface, a virtual NIC, and you can unassign or assign public IPs, you have your
private IP address. Inside the VM there's the actual operating system, Windows Server or
Linux and you have software firewalling, anti-malware, et cetera.
Demo: Enable NSG Logging

Before you can start looking at log and diagnostic data for a network security group, you
have to enable diagnostics. So here we are back at the front end NSG that we created in the
previous demo. I'm going to do a filter for diag, and we'll have under monitoring, diagnostic
logs. And if we do that and we have not yet enabled diagnostics, we'll see a little hyperlink
here that says turn on diagnostics to collect the following logs. Well let's click that link, flip
the status from off to on, and I'll choose the path of least resistance to archive to a storage
account and configure that, and I'll put this in my azcstorage1 storage account that we created
before, and notice that you have to select an already existing storage account for this. And
then down at the bottom, I'll enable both log types, I'll leave retention at limitless and click
save to update the diagnostics data, and we'll close the blade. We now see when we look at
the diagnostic logs setting, it gives us a filterable view. You might be wondering well, what's
the purpose of this filter? Well the filter means that if you want to, you can go beyond just
this NSG and look at diagnostic log data for other subscriptions, resource groups, resource
types or resources. Now this is a brand new NSG and I've just enabled diagnostics, so it hasn't
actually fired on anything. Let's go back to network security groups, and I have a couple
others that may have some action on them. Let me try this bwafnsg, and go to diagnostic logs,
and yep, this one's been running for a while, so if we come down here, notice that we can
filter to view just either the event or the rule counter logs. By default it's going to show
everything, and the time span can be last one hour, last 24 hours, last week or a custom
interval. Here we have a network security group rule counter event that happened three hours
ago. Let me click download, and it's going to bring down that data as JavaScript object
notation, or JSON. We'll click open. I have Visual Studio code on my system, so I'm able to
load up the JSON view in a fairly nice way. You'll notice that the properties here include a
time and date stamp, and you'll want to look at resourceId to see the specific object in question.
Here it's obviously the NSG. And then we have some metadata where we have HTTP
allowed, inbound allowed and it's a matched connection. And then a certain number of
matched connections. Another place you can get to diagnostic logs more centrally is in the
Azure monitor, so if we go into the search resources and type monitor, we can open Azure
monitor, and this would allow you to view diagnostic data for maybe multiple network
security groups at once. Check the other Azure readiness training for more guidance on
monitor and certainly operations management suite.
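If you'd rather script this than click through the portal, here is a hedged sketch using the Set-AzureRmDiagnosticSetting cmdlet from the AzureRM.Insights module. The NSG name matches the earlier PowerShell sketch, the storage account comes from the module two demo, and the two category names are the log types we just discussed.

# Enable NSG diagnostic logging to a storage account
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName 'azcourse' -Name 'frontend-nsg'
$sa  = Get-AzureRmStorageAccount -ResourceGroupName 'azcourse' -Name 'azcstorage1'

Set-AzureRmDiagnosticSetting -ResourceId $nsg.Id -StorageAccountId $sa.Id -Enabled $true `
    -Categories NetworkSecurityGroupEvent, NetworkSecurityGroupRuleCounter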

Demo: Use Network Watcher

But we need to move on to Network Watcher, so back in the global search I'll type Network
Watch and press enter. This, as I said before, has a number of cool network diagnostics tools,
but we're just going to look at IP flow verify, and security group view right now. What you'll
need to do is enable Network Watcher on a per-subscription, per-location (or per-region)
basis. By default, Network Watcher is disabled for all regions. I'm in South Central
US, so I opened this little ellipsis menu and selected enable network watcher. It took a moment to
turn on, and once it's on you can start to use the diagnostic tools like IP flow verify. Now it's
defaulting to another resource group. I'm going to go back to azcourse, web1, and the web1218
network interface, and let's do a test. The private IP of my web front end ends in .4; let's try
3389, which you would guess should work, right, because we created an NSG allow rule for
remote desktop. I'm going to put in a random remote IP address, and the call will be going out
to 3389. Let's check that. And it says access is allowed by virtue of my management RDP
security rule. Isn't that great? So if nothing else, the IP flow verify is an excellent way to run
a sanity check against a resource to make sure that your network security group is doing its
job. The security group view, as I showed you in the slide portion of this module, allows you
to load up a resource group, which I'll do now, and a virtual machine and a particular network
interface, and it's thinking now because it's actually parsing any firewall rules that are
somehow associated with that network interface, and you see we have a whole bunch of
them. We have the default inbound rules, and then we have the RDP and WinRM PowerShell
remoting rules that we created earlier. Now that's the effective rule set.
If you want to know specifically at which scope these rules exist, notice that if I choose
network interface there's nothing there because we scoped these rules at the subnet level.
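The same test can be scripted. Here is a hedged sketch with Test-AzureRmNetworkWatcherIPFlow; the Network Watcher resource group and name are the defaults the portal typically creates when you enable the service, and the remote IP is an arbitrary placeholder, just like in the demo.

# Grab the regional Network Watcher and the target VM and NIC
$nw  = Get-AzureRmNetworkWatcher -ResourceGroupName 'NetworkWatcherRG' `
    -Name 'NetworkWatcher_southcentralus'
$vm  = Get-AzureRmVM -ResourceGroupName 'azcourse' -Name 'web1'
$nic = Get-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Name 'web1218'

# Synthetic inbound check against TCP 3389 from a random internet address
Test-AzureRmNetworkWatcherIPFlow -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id `
    -Direction Inbound -Protocol TCP `
    -LocalIPAddress $nic.IpConfigurations[0].PrivateIpAddress -LocalPort 3389 `
    -RemoteIPAddress '13.90.1.1' -RemotePort 50000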

Multi-NIC Virtual Machines

Multi-NIC virtual machines. Now Multi-NIC virtual machines could be considered an edge
case. It's probably not something you're going to do as a matter of course, but it is possible to
bind more than one Azure network interface card to a virtual machine. Let's take a look at
this example here and see if we can make sense out of it. It looks like in this example, the
front end subnet consists of two matching web servers that are being load balanced to the
internet, and that front end subnet is protected in turn by an NSG. That's cool. But if we look
at the back end subnet where our data tier exists, notice that we've bound two virtual NICs to
each database server. Why might you want to do that? Well there's a couple cases here. One,
is you may have a need to separate data plane communications from management plane
communications. In other words, in this example, DB1 and DB2 may use one of their VNICs
for application traffic, communication with WEB1 and WEB2, and the other NIC may be a
dedicated back channel that you're using, for instance, to back up those databases. You see, so
you're reducing network contention. Another thing, and that's exactly what this other
annotation is saying, that you can use your second VNIC to connect to another subnet that's
used for a special case; for log analytics and data aggregation, backup recovery and so forth.
To do a multi-NIC deployment you'll want to go programmatic, so using Azure
PowerShell is a good idea. Let's buzz through this code so you get a general idea of how the
procedure works. In the first piece here, we define the front end variable that essentially
grabs a reference to the front end subnet in our deployment, okay? It stores that reference in a variable for us.
Next, we get a reference to one of the two VNICs that we plan to add to our virtual machine.
You specify with New-AzureRMNetworkInterface the ResourceGroup, the location, the
logical name and the subnet to which the VNIC will be attached. We do the same thing with
the next two code blocks, as I said, getting a reference to the back end subnet and applying
that back end subnet to the second network interface. So far, we see we've built two
VNICs: one that'll be connected to the front end subnet, another that'll be connected to the
back end subnet. We can create a virtual machine configuration and then apply that configuration
to a virtual machine, and at the end of this process, this VM is now simultaneously connected
to two subnets. You know you can also attain a Multi-IP configuration for your Azure virtual
machines directly in the portal. You don't necessarily always have to use Azure CLI, Azure
PowerShell or even ARM templates. Just to demonstrate that, what we're looking at on this
slide is the IP configurations page of the virtual network interface blade. In the
settings we go to IP configurations, and you can see there we have our first default config,
which always has the name ipconfig1 if you don't change it yourself. And as I said, you can
have a public IP if you want one assigned, and of course, you'll need a private IP to
communicate in that virtual network. But if you need an additional public and/or private IP,
all you have to do is click add and add an additional configuration to your
virtual NIC and you're off and running.
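Here is a compact sketch of that same multi-NIC pattern in Azure PowerShell. The subnet names match the demo VNet, while the NIC and VM names and the size are illustrative, and the OS and disk configuration is elided for brevity.

# Grab the subnets we want the two NICs to live on
$vnet     = Get-AzureRmVirtualNetwork -ResourceGroupName 'azcourse' -Name 'azvnet1'
$frontEnd = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'frontend'
$backEnd  = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'backend'

# One VNIC per subnet
$nic1 = New-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Location 'southcentralus' `
    -Name 'db1-nic-frontend' -SubnetId $frontEnd.Id
$nic2 = New-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Location 'southcentralus' `
    -Name 'db1-nic-backend' -SubnetId $backEnd.Id

# Attach both NICs to a VM configuration; the first is marked primary
$vmConfig = New-AzureRmVMConfig -VMName 'db1' -VMSize 'Standard_DS2_v2'
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic1.Id -Primary
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic2.Id
# ...add the OS, image, and disk settings, then call New-AzureRmVM with this configuration
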
User-defined Routes and IP Forwarding

Custom Routing. Now this is potentially where we get into the high weeds. So far we've
presumed a lot of TCP/IP networking knowledge, with all this talk of IP addresses and load
balancing and IP subnetting, but then there's layer three routing on top of it, and one of the
beautiful abstractions about the Azure Public Cloud is that, as I've said several times, within
your virtual network, the system routes, in other words Azure itself takes care of routing
traffic to and from the internet and within your virtual network among your different subnets.
But there are use cases where you need to create what are called user-defined routes, or
UDRs. These are defined in route tables that you associate with subnets in your virtual
network, and the main case I've used them for is when you're integrating a virtual appliance
into a virtual network. We're going to do another module in this course entirely on virtual
appliances. You may have need
for an enterprise class firewall or load balancer or some, well, network appliance and you
want that to run in your virtual network and serve your virtual machines. So this is an
admittedly small diagram from the Azure Architecture Center that shows a typical
deployment. Now the specifics of how you deploy a network virtual appliance are going to
depend on the vendor, but in the vast majority of cases the appliance comes in as a virtual
machine that you integrate into your virtual network. Normally you'll put it on its own subnet
in the virtual network and then you can deploy user-defined routes to each subnet, such that
in this example, let's assume that that VM appliance is an enterprise class firewall, and we
want to use it instead of the network security groups. So we can create UDRs as they're called
on the front end and back end subnets such that all outbound traffic has to go through that
VM appliance, and then the VM appliance, of course, default routes out onto the internet.
Isn't that cool? The Visio drawing that I had on the slide is all well and good, but you're
probably wondering how user-defined routes actually exist in the Azure portal. So let me
cover that very briefly. The portal experience begins at the route tables blade. You can click
add to add a new route table to your subscription. This is what the route table object
properties look like and to create the route, you go to the routes setting and click add again,
and essentially you're creating a specific route. You give the route a name, you provide an
IPv4 address prefix, so we can imagine that in my virtual network I may have a network
virtual appliance located on the 192.168.1.0/24 subnet, so that's the destination, and then
notice you can specify the next hop type. Choices include virtual
network gateway, virtual network, internet, virtual appliance, or none. So by creating these
route objects and binding them to subnets, you can take complete control over how routing
happens in your virtual networks. So Azure, in a nutshell, gives you that flexibility.
Remember, the specific steps are going to depend on the vendor and the appliance, but the
other term you'll often hear when user-defined routes are
mentioned in Azure is IP forwarding. This is where a virtual machine is acting sort of as a
router on its own behalf, and you would enable IP forwarding on the appliance VM to
forward traffic on behalf of other VMs in the virtual network. Yet another network-related,
security-related thing you can do is forced tunneling. Now this gets into the hybrid cloud
scenario that we'll discuss in great detail later in the course; here I'm just going to plant the
idea in your mind. Let's say you have a site-to-site virtual private network linking your on
premises environment to an Azure virtual network, and you have some subnets and some
virtual machines up there, that when they make an internet connection, an outbound
connection, you never want it to go out to the public internet. Instead, you want to force
tunnel that traffic back through the site to site VPN through your on premises network. In
other words, you might have proxies and other on premises Edge equipment on your
connected perimeter that you want to make sure all of your servers, whether they're on
premises or in Azure go through. That in a nutshell is forced tunneling. And you'll notice in
this example that at the subnet level you can choose not to do forced tunneling. This front end
subnet for instance maintains its direct internet connection.
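Here is a hedged Azure PowerShell sketch of the route table and IP forwarding pieces; the next-hop address, subnet prefix, and NIC name are illustrative and will differ for your appliance.

# Default route that sends all outbound traffic through a network virtual appliance
$route = New-AzureRmRouteConfig -Name 'default-via-nva' -AddressPrefix '0.0.0.0/0' `
    -NextHopType VirtualAppliance -NextHopIpAddress '192.168.1.4'

$rt = New-AzureRmRouteTable -ResourceGroupName 'azcourse' -Location 'southcentralus' `
    -Name 'frontend-rt' -Route $route

# Bind the route table to the front end subnet (keep the subnet's existing prefix)
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName 'azcourse' -Name 'azvnet1'
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'frontend' `
    -AddressPrefix '10.0.1.0/24' -RouteTable $rt
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

# Enable IP forwarding on the appliance's NIC so it can route on behalf of other VMs
$nvaNic = Get-AzureRmNetworkInterface -ResourceGroupName 'azcourse' -Name 'nva-nic'
$nvaNic.EnableIPForwarding = $true
Set-AzureRmNetworkInterface -NetworkInterface $nvaNic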

Demo: Deploy a Network Virtual Appliance

I've mentioned that the Azure Quickstart Templates are an excellent place to learn and to get
resources on which to build your production deployments. The Azure Quickstart Templates
have a public gallery at azure.com, and this is the behind-the-scenes GitHub repository. In
that Azure Quickstart Templates repo, there's a project called Barracuda Web Application
Firewall IaaS, and this is a nice project because what it does is deploy an entire environment that
includes a virtual network appliance. Namely the Barracuda web application firewall, puts it
in its own subnet, gives you a pod of three web VMs. Gives you a public IP address and an
external load balancer all pre configured with RDP NAT rules to be able to get to web vm-1
to an n. Isn't that amazing? This is the power of the Azure resource manager template, and
one of the steps that we'll want to do is create user-defined routes to make sure that web vm-
1, two and any additional nodes we have are communicating through the Barracuda web
application firewall. Now, each solution is going to have its own write-up. You'll notice
there are some deployment steps here, and there's a document link where Barracuda
published a simple PDF that has step-by-step post-deployment configuration steps. I find that
Barracuda is an especially good Azure partner because their documentation is really on point,
but if you're wondering what this looks like at the end of the proverbial day, I took about 30
minutes to deploy this solution. Let me come down to resource groups, and it's in a resource
group I called appliance. And, of course, there's a lot of items here. There's 17 items
altogether. I'm just slowly scrolling through the list. You have your two NICs, your two VMs,
your storage account, your load balancer, there's an NSG, an availability set, virtual network,
wow. Let's click the bwaf VNET virtual network and when you get down to the subnet level,
you can select a subnet and in the configuration for the subnet there's route table. And user-
defined routes are defined here, but you'll notice that in the Azure portal, at least as of this recording in summer 2017, there's no option to create a new route table from here. You can only select one that's already been created. We're going to get more into this later in the course, so this is
essentially a teaser just to get you thinking about it. But what you'll find in the Azure documentation is an approach that uses a template file, an ARM template once again. This is a document I have referenced for you in the exercise files called Create User-Defined Routes Using a Template, and it gives an example of a two-tier web application where this FW1 appliance, I'm not sure exactly what kind of virtual appliance it is, sits on its own subnet. Our user-defined routes are going to configure traffic at the subnet level to go through FW1. The documentation actually shows you the code. It's traditional JSON: an object of the type Microsoft.Network/routeTables, and you're just creating next hop address entries, essentially a default route. If you've configured routing tables before, that's essentially what we're doing here, and there's a related configuration you would apply to the appliance itself to enable IP forwarding so it can forward traffic out to a particular public IP address. Then, to deploy those templates, we can use the Azure PowerShell cmdlet New-AzureRmResourceGroupDeployment. That is your go-to cmdlet for any JSON template deployment through PowerShell in Azure Resource Manager.
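
As a quick, hedged example of what that deployment looks like, assuming you've downloaded the template and its parameters file locally (the deployment name and file names below are placeholders):

New-AzureRmResourceGroupDeployment -Name 'udrDeployment' `
    -ResourceGroupName 'appliance' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json'

You can also point -TemplateUri at the raw GitHub URL of a quickstart template instead of a local file.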

Summary

Well, what have we learned in this module? Quite a bit, quite a bit. First, we have the idea
that network security groups, or NSGs, are not an optional component in Azure IaaS. If you're really dead set against using NSGs and you want to use a virtual network security appliance, that's fine, but you never want to expose your Azure VMs to the public internet with no protection. The second point I want you to walk away with is to consider the NSG deployment scope: consider using the subnet scope first, then use the network interface card, or vNIC, scope where you need to be more specific, and ultimately, at the VM level, you can leave the host firewall in place. It's all about planning, planning, planning. And using tools like Network Watcher to test, test, test is always part of the daily work of the Azure administrator. Remember the shared
responsibility model of cloud computing. With security, this is front and center. Microsoft has the responsibility of giving you the tools and the frameworks to use. They give you the security of their data centers and, believe me, there is so much security there. Check out, in the Pluralsight library, a Play by Play interview with Mark Russinovich, CTO of Azure, where he goes into some detail on the layers of security that Microsoft adds in the data centers themselves. But the other side of the shared responsibility model is that you, as the customer, are responsible for locking down your deployed resources. Microsoft's not going to do that for you. Alright, well, that's that, and the next module we have teed up is called
Azure Load Balancers, and we're going to deep-dive into that subject that we've seen on quite
a few network diagrams thus far, but it'll finally be time to actually learn how to plan and
deploy and use those load balancers. Thanks for hanging in with me and I'll see you then.

Azure Load Balancers


Overview

Hello there, and welcome to Pluralsight. Tim Warner here welcoming you to the module
entitled Azure Load Balancers. Normally there are about three learning objectives for each of my modules, and this one is no different. My main learning goal for you is to understand what Azure Load Balancer options exist. If you've never used load balancers, I want to make sure you understand their use cases. I also want to make sure you know how to configure both external, or Internet-facing, load balancers as well as ones internal to the virtual network. We'll also look at another offering called Application Gateway, and then a third Azure offering called Traffic Manager. The bottom line is that you're going to leave this module with a broad understanding of the hows and whys of Azure load balancing. Let's
get right to work.

Preliminary Definitions

Azure Load Balancers. Let's define our terms. I normally like to do that to make sure that
we're beginning on the same conceptual page. Load balancing is a mechanism intended to
optimize resource use, maximize throughput, minimize response time and avoid overload of
any single resource. If you think of the web browsing you do, whether it's news sites, or search engines, or whatever, I'm just about positive that the enterprise-level sites you're visiting do not consist of a single web server. We want to avoid a single point of failure; this isn't 1995, after all. So we have a farm of identically configured servers, most of the time web servers, though they could be application servers, and we put a virtual IP address out in front of those servers. So load balancing is actually a clustering technology.
Ultimately, though, at the end of the proverbial day, the reason we do load balancing in any
web context is to give customers a better experience. Network Address Translation, or NAT,
is something that you use every day, even though you may not be aware of it. NAT is a
method of remapping one IP address space into another by modifying the address information in IP datagram packet headers while they're in transit across a traffic-routing device. As you know, the IPv4 address space is pretty much completely depleted. NAT's been around for a while because it allows us to use, for instance, the private RFC 1918 addresses inside, which are always free, albeit non-routable, and conserve precious publicly routable IP addresses by assigning those only to our public endpoints. NAT, as I said, is used in just about any network situation: home, small office/home office, and the enterprise. No differently, we can still make use of NAT in the Azure public cloud as well. An Application Delivery Controller, or ADC, is an appliance, physical or virtual in nature, that's typically placed in a network DMZ. DMZ, of course, stands for de-militarized zone, or
screened subnet. This is a small network section that's exposed directly to the Internet,
whereas the rest of your private assets are going to be hidden behind another firewall
interface. DMZ nodes, of course, will be secured, but less so than your internal ones, because,
for instance, you want the public to hit your public-facing Internet web server, let's say. Anyway, an ADC is located on this DMZ, and it performs services on behalf of your web front ends, like application acceleration, SSL/TLS offload, VPN termination, web application firewalling, or load balancing. So you can buy a single appliance that does all of the above; that's a pretty robust appliance indeed. And, as should come as no surprise, we have ADC capabilities available to us in Azure as well.

Azure Load Balancer

Now then, let's look at a table from the Azure documentation. There's the reference in the
lower left corner; that's an l-b-f, not a one-b-f, in case you're wondering. This table summarizes Microsoft's offerings for load balancing in the Azure public cloud. We have three offerings here: the Azure Load Balancer, the Application Gateway, and Traffic
Manager. You're going to learn all three of these in this module. Let's go row by row. You'll
notice the first distinction is which level of the OSI network reference model each works at.
The Azure Load Balancer works at layer four, the Application Gateway, layer seven, and
Traffic Manager, also at layer seven, but it uses DNS. As far as what protocols each supports, you'll notice that Application Gateway is limited to traditional web services traffic, HTTP and HTTPS, whereas with the Azure Load Balancer you can load balance anything. Your endpoints can be Azure VMs for the Azure Load Balancer. For Application Gateway, you have a little more flexibility: any Azure internal IP address, public Internet IP address, or Azure VM. And for Traffic Manager: Azure VMs, Azure web apps, or external endpoints. That means you can actually put a load balancer in front of endpoints that are outside the Azure cloud; you would just feed Traffic Manager your endpoint IP addresses. As for VNets, I'm not going to
read the rest of this for you, but basically you see various pluses and minuses in each of these
services. Endpoint monitoring would be another one. And which load balancer or which
combination of load balancers you choose would be dictated by your business needs, your
technical requirements and what's available in these products. There's a fourth option that I'll
discuss in just a moment. And I'll leave what that is as a surprise for now. All right, so the
Azure Load Balancer is the bread-and-butter software load balancer product in Azure. As I set it up, it works at the transport layer of the OSI reference model, so we're talking about transport-level protocols like TCP and UDP. As you can see in the middle of the diagram, the load balancer defines its rules, that is, what it applies the rules to, as a combination of five discrete elements: the source IP and source port, the destination IP and destination port, and the protocol. That's called a 5-tuple hash. Now, health probes are important for any load balancer, I don't care whether it's on-premises or in the cloud, because the load balancer needs to periodically check each endpoint in its backend pool to determine whether it's available or not. The load balancer certainly doesn't want to forward traffic to a down endpoint. With the Azure Load Balancer, you can do HTTP probes, which are useful if you are, in fact, running a web server on your endpoint. But you can also choose a TCP port, maybe a common service that's available on that box, and have the load balancer periodically query it; if it gets a response, the load balancer assumes the node is online. Now, the Azure Load Balancer is smart, but it isn't as smart as some other options, like the application-level load balancer we'll look at in a little bit. For instance, the load balancer can't offer a lot of those extra value-add services that are optimized for web workloads, like SSL/TLS offload. You
know, it does require a fair amount of compute power to do SSL termination when you're
doing HTTPS calls, because think of all of the public keys that are exchanged, all the digital
signatures that are created and certificate revocation checking, et cetera. It's nice to be able to
offload that compute to the load balancer and let your web server do what it does best. Let's
take a quick look at the two types of Azure Load Balancers. These types are also referred to
as stock-keeping units, or SKUs. We have the basic and standard types. Basic is the SKU that we're discussing in this module because, of the two, basic is generally available and standard is still in public preview; although, by the time you're watching this video, perhaps the standard SKU has reached general availability. Basic has some limitations, like 150 rules per resource (these would be load balancing rules). You can have a maximum of 10 front-end configurations; a configuration denotes a particular traffic stream. You can
have up to 100 instances in your backend pool contained within a single availability set.
That's what we've had historically. More recently, though, the Azure development teams have
given us the standard Azure Load Balancer SKU. And the key word for this is enterprise
scale. In your backend pool, you can actually have up to 1000 VMs. Those VMs can be in
different availability sets. You can load balance to virtual machine scale sets. It's much more
flexible. You have zonal availability. That means you can load balance across data centers.
Recall that an availability zone refers to a data center within an Azure region. So it gives you
zonal high availability. You have far more rules per resource; it's bumped up to 1,250. There's still a 10 front-end configuration limit. This is interesting: you can bind network security groups to the standard Azure Load Balancer to give you extra firewall protection. There's a new source network address translation model for enhanced performance on outbound connections. There's enhanced diagnostics to give you a better picture of how the load balancer is behaving. And, not on this slide, there's a high availability ports feature that gives greater flexibility between the Azure Load Balancer and, say, a network virtual appliance. I want you to know that although we're discussing the basic SKU in this module, what I'm teaching you about the behavior of the load balancer is the same whether you're using basic or standard. So don't be worried about that.
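
To give you a feel for how probes are expressed outside the portal, here's a minimal Azure PowerShell sketch of both probe flavors; the names, ports, and intervals are illustrative assumptions rather than values from the course files.

# HTTP probe against the web root, every 60 seconds, 3 failures before removal
$httpProbe = New-AzureRmLoadBalancerProbeConfig -Name 'HTTPprobe1' `
    -Protocol Http -Port 80 -RequestPath '/' `
    -IntervalInSeconds 60 -ProbeCount 3

# TCP probe for a non-web service, such as SQL Server on 1433
$tcpProbe = New-AzureRmLoadBalancerProbeConfig -Name 'SQLprobe1' `
    -Protocol Tcp -Port 1433 `
    -IntervalInSeconds 60 -ProbeCount 3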

Load Distribution Methods

When you configure the Azure Load Balancer you need to select a load distribution method.
There's hash-based distribution and source IP affinity. The default mode is hash-based, and that is the 5-tuple hash I mentioned before, dealing with source and destination IP addresses, ports, and protocol. The stickiness here applies only within a single transport session, so hash-based is a less sticky, or lower affinity, load balancing solution. The decision point for you, as a developer or an administrator, is how stateful versus stateless your web application workload is. We're dealing with IaaS here, so the idea is that you've deployed your web applications to a series of virtual machines running in a VNet; how much client data is involved in each transaction, and how much of a need for stickiness, as I said, is there? A new transport session from the same user may come from a different source port, which leads to a different hash computation, which could lead to a connection to a different endpoint. Is that a deal breaker? If so, then there's source IP affinity. This is also called session affinity or client IP affinity in the documentation. You can do this either as a 2-tuple or a 3-tuple hash for binding a request to an endpoint. Connections initiated from the same client computer will stay with the same endpoint through the session's lifecycle. So there, again, is the main distinction between the two load distribution methods: client affinity.
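
In Azure PowerShell the distribution method surfaces as the rule's LoadDistribution setting. A hedged sketch, assuming $frontEnd and $pool were already built with the corresponding *Config cmdlets and $httpProbe is the probe from the earlier sketch:

# Default = 5-tuple hash; SourceIP = 2-tuple affinity; SourceIPProtocol = 3-tuple
$rule = New-AzureRmLoadBalancerRuleConfig -Name 'HTTPIn' `
    -FrontendIpConfiguration $frontEnd -BackendAddressPool $pool -Probe $httpProbe `
    -Protocol Tcp -FrontendPort 80 -BackendPort 80 `
    -LoadDistribution SourceIP    # switch to Default for plain 5-tuple hashing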

Azure External and Internal Load Balancers

Okay, now what else do we have? The two different kinds of Azure Load Balancer are
internal and external. External faces the public and needs a public IP address and potentially a
public DNS name. That's what you see up here in the top part of the diagram. And that public
IP can be dynamic or reserved, but I would strongly recommend you make it reserved to
simplify your life; you don't want that thing changing. If you're reliant upon a DNS name, then the IP changing isn't necessarily a big deal, as long as Azure is able to update the DNS mapping for you, which it can. Besides the VIP, or virtual IP, there's the load balancer object, and that's really what you create and configure to do all this other stuff. To be precise, the public IP is a separate resource in Azure, as is the load balancer object. Your host IPs are going to map to what's called a backend pool, and your backend pool represents, in all likelihood, identically configured servers, like web servers, that have their private, non-routable IP addresses within the VNet. So that external load balancer needs to be associated with the virtual network so it understands which endpoints to route to. You're going to see the specific steps for configuration in my demo, so right here we're just doing a conceptual overview. Now, the internal load balancer is perhaps a more specialized use case, for when you need equitable traffic distribution inside the VNet, not just at your public external endpoint, but within. A little surprisingly, in this example we have two internal load balancers: one load balancing a group of machines that look like they're listening on port 80, a web tier, let's say, and a second internal load balancer, which makes more sense to me, that's load balancing connections to a database cluster on port 1433, the default Microsoft SQL Server listening port. Again, just
know that there are the two different types and whether you're using the portal or Azure CLI
or PowerShell to deploy it, one of your first decisions is whether this is an external or an internal load balancer. This slide simply shows you the specific vocabulary, which I've given you already: the front-end IP pool is one or more public IPs that serve as the load balancer endpoints. You can create NAT, or Network Address Translation, rules if you want to direct calls to specific nodes in your backend pool, for out-of-band management, let's say. Your IP and port mapping rules are the actual load balancing rules: what kind of traffic you're looking to load balance and, ultimately, where the backend IP pool is, which in this case is a group of virtual machine IP configurations in a VNet subnet. Now, I skipped over the probe; that's the last object you don't want to overlook. This is, as I said, a health check to make sure the nodes are available. If they're not available, they'll be taken out of rotation and continue to be checked; unless and until they come back online, they'll stay out of the pool. The surprise I mentioned a few minutes ago
was this, the fact that we have this library in the Azure Marketplace of third-party virtual
network appliances. We're going to do a module on this separately, upcoming, so I'm just
mentioning this as a tease. But you see these are leading providers of load balancers, Kemp,
Citrix, F5; we're not fooling around here. So if you're using any of these products on-premises and you're aware of their feature set, it may make you very happy to know that you can use that same feature set in your virtual networks. You'll notice that some of these (they're all pre-configured virtual machines, by the way) have built-in runtime costs, where you deploy the appliance and normally pay per hour. For others, you have to bring your own license that you've purchased from the vendor. Some of them support a test drive, where you can deploy the appliance for free and it'll run for about an hour and then automatically shut off. And others allow you to deploy and start using the product for free as a trial, and then you can decide to either go to
the paid plan or stop using it. Also related to the concept of networking and services in Azure
is the virtual machine scale set, or VMSS. If you want details on this, check out my course on
Managing Azure Infrastructure in the Pluralsight library, because I walk you through the
entire process end-to-end. But the CliffsNotes version of VMSS is that it's a fleet of identically configured virtual machines that you can turn on and off to perform batch or high-CPU types of jobs. And as it happens, the Azure team, in 2017 and coming into 2018, has made some really nice enhancements to how networking works with virtual machine
scale sets. Now virtual machine nodes can have public IPv4 addresses. Previously, you could
only assign private IP addresses to scale set VMs. Previously, with domain name system,
scale sets relied upon the specific DNS settings of the VNet and the subnet that they were
created in. But now you can configure DNS settings for a scale set directly. You can now do
multiple IP addresses per virtual NIC and multiple vNICs per VM. This gives you, again,
much greater control over each VM instance. Now, if you've worked with scale sets, you know they take the notion of batch work and managing virtual servers as a herd pretty seriously, but
there are cases where you do want to have more granular control. From a security standpoint,
we have per-scale set network security groups. Previously, you could assign an NSG only to
a subnet or to a standalone virtual machine vNIC, but not directly to a scale set. The IPv6
load balancer allows you to take advantage of the 128-bit address space provided by IPv6.
And, finally, accelerated networking takes advantage of single-root I/O virtualization, or SR-IOV, which gives your VMs much faster networking, because they're able to go directly to the host hardware's network interface and directly to the top-of-rack switch in your Azure data center. So, again, the trend you see in these networking enhancements is greater control over individual VMSS nodes and faster, more robust behavior.
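
As one small illustration of these enhancements, accelerated networking is just a switch on the NIC when you create it with Azure PowerShell. This is a sketch under the assumption that the VM size and region support SR-IOV; the resource names are made up.

# NIC with accelerated networking (SR-IOV) enabled
$vnet = Get-AzureRmVirtualNetwork -Name 'azvnet1' -ResourceGroupName 'azcourse'
$nic = New-AzureRmNetworkInterface -Name 'web1-nic' `
    -ResourceGroupName 'azcourse' -Location 'South Central US' `
    -SubnetId $vnet.Subnets[0].Id `
    -EnableAcceleratedNetworking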

Demo: Deploy an External Load Balancer

All right, in this first demo, we're going to deploy internal and external load balancers. I'll create the external one first. Here we are on a Windows 10 Enterprise Edition workstation with my Edge browser. And you'll see a copy of this in the course exercise files. This is my trusty
Visio diagram that I've embedded into the dashboard here. I showed you how to do this in the
previous module, or module one, or two, I can't remember which one. You're watching them
all anyway, I hope. (laughs) So I've cut it down so we're looking at the resource group level, AZ course. Let me scroll down a bit more. We're going to put in an Azure Load Balancer called extlb1 with a public IP address that we'll reserve, and it will load balance TCP 80 and 443 traffic into our web front end. I have two identically configured web servers, one at 10.10, the other at 10.11. I have a network security group that's allowing HTTP, RDP, and WinRM. We have an internal load balancer that we'll deploy, which will be associated with the backend subnet. And I've deployed a SQL Server database server at 20.10. I
just have one. We don't need more than one for this example. So that's the lay of the land.
And that's what I think you should refer to as you work through this on your own, if you plan
to. Let's go, do not pass Go, do not collect $200. (chuckles) Let's go right to the Load balancers node, and we're going to click Add to create one. Extlb1 is the name. And here's where we choose either public or internal. I'm going to do public, and we'll choose an IP address by creating a new one. We'll call it extlb1ip, make it a static reservation, and click OK.
Choose the appropriate subscription. This needs to go in the same resource group that
everything else is in, AZ course. My location is in South Central US. And I'll click Create.
Just like we saw with the previous module, creating the object is pretty fast. It's the
configuration that tends to take a little bit of time. That deployment succeeded. Let's refresh
our view and click extlb1 and let's configure this thing. You'll see under Settings here are the
important points. You see our front-end IP configuration and we don't need to click Add here
because we defined that when we created the load balancer. We have our extlb1 reserved
public IP address. And that's 222.226. That's perfectly fine. Let's go to Backend Pools. We're
going to click Add here and we're going to bring in our Web1 and Web2 servers. First we'll
create the pool. I'll call this webfarm1. We're going to use IPv4 and we're going to associate it
with, we can do availability set or single virtual machine live. Harped on you a lot that it's
really crucial that you use availability sets, for a number of reasons. But one is that it makes it
easier to define a backend pool. I have my asfe, that's my availability set front-end that I
know contains my Web1 and my Web2 servers. So I'm going to select that. Now you'll
remember that each virtual network interface card on each VM can have more than one IP
configuration. I believe I just have the one on these, but let's click Add a target network IP
configuration. And we'll choose web1 and we'll choose its 10.10 IP configuration. Let's add
another one for Web2, and I'll specific 10.11 and then we'll save our changes. All right, so
that's saved, let's close this plate. And we now can verify our backend pool consists of the
two VMs. Now they're actually stopped and de-allocated. Yikes, I didn't know that. So let's
head over to virtual machines. Yes, I know I could do this programmatically, but we'll load
up Web1 and start it up. And then we'll come over to Web2 and we'll start it up. One of the
great beauties of resource manager is that we can do multiple things in parallel. You can't do
that in Azure IaaS v1. Back to the Load balancers node, back to extlb1. There's our backend
pool, we've got Health Probes and Load Balancing Rules. Let's go to Health Probes and let's
add one of those, it's always a good idea. HTTPprobe1. I'm going to check on Port 80 to the
root path of the web server every five seconds. Let me bring that up to 60 seconds. And two
consecutive failures. I'll make that three and click OK. Don't want to be too aggressive with
our probes. We do want the probe, but we don't want to be too aggressive. Let's go to Load
Balancing Rules next. Looks like it's still updating. We'll have to wait a moment. All right,
that's finished. Now we could, if we wanted to, do NAT rules, but let's not worry about that right now; let's just deal with the bigger picture of load balancing. All right, there is one more thing we need to do, and that is the load balancing rules. So let's click that and click Add; otherwise, the load balancer doesn't have anything to do. We'll call this HTTPIn. We verify the front-end IP address, and we'll look for TCP 80 on both the front-end and backend ports. This is going to be plain HTTP; in time, you can always update the rule or add another for HTTPS. The front-end port and the backend port are the same in this example. We verify the backend pool, the health probe, and the session persistence; I'm going to choose No Session Persistence. And we'll have an idle timeout. Let's
bring that up to a (chuckles) much longer value. The only time Microsoft recommends you
enable floating IP, in fact, let me hover over the little informational blurb, is when you're
using SQL Server Always On availability groups with the Azure Load Balancer. But let's click
OK and that's saving the rule. So we're looking pretty good here once this finishes. Now let's
apply your skills. Here's a little quiz question for ya'. First, I'm going to bring us to that public IP, 'cause that's our endpoint; we'll go to the public IP address
and bring that up just so it's here. But before we go any further with this, let me ask you a
question. Are there any other things we should check before we test that endpoint in another
browser tab? Here's the IP address right here. I'm going to, whoops, hover and copy it to my
clipboard. What are you thinkin'? What other possible barrier to entry is there? I hope you're
thinking of NSGs. Behind the scenes, I updated my subnet, my VM subnet, NSG rule to
allow for TCP 80. Don't forget about that. Let's now pop open a new browser tab. I'm going
to explicitly do HTTP, paste in that public address. And there you go, bang. It hit Web2. Now
my Web1 and Web2 VMs are configured identically except the homepage displays Web1 for
the Web1 host and Web2 for the other. Let me do a Shift+Refresh, and it took me this time to Web1. Let me do Shift+Refresh again, and it took me (chuckles) to Web2. It's kind of round-robinning it. Now, the last thing I'll leave you with in this demo concerns your public IP address configuration: you know darn well you're not going to hand a raw IP address to your users. The way to apply a DNS label to this address is over in the configuration blade. You can apply a name label, and you don't have to do anything else; it'll just work. It has to be globally unique within region.cloudapp.azure.com. That makes it easier to hit your load balancer endpoint, but it's still cumbersome. If you want to use your own domain name here, try Azure DNS; you'd make use of the Azure DNS service to do that. That's beyond our scope at this point, so what I'll do is direct you to the exercise files, where I've given you a link.
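
If you'd rather script what we just clicked through, here's a hedged Azure PowerShell sketch of the same external load balancer. The resource group, NSG, and DNS label names are illustrative, and note that adding the Web1 and Web2 NIC IP configurations to the backend pool is still a separate step against each NIC.

# Reserved public IP with an optional DNS label (must be unique in the region)
$pip = New-AzureRmPublicIpAddress -Name 'extlb1ip' -ResourceGroupName 'azcourse' `
    -Location 'South Central US' -AllocationMethod Static -DomainNameLabel 'azcourse-extlb1'

# Front end, backend pool, probe, and rule
$frontEnd = New-AzureRmLoadBalancerFrontendIpConfig -Name 'extlb1-fe' -PublicIpAddress $pip
$pool     = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name 'webfarm1'
$probe    = New-AzureRmLoadBalancerProbeConfig -Name 'HTTPprobe1' -Protocol Http -Port 80 `
                -RequestPath '/' -IntervalInSeconds 60 -ProbeCount 3
$rule     = New-AzureRmLoadBalancerRuleConfig -Name 'HTTPIn' `
                -FrontendIpConfiguration $frontEnd -BackendAddressPool $pool -Probe $probe `
                -Protocol Tcp -FrontendPort 80 -BackendPort 80

New-AzureRmLoadBalancer -Name 'extlb1' -ResourceGroupName 'azcourse' `
    -Location 'South Central US' -FrontendIpConfiguration $frontEnd `
    -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule

# And don't forget the NSG: allow TCP 80 inbound on the front-end subnet's NSG
Get-AzureRmNetworkSecurityGroup -Name 'frontend-nsg' -ResourceGroupName 'azcourse' |
    Add-AzureRmNetworkSecurityRuleConfig -Name 'Allow-HTTP' -Direction Inbound -Access Allow `
        -Protocol Tcp -Priority 110 -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 80 |
    Set-AzureRmNetworkSecurityGroup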

Azure Application Gateway

Next, let's look at the Application Gateway. The Application Gateway is a special purpose,
multifunction Azure Load Balancer used for web workloads. It works at the OSI Layer 7 or
application layer, which means we're inspecting the HTTP and HTTPS traffic itself. You can also use the Application Gateway as a reverse proxy. Normally, a proxy server stands in for internal clients making outbound requests; a reverse proxy is where you advertise an internal resource to the public while keeping that access nice and secure. Application Gateway gives you round-robin load balancing, which isn't the most intelligent way to do load balancing, but it's certainly better than nothing. You get SSL offload and even a web application firewall, which is really awesome: it actively screens your connections to look for things like SQL injection attacks, brute-force attacks, and that kind of stuff, as well as what Microsoft calls advanced diagnostics. So the use case here is where you want more robust security and performance features, maybe with less intelligent, less powerful load balancing, but more value added to your web front end.

Demo: Deploy an Internal Load Balancer

All right, the internal load balancer starts off the same way. We go to the Load balancers node and click Add, and we give it a name; I'll call it intlb1. I'm very creative, as
you can tell. We'll choose Internal for the type. And notice that the form changes
dramatically. We're asked to select a virtual network, which I will do. We're asked to choose
a subnet. This is going to need to be our backend data tier subnet. Decide whether we're
doing dynamic or static IP addressing. And that private IP address needs to be in the
appropriate range. So if you look at my Visio diagram, I think I did 20.5. Yep, I just verified
that on my other monitor. Let's do the resource group tango here. AZ Course, South Central
US. And click Create. All right, good, that succeeded. So we can refresh the view and go to
the internal load balancer configuration. You see here that it's the same basic setup. We have
our front end. But on an internal load balancer, it's a private IP address. The Backend Pool,
Add, we can do the same thing. SQL tier, we'll associate with an availability set. And I create
one called asbe2 or availability set backend2. And let's make sure to choose the VM and its
appropriate IP configuration, being really explicit. And we'll click OK to commit that. Not
going to do health probes here, we've already seen how to use that. We'll go to Load
Balancing Rules. And once it finishes updating, again, just like we saw with the external load
balancer, we'll continue. There we go, we'll click Add. Oh, nuts, it looks like we're stopped.
At least one backend pool and one probe must exist. So we're going to have to create that
health probe after all. Oh, well, that's fine. We're going to do a TCP probe here. We're going
to look at 1433 every 60 seconds, with three failures, and click OK. You'll notice that these steps are pretty sequential; by the time you get down to your load balancing rules, you have to have everything else in place. I should have remembered that. Anyway, Add, call this SQL. We verify our front-end address, we're going to load balance on 1433 for SQL Server, and we're going to the SQL tier backend pool, using that probe, with no session persistence, a 15-minute idle timeout, and floating IP disabled. And done, there you have it. So we've successfully
deployed internal and external load balancers, very neat.
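
For comparison, the only structural difference in PowerShell is the front end: an internal load balancer's front end is a static private IP in a subnet rather than a public IP. A minimal sketch, with the subnet name and the 10.0.20.5 address as assumptions:

# Internal front end: a static private IP in the backend subnet
$vnet     = Get-AzureRmVirtualNetwork -Name 'azvnet1' -ResourceGroupName 'azcourse'
$beSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name 'backend' -VirtualNetwork $vnet
$frontEnd = New-AzureRmLoadBalancerFrontendIpConfig -Name 'intlb1-fe' `
    -Subnet $beSubnet -PrivateIpAddress '10.0.20.5'   # illustrative address

# The TCP 1433 probe and rule then follow the same pattern as the external example
$probe = New-AzureRmLoadBalancerProbeConfig -Name 'SQLprobe1' -Protocol Tcp -Port 1433 `
    -IntervalInSeconds 60 -ProbeCount 3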

Demo: Getting Started with Application Gateway

Here we are back in the Azure portal. I'm going to show you how to get started with the
Application Gateway deployment. We already have our working external and internal load
balancers. So I don't want to mess that up. But I at least want to give you the need-to-know
information. So we've selected Application Gateways in the portal, and we'll click Add to
begin the deployment. You notice that there's a wizard here where we configure basic
settings. And you can choose either the standard tier or the web application firewall. Check
the documentation or the exercise files for more on that, and also so you can learn the difference between the stock-keeping units, or SKUs: small, medium, and large. Now, you can actually deploy a farm of application gateways, and if you read the little help text, it says that an instance count of two is recommended for production workloads. Then it's subscription, resource group, and location, as usual. Now it tells us here, and this is important: your virtual network and
public IP address must be in the same location as your gateway. Now one thing that's bound
to confuse some people: when you get to step two and you're asked to choose a virtual network to associate with the gateway, we know that azvnet1 is an appropriate virtual network, but guess what? It says the virtual network doesn't have an eligible subnet. You have to have a separate, empty subnet available for your application gateway. Do that first, and then you can come in and choose a subnet and a front-end IP configuration, just like you do with the external or internal load balancers. And then, depending upon which tier you chose, web application firewall or the straight-ahead application gateway, there are a couple of other choices to make. I'm
going to leave the demo here at that point. I've covered enough to get you started. But it's a
really neat tool. We're going to revisit this setup, 'cause the application gateway setup is very similar to the network virtual appliance setup that we'll do in the next module, so I don't want to take too much time here on something that we're going
to cover deeply there.
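
Since the empty subnet trips people up, here's a small hedged sketch of carving one out ahead of time with Azure PowerShell; the subnet name and address prefix are assumptions.

# Dedicated, empty subnet for the application gateway
$vnet = Get-AzureRmVirtualNetwork -Name 'azvnet1' -ResourceGroupName 'azcourse'
Add-AzureRmVirtualNetworkSubnetConfig -Name 'appgw-subnet' `
    -AddressPrefix '10.0.30.0/24' -VirtualNetwork $vnet
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet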

Azure Traffic Manager

The third type of Azure load balancer we're learning about in this module is Traffic Manager. Traffic Manager works at OSI Layer 7, the application layer, but it's not the same as Application Gateway; in fact, it's very different. Traffic Manager is a DNS-based load balancer. It does use health probes, like the original Azure Load Balancer does. What's also different about Traffic Manager is that the client communicates directly with the endpoint, instead of through Traffic Manager. Normally, when you think of a load balancer, all of the client application connections hit the external load balancer, and then the load balancer decides which internal node to speak to. What we have in this illustration, up on top, is two geographic regions, region one and region two, and we've placed Traffic Manager in front of those two regions. You have a single endpoint in your application settings that goes to Traffic Manager. Traffic Manager will look for the lowest-latency link, whether that's in region one or region two; we can assume these pools host the same application. Then Traffic Manager, using DNS referrals, will give the client the IP address of a specific endpoint. Or, more likely, you have either an Azure Load
Balancer or an Application Gateway running. So when I say client communicates directly
with the endpoint, in all likelihood, that endpoint is going to be a region-specific load
balancer. This sums up the Traffic Manager connection flow where the user with their
browser uses their own recursive DNS service at their Internet Service Provider or in their
business to do a lookup at a Traffic Manager URL, Contoso.TrafficManager.net, for instance.
That will go to the Traffic Manager instance, as you would expect. And Traffic Manager, as I
said, will do some tests to determine the lowest-latency connection. The idea is that you've geo-distributed your application, and your goal is to provide the lowest-latency, highest-quality service to your users. Traffic Manager will send your ISP, or whichever recursive DNS server made the query, that referral; the record will come back to the user, and then they will connect directly to that region, either to the web application or to the load balancer, as I said a moment ago.

Traffic Manager Routing Methods

The final thing you should know about Traffic Manager is that when you configure it, there are a few different routing methods you can select. There's priority routing, which relies upon a primary endpoint and uses backups only when necessary. So the idea here is that
Traffic Manager is by default going to just one region. But you have another one waiting in
the wings that if something happens connectivity-wise with your first endpoint, Traffic
Manager will go to the other. So this is a failover use case. Another is weighted, where you
can distribute traffic either equally among different regions or not. You can administratively
say I want most traffic here in east US, 'cause we know that's where most of our audience is.
But also we want to do west US and maybe one of the European endpoints. You see what I
mean? Performance means that we've set up Traffic Manager to hit geographically distributed
endpoints where users will hit their closest geographic endpoint based on the Traffic Manager
latency checks that I told you about. And, finally, geographic is where you, as an
administrator, pre-select your geographic regions. And basically if the user, the customer,
falls within a particularly defined geographic region, that's where Traffic Manager's going to
route them to, you see? Geographic is useful if you already have a solid idea of where your
audience base is. You may be using Traffic Manager internally in your organization, and
you're based in two or three different countries. So you can create geographies for those
countries. And rest assured that each employee pool will be routed appropriately. A more
recent addition to the Traffic Manager feature set is what's called Traffic View. And I love
this technology, because it gives you deep insight into Traffic Manager patterns. Specifically,
the data that you receive through Traffic View allows you to understand specifically where
your users are located geographically. You can view traffic volume per geographical region.
You can get latency insights. And then you can deep dive and perform analysis into traffic
patterns over time across Azure regions. In other words, Traffic Manager Traffic View takes
any possibility of guesswork off the table. You will know where your users are coming from
in the world and how Traffic Manager is distributing that traffic using DNS referrals. And then
you can take that knowledge and use it to adjust your topology for even better performance
and lower latency.
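
If you prefer scripting, the routing method is just a parameter when you create the profile. A hedged Azure PowerShell sketch; the profile name, DNS name, and monitor settings are illustrative:

# Traffic Manager profile using the Performance routing method
New-AzureRmTrafficManagerProfile -Name 'azcourse1' -ResourceGroupName 'azcourse' `
    -TrafficRoutingMethod Performance -RelativeDnsName 'azcourse1' -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath '/index.html'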

Demo: Configure Traffic Manager

Now, for Traffic Manager, I'm not going to use the Visio drawing. Instead, for simplicity, we're going to do Traffic Manager in the Azure App Service platform-as-a-service scenario. You can do Traffic Manager with just external Azure Load Balancers as your endpoints if you want to, but, again, I chose to go the PaaS route here. So what I'm going to do is a global search for app services, then scroll down or click with the mouse and press Enter. I've created basically the same web app, two instances of it, in two different regions. You'll see I have azcourseappeast and azcourseappwest, so
they're geographically distributed right out of the box. Let me select the east one and choose
Browse to pop it open in a browser tab. And while that's working, let's bring up
azcourseappwest and do the same thing with it. And, again, we're talking about the world's
most basic website. All it does is give you the same text except it gives you a different
endpoint name so you know which one you're at. This is the west region, this is the east
region. And we're going to assume that our user base is in one or both of those regions,
depending. And we want to make sure that our application calls, that is, our DNS registration for this site, go to Traffic Manager in one spot, and users will be directed to the appropriate geography based on latency. You see what I mean? So let's come up here. I don't know if I have a shortcut for Traffic Manager up here. Yep, I sure do: Traffic Manager profiles. So let's
go to that and click Add. That's been a consistent step all along, hasn't it? Now you can alter
the DNS name later. Check the exercise files for notes on that. But, initially, it's going to be a
trafficmanager.net URL, which is not optimal for production environments. I have to think of a globally unique one. AZCourse1? Will that work? Let me press Tab. Yeah, it worked, actually; it's not too bad of a DNS name. Here's where we choose the routing method: Performance, Weighted, Priority, or Geographic. I'll choose Geographic, we'll put it in our azcourse resource group, and we'll create the profile. Again, another trend we've seen is that we create the object and
then we go in and configure it. It's a two-step thing. Let's check our notifications and clear
completed. And that succeeded, so that's cool. Let's refresh the view and grab our azcourse1 profile. I don't think we're going to need to do anything configuration-wise under Settings, but let's just check to verify. It looks like this is where we can change our routing method if we want to, and change the DNS time to live. And then, for the endpoint health checks,
it's either HTTP or HTTPS. It's going to look at the root of the web server's content directory.
Endpoints is really the big deal here. So let's go to Add. And we can choose Azure endpoint,
an external endpoint that's outside the Azure cloud, or what's called a nested endpoint. A
nested endpoint is a special case endpoint, where you can actually combine Traffic Manager
profiles to create more flexible routing schemes and support larger and more complex
deployments. We're going to go Azure endpoint here. And we'll give it the name of West.
And then we choose the target resource type: you could do cloud service, which is something we'll never use, that's Azure v1; app service; app service slot, which, if you have multiple deployment slots for an app service app, lets you point to a specific one; or just a straight-up public IP address. That last one is what you'd use to go to a load balancer in the IaaS scenario. But we'll choose App
Service, and then we're going to choose which one we want. This will be the west endpoint so
we'll select that. And then, for our regional grouping, it's not giving me the flexibility I really want here. It's not allowing us to partition within North America; the closest we can get is the bucket for North America, Central America, and the Caribbean. Nuts. Let's see here, I'll choose United States and a representative state. Oh boy, this is a bit too granular; I'm glad it's optional, I don't want to do that. We'll click OK. Well, the good news is we can always
change the routing distribution method. So let's go to Add and let's just stay the course for
now. We'll create our east endpoint: we'll do App Service, choose the east App Service, choose our closest regional grouping, and choose the country where these assets are located. Okay, 'the following mappings are being used by other endpoints': United States, right. It looks like we're kind of hosed here, to use an IT term. And I'm glad to be making these mistakes here, because I want to underscore how important it is for you, the learner, to make these mistakes as you go along. Sometimes with computer-based training, the demos flow just like butter. Please understand, (chuckles) that's not reflective of reality many times.
(laughs) So let's go to configuration. And instead of geographic, let's just do performance.
And let's come back and review our west endpoint. Click Edit, yeah, okay. That looks good.
So let's go ahead and add our east endpoint now. App Service, App Service, east, boom. All
right, good deal. You want to make sure that the status of your endpoints is enabled and your
monitoring status is online. Now if they show up in a stopped monitor status, the first thing
you should check is... Well, there are two things you could check, actually. In configuration, make sure that the endpoint monitor is actually pointing to a valid path; I have explicitly set /index.html to make sure there's nothing left to the imagination. The other thing you have to make sure of, if you're using app services, is that you're running at least the Standard tier. If you're running Free, Shared, or Basic, those endpoints will always show up as stopped,
okay? So, anyway, let's come back to our Traffic Manager overview page. And we can test
this out. We'll verify that the profile status is enabled. And if we click the DNS name of the
Traffic Manager URL, boom, that was really fast. And it loaded the east endpoint, which
makes sense, because I am, in fact, in the east region of the United States. And when I do a
hard refresh here, it's not going anywhere. I'm staying with the east endpoint.
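
For completeness, here's a hedged sketch of adding the same two App Service endpoints to that profile with Azure PowerShell, assuming the web app, resource group, and profile names from this demo:

# Look up the web apps, then register each as an Azure endpoint on the profile
$east = Get-AzureRmWebApp -ResourceGroupName 'azcourse' -Name 'azcourseappeast'
$west = Get-AzureRmWebApp -ResourceGroupName 'azcourse' -Name 'azcourseappwest'

New-AzureRmTrafficManagerEndpoint -Name 'East' -ProfileName 'azcourse1' `
    -ResourceGroupName 'azcourse' -Type AzureEndpoints `
    -TargetResourceId $east.Id -EndpointStatus Enabled

New-AzureRmTrafficManagerEndpoint -Name 'West' -ProfileName 'azcourse1' `
    -ResourceGroupName 'azcourse' -Type AzureEndpoints `
    -TargetResourceId $west.Id -EndpointStatus Enabled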

Summary

In summary, what have we learned in this module? Well, first, there's the theme that it's all
about maximizing the customer's or the user's experience with your application. I don't care whether you're servicing your own business's people, external customers, or business partners; the idea with load balancing and geo-distributed applications is to give users the best, highest-performance, lowest-latency experience possible. That's what we're dealing with. To
that point, two tips I have, know your user base, understand where they are as much as you
can (chuckles) in the world, understand what their connectivity environment is, as much as
you can. You know that you don't have a crystal ball, you can't see everybody. But perform
due diligence is what I'm saying. Also, when working with Azure architecture, traffic flow documentation is key; you know that because I've shown you several drawings. I love Visio. Microsoft Visio is my own go-to tool. And, to that point, check out the exercise files for this module, because Microsoft actually makes available, for free, Visio stencils and separate PNG and SVG image files of the Azure icons. That's what I've been using all along, and you can take advantage of that to develop your documentation. All righty, well, we've talked a little bit in this module, and even in a couple of previous ones, about network virtual
appliances. Well, guess what we're going to cover in the next module? You see it. We're
going to do a whole module on these network virtual appliances. This is an important skill for
you to have as an Azure Networking Specialist. So I want to thank you for your participation.
And I look forward to seeing you then.

Network Virtual Appliances


Overview

Hello there, and welcome to Pluralsight. My name is Tim Warner, and this module is entitled
Network Virtual Appliances. The purpose of this module, the main learning takeaway, is for you to not only understand the use cases for Azure network virtual appliances, but also understand what goes into planning and deploying these things. In so doing, and
through a practical demonstration, we'll look at licensing, runtime costs, placement of a
virtual appliance in an Azure virtual network, and basic configuration. Let's get started.

Network Virtual Appliance Use Cases

Understand Azure Network Virtual Appliances. Let's define our terms. First of all, a virtual
appliance is defined as a pre-configured virtual machine image, ready to run on a hypervisor,
and virtual appliances are a subset of the broader class of software appliances. I've worked
with many systems administrators who were so accustomed to on-premises equipment, that
they have a difficult time wrapping their minds around what a virtual appliance is. You
normally think, if you're like me and you've been in the industry for a while, that an appliance
is a special-purpose piece of hardware that most likely runs a very thin operating system
layer, perhaps Linux, and then you have whatever line of business logic, whether it's
application firewalling or load balancing or whatever, running on top. Virtual appliances
offer you the same functionality, but in a virtual machine form factor. A relevant question
that you may have, I've certainly heard this from many in the field, is "Why would I want to
buy a third-party network virtual appliance when Azure includes its own virtual networking
devices?" And that is a relevant question, especially because we just came out of the previous
module where we looked at the Azure load-balancer, the application gateway, and traffic
manager. Well let's seek to answer that question right now. The major use cases for these
network virtual appliances in Azure are as follows. Number one to replicate on-premises
equipment in the Azure Cloud. You may have invested a lot of money, time, human
resources, capital into deploying network equipment from a particular vendor, whatever that
vendor is. So your employees, maybe you yourself, have invested time learning the OS, the
command-line interface OS language, to configure that firewall or whatever it is. Wouldn't it
be nice to be able to make use of those skills in the Azure Cloud? Well it's very likely that
you can indeed do that. And this allows you to save money and time by leveraging existing
expertise, as well as the potential for reusing your on-premises configurations directly in
Azure. Another reason that you may want to consider a network virtual appliance is if you
have need for more robust network services than the default Azure objects that Microsoft
provides. Some of those special-purpose use cases might be, as we've said, load balancing and firewalling, but we also have application delivery controllers and WAN optimizers. I'm going to take you to the Azure marketplace, and I encourage you to spend some time on your own looking
around and seeing what possibilities exist for third-party partnerships in this network virtual
appliance space.

Microsoft Azure Defense in Depth

This is a nice graphic from the Microsoft documentation that shows how Microsoft Azure
embraces the Defense in Depth security model. With Defense in Depth, we're looking at
security in a layered approach. When you think about your deployments on-premises, you're
thinking about everything from the physical campus and the physical security of your
datacenters to the physical security of your server racks and server appliances, and then the
network, the storage, the compute subsystems, what you're doing at each subsystem to
provide failover redundancy and security against intrusion. Well in a Microsoft Azure
context, Microsoft takes care of the Distributed Denial of Service, or DDoS, vectors, as well as protecting the public IPs that you can receive either dynamically or statically on your virtual machines. Then we move into your subscription, hearkening back to the shared responsibility model of cloud computing. We know by now that the virtual network is an isolated container, and we can constrain traffic within and between virtual networks using network security groups, and even take control of routing paths with user-defined routes, or UDRs. The network virtual appliance fits in really close to the center of the circle, or should
we say the focal point of this ellipse. And then ultimately, you have your Azure deployments,
your VM workloads that are running your line of business services.

Network Virtual Appliance Topologies

Now let's look at Azure network virtual appliance planning and deployment. Here's a sample topology. Now, you've heard the term "your mileage may vary": the specifics of the deployment are going to depend, in part, on the vendor. We're talking about third-party original equipment manufacturers who have partnered with Microsoft to license use of their network appliances, so we're talking about integrating an operating system disk image, a pre-configured virtual machine, into one of your existing virtual networks. What we have here, as you can see, is a virtual network that has two production subnets, front-end and back-end. Then we have the virtual network appliance on its own subnet; deploying a subnet specifically for the virtual appliance is almost always something you'll need to do, and if you have multiples, you can have a farm of appliances. We know that the system routes in Azure will automatically take care of routing traffic among your different subnets, but you're going to want to override that here.
Now, why do you want to use user-defined routes? Let's say that this virtual network appliance is providing robust firewalling that we want to use either as an adjunct to network security groups or potentially to replace them. What you want to do is force selected or all virtual machines in that virtual network to make all of their outgoing calls to the internet through the virtual network appliance. This is where you create what are called route tables, associate them with each subnet, and define what is called a default route, or next hop address. That means any traffic going out of the subnet goes through the virtual network appliance. These route tables are reusable objects; we can define them in the Azure portal, or we can do it programmatically with JSON. Another thing you need to do, because the virtual network appliance is proxying a lot of traffic and serving essentially as a router, is configure it for IP forwarding, and this is done in the Azure control plane, again using the Azure portal, Azure CLI, or PowerShell.
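
Enabling IP forwarding, specifically, is a one-property change on the appliance's NIC in the Azure control plane. A minimal sketch with Azure PowerShell; the NIC and resource group names are assumptions:

# Flip on IP forwarding for the appliance's network interface
$nic = Get-AzureRmNetworkInterface -Name 'appliance-nic' -ResourceGroupName 'appliance'
$nic.EnableIPForwarding = $true
$nic | Set-AzureRmNetworkInterface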

Azure Marketplace Partners

The Azure marketplace has a robust list of third-party partners; I'm sure you recognize many of the brand names, the company names, I should say, that you see on this slide. It's exciting,
I hope, because like I said, you may have invested quite a bit of money in one or more of the
vendor's products on-premises, and as you extend your on-premises footprint into the Azure
cloud, you can still make use of the robustness, and your support agreements, your licensing,
with these devices. I forgot to mention that earlier. If you're licensed on-premises, you may
have rights to deploy into the cloud as part of your license arrangement with the vendor. As
far as pricing, I know that I've teased this subject a bit in previous modules so this doesn't
come as a surprise, but the way that the appliance pricing works is going to boil down to the
vendor once again, you've heard that expression RTFM, "Read the friendly manual," well you
absolutely want to read the documentation that each vendor provides. I myself, when I choose
a network virtual appliance for a customer, you know what means a lot to me? Is how richly
documented the solution is, and you'll find some variance. You'll find that some vendors are a
little light on their docs for Azure deployment, and some are quite good. We're going to use a
vendor, Barracuda, in our demo that has really nice documentation, but anyway you see that
you can deploy these things using BYOL or Bring Your Own License, others have a run-time
cost integrated directly into Azure. As you see for the Barracuda NextGen Firewall has that,
others may be free and opensource, where you have free usage here and then you can pay for
support, and then others as Cisco says here, the price is going to vary. Now you'll find that
the same virtual network appliance sometimes appears in the marketplace under a couple
different guises, that is, you might see the same product as the BYOL, as well as the
integrated run-time. So again, it's going to pay for you to perform due-diligence. You'll notice
that the F5 solution here has a test drive, this allows you to deploy the solution and have it
run for one hour, just to give you a low barrier of entry of getting this thing up and running.
Others offer a free software trial, and then you can flip it over to the licensed version. Once
again, read, read, read, and perform due-diligence.
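If you'd rather survey these offerings from the command line, you can query the marketplace image catalog with the Azure CLI. This is just a sketch; the publisher string shown is an example and may not match the exact publisher ID a given vendor registers.

  # List a vendor's marketplace images; --all queries the live marketplace rather than a cached list
  az vm image list --publisher barracudanetworks --all --output table

  # Show a publisher's offers in a region, which is where BYOL versus hourly (PAYG) SKUs show up
  az vm image list-offers --location eastus --publisher barracudanetworks --output table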

Demo: Deploy a Network Virtual Appliance

Okay, let's get this party started. I want to show you how to deploy a network virtual
appliance in Azure. So to that point, let me pop open my trusty web browser, and this is
showing you the Azure marketplace, which is off the azure.com website, actually the URL is
azuremarketplace.microsoft.com. I always say this, but make sure to download the exercise
files, because you'll find this URL in the links list. Anyway, this is where we can get to all of
our Microsoft-provided as well as third-party Azure images. And you see we have different categories, or you could just start to search. For instance, if you were interested in deploying a firewall appliance, you can see that as soon as we start to type, some search suggestions show up down below - that's helpful. I'm going to press enter
and now we see a filtered list, and you see brand names that you may have familiarity with,
and as I've already said during the slide portion, you have different licensing models. But
we're going to focus on this one here, Barracuda NextGen Firewall. Now, full disclosure, I don't have any kind of relationship with Barracuda, so I'm not singling them out for any reason other than that I like how well documented their product is, okay? So we're using this as our case
study example. The vendor writes this up: they have an overview, and then there's a plans-and-pricing section where you can get the details on the different licensing models. It looks like this particular device has both an integrated run-time model as well as bring your own license. If you scroll down, it shows you a screenshot of the Barracuda web portal, which is what you'll wind up connecting to in your virtual machine, and ultimately you'll find
documentation links. This is going to be completely vendor dependent. What I found worked
the best was to go to the Barracuda website, their documentation site is
campus.barracuda.com, and I looked up the firewall and in their documentation, they have
several articles on Azure deployment, and this is one of the documents we're going to use as
our guide in the demo, and I give you these direct links in the exercise notes as well. There's a
related Barracuda article that goes through how to configure Azure Route Tables, or User Defined Routes. Very helpful for sure. So let's come back to the Azure portal; I want to show
you what I've done to lay the groundwork for the solution. We'll go to virtual networks, and if you've been following along with the course, you know we have the azvnet1 virtual network. Within that virtual network we'll navigate under settings to subnets, and we have our front-end and back-end, which is what we've been working with exclusively thus far, but you'll notice that I've created two additional subnets. They are /28 because I wanted to conserve our address space a little bit, be a little bit wiser; we could use /24, but that's far more addresses than we need. I'm creating a DMZ subnet at 192.168.30.16/28 on which we'll place our network appliance. Right now, it's an empty subnet, and in the name of proactivity, for what we're going to do in the next module, I created another /28 gateway subnet that we'll use when we create our link, our hybrid link between on-prem and the Azure
cloud. Let's open our favorites bar and go to virtual machines, because once again, this appliance is going to be a pre-configured virtual machine, so the workflow is quite similar to deploying a virtual machine. We'll go to add, and in the search list let me type Barracuda, and we can see we have BYOL, which is bring your own license, as well as PAYG, which is Microsoft shorthand for pay as you go. I'll select the NextGen Firewall solution, give it a click, and then click Create. It defaults the name for us, which is fine, and I'll leave that alone. We'll provide a management password, specify the subscription, and attach this to our azcourse resource group, and then, whoops! Look what happened: we get a red exclamation mark that says this resource group contains existing resources, so it looks like we're going to have to create a new resource group for this device. I'll call it BarracudaRG and hit okay to continue. Once again, remember that the specifics are going to depend on the vendor and the type of appliance; this is simply a case study example. In Barracuda land, you can specify an instance size using a property they call firewall size; I'll choose small. We have to associate an Azure storage account, so I'll give it a unique name and we'll use standard LRS. Next is the virtual network. This is unfortunate: it looks like it's forcing us to put the appliance on a separate virtual network. There are different ways around this, because we know our production virtual network exists elsewhere, and we know by now, if you've been following the course sequentially, that we could make use of VNet peering, for instance. I'll call this DMZNetwork, I'll leave the default address space, and under configure subnets it defaults to a /29 called Firewall subnet, so you're getting the general idea anyway. It proposes a static private IP address for the firewall, and of course the firewall is also going to need a public IP address; I'll use a static reserved address I'll call FWaddress and click okay. We're going to need a unique DNS name, so I'll call this azcoursebarracuda1 and click okay. I'll leave the default firewall VM
size, let it validate, keep your fingers crossed, validation passed, so we'll click okay. And we
have some terms of use to agree to, and then click purchase.
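For reference, a deployment like this can also be scripted with the Azure CLI. Treat the following as a sketch only: the image URN shown is a made-up example of the publisher:offer:sku:version format (the real string comes from the vendor's marketplace listing), and paid images generally require accepting the vendor's terms once per subscription first.

  # Accept the marketplace terms for a paid image (URN is a placeholder)
  az vm image terms accept --urn barracudanetworks:barracuda-ng-firewall:hourly:latest

  # Deploy the appliance as a VM from that image into its own resource group
  az vm create --resource-group BarracudaRG --name barracuda-ngf1 \
      --image barracudanetworks:barracuda-ng-firewall:hourly:latest \
      --admin-username azureuser --admin-password '<management-password>' \
      --vnet-name DMZNetwork --subnet FirewallSubnet \
      --public-ip-address-dns-name azcoursebarracuda1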

Demo: Configure a Network Virtual Appliance

Okay, once the deployment completes, we have some extra work to do here, and it's called service chaining: as I've said, the Barracuda device went onto its own virtual network. Let's head over to virtual networks. Because these two networks exist in the same Azure region, we can do peering no problem, and that peering is what gives us the service chain. Let's click the Barracuda network, let it load up, and go to peerings. We'll create a
peering, call it a service chain, and we'll select the appropriate virtual network azvnet1, we
definitely want to enable virtual network access as well as allow forwarded traffic. If we
hover over the information balloon, it says this setting allows the peers forwarded traffic,
which is traffic not originating from inside the peer virtual network into your virtual network.
We're not using gateways yet at this point, so we'll select okay. Let that happen. You're
probably thinking we need to make sure to establish the peering in both directions, so back in
the virtual networks page, we'll go to the azvnet1 and now do the same exact thing. Peerings,
add, and we'll select the dmz network and choose the same options there. Excellent. So we've basically linked two separate virtual networks, and now you can see the icon, the little chain icon, showing we have a connection. So if we go down to route tables (you can search for routes in the global search; I just added it to my favorites), this is where we can create a custom next hop address and associate it with our front-end and back-end subnets that actually contain our virtual machines. Depending upon the vendor and how their ARM templates work, they may actually create that route table for you. It looks like the Barracuda solution did in fact create a new route table for us, so let's give it a click and review its settings. We can do that by coming down to routes, and we can see very simply that the route uses 0.0.0.0/0 as the address prefix and the next hop is the private IP address of the appliance, which, you might remember from a few minutes ago when we deployed it, is its private IP address within the virtual network. The next step would be to associate the
route with subnets. So we can click associate and select which virtual network the association will be in, azvnet1. I'll choose back-end first, actually scratch that, let me come back to step two and let's just do front-end. Maybe the back-end needs to stay within that virtual network, because remember the back-end is our data tier. So let's just do the front-end subnet for now, and there you have it. That's how you implement user defined routes, or UDRs; now our front-ends will always send outgoing internet traffic through the Barracuda appliance. Our next step would be to obtain the public IP address of the Barracuda virtual machine; if we go to virtual machines and select the Barracuda machine, we would create a connection, a web connection, to that endpoint. Well, before we wrap up, there's one final thing that I almost forgot to mention, and I'm glad I remembered: we need to configure IP forwarding on the network virtual appliance. We do that at the virtual NIC level, so in settings let's go to network interfaces, and in the interface list we'll select the network interface to head on over to its blade. On the virtual network interface settings panel we'll go to IP configurations, and because we have only a single IP configuration, I'm comfortable flipping the switch for IP forwarding from disabled to enabled, and we'll save our changes. Simple as that.
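If you want to script the service-chaining piece, here's a hedged Azure CLI sketch. The resource group and network names match the demo, but because the two VNets live in different resource groups the remote side is passed by resource ID; depending on your CLI version the parameter is --remote-vnet or --remote-vnet-id.

  # Look up the resource IDs of the two virtual networks
  dmzId=$(az network vnet show --resource-group BarracudaRG --name DMZNetwork --query id -o tsv)
  prodId=$(az network vnet show --resource-group azcourse --name azvnet1 --query id -o tsv)

  # Peer DMZ -> production with forwarded traffic allowed (the service chain)
  az network vnet peering create --resource-group BarracudaRG --vnet-name DMZNetwork \
      --name servicechain --remote-vnet "$prodId" --allow-vnet-access --allow-forwarded-traffic

  # Peering isn't automatic in reverse, so create the mirror link as well
  az network vnet peering create --resource-group azcourse --vnet-name azvnet1 \
      --name servicechain --remote-vnet "$dmzId" --allow-vnet-access --allow-forwarded-traffic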

Summary

Cool, well this was a fun module to teach. I hope you got a lot out of it. First of all, by way of
summary, we see that these network virtual appliances allow you to maintain your on-
premises network management and all the investments you've made there seamlessly
integrated into the Azure cloud. In one use case, you could replicate your on-premises networking environment: maybe some businesses want to decommission their on-premises data center, move entirely to Azure, and use the same network management appliances. Better yet, what many companies decide to do, at least as an initial entree, is to extend their on-premises environment into Azure, and wouldn't you know it, coincidentally or otherwise (I have to tell you as your instructor that this is not coincidental), the module following this one is called Hybrid Cloud Networking, and we're going to deep-dive into the
various ways we can extend from on-prem into the Azure cloud. Thank you very much for
your participation as always, and I look forward to seeing you in the next module.

Hybrid Cloud Networking


Overview

Hey there, what's up? Welcome to Pluralsight. My name is Tim Warner and this module is
entitled Hybrid Cloud Networking. We're going to begin this module by looking at a typical
Hybrid Cloud scenario, specifically the one that I'm going to build, or we're going to build
together, over the course of this module. In understanding the Hybrid Cloud scenario in
Azure, we need to understand the different Azure VPN types, including the technique of
forced tunneling and then we'll finish with a consideration of ExpressRoute, really, Azure
VPN and ExpressRoute are the two primary methods for building a Hybrid Cloud. Let's get
started.

Our Case Study Environment

Microsoft Hybrid Cloud Environment. Here's a Visio drawing of what we're going to build in this module. You see right there the azcourse resource group; this should all look very familiar if you've been following the course. Notice that I've dropped the backend subnet out of
consideration and instead we have a dedicated subnet for our Azure VPN gateway. I want to
draw your attention specifically to that nexus or connection point, we have a link between an
Azure VPN gateway object in our virtual network, or associated with our virtual network, we
should say, going over the internet to our on-premises environment, which consists of an
Active Directory domain, and I have a virtual machine that's a domain controller with the FQDN DC1.company.pri, and we're going to take our on-premises VPN gateway and link it to
Azure, such that we have a persistent and secure connection, that then allows transparent
interconnectivity between the local network and the Azure virtual network, so this could, for
instance facilitate a domain join scenario, that's exactly what I'm going to do in the demo. I
love those vocab terms, a Virtual Private Network, or a VPN extends a private network
across a public network, that's how I normally define a VPN for my students, I say that a
VPN is a secure channel across an unsecure medium, the public internet. The VPN enables
users to send and receive data across the internet, as if their computing devices were directly
connected to the private network; there's the rub, that's the big deal. In a traditional VPN client scenario, you have maybe roving sales staff who have their corporate-issued laptops and want to connect back to the home network from their hotel room, or an airport WiFi hotspot, or whatever. But certainly we don't trust the public internet; I would never trust a WiFi hotspot in a cafe or an airport, let's say. So we fire up the VPN connection, we create that secure tunnel, and the remote node is able to take advantage of all of your on-premises configuration management and security. Same thing here in Azure.

Azure VPN Facts

Here we have a cornucopia of different Azure VPN facts, that we'll dive into ever deeper, as
we go through the module. First of all, we have some industry standard, vendor neutral
protocols at play, like Internet Protocol Security, also called IPSec, Internet Key Exchange,
IKEv1 or v2, so this means that Azure VPNs are compatible with any modern-day VPN
endpoint, that is, hardware that you have on-premises. Specifically we use this tunnel type for
Site-to-Site, and Vnet-to-Vnet VPNs. Now VPNs can be single or Multi-Site, which means
that a single Azure virtual network can have separate VPN connections to multiple on-
premises sites, that's a nice thing. The SSL/TLS based tunneling is used with a Point-to-Site
VPN scenario and if you're thinking, "Tim, you haven't defined these terms yet," that's true,
so we're gradually getting into it, okay. (laughs) Vnet Peering, we've already talked about, in
fact I included it in the demo in the previous module, here's where you can take two Vnets,
that are in the same Azure region and put them together as a logical container. Now peering
does not require a VPN connection, the VPN is actually optional and if you want to have a
secure Vnet-to-Vnet relationship, you want to deploy VPN gateways and create what's called
a Vnet-to-Vnet VPN. I've mentioned this before and I'll mention it again: we're dealing with networking in this course, and predominantly IPv4, so once again let me state how important it is that you design your network IDs carefully, so that you don't have overlaps between the different virtual networks in your subscriptions in the Cloud and the on-premises network IDs used in each site.
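One easy way to audit this is to dump every virtual network's address space with the Azure CLI and compare it against your on-premises ranges; a quick sketch:

  # List each VNet and its address prefixes so overlapping ranges are easy to spot
  az network vnet list \
      --query "[].{Name:name, AddressSpace:addressSpace.addressPrefixes}" --output table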

Site-to-Site VPN

Speaking of sites, the workhorse Azure VPN type is called the Site-to-Site VPN, or S2S VPN
for short, this is, as I said before, an IPSec/IKEv2 VPN tunnel between your on-premises
network and your Azure networks. Now, you'll note in this diagram that the numbers are a little small, so let me read them out. In this example, on-premises is 192.168.0.0/16, and the Azure Vnet is in an entirely different private IP address space, the 10.x range. That's fine; it's not like you have to be in the same ballpark with your network IDs, you just want to avoid a direct overlap, as I said before. Another thing I said before is that the VPN gateway you deploy in Azure can have multiple connections; the VPN gateway is essentially a container object, and this is what supports the multi-site VPN scenario. Going further, some other things you'll notice are that the gateway needs to exist on its own subnet. In this example we're using a /27; it's a good idea to preserve even your private IPs, and a /24 would most likely be overkill, because you're certainly not going to have 250-odd gateways, although it is possible to host more than one gateway in your gateway subnet. In this diagram, you see also
there's a management subnet with a jump box, we haven't discussed the jump box too much
in this course, but I do get into it in my other Azure Readiness training in this Pluralsight
Microsoft partnership and essentially the use case here is that you create a separate subnet,
populate, say a Windows 10 marketplace image VM and heavily screen or protect what's
allowed to connect into that management subnet and you'd be on-premises and this is with or
without the VPN, you would do all of your Azure administration from the context of the
jump box.
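Assuming names from this course's environment, creating the dedicated gateway subnet might look like the following; the /27 prefix is just an example, and newer CLI versions spell the parameter --address-prefixes.

  # The VPN gateway requires a subnet literally named GatewaySubnet
  az network vnet subnet create --resource-group azcourse --vnet-name azvnet1 \
      --name GatewaySubnet --address-prefixes 192.168.0.0/27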

Vnet and Point-to-Site VPNs

The Vnet-to-Vnet VPN, we've already seen this before, is the case where you may have
virtual networks in different regions, in this example, you see we have West US, East US,
and Japan East, we know that Vnet peering does not work across regions and you also may
have a use case need for security, so that's what this is simply allowing us to do, it's a cross-
region secure tunnel, we'll deploy an Azure VPN gateway for each virtual network and create
the same link, just like we have between on-premises in the Azure Cloud, from that
perspective, it's a very similar workflow indeed. The Point-to-Site, or P2S VPN is a one-to-
many relationship and what I mean by that is you as an Azure administrator, or Azure
professional, may want to be able to bring up a VPN from wherever you are, if you're on-
premises, then you'll go through your hardware VPN to cross a Site-to-Site VPN into the
Azure Cloud, but if you take your laptop home, or if you're working from home, you could I
guess VPN into on-prem and then connect to the Azure Cloud through that S2S VPN, but
Point-to-Site allows you more mobility. What you do in a Point-to-Site VPN is install an Azure VPN client on your workstation, and you can directly establish an SSTP VPN tunnel from that
location, I demo this use case in our other Azure Getting Started Readiness training.

Azure VPN Gateway Types

Now, what's possible with regard to these Azure VPN gateways? We know that in Azure, it's
a software-defined networking, or SDN world, so this isn't a physical appliance, like you
likely have on-premises. Well, the first gateway type is the policy-based gateway, which is
using static routing and not usable with the primary Azure VPN types, so you're generally
going to avoid the static policy-based, this is in my experience for backward compatibility
with Azure v1, you're going to go for the route-based Azure VPN gateway, which makes use
of dynamic routing protocols, not statically created ones and this as I said, is suggested by
Microsoft for use with your Site-to-Site, Multi-Site, P2S, and Vnet-to-Vnet VPNs. The Azure
VPN gateway comes in four stock keeping units, or SKUs. In looking at this table, which is pulled from the Azure documentation, I don't want you to get mired in the details; instead, look at trends. We have a basic tier and three standard tiers called VpnGw1, 2, and 3 respectively. The basic VPN gateway is intended for dev and test environments; it has a throughput of 100 megabits per second and a maximum of 10 Site-to-Site or Vnet-to-Vnet tunnels. The Gw1 SKU is dramatically faster; that was a big announcement in mid-2017, a six-times performance increase. The SLA or Service Level Agreement has also
been bumped up and you'll notice, we have a maximum of 30 tunnels, that's the case for
Gw1, Gw2 and Gw3, Gw2 goes up to one gigabit per second and Gw3 is 1.25 gigs per
second, very impressive. So Gw1 through three are intended for production, basic for dev test
and exploration.
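Picking the SKU and routing type happens when you request the gateway. Here's a minimal CLI sketch using this course's resource names; the gateway and public IP names are placeholders, and provisioning really can take a long while, hence --no-wait.

  # Create a public IP for the gateway, then request a route-based VpnGw1 gateway
  az network public-ip create --resource-group azcourse --name azuregatewayIP --allocation-method Dynamic
  az network vnet-gateway create --resource-group azcourse --name azuregateway \
      --vnet azvnet1 --public-ip-address azuregatewayIP \
      --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait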

Forced Tunneling

Forced tunneling is simply the process of constraining your Azure virtual network resources
to go back through the Site-to-Site VPN link to get to the internet. In this example, the
frontend tier, as would make sense, can connect directly to the public internet, but our mid-
tier and backends, we've configured forced tunneling, such that any internet calls are forced
down into on-premises, now why would you want to do that? Well, you may have
compliance requirements, where you need to audit, screen, and control all internet traffic for
your corporate servers, perfect use case, you're probably wondering, "Tim, you've told me
this several times, "how do you actually configure forced tunneling?" Let's take a look at that
next. To configure forced tunneling, you first lay the foundation, which is on the Azure side,
your resource group, your virtual network, your local network gateways. Now again, these
terms tend to all crisscross applesauce, as my seven-year-old daughter, Zoey says, bear with
me here, okay, as we work through this alphabet soup of acronyms and different technologies
and protocols, you'll see it all come together in the demo, I promise, but you lay the
foundation in Azure, and then here we go, drum roll please, forced tunneling takes place
through the route table, remember in the previous module, where we covered UDRs? Those
are User Defined Routes, or a default route, 0.0.0.0/0, it's similar to the default gateway idea
we have ordinarily with TCP/IP networking, what we're doing here is defining a user defined
route that defines the default route to point to your Site-to-Site gateway's IP address, alright,
and we also know, if you've come from the previous module, that the route table object can then be associated with individual Vnet subnets. Understand that you don't have to enable forced tunneling for all subnets in a Vnet; you can enable it only for selected ones. To do that, you would also need to deploy your Azure VPN gateway, which will give you its public IP address. That route table, by defining a default route and pointing to the Azure VPN gateway as the default site, is the secret sauce that allows you to selectively require forced tunneling. Let's say your backend subnet has a route table associated with it that defaults to the VPN gateway; by definition, any outgoing traffic that doesn't match a more specific route is going to go through the default route, which is, again, your on-premises VPN pipe.
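Here is a hedged CLI sketch of that forced-tunneling configuration, reusing names from this course; the gateway-default-site step assumes a local network gateway named localgateway already exists, and parameter names may differ slightly across CLI versions.

  # A user-defined default route whose next hop is the virtual network gateway
  az network route-table create --resource-group azcourse --name forced-tunnel
  az network route-table route create --resource-group azcourse --route-table-name forced-tunnel \
      --name default-to-onprem --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway

  # Associate it only with the subnets you want constrained, e.g. the back-end tier
  az network vnet subnet update --resource-group azcourse --vnet-name azvnet1 \
      --name back-end --route-table forced-tunnel

  # Tell the VPN gateway which on-premises site should be the default route target
  az network vnet-gateway update --resource-group azcourse --name azuregateway \
      --gateway-default-site localgateway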

ExpressRoute

Now let's look at ExpressRoute. ExpressRoute is a great fit for businesses, that have a need
for a high speed connection to Azure and the security and isolation of avoiding the public
internet, that's what ExpressRoute is in a nutshell, it's a secure Azure connection, that
bypasses the public internet. Under the hood, you're participating in a Multi-Protocol Label
Switching or MPLS WAN cloud, MPLS is a longstanding WAN technology. The
ExpressRoute port speed depends upon which ExpressRoute gateway you choose. There are three: they range from the standard ExpressRoute gateway, with bandwidth up to one gigabit per second, up to the ultra performance ExpressRoute gateway, which caps out at nine gigabits per second; there is also a middle tier, called the high performance ExpressRoute gateway, with a maximum bandwidth of two gigabits per second. Now look at the topology here: we have our on-premises network, which has one or more (definitely more than one would be advisable, for redundancy) local edge hardware routers, and you have a separate ExpressRoute MPLS circuit to Microsoft's edge routers. Once you're at that point, the sky's the limit; you can peer not only into Azure virtual networks but also into other Azure public services,
including Office 365. If you want to go a little bit further, what are the connectivity models used in ExpressRoute? Well, there are three. The first is called CloudExchange Co-location; the use case here is that you may have ExpressRoute providers available at your colo. You may have, for instance, some of your infrastructure on-premises in a local data center and the rest of it, or some of it, in a colocated data center in your city, and ExpressRoute providers are often available at such facilities. Here in Nashville, Tennessee, our main ExpressRoute provider is Comcast, or Xfinity, whatever they call themselves nowadays, and if you have that connectivity available at your colo, that would provide the entry point to your ExpressRoute link. If you don't use colocation, you can do a direct Point-to-Point ethernet connection; this is again going to involve a relationship with a third-party ExpressRoute provider, typically an internet service provider.
participate in an MPLS WAN cloud as your internet connectivity method or maybe as your
private cloud with other sites, if you're geographically distributed, in this model, you're
integrating Azure into your MPLS WAN cloud as simply a separate participating node, isn't
that cool? So I think the take home message here is that there are many options available to
you to make use of ExpressRoute, if you're interested. Now if you want stuff like pricing
details, you can check the Azure.com website directly or you could look in the Exercise Files,
where I've given you some targeted links.
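For completeness, requesting an ExpressRoute circuit is also just a resource deployment; the provider, peering location, and bandwidth below are illustrative values only, and the circuit stays unprovisioned until the connectivity provider completes their side.

  # Create an ExpressRoute circuit (values shown are examples, not recommendations)
  az network express-route create --resource-group azcourse --name azcourse-er \
      --bandwidth 200 --provider "Equinix" --peering-location "Washington DC" \
      --sku-tier Standard --sku-family MeteredData

  # Check whether the service provider has finished provisioning their end
  az network express-route show --resource-group azcourse --name azcourse-er \
      --query serviceProviderProvisioningState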

Demo: Deploy a Site-to-Site VPN

Okay, in this demonstration, I'm going to teach you how to create a Site-to-Site VPN between
an on-premises environment and an Azure virtual network. You're looking at the desktop of a
Windows Server 2016 domain controller named DC1.company.pri; here we can see that in the good old Active Directory Users and Computers console. I also have an elevated PowerShell prompt; you see I've called Get-NetIPAddress and selected out the InterfaceAlias and
IPAddress properties, as it happens, this is a multi-homed server with two network interfaces,
but the LAN, the local LAN network ID is 10.1.10.0/24, that's going to be important in a
moment, as we continue our configuration. But for now, let's pop over to our Windows 10
administration workstation and we'll log in to the Azure portal and get started. Now, we need
to define not only our Azure virtual network gateway, our VPN gateway, but we also need to
notify Azure where our local network gateways are, this is going to be the VPN concentrator
endpoint in your on-premises environment. Let me select local network gateway and we'll click Add to deploy one. I'll give it the imaginative name localgateway, and the IP address field needs to be the public IP address of your VPN gateway device. Address space is your local, internal IP address range, and you want the top-level site address range here; in my case, as I just showed you, my range is 10.1.10.0/24. Notice you can add additional ranges here to include as comprising the local network side. I'm going to add this to my azcourse resource
group, make sure the location and subscription are correct and we'll click Create, once that
local gateway's been deployed, let's go into its settings list and you can review the
configuration and make a change under Settings, Configuration, if for whatever reason your
public IP address on your VPN gateway changes and you can modify your address spaces
here, clicking the ellipses gives you the Remove option and if you click in the text box, it
actually is in fact editable, which is cool. Now the gateways are container objects, as I said
earlier, which means that we can go to Connections, Add to create a connection, so this
would be a connection from your on-premises environment to Azure, now you'll notice here,
it's asking us to specify a VPN or a virtual network gateway, we don't have that yet, so we
can't complete that configuration here. Instead, we're going to need now to go to Virtual
network gateways and deploy an Azure VPN gateway, let's go to Add as usual, traditional
workflow here, I'll call this azuregateway and notice the Gateway type can be either VPN or
ExpressRoute, so you actually use the same form to configure an ExpressRoute connection,
assuming you have an ExpressRoute subscription, it defaults to Route-based, we talked about
the different VPN types, I'm going to leave Standard as the SKU, you can choose between
Basic, Standard and High performance. Which virtual network is this gateway going to
connect to? We'll use the chooser and select our azvnet1 virtual network, I've mentioned that
the gateway needs its own subnet and it's proposing to use the 192.168.0 subnet, what's nice
about this is that as long as you know in advance which network ID you want it to use, you
can have Azure create the subnet in the virtual network. These little information items are
awfully useful, it tells us that azvnet1 doesn't contain an empty gateway subnet, that's right, I
forgot to add that subnet myself, so it's proposing this one, that's perfectly fine, even though
it's the /24, I hope it makes sense that we're going to definitely need a public IP address, so
we're going to create a new one here, that defaults to azuregateway, I'm going to add IP to the
end of that, so I know what it is and then it looks like we're ready to go, we'll click Create.
Now this process, it tells us down below, that the provisioning could take up to 45 minutes
and Microsoft isn't kidding, when they tell you that, after you deploy your gateway, you
might as well get busy doing other things, because it's going to be a little while waiting.
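If you prefer to script this part, the local network gateway piece looks roughly like the following; the public IP shown is a documentation placeholder for your on-premises VPN device, and the virtual network gateway itself is created the same way as in the earlier VpnGw1 sketch.

  # Register the on-premises VPN endpoint and the address space behind it
  az network local-gateway create --resource-group azcourse --name localgateway \
      --gateway-ip-address 203.0.113.10 --local-address-prefixes 10.1.10.0/24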

Demo: Site Project - Azure Cloud Shell

Now I'm not going to keep blabbing my mouth during the entire time, that this deployment is
in progress, but I'm going to take advantage of this opportunity to teach you a little something
extra and that is the Azure Cloud Shell, it's really important that if you haven't begun learning
Azure CLIv2, that you begin now, I trust that your PowerShell skills are at least adequate, but
Cloud Shell will eventually give us PowerShell access, but CLI is what we have now, check
it out, up here, next to the Notification bell, if you click Cloud Shell, you get a pane, now the
first time you do this, you'll be prompted to create a storage account, that Cloud Shell will use
as its environment, I've already done that, so it takes me right in, basically you're getting the
Azure CLIv2 running in a docker container and unfortunately you can't zoom the text in, so
it's not going to be really useful for me to show you fine detail with these commands, but
you'll notice the Toolbar allows you, it defaults to the Bash Linux environment, but
eventually you'll have PowerShell available, you can restart the Shell, get help, give feedback
and there's a link to the Azure CLIv2 documentation, this is really cool, if you just type az
with no other parameters, you get a list of base commands, so for instance, you'll want to
make sure that you're connected to the appropriate subscription, so you'd use the account
context for that and then I find that using -h is a really nice next step to give you a subgroup
or a subcommand, so it says az account list will get a list of all subscriptions in the account
and the results come back in JSON, but you can override that. I'm going to use Up Arrow and use the -o parameter for output, and you can force it into table view, which is easier to read. It's really cool, and you can use shell commands here; if you understand Bash at all, clear will clear the screen. I happen to know that if we want to look up the public IP address for the gateway, we can go into az network. Once again, if you're wondering what to do, do -h, and we have a public IP category. Up Arrow, let me get rid of the -h and do pub, Tab, list. Again, it comes back in JSON, so I need to remember to do an -o table, and you can add that suffix as part of your persistent configuration if you want to see the results by default in tabular format. Now I have a number of IPs; again, the text is too small for you, I understand, but our three virtual machines, sqll1, web1, and web2, all have reserved public IPs. It looks like Azure hasn't yet created an IP address for our Azure gateway, though, and we desperately need that public IP in order to complete the configuration. Now, the last thing I'll mention with
this Cloud Shell, you can use the mouse at the boundary to shrink or expand the window, you
can minimize it and then bring it back by clicking the button, you can maximize the Cloud
Shell and then of course, you can close it, if you close it and then click Cloud Shell again, it's
going to request a new Cloud Shell, so it's going to spawn a new container for you, it's a good
use case of docker containers in an Azure context.
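Collected in one place, the Cloud Shell commands used in this walkthrough are essentially these:

  az account list --output table            # list subscriptions and confirm the right context
  az network public-ip list --output table  # look for the gateway's public IP once it exists
  az network public-ip list -h              # -h shows each command group's subcommands and arguments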

Demo: Complete and Test the VPN

Because our DC1.company.pri box is a domain controller in a DNS server, we're going to
need to make sure to associate this custom DNS server IP address with the virtual network
configuration. So let's head on over to the virtual networks blade and get into azvnet1, go to
DNS servers and we're going to override the Default and add in 10.1.10.11 and save that
change. Alright, at long last, it looks like it's completed and there's that public IP address, that
we need of the Azure VPN gateway. I'm going to select it and do a Control+C to copy it. If you're following along at home or at work, you might be wondering, "Wait a minute, where did this public IP address column come from? It's not one of the default columns." Something you should know, just from a tips-and-tricks category, is that up here you can click Columns and bring out columns that you didn't know were there, and not only can you do that, but you can actually reorder them; you see the grip handle, so you can simply drag and drop to put them where you want, or you could reset the original column list. Good deal. So let's click this
gateway, go to Configuration and verify our settings, just one actually, the SKU, Connections
is where we want to be for the Site-to-Site and the Vnet-to-Vnet, if we want to do that, notice
that to do Point-to-Site, we can set that up right off of the Azure VPN gateway object, but
under Connections, Add. We'll call this S2SVPN1; for Connection type, it's going to be Site-to-Site IPSec. It defaults to the current gateway, and we can't change that for obvious reasons. We've defined a local network gateway, so we'll select that one. We need to create a pre-shared key that will serve as a handshake, a basic validation between the two gateways, so add that in. Everything else looks okay, so we'll click OK, and that'll create the connection on the Azure VPN gateway side. We'll also want to go over to local network gateways and verify
that the connection shows up on that side, you don't have to create the connection twice
fortunately, if we go to localgateway, Connections, we can see our connection, it's in an
updating state right now, let's give it a click and one piece of data you can see in the
Essentials group is Data in and Data out, which is nice. Now, you may have been thinking, "Hey Tim, isn't that on-premises machine a domain controller?" And you'll recall that yes, it is. In fact, let's flip over to DC1.company.pri; it's not only a domain controller but a DNS server with the IP address 10.1.10.11, which is exactly why we needed to point the Azure virtual network at that DNS address, the change we made earlier on the virtual network's DNS servers blade. Now, as far as your
on-premises VPN router configuration, the Azure documentation does a really good job there
and I want to strongly encourage you to look at the Exercise Files that accompany my
module, in order to get it, in fact, what I did in this case is I actually used the Routing and
remote access service in Windows server and I set up this machine here as a router, I went
through the Routing and remote access configuration wizard, I created a Demand Dial
network interface here and if we look at its properties, it's using an IKEv2 Tunnel and for
pre-shared key, this is where I plugged in the password1, that we put up on the Azure side
and if everything goes well, you'll need to right-click and manually connect, because this is in
fact a Demand Dial interface, you want your Connection State to be Connected. Okay, so to
finish this out, let's look at what you can actually do, once the Site-to-Site VPN is up,
remember this is the DC1.company.pri domain controller that's on-premises. Let me clear the screen to get rid of all that stuff, and we should be able to do a ping directly to one of the VMs in the Azure Cloud; you'll remember web1 is 192.168.10.10. Bang, there you go. And now
let's switch over to our Windows 10 machine and we can make a connection to our web1 VM
directly in the portal, I'll click connect, I'll save and open the RDP file, let's authenticate, I
have an elevated PowerShell command up here, let me run good old ipconfig /all, because I
want to show you a couple of things here, here is the private IP address, that we just pinged
from on-premises and notice that for the DNS servers property, this is coming from the
virtual network configuration, that I showed you a moment ago, that's awesome. To bring this
full circle, let's clear the screen again and do a ping on 10.1.10.11, our on-premises machine. That's just fine. Now let me do an ipconfig /registerdns to create an umbilical cord with that domain controller, and let's see quickly if we can join the domain. We'll right-click, go to
System and in the System Control Panel, I'll change settings, we'll change the name and
domain from Workgroup to company.pri and there we go, we're being prompted for on-
premises domain administrator credentials, this Site-to-Site VPN is fully ready to go.
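The two remaining Azure-side steps from this demo, pointing the virtual network at the on-premises DNS server and creating the connection itself, can also be done from the CLI; a hedged sketch using this course's names:

  # Point the virtual network at the on-premises DNS server
  az network vnet update --resource-group azcourse --name azvnet1 --dns-servers 10.1.10.11

  # Create the Site-to-Site connection between the Azure and local gateways with a pre-shared key
  az network vpn-connection create --resource-group azcourse --name S2SVPN1 \
      --vnet-gateway1 azuregateway --local-gateway2 localgateway \
      --shared-key '<pre-shared-key>'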

Summary

What have we learned in this module? This covered quite a bit of ground, I think, first, we
have that the Hybrid Cloud scenario is remarkably low friction, when I first started getting
into Azure, I was a little afraid of Hybrid Cloud, because I had visions of incompatible VPN
hardware and complex VPN setup and it's just not the case, with Point-to-Site, you don't even
need a hardware VPN appliance, with Vnet-to-Vnet, it's all done with Azure software defined
networking, when we have the on-premises side, there's so much flexibility, because the
IPSec/IKE protocol stack is vendor neutral, you can use just about any VPN concentrator
under the sun nowadays to set up the local side of the connection, the possibilities are
endless, to use the cliche. (laughs) A common use case for Hybrid Cloud, and I should have mentioned this earlier, would be to use Azure as a disaster recovery site; that's a perfectly good use case, and it can save you a lot of money compared with doing that on-premises, for sure. Another is very fast, agile dev/test environments, allowing developers to spin up simple to complex environments, using containers or not, in an Azure virtual network, or potentially using Azure Container Service or PaaS; the agility there is just crazy. And then we have cloudbursting. Cloudbursting is where you're doing maybe the bulk of your processing on-premises in your private Cloud, but when usage reaches a particular threshold, you burst out into the Azure Cloud. Your users, those who are consuming your services, don't know they're connected to Azure as opposed to your on-premises environment, and that's the great abstraction and beauty of the Cloud: users don't need to know that. Bursting also works in reverse; when the usage peak is gone, you come back down to save Cloud usage costs and you continue to use your own services. In case you're wondering, yes, Azure Stack fits
neatly into this Hybrid Cloud scenario and we cover Azure Stack elsewhere in this Azure
Readiness training. Alright, well we have one more module left in this course and we're going
to revisit monitoring, we've done a little bit of monitoring from a networking perspective,
we'll continue that discussion and also review Microsoft's suggestions for securing Azure
virtual networks. Thanks as always for your participation, I look forward to seeing you then.

Monitoring and Securing Azure Virtual Networks


Overview

Hello, there, and welcome to Pluralsight. My name is Tim Warner. And this module is
entitled "Monitoring and Securing Azure Virtual Networks." This is the final module in our
networking course, and we've got quite a bit of ground to cover, actually. We'll begin by
examining monitoring virtual networks, strategies for that, including Azure Monitor, Log
Analytics, Network Performance Monitor, and open-source tools. After that, we'll turn our
attention to securing virtual networks, examining some of Microsoft's given best practices in
this domain, as well as looking at a fantastic tool called Azure Security Center. Let's get
started.

Azure Network Monitoring Workflow

Monitoring Azure virtual networks. You know, I've been in ethernet networking for a long,
long time, prior to the cloud, I daresay, and it's always been a dark art, at least initially, how
you visualize network communications flow. The great news is that technology has advanced
such that it's become almost trivially easy to gather the data. So, data gathering is just one
part of your workflow. The other is actually interpreting those results to solve network
performance problems, connectivity issues, to optimize traffic, minimize latency, et cetera.
So, let's look at what Microsoft Azure gives us by way of built-in network monitoring tools.
The first thing you'll want to do is make sure that at a resource level, you're enabling diagnostics wherever possible. For example, earlier in this course when we looked at network security groups, we made sure to enable diagnostic logging for those objects such that the counters can, in fact, be captured. Now, we're not really getting into guest-level monitoring, but remember
that your Windows server and Linux virtual machines will have their own internal metrics
that can be surfaced through Azure as well. I'm thinking in Windows server things like the
event log system, performance monitor counters, and so forth. Azure Monitor is an excellent
platform for inspecting monitoring data across resource groups so you could gather up not
just NSG-by-NSG metrics but look at them all as a group. It's an excellent tool to put in your
tool belt. Another, we looked at this one already, is Network Watcher that enables you to do
capture analysis where you can perform packet captures and then export that capture data
into, say, a third-party tool. We'll look at that, actually, later in this module. If you go into
Log Analytics, Operations Management Suite, in other words, there is a solution feature in
there called Network Performance Monitor that enables you to look on a subnet-by-subnet
basis for trouble spots, and then there's a very powerful log search capability that allows you
to construct simple to advanced queries and perform reports and do Power BI dashboards and
so forth. Long story short, there are many good tools here summarized on this slide that give
you full visibility into your Azure IaaS deployments. And this is outside any security or performance monitoring solutions you may already use organizationally, which you can continue to use by, for instance, bringing up a site-to-site VPN between your on-premises environment and the Azure virtual network and then installing agents on your virtual machines as necessary.
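As a concrete example of that first step, enabling diagnostics on an NSG can be scripted like this; the NSG and storage account names are placeholders, and the two categories shown are the standard NSG log categories.

  # Send NSG event and rule-counter logs to a storage account
  nsgId=$(az network nsg show --resource-group azcourse --name web-nsg --query id -o tsv)
  saId=$(az storage account show --resource-group azcourse --name azcoursediag --query id -o tsv)
  az monitor diagnostic-settings create --name nsg-diag --resource "$nsgId" --storage-account "$saId" \
      --logs '[{"category":"NetworkSecurityGroupEvent","enabled":true},{"category":"NetworkSecurityGroupRuleCounter","enabled":true}]'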

Open-source Tools

I mentioned open-source tools. One good example of this, and it's in the Azure
documentation, there's a link in the lower left of this slide, but also check the exercise files
because there's a lot of good knowledge that I've consolidated there. We could have, for
instance, a web application running in a VM, and that's all well and good. More to the point,
we can set up a separate monitoring VM, maybe it's a jumpbox, and we could install a tool
like CapAnalysis. This is an open-source tool that allows you to digest and visualize and gain
insights from network packet capture. So, the workflow here would be to run the Network
Watcher from Azure on your VM, gather the capture file itself, and then import it into
CapAnalysis. Isn't that great? This is a perfect example of how we can integrate inbox Azure
tools with open-source tools in the community. CapAnalysis is fantastic because as I said
earlier, it's one thing to gather numbers and statistics about your network. It's another one to
visualize them and make intelligent conclusions. This is a representative screen capture from
CapAnalysis and just the general workflow I want to show you down below. You can see this
waveform diagram that shows you graphically the density of different protocols. And I know
the text is probably too small for you to see on your viewer, but it should come as no surprise
the bulk of this traffic is HTTP (laughing) and SSL, and we have some Skype traffic going
on. So, we're able to characterize traffic, see what's the most chatty protocol, what's the least
chatty protocol. You can do geo-distributed traffic. It's just really awesome that we can
integrate this third-party, open-source solution to help us evaluate the data that Azure gives
us.

Demo: Enable Diagnostics and Use Azure Monitor

Okay, in this demo, I want to begin a journey of looking at the various levels of monitoring available to us in the Azure networking subsystem. The first step, as I said before, is to make sure that you're enabling monitoring on the object wherever possible. Previously we looked at enabling diagnostics in network security groups, but because we're in an IaaS context, let's go
to virtual machines and take a quick look at that. I'm going to grab my web1 VM by
inspection here. And under monitoring, we have a diagnostic settings option. So, what you
may want to do by way of habit is go to a different type of resource that you have and then
filter the settings list for D-I-A-G, diagnostics, and see what's available there. Azure is a
constantly evolving ecosystem, and it looks like the Azure development teams have
revamped the diagnostic settings page for each virtual machine. It's much more robust than it
was, frankly, as of this recording, even from a couple weeks ago. But I've already enabled it,
so I'm seeing a slightly different screen from what you see before you enable it. But we have,
for instance, a section for performance counters. This deals with guest-level diagnostics,
going into the Windows server virtual machine and being able to pull out data from inside the
machine. Logs would certainly be another example, the event log system, crash dump files,
which would be memory contents, or just memory dumps from when a process crashes. This,
and by this, I mean VM-level diagnostic settings, takes place through the Azure diagnostics
agent, which winds up falling back on a storage account. And different administrators have
different approaches here. Some like to consolidate as much as possible into a single storage
account. Others like to partition them, say, by VM or by resource. It's totally up to you on
how you want to handle that. Ultimately, though, the best baseline place to go to look for
monitoring information, whether it's just networking or whatever, is the Azure Monitor. So,
let's open up the search and do a search for monitor. I'll give that a click here. And I like to
describe Azure Monitor as a centralized clearinghouse for insights. Now, it defaults to the
activity log, which is an Azure control plane log file. For instance, let me draw your attention up here to insights. This is a really nice quick list to take a look at first. It shows you failed deployments, changes to role-based access control, errors, alerts, and one outage notification. Let's go to the ten alerts and just see what happens. It looks like some subnets have been attempted to be deleted but have failed. Uh-oh. Some route tables, some virtual
network operations that have failed. Well, if we give that a click, it gives us detail down
below. Due to my screen resolution, I can't expand this bottom pane anymore. I apologize
about that. But I'm going to scroll through here. In-use subnet cannot be deleted. So, it looks
like at this date and time, I attempted to delete some network objects that were already in use,
so that's a pretty standard error. You'll notice that we can customize the columns that the
activity log gives us, export that data, and if you click export, you can either export the data
to a storage account or stream them live to an Azure event hub if you want to, for instance,
see a running list of these events as they're fired. That's pretty cool. Let me scroll up to the
top. Now, this can be kind of overwhelming, this top piece here. You can create queries using
all of these drop-downs. And then you can actually save your query and then run it again by
choosing it from this drop-down. So, from a network perspective, you might choose a
particular subscription. You can actually go across all resource groups or selected ones, and
then for resources, it defaults to them all, but you can, in fact, choose particular objects. So,
for instance, a network security group would be something that's pertinent to networking.
And then, depending upon the resource that you choose, there may be resource types and
operations that you can look for. You can look at the time span at different levels. You can
look at different types of category and severity, event initiated by, and you can even do a free
text search. Looks like I didn't have any control plane operations on the object itself. Some
people call the activity log the Azure audit log because it allows you to see who did what
when in your Azure subscription.
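The same audit trail is queryable from the CLI; a small sketch (the --offset parameter is available in recent CLI versions, older ones use explicit start and end times):

  # Show who did what in the resource group over the last day
  az monitor activity-log list --resource-group azcourse --offset 1d \
      --query "[].{Time:eventTimestamp, Operation:operationName.value, Status:status.value, Caller:caller}" \
      --output table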

Demo: Investigate Metrics and Alerts

If you want to get more into the diagnostic stuff, that's where we get into the diagnostic log
section here. And I'll give that a click. In this, we choose our subscription, our resource
group, our resource type, and then we can filter on a specific object, and it takes you directly
to the events, assuming that you've enabled diagnostics on that object, okay? And notice that
we can get to diagnostic settings for this network security group. We can look at the storage
account properties. Log Analytics not configured. You probably saw in the earlier screen
when I was at activity log that there's this option called log search, and it's actually right here
under explore. And you'll see references to Log Analytics. Well, what we're dealing with
there, it's the next level up from Azure Monitor. I mean, Azure Monitor's really cool as far as
looking at control plane, basic diagnostic log data. You can plot metrics on a resource-by-
resource basis. This is good for monitoring, actually. Why don't we take a quick look at this?
We'll choose azcourse. And let me deselect select all, and we'll look at virtual machines and
we'll look at the web1 virtual machine. And it's going to load up some available metrics. The
metrics allows you to take all of the diagnostic settings that you configure at, say, the VM
level, and plot them on a chart. And in addition to posting different metrics to the chart, you
can change the chart type. You can pin it to the dashboard. That's another whole subject, the
ability to create dedicated dashboards for resource monitoring. And also, any alerts that
you've configured on those metrics show up under alerts. So, it reminds me of that childhood
game, if you've ever heard of it, the ankle bone is connected to the knee bone, the knee bone
is connected to the thigh bone. Similarly, your diagnostic settings pump data into diagnostic
logs. Your diagnostic logs can be searched directly and parsed from here. You can take the
metrics from the diagnostics logs and you can plot them here in a chart and put them on your
dashboard. And you can also leverage your metrics to display alerts, okay? And as you see
here, you can create an alert based on plotted metrics or activity log items. Let me give that
add activity log alert a click just to quickly show you what's going on here. I'll choose the
azcourse resource group we've been using, call this Test Alert. You choose a criteria for your
alert. I mean, understand that the activity log alert, you're looking for a particular log entry to
fire, and then you're associating some kind of action when that event fires. And the action
types here are SMS, which is a text message, email, or webhook. Webhook is useful if you
want to take the alert and put it into, say, a Slack channel or create a ticket in your ticketing
system and so forth.
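To tie metrics and alerts together on the command line, here's a hedged sketch; the VM name follows this course's environment, "Percentage CPU" is a standard platform metric, and the activity-log alert condition syntax may vary slightly by CLI version.

  # Pull recent platform metrics for a VM in table form
  vmId=$(az vm show --resource-group azcourse --name web1 --query id -o tsv)
  az monitor metrics list --resource "$vmId" --metric "Percentage CPU" --interval PT5M --output table

  # A simple activity-log alert that fires on administrative errors in the resource group
  az monitor activity-log alert create --resource-group azcourse --name TestAlert \
      --condition "category=Administrative and level=Error"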

Demo: Network Performance Monitor and Network Watcher

So, one more time to circle back, monitoring provides really nice baselining for log reading,
alert generation, and metrics plotting. The next level up from Azure Monitor are the
Operations Management Suite solutions. So, let's look up solutions here in the Azure portal.
And specifically the network-related solutions like Network Performance Monitor are part of
an offering called Insight and Analytics. In solutions, we click add, and this brings you to a
subset, a filtered list of the Azure marketplace for monitoring and management solutions.
And as you can see here, the Operations Management Suite is from Microsoft, and they
bundle different products into different solutions. As I said, Insight and Analytics is the one
that you're interested in if you want to take advantage of Network Performance Monitor. And
what this is, once you've installed Insight and Analytics, here it is right here in network
monitoring, it becomes part of an OMS workspace as it's called. And Network Performance
Monitor specifically allows you to visualize your subnet links at a deeper level, okay? There's
some ramp-up steps here. Check the Pluralsight library because my friend and colleague,
James Bannon, did a whole course on OMS. But essentially, after deploying the solution,
you'll then want to onboard any assets, data sources, virtual machines, storage account logs,
the Azure activity log. And for virtual machines, it's really easy to onboard them to an OMS workspace. You just refresh to bring up your VMs, as you can see, and these are all connected. You give each VM in turn a click and use connect, and you're good to go. Of course, you
could do the same thing with Azure CLI or PowerShell if you want to do it in bulk. After you
onboard the machine, you're ready to take advantage of whatever solution you're looking at.
And for Network Performance Monitor, let's give this widget a click, and it'll open up a larger
blade that shows us a dashboard of virtual networks. Of course, we've been dealing with one
so far. Subnet links, routing paths. And we can see top network health events. So, it looks like
I do have a problem, an unhealthy network link. Well, let's click this option, and we can drill
in at the subnet level and see what Azure has detected. And I can see right off the bat here
that we've got a 2001, which is a private IPv6 address with 100 percent loss. I immediately
understand what that is because I disabled IPv6 on those nodes. If we look at the IPv4 subnet,
it looks like we've had a little bit of latency. It's not a tremendous amount of latency, to be
sure, but it might be something for us to take a closer look at. In case you're wondering, OMS
does, in fact, have a free usage tier that limits the amount of data you can add to the solution
each day as well as your retention. So, you can get into OMS with no monetary commitment.
Finally, just as a quick refresher/reminder, I wanted to look at Watcher one more time.
Remember Network Watcher? We looked at this earlier in the course in terms of
troubleshooting network security groups with IP flow verify and also the security group view
where you can load up an NSG and see what your effective rules are and where they're
linked. To do the packet capture, we go to the packet capture section, and we go to add. And
the form here is pretty straightforward. I mean, you target a specific virtual machine. You
give the cap file a name and then where you want the capture to go, storage account or a file.
And once you've generated the capture file, you can ingest it with, say, Microsoft Message
Analyzer or Wireshark. You don't have to use a third-party tool like I showed you earlier in
this module, but it's nice to have that flexibility. This reminds you up here that you have to
install the Network Watcher agent on the target virtual machine. Remember earlier in the
course when you set up Network Watcher? You have to enable it per region, and then you
install the Network Watcher VM agent. That's actually a built-in extension that's accessible
through the API or through the web portal.
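If you'd rather drive that capture from the command line, here's a minimal Azure CLI sketch. The resource group, VM, and storage account names are placeholders for your own environment, and the extension shown is the Windows flavor; Linux VMs use NetworkWatcherAgentLinux.

# Hypothetical names; adjust to match your environment.
RG="azvnet-rg"
VM="web1"

# The target VM needs the Network Watcher agent extension first.
az vm extension set \
  --resource-group "$RG" \
  --vm-name "$VM" \
  --publisher Microsoft.Azure.NetworkWatcher \
  --name NetworkWatcherAgentWindows

# Start a packet capture and write the .cap file to a storage account.
az network watcher packet-capture create \
  --resource-group "$RG" \
  --vm "$VM" \
  --name "${VM}-capture" \
  --storage-account azvnetdiag

When the capture completes, download the .cap blob from that storage account and open it in Wireshark or Message Analyzer, just as you would with a capture started from the portal.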

Azure Network Security Guidance

Securing Azure virtual networks. Alright, so, we've come out of monitoring, being able to
visualize our network for performance tuning, for troubleshooting, for optimization, and
compliance. Most businesses nowadays are subject to one or more industry or regulatory compliance regimes, and network security certainly fits within that wheelhouse. I'm summarizing
some of Microsoft's own guidance regarding network security. I'll give you a spoiler alert
here that if you've been following this course sequentially, you know all these things already,
so congratulate yourself and pat yourself on the back on a job well done. We know by now
that within the secure container called the virtual network, we are to logically segment our
services, our VM tiers, into their own subnets. An administrator might ask, "Well, why? Why can't you just put everything on one subnet for simplicity?" Well, remember that the subnet-to-subnet traffic can be constrained using network security groups, and I'll go so far as to say the traffic can be optimized by use of network virtual appliances or Azure's own internal
load balancer. We know by now through user-defined routes and IP forwarding that you can
control routing within a virtual network. This is especially useful when you're taking
advantage of network virtual appliances. Forced tunneling, a technique closely associated with user-defined routes, lets you force outbound VM traffic through, for instance, a site-to-site VPN and your on-premises network, so that you can audit and inspect all Azure-originated traffic before it goes out onto the internet. You'll also want to make sure, and this is especially true in the IaaS scenario, that you're strictly controlling protocol access to your VMs. You can take advantage of the Azure external load balancer with network address translation rules, configuring your VMs to listen on nonstandard ports and the load balancer to receive calls on nonstandard front-end ports, just to add a layer of obscurity on top of your other controls. And then enable Azure Security Center, an intelligence engine that helps you prevent, detect, and respond to threats. I mean,
you're a human being. The colleagues that you work with are fallible human beings. It's going
to be almost impossible for you to be monitoring the security state of your deployments
24/7/365. So, I think it's a no-brainer to integrate Azure Security Center into your
subscription so that there's somebody, and by somebody I mean the Azure platform itself, always inspecting your resources, looking for potential problems. Speaking of
which, Azure Security Center. It has three main features, the first of which Microsoft calls
prevent, or it's a security prevention engine. It's going to run continuously and monitor the security
state of Azure resources. It's a policy-driven model, so you point Azure Security Center at the
Azure resources you want it to look at. Predominantly, we're talking about VMs, virtual
machines. The detection part of Azure Security Center means that it's automatically
collecting and analyzing security data. You're leveraging Microsoft's global threat
intelligence. Remember, Microsoft makes the technology, so who would know more than
them how their product is actually used in the world? They have an incredible data corpus.
They've seen it all, let's put it that way. And that intelligence is baked into Azure Security
Center. So, as good as your security team is, as good as your developers are, I daresay that
security center may be a little more intelligent and can give you heads up on potential
problems, and in a worst-case scenario, tell you when something bad has happened. That's
actually the third piece here, the response part of Azure Security Center. You have prioritized
security incidents and alerts. And what's especially cool is that Azure Security Center will not
only let you know of actual events, like maybe you've had some brute force RDP attacks on
one of your VMs, but Azure Security Center will then give you insights and suggestions on
what your next step should be to remediate the current alert and ward off future ones. Isn't
that awesome? Microsoft offers protection from distributed denial of service, or DDoS,
attacks against your public IP endpoints. There are two service tiers here, Basic and Standard. Basic gives you always-on monitoring and automatic mitigation for layer 3 and layer 4 attacks, in other words, IP protocol, TCP, and UDP. Layer 7 protection comes through integration with the Application Gateway web application firewall, which we've already talked about in this course. And the service, this protection, is globally deployed. Distributed denial of service attacks can be
devastating to a business. As you probably know, a DDoS attack is a coordinated attack
against a public IP address endpoint with the purpose of flooding it and taking the target
service offline. The standard tier of Azure DDoS protection gives you everything basic does
and also gives you protection policies that are tuned to specific virtual networks, insights via logging, alerting, and telemetry, and cost protection. What that means is that cost protection will provide resource credits for scale-out during a documented attack. Earlier in
this course, I covered Network Watcher a bit and I told you then, and I'll repeat now that if
you want detailed training on Network Watcher, look in the Pluralsight library because I built
an entire course just on this feature set. But in the name of what we're doing in this module
with networking and performance monitoring, I wanted to remind you that within Network
Watcher you have two log streams, network security group flow logs and diagnostic logs.
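You can also turn flow logging on from the command line. Here's a minimal Azure CLI sketch, assuming a network security group named backend-nsg and a storage account named azvnetdiag, both placeholder names; newer CLI builds offer az network watcher flow-log create with similar parameters.

# Hypothetical NSG and storage account names; substitute your own.
az network watcher flow-log configure \
  --resource-group azvnet-rg \
  --nsg backend-nsg \
  --storage-account azvnetdiag \
  --enabled true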
This screenshot shows you the NSG flow logs blade. And you can see here that for each
network security group, you assign it a storage account, and that's where the log files get
written in JavaScript Object Notation, JSON, format. In fact, that's what you see in the
foreground of this screenshot. I've opened up one of the JSON files in Visual Studio Code.
This is going to give you excellent insight into exactly how your NSGs are behaving and
what they're doing. Along that theme of network performance monitoring, I also want to give
a tip of the hat to the management solution Network Performance Monitor. This is one of the
solutions you can enable in your Log Analytics workspace. And specifically, it has three
value propositions for you in terms of your virtual network monitoring and performance
tuning. Number one, you get insight on a subnet-by-subnet level how communication is
happening, what kind of latencies are being experienced, dropped packets, this sort of stuff.
More recently, the Azure development team has allowed us to do network performance
monitoring on ExpressRoute circuits, which is really cool. And third, we've seen a trend over
the last year or so of other Azure services being able to take advantage of the strong isolation
offered by the virtual network so you can also monitor service endpoint status. This would be
internal services like integrated storage accounts, Azure SQL databases, but also external
cloud services like Dynamics CRM Online and Office 365.
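On that strong-isolation point, virtual network service endpoints for services like Storage and SQL Database are enabled per subnet. Here's a minimal Azure CLI sketch; azvnet is the course's demo network, and the resource group and subnet names are placeholders.

# Hypothetical resource group and subnet names; azvnet is the demo VNet.
az network vnet subnet update \
  --resource-group azvnet-rg \
  --vnet-name azvnet \
  --name backend \
  --service-endpoints Microsoft.Storage Microsoft.Sql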

Demo: Azure Security Center

In this demo, I want to show you Azure Security Center from a networking context. Go ahead
and locate security center in the Azure portal. And after you enable it for your subscription,
you're ready to set up a security policy. The overview page will start to light up in a very nice
way as you see here. And I use "nice" in a particular sense because when we talk about security, it's obviously a touchy subject, and it's extremely important. But if we
go to security policy, we will select our subscription and then configure how we want Azure
Security Center to behave. Collecting data from virtual machines will obviously need to go
on. You point Azure Security Center to a storage account where you want that data to land. And then
for the policy components, there's the prevention policy, which is where you scope what Azure Security Center will look at: system updates, OS vulnerabilities, endpoint protection, disk encryption, network security groups, firewalling, vulnerability assessment, storage encryption, SQL auditing, and SQL encryption. You can also specify security contact details, an email address and SMS number, for alerts. And then for pricing tier, you have your free tier and then there's
the standard pay tier that uses a per-node pricing. I'm using a free trial. You notice that the
main difference between the free tier and the standard tier is the difference between basic
detection and advanced detection. And basically, the advanced detection gives you a lot more insight and analytics into your security state. Check the exercise files for a link where I
show you the documentation where Microsoft explains that in greater detail. Now, let's come
back to the overview page because we're going to look only at the networking subsystem. I'll
give this a click here. We have three issues, it says. And I'm going to maximize this window
so we can see it. We see right up at the top a summary of networking recommendations. Next
generation firewall is not installed on three endpoints. Restrict access through the internet-
facing endpoint. Again, that's affecting three of three virtual machines. It gives us a list of our endpoints that have an internet-facing public IPv4 address. I do, in fact, have public IPs
reserved for each of my VMs. And you'll notice we have a warning for network security
group and a more severe alert for next generation firewall. I like the networking topology
because it shows you logically how your assets are organized. You'll recall that in our azvnet
virtual network, we have a backend subnet with our SQL server, a front end with two web
servers, and then we have a VPN gateway. To take a look at the alert state for a VM, let's go
to sql1 and see what Security Center has to say. It looks like the recommendation is to restrict access through the internet-
facing endpoint. We'll give that a click. And if we click through, we ultimately get to an entry
that tells us, "Your NSG has inbound rules that open access to any, which might enable attackers to access your resources. We recommend that you edit the below inbound rules to restrict access." Now, that's true because, in setting up this demo, I intentionally created a rule that allows traffic to or from anything, which is a security weak point. And it's nice that
from the advisory piece, we not only get instructions on how we can remediate, but in this
example, look here, edit inbound rules, and if we give that a click, it takes you directly to the
page that you need to be on to configure inbound security rules for that backend NSG. Isn't
that awesome? So, that is how Azure Security Center works from a policy standpoint and
looking at recommendations. Now, the recommendations will either involve first-party fixes,
which are making edits to your default Azure resources, but you'll also sometimes get a
recommendation for a partner solution. To show you that, let's go to prevention
recommendation. And we have a recommendation here to add a web application firewall to
two of our endpoints. And it looks like the endpoints are those reserved IP addresses for my
web front ends, web1-ip and web2-ip. In this case, it's recommending that we put an honest to
goodness web application firewall in front of these. It must be that Azure Security Center can
detect, based on looking at the run state and configuration of these VMs, that they are, in fact,
hosting a website. And if we go to create new, notice that it gives us not only the Microsoft
application gateway but also some selected third-party providers as well. The final piece I
want to show you in this demo besides the recommendation engine and the policies is the
alerting, security alerts, the detection piece. And I don't have anything useful in my
subscription, so I'm going to switch us over now to a Microsoft subscription that has much
better lab data. Okay, so, here we are in one of Microsoft's dev subscriptions. As you can see,
there's a lot more bells and whistles going on here because they have far more assets than I do
in my lab network. But let's come down under detection to security alerts. And I want to
show you two things here that are useful not only from the perspective of networking but just
Azure security in general. You'll notice that we have two types of object, the little shield icon,
which represents individual alerts, and then you have security incidents that are actually
rollups. That can be super powerful. But let's check down here. We see different incidents
like suspicious process executed, potential SQL injection. Here's a successful RDP brute
force attack. That's not cool. Let's give that a click. It shows us when the detection happened,
what resource it is, how many counts or how many times this event has been flagged. Let's
give this entry a click. And it gives us the IP address of the machine that was attacking us, the
process that did the attacking. And some of the attacks were successful, it says. In the last 30
minutes, there were 60 failed login attempts. 20 of the failed login attempts were aimed at
nonexistent users, and one of the failed login attempts was aimed at an existing user. Ouch. Down
here we have some remediation steps from Microsoft. So, that's an individual alert. But like I
said, the real power comes in when you want to correlate multiple individual attacks that may be coming from, say, the same actor. That's where you get into the real power of Security Center. So, let's choose security incident detected. And if we scroll down a bit, I just
want to show you this, alerts included in this incident. This particular incident, it looks like it
was all against vm1, and it took place over two days, and we have a blocked SQL injection,
failed RDP brute force attack, a successful RDP brute force attack. That's the one we looked
at a moment ago. Multiple domain accounts queried, suspicious service host process
executed, and network communication with a malicious machine. So, this gives us a heck of a
lot more context than what we saw a moment ago looking at this failed and successful RDP
brute force attack in isolation. Let's take a look at this network communication with a
malicious machine. And it tells us that this VM has likely communicated with a command
and control center for the Dridex malware at this particular public IP address. So, you can see
a story forming here, actually. It looks like first we tried to hit vm1 via some kind of SQL
injection. vm1 may, in fact, be a web server, for all we know. Then the bad actor
turned to RDP and actually got in, executed a process, queried domain accounts, and
connected to a command and control center. So, it looks like this VM definitely needs to be
remediated or decommissioned because it's likely part of a botnet. Super powerful. And how
could you get this kind of intelligence and context on your own? As I said, for me, Azure
Security Center is a no-brainer feature for all of my Azure IaaS deployments.
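Incidentally, if you prefer to apply that kind of NSG remediation from the command line rather than clicking through from the recommendation, here's a minimal Azure CLI sketch that tightens an overly permissive inbound rule. The resource group, NSG, and rule names and the sample address range are all placeholders.

# Hypothetical names and address range; substitute your own.
az network nsg rule update \
  --resource-group azvnet-rg \
  --nsg-name backend-nsg \
  --name allow-management-inbound \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 3389 \
  --access Allow \
  --protocol Tcp

That swaps an "any" source for a specific management address range, which is exactly the kind of change the recommendation is nudging you toward.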

Summary

Alright. Well, we're at the end of the road here. To summarize what we've learned in this last module, make sure that you're taking advantage of Azure insights, all of that global intelligence that's baked into tools like Azure Security Center and another framework called Azure Advisor. Advisor isn't in scope for this course, but I certainly cover it elsewhere, especially in my Managing Azure Infrastructure, Getting Started course, and it comes up in our security training as well, so check the library for more content on that. Also, we've seen in
this module how powerful open-source tools can be to add value to what you have by default
in Azure. And you know what? That brings us to the end of this training. Again, I want to
congratulate you for picking up some very high-impact skills that you, I hope, are using
already to deploy, manage, monitor, troubleshoot, and secure your Azure network resources.
If you want to contact me, questions, comments, concerns, whatever, you can reach me at
Twitter. My handle is @TechTrainerTim. I'm on the web at techtrainertim.com. And my
Pluralsight email address is [email protected]. I want to thank you one more time
for joining me on this journey, and I wish you all the best in your Azure endeavors. Take
good care.
