07 Advanced Networking.v1
Table of contents
1. Introduction
Kubernetes networking
Pod networking and CNI plugins
2. Pod networking
The Kubernetes Pod networking model
Network namespaces
Host network, Pod network, and Pod subnets
Introduction
Kubernetes networking
Networking in a Kubernetes cluster is an important but quite
complex topic.
The complexity stems from the fact that Kubernetes has
multiple networking subsystems that each address a different
problem.
To truly understand networking in Kubernetes, you have to
be able to conceptually separate these subsystems and
understand each one's purpose, scope, and concepts.
The following is an overview of the most important
networking subsystems in Kubernetes:
1. Pod networking: communication between Pods
2. Service abstraction
3. Ingress: communication to Services from outside the
cluster
4. Overlay networks & service meshes: custom abstract
networks on top of the default networks
Pod networking
The Pod networking subsystem is concerned with Pod-to-Pod
communication.
Let's consider an example:
A Pod acts like a host and it sees other Pods as other hosts,
no matter what node they're physically running on.
A Pod also sees the nodes themselves (including the node it's
physically running on) as hosts that it can reach through
their IP addresses.
The same applies to the nodes themselves which see Pods
(and other nodes) as hosts reachable through their IP
addresses.
Conceptually, you can think of the Pods and nodes of a
Kubernetes cluster as a flat network of equal hosts that can
communicate with each other through their IP addresses.
This is the Kubernetes Pod networking model.
You must be wondering how this is possible.
How can a Pod that physically runs on a node have its own
IP address, and act like a separate host?
Enter network namespaces.
Network namespaces
Network namespaces are a Linux concept that belongs to
the family of Linux namespaces.
In general, a Linux namespace abstracts and isolates a certain
type of system-wide resources so that they're only visible to
the processes inside the namespace.
A network namespace does this for networking resources, such as
network interfaces, IP addresses, routing tables, and firewall rules.
Consider a node with two Pods: each Pod
runs in its own network namespace, and each network
namespace has an eth0 network interface with an assigned
IP address.
The network interface in the network namespace of Pod 1
has IP address 200.200.0.2, and the network interface in the
network namespace of Pod 2 has IP address 200.200.0.3.
These are the IP addresses under which the Pods are reachable.
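You can reproduce this idea yourself with the ip tool. The following is a small sketch (not from the course material; the namespace name is made up) showing that a network namespace only sees its own network interfaces:
bash
$ sudo ip netns add pod1                  # create an isolated network namespace
$ sudo ip netns list                      # list all named network namespaces
$ sudo ip netns exec pod1 ip link list    # inside pod1, only a loopback interface exists
$ sudo ip netns delete pod1               # clean up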
Host network, Pod network, and Pod subnets
Now consider an example with two nodes and four Pods
running on them.
Consequently, the Pod network consists of four entities (the
four Pods), and the host network consists of two entities
(the two nodes).
As you can see, both of these networks use completely
different IP address ranges.
The host network has IP address range 10.0.0.0/16
The Pod network has IP address range 200.200.0.0/16
This is typical for a Kubernetes cluster.
The Pod network is further divided into Pod subnets.
In particular, each node in Kubernetes is assigned a Pod
subnet, and all Pods on a node must have an IP address from
the Pod subnet of that node.
The Container Network Interface (CNI)
Fig. CNI
Weave Net
Calico
Flannel
Multus CNI
Nuage CNI
DANM
Contiv CNI
Kube-OVN
TungstenFabric
A CNI plugin is an executable file that can be implemented in any
programming language.
This executable must be present locally on each node, and
the container runtime invokes it during certain lifecycle
events of a container.
The CNI specification defines a set of operations, such as ADD and DEL, that must be implemented by every CNI plugin.
In the following, let's walk through the generic procedure
(not the implementation) that occurs when a container
runtime invokes both the ADD and DEL operations of a
CNI plugin.
Since you're working with Kubernetes, the explanation is given in the
context of Kubernetes.
Fig. CNI
The container runtime passes a network configuration, called a NetConf, to the CNI plugin on stdin. A minimal NetConf looks like this:
{
  "cniVersion": "0.4.0",
  "name": "name-of-pod-network",
  "type": "name-of-cni-plugin"
}
The response
The CNI plugin returns its response on stdout, again in a structure defined by the CNI specification:
{
  "cniVersion": "0.4.0",
  "interfaces": [],
  "ips": []
}
IPAM plugins
IP address management is the task of allocating IP addresses,
keeping track of which IP addresses have been allocated, and
releasing IP addresses.
IPAM is a necessary task for every CNI plugin.
For example, when a CNI plugin executes the ADD operation, it
must choose a free IP address for the new Pod and record it as
allocated, and when it executes the DEL operation, it must release
that IP address again.
Rather than reimplementing this logic, a CNI plugin can delegate IP
address management to a dedicated IPAM plugin, such as host-local,
which is referenced in the ipam section of the NetConf:
{
  "cniVersion": "0.4.0",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "ipam": {
    "type": "host-local",
    "subnet": "200.200.0.0/24"
  }
}
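To get a feeling for this delegation, you can invoke the host-local plugin directly, just like a CNI plugin would. The following is a hedged sketch (not from the course material; the container ID and namespace path are made up, and the commands must run as root on a node that has the reference CNI plugins installed under /opt/cni/bin):
bash
$ export CNI_COMMAND=ADD CNI_CONTAINERID=example CNI_IFNAME=eth0
$ export CNI_NETNS=/var/run/netns/example CNI_PATH=/opt/cni/bin
$ /opt/cni/bin/host-local <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "ipam": {
    "type": "host-local",
    "subnet": "200.200.0.0/24"
  }
}
EOF
If everything goes well, host-local prints a response on stdout that contains an IP address allocated from 200.200.0.0/24.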
In this section, you learned what CNI plugins are and how
they're used by Kubernetes.
In the previous section, you learned what the generic
requirements of the Kubernetes Pod networking model are.
You should now be ready to get hands-on and implement
these Pod networking requirements with your own CNI
plugin!
The next section will be the start of the lab that guides you
through the design, implementation, installation, and
testing of your own CNI plugin.
Let's get started!
Chapter 4
Lab 1/4 — Designing the CNI plugin
You will base the design of your CNI plugin on the bridge
CNI plugin, so let's see how it works.
The bridge CNI plugin focuses on the connectivity
between Pods on the same node and it uses a special network
interface called a bridge to connect these Pods to each other.
A bridge, in the sense used here, is a virtual network
interface on Linux that allows connecting multiple other
network interfaces.
A message sent through a bridge is forwarded to all the
connected network interfaces.
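You can create such a bridge yourself with the ip tool. A minimal sketch (not from the course material; the name cni0 and the address match the examples that follow):
bash
$ sudo ip link add cni0 type bridge            # create a bridge interface named cni0
$ sudo ip address add 200.200.0.1/24 dev cni0  # give it an address from the Pod subnet
$ sudo ip link set cni0 up                     # enable it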
Let's see how the bridge CNI plugin uses a bridge to
connect the Pods on the same node with an example:
Fig. The bridge CNI plugin connects the Pods on a node through the cni0 bridge.
In the node's default network namespace, a route sends packets
destined to the local Pod subnet out through the cni0 bridge, and
all other packets are sent out through eth0.
With the bridge set up, the CNI plugin proceeds to set up
the network namespace of the new Pod.
Like every CNI plugin, the bridge CNI plugin must create
a network interface in the network namespace of the Pod —
in the example, it is called eth0 .
In the case of the bridge CNI plugin, this network
interface must additionally be plugged into the bridge.
However, the bridge is in the node's default network
namespace, and, in Linux, it is not possible to directly
connect network interfaces across network namespaces.
To work around this issue, the bridge plugin uses another
special virtual network interface called a veth pair.
A veth pair consists of two directly connected network
interfaces — whatever is sent out through one end of the
pair is immediately received by the other end.
Furthermore, and most importantly, the two ends of a veth
pair may be in different network namespaces.
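Here is a quick sketch of a veth pair in action (not from the course material; the interface and namespace names are made up):
bash
$ sudo ip netns add pod1                                    # a namespace standing in for a Pod
$ sudo ip link add veth-host type veth peer name veth-pod   # create the two connected ends
$ sudo ip link set veth-pod netns pod1                      # move one end into the namespace
$ sudo ip netns exec pod1 ip link list                      # veth-pod is now visible only inside pod1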
So, to connect the Pod to the bridge, the CNI plugin creates
a veth pair and moves one end into the Pod's network
namespace and the other into the node's default network
namespace.
The end in the node's default network namespace is given a
random name and connected to the bridge.
The end in the Pod network namespace is named eth0 and
assigned an IP address from the Pod subnet — in the
example, this is 200.200.0.2.
Now the Pod is effectively connected to the bridge.
If the Pod sends a packet through its eth0 interface, it is
immediately received by the opposite end of the veth pair,
and since the latter one is plugged into the bridge, the packet
reaches the bridge.
The same applies in the other direction — if the bridge
receives a packet, it reaches the Pod's eth0 interface
through the opposite end of the veth pair.
As the last step, the CNI plugin creates a route in the Pod's
network namespace.
This route is a default route (that is, a route matching all
packets) using the IP address of the bridge as the default
gateway.
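Continuing the sketch from above (hypothetical names; the addresses match the example), these steps would look roughly like this with the ip tool:
bash
$ sudo ip link set veth-host master cni0 up                        # connect the host-end to the bridge
$ sudo ip netns exec pod1 ip link set veth-pod name eth0           # rename the Pod-end to eth0
$ sudo ip netns exec pod1 ip address add 200.200.0.2/24 dev eth0   # IP address from the Pod subnet
$ sudo ip netns exec pod1 ip link set eth0 up
$ sudo ip netns exec pod1 ip route add default via 200.200.0.1     # bridge IP as the default gateway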
Tracing a packet
If the setup done by the bridge CNI plugin is correct, then
Pods on the same node must be able to send packets to each
other.
The following verifies this by playing through a scenario of a
Pod sending a packet to another Pod:
Fig. Tracing a packet from Pod 1 to Pod 2 on the same node.
When Pod 1 sends a packet to Pod 2 (200.200.0.3), the packet first
travels over Pod 1's default route into the node's default network
namespace, which looks up its route for the Pod subnet. That route
uses the cni0 network interface, and the next hop is the final
receiver (that is, it is a directly-attached route).
So, the default network namespace wraps the packet into a
network frame, addresses it with the MAC address of
200.200.0.3, and sends it out through the cni0 bridge
network interface.
Since a bridge sends packets to all connected network
interfaces, the packet is also sent to the eth0 interface of
Pod 2.
That means Pod 2 receives the packet from Pod 1.
So, the design of the bridge CNI plugin seems to be
correct — Pods on the same node can communicate with
each other.
Note how all the settings made by the CNI plugin play a role.
For example, the route created in the Pod network
namespace instructs the Pod to use the default network
namespace (via the cni0 bridge interface) as the default
gateway.
The default network namespace can act as this gateway thanks to
the route for the Pod subnet that the CNI plugin adds to the
default network namespace.
You now know that communication between Pods on the
same node works with this approach.
But what about Pods on different nodes?
Inter-node communication
The bridge CNI plugin actually doesn't address
communication between Pods on different nodes at all.
If you use the bridge CNI plugin on a cluster with more
than one node, then Pods on different nodes can't
communicate with each other — this violates the
Kubernetes Pod networking requirements.
That's a severe limitation of the bridge CNI plugin.
And you need to fix this in your own CNI plugin.
The fix is to add routes to the network: one route for each node,
with the node's Pod subnet IP address range in the Destination field
and the node's IP address in the Next hop field.
The effect is that any packet destined to a Pod on a given
node is forwarded to the physical network interface of that
node, which is in the default network namespace.
And, as mentioned, the default network namespace on that
node will then locally deliver the packet to the destination
Pod.
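Whether such a route lives on a gateway or directly on a node, it has the same basic form. As a plain Linux route (a sketch, using the addresses of the example further below), it would look like this:
bash
$ sudo ip route add 200.200.1.0/24 via 10.0.0.3   # Pod subnet of node 2 via node 2's IP address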
Installing these routes on the network's default gateway works well
for the cloud, where the nodes of a cloud network are often
not physically connected to each other but always
communicate through a gateway.
On a traditional network where the nodes are physically
connected to each other, the routes could also be installed
directly on each node — however, in that case, every node
must only have the routes to all other nodes, but not the
route to itself.
Since you will use your CNI plugin on a cluster on cloud
infrastructure where the nodes are not physically connected
to each other, you will use the default gateway approach.
To be sure that this really works, let's walk through an end-
to-end example that combines the setup of the bridge CNI
plugin with the approach to inter-node communication that
you just figured out above:
Fig. End-to-end example: two nodes, four Pods, and the inter-node routes installed on the default gateway.
In the above scenario, there are two nodes with four Pods:
Node 1 has IP address 10.0.0.2 and Pod subnet
200.200.0.0/24 and runs Pod 1 (200.200.0.2) and Pod 2
(200.200.0.3)
Node 2 has IP address 10.0.0.3 and Pod subnet
200.200.1.0/24 and runs Pod 3 (200.200.1.2) and Pod 4
(200.200.1.3)
The inter-node communication routes (as explained above)
are installed in the default gateway of the network with IP
address 10.0.0.1.
The example situation is that Pod 1 on node 1 wants to send
a packet to Pod 4 on node 2.
To do so, Pod 1 addresses the packet with 200.200.1.3 (the IP
address of Pod 4).
The packet travels over Pod 1's default route to node 1's default
network namespace, from there to the network's default gateway,
which forwards it to node 2 (the next hop of the route for
200.200.1.0/24), and node 2 finally delivers it locally to Pod 4
through its cni0 bridge.
So, the combination of the bridge CNI plugin setup and the
inter-node routes will work.
In the next section, you will create the Kubernetes cluster on
which you will use your CNI plugin.
Chapter 5
Lab 2/4 — Creating the cluster
In this part of the lab, you will create the Kubernetes cluster
on which you will use your CNI plugin.
You will create this cluster from scratch on Google Cloud
Platform (GCP) infrastructure with a tool called kubeadm.
1. Network planning
2. Setting up the GCP command-line tool
Network planning
Before you launch any infrastructure, you need to have an
idea about the network topology that you want to use for
your cluster.
As you have learnt in a previous section, there are two logical
networks in a Kubernetes cluster, the host network and the
Pod network:
The host network consists of all the physical nodes of
the cluster
The Pod network consists of all the Pods of the cluster
When you create a Kubernetes cluster, you need to define an
IP address range for both of these networks.
These IP address ranges are completely independent of each
other - they don't need to (and often shouldn't) overlap.
For the Pod network, you are completely free to choose any
IP address range you like.
The only requirement is that, given the CNI plugin that you
will build, it should not overlap with the host network IP
address range.
In the previous examples of this course, 200.200.0.0/16 was
usually used, and you will stick with this for your cluster, so:
Pod network: 200.200.0.0/16
The host network IP address range is determined by the GCP subnet
that the nodes will be attached to; in this course it is:
Host network: 10.0.0.0/16
These two parameters are very important and you will need
them when you create the cluster in this section.
What about the Pod subnets that are assigned to all the
nodes?
Do you have to define them too?
Fortunately not, because the Kubernetes controller manager
will do this for you.
When you create your cluster, you will only supply the Pod
network IP address range to the controller manager, and the
controller manager will allocate an appropriate Pod subnet to
each node.
Given that your Pod network IP address range is
200.200.0.0/16, the controller manager might assign
200.200.0.0/24 to node 1, 200.200.1.0/24 to node 2, and so
on.
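Once the cluster is up, you can inspect the Pod subnet that the controller manager assigned to each node, because it is recorded in the node's spec.podCIDR field. For example (a sketch, not from the course material):
bash
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR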
Setting up the GCP command-line tool
bash
$ gcloud init
bash
bash
bash
You can find out all available regions and zones with
the commands gcloud compute regions list and
gcloud compute zones list .
Finally, you can verify that you have set up gcloud correctly
by executing the following command:
bash
The next step is to launch the GCP resources that you will
need for your cluster.
bash
bash
bash
You can verify that all the instances have been correctly
created with the following command:
bash
bash
bash
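In case you want to double-check from the command line, listing the instances with the gcloud CLI looks like this (a generic example; the course's exact command may differ):
bash
$ gcloud compute instances list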
Installing Kubernetes
In this step, you will install Kubernetes on the GCP
infrastructure that you just created.
install-kubeadm.sh
#!/bin/bash
bash
bash
bash
The --pod-network-cidr flag sets the Pod network IP address range
(200.200.0.0/16) that you defined during network planning.
The --apiserver-cert-extra-sans flag sets the master
node's external IP address as an additional subject alternative
name (SAN) in the Kubernetes API server certificate.
This is needed because, when you access the API server
from your local machine, you will use the master node's
external IP address rather than its internal one, which is
included by default in the API server certificate.
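With the two flags discussed above, the kubeadm init command looks roughly like this (a sketch; MASTER_EXTERNAL_IP is a placeholder for the master node's external IP address):
bash
$ sudo kubeadm init \
    --pod-network-cidr=200.200.0.0/16 \
    --apiserver-cert-extra-sans="$MASTER_EXTERNAL_IP"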
To execute the above kubeadm init command on the
master node, you can use the following command:
bash
bash
To access the cluster from your local machine, copy the kubeconfig
file from the master node with the following command:
bash
bash
bash
Congratulations!
You just installed Kubernetes on your own GCP
infrastructure and you're able to access the cluster from your
local machine.
That's a great achievement!
One last thing that you should do is save the location of the
kubeconfig file in the KUBECONFIG environment variable:
bash
$ export KUBECONFIG=$(pwd)/my-kubeconfig
bash
bash
If you describe one of the nodes, you will find a list of
conditions that includes a Ready entry.
The Ready entry has a status of False with a reason of
KubeletNotReady, and the associated message mentions that the
CNI config is uninitialized.
Bingo!
Your cluster doesn't yet have a CNI plugin installed.
As mentioned, kubeadm does not install a CNI plugin by
default — it is up to you to do this.
What's happening now is that the kubelet tries to figure out
the installed CNI plugin, can't find any and reports the
KubeletNotReady status, which causes the entire node to be
NotReady .
Your cluster isn't functional until you install a CNI plugin.
But that's actually good news, because you wanted to create and
install your own CNI plugin anyway.
You will do that in the next section.
Chapter 6
Lab 3/4 — Implementing the CNI plugin
In this section, you will implement the CNI plugin that you
designed in the first part of the lab.
In the next section, you will then install this CNI plugin in
the Kubernetes cluster that you created in the previous
section.
For starters, let's recap how your CNI plugin works: it sets up a
cni0 bridge on each node and connects each Pod to that bridge
through a veth pair, while the routes for inter-node communication
are created once on the network's default gateway.
Given that this is how your CNI plugin works, the following
is a possible organisation of the tasks that your CNI plugin
has to do:
1. One-time setup on each node: create the cni0 bridge (if it doesn't exist yet) and the NAT rules for traffic that leaves the cluster.
2. Pod-specific setup: create a veth pair, connect it to the bridge, and configure the network interface, IP address, and default route in the Pod's network namespace.
After the CNI plugin has done its job, it must return a
response to the kubelet.
The structure of this response is prescribed by the CNI
specification.
The primary information contained in this response is the
name and the IP address of the network interface that the
CNI plugin created in the Pod network namespace.
The kubelet will use this information to finalise the creation
of the Pod — for example, it will write the returned IP
address into the pod.status.podIP field of the
corresponding Pod object in Kubernetes.
Preliminary notes
Before starting with the implementation, some preliminary
notes about the programming language, CNI version, and
input parameters are necessary.
Programming language
For the sake of this lab, you will write your CNI plugin in
Bash.
Bash is a convenient choice for implementing a simple CNI
plugin, because it makes it easy to use common networking
tools, such as ip and iptables .
However, you could also write this CNI plugin in any other
programming language, such as Python, Java, or Go.
CNI version
Your CNI plugin will implement version 0.3.1 of the CNI specification, which you will see reflected in the cniVersion field of the NetConf and of the plugin's responses.
Input parameters
The kubelet passes a NetConf to your CNI plugin on stdin. In addition to the standard fields, this NetConf contains three custom fields for the host network, the Pod network, and the node's Pod subnet:
{
  "cniVersion": "0.3.1",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "myHostNetwork": "10.0.0.0/16",
  "myPodNetwork": "200.200.0.0/16",
  "myPodSubnet": "200.200.1.0/24"
}
You will have to create such a NetConf for each node when
you install the CNI plugin later.
Now, let's start coding!
bash
$ mkdir my-cni-plugin
$ cd my-cni-plugin
bash
$ touch my-cni-plugin
$ chmod +x my-cni-plugin
my-cni-plugin
#!/bin/bash
case "$CNI_COMMAND" in
  ADD)
    ;;
  DEL)
    ;;
  VERSION)
    ;;
esac
my-cni-plugin
#!/bin/bash
exec 3>&1
exec &>>/var/log/my-cni-plugin.log
The first line above creates a new file descriptor 3 and directs
it to stdout .
The second line redirects the default file descriptors 1 and 2
from stdout and stderr, respectively, to a log file.
The effect of this is that anything written to the default file
descriptors 1 and 2 ends up in the log file, and only what's
written to file descriptor 3 goes to stdout.
Why is that necessary?
The reason is that the kubelet takes whatever the CNI plugin
writes to stdout as the plugin's response, so everything else,
such as log messages or the output of commands, must be kept
out of stdout.
my-cni-plugin
log() {
echo -e "$(date): $*"
}
This is a log function that you will use a few times throughout
the code to produce descriptive log output that helps you
understand what's going on.
Note that thanks to the above redirections of file descriptors
1 and 2, this function writes to the log file and not to
stdout .
my-cni-plugin
netconf=$(cat /dev/stdin)
host_network=$(jq -r ".myHostNetwork" <<<"$netconf")
pod_network=$(jq -r ".myPodNetwork" <<<"$netconf")
pod_subnet=$(jq -r ".myPodSubnet" <<<"$netconf")
The first line of this code reads the content of stdin into a
variable.
Since stdin must contain the network configuration
NetConf, this sets the netconf variable to the NetConf
passed by the kubelet to your CNI plugin.
The next lines extract certain values from this NetConf.
In particular, these are the values of the custom NetConf
fields that you defined above:
myHostNetwork : IP address range of the host network
myPodNetwork : IP address range of the Pod network
myPodSubnet : Pod subnet of the node that the CNI plugin runs on
Next, the plugin builds a NetConf for the IPAM plugin by adding an ipam section with the node's Pod subnet to the NetConf it received. The result looks like this:
{
  "cniVersion": "0.3.1",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "myHostNetwork": "10.0.0.0/16",
  "myPodNetwork": "200.200.0.0/16",
  "myPodSubnet": "200.200.1.0/24",
  "ipam": {
    "subnet": "200.200.1.0/24"
  }
}
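One way to construct this NetConf with jq (an assumption, not necessarily the author's exact code):
bash
# Append an ipam section with the node's Pod subnet to the received NetConf
ipam_netconf=$(jq '. + {ipam: {subnet: .myPodSubnet}}' <<<"$netconf")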
my-cni-plugin
This line uses the log function that you defined above to
write a log entry to the log file.
The log entry includes the values of all CNI environment
variables, as well as the NetConf — you will find this useful
when you later run the CNI plugin to see what's going on.
That's it for the preparatory work.
The next step is to turn to the actual CNI operations.
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    ipam_response=$(/opt/cni/bin/host-local <<<"$ipam_netconf")
The response from the host-local IPAM plugin looks something like this:
{
  "cniVersion": "0.3.1",
  "ips": [
    {
      "version": "4",
      "address": "200.200.0.2/24",
      "gateway": "200.200.0.1"
    }
  ],
  "dns": {}
}
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    pod_ip=$(jq -r '.ips[0].address' <<<"$ipam_response")      # e.g. 200.200.0.2/24
    bridge_ip=$(jq -r '.ips[0].gateway' <<<"$ipam_response")   # e.g. 200.200.0.1
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    {
      # Take an exclusive lock so that concurrent invocations of the CNI plugin
      # don't step on each other during the following setup steps
      flock 100
    } 100>/tmp/my-cni-plugin.lock
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    {
      flock 100
      # ...
This step checks whether the bridge already exists and
creates it if it doesn't.
The if condition checks whether a network interface named cni0
exists: if yes, it evaluates to false, and if not, to true.
When creating the bridge, the plugin also assigns it the gateway
address of the node's Pod subnet, which in the example is
200.200.0.1/24.
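A hedged sketch of this check-and-create step (not the book's exact code; the /24 prefix length is an assumption based on the Pod subnets used in this course):
bash
# Create the cni0 bridge only if it doesn't exist yet
if ! ip link show cni0 &>/dev/null; then
  ip link add cni0 type bridge
  ip address add "$bridge_ip/24" dev cni0   # gateway address of the Pod subnet, e.g. 200.200.0.1/24
  ip link set cni0 up
fi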
Next come the inter-node routes. As you figured out in the design
chapter, each node's Pod subnet must be routed to that node's IP
address, and on your cloud infrastructure these routes belong on
the network's default gateway.
While it's possible for your CNI plugin to create these routes
itself through the GCP API, this has two disadvantages: first, it
ties the CNI plugin to GCP (that is, you can't use it on different
types of infrastructure), and, second, it requires you to grant
your VM instances permissions to access the GCP API, which
complicates the deployment of the CNI plugin.
To circumvent these issues, you will simply create these
routes manually in the cluster.
Since these routes need to be created only once during the
entire lifetime of a cluster, this means that the CNI plugin
never has to bother with them, but the Pod network in your
cluster still works as expected.
To create the routes, execute the following command:
bash
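Creating one such route with the gcloud CLI could look like this (a hedged example; the route name, network, and zone are placeholders, and you need one route per node):
bash
$ gcloud compute routes create pod-subnet-worker-1 \
    --network=my-network \
    --destination-range=200.200.1.0/24 \
    --next-hop-instance=my-k8s-worker-1 \
    --next-hop-instance-zone=my-zone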
Once you have done that, the routes will stay there and will
remain valid as long as the Kubernetes cluster exists.
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    {
      flock 100
      # ...
      # Apply an iptables rule only if it doesn't exist yet: the sed command turns
      # the -A (append) flag into -C (check); if the check fails, the rule is appended
      ensure() {
        eval "$(sed 's/-A/-C/' <<<"$@")" &>/dev/null || eval "$@"
      }
When a packet from a Pod leaves the cluster, it has the Pod's IP
address in the source field, and the gateway will drop it, because
it doesn't recognise Pod IP addresses.
The above rules fix this by setting up NAT in the default network
namespace: for every packet that leaves the cluster, the Pod's IP
address in the source field is replaced with the node's IP address.
In that way, the gateway recognises and processes the packet, which
allows it to reach its destination.
The above code works by creating a custom Netfilter chain
named MY_CNI_MASQUERADE which performs NAT
(masquerading) on all packets to a destination outside the
cluster, which is expressed as all packets that are not destined
to the Pod network and not destined to the host network.
Without the above set of rules, your Pods can't reach
destinations outside the cluster.
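The rules themselves could look like the following hedged sketch, which uses the ensure helper and the variables extracted from the NetConf:
bash
# Create the custom chain (idempotently), return early for traffic that stays
# inside the cluster, and masquerade everything else leaving the Pod subnet
iptables -t nat -N MY_CNI_MASQUERADE &>/dev/null || true
ensure iptables -t nat -A MY_CNI_MASQUERADE -d "$pod_network" -j RETURN
ensure iptables -t nat -A MY_CNI_MASQUERADE -d "$host_network" -j RETURN
ensure iptables -t nat -A MY_CNI_MASQUERADE -j MASQUERADE
ensure iptables -t nat -A POSTROUTING -s "$pod_subnet" -j MY_CNI_MASQUERADE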
That's it for the one-time setup!
The next thing to do is the Pod-specific setup.
For the Pod-specific setup, you will work with the CNI environment
variables that the kubelet sets for each invocation:
CNI_CONTAINERID is the ID of the container (Pod) that is being created
CNI_NETNS is the path to the network namespace that has been created for the Pod
CNI_IFNAME is the name for the network interface to
create in the Pod network namespace
The first thing that you have to do is to create a named link
to the Pod network namespace.
This is necessary so that you can refer to this network
namespace with the ip command, which you will use for
the rest of the setup.
To do so, add the following code to your CNI plugin:
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    mkdir -p /var/run/netns/
    ln -sf "$CNI_NETNS" /var/run/netns/"$CNI_CONTAINERID"
bash
First of all, you have to create the raw veth pair that you will
later set up to connect the Pod network namespace to the
bridge.
One end of the veth pair, the Pod-end, is supposed to be in
the Pod network namespace and must be named according
to the CNI_IFNAME environment variable (which by default
is eth0 in Kubernetes).
The other end of the veth pair, the host-end, is supposed to
be in the default network namespace and must be connected
to the bridge.
You could either create the veth pair in the default network
namespace (and then move the Pod-end to the Pod network
namespace) or in the Pod network namespace (and then
move the host-end to the default network namespace).
You will use the second approach, because creating the veth
pair in the default network namespace may lead to name
clashes with the Pod-ends if multiple instances of the CNI
plugin run concurrently.
To do so, add the following code to your CNI plugin:
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    host_ifname=veth$RANDOM
    ip netns exec "$CNI_CONTAINERID" ip link add "$CNI_IFNAME" type veth peer name "$host_ifname"
The first line in the above code chooses a random name for
the host-end of the veth pair, and the second line creates the
veth pair in the Pod network namespace.
At this point, your configuration looks like this:
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    ip netns exec "$CNI_CONTAINERID" ip link set "$host_ifname" netns 1
    ip link set "$host_ifname" master cni0 up
The first line in the above code moves the host network
interface to the default network namespace, and the second
line connects it to the bridge and enables it.
At this point, your configuration looks like this:
The host network interface is now correctly set up, but its
counterpart in the Pod network namespace is still
untouched.
Let's set up that one next.
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    ip netns exec "$CNI_CONTAINERID" ip address add "$pod_ip" dev "$CNI_IFNAME"
    ip netns exec "$CNI_CONTAINERID" ip link set "$CNI_IFNAME" up
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    ip netns exec "$CNI_CONTAINERID" ip route add default via "$bridge_ip" dev "$CNI_IFNAME"
The last step of the ADD operation is to return a response to the kubelet. For the example Pod, this response looks like this:
{
  "cniVersion": "0.3.1",
  "ips": [
    {
      "version": "4",
      "address": "200.200.0.2/24",
      "gateway": "200.200.0.1",
      "interface": 0
    }
  ],
  "interfaces": [
    {
      "name": "eth0",
      "sandbox": "/proc/7182/ns/net"
    }
  ]
}
If this JSON object looks familiar to you, then you are right
— you already received a similar response from the host-
local IPAM plugin that contained the ips field with the
IP address for the Pod.
This is because an IPAM plugin is just a CNI plugin, and
every CNI plugin returns the same type of response.
This comes in handy for you now, because you can reuse the
response you got from host-local and just augment it
with the missing information.
The following code demonstrates that:
my-cni-plugin
case "$CNI_COMMAND" in
  ADD)
    # ...
    response=$(jq ". += {interfaces:[{name:\"$CNI_IFNAME\",sandbox:\"$CNI_NETNS\"}]} | \
      .ips[0] += {interface:0}" <<<"$ipam_response")
    log "Response:\n$response"
    echo "$response" >&3
    ;;
The first line in the above code constructs the final response
of your CNI plugin.
In particular, it takes the response from the host-local
IPAM plugin (that you saved in the ipam_response
variable), adds the interfaces field to it, and sets the interface
index in the first ips entry.
The last two lines write a log entry and send the response to file
descriptor 3, which, thanks to the redirections at the top of the
script, is what reaches the kubelet on stdout.
That's it for the ADD operation of your CNI plugin!
There are two remaining operations, DEL and VERSION,
that you have to implement next.
my-cni-plugin
case "$CNI_COMMAND" in
  DEL)
    # Let host-local release the Pod's IP address, then remove the
    # named link to the Pod's network namespace
    /opt/cni/bin/host-local <<<"$ipam_netconf"
    rm -f /var/run/netns/"$CNI_CONTAINERID"
    ;;
my-cni-plugin
case "$CNI_COMMAND" in
  VERSION)
    echo '{"cniVersion":"0.3.1","supportedVersions":["0.1.0","0.2.0","0.3.0","0.3.1"]}' >&3
    ;;
esac
Take a break after this hard work and get ready for the next
step.
In the next section, you will install and test your CNI
plugin.
Chapter 7
Lab 4/4 — Installing and testing the CNI plugin
In this section, you will install and test the CNI plugin that
you created in the previous section.
Remember that the cluster you created does not yet have a
CNI plugin installed, and therefore all its nodes are
NotReady .
After you install your CNI plugin, this will change and your
cluster will become fully functional.
Once your CNI plugin is installed, you will verify that it
works as expected.
Let's get started!
For the first step, installing the CNI plugin executable, you
can use the following command:
bash
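One way to do this with the gcloud CLI (a hedged example; not necessarily the course's exact commands, and to be repeated for every node):
bash
$ gcloud compute scp my-cni-plugin my-k8s-worker-1:
$ gcloud compute ssh my-k8s-worker-1 --command="sudo mv my-cni-plugin /opt/cni/bin/"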
Next, you need to create the NetConf for your CNI plugin on each node. On one of the nodes, it looks like this:
{
  "cniVersion": "0.3.1",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "myHostNetwork": "10.0.0.0/16",
  "myPodNetwork": "200.200.0.0/16",
  "myPodSubnet": "200.200.1.0/24"
}
Since the myPodSubnet field differs from node to node, you can template the NetConf with Jsonnet:
my-cni-plugin.conf.jsonnet
{
  "cniVersion": "0.3.1",
  "name": "my-pod-network",
  "type": "my-cni-plugin",
  "myHostNetwork": "10.0.0.0/16",
  "myPodNetwork": "200.200.0.0/16",
  "myPodSubnet": std.extVar("podSubnet")
}
bash
# macOS
$ brew install jsonnet
# Linux
$ sudo snap install jsonnet
bash
bash
Bingo!
The kubelets indeed detected your CNI plugin and all the
nodes are Ready now.
That means, so far everything seems to work.
However, to be sure, you need to test that your CNI plugin
really does what it is expected to do.
pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: ubuntu
    image: weibeld/ubuntu-networking
    command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: ubuntu
    image: weibeld/ubuntu-networking
    command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
spec:
  containers:
  - name: ubuntu
    image: weibeld/ubuntu-networking
    command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-4
spec:
  containers:
  - name: ubuntu
    image: weibeld/ubuntu-networking
    command: ["sleep", "infinity"]
bash
bash
bash
bash
Then, pick a Pod that runs on the same node as the Pod you
just logged into and note down its IP address.
Now, try to ping the IP address of this other Pod from the
Pod you're logged into:
bash
$ ping 200.200.1.5
PING 200.200.1.5 (200.200.1.5) 56(84) bytes of data.
64 bytes from 200.200.1.5: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 200.200.1.5: icmp_seq=2 ttl=64 time=0.079 ms
Next, pick a Pod that runs on a different node than the Pod you're
logged into and note down its IP address. Then try to ping the IP
address of this other Pod from the Pod you're logged into:
bash
$ ping 200.200.2.4
PING 200.200.2.4 (200.200.2.4) 56(84) bytes of data.
64 bytes from 200.200.2.4: icmp_seq=1 ttl=62 time=1.52 ms
64 bytes from 200.200.2.4: icmp_seq=2 ttl=62 time=0.333 ms
It works! Next, let's test the communication between Pods and nodes. First, list the nodes with their internal IP addresses:
bash
$ kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP
my-k8s-master     Ready    master   46m   v1.17.2   10.0.0.2
my-k8s-worker-1   Ready    <none>   46m   v1.17.2   10.0.0.3
my-k8s-worker-2   Ready    <none>   46m   v1.17.2   10.0.0.4
Let's first test if a Pod can communicate with the node it's
running on.
To do so, note down the IP address of the node that the Pod
you're currently logged into is running on.
Then, from the Pod you're logged into, try to ping the IP
address of that node:
bash
$ ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.067 ms
Next, note down the IP address of a node other than the one the
Pod is running on. Then, from the Pod you're logged into, try to
ping the IP address of that node:
bash
$ ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=63 time=1.24 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=63 time=0.252 ms
It works just the same as in the previous test with the Pod's own
node.
So Pods can communicate with processes running on other
nodes.
Now, log out of the Pod and log into one of the nodes:
bash
Then, pick a Pod that runs on that node, note down its IP address,
and try to ping it from the node you're logged into:
$ ping 200.200.1.4
Next, pick a Pod that runs on a different node and try to ping its
IP address from the node you're logged into:
bash
$ ping 200.200.2.4
PING 200.200.2.4 (200.200.2.4) 56(84) bytes of data.
64 bytes from 200.200.2.4: icmp_seq=1 ttl=63 time=1.52 ms
64 bytes from 200.200.2.4: icmp_seq=2 ttl=63 time=0.253 ms
Finally, let's verify that Pods can reach destinations outside the
cluster, which is what the masquerading rules of your CNI plugin
are for. Log into one of the Pods again and try to ping an external
IP address:
bash
$ ping 104.31.70.82
PING 104.31.70.82 (104.31.70.82) 56(84) bytes of data.
64 bytes from 104.31.70.82: icmp_seq=1 ttl=54 time=34.8 ms
64 bytes from 104.31.70.82: icmp_seq=2 ttl=54 time=34.7 ms
You can also ping a domain name instead of an IP address:
bash
$ ping learnk8s.io
PING learnk8s.io (104.31.70.82) 56(84) bytes of data.
64 bytes from 104.31.70.82 (104.31.70.82): icmp_seq=1 ttl=54 time=34.4 ms
64 bytes from 104.31.70.82 (104.31.70.82): icmp_seq=2 ttl=54 time=34.8 ms
Cleaning up
You can delete all the GCP resources that you created for
this lab with the following sequence of commands:
bash
After doing this, you can be sure that you're not using any
paid services anymore and will not have any bad surprises on
your next GCP bill.
To remove all the traces of the cluster on your local machine
too, you can remove the kubeconfig file that you used to
access the cluster:
bash
$ rm ~/my-kubeconfig
$ unset KUBECONFIG