Exercise 3.2: Configure A Local Docker Repo
While we could create an account and upload our application to hub.docker.com, thus sharing it with the world, we
will instead create a local repository and make it available to the nodes of our cluster.
1. We’ll need to complete a few steps with special permissions. For ease of use, we’ll become root using sudo.
student@ckad-1:~/app1$ cd
student@ckad-1:~$ sudo -i
2. Install the docker-compose software and utilities to work with the nginx server which will be deployed with the registry.
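The lab text does not show the exact commands for this step; on an Ubuntu node the installation might look like the following (the package list is an assumption and may vary by distribution):

```shell
root@ckad-1:~# apt-get update
root@ckad-1:~# apt-get install -y docker-compose apache2-utils
```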
3. Create a new directory for configuration information. We’ll be placing the repository in the root filesystem. A better
location may be chosen in a production environment.
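Based on the /localdocker paths used in the compose file below, the directory setup might look like:

```shell
root@ckad-1:~# mkdir -p /localdocker/data /localdocker/nginx
root@ckad-1:~# cd /localdocker
```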
4. Create a Docker Compose file. Inside is an entry for the nginx web server, which will handle outside traffic, and a registry entry listening on loopback port 5000 for running a local Docker registry.
root@ckad-1:/localdocker# vim docker-compose.yaml
docker-compose.yaml
nginx:
  image: "nginx:1.17"
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    - /localdocker/nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - /localdocker/data:/data
5. Use the docker-compose up command to create the containers declared in the YAML file from the previous step. This will
capture the terminal and run until you use ctrl-c to interrupt. There should be five registry_1 entries with info messages
about memory and which port is being listened on. Once we’re sure the Docker Compose file works we’ll convert it with a
Kubernetes tool. Let it run. You will use ctrl-c in a few steps.
root@ckad-1:/localdocker# docker-compose up
6. Test that you can access the repository. Open a second terminal to the master node. Use the curl command to test the
repository. It should return {}, but without a trailing carriage-return, so the braces will appear on the same line as the
following prompt. You should also see the GET request in the first, captured terminal, without error. Don’t forget the
trailing slash. You’ll see a “Moved Permanently” message if the path does not match exactly.
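A test along these lines should work, assuming the registry is listening on loopback port 5000 as declared in the compose file; note the {} lands on the same line as the next prompt:

```shell
student@ckad-1:~$ curl http://127.0.0.1:5000/v2/
{}student@ckad-1:~$
```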
7. Now that we know docker-compose format is working, ingest the file into Kubernetes using kompose. Use ctrl-c to
stop the previous docker-compose command.
^CGracefully stopping... (press Ctrl+C again to force)
Stopping localdocker_nginx_1 ... done
Stopping localdocker_registry_1 ... done
8. Download the kompose binary and make it executable. The command can run on a single line. Note that the option
following the dash is the letter o, as in output. Also note that the short URL contains a zero, not a capital O (oh). The
short URL expands to: https://github.com/kubernetes/kompose/releases/download/v1.1.0/kompose-linux-amd64
root@ckad-1:/localdocker# curl -L https://bit.ly/2tN0bEa -o kompose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 609 0 609 0 0 1963 0 --:--:-- --:--:-- --:--:-- 1970
100 45.3M 100 45.3M 0 0 16.3M 0 0:00:02 0:00:02 --:--:-- 25.9M
9. Move the binary to a directory in our $PATH. Then return to your non-root user.
root@ckad-1:/localdocker# mv ./kompose /usr/local/bin/kompose
root@ckad-1:/localdocker# exit
10. Create two persistent volumes in order to deploy a local registry for Kubernetes. 200Mi should be enough for each of
the volumes. Use the hostPath type for the volumes.
Persistent volumes and persistent volume claims are covered in more detail in an upcoming chapter, Deployment
Configuration.
student@ckad-1:~$ vim vol1.yaml
vol1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: task-pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    path: /tmp/data
  persistentVolumeReclaimPolicy: Retain
vol2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: registryvm
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    path: /tmp/nginx
  persistentVolumeReclaimPolicy: Retain
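With both files written, the volumes can be created from them; a minimal sketch using the filenames above:

```shell
student@ckad-1:~$ kubectl create -f vol1.yaml
student@ckad-1:~$ kubectl create -f vol2.yaml
```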
12. Verify both volumes have been created. They should show an Available status.
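One way to check, using the names from the files above (exact columns and output vary by kubectl version):

```shell
student@ckad-1:~$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      ...
registryvm       200Mi      RWO            Retain           Available   ...
task-pv-volume   200Mi      RWO            Retain           Available   ...
```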
13. Go to the configuration file directory for the local Docker registry.
student@ckad-1:~$ cd /localdocker/
student@ckad-1:/localdocker$ ls
data docker-compose.yaml nginx
14. Convert the Docker Compose file into a single YAML file for use with Kubernetes. Not all objects convert exactly from
Docker Compose; you may get errors about the mount syntax for the new volumes. They can be safely ignored.
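The conversion might be run as shown; the -f and -o options select the input and output files, and the localregistry.yaml name matches the file referenced in a later step:

```shell
student@ckad-1:/localdocker$ kompose convert -f docker-compose.yaml -o localregistry.yaml
```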
15. Review the file. You’ll find that multiple Kubernetes objects will have been created such as Services,
Persistent Volume Claims and Deployments using environmental parameters and volumes to configure the
container within.
16. View the cluster resources prior to deploying the registry. Only the cluster service and two available persistent volumes
should exist in the default namespace.
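A quick survey could look like:

```shell
student@ckad-1:~$ kubectl get services,deployments,pv,pvc
```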
17. To illustrate the fast-changing nature of Kubernetes, you will see that the API has changed for Deployments. With
each new release of Kubernetes you may want to plan on a YAML review. First determine the new object settings and
configuration, then compare and contrast them with your existing YAML files. Edit and test for the new configurations.
Another, more common, way to find YAML issues is to attempt to create an object using previous-version YAML in a new
version of Kubernetes and track down the errors. This is not suggested, but it is what often happens instead of the
following process.
To view the current cluster requirements use the --dry-run option for the kubectl create command to see what the
API now uses. We can compare the current values to our existing (previous version) YAML files. This will help determine
what to edit for the local registry in an upcoming step.
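A command along these lines generates the output below; the nginx image is only a placeholder, and older kubectl releases take --dry-run rather than --dry-run=client:

```shell
student@ckad-1:~$ kubectl create deployment drytry --image=nginx --dry-run=client -o yaml
```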
drytry
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: drytry
  name: drytry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drytry
  strategy: {}
  template:
<output_omitted>
18. From the previous command output, comparing line by line with the objects in the existing localregistry.yaml file,
we can see that the apiVersion of the Deployment object has changed, and that we need to add a selector line, a
matchLabels line, and a label line. The three lines to add are part of the ReplicaSet information, right after the replicas
line, with selector at the same indentation as replicas.
Following is a diff output, a common way to compare two files to each other, before and after an edit. Use the man page
to decode the output if you are not already familiar with the command.
41c41
< - apiVersion: apps/v1
---
> - apiVersion: extensions/v1beta1
53,55d52
< selector:
< matchLabels:
< io.kompose.service: nginx
93c90
< - apiVersion: apps/v1
---
> - apiVersion: extensions/v1beta1
105,107d101
< selector:
< matchLabels:
< io.kompose.service: registry
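Once localregistry.yaml has been edited to match, the registry can be deployed from it:

```shell
student@ckad-1:~$ kubectl create -f localregistry.yaml
```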
20. View the newly deployed resources. The persistent volumes should now show as Bound. Be aware that, due to the
manner in which volumes are bound, the registry claim may not be bound to the registry volume. Find the service IP
for the registry. It should be sharing port 5000. In the example below the IP address is 10.110.186.162; yours may be
different.
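Finding the service IP might look like this, assuming kompose named the service registry after the compose entry; your ClusterIP will differ:

```shell
student@ckad-1:~$ kubectl get svc registry
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
registry   ClusterIP   10.110.186.162   <none>        5000/TCP   2m
```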
21. Verify you get the same {} response using the Kubernetes deployed registry as we did when using docker-compose.
Note you must use the trailing slash after v2. Please also note that if the connection hangs it may be due to a firewall
issue. If running your nodes using GCE ensure your instances are using VPC setup and all ports are allowed. If using
AWS also make sure all ports are being allowed.
Edit the IP address to that of your registry service.
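Using the example address from the text:

```shell
student@ckad-1:~$ curl http://10.110.186.162:5000/v2/
{}student@ckad-1:~$
```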
22. Edit the Docker configuration file to allow insecure access to the registry. In a production environment steps should be
taken to create and use TLS authentication instead. Use the IP and port of the registry you verified in the previous step.
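The change adds an insecure-registries entry to /etc/docker/daemon.json; using the example registry address:

```shell
student@ckad-1:~$ sudo vim /etc/docker/daemon.json
{ "insecure-registries":["10.110.186.162:5000"] }
```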
23. Restart docker on the local system. It can take up to a minute for the restart to take place. Ensure the service is active.
It should report that the service recently became active as well.
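On a systemd-based node:

```shell
student@ckad-1:~$ sudo systemctl restart docker.service
student@ckad-1:~$ sudo systemctl status docker.service
```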
24. Download and tag a typical image from hub.docker.com. Tag the image using the IP and port of the registry. We will
also use the latest tag.
25. Push the newly tagged image to your local registry. If you receive an error about an HTTP request to an HTTPS client
check that you edited the /etc/docker/daemon.json file correctly and restarted the service.
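Steps 24 and 25 might look like the following; alpine and the tagtest name are assumptions, while the address is the example from the text:

```shell
student@ckad-1:~$ sudo docker pull alpine
student@ckad-1:~$ sudo docker tag alpine 10.110.186.162:5000/tagtest:latest
student@ckad-1:~$ sudo docker push 10.110.186.162:5000/tagtest:latest
```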
26. We will test to make sure we can also pull images from our local repository. Begin by removing the local cached images.
27. Pull the image from the local registry. It should report the download of a newer image.
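Assuming the image was tagged 10.110.186.162:5000/tagtest:latest in step 24 (the tagtest name is hypothetical), the removal and re-pull might be:

```shell
student@ckad-1:~$ sudo docker image remove alpine
student@ckad-1:~$ sudo docker image remove 10.110.186.162:5000/tagtest:latest
student@ckad-1:~$ sudo docker pull 10.110.186.162:5000/tagtest:latest
```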
28. Use docker tag to assign the simpleapp image and then push it to the local registry. The image and dependent images
should be pushed to the local repository.
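Using the example registry address, the tag and push might be:

```shell
student@ckad-1:~$ sudo docker tag simpleapp 10.110.186.162:5000/simpleapp
student@ckad-1:~$ sudo docker push 10.110.186.162:5000/simpleapp
```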
29. Configure the worker (second) node to use the local registry running on the master server. Connect to the worker node.
Edit the Docker daemon.json file with the same values as the master node and restart the service. Ensure it is active.
30. From the worker node, pull the recently pushed image from the registry running on the master node.
31. Return to the master node and deploy simpleapp in Kubernetes with several replicas. We will name the deployment
try1. Scale to have six replicas. With multiple replicas, the scheduler should run some containers on each node.
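One possible sequence, pulling the simpleapp image from the local registry at the example address:

```shell
student@ckad-1:~$ kubectl create deployment try1 --image=10.110.186.162:5000/simpleapp:latest
student@ckad-1:~$ kubectl scale deployment try1 --replicas=6
```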
32. View the running pods. You should see six replicas of simpleapp as well as two running the locally hosted image
repository.
33. On the second node use sudo docker ps to verify containers of simpleapp are running. The scheduler will usually
balance pod count across nodes. As the master already has several pods running the new pods may be on the worker.
34. Return to the master node. Save the try1 deployment as YAML.
student@ckad-1:~/app1$ cd ~/app1/
student@ckad-1:~/app1$ kubectl get deployment try1 -o yaml > simpleapp.yaml
35. Delete and recreate the try1 deployment using the YAML file. Verify the deployment is running with the expected six
replicas.
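A sketch of the delete-and-recreate cycle:

```shell
student@ckad-1:~/app1$ kubectl delete deployment try1
student@ckad-1:~/app1$ kubectl create -f simpleapp.yaml
student@ckad-1:~/app1$ kubectl get deployment try1
```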