Docker Focus Guide
Discover Docker

All Aboard

Docker brings the power of containers to your desktop. By Amy Pettle
In the past decade, containers have grown in popularity thanks to the promise of write once, run (almost) anywhere. A container lets you package your application, along with its environment, dependencies, and configuration, into an isolated unit that will then run on your local laptop, physical or virtual machines (VMs) in a data center, the cloud, or a hybrid environment, regardless of the host's operating system (OS) or architecture. You can deploy your app knowing that what worked on your laptop will work throughout the development life cycle.

Unlike VMs, a container doesn't require a base OS image. Instead, it shares the host's existing OS, saving precious system resources. You can deploy, patch, and scale apps more quickly in a container, allowing you to manage workloads in real time. Containers also improve app development thanks to their support of agile and DevOps practices, resulting in accelerated testing and production. Whether you are developing a microservice, working with CI/CD pipelines, or handling repetitive tasks, containers can make your job easier. If you're ready to get started with containers, then you're ready for Docker – a powerful toolset for container development. This focus guide takes you inside the Docker experience and shows you how to build safe and secure containers using Docker tools.

The Docker Story

Docker introduced its open source toolset in 2013, democratizing the container space by making building, sharing, testing, and running containers accessible to all skill levels thanks to its CLI-based workflow. It's no surprise that developers who containerize apps for a living turn to Docker to get the job done.

Beyond its toolset, Docker plays an important role in the open source container community. Docker, along with other container industry leaders, helped establish the Open Container Initiative (OCI) in 2015 to develop open industry standards around containers and container runtimes. Several Docker tools are open source, and Docker plays an active role in contributing tools to several open source projects, such as Moby, the Cloud Native Computing Foundation, and the OCI. As the software world changes, Docker continues to lead – for example, creating (along with BastionZero) the OpenPubkey project, which makes it easier for developers to cryptographically sign build artifacts, improving supply chain security.

Docker focuses on bridging the gap between development and production environments. It offers a range of tools that help accelerate, simplify, and secure the development process. These tools meet developers where they are, offering flexibility and scalability, as well as seamless integration with cloud-native services and existing workflows. These tools include:

- Docker Desktop – Docker's out-of-the-box containerization software for managing the container life cycle
- Docker Hub – a container registry with over 100,000 base images, including Trusted Content, which offers secure and verified images
- Docker Extensions – third-party tools for customizing your development environment
- Docker Scout – a supply chain tool that ensures quality, trust, and reliability in development from the start
- Docker Build Cloud – accelerates image builds by offloading resource-intensive build tasks, in the inner loop as well as in CI, to the cloud

With Docker, you can deploy software, run a lightweight Linux distro, host a server, or create a development environment. You can use Docker in Linux, Windows, macOS, data centers, or the cloud. Docker gives you the flexibility and freedom to innovate using your choice of tools, application stacks, and deployment. By adding convenience, automation, and centralized management, Docker's tools support you in developing applications that work almost anywhere.

Whether you are developing your own applications or converting legacy apps, this focus guide can help you get started using Docker for container development. We cover some of Docker's open source tools, such as Docker Engine and Docker Compose, and provide a glimpse of where Docker is putting its energies today. If you are working on an open source project without a path to monetization, be sure to read about the Docker-Sponsored Open Source program.

For complex apps with multiple separate containers, you'll need a container orchestration tool such as Kubernetes. We explain some Kubernetes basics and show you how to leverage Docker Desktop's single-node Kubernetes cluster to test your containers. Perhaps you have a legacy app that could benefit from the uniformity, scalability, and speed of a container environment? Check out how to use Docker Compose and other Docker tools to migrate your existing application.

Finally, Giri Sreenivas, Docker's Chief Product Officer, discusses Docker's local + cloud approach to container development. You'll also learn about some of Docker's newest tools: Docker Scout, Docker Build Cloud, and Docker Debug.

Containers are a powerful software development tool. An advanced toolset such as Docker offers security, consistency, and ease of use. Docker lets you focus on innovation by simplifying container creation. Read on for a detailed look at how to get started with Docker.
Test Lab

The built-in single-node Kubernetes cluster included with Docker Desktop is a handy tool for testing your container. By Artur Skura
Docker makes it easy for developers to deploy applications and ensure that the local development environment is reasonably close to the staging and production environments. Remember the times you found a great app, only to discover that the installation instructions extended over several pages and involved configuring the database, populating tables, and installing many packages and corresponding libraries – and then, because of a tiny glitch in the docs, things didn't work as expected? Thanks to Docker, these days are mostly over. You can develop your app and test it locally and then deploy it to the testing and production environments with few or no changes.

But Docker itself is not enough. Modern apps rarely consist of just one container. If you have more than one container, you need a way to organize them that is transparent to your users. In other words, you need a container orchestration platform. The unquestioned leader in orchestration is Kubernetes (K8s for short). It is easy to get started with Kubernetes if you have Docker Desktop installed. Simply go to Settings | Kubernetes and select Enable Kubernetes (Figure 1). Enabling Kubernetes from Docker Desktop gets you a one-node cluster suitable for local testing and experiments.

Figure 1: Enabling Kubernetes in Docker Desktop.
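Before moving on, you can confirm that kubectl is really talking to the new cluster. The context name docker-desktop is what current Docker Desktop releases create; if yours differs, pick the right one from the first command's output:

kubectl config get-contexts
kubectl config use-context docker-desktop
kubectl get nodes

The last command should list a single node, which is exactly what you want for local testing.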
Listing 1: my-app/nginx/nginx.conf

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;

    location / {
      proxy_pass http://webapp:5000;
    }
  }
}
Single-node clusters are quite useful for testing, and the single-node Kubernetes cluster bundled with Docker Desktop is pre-configured and ready to use. Along with this single-node cluster (called the "Kubernetes server" in the Docker docs), Docker Desktop also includes the kubectl command-line tool (called the "Kubernetes client"). Because kubectl is already set up to work with the cluster, you can start issuing commands straight away without additional configuration.

About Kubernetes

Many people say they would like to start learning Kubernetes, but they somehow get stuck at the first phase, that is, the installation. The problem is, administering a Kubernetes cluster and developing software that runs on it are two different tasks that are often handled by different teams. Installing, upgrading, and managing the cluster is usually done by the Ops or DevOps team, whereas the development is usually done by developers. Using a single-node cluster, developers can take the first steps with verifying that the containerized application works in Kubernetes before passing it on to Ops for further implementation.

Kubernetes is a complex beast, and it might be confusing to present its architecture in detail, so I'll focus on the essentials. For starters, it's enough to remember two concepts: nodes and pods. Nodes normally correspond to virtual (or, less often, bare metal) machines on which pods are running. Pods, on the other hand, correspond to sets of containers, and they run on nodes. One node can contain several pods. One pod cannot run on more than one node – instead, you create replicas of the pod using so-called deployments. A typical Kubernetes cluster has several nodes with one or more pods running on each node. When one node fails, the pods that had been running on it are considered lost and are scheduled by the cluster to run on other, healthy nodes. All this happens automatically when you use a deployment. Kubernetes is therefore a self-healing platform for running containerized apps. Even on the basis of this simplified description, you can understand why Kubernetes took the world by storm.
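Once you have a deployment running (such as the ones created later in this article), you can watch this self-healing behavior firsthand by deleting a pod and checking that the deployment immediately schedules a replacement. The pod name below is a placeholder; copy a real one from the kubectl get pods output:

kubectl get pods
kubectl delete pod <pod-name>
kubectl get pods

The replacement pod appears with a new name, without any manual intervention.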
A Multi-Container Example

A simple example will show how easy it is to test your Docker containers using Docker Desktop's single-node Kubernetes cluster. I will create a docker-compose.yml file that sets up a web application stack consisting of an Nginx reverse proxy, a Python Flask web application, and a Redis database. In the root directory of your project (let's call it my-app), create two folders: nginx and webapp.

Do I Need cri-dockerd?

Kubernetes was built around the Docker Engine container runtime, and the early versions of Kubernetes were fully compatible with Docker Engine. Docker Engine is a full-featured runtime with many features for supporting end users and developers – and even a system for integrating third-party extensions. In many cases, developers don't need all the functionality provided by Docker Engine and just want a much simpler runtime. Kubernetes implemented the Container Runtime Interface (CRI) in 2016 as a universal interface to support other container runtimes. Docker contributed the code for a simpler, more elementary container runtime called containerd, which is compatible with CRI. Containerd is now maintained by the Cloud Native Computing Foundation.

Containerd works for many common scenarios today, but some users still prefer the more robust Docker Engine, with its user interface features and support for extensions. Because Docker Engine was developed before CRI, it does not fit directly with the CRI interface. Kubernetes implemented a temporary adapter called dockershim to support Docker Engine on CRI-based Kubernetes installations. Dockershim was deprecated in Kubernetes 1.20 and removed in version 1.24.

A new adapter called cri-dockerd now provides "fully conformant compatibility between Docker Engine and the Kubernetes system." If you are running Kubernetes 1.24 or newer with containerd, you won't have to worry about compatibility. However, if you want to continue to use the Docker Engine runtime, you might have to replace dockershim with the cri-dockerd adapter. Cri-dockerd is included with Docker Desktop, so you won't need to worry about cri-dockerd to access Docker Desktop's single-node Kubernetes cluster.
The nginx directory will contain an Nginx configuration file, nginx.conf (Listing 1), along with a Dockerfile (Listing 2); the webapp directory will contain a Flask app, app.py (Listing 3), and the corresponding Dockerfile (Listing 4). In this way, I will build two images: one containing the Flask app and another with Nginx. The user will connect to an Nginx instance, which will communicate with the Flask app. The app, in turn, will use the Redis in-memory storage tool as a simple store for counting users' visits.

Listing 2: my-app/nginx/Dockerfile

FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf

Listing 3: my-app/webapp/app.py

from flask import Flask
import redis
import os

app = Flask(__name__)
redis_host = os.getenv("REDIS_HOST", "localhost")
r = redis.Redis(host=redis_host, port=6379, decode_responses=True)

@app.route('/')
def hello():
    count = r.incr('counter')
    return f'Hello, you have visited {count} times.'

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)

Listing 4: my-app/webapp/Dockerfile

FROM python:3.11
WORKDIR /app
COPY . .
RUN pip install Flask redis
CMD ["python", "app.py"]

The key part that glues everything together is the docker-compose.yml file (Listing 5). It defines three services and one volume. You might ask why there are three services when we only prepared two Dockerfiles. The two Dockerfiles produce custom images, whereas the Redis image is a standard image (redis:alpine) used without any modifications, so you don't even need to create a Dockerfile for it – you can instead use the ready-made image directly with the image directive.

Listing 5: my-app/docker-compose.yml

services:
  nginx:
    build: ./nginx
    ports:
      - "8080:80"
    depends_on:
      - webapp
  webapp:
    build: ./webapp
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
    volumes:
      - redis-data:/data

volumes:
  redis-data:

Docker Compose makes it easy to start and build the whole infrastructure:

docker compose up --build

This command will first build the custom Docker images (Figure 2) and then run the resulting containers (Figure 3) in the correct order: As you will notice in docker-compose.yml, the redis service, even though it is defined last, needs to run first because webapp depends on it, whereas nginx has to start last because it depends on webapp already running. The Flask app should then be available on localhost:8080 and working as intended (Figure 4).

Figure 4: The Flask app correctly counting user visits.
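If you want to double-check that all three services came up, two quick commands will do; the service names match those defined in Listing 5, and curl should return the visit-counter message from the Flask app:

docker compose ps
curl http://localhost:8080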
(By the way, you might notice that I am using docker compose, a newer command integrated with Docker Desktop, called Compose V2, instead of the legacy Compose V1 command docker-compose. Unless you have a good reason to use V1, you should always use V2, as V1 is no longer receiving updates.) As a side note, if you are planning on using the Docker Engine runtime with Kubernetes, see the sidebar entitled "Do I Need cri-dockerd?"

Migrating to Kubernetes

This brings me to the main topic: How do I migrate the preceding example to Kubernetes? Because the app is already containerized, the migration should be very easy. In real life, DevOps engineers need to deal with legacy apps written for a monolithic architecture. Although this architecture is not inherently bad, if you want to leverage the power of containers, it becomes an obstacle. Some organizations go to the other extreme and rewrite everything using microservices, which might not be the optimal choice in all cases. What you need are logical components that you can develop and deploy fairly independently and that will still work together well.

The Docker Compose file defined three services, so I need one Kubernetes Service file for each (Listings 6-8). In addition, I also need to create a deployment file for each (Listings 9-11) and a ConfigMap resource for Nginx (Listing 12). Deployments define, among other things, what containers and volumes should run and how many replicas should be created. A ConfigMap is another type of resource used for configuration.

Listing 6: my-k8s-app/nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: nginx

Listing 7: my-k8s-app/webapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  ports:
  - port: 5000
  selector:
    app: webapp

Listing 8: my-k8s-app/redis-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
  selector:
    app: redis

Kubernetes will not build images. You need to have them already built and pass them to deployments as arguments of the image directive. In the case of Redis, I am not modifying the official image and can use it directly. With Nginx, things get a bit more complex because I need to adapt the default configuration. Fortunately, I don't have to modify the image this time and can use another Kubernetes resource: a ConfigMap. The ConfigMap will allow me to manage the configuration independently of the actual Nginx container. This approach has many advantages. For example, I can reconfigure Nginx dynamically, and Kubernetes will propagate the changes to all the pods. Also, I can use the same Nginx container in different environments and only the ConfigMap will change. Versioning also works better with a ConfigMap than with a container.

Listing 9: my-k8s-app/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config

Listing 11: my-k8s-app/redis-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379

Listing 12: my-k8s-app/nginx-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }

    http {
      server {
        listen 80;

        location / {
          proxy_pass http://webapp:5000;
        }
      }
    }

In the nginx-deployment.yaml file (Listing 9), the ConfigMap is mounted into the Nginx container at the /etc/nginx/nginx.conf path. This replaces the default Nginx configuration file with the file defined in the ConfigMap. Using a ConfigMap would make little sense for the Flask app, so I need to build the image first, upload it to a container registry, and then pass its name as image in the deployment. In order to do so, I need to first create an account on Docker Hub or another container registry. Then go to the my-app/webapp directory used earlier with Docker Compose and build the image, for example, as flaskapp:

docker build -t flaskapp .

Now log in to your registry. For Docker Hub, I will use:

docker login --username=your-username

The next stage is tagging:
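For Docker Hub, the tag has to be prefixed with your username (shown here with the same your-username placeholder used above) so that the subsequent push lands in your namespace:

docker tag flaskapp your-username/flaskapp
docker push your-username/flaskapp

The webapp deployment (Listing 10) can then reference the pushed image via its image directive, for example image: your-username/flaskapp.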
At this point, I am ready to apply all the configurations. I will create the necessary infrastructure and run the containers (Listing 13).
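With all of the manifests saved in the my-k8s-app directory, one way to apply them in a single step – a reasonable stand-in for the sequence in Listing 13 – is to point kubectl apply at the whole directory:

cd my-k8s-app
kubectl apply -f .

kubectl reports each Service, Deployment, and ConfigMap as it is created.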
When you run the kubectl get pods command, you should see the pods running (Listing 14). You can also use the kubectl get command to get information on deployments, services, and ConfigMaps. In order to actually use the app, type the following command:

kubectl port-forward svc/nginx 8080:8080

And, as before, visit localhost:8080 – you should see the same Flask app as deployed earlier with Docker Compose, the only difference being that now it is running on Kubernetes. Congratulations – you have built and deployed your first application on the local one-node Kubernetes cluster!

Now, the magic lies in the fact that you can perform the same sequence of kubectl apply commands in the production environment, for example in EKS on AWS, and the app will run exactly as it should. In practice, there are a few differences, such as making the app available to the external world using a load balancer, storing secrets, storage options, and so on, but these are more related to the interaction of Kubernetes with the external environment – the app itself stays the same.
Total Package

Docker provides the open source tools and resources for compiling, building, and testing containerized applications. By Amy Pettle
By all accounts, Docker’s developer Docker as somehow competing with Docker Engine
tools have been an important player Kubernetes, when in fact, Mirantis Docker Engine [1], the open source
in the recent history of enterprise competes with Kubernetes, and the container engine for building contain-
IT. Although containers were not original Docker container platform is erized applications, forms the core of
new in 2013, the release of the open fully integrated into the Kubernetes Docker’s developer tool suite. Devel-
source Docker platform made con- environment. oped upstream in the Moby Project,
tainers more accessible to everyday Docker’s developers are still hard at Docker Engine uses a client-server
admins by simplifying development work, focused on development tools. architecture (Figure 1). Docker Engine
and deployment. Docker also helped Their philosophy is that, although it consists of the daemon (dockerd) and
shape the container landscape by is possible to build a single container APIs that specify which programs can
joining with other container indus- on the fly without the need for an talk to and instruct the daemon.
try leaders to establish the Open enhanced toolset, if you build con- Docker’s open source CLI client
Container Initiative (OCI), a Linux tainers for a living or are concerned (docker) interacts with Docker
Foundation project that maintains with security and consistency in your Engine, letting you manage your
open industry standards around container creations, you’ll need a ver- containers from the command line.
container formats and runtimes. satile set of development tools. It talks to the daemon, which does
Docker even contributed runc, the Docker remains heavily invested the work of building, running, and
original OCI container runtime, to in open source, and several Docker distributing containers. Written in
the foundation in 2015. tools are available under open source Go, the Docker CLI manages single
In recent years, container technol- licenses. This article takes a tour of containers, as well as container im-
ogy has proven to be reliable and the container-building tools in the ages, networks, and volumes. For
ubiquitous, resulting in attention Docker toolset and offers a glimpse at managing multiple containers, you
shifting to orchestration tools such where the company has been putting will need a separate tool, Docker
as Kubernetes. its energies. Compose (see below).
Docker the company actually sold Many of these open source tools For its container runtime, Docker
Lead Image © Alexander Bedrin, 123RF.com
its Enterprise division to Mirantis in have found their way into the Moby Engine relies on containerd, which
2019. Included in that sale was the Project, an upstream, community- manages the container life cycle and
Docker Swarm orchestration platform, governed project for container com- handles creating, stopping, and start-
which Mirantis has continued to mar- ponents, which Docker founded in ing your containers (see the contain-
ket under the Docker Swarm brand, 2017. Moby offers a toolkit of com- erd section for more information).
causing some confusion over what is ponents, a framework for assembling Docker Engine also integrates the
Docker and what isn’t. For instance, these components, and a community open source BuildKit component.
some viewers (incorrectly) visualize for testing and sharing ideas. BuildKit replaced and improved the
legacy builder in the release of Docker Once configured, you can use a single storage to container execution and
Engine 23.0. BuildKit offers improve- command to create and start all of supervision, to low-level storage
ments in performance, storage man- your configuration’s services. and network attachments.
agement, and extensibility. Unlike the Docker Compose provides commands Designed to be embedded in a larger
legacy builder, which performs builds for an application’s entire life cycle. system such as Docker Engine,
serially from Dockerfiles (the text file You can use Docker Compose in all containerd functions as an inter-
containing all the commands called environments: development, testing, nal runtime with minimal runtime
at the command line to assemble an staging, continuous integration (CI), requirements. The containerd dae-
image), BuildKit allows parallel build and even production. Because Docker mon is available for both Linux and
processing and introduces support Compose is an abstraction that aligns Windows, with most of its interac-
for handling more complex scenarios, with developers’ mental model of tions with these operating systems’
such as the ability to detect and skip their applications, it particularly sup- container feature sets being handled
execution of unused build stages. ports the inner loop of application by runc or operating system-specific
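As a sketch of what stage skipping means in practice, consider this hypothetical multi-stage Dockerfile (the stage names and Go toolchain are illustrative assumptions, not from the article): because the final image copies only from the build stage, BuildKit never executes the test stage unless you request it explicitly with docker build --target test .

# Build stage: compile the application
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Test stage: only built when requested as a target
FROM build AS test
RUN go test ./...

# Final stage: copies the binary from the build stage only,
# so BuildKit can skip the test stage entirely
FROM alpine:latest
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]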
Docker Engine binaries are available as DEB or RPM packages for CentOS, Debian, Fedora, Ubuntu, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Raspberry Pi OS. Docker also offers a static binary for non-supported Linux distributions, but it is not recommended for production environments.

Docker Compose

If you are looking to define and run multi-container Docker apps, you will need Docker Compose [2]. Available as a plugin for Docker Engine, Docker Compose lets you run a project with multiple containers from a single source. Docker Compose uses a YAML file to configure your application's services. Once configured, you can use a single command to create and start all of your configuration's services.

Docker Compose provides commands for an application's entire life cycle. You can use Docker Compose in all environments: development, testing, staging, continuous integration (CI), and even production. Because Docker Compose is an abstraction that aligns with developers' mental model of their applications, it particularly supports the inner loop of application development.

A recent addition to Docker Compose is Docker Compose Watch [3]. The Watch feature lets you define a list of rules that will cause an automatic service update when a file is modified. Watch monitors the files in the local directory, rebuilding the application container when necessary so the application stays up to date.
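A Watch rule set lives in the Compose file under a develop section and is started with docker compose watch; a minimal sketch (the service name and paths here are hypothetical) syncs source changes into the running container and rebuilds when the dependency list changes:

services:
  web:
    build: .
    develop:
      watch:
        # Copy changed source files into the running container
        - action: sync
          path: ./src
          target: /app/src
        # Rebuild the image when dependencies change
        - action: rebuild
          path: requirements.txt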
containerd

Docker donated containerd [4], an industry-standard container runtime, to the Cloud Native Computing Foundation (CNCF) in 2017. With an emphasis on simplicity and portability, containerd manages the complete container life cycle on a host – from image transfer and storage, to container execution and supervision, to low-level storage and network attachments.

Designed to be embedded in a larger system such as Docker Engine, containerd functions as an internal runtime with minimal runtime requirements. The containerd daemon is available for both Linux and Windows, with most of its interactions with these operating systems' container feature sets being handled by runc or operating system-specific libraries.

By design, containerd works with Docker and Kubernetes, as well as any container platform that wants to abstract syscalls or operating system-specific functionality. Whereas containerd implements the Kubernetes Container Runtime Interface (CRI) for those wanting to run a Kubernetes cluster, you can just as easily use containerd without Kubernetes. However, if you do plan to use Kubernetes, you will want to use containerd v1.5.11 or v1.6.4 (or newer) to address the recent removal of dockershim from Kubernetes.

You can find containerd as a binary package for the 64-bit Intel/AMD, PowerPC Little Endian, and RISC-V architectures, as well as for the S390x architecture.
Other Open Source Tools

As an active member of the container open source community, Docker collaborates on other projects and tools. Some of these projects include:

- Distribution [5] (formerly Docker Distribution): This toolkit, which Docker donated to the CNCF, lets you pack, ship, and deliver content. It contains the Open Source Registry implementation for storing and distributing container images using the OCI Distribution Specification, which defines an API protocol to facilitate and standardize content distribution. Docker Hub, GitHub Container Registry, and GitLab Container Registry all use this open source code as the basis for their container registries.
- DataKit [6]: Developed upstream in the Moby Project, this tool orchestrates apps using a Git-like data flow. It is used as the coordination layer for HyperKit, another Moby tool that functions as the hypervisor component of Docker for macOS and Windows, as well as for the DataKitCLI continuous integration system.
- Notary [7]: Donated to the CNCF by Docker, this client and server runs and interacts with trusted collections. It allows users and publishers to easily verify content, making the Internet more secure. Notary is used in Docker Content Trust [8], which relies on digital signatures for sending and receiving data from remote Docker registries (see the example after this list).
- runc [9]: This CLI tool spawns and runs containers on Linux in accordance with the OCI specification. This lightweight, portable container runtime can be used in production.
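As an example of how Notary surfaces in day-to-day use, Docker Content Trust is enabled with a single environment variable, after which pulls (and pushes) of unsigned image tags are refused:

export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest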
Docker-Sponsored Open Source Program

In addition to contributing open source tools, Docker offers a special program, Docker-Sponsored Open Source (DSOS) [10], for developers working on open source projects that don't have a path to commercialization. Started in 2020, DSOS is the successor to the Free Team subscription offered prior to 2021. Over 900 projects are currently part of the DSOS program.

Being a DSOS member means your projects receive a special badge (Figure 2) in Docker Hub, Docker's container image registry that lets open source contributors find, share, and use container images. The badge signifies that your project has been verified and vetted by Docker and is part of Docker's Trusted Content [11]. In addition, DSOS projects receive free automatic builds on Docker Hub. Program members, and users who pull images from your project namespace, also get unlimited pulls and egress, and DSOS members receive a free Docker Team subscription, which includes Docker Desktop. By the end of 2023, DSOS projects will also receive Docker Scout Team as part of their participation in the program. To find out if your project qualifies for DSOS, see the "DSOS Program Requirements" box.

Figure 2: A badge appears alongside images that are published by DSOS projects.

DSOS Program Requirements

To qualify for the DSOS program, your project must meet the following requirements:

- Shared in Docker Hub's public repositories with the source code publicly accessible
- Compliant with the Open Source Initiative definition of open source software [12]
- Active on Docker Hub, with updates pushed regularly in the past six months or dependencies updated regularly, even if your code is stable
- No pathway to commercialization, which means you cannot profit through services or charge for higher tiers, although you can accept donations
- Your Docker Hub repositories contain documentation that meets the recommended community standards

If you are interested in the DSOS program, submit an application online [13].

Docker Commercial Tools

If you are looking for an easy way to get started with Docker, you might be interested in Docker Desktop [14], Docker's out-of-the-box containerization software. Docker Desktop's simple interface doesn't require you to run Docker from the command line. In addition, it handles container setup and automatically applies kernel updates and security patches.

Docker Desktop combines Docker's open source components into an easy-to-use GUI (Figure 3) that lets you access important development options with one click. In addition to Docker's open source components, Docker Desktop includes useful tools such as Docker Extensions, which lets you connect the Docker environment to tools you are already using, plus access to Docker Hub for container images and templates, as well as the ability to run a single-node Kubernetes cluster.

Although Docker Desktop is a subscription-based offering, Docker does offer a free Docker Personal subscription [15], which is best suited to individual developers, students and educators, noncommercial open source projects, and small businesses with fewer than 250 employees and less than $10 million in revenue.

Additionally, Docker Scout [16] is a software supply chain product that provides context-aware recommendations to developers. The goal of these recommendations is to help the developer build applications that are reliable and secure from the start. You can use Docker Scout in Docker Desktop, Docker Hub, and the Docker Scout Dashboard.
In the Docker CLI, you can use the Docker Scout CLI plugin, which is available as a script or can be installed manually as a binary. As of early October 2023, Docker Scout is in general availability.

Conclusion

Docker plays an important role in the open source container ecosystem. Many core Docker components are open source, and initiatives such as the DSOS program help independent software developers make their projects available to the community. Docker developer tools also support projects such as Moby, CNCF, and OCI that encourage a free, standards-based approach to container development.

Figure 3: You can easily build, run, and share containers from the Docker Desktop dashboard.

Info

[1] Docker Engine: https://docs.docker.com/engine/
[2] Docker Compose: https://github.com/docker/compose
[3] Docker Compose Watch: https://docs.docker.com/compose/file-watch/
[4] containerd: https://github.com/containerd/containerd
[5] Distribution: https://github.com/distribution/distribution
[6] DataKit: https://github.com/moby/datakit
[7] Notary: https://github.com/notaryproject/notary
[8] Docker Content Trust: https://docs.docker.com/engine/security/trust/
[9] runc: https://github.com/opencontainers/runc
[10] DSOS: https://www.docker.com/community/open-source/application/
[11] Trusted Content: https://docs.docker.com/trusted-content/
[12] Open Source Initiative definition: https://opensource.org/osd/
[13] DSOS application: https://www.docker.com/community/open-source/application/
[14] Docker Desktop: https://www.docker.com/products/docker-desktop/
[15] Docker Personal: https://www.docker.com/pricing/
[16] Docker Scout: https://docs.docker.com/scout/

Author

Amy Pettle is an editor for ADMIN and Linux Magazine. She started out in tech publishing with C/C++ Users Journal over 20 years ago and has worked on various Linux New Media publications.
Makeover

Sooner or later, you'll want to convert your legacy application to a containerized environment. Docker offers the tools for a smooth and efficient transition. By Artur Skura
fact that you can start and terminate
In the past, we ran applications on many legacy applications based on containers quickly has several other
physical machines. We cared about older architectures are still in use. If consequences. You can deploy your
every system on our network, and we your legacy application is running applications much faster – and roll
even spent time discussing a proper fine in a legacy context, you might be them back equally quickly if you ex-
naming scheme for our servers wondering why you would want to go perience problems.
(RFC 1178 [1]). Then virtual ma- to the trouble to containerize.
chines came along, and the number The first advantage of containers is Getting Started
of servers we needed to manage in- the uniformity of environments: Con-
creased dramatically. We would spin tainerization ensures that the appli- To work with Docker, you need to
them up and shut them down as nec- cation runs consistently across mul- set up a development environment.
essary. Then containers took this idea tiple environments by packaging the First, you’ll need to install Docker
even further: It typically took several app and its dependencies together. itself. Installation steps vary, depend-
seconds or longer to start a virtual This means that the development ing on your operating system [2].
machine, but you could start and stop environment on the developer’s Once Docker is installed, open a
a container in almost no time. laptop is fundamentally the same terminal and execute the following
In essence, a container is a well-iso- as the testing and production envi- command to confirm Docker is cor-
lated process, sharing the same kernel ronments. This uniformity can lead rectly installed:
as all other processes on the same to significant savings with testing
machine. Although several container and troubleshooting future releases. docker ‑‑version
technologies exist, the most popular is Another benefit is that containers
Docker. Docker’s genius was to create can be horizontally scaled; in other Now that you have Docker installed,
a product that is so smooth and easy words, you can scale the application you’ll also need Docker Compose, a
to use that suddenly everybody started by increasing (and decreasing) the tool for defining and running multi-
using it. Docker managed to hide the number of containers. container Docker applications [3]. If
underlying complexity of spinning up Adding a container orchestration you have Docker Desktop installed,
a container and to make common op- tool like Kubernetes means you can you won’t need to install Docker
Lead Image © hanohiki, 123RF.com
erations as simple as possible. optimize resource allocation and Compose separately because the
better use the machines you have Compose plugin is already included.
Containerizing Legacy Apps – whether physical or virtual. The For a simple example to illustrate the
power of container orchestration fundamentals of Docker, consider a
Although most modern apps are cre- makes it easy to scale the app with Python application running Flask, a
ated with containerization in mind, the load. Because containers start web framework that operates on a
specific version of Python and relies on a few third-party packages. Listing 1 shows a snippet of a typical Python application using Flask.

Listing 1: Simple Flask App

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

To dockerize this application, you would write a Dockerfile – a script containing a sequence of instructions to build a Docker image. Each instruction in the Dockerfile generates a new layer in the resulting image, allowing for efficient caching and reusability. By constructing a Dockerfile, you essentially describe the environment your application needs to run optimally, irrespective of the host system.

Start by creating a file named Dockerfile (no file extension) in your project directory. The basic structure involves specifying a base image, setting environment variables, copying files, and defining the default command for the application. Listing 2 shows a simple Dockerfile for the application in Listing 1.

Listing 2: Dockerfile for Flask App (Listing 1)

# Use an official Python runtime as a base image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install the dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the current directory contents into the container
COPY . /app/

# Run app.py when the container launches
CMD ["python", "app.py"]

In this Dockerfile, I specify that I'm using Python 3.11, set the working directory in the container to /app, copy the required files, and install the necessary packages, as defined in a requirements.txt file. Finally, I specify that the application should start by running app.py.
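The requirements.txt file referenced by the Dockerfile is not shown in the article; for this minimal app it only needs a single entry (pinning a version is optional but good practice):

# requirements.txt - one dependency per line
Flask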
To build this Docker image, you would navigate to the directory containing the Dockerfile and execute the following commands to build and run the app:

docker build -t my-legacy-app .
docker run -p 5000:5000 my-legacy-app

With these steps, you have containerized the Flask application using Docker. The application now runs isolated from the host system, making it more portable and easier to deploy on any environment that supports Docker.

Networking in Docker

Networking is one of Docker's core features, enabling isolated containers to communicate amongst themselves and with external networks. The most straightforward networking scenario involves a single container that needs to be accessible from the host machine or the outside world. To support network connections, you'll need to expose ports. When running a container, the -p flag maps a host port to a container port:

docker run -d -p 8080:80 --name web-server nginx

In this case, NGINX is running inside the container on port 80. The -p 8080:80 option maps port 8080 on the host to port 80 on the container. Now, accessing http://localhost:8080 on the host machine directs traffic to the NGINX server running in the container.

For inter-container communication, Docker offers several options. The simplest approach involves using container names as DNS names, made possible by the default bridge network. First, run a database container:

docker run -d --name my-database mongo

Now, if you want to link a web application to this database, you can reference the database container by its name:

docker run -d --link my-database:db my-web-app

In this setup, my-web-app can connect to the MongoDB server by using db as the hostname. Although useful, the --link flag is considered legacy and is deprecated. A more flexible approach is to create custom bridge networks. A custom network facilitates automatic DNS resolution for container names, and it also allows for network isolation. For example, you can create a custom network as follows:

docker network create my-network

Now, run containers in this custom network with:

docker run -d --network=my-network --network-alias=db \
  --name my-database mongo
docker run -d --network=my-network my-web-app

(Note that --network-alias, like all docker run options, must come before the image name.) Here, my-web-app can still reach my-database using its name or a DNS alias, but now both containers are isolated in a custom network, offering more control and security.

For applications requiring more complex networking setups, you can use Docker Compose and define multiple services, networks, and even volumes in a single docker-compose.yml file (Listing 3). When you run docker-compose up, both services will be instantiated, linked, and isolated in a custom network, as defined. As you can see, effective networking in Docker involves understanding and combining these elements: port mapping for external access, inter-container communication via custom bridge networks, and orchestration (managed here by Docker Compose).
Volumes and Persistent Data

Managing persistent data within Docker involves understanding and leveraging volumes. Unlike a container, a volume exists independently and retains data even when a container is terminated. This characteristic is crucial for stateful applications, like databases, that require data to persist across container life cycles.

For simple use cases, you can create anonymous volumes at container runtime. When you run a container with an anonymous volume, Docker generates a random name for the volume. The following command starts a MongoDB container and attaches an anonymous volume to the /data/db directory, where MongoDB stores its data:

docker run -d --name my-mongodb -v /data/db mongo

Whereas anonymous volumes are suitable for quick tasks, named volumes provide more control and are easier to manage. If you use docker run and specify a named volume, Docker will auto-create it if needed. You can also create a named volume explicitly with:

docker volume create my-mongo-data

Now you can start the MongoDB container and explicitly attach this named volume:

docker run -d --name my-mongodb -v my-mongo-data:/data/db mongo

You can use named volumes to share data between containers. If you need to share data between the container and the host system, host volumes are the choice. This feature mounts a specific directory from the host into the container:

docker run -d --name my-mongodb -v /path/on/host:/data/db mongo

Here, /path/on/host corresponds to the host system directory you want to mount.

With Docker Compose, volume specification becomes streamlined and readable, especially when dealing with multi-container, stateful legacy applications. Listing 4 shows how you could define a service in docker-compose.yml with a named volume. When you run docker-compose up, it will instantiate the service with the specified volume.

Listing 3: Sample docker-compose.yml File

services:
  web:
    image: nginx
    networks:
      - my-network
  database:
    image: mongo
    networks:
      - my-network

networks:
  my-network:
    driver: bridge

Listing 4: Sample Named Volume

services:
  database:
    image: mongo
    volumes:
      - my-mongo-data:/data/db

volumes:
  my-mongo-data:

Data persistence isn't confined to just storing data; backups are equally vital. Use docker cp to copy files or directories between a container and the local filesystem. To back up data from a MongoDB container, enter:

docker cp my-mongodb:/data/db /path/on/host

Here, data from /data/db inside the my-mongodb container is copied to /path/on/host on the host system.
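Note that docker cp copies paths from inside a container; to back up a named volume itself, a common pattern – shown here as a sketch – is to mount the volume read-only into a throwaway container alongside a host directory and create an archive:

docker run --rm \
  -v my-mongo-data:/data:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/mongo-backup.tar.gz -C /data .

The archive name and host directory are arbitrary; only the my-mongo-data volume name comes from the examples above.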
Dockerizing a Legacy Web Server

Containerizing a legacy web server involves several phases: assessment, dependency analysis, containerization, and testing. For this example, I'll focus on how to containerize an Apache HTTP Server. The process generally involves creating a Dockerfile, bundling configuration files, and possibly incorporating existing databases.

The first step is to create a new directory to hold your Dockerfile and configuration files. This directory acts as the build context for the Docker image:

mkdir dockerized-apache
cd dockerized-apache

Start by creating a Dockerfile that specifies the base image and installation steps. Imagine you're using an Ubuntu-based image for compatibility with your legacy application (Listing 5).

Listing 5: A Sample Dockerfile for Apache Web Server

# Use an official Ubuntu as a parent image
FROM ubuntu:latest

# Install Apache HTTP Server
RUN apt-get update && apt-get install -y apache2

# Copy local configuration files into the container
COPY ./my-httpd.conf /etc/apache2/apache2.conf

# Expose port 80 for the web server
EXPOSE 80

# Start Apache when the container runs
CMD ["apachectl", "-D", "FOREGROUND"]

In Listing 5, the RUN instruction installs Apache, and the COPY instruction transfers your existing Apache configuration file (my-httpd.conf) into the image. The CMD instruction specifies that Apache should run in the foreground when the container starts. Place your existing Apache configuration file in the same directory as the Dockerfile. This configuration should be a working setup for your legacy web server. Build the Docker image from within the dockerized-apache directory:

docker build -t dockerized-apache .

Run a container from this image, mapping port 80 inside the container to port 8080 on the host:

docker run -d -p 8080:80 --name my-apache-container dockerized-apache

The legacy Apache server should now be accessible via http://localhost:8080.
If your legacy web server interacts with a database, you'll likely need to dockerize that component as well or ensure the web server can reach the existing database. For instance, if you have a MySQL database, you can run a MySQL container and link it to your Apache container. A tool like Docker Compose can simplify the orchestration of multi-container setups.

For debugging, you can view the logs using the following command:

docker logs my-apache-container
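If the logs alone don't explain a problem, you can also open an interactive shell inside the running container and inspect the Apache configuration in place (bash is available in the Ubuntu-based image built above):

docker exec -it my-apache-container /bin/bash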
This example containerized a legacy Apache HTTP Server, but you can use this general framework with other web servers and applications as well. The key is to identify all dependencies, configurations, and runtime parameters to ensure a seamless transition from a traditional setup to a containerized environment.

What About a Database?

Containers are by nature stateless, whereas data is inherently stateful. Therefore, databases require a more nuanced approach. In the past, running databases in containers was usually not recommended, but nowadays you can do it perfectly well – you just need to make sure the data is treated properly.

Or, you can decide not to containerize your databases at all. In this scenario, your containers connect to a dedicated database, such as an RDS instance managed by Amazon Web Services (AWS), which makes sense if your containers are running on AWS. Amazon then takes care of provisioning, replication, backup, and so on. This safe and clean solution lets you concentrate on other tasks while AWS is doing the chores. One common scenario is to use a containerized database in local development (so it's easy to spin up/tear down) but then swap it out for a managed database service in production. At the end of the day, your app is using the database's communication protocol, regardless of where and how the database is running.

Dockerizing an existing database like MySQL or Oracle Database is a nontrivial task that demands meticulous planning and execution. The procedure involves containerizing the database, managing persistent storage, transferring existing data, and ensuring security measures are in place. One area where containerizing a traditional SQL database is extremely useful is in development and testing (see the "Testing" box).

If you choose to dockerize a database, the first step is to choose a base image. For MySQL, you could use the official Docker image available on Docker Hub. You will also find official images for Oracle Database. The following is a basic example of how to launch a MySQL container:

docker run --name my-existing-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:8.0

In this example, the environment variable MYSQL_ROOT_PASSWORD is set to your desired root password. The -d flag runs the container in detached mode, meaning it runs in the background.

This quick setup works for new databases, but keep in mind that existing databases require you to import existing data. You can use a Docker volume to import a MySQL dump file into the container and then import it into the MySQL instance within the Docker container.
nario, your containers connect to a Docker container. network as the main container so that
you can connect to it directly from your
dedicated database, such as an RDS
pipeline.
instance managed by Amazon Web Configurations and
Services (AWS), which makes sense
if your containers are running on
Environment Variables Listing 6: PostgreSQL in a GitLab CI Pipeline
AWS. Amazon then takes care of pro- Legacy applications often rely on image: my‑image‑with‑docker‑and‑docker‑compose
visioning, replication, backup, and complex configurations and environ-
so on. This safe and clean solution ment variables. When dockerizing variables:
lets you concentrate on other tasks such applications, it’s crucial to man- POSTGRES_DB: my‑db
POSTGRES_USER: ${USER_NAME}
while AWS is doing the chores. One age these configurations efficiently,
POSTGRES_PASSWORD: ${USER_PASSWORD}
common scenario is to use a contain- without compromising security or
erized database in local development functionality. Docker provides mul-
services:
(so it’s easy to spin up/tear down), tiple ways to inject configurations and ‑ name: postgres:16
but then swap out for a managed da- environment variables into contain- alias: postgres
tabase service in production. At the ers: via Dockerfile instructions, com-
end of the day, your app is using the mand-line options, environment files, test:
database’s communication protocol, and Docker Compose. Each method script:
regardless of where and how the da- serves a particular use case. ‑ apt‑get update && apt‑get install ‑y
tabase is running. Dockerfile-based configurations postgresql‑client
Dockerizing an existing database are suitable for immutable settings ‑ PGPASSWORD=$POSTGRES_PASSWORD psql ‑h postgres ‑U
$POSTGRES_USER ‑d $POSTGRES_DB ‑c 'SELECT 1;'
like MySQL or Oracle Database is a that don’t change across different
W W W. A D M I N - M AGA Z I N E .CO M FO CU S O N D O C K E R 17
For more dynamic configurations, use Run it with environment variables Implement least privilege principles
the ‑e option with docker run to set sourced from a .env file or directly for containers. For instance, don’t run
environment variables: from the shell: containers as the root user. Specify a
non-root user in the Dockerfile:
docker run ‑e "DB_HOST=U DB_HOST=database.local U
database.local" ‑e "DB_PORT=U DB_PORT=3306 docker‑compose up FROM ubuntu:latest
3306" my‑application RUN useradd ‑ms /bin/bash myuser
Configuration files necessary for your USER myuser
While convenient for a few variables, application can be managed using
this approach becomes unwieldy Docker volumes. Place the configu- Containers also should not run
with a growing list. As a more scal- ration files on the host system and with least privileges. The following
able alternative, Docker allows you to mount them into the container: example:
specify an environment file:
docker run ‑v U docker run ‑‑cap‑drop=all ‑cap‑add=U
# .env file /path/to/config/on/host:U net_bind_service my‑application
DB_HOST=database.local /path/to/config/in/container U
DB_PORT=3306 my‑application starts a container with all capabilities
dropped and then adds back only the
Then, run the container as follows: In Docker Compose, use: net_bind_service capability required
to bind to ports lower than 1024.
docker run ‑‑env‑file .env U services: Use read-only mounts for sensi-
my‑application my‑application: tive files or directories to prevent
image: my‑application:latest tampering:
This method keeps configurations or- volumes:
ganized, is easy to manage with ver- ‑ /path/to/config/on/host:U docker run ‑v /my‑secure‑data:U
sion control systems, and separates /path/to/config/in/container /data:ro my‑application
the configurations from application
code. However, exercise caution; en- This approach provides a live link If the container needs to write to a
sure the .env files, especially those between host and container, enabling filesystem, consider using Docker
containing sensitive information, real-time configuration adjustments volumes and restricting read/write
are adequately secured and not without requiring container restarts. permissions appropriately.
accidentally committed to public It is also important to implement log-
repositories. Docker and Security ging and monitoring to detect abnor-
In multi-container setups orches- mal container behavior, such as un-
trated with Docker Compose, you can
Concerns expected outgoing traffic or resource
define environment variables in the Securing Docker containers requires utilization spikes.
docker‑compose.yml file: checking every layer: the host system,
the Docker daemon, images, contain- Dockerizing a Legacy
services: ers, and networking. Mistakes in any
my‑application: of these layers can expose your appli-
CRM System
image: my‑application:latest cation to a variety of threats, includ- To dockerize a legacy Customer Rela-
environment: ing unauthorized data access, denial tionship Management (CRM) system
DB_HOST: database.local of service, code execution attacks, effectively, you need to first under-
DB_PORT: 3306 and many others. stand its current architecture. The
Start by securing the host system run- hypothetical legacy CRM I’ll docker-
For variable data across different en- ning the Docker daemon. Limit access ize consists of an Apache web server,
It is also important to implement logging and monitoring to detect abnormal container behavior, such as unexpected outgoing traffic or resource utilization spikes.
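Docker's built-in tooling offers a quick first look before you wire up a full monitoring stack (a minimal sketch, reusing the hypothetical my-application container from the earlier examples):

# One-off snapshot of per-container CPU, memory, and network I/O
docker stats --no-stream

# Stream lifecycle events (start, stop, die, oom) for a single container
docker events --filter container=my-application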
Dockerizing a Legacy CRM System

To dockerize a legacy Customer Relationship Management (CRM) system effectively, you need to first understand its current architecture. The hypothetical legacy CRM I'll dockerize consists of an Apache web server, a PHP back end, and a MySQL database. The application currently runs on a single, aging physical server, handling functions from customer data storage to sales analytics.

The CRM's monolithic architecture means that the web server, PHP back end, and database are tightly integrated, all residing on the same machine. The web server listens on port 80 and communicates directly with the PHP back end, which in turn talks to the MySQL database on port 3306. Clients interact with the CRM through a web interface served by the Apache server.
The reasons for migrating the CRM to a container environment are as follows:

• Scalability: The system's monolithic nature makes it hard to scale individual components.
• Maintainability: Patching or updating one part of the application often requires taking the entire system offline.
• Deployment: New feature rollouts are time-consuming and prone to errors.
• Resource utilization: The aging hardware is underutilized but can't be decommissioned due to the monolithic architecture.

To containerize the CRM, you need to take the following steps.

Step 1: Initial Isolation of Components and Dependencies

Before you dive into dockerization, it is important to isolate the individual components of the legacy CRM system: the Apache web server, PHP back end, and MySQL database. This step will lay the groundwork for creating containerized versions of these components. However, the tightly integrated monolithic architecture presents challenges in isolation, specifically in ensuring that dependencies are correctly mapped and that no features break in the process.

Start by decoupling the Apache web server from the rest of the system. One approach is to create a reverse proxy that routes incoming HTTP requests to a separate machine or container where Apache is installed. You can achieve this using NGINX:

# nginx.conf
server {
  listen 80;
  location / {
    proxy_pass http://web:80;
  }
}

Next, move the PHP back end to its own environment. Use PHP-FPM to manage PHP processes separately. Update Apache's httpd.conf to route PHP requests to the PHP-FPM service:

# httpd.conf
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/path/to/app/$1

For the MySQL database, configure a new MySQL instance on a separate machine. Update the PHP back end to connect to this new database by altering the database connection string in the configuration:

<?php
$db = new PDO('mysql:host=db;dbname=your_db', 'user', 'password');
?>

During this isolation, you might find that some components have shared libraries or dependencies that are stored locally, such as PHP extensions or Apache modules. These should be identified and installed in the respective isolated environments. Missing out on these dependencies can cause runtime errors or functional issues.

While moving the MySQL database, ensuring data consistency can be a challenge. Use tools like mysqldump [8] for data migration and validate the consistency (Listing 7).

Listing 7: MySQL Data

# Data export from old MySQL
mysqldump -u username -p database_name > data-dump.sql

# Data import to new MySQL
mysql -u username -p new_database_name < data-dump.sql

If user sessions were previously managed by storing session data locally, you'll need to migrate this functionality to a distributed session management system like Redis.
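PHP's bundled session handling can often be redirected without any application code changes (a sketch assuming the phpredis extension is installed and a Redis container is reachable under the hostname redis):

; php.ini
session.save_handler = redis
session.save_path = "tcp://redis:6379"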
Step 2: Creating Dockerfiles and Basic Containers

Once components and dependencies are isolated, the next step is crafting Dockerfiles for each element: the Apache web server, PHP back end, and MySQL database. For Apache, the Dockerfile starts from a base Apache image and copies the necessary HTML and configuration files. A simplified Dockerfile appears in Listing 8.

Listing 8: Dockerfile for Apache

# Use an official Apache runtime as base image
FROM httpd:2.4

# Copy configuration and web files
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./html/ /usr/local/apache2/htdocs/

Build the Apache image with:

docker build -t my-apache-image .

Then, run the container:

docker run --name my-apache-container -d my-apache-image

For PHP, start with a base PHP image and then install needed extensions. Add your PHP code afterwards (Listing 9).

Listing 9: Dockerfile for PHP

# Use an official PHP runtime as base image
FROM php:8.2-fpm

# Install PHP extensions
RUN docker-php-ext-install pdo pdo_mysql

# Copy PHP files
COPY ./php/ /var/www/html/

Build and run the PHP image similarly to Apache:

docker build -t my-php-image .
docker run --name my-php-container -d my-php-image

MySQL Dockerfiles are less common because the official MySQL Docker images are configurable via environment variables (MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD, among others). However, if you have SQL scripts to run at startup, you can include them (Listing 10).

Listing 10: Dockerfile for MySQL Startup Scripts

# Use the official MySQL image
FROM mysql:8.0

# Initialize database schema
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/

Run the MySQL container with environment variables; the example below sets just the root password, but the same mechanism covers the database name, user, and password:

docker run --name my-mysql-container -e MYSQL_ROOT_PASSWORD=my-secret -d my-mysql-image
For production, you’ll need to opti- For these containers to function If you decide to run your app in Ku-
mize these Dockerfiles and runtime cohesively as your legacy CRM sys- bernetes, for example, you will not
commands with critical settings, such tem, appropriate networking and need to worry about Docker network-
as specifying non-root users to run data management strategies are ing, because Kubernetes has its own
services in containers, fine-tuning vital. networking plugins.
Apache and PHP settings for perfor- Containers should communicate over
mance, and enabling secure connec- a user-defined bridge network rather Step 4: Configuration Management
tions to MySQL. than Docker’s default bridge to enable and Environment Variables
hostname-based communication. Cre- Configuration management and envi-
Step 3: Networking and Data ate a user-defined network: ronment variables form the backbone
Management of a flexible, maintainable dockerized
At this point, the decoupled compo- docker network create crm‑network application. They allow you to pa-
nents – Apache, PHP, and MySQL – rametrize your containers so that the
each reside in a separate container. Then attach each container to this same image can be used in multiple
network (Listing 11). contexts, such as development, test-
Listing 10: Dockerfile for MySQL Startup Scripts Now, each container can reach an- ing, and production, without altera-
# Use the official MySQL image other using an alias or the service tion. These parameters might include
FROM mysql:8.0 name as the hostname. For instance, database credentials, API keys, or
in your PHP database connection feature flags.
# Initialize database schema
string, you can replace the hostname You can pass environment variables
COPY ./sql‑scripts/ /docker‑entrypoint‑initdb.d/
with my‑mysql‑container. to a container at runtime via the ‑e
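Applied to the connection string from Step 1, only the host changes (a sketch; the database name and credentials remain placeholders):

<?php
// 'my-mysql-container' resolves through Docker's embedded DNS on crm-network
$db = new PDO('mysql:host=my-mysql-container;dbname=your_db', 'user', 'password');
?>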
Data in Docker containers is ephemeral. For a database system, losing data upon container termination is unacceptable. You can use Docker volumes to make certain data persistent and manageable:

docker volume create mysql-data

Bind this volume to the MySQL container:

docker run --network crm-network --name my-mysql-container -e MYSQL_ROOT_PASSWORD=my-secret -v mysql-data:/var/lib/mysql -d my-mysql-image

For the Apache web server and PHP back end, you should map any writable directories (e.g., for logs or uploads) to Docker volumes.
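For instance, you might keep Apache's logs on a named volume so they survive container replacement (a sketch; the volume name is illustrative, and it assumes your my-httpd.conf writes logs to files in that directory rather than to stdout, which is the official image's default):

docker volume create apache-logs
docker run --network crm-network --name my-apache-container -v apache-logs:/usr/local/apache2/logs -d my-apache-image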
Docker Compose facilitates running multi-container applications. Create a docker-compose.yml file as shown in Listing 12.

Listing 12: docker-compose.yml Network Setup

services:
  web:
    image: my-apache-image
    networks:
      - crm-network

  php:
    image: my-php-image
    networks:
      - crm-network

  db:
    image: my-mysql-image
    environment:
      MYSQL_ROOT_PASSWORD: my-secret
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - crm-network

volumes:
  mysql-data:

networks:
  crm-network:
    driver: bridge

Execute docker-compose up, and all your services will start on the defined network with the appropriate volumes for data persistence. Note that user-defined bridge networks incur a small overhead, although it is negligible for most applications.
If you decide to run your app in Kubernetes, for example, you will not need to worry about Docker networking, because Kubernetes has its own networking plugins.

Step 4: Configuration Management and Environment Variables

Configuration management and environment variables form the backbone of a flexible, maintainable dockerized application. They allow you to parametrize your containers so that the same image can be used in multiple contexts, such as development, testing, and production, without alteration. These parameters might include database credentials, API keys, or feature flags.

You can pass environment variables to a container at runtime via the -e flag:

docker run --name my-php-container -e API_KEY=my-api-key -d my-php-image

In your PHP code, the API_KEY variable can be accessed as $_ENV['API_KEY'] or getenv('API_KEY'). For a more comprehensive approach, Docker Compose allows you to specify environment variables for each service in the docker-compose.yml file:

services:
  db:
    image: my-mysql-image
    environment:
      MYSQL_ROOT_PASSWORD: my-secret

Alternatively, you can use a .env file in the same directory as your docker-compose.yml. Place your environment variables in the .env file:

API_KEY=my-api-key
MYSQL_ROOT_PASSWORD=my-secret

Reference these in docker-compose.yml:

services:
  db:
    image: my-mysql-image
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
Running docker-compose up will load these environment variables automatically. Never commit sensitive information like passwords or API keys in your Dockerfiles or code.

Configuration files for Apache, PHP, or MySQL should never be hard-coded into the image. Instead, mount them as volumes at runtime. If you're using Docker Compose, you can specify a volume using the volumes directive:

services:
  web:
    image: my-apache-image
    volumes:
      - ./my-httpd.conf:/usr/local/apache2/conf/httpd.conf

Some configurations might differ between environments (e.g., development and production). Use templates for your configuration files where variables can be replaced at runtime by environment variables. Tools like envsubst can assist in this substitution before the service starts:

envsubst < my-httpd-template.conf > /usr/local/apache2/conf/httpd.conf
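The template itself is an ordinary configuration file containing shell-style placeholders (a minimal sketch; the variable names are illustrative):

# my-httpd-template.conf
Listen ${HTTP_PORT}
ServerName ${SERVER_NAME}

With HTTP_PORT and SERVER_NAME exported in the container's environment, the envsubst call above produces a ready-to-use httpd.conf at startup.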
Strive for immutable configurations and idempotent operations to ensure your system's consistency. Once a container is running, changing its configuration should not require manual intervention. If a change is needed, deploy a new container with the updated configuration.

While this approach is flexible, it introduces complexity into the system, requiring well-documented procedures for setting environment variables and mounting configurations. Remember that incorrect handling of secrets and environment variables can lead to security vulnerabilities.

Step 5: Testing and Validation

Testing and validation are nonnegotiable in the transition from a legacy system to a dockerized architecture. Ignoring or cutting corners in this phase jeopardizes the integrity of the system, often culminating in performance bottlenecks, functional inconsistencies, or security vulnerabilities. The CRM system, being business-critical, demands meticulous validation.

The most basic level of validation is functional testing to ensure feature parity with the legacy system. Automated tools like Selenium [9] for web UI testing or Postman [10] for API testing offer this capability. Running a test suite against both the legacy and dockerized environments verifies consistent behavior. For example, to run Selenium tests in a Docker container, you would type a command similar to the following:

docker run --net=host selenium/standalone-chrome python my_test_script.py

Once functionality is confirmed, performance metrics such as latency, throughput, and resource utilization must be gauged using tools like Apache JMeter, Gatling, or custom scripts. You should also simulate extreme conditions to validate the system's reliability under strain.

Static application security testing (SAST) and dynamic application security testing (DAST) should also be employed. Tools like OWASP ZAP can be dockerized and incorporated into the testing pipeline for dynamic testing. While testing, activate monitoring solutions like Prometheus and Grafana or the ELK stack for real-time metrics and logs. These tools will identify potential bottlenecks or security vulnerabilities dynamically.

Despite rigorous testing, unforeseen issues might surface post-deployment. Therefore, formulate a rollback strategy beforehand. Container orchestration systems, such as Kubernetes and Swarm, provide the ability to easily roll out changes and roll back when issues occur.
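In Kubernetes, for example, reverting a bad rollout of the hypothetical CRM web deployment is a single command (a sketch; the deployment name is illustrative):

# Revert the deployment to its previous revision
kubectl rollout undo deployment/crm-web

# Or return to a specific recorded revision
kubectl rollout undo deployment/crm-web --to-revision=2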
Step 6: Deployment

Deployment into a production environment is the final phase of dockerizing a legacy CRM system. The delivery method will depend on the application and your role as a developer. Many containerized applications reside today in application repositories, including Docker's own Docker Hub container image library. If you are deploying the application within your own infrastructure, you will likely opt for a container orchestration solution already in use, such as Kubernetes.

Conclusion

Containerization offers many technical benefits, including uniformity, security, and better scaling. In addition, containerizing your apps can save you money with more efficient testing and rollout, and a container strategy can minimize the need for continual customization to adapt to new hardware and software settings. Docker Compose and other tools in the Docker toolset provide a safe, efficient, and versatile approach for migrating your existing applications to a container environment.

Info

[1] RFC 1178: Choosing a Name for your Computer: [https://datatracker.ietf.org/doc/html/rfc1178]
[2] Install Docker Engine: [https://docs.docker.com/engine/install/]
[3] Docker Compose: [https://docs.docker.com/compose/]
[4] GitLab: Using PostgreSQL: [https://docs.gitlab.com/ee/ci/services/postgres.html]
[5] GitLab: Using MySQL: [https://docs.gitlab.com/ee/ci/services/mysql.html]
[6] Docker Scout: [https://docs.docker.com/scout/]
[7] Clair: [https://github.com/quay/clair]
[8] mysqldump: [https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html]
[9] Selenium: [https://www.selenium.dev/]
[10] Postman: [https://www.postman.com/automated-testing/]

Author

Artur Skura is a senior DevOps engineer currently working for a leading pharmaceutical company based in Switzerland. Together with a team of experienced engineers, he builds and maintains cloud infrastructure for large data science and machine learning operations. In his free time, he composes synth folk music, combining the vibrant sound of the '80s with folk themes.
Local + Cloud
Docker’s chief product officer discusses its local + cloud
strategy to container development. By Amy Pettle
At DockerCon 2023, Docker announced a local + cloud strategy for container development. We talk to Giri Sreenivas, Chief Product Officer at Docker, about Docker's approach to container development. Giri highlights three new local + cloud products and hints at where this strategy will lead Docker in the future.

Giri Sreenivas

Giri Sreenivas is the Chief Product Officer at Docker where he leads product, design, growth, and data. He has over two decades of experience in engineering and product leadership roles across consumer and enterprise businesses. He is also a two-time venture-backed startup founder with an acquisition and subsequent IPO (Rapid7). Giri is a graduate of the schools of computer science at Stanford University (BS) and Carnegie Mellon University (MS) and currently resides in the Greater Seattle area.

The container industry has evolved dramatically over the past 10 years since Docker first launched. How have these changes directed Docker's product development?

Docker's priority has always been the developer. Our mission is to empower developers to focus on building the future. While the container landscape has seen some significant changes in the past decade, this evolution hasn't fundamentally changed our core objective. Instead, it has served as a north star, shaping how we address the evolving needs of the developer community.

Emerging technologies, evolving regulations, and shifting market dynamics all influence the daily lives of developers. We actively listen to how these all play out in the market, which helps us anticipate developers' needs, so we can continue delivering solutions that make their lives easier.

There are some trends I want to touch on that have really helped Docker focus our product development efforts. I think we're all aware of the major architecture shifts from monolithic applications to microservices as well as the rise of Kubernetes and container orchestration. But what I want to highlight are other areas that are very close to Docker: cloud-native development, security and the supply chain, and of course, the developer experience.

We want to meet our customers and users where they are in their development journey. So we focus on building tools that integrate seamlessly with cloud-native services and deployment workflows. Docker has been fundamental to the rise of the cloud-native industry and we'll remain a critical player by investing in tools that give developers flexibility to thrive in any work environment.

Similarly, container environments demand robust security measures, especially as these environments become more complex. Again, we want to meet our users where they are instead of imposing another cumbersome tool on them. So we ask ourselves "how can we make security as easy as possible for developers?" Our tools are designed to catch and solve security issues seamlessly as part of the development process, reducing interruptions and rework.

A question we see a lot from customers is how to help their teams innovate and keep up productivity. We'd like to think we've mastered that at Docker. It's all about a seamless and efficient developer experience. Managers need to reframe how they're thinking about this. It's not only "how do I make developers faster," but also "how can I make my developers happier." Docker prioritizes a seamless and efficient developer experience to empower our users. We believe that happy developers are productive developers, which is why we focus on making our tools intuitive, efficient, and magical.

AI is another great, and relevant, example of this. I'm sure we all see the momentum surrounding generative AI and its potential for developers. But how does this relate specifically to Docker's users? We're seeing containers used to simplify configuration and development of generative AI solutions. That's why we worked with some partners on our Gen AI stack. It's an easy way to get developers up and running, fast.
Tell us about Docker's local + cloud approach and how it benefits container developers.

We are advocating for a local + cloud approach because we recognize that a single, universal methodology doesn't work for every organization. Developers have a diverse set of needs and should have the flexibility to leverage both local and cloud resources based on their specific requirements.

Instead of forcing developers to adapt to a "cloud-only" or "local-only" environment, Docker embraces a hybrid approach. This allows developers to utilize the familiarity and comfort of local development tools for tasks like code editing and debugging while seamlessly scaling to cloud resources when needed for resource-intensive workloads, collaboration, or deployments.

Our goal is to meet developers where they are with "just enough" cloud to overcome any limitations their current environments pose.

Docker recently announced Docker Scout General Availability (GA). What does Docker Scout do for software supply chain management?

Docker Scout goes beyond just being a security or vulnerability detection tool. It empowers developers to build secure software by ensuring its quality, trustworthiness, and compliance right from the outset.

Imagine building a house. You wouldn't wait until it's finished to check for faulty wiring or leaky pipes, right? That's where Docker Scout comes in for your software development process.

Think of Docker Scout as your construction inspector for software. It proactively helps you identify and fix potential issues in your code, dependencies, and configurations, all while you're still building. This way, you avoid costly rework and delays down the road. Ultimately, Docker Scout helps you build better, more secure software.

What benefits does Docker Scout offer development teams over previous supply chain management solutions?

What differentiates Docker Scout is its ability to meet developers where they are in their workflows. We understand the developer's pain when it comes to solving for security, and we want to make the process as easy and painless for them as possible. We integrate directly into their workflows, so when they're building, they can view where there are issues and solve them right then and there.

Tell us about Docker Build Cloud and its ability to harness the cloud.

Docker Build Cloud is a game changer for developers and development teams who are tired of waiting around for slow builds. It addresses a major pain point by offloading resource-intensive build tasks in the inner loop as well as continuous integration (CI) to the cloud, freeing up local resources and significantly speeding up the build process. We've seen build times reduced by up to 39 times in some cases.

It leverages the scalability and elasticity of the cloud to provide developers with on-demand access to powerful hardware resources. This allows them to build their images quickly and efficiently, even if they don't have access to a high-performance machine locally. Additionally, the cloud-based nature of Docker Build Cloud makes it cost-effective, as developers only pay for the resources they use.

One of the best things about Docker Build Cloud, similar to Docker Scout, is its seamless integration with existing workflows. Developers can use their existing tools and keep their current workflows without any major changes. This makes it easy to adopt and start benefiting from faster builds immediately.

How does Docker Build Cloud differ from alternative build options currently available?

Docker Build Cloud focuses on the crucial inner loop builds where developers need immediate feedback on code changes. This is a significant improvement over traditional build environments that often run on a local developer machine or through emulation (virtual machines), leading to slowness and inconsistency due to varying hardware configurations and multiple layers of touchpoints. Docker Build Cloud avoids these limitations by utilizing powerful cloud instances. This delivers a significantly faster and more consistent build experience.

Docker Build Cloud also integrates seamlessly with existing tools and Docker commands without requiring workflow changes, differing from other CI solutions that often force frequent merges, new command-line interface tools, or unnecessary production-like requirements.

Docker also announced a beta version of a new debugging tool, Docker Debug. How does Docker Debug work?

Docker Debug is really exciting because it enables debugging in containers that don't have an available shell, which is becoming increasingly common with the adoption of slim images. It also enables debugging when docker exec is not possible because a container is not running. It removes the friction associated with container debugging, empowering developers to fix issues within their domain of expertise without worrying about container intricacies.

How does Docker Debug differ from traditional container debugging and how does that benefit container developers?
With Docker Debug, anyone, whether a novice or seasoned Docker user, can debug their containers without needing full knowledge of how that container was built and the tools it contains. There's no need to specify a particular configuration or load debug-specific tools into the image. It's native to the Docker toolchain as well, so there's no need for the developer to install any additional local tooling.

What other improvements has Docker recently introduced to improve the inner loop (code/build/test/debug) experience for container developers?

We've focused a lot this year on enhancing the inner loop and local development experience, starting with making it easier for developers and development teams to ship secure software with Docker Scout. Through each stage of the inner loop, Scout surfaces vulnerability risk and simplifies how to fix it with simple, one-step remediations.

To improve the experience with containers throughout the entire inner loop, we have also made tremendous advances in performance and stability with Docker Desktop. There are dramatic improvements in startup times, reductions in memory and CPU consumption, and speedups in networking and file sharing performance.

In addition to Docker Desktop's performance improvements, we have listened closely to our customers and delivered a number of improvements for the deployment and administration of Docker Desktop so it's more readily available and up to date for developers from their IT teams.

We're excited about the direction of docker init as we add support for more languages like Python and PHP and accelerate getting more projects running in containers in the code and build phases of the inner loop.

To improve visibility and troubleshooting of container builds, we recently released a Builds view in Docker Desktop.

Lastly, there are some great improvements to Docker Compose with the additions of the watch and include features. With watch, we have made Compose a lot more effective and efficient for front-end developers who are making rapid changes and want to see those reflected quickly in the inner loop. With include, we're simplifying the code and build steps of the inner loop by making the Compose configuration more closely reflect the modularization of the application itself.
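For readers who haven't tried these features yet, a minimal compose.yaml sketch (the service names and paths are illustrative, not from the interview) shows both: include pulls a separately maintained Compose file into this one, and the develop/watch section tells docker compose watch to sync source edits into the running container:

include:
  # Reuse a Compose file maintained alongside another component
  - ./backend/compose.yaml

services:
  frontend:
    build: .
    develop:
      watch:
        # Sync local edits into the container without a rebuild
        - action: sync
          path: ./src
          target: /app/src
        # Rebuild the image when dependencies change
        - action: rebuild
          path: package.json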
the inner loop that are benefiting networking and ephemeral envi-
We’ve focused a lot this year on developers. ronments problems that arise from
enhancing the inner loop and local wanting to share and debug with
development experience, starting with What is on the horizon in container peers. Stay tuned for some timely
making it easier for developers and development tools for Docker? innovation here that I’m excited for
development teams to ship secure Docker to get to market in 2024.
software with Docker Scout. Through As we hinted at DockerCon, we be- We’re also excited about our entry
each stage of the inner loop, Scout lieve the future of development is into testing in the inner loop with
surfaces vulnerability risk and simpli- very much a local + cloud hybrid our recent acquisition of Atomic-
fies how to fix it with simple, one- one. This is both about bringing the Jar. The AtomicJar team has done a
step remediations. power of the cloud to the local devel- phenomenal job cultivating a strong
To improve the experience with opment experience as well as making community around Testcontainers
containers throughout the entire it easier to leverage dependencies on and an ecosystem of partners. We will
inner loop, we have also made tre- the cloud in the local development continue to invest here with an eye
mendous advances in performance experience. This will be a common toward the local + cloud answer for
and stability with Docker Desktop. theme across our products in the increasingly complex testing needs of
There are dramatic improvements in coming year. developers.
startup times, reductions in memory Scout will innovate on the best de- And of course, we can’t talk about
and CPU consumption and improve- veloper experience for secure soft- the future without addressing the
ments, and speedups in networking ware development, leverage trusted watershed moment our industry
and file sharing performance. content to reduce vulnerability is going through with generative
In addition to Docker Desktop’s per- exposure in more strategic ways, AI. In addition to all the ways one
formance improvements, we have and support more integrations to would expect Docker to leverage
listened closely to our customers and improve end-to-end visibility of risk AI to accelerate containerization,
delivered a number of improvements and the value of remediation across enable adoption of GenAI stacks,
for the deployment and administra- the Software Development Life and improve the inner loop experi-
tion of Docker Desktop so it’s more Cycle (SDLC). We’ll further invest ence with containers, we are also
readily available and up to date for in enabling customers to customize thinking deeply about how the
developers from their IT teams. policies, remediations, and more, as SDLC and developer journey will
We’re excited about the direction of well as continue improving sugges- be transformed with AI and how
docker init as we add support for tions for remediations to be more Docker will help accelerate this
more languages like Python and PHP actionable. transformation. n
and accelerate getting more projects Giving time back to developers and
running in containers in the code development teams to focus on the The Author
and build phases of the inner loop. things that matter is at the heart Amy Pettle is an editor for ADMIN and Linux
To improve visibility and trouble- of our mission, and we’re excited Magazine. She started out in tech publishing
shooting of container builds, we about how accelerating builds with with C/C++ Users Journal over 20 years ago
recently released a Builds view in the power of the cloud in Docker and has worked on various Linux New Media
Docker Desktop. Build Cloud delivers on this. This publications.