Docker

Docker is a container management service.


It is used to automate the deployment of any application,
using lightweight, portable containers.
==========================================
Docker LifeCycle:-
1) Created State
2) Running State
3) Paused/Unpaused State
4) Stopped State
5) Killed/Deleted State
==========================================
Components of Docker

1) Docker for Mac - to run Docker containers on macOS
2) Docker for Linux - to run Docker containers on Linux
3) Docker for Windows - to run Docker containers on Windows
4) Docker Engine - for building Docker images and creating Docker containers
5) Docker Hub - a registry to host various Docker images
6) Docker Compose - to define applications using multiple Docker containers
==========================================
List of useful Docker commands:-
1) command to list the containers currently running
$ docker ps

2) command to run a container under a specific name
$ docker run --name [name] [image]

3) command to export a container's filesystem as a tar archive
$ docker export [OPTIONS] CONTAINER

4) command to import a tar archive as a Docker image
$ docker import [file_name.tar] [image_name:tag]

5) command to delete a container
$ docker rm [container_name]

6) command to remove all stopped containers, unused networks,
dangling build cache, and dangling images
$ docker system prune

7) command to check the versions of the Docker Client and Server
$ docker version

8) command to create a Docker swarm
$ docker swarm init --advertise-addr <manager-ip>
---------------
Docker Swarm is a native clustering and orchestration solution for Docker.
It allows you to create and manage a cluster of Docker nodes,
and deploy and scale applications on the cluster.

A swarm is a group of Docker engines that are running in swarm mode
and joined together.
Once the engines are joined in a swarm,
they can be used to deploy services.
A service is a group of containers that are running a specific image
and are deployed on the swarm.
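
For example, after running "docker swarm init" on the manager, a worker node
can join the swarm and a service can be deployed on it
(a minimal sketch; the token, manager IP and service name are placeholders):
$ docker swarm join --token <worker-token> <manager-ip>:2377
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls
Here, "join" is run on each worker node, "service create" deploys three
replicas of the nginx image across the swarm, and "service ls" verifies the
deployed service.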
---------------
Basic Commands:
$ sudo docker info - shows docker status and configuration
$ sudo docker ps - show running docker containers
$ sudo docker ps -l - show the "latest" docker container (-l = lowercase L)
$ sudo docker ps -a - show "all" docker containers, even those not running
$ sudo docker images - show docker images (and tags)
$ sudo docker run -it <image> <app> - connect / log in to work interactively in a container
$ systemctl status docker - show status and log for docker (<CTRL-C> to exit)
$ sudo systemctl enable docker - enable docker using system control (not usually needed)
$ sudo systemctl start docker - start docker (if it was stopped)
$ sudo service docker stop - stop the docker service
$ sudo service docker start - start the docker service
$ sudo service docker restart - restart the docker service
$ sudo usermod -aG docker <AdminUser> - add <AdminUser> to the Linux users authorized for docker;
replace <AdminUser> with your username; you must log out and log back in for it to take effect
---------------
The most critical Docker commands are:
• docker build - builds an image from a Dockerfile
• docker commit - creates a new image from a container's changes
• docker create - creates a new container
• dockerd - launches the Docker daemon
• docker kill - kills a running container
---------------
Some advanced commands include:
• docker info - displays system-wide information
regarding the Docker installation
• docker pull - downloads an image from a registry
• docker stats - provides live resource-usage statistics for containers
• docker images - lists downloaded images
==========================================
Creating a Docker image

A Docker image can be created with a "Dockerfile",
which lists the components and commands that make up a package.

Dockerfiles are a list of instructions that docker performs to build an image.

It's a best practice to build images in a "clean" directory,
as docker build sends the entire working directory (the build context)
to the Docker daemon by default.
Place this file in a new folder at the top of your project named docker.
---------------
Docker Instructions in a Dockerfile
1) FROM
To set the base image for the subsequent instructions.
A valid Dockerfile must have FROM as its first instruction.

2) LABEL
To add labels to an image, to organize the images of our project.

3) RUN
To execute a command on top of the current image and commit the result
as a new layer.

4) CMD
To provide the default command executed at runtime when the container is started.
Syntax
CMD ["executable", "param1", "param2"]

There can be only one CMD in a Dockerfile.
If we use more than one CMD, only the last one takes effect.

5) COPY
To copy new files or directories
from the source to the filesystem of the container at the destination.

6) ENTRYPOINT
To configure the command that always runs when the container starts.

Syntax
ENTRYPOINT command param1
Options:-
command - command to run when the container is launched
param1 - parameter passed to the command

7) ENV
To set environment variables in the container.

Syntax:-
ENV key value

Options:-
key - key for the environment variable
value - value for the environment variable

8) WORKDIR
To set the working directory of the container.

Syntax
WORKDIR dirname

Options:-
dirname - the new working directory.
If the directory does not exist, it will be created.
---------------
Docker command CMD vs RUN
- The CMD instruction is used to specify the default command
that should be executed when a container is started from an image.
This command can be overridden when starting a container.
- The RUN instruction is used to execute a command
during the image-building process.
It runs the command(s) in a new layer on top of the current image
and commits the results.
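
A minimal Dockerfile fragment illustrating the difference (the base image and
the installed package are just examples):
FROM ubuntu
RUN apt-get update && apt-get install -y curl
CMD ["curl", "--version"]
Here, the RUN line is executed once at build time and its result is committed
to the image, while the CMD line only runs when a container is started and can
be overridden, e.g. "docker run <image> /bin/bash".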
==========================================
Sample "Dockerfile"
# Alpine Linux with OpenJDK JRE
FROM openjdk:8-jre-alpine
# copy WAR into image
COPY spring-boot-app-0.0.1-SNAPSHOT.war /app.war
# run application with this command line
CMD ["/usr/bin/java", "-jar", "-Dspring.profiles.active=default", "/app.war"]
---------------
Next, add the output directory property to the "spring-boot-maven-plugin"
in the "pom.xml" file.
This copies the packaged war into the docker directory
as part of the Maven package build target.
---------------
Then, build the image with the following command:-
docker build -t ImageName:TagName dir
Options:-
-t - to mention a tag for the image
ImageName - the name you want to give to your image
TagName - the tag you want to give to your image
dir - the directory where the Dockerfile is present

E.g.
docker build -t spring-boot-app:latest .
---------------
To tag an image to the relevant repository
docker tag imageID Repositoryname
Options:-
imageID - the ID of the image which needs to be tagged to the repository
Repositoryname - the repository name to which the image ID needs to be tagged
E.g.
sudo docker tag ab0c1d3744dd demousr/demorep:1.0
---------------
To push images to the Docker Hub
docker push <Repositoryname>
Options:-
Repositoryname - the repository name which needs to be pushed to the Docker Hub
==========================================
Docker Hub

Docker Hub is a registry service on the cloud that allows you to download
Docker images that are built by other communities.
You can also upload your own Docker built images to Docker hub.
---------------
Official site for Docker Hub
https://www.docker.com/communityedition#/add_ons
---------------
To download a particular image, use the following command
docker pull <image_name>

E.g. sudo docker pull jenkins => on an Ubuntu machine


---------------
To run Jenkins:-
sudo docker run -p 8080:8080 -p 50000:50000 jenkins
Here,
jenkins - name of the image we want to run
-p - maps a port inside the Docker container to a port
on our main Ubuntu server,
so that we can access the container accordingly.
==========================================
Docker Image
An image is a combination of a file system and parameters.
---------------
To view list of Docker images
docker images
O/P consists of TAG, ImageID and Created details.
Tags don’t create new copies of images. They’re pointers.
---------------
To get only the Docker image IDs
docker images -q
-q => returns only the image IDs.
---------------
To run the docker image
docker run <image_name>
---------------
To remove Docker image
docker rmi <ImageID> => You can get Image ID from "docker images" command.
---------------
To see the details of a particular image or container
docker inspect <Repository>
where 'Repository' is the name of the image.
==========================================
Docker Containers

Containers are instances of Docker images


that can be run using the Docker "run" command.
---------------
Can you lose data stored in a container?
Data written in a container's writable layer persists when the container
is stopped and restarted, but it is lost when the container is deleted,
unless it is stored in a volume.
---------------
Best method for removing a container
1) Stop the container
$ docker stop <container_id>
2) Then remove the container
$ docker rm -f <container_id>
---------------
Can a container restart on its own?
By default the restart policy ("--restart") is set to "no",
so a container will not restart by itself unless a different
restart policy is specified.
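
A restart policy can be set explicitly when starting a container, for example:
docker run -d --restart unless-stopped nginx
Here, the "unless-stopped" policy makes the Docker daemon restart the container
automatically unless it has been explicitly stopped.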
---------------
To run a container in an interactive mode:-
docker run -it <image_name>
---------------
To display the running containers on the host machine
docker ps

Options
-a => to list all containers
---------------
To see the list of commands executed on the image via a container
docker history <ImageID>
---------------
To see top processes within a container
docker top <ContainerID>
---------------
To stop a running container
docker stop <ContainerID>
---------------
To delete a container
docker rm <ContainerID>
---------------
To provide statistics of a running container
docker stats <ContainerID>
---------------
To attach to a running container
docker attach <ContainerID>
---------------
To pause processes running in the container
docker pause <ContainerID>
---------------
To unpause the processes in a running container
docker unpause <ContainerID>
---------------
To kill the processes in a running container
docker kill <ContainerID>
---------------
Docker Container Lifecycle
create -> run -> stop/pause/kill
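
The lifecycle can be walked through with the basic commands, for example
(the container name "demo" and the nginx image are just illustrative):
docker create --name demo nginx   => Created state
docker start demo                 => Running state
docker pause demo                 => Paused state
docker unpause demo               => back to Running state
docker stop demo                  => Stopped state
docker rm demo                    => Deleted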
==========================================
Docker image Vs Container vs Engine
- An image in Docker is a lightweight, stand-alone, executable package
that includes everything needed to run a piece of software,
including the code, a runtime, libraries,
environment variables, and config files.

- A container is a running instance of an image.

- The Docker Engine is a background service (daemon) that manages and runs
Docker containers.
It is responsible for creating, starting, stopping, and
deleting containers, as well as managing their networking and storage.
==========================================
Docker command COPY vs ADD
- The COPY command is used to copy local files
from the host machine to the Docker image.
It only copies the files, and does not support any other functions
such as decompressing files or fetching files from a remote location.

- The ADD command supports additional functionality beyond just copying files.
ADD can also fetch files from a remote URL,
and it automatically extracts local archive files in a recognized format
(such as tar or gzip) into the destination.
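
A minimal Dockerfile fragment illustrating the difference (the file names and
URL are placeholders):
COPY app.jar /opt/app.jar
ADD https://example.com/config.tar.gz /opt/config.tar.gz
ADD local-archive.tar.gz /opt/
Here, COPY performs a plain copy from the build context, the first ADD downloads
a remote file (remote files are not auto-extracted), and the second ADD copies a
local tar archive and automatically extracts it into /opt/.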
==========================================
Docker Namespace
A Docker namespace is a feature of the Linux kernel that
allows for the isolation of resources, such as network, process,
and file system, among different groups of users and processes.
Docker uses namespaces to provide isolation between containers
running on the same host.
==========================================
Docker registry
A Docker registry is a service that stores and distributes Docker images,
and it allows users to upload, download, and manage their own images.
It can be either public or private,
and it can support different authentication and authorization mechanisms
to control access to the images.
==========================================
Entry point
An entry point in Docker is a command or script that is executed
when a container is started from an image.
It’s defined in the
image’s Dockerfile using the ENTRYPOINT command,
and it’s used to specify the default command that
should be run when a container is created from the image.
It can be overridden when starting a container using the --entrypoint option.
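
A minimal sketch (the image name "myimage" is a placeholder):
In the Dockerfile:
ENTRYPOINT ["ping"]
CMD ["localhost"]
At run time:
docker run myimage                          => runs "ping localhost"
docker run myimage google.com               => runs "ping google.com"
docker run --entrypoint /bin/sh -it myimage => overrides the entry point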
==========================================
Docker Configuring

To start the Docker daemon process


service docker start

To stop the Docker daemon process


service docker stop
==========================================
Docker Containers and Shell

nsenter
to enter a container's namespaces from the host and run a command inside it.

Syntax
nsenter -m -u -n -p -i -t <container_PID> <command>

Options
-m is used to enter the mount namespace
-u is used to enter the UTS namespace
-n is used to enter the network namespace
-p is used to enter the process (PID) namespace
-i is used to enter the IPC namespace
-t is used to specify the target process ID; use the PID of the container's
main process, which can be found with
docker inspect --format '{{.State.Pid}}' <containerID>
command - the command to run within the container
==========================================
Docker Private registries
To host your own private registry
sudo docker run -d -p 5000:5000 --name registry registry:2
Here,
- "registry" is the official image provided by Docker
which can be used to host private repositories,
and ":2" is the image tag (version 2 of the registry image).
- The port number exposed by the container is 5000.
Hence with the -p option, we are mapping the same port number
to port 5000 on our localhost.
- The --name option names the container "registry" on the Docker host.
- The -d option is used to run the container in detached mode,
so that the container can run in the background.
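
Once the registry container is running, an image can be pushed to it and pulled
back, for example (the image name is just an example):
sudo docker tag ubuntu localhost:5000/my-ubuntu
sudo docker push localhost:5000/my-ubuntu
sudo docker pull localhost:5000/my-ubuntu
Here, the image is first re-tagged with the registry's address (localhost:5000),
then pushed to and pulled back from the private registry.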
==========================================
Docker - Building Apache Web server Docker File

Create a Docker file with the necessary information as follows:-

FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y apache2-utils
RUN apt-get clean
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]

Here,

- We are first creating our image to be from the Ubuntu base image.
- Next, we are going to use the RUN command to update all the
packages on the Ubuntu system.
- Next, we use the RUN command to install apache2 on our image.
- Next, we use the RUN command to install the necessary utility
apache2 packages on our image.
- Next, we use the RUN command to clean any unnecessary files from
the system.
- The EXPOSE command is used to expose port 80 of Apache in the
container to the Docker host.
- Finally, the CMD command is used to start apache2 in the foreground,
so that the container keeps running.
---------------
Run the "docker build" command to build the docker file as follows:-
sudo docker build –t=”mywebserver” .
We are tagging our image as mywebserver.
---------------
Create a container from the image
sudo docker run -d -p 80:80 mywebserver
Here,
- The port number exposed by the container is 80.
Hence with the "-p" option, we are mapping the same port number
to port 80 on our "localhost".
- The "-d" option is used to run the container in detached mode.
This is so that the container can run in the background.
==========================================
Docker - Container Linking

Container Linking allows multiple containers to link with each other.
It is a better option than exposing ports.
Steps:-
1) Download the Jenkins image
sudo docker pull jenkins
2) Run the source container
sudo docker run --name=jenkins -d jenkins
3) Launch the destination container, linked to the source container
sudo docker run --name=reca --link=jenkins:alias-src -it ubuntu:latest
/bin/bash
4) Attach to the receiving container
sudo docker attach reca
5) Run the "env" command to notice the new environment variables linking
it with the source container
==========================================
Docker Storage

Storage Drivers:-
1) OverlayFS - good for testing applications in the lab.
2) AUFS - stable driver; can be used for production-ready applications
3) Btrfs - good for instances where you maintain multiple build pools
4) ZFS - good for systems which are of Platform-as-a-Service type work.
---------------
Changing the volume (storage) driver for a container

sudo docker run -d --volume-driver=flocker
-v /home/demo:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins

Options:-
The --volume-driver option is used to specify another volume driver for the
container.
---------------
Data Volumes
- A separate volume that can be shared across containers

Features of data Volumes:-


1) They are initialized when the container is created.
2) They can be shared and also reused amongst many containers.
3) Any changes to the volume itself can be made directly.
4) They exist even after the container is deleted.
---------------
Creating a volume
Syntax
docker volume create --name=volumename --opt options

Here,
name - the name of the volume which needs to be created
opt - options you can provide while creating the volume

E.g.
sudo docker volume create --name=demo --opt o=size=100m
Here, we are creating a volume of size 100MB with the name "demo".
---------------
Listing all volumes
docker volume ls
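
A volume can then be mounted into a container with the -v option, for example
(the mount point /data is arbitrary):
sudo docker run -it -v demo:/data ubuntu /bin/bash
sudo docker volume inspect demo
Here, the "demo" volume is mounted at /data inside the container, and
"docker volume inspect" shows its driver, mount point and options on the host.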
==========================================
Docker Networking

To list all docker networks on the Docker host
docker network ls
---------------
Inspect a Docker network
docker network inspect <networkname>

E.g. sudo docker network inspect bridge


---------------
Creating your own network
docker network create --driver <driver_name> <name>

Here,
driver_name - the name of the network driver to use
name - the name given to the network

E.g. sudo docker network create --driver bridge new_nw
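
A container can then be attached to the new network with the --network option,
for example:
sudo docker run -d --network new_nw --name webserver nginx
sudo docker network inspect new_nw
Here, the container is attached to "new_nw" at start time, and
"docker network inspect new_nw" now lists it under the "Containers" section.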


==========================================
Docker - Setting Node.js
Steps:-
1) Pull the "node" image on the docker host
Docker pull node

2) On the Docker host, create a "Node.js" file using vim editor


with the following content
Console.log(‘Hello World’);

3) To run the above script file, use the following command:-


sudo docker run –it –rm –name = HelloWorld –v “$PWD”:/usr/src/app
–w /usr/src/app node node HelloWorld.js

Here,
- The --rm option is used to remove the container after it exits.
- We are giving the container the name "HelloWorld".
- We are mapping the volume in the container, which is
/usr/src/app, to our current present working directory.
This is done so that the node container will pick up
our "HelloWorld.js" script,
which is present in our working directory on the Docker host.
- The -w option is used to specify the working directory used by
Node.js.
- The first node option specifies the node image to run.
- The second node option is the "node" command to run
in the node container.
- And finally we mention the name of our script.
==========================================
Docker - Setting MongoDB
Steps:-
1) Pull the image from Docker Hub
docker pull mongo
2) Run a Mongo container
sudo docker run -it -d mongo
Here,
The -it option is used to run the container in interactive mode.
The -d option is used to run the container as a daemon process.
And finally we are creating a container from the mongo image.

3) Spin up another container to act as a client

sudo docker run -it --link=tender_poitras:mongo mongo /bin/bash
Here,
- The -it option is used to run the container in interactive mode.
- We are linking our new container to the already launched
MongoDB server container ("tender_poitras" in this example);
you need to mention the name of your already launched container.
- We are then specifying that we want
to launch the mongo image as our client
and then run the /bin/bash shell in our new container.

4) Run the env command in the new container to see the details of
how to connect to the MongoDB server container

5) Connect to the MongoDB server from the client container


mongo 172.17.0.2:27017
Here,
- The mongo command is the client mongo command that is used to
connect to a MongoDB database.
- The IP address and port number are what you get when you use the env
command.

6) To switch to a database named demo

use demo
==========================================
Docker Cloud
Docker Cloud is a service provided by Docker in which
you can carry out the following operations:-
1) Nodes - connect Docker Cloud to your existing cloud
providers such as Azure and AWS to spin up containers
on those environments.
2) Cloud Repository - provides a place where you can store your own
repositories.
3) Continuous Integration - connect with GitHub and build a
continuous integration pipeline.
4) Application Deployment - deploy and scale infrastructure and
containers.
---------------
Steps:-
1) Go to the following link to get started with Docker Cloud:-
https://cloud.docker.com/
2) Provide login credentials
3) Connect to the cloud provider - go to AWS and perform the following steps:-
- Take the AWS keys from the AWS Console
- https://aws.amazon.com/console/
- Create an AWS policy that will allow Docker to use EC2 instances.
Filename "dockercloudpolicy"

{
   "Version": "2012-10-17",
   "Statement": [ {
      "Action": [
         "ec2:*",
         "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
   }]
}
- Create a role which will be used by Docker to spin up the nodes on AWS.
dockercloud-role
- Go to ‘Role for Cross Account Access’ and
select “Provide access between your account and a 3rd party AWS account".
- On the next screen,
- In the Account ID field, enter the ID for the Docker Cloud service.
- In the External ID field, enter your Docker Cloud username.
- Attach the policy
- In the next screen, copy the "arn" role which is created
4) Go back to Docker Cloud,
- select Cloud Providers,
and click the plug symbol next to Amazon Web Services.
- Enter the arn role and click the Save button
- Once saved, the integration with AWS would be complete.

5) Setting up Nodes
- Go to the Nodes section in Docker Cloud
- Give the details of the nodes which will be setup in AWS.
- Click the "Launch Node cluster"
6) Deploying a service
- Go to the Services Section in Docker Cloud.
Click the "Create" button
- Choose the Service which is required.
- Choose the Create & Deploy option.
This will start deploying the Mongo container on your node cluster.
==========================================
Docker Compose
Docker Compose is used to run multiple containers as a single service.
---------------
Creating the docker-compose file
sudo vim docker-compose.yml

version: '2'
services:
   database:
      image: mysql
      ports:
         - "3306:3306"
      environment:
         - MYSQL_ROOT_PASSWORD=password
         - MYSQL_USER=user
         - MYSQL_PASSWORD=password
         - MYSQL_DATABASE=demodb
   web:
      image: nginx

Here,
- The database and web keywords are used
to define two separate services.
One will run our mysql database and
the other will be our nginx web server.
- The image keyword is used to specify the image from Docker Hub
for our mysql and nginx containers.
- For the database, we use the ports keyword
to mention the ports that need to be exposed for mysql.
- And then, we also specify the environment variables
required to run mysql.
---------------
Run the docker-compose file as follows:-
sudo docker-compose up
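
Once the services are up, they can be checked and torn down with, for example:
sudo docker-compose ps
sudo docker-compose down
Here, "docker-compose ps" lists the containers started from this compose file,
and "docker-compose down" stops and removes them.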
==========================================
How to deploy a Spring Boot application to Docker Hub?
Step#1 : Creating Repository in Docker Hub
Step#2 : Creating Spring Boot Application using STS
Step#3 : Creating jar file of the application
Step#4 : Creating a Docker file
Step#5 : Download and Install Docker Toolbox/Desktop
Step#6 : Execute Docker Commands
==========================================
Docker – Publishing Images to Docker Hub
Steps:-
1) Log in / sign up to Docker Hub
2) Click on the Repositories tab
3) Create a new repository
4) Build an image tagged with the newly created repository.
To build the image, the syntax is shown below.
docker build -t username/repository_name .
5) Push the docker image to docker hub
docker push <username>/<repository_name>
==========================================
Docker Java Application Example
Steps:-
1) Create a directory
2) Create a Java File
3) Create a Dockerfile
4) Build Docker Image
5) Run Docker Image
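
A minimal sketch of these steps, assuming a single-file Hello.java with a main
method in the project directory (all names here are illustrative):

Dockerfile:
FROM openjdk:8-jdk-alpine
WORKDIR /app
COPY Hello.java .
RUN javac Hello.java
CMD ["java", "Hello"]

Build and run the image:
docker build -t hello-java .
docker run hello-java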
==========================================