03 DevOps Automation
Priyanka Vergadia
Developer Advocate, Google Cloud
Specifically, we will talk about services that support continuous integration and
continuous delivery practices, part of a DevOps way of working.
With DevOps and microservices, automated pipelines for integrating, delivering, and
potentially deploying code are required. These pipelines ideally run on on-demand
provisioned resources. This module introduces the Google Cloud tools for developing
code and creating automated delivery pipelines that are provisioned on demand.
We will talk about using Cloud Source Repositories for source and version control; Cloud Build, including build triggers, for automating builds; and Container Registry for managing containers. The second half of the module covers infrastructure as code with Terraform.
A typical continuous integration pipeline has four stages:
1. Developers check in code: use a Git repo for each microservice and branches for versions.
2. Run unit tests: if the tests don't pass, stop.
3. Build deployment package: create a Docker image.
4. Deploy: save your new Docker image in a container registry.
Typical extra steps include linting of code, quality analysis by tools such as
SonarQube, integration tests, generating test reports, and image scanning.
Google provides the components required for a continuous integration pipeline
The Cloud Source Repositories service provides private Git repositories hosted on
Google Cloud. These repositories let you develop and deploy an app or service in a
space that provides collaboration and version control for your code. Cloud Source
Repositories is integrated with Google Cloud, so it provides a seamless developer
experience.
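For example, a repository can be created and cloned from the command line. A minimal sketch ("my-repo" is a placeholder name):

gcloud source repos create my-repo
gcloud source repos clone my-repo
cd my-repo
# ...add code, then commit and push as with any Git remote
git add . && git commit -m "Initial commit"
git push origin master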
Google provides the components required for a continuous integration pipeline
● Cloud Source Repositories: Developers push to a central repository when they want a build to occur.
● Cloud Build: The build system executes the steps required to make a deployment package or Docker image.
Cloud Build executes your builds on Google Cloud infrastructure. It can import
source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket,
execute a build to your specifications, and produce artifacts such as Docker
containers or Java archives. Cloud Build executes your build as a series of build
steps, where each build step is run in a Docker container. A build step can do
anything that can be done from a container, irrespective of the environment. There are
standard steps, or you can define your own steps.
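For illustration, a minimal Cloud Build configuration (a cloudbuild.yaml sketch; the image name is a placeholder) defines one Docker build step and the image to push as an artifact:

steps:
- name: 'gcr.io/cloud-builders/docker'   # standard Docker build step
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-name', '.']
images:
- 'gcr.io/$PROJECT_ID/image-name'        # pushed to Container Registry on success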
Google provides the components required for a continuous integration pipeline
● Cloud Source Repositories: Developers push to a central repository when they want a build to occur.
● Cloud Build: The build system executes the steps required to make a deployment package or Docker image.
● Build triggers: Watch for changes in the Git repo and start the build.
A Cloud Build trigger automatically starts a build whenever you make any changes
to your source code. You can configure the trigger to build your code on any changes
to the source repository or only changes that match certain criteria.
Google provides the components required for a continuous integration pipeline
● Cloud Source Repositories: Developers push to a central repository when they want a build to occur.
● Cloud Build: The build system executes the steps required to make a deployment package or Docker image.
● Build triggers: Watch for changes in the Git repo and start the build.
● Container Registry: Store your Docker images or deployment packages in a central location for deployment.
Container Registry is a single place for your team to manage Docker images or
deployment packages, perform vulnerability analysis, and decide who can access
what with fine-grained access control.
You can use IAM to add team members to your project and to grant them permissions
to create, view, and update repositories.
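For example, a team member could be granted read access to the repositories in a project with a standard IAM binding (a sketch; the user and project ID are placeholders):

gcloud projects add-iam-policy-binding your-project-id \
    --member="user:alice@example.com" \
    --role="roles/source.reader"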
Some other features of Cloud Source Repositories include the ability to debug in
production using Cloud Debugger, audit logging to provide insights into what actions
were performed where and when, and direct deployment to App Engine. It is also
possible to connect an existing GitHub or Bitbucket repository to Cloud Source
Repositories. Connected repositories are synchronized with Cloud Source
Repositories automatically.
Cloud Build lets you build software quickly across all languages
● Google-hosted Docker build service
○ Alternative to using Docker build command
● Use the CLI to submit a build
gcloud builds submit --tag gcr.io/your-project-id/image-name .
Cloud Build lets you build software quickly across all languages. It is a Google-hosted Docker build service and an alternative to using the docker build command. The CLI can be used to submit a build using gcloud; an example is shown on this slide.
"gcloud builds submit" submits the build, which runs as a remote build. The "--tag" flag is the tag to use when the image is created. The tag must use the gcr.io or *.gcr.io namespace. Your source must contain a Dockerfile if you use a tag.
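For instance, a minimal Dockerfile for a Python web application might look like the following (a sketch; the file names and entry point are placeholders):

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]   # hypothetical entry point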
Build triggers watch a repository and build a container whenever code is pushed.
Google’s build triggers support Maven, custom builds, and Docker.
The build configuration can be specified either in a Dockerfile or a Cloud Build file.
The configuration required is shown on this slide. First, a source is selected. This can be Cloud Source Repositories, GitHub, or Bitbucket. In the next stage, a source repository is selected, followed by the trigger settings. The trigger settings include information like the branch or tag to use for the trigger, and the build configuration, for example a Dockerfile or Cloud Build file.
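The same trigger can also be created from the command line. A sketch (the repository name and branch pattern are placeholders, and depending on your gcloud version the command may require the beta component):

gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --branch-pattern="^master$" \
    --build-config=cloudbuild.yaml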
Container Registry is a Google Cloud–hosted Docker repository
● Images built using Cloud Build are automatically saved in Container Registry.
○ Tag images with the prefix gcr.io/your-project-id/image-name
● Can use Docker push and pull commands with Container Registry.
○ docker push gcr.io/your-project-id/image-name
○ docker pull gcr.io/your-project-id/image-name
It is also possible to push and pull images using the standard Docker commands. For example, to push an image, use:
docker push gcr.io/your-project-id/image-name
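Before pushing or pulling, the Docker client must be authorized to access Container Registry. One way is to register gcloud as a Docker credential helper:

gcloud auth configure-docker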
● Container Registry includes a vulnerability scanner that scans containers.
(Slide diagram: Kritis Signer workflow)
Binary authorization allows you to enforce the deployment of only trusted containers into GKE. Binary authorization is a Google Cloud service based on the Kritis specification. For this to work, you must enable binary authorization on the GKE cluster where your deployment will be made, and a policy is required that specifies how images must be signed. When an image is built by Cloud Build, an attestor verifies that it came from a trusted repository; for example, Cloud Source Repositories. Container Registry includes a vulnerability scanner that scans containers. A typical workflow is shown in the diagram.
A check-in of code triggers a Cloud Build. As part of the build, Container Registry performs a vulnerability scan when a new image is uploaded. The scanner publishes messages to Pub/Sub. The Kritis Signer listens to these Pub/Sub notifications and makes an attestation if the image passed the vulnerability scan. The Google Cloud Binary Authorization service then enforces the policy by requiring an attestation from the Kritis Signer before a container image can be deployed.
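A sketch of the basic setup commands (standard gcloud commands; the policy contents themselves are omitted here):

# Enable the Binary Authorization API in the project
gcloud services enable binaryauthorization.googleapis.com

# Export the current policy to view or edit attestation requirements
gcloud container binauthz policy export > policy.yaml

# Import the edited policy
gcloud container binauthz policy import policy.yaml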
Infrastructure as Code
On-Premises
● Buy machines.
Moving to the cloud requires a mindset change. The on-demand, pay-per-use model
of cloud computing is a different model to traditional on-premises infrastructure
provisioning. A typical on-premises model would be to buy machines and keep these
running continuously. The compute infrastructure is typically built from fewer, larger
machines. From an accounting view, the machines are capital expenditure that
depreciates over time.
On-Premises
● Keep machines running for years.
Cloud
● Turn machines off as soon as possible.
When using the cloud, resources are rented instead of purchased, and as a result we
want to turn the machines off as soon as they are not required to save on costs. The
approach is to typically have lots of smaller machines—scale out instead of scale
up—and to expect and engineer for failure. From an accounting view, the machines
are a monthly operating expense.
In other words, in the cloud, all infrastructure needs to be disposable. The key to this
is infrastructure as code (IaC), which allows for the provisioning, configuration, and
deployment activities to be automated.
Having the process automated minimizes risk, eliminates manual mistakes, and supports repeatable deployments with scale and speed. Deploying one machine or one hundred machines takes the same effort. The automation can be achieved using scripts or declarative tools such as Terraform, which we will discuss later.
It is really important that no time is spent trying to fix broken machines or installing
patches or upgrades. These will lead to problems recreating the environments at a
later date. If a machine requires maintenance, remove it and create a new one
instead.
Terraform is one of the tools used for infrastructure as code, or IaC. Before we dive into understanding Terraform, let's look at what infrastructure as code does. In essence, infrastructure as code allows for the quick provisioning and removal of infrastructure.
Several tools can be used for IaC. Google Cloud supports Terraform, where
deployments are described in a file known as a configuration. This details all the
resources that should be provisioned. Configurations can be modularized using
templates, which allows the abstraction of resources into reusable components
across deployments.
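For illustration, a reusable template might be consumed like this (a sketch; the module path and variables are hypothetical):

module "web_server" {
  source = "./modules/instance"   # hypothetical reusable module
  name   = "web-1"
  zone   = "us-central1-a"
}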
In addition to Terraform, Google Cloud also provides support for other IaC tools,
including:
● Chef
● Puppet
● Ansible
● Packer
Terraform is an infrastructure automation tool
● Repeatable deployment process
● Declarative language
● Focus on the application
● Parallel deployment
● Template-driven
(Slide diagram: Compute Engine, Cloud Firewall Rules, Cloud VPN)
Terraform is an open source tool that lets you provision Google Cloud resources.
This deployment can be repeated over and over with consistent results, and you can
delete an entire deployment with one command or click. The benefit of a declarative
approach is that it allows you to specify what the configuration should be and let the
system figure out the steps to take.
Instead of deploying each resource separately, you specify the set of resources that
compose the application or service, which allows you to focus on the application.
Unlike issuing commands sequentially in Cloud Shell, Terraform deploys resources in parallel.
Terraform uses the underlying APIs of each Google Cloud service to deploy your
resources. This enables you to deploy almost everything we have seen so far, from
instances, instance templates, and groups, to VPC networks, firewall rules, VPN
tunnels, Cloud Routers, and load balancers.
Terraform language
● The configuration file guides the management of the resource.

<BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
  # Block body
  <IDENTIFIER> = <EXPRESSION> # Argument
}
The Terraform language is the user interface to declare resources. Resources are
infrastructure objects such as Compute Engine virtual machines, storage buckets,
containers, or networks. A Terraform configuration is a complete document in the
Terraform language that tells Terraform how to manage a given collection of
infrastructure. A configuration can consist of multiple files and directories.
The Terraform language has three main elements:
● Blocks represent objects and can have zero or more labels. A block has a body that enables you to declare arguments and nested blocks.
● Arguments assign a value to a name.
● Expressions represent values that can be assigned to identifiers.
provider "google" {
  region = "us-central1"                 # region for the deployment
}

resource "google_compute_instance" "default" {
  name         = "my-instance"           # illustrative values
  machine_type = "e2-medium"
  zone         = "us-central1-a"
  boot_disk {
    initialize_params { image = "debian-cloud/debian-11" }  # image to build the instance
  }
  network_interface { network = "default" }
}

output "instance_ip" {
  value = google_compute_instance.default.network_interface[0].network_ip
}
Terraform can be used on multiple public and private clouds. Terraform is already
installed in Cloud Shell.
The example Terraform configuration file shown starts with a provider block that
indicates that Google Cloud is the provider. The region for the deployment is specified
inside the provider block.
The resource block specifies a Google Cloud Compute Engine instance, or virtual
machine. The details of the instance to be created are specified inside the resource
block.
The output block specifies an output variable for the Terraform module. In this case, a
value will be assigned to the output variable "instance_ip."
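The typical workflow to apply such a configuration uses the standard Terraform commands, run in the directory containing the .tf files:

terraform init      # download the provider plugins
terraform plan      # preview the changes Terraform would make
terraform apply     # provision the resources
terraform destroy   # delete the entire deployment with one command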
Lab: Building a DevOps Pipeline
In this first lab, you will build a DevOps pipeline using Cloud Source Repositories,
Cloud Build, and Container Registry. Specifically, you will first create a Git repository.
You will then write a simple Python application and add it to your repository. After that
you will test your web application in Cloud Shell and then define a Docker build.
Once you define the build, you will use Cloud Build to create a Docker image and
store the image in Container Registry. Then, you will see how to automate builds
using triggers. Once you have the trigger, you will test it by making a change to your
program and pushing that change to your Git repo.
Lab review: Building a DevOps Pipeline
In this lab, you learned how to use Google Cloud tools to create a simple and
automated continuous integration pipeline. You used Cloud Source Repositories to
create a Git repository and then used Cloud Build and triggers to automate the
creation of your Docker images when code was checked into the repo.
When Cloud Build created your Docker images, it stored them in Container Registry.
You saw how to access those images and test them in a Compute Engine VM.
You can stay for a lab walkthrough, but remember that Google Cloud's user interface
can change, so your environment might look slightly different.
Review: DevOps Automation
In this module, you learned about services that you can use to help automate the
deployment of your cloud resources and services. You used Cloud Source
Repositories, Cloud Build, triggers, and Container Registry to create continuous
integration pipelines. A continuous integration pipeline automates the creation of
deployment packages like Docker images in response to changes in your source
code.
You also saw how to automate the creation of infrastructure using the infrastructure as
code tool, Terraform.