Setup and deployment of a microservice-oriented architecture using Terraform
Master Thesis
Author:
Predrag Falcic, BSc (Hons)
Supervisor:
DI Jochen Hence, MBA
Date:
27. 12. 2021
Summary
Cloud platforms provide scalable computing power and save organizations from the need to invest in and maintain costly infrastructure. With the use of infrastructure as code, tasks like managing, building, testing, delivering, and deploying become easier to implement. It allows us to take advantage of automation tools to perform these tasks without any direct human interaction.
The goal of this research was to analyze Infrastructure as Code tools and suggest how they can be used to improve the static IT infrastructure of an organization. The first part of the research will be focused on the analysis of available tools and known practices for the development and deployment of a microservice-oriented architecture, through a study of the literature. Based on the performed theoretical research, the infrastructure and the communication between individual microservices will be presented. Next, a practical example of such a system will be implemented and deployed on a cloud provider's platform using infrastructure as code and compared with the manual deployment of the same system. The requirements of the system represent the basic requirements for data management systems operated through interaction with a user interface.
The evaluation of the results showed that using infrastructure as code can improve the deployment and development process of a system, but we can also note that this may not always be true and that the advantages and disadvantages depend on the complexity of the system and the need for continuous deployments.
List of Abbreviations
Key Terms
Summary .......................................................................................................................iii
Abstract ........................................................................................................................ iv
List of Abbreviations .......................................................................................................v
Key Terms ..................................................................................................................... vii
Contents ...................................................................................................................... viii
1. Introduction ........................................................................................................... 1
1.1 Problem statement .................................................................................................... 1
1.2 Scope ......................................................................................................................... 1
1.3 Research methodology ............................................................................................... 2
1.4 Thesis structure overview .......................................................................................... 2
2. Theoretical framework ........................................................................................... 4
2.1 Web-Application ........................................................................................................ 4
2.2 Containerization......................................................................................................... 6
2.3 Architecture Patterns ................................................................................................. 7
2.3.1 Monolithic and N-Tier ................................................................................................................ 7
2.3.2 Microservices ............................................................................................................................. 9
2.4 Cloud computing ...................................................................................................... 10
2.4.1 Amazon Web Services .............................................................................................................. 12
2.4.2 AWS Elastic Compute Cloud..................................................................................................... 13
2.4.3 AWS Elastic Container Registry ................................................................................................ 14
2.4.4 AWS Elastic Container Service ................................................................................................. 14
2.4.5 AWS Networking ...................................................................................................................... 15
2.4.6 AWS Security ............................................................................................................................ 16
2.4.7 AWS DynamoDB....................................................................................................................... 17
2.5 Infrastructure as code .............................................................................................. 17
2.5.1 General elements ..................................................................................................................... 18
2.5.2 Provisioning tools ..................................................................................................................... 20
1. Introduction
1.2 Scope
Based on the motivation presented above, the initial questions that this thesis will explore
are the following:
The innovative value of this work lies in the thorough comparison of deploying the application on the AWS Cloud manually (configuring the database and the AWS EC2 instances and deploying the docker containers through the AWS web console) with the deployment of the same application using Terraform as the infrastructure as code tool for configuring the AWS resources and deploying the application. Additionally, in this paper we will show how an automation tool like Jenkins can be used to automatically deploy the microservices using Terraform.
2. THEORETICAL FRAMEWORK
The following chapter describes the findings from the theoretical analysis of the different
parts of the system that we are going to build and how they relate to each other.
The World Wide Web (WWW) or the Web has been identified as a powerful channel for
exchanging information in recent years. Today, a growing number of companies are building
websites to enable customers around the world to use their services. However, for
information to be exchanged through the Web, detailed planning and preparation are
required. [5]
It is not always possible to define what a Web application should or will contain at the start of the development process, because changes in the marketplace or in the volume of data come quickly and must be accommodated without derailing a year's worth of plans. This is why it is important, when developing web applications, that both the applications and the infrastructure are highly scalable.
The microservice-oriented architecture is highly scalable and is a good choice to overcome these challenges. On the downside, the maintenance of a microservice architecture can be difficult. Containerization can be used to simplify this process. By using containerization, we can isolate each microservice in its own working environment and make them independently deployable and scalable. [6]
Using a Cloud service provider like Amazon Web Services (AWS), which provides and manages the underlying infrastructure, enables organizations to focus on application development and makes it easier to achieve scalability, because the resources required by the application are allocated automatically at the time of use.
Although it is easy to configure a simple cloud architecture, mistakes can still occur when provisioning more complex architectures. In every type of work, human error is always present, and managing a cloud infrastructure is no different. To avoid human errors, Infrastructure as Code (IaC) can be used to launch cloud environments automatically, quickly, and without mistakes.
2.1 Web-Application
In 1989, a global information management system which allows access to resources over a computer network was introduced. Today we call this system the World Wide Web. It was presented as a document management system that enables sharing of information resources. Each document has a unique address known as a Uniform Resource Locator (URL), which can also be used to access the required document. [5]
To make the documents available, a computer connected to the internet was needed.
According to this idea, most of the information resources available on the Internet were
initially static documents. The development of dynamic information systems, starting with
search engines connected to a database, has changed the nature of digital resources. With
the emergence of dynamic systems, the era of web pages is being replaced by the era of
web applications. The web page would display static document content, while the web
application would present dynamic content that depended on user requirements. [7]
A web application can be defined as software deployed on a server that requires an internet connection and a web browser to be used. Any component of a website that can perform a user-specific function may qualify as a web application.
These applications can serve a variety of purposes and can be tailored to different needs, either for business or individual purposes. For example, a web application can be a simple online calculator or something with a more complex structure, for example a banking system [7].
Most web applications today use an API (Application Programming Interface), and, in most cases, it is a REST (Representational State Transfer) API. The API enables communication between software components by defining a strict communication interface. Queries return fixed data structures, so in some cases data is returned that the user does not need at a given time, or there is a surplus of data [8].
In the REST architecture, clients send requests to the server when they want to access, add, delete, or modify a resource. Servers send appropriate responses to these requests: the requested resource, a message that resource changes were successfully saved, and so on. The HTTP protocol is most often (but not necessarily) used for communication between the server and the client. The HTTP protocol defines a set of standard methods for resource management, some of which are shown in Table 1.
Table 1 Standard HTTP methods for resource management
Method Description
GET Retrieves an existing resource from the server
POST Creates a new resource on the server
PUT Updates an existing resource on the server
DELETE Deletes a resource from the server
2.2 Containerization
Virtualization has allowed developers to develop software for different operating systems from their own computer, by creating a virtual environment for each operating system required.
However, although it is easy to run the Windows operating system on a MacBook and test the functionality of the application on the Windows platform, virtual machines also have their drawbacks. A lot of factors determine the performance and efficiency of a virtual machine, such as the hardware of the computer on which it is running.
A container represents an isolated lightweight executable which contains the operating system on which the application should run and all the dependencies that the application requires to operate correctly. Because of this, the container can run consistently in any environment. Containers make it possible to overcome the difficulties that come with the use of a virtual machine, because they are more portable and resource-efficient than virtual machines [10].
In contrast to virtual machines, which run a full virtual operating system, containers take a different approach: the kernel of the operating system is shared with other containers and applications. This makes containers much smaller, and they do not require many resources, but they can be hard to set up, manage and automate. As an open-source project, Docker aims to overcome these drawbacks by automating the deployment of applications into containers.
Docker is a technology that enables storing code and dependencies in a neat little package, called an image. The image contains the application and all the required configurations and dependencies, and it can then be used to start a container with the application deployed inside of it. Unlike virtual machines, containers do not require a hardware hypervisor. Because containers only require docker on the host machine, container-based ecosystems offer a better solution to develop and deploy applications.
When using container-based systems, as with other tools used when developing software, there are also concerns about how secure these systems are. Truly isolated containerized applications, because they function independently, can prevent any malicious code from affecting other containers or the host operating system. In theory we can have isolated containers, but in practice we often have the situation that application layers are shared across multiple containers. This approach is more resource efficient, but it also opens the door to security breaches across containers. Also, because the containers share the same host operating system, the host itself is at risk. One way to overcome these liabilities is for the containers to inherit the security properties of the underlying operating system. With this approach, security permissions can be defined to limit communication with unnecessary resources.
Unlike virtual machines, containers share the machine’s operating system kernel and do
not require that each application has access to the underlying operating system.
Additionally, it is possible for container layers, bins, and libraries to be shared with multiple
containers, which makes them smaller in capacity and faster to start in comparison with
virtual machines.
Docker containers can be used on servers that need to run multiple applications. Also, when learning new technologies, instead of installing everything on the local computer, we can just deploy the required images as docker containers. Docker has many pros, but there are also situations where docker containers are not a good solution, like working with an overly complicated application. Because Docker is a virtualized system, it is slower than the native operating system, but still faster than a virtual machine [11].
2.3 Architecture Patterns
2.3.1 Monolithic and N-Tier
When an application is developed code-first, without proper work invested in creating a well-defined and clear architecture, the system will in most cases end up with a traditional Monolithic or N-Tier architecture.
These two patterns are often mistaken for one another and, even though they are very similar, they still have some differences. Systems developed with the monolithic design pattern have their entire work in one solution or one project, which results in one component that contains the whole code of the application. In the end, we have a single project with all the database calls (persistence layer), the logic performed on the requested data (business layer), and the presentation of the data to the end-user (presentation layer). This makes support and maintenance of the application very difficult, because even if one single change must be made in the application, the entire application has to go offline and restart. The layered pattern tries to overcome some of these drawbacks.
The layered pattern is recognized by its horizontally organized components, each one representing a layer in the application. The number of layers an application must have is not defined. Depending on the size of the system there can be 3 or more layers. In most cases, the layered architecture that most of us will recognize consists of four layers:
• Presentation
• Business
• Persistence
• Database
In a layered architecture, each layer has a unique role or responsibility and is independent of the other layers. For example, to display the data to the user and to accept the input from the user, the presentation layer is used, which also sends the data to the business layer. The business layer is responsible for performing some logic on the data or passing the requests to the persistence layer when data is required from the database. So, we see that we have a separation of responsibilities between the presentation and business layer, such that the presentation layer does not know what happens to the data it sends, nor where the data it presents to the user comes from. This can also be seen between
the business and the persistence layer. The business layer does not have a direct
connection to the database, nor does it know which database is used or how the data is
accessed. It only requests data that it needs or sends data that should be saved in the
database. The connection to the database and accessing and saving data in the database
is the responsibility of the persistence layer. By applying such a separation between the components, we are aligning with the separation of concerns principle. If interaction between, for example, the presentation and persistence layers were allowed, it would result in a very tightly coupled application with lots of interdependencies between components and would make the application very hard to change.
We have mentioned that there are applications that have more than only these four layers. One example where another layer would be needed is if there was a requirement for sharing the logging classes or string utility classes of the persistence layer with the business layer. To achieve this, a service layer could be implemented. This approach brings up some concerns, such as that now the business layer must go through the service layer to get to the persistence layer. This is a well-known problem, and one way to deal with it is to use open layers, which allow requests to bypass the open layer and go directly to the layer below it.
If implemented right, the layered pattern can be a very powerful tool in software development, but, as with every tool, some trade-offs must be considered. For example, using the open layer approach carelessly can result in a lack of layer isolation and introduce more dependencies between layers than wanted.
Even though the layered pattern overcomes the shortcomings of the monolithic pattern, both are still not ideal for a large project which must have high availability and scalability. In a layered architecture, when performing a change in the application, the entire application still must go offline and restart for the change to take place. This means that the application will be unusable for the users while the maintenance work is being performed. Today, most systems must be available all the time, and this is not easy to achieve by implementing one of these two patterns. In the following sub-chapter, we will discuss how microservices are used and what their benefits are over a monolithic or layered architecture [12].
2.3.2 Microservices
In the previous sub-chapter, we mentioned that microservices offer a viable alternative to monolithic and layered applications. In this part of the chapter, we are going to discuss how this pattern is implemented and where it differs from the previously mentioned patterns.
In a microservice-oriented architecture the whole system is split into independent applications, or microservices, so that when one microservice needs to be updated, the other microservices are not affected, except the ones that use it. This allows the other microservices to continue to function, and only a part of the system is unavailable for a moment instead of the whole system.
There are multiple ways in which this pattern can be implemented, but all of them should apply several common core concepts of the general architecture pattern.
One of the concepts is separately deployed units. As we previously mentioned, when one
microservice is updated, the entire application does not have to be stopped. This is possible
because each microservice or component of the microservice is deployed as a separate
unit that is independent of other microservices in the application. Because of this way of
implementing microservices the application is more scalable and has a high degree of
component decoupling, which, as we saw previously, is not the case in monolithic and
layered applications.
The second concept worth mentioning is the service component. When we think about a microservice-oriented architecture, we often think about separate services which are independent from each other, but it is easier to understand the design of this architecture pattern if, instead of thinking about services, we imagine them as service components. Depending on the complexity of the application, the structure of a service component can vary from a simple module to a large portion of the application which includes multiple modules. So, a single service component can have one or more modules,
depending on the level of granularity of the application. For example, a single-module service component would be a login component that only generates an access token which is used for every other request. On the other hand, an example of a multi-module service component would be the part of an application for ordering goods from an e-commerce website, which would include multiple services ranging from adding items to the cart to the actual payment process. The most important thing here is to decide on the level of granularity of the service components. Splitting the components into single-module components is not always the best solution, and putting more modules into one component can lead to a complex component on which others depend, creating a bottleneck or, even worse, a monolithic application within a microservice architecture.
Figure 3 Microservice-oriented architecture
The figure above gives an example of how a microservice architecture could be designed.
We have mentioned that the service components are completely decoupled from each other, which brings us to another key concept of a microservice-oriented architecture: it is a distributed architecture. The service components are completely independent and are only accessible through a remote access protocol like REST or SOAP. From the above figure we can also see that each service component has its own database. If we want to correctly decompose the system into microservices, they must be loosely coupled. The persistent data of a microservice must remain private, which means that it should only be accessible via its API. If two different microservices were to use the same persistence layer, it could lead to a more complex design and a slower development phase, because we would have to coordinate the changes to the data schema and even update both microservices, when one of them maybe does not require the newly made changes to the data schema.
When all the mentioned concepts are implemented correctly, the result is a system that is built on multiple service components which, with a properly implemented decomposition, can be developed and deployed independently [12].
2.4 Cloud computing
Cloud computing provides several benefits to organizations, for example:
• Cost – Fewer operating costs, because there is no need to buy software or to run and maintain on-site servers.
• Speed - Additional resources are added easily which gives the business a lot of
flexibility.
• Global scale – If more resources are required, they are automatically allocated
based on the need and geographical location.
• Productivity - No need to configure the datacenters.
• Reliability - Making backups, disaster recovery is easy to implement.
• Security – Policies, technologies, and controls help protect the data and infrastructure from potential threats.
There are multiple cloud types and, depending on the application's requirements, not all types are right for our application. Depending on the deployment or the computing architecture that our services are implemented on, there are three different ways to deploy cloud services:
• Public cloud – A model where on-demand computing services and infrastructure are managed by a third-party provider and shared with multiple organizations over the internet.
• Private cloud – The organization using it does not share the resources with other users. This is achieved by paying third-party companies for hosting or by using the organization's own datacenters.
• Hybrid cloud – Is a combination of public and private clouds and allows sharing data
and application between them.
Every technology or software that a user accesses over the internet without requiring additional downloads is one of the following cloud computing services:
• Infrastructure-as-a-Service (IaaS) – Offers the users computing, networking, and
storage resources.
• Platforms-as-a-Service (PaaS) - Platform on which applications and the required
infrastructure can run.
• Software-as-a-Service (SaaS) - Provides a cloud application that can be used by
the users.
• Function-as-a-Service (FaaS) – Allows for building, running, and managing the
application without the need for developers to manage the infrastructure.
Cloud services are infrastructure, platforms, or software that are hosted by third-party providers and made available to the user over the internet. Cloud providers allow users to access cloud services with only a computer and a network connection; no additional hardware or software is required to start the development of a software product.
For the research in this paper a public cloud with Infrastructure-as-a-Service is used,
provided by Amazon Web Services (AWS).
In the following sub-chapters, we will discuss different services which are offered by Amazon and which were used in the practical example developed for the research purposes of this paper [13].
2.4.1 Amazon Web Services
We have mentioned that IaaS provides computing, networking, and storage resources which we can use to build the infrastructure on which our application will be running. To build an infrastructure, many parts need to be configured first, like creating instances for the application, configuring the database, networking, and security. AWS offers all these fundamental services. In the AWS toolbox are many tools that can be used to achieve this goal. Instead of hosting the application and database on bare-metal servers in its own datacenters, AWS offers many virtual servers of different designs and sizes; choosing the right one depends on our requirements for the application.
The services managed by AWS are mostly hosted on virtual machines like Elastic Compute Cloud instances (EC2 instances), which have massive amounts of computing power and resources like CPU and RAM. These virtual machines also provide networking, storage, and security services. Most of the services AWS offers fall under the Infrastructure-as-a-Service definition, which includes virtualized servers and storage and a software-defined network which allows hosting each customer's infrastructure in an isolated virtual private cloud (VPC). This allows us to use any number of services from the AWS toolbox and design the infrastructure the way it best suits our needs.
The following figure shows the services which were used in the practical example implemented for this paper and which will be discussed in detail in the following sub-chapters.
Each of the services presented in Figure 4 plays an important role in the deployment of the application and forms the foundation of the infrastructure. Amazon Elastic Container Service and Registry (Amazon ECS and ECR) are used for storing and deploying the docker images on the Amazon Elastic Compute Cloud (Amazon EC2). A load balancer will also be configured and a DynamoDB will be used as the database. All this will be configured and implemented in an Amazon Virtual Private Cloud (Amazon VPC). Also, Amazon Identity and Access Management (Amazon IAM) will be used for giving the users permission to perform certain actions in AWS, for example creating an EC2 instance or creating the load balancer. When working with sensitive data, security is very important, and that is why we will discuss network and application security in AWS. [14]
2.4.2 AWS Elastic Compute Cloud
Amazon Elastic Compute Cloud or Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 enables us to have complete control over the scaling of our application and the computing resources. EC2 can be resized, and we can add or delete instances depending on our requirements. Amazon EC2 supports different operating systems, some of them are:
• Amazon Linux
• Ubuntu
• Windows Server
• Red Hat Enterprise Linux
• CentOS
• Debian
To restrict access to the EC2 instances, we can create groups and configure them so that only members of those groups can perform certain actions on some of the instances, depending on our requirements.
Amazon EC2 instances are also fault tolerant: if one of the instances crashes for some reason and we have configured that there should always be three instances running, the instance which ran into an error will automatically be started again.
AWS uses the concept of a Region, which represents a physical location in the world where data centers are bundled; a group of data centers is known as an Availability Zone (AZ). A Region consists of multiple physically separated Availability Zones within a geographic area. When launching instances, different Availability Zones can be chosen. An Availability Zone represents one or more data centers with independent power, cooling, and physical security and is connected via redundant low-latency networks. All traffic between Availability Zones is encrypted. Availability Zones enable us to create applications which are more available, fault tolerant, and scalable than would be possible from a single data center. Besides Availability Zones, AWS also uses Local Zones, which represent an extension of an AWS Region but are geographically closer to the end user [15].
In Figure 5 we can see that a Region (Regions are independent from each other) consists of multiple Availability Zones (in this example 3), and we see that a database instance was created in one of the Availability Zones. We can also see that the Region has three Local Zones, which are used for placing resources closer to the end user.
2.4.3 AWS Elastic Container Registry
In previous chapters we have mentioned the term containerization and said that a container represents a simple unit that packages all our code and the required dependencies so that the application can be deployed quickly and in any environment.
Docker images are used to create a docker container. For the images to be used, they must first be stored somewhere, and AWS provides a very handy tool, called AWS Elastic Container Registry (AWS ECR), which can be used for storing the images.
ECR represents a fully managed container registry that makes storing, sharing, managing,
and deploying the images easier. First the images need to be pushed to ECR and then they
can be pulled using other container management tools like Kubernetes, Docker Compose
or AWS ECS.
The combination of ECR and AWS IAM can be used to configure policies and to manage and control access to the images. This means that we do not have to share credentials or manage permissions individually for each repository.
The steps required for using ECR are:
1. Creating and configuring a new repository
2. Pushing the image to the repository
3. Pulling an image from the repository and
4. Using the image in production
Usually, all these steps are performed as part of a continuous integration workflow, which we will talk about in later chapters, but they can also be performed locally using shell commands or the graphical user interface that AWS provides [16].
Some of the benefits of using ECR are:
• Fully managed – all we must do is push the images to a repository and pull them when we need to deploy them. There is no need to operate and scale the infrastructure or to install software to manage the container registry.
• Secure – Images are transferred over HTTPS and are automatically encrypted. As
previously mentioned, we can also define policies and access rights to the images
using Amazon IAM.
• Highly available – They are highly available and accessible.
• Easy to use – Using the Docker command line (Docker CLI) we can push the image
and using Amazon ECS the images can be easily pulled and deployed.
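As an illustration of how such a repository could later be described with infrastructure as code, the following minimal Terraform sketch creates one ECR repository; the repository name is a hypothetical placeholder and not taken from the practical example.
resource "aws_ecr_repository" "app_repository" {
  # Hypothetical name for the repository holding one microservice image
  name                 = "address-service"
  image_tag_mutability = "MUTABLE"
}
After this resource is applied, the docker image can be pushed to the repository URL that AWS assigns to it.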
2.4.4 AWS Elastic Container Service
When we discussed AWS ECR, we said that the container images are pushed to a repository and that the registry is used for storing the images in the cloud. We also
mentioned that the images can be pulled and deployed in the production environment for
example. Pulling and deploying the images can be done using AWS Elastic Container
Service (AWS ECS). ECS is a cloud computing service in AWS that is used for managing
containers and running the applications in the cloud without any additional configuration of
the environment. Scalability is provided by running the applications on a group of servers,
called clusters. In the practical example prepared for this study, the images are pulled from
AWS ECR and deployed on EC2 instances. ECS also scales the application and manages
the availability of the containers. An Amazon ECS container instance is an Amazon EC2 instance that is running the Amazon ECS container agent and is registered into an Amazon ECS cluster. This means that when running tasks with ECS using the EC2 launch type, the tasks are placed on the active container instances. Each container instance requires an IAM policy and role, so that the service knows to which Amazon account the agent belongs. In our practical example we used docker images, and each EC2 instance runs the application as a container on Amazon. There are four network modes that allow us to configure the interaction between the container and other services:
• Host mode – Containers are added and exposed on the host’s network
• Task networking mode – The container is provided the full networking features in
Amazon VPC
• None mode – External traffic is deactivated
• Bridge mode – All containers on the same host in a local virtual network are
connected using the Linux bridge accessed through the host’s default network
connection
To distribute the traffic across the containers, the AWS Elastic Load Balancer (AWS ELB) is used. By creating a Task Definition and specifying the ELB to use, ECS adds or removes containers that use the ELB. The ECS scheduler automatically starts new containers with the image of the new application version and deactivates all containers running the old version of the application. ECS also adds or removes any deactivated containers from the ELB. By defining the number of required containers, ECS allows us to automatically recover any unhealthy container and start a new one to ensure that the necessary number of containers is always running. Because the Amazon ECS containers run on top of EC2 instances which run in an Amazon Virtual Private Cloud (Amazon VPC), we can manage which instances are exposed to external traffic. Amazon IAM can also be used to create security groups and further limit the access to the instances. ECS in combination with ECR offers improved security to the application. Also, using ECS, many containers can be launched in seconds without any additional configuration needed [17].
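As a hedged sketch of how this ECS setup could be expressed with Terraform (the cluster name, image URL, and port are assumptions for illustration, not values taken from the thesis project), a cluster and a minimal task definition could look like this:
resource "aws_ecs_cluster" "app_cluster" {
  name = "app-cluster" # assumed cluster name
}

resource "aws_ecs_task_definition" "app_task" {
  family = "address-service"
  # Minimal container definition; the image URL and ports are placeholders
  container_definitions = jsonencode([
    {
      name      = "address-service"
      image     = "123456789012.dkr.ecr.eu-central-1.amazonaws.com/address-service:latest"
      memory    = 512
      essential = true
      portMappings = [
        {
          containerPort = 8080
          hostPort      = 8080
        }
      ]
    }
  ])
}
An ECS service would then reference this task definition, the cluster, and the load balancer described in the next sub-chapter.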
2.4.5 AWS Networking
AWS networking services offer a wide range of networking options that are scalable, on-demand, and easy to configure. The networking services offered by AWS which are used in the practical example for this study are:
• Amazon Virtual Private Cloud (VPC)
• Elastic Load Balancing (ELB)
Amazon VPC allows us to isolate a section of AWS where we have complete control of the environment. By complete control of an environment, we mean that we can define IP addresses, subnets, internet gateways, route tables, security groups, and networking configurations. For example, the application which was built as the practical example for this study has a VPC with a public subnet, which allows the application to be accessible from the internet, but the database is in a private subnet and is accessible only from our application and has no IP address reachable from the internet. We can also create virtual private network (VPN) connections between other data centers and our VPC, so that they have access to our private network [18].
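A minimal sketch of the VPC layout described above could look as follows in Terraform; the CIDR ranges are assumptions chosen only for illustration, and route tables and security groups are omitted for brevity.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Public subnet for the application, reachable from the internet
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# Private subnet for the database, without public IP addresses
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

# Internet gateway that gives the public subnet access to the internet
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}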
Elastic Load Balancing (ELB) is an AWS product which is hidden behind a DNS host name, and it distributes all incoming requests to the EC2 instances. ELB also detects when an instance is not healthy and redirects the requests to the other instances until the unhealthy ones are restored. Previously we mentioned auto scaling and said that we can define a number of instances which should always be running and that if one of them fails it will be restored automatically. Because ELB can detect that an instance is unhealthy and that it is being restored, these two products are a good combination. For example, if an instance has failed for some reason, the auto scaling functionality periodically checks whether the number of running instances is met, detects that one instance is not running, and begins the restoring process. The ELB receives a request and detects that one instance is being restored, so it does not distribute the request to the unhealthy instance, but instead to one of the healthy ones. [19]
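The combination of load balancer, target group, and listener can also be defined as code. The following fragment is only a sketch under assumed names; it forwards HTTP traffic on port 80 to the application port 8080 and presumes that two public subnets in different Availability Zones already exist.
resource "aws_lb" "app_lb" {
  name               = "app-load-balancer"
  load_balancer_type = "application"
  # An application load balancer needs public subnets in at least two
  # Availability Zones; both subnets are assumed to be defined elsewhere.
  subnets = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}

resource "aws_lb_target_group" "app_tg" {
  name     = "app-targets"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id # assumes the VPC from the previous sketch
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app_tg.arn
  }
}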
2.4.6 AWS Security
For better security in the cloud, AWS implements the Shared Responsibility Model, which means that AWS provides secure infrastructure and services, while the customers are responsible for securing operating systems, data, and platforms. To establish an even more secure global infrastructure, AWS configures infrastructure components and provides services and features that can be used to achieve this. For example, one of the services AWS offers, which we have mentioned already, is the Identity and Access Management (IAM) service. Using this service, we can manage users and their permissions in other AWS services like ECR or ECS. For example, to secure the infrastructure of the Amazon EC2 service, Amazon provides security for the following assets:
• Facilities
• Physical security of hardware
• Network infrastructure
• Virtualization infrastructure
Our responsibility as users of the AWS EC2 service is to secure the following assets:
• Amazon Machine Images (AMIs)
• Operating systems
• Applications
• Data in transit
• Data at rest
• Data stores
• Credentials
• Policies and configuration
As we can see in the list above, to provide the right level of security to our application, we also must take responsibility when implementing AWS services. In the practical example we have used the IAM service, which allows us to manage users, secure credentials such as passwords, access keys, and permission policies that control which services a user can access. IAM allows us to create individual users within our account and provide each one of them with a username, password, and access key [20].
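As a small, hedged example of how such an individual user could be described as code (the user name and the allowed action are assumptions chosen only for illustration), the following Terraform fragment creates a user, an access key, and a policy that limits the user to describing EC2 instances:
resource "aws_iam_user" "developer" {
  name = "example-developer" # hypothetical user name
}

resource "aws_iam_access_key" "developer_key" {
  user = aws_iam_user.developer.name
}

resource "aws_iam_user_policy" "developer_ec2_read" {
  name = "ec2-describe-only"
  user = aws_iam_user.developer.name

  # The policy document is encoded as JSON, as required by IAM
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ec2:DescribeInstances"]
        Resource = "*"
      }
    ]
  })
}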
2.4.7 AWS DynamoDB
In the practical example for this study, we used Amazon DynamoDB, a fully managed NoSQL key-value database service that allows us to create database tables that can store and retrieve any amount of data. DynamoDB is very performant because it automatically manages the data traffic of tables over multiple servers. It is a managed service, which means that there is no need for any installation or setup, configuring database clusters, or managing ongoing cluster operations; everything is handled automatically. Efficient request routing algorithms make the database very fast, because even though the data grows, the latency stays low and stable. The data is replicated over three different data centers. Database tables are dynamic, which means that they can have any number of attributes.
When scaling vertically, we use more hardware resources like CPUs and RAM, which can get very expensive. Scaling horizontally can be achieved by splitting the data across multiple machines, each of which holds a subset of the full dataset; this is cheaper but harder to achieve. By relaxing the requirement of strong consistency, where not all users have to see the latest content instantly when it is created, horizontal scaling becomes much easier to achieve in DynamoDB.
DynamoDB achieves single-digit millisecond performance at any scale. It can handle more than 10 trillion requests per day and can support more than 20 million requests per second. Because DynamoDB is serverless, there are no servers to provision, patch, or manage and no software to install. Scalability is achieved by scaling the tables up or down depending on the requirements at any given moment. High availability and fault tolerance are built in and eliminate the need to architect for these capabilities [21].
2.5 Infrastructure as code
2.5.1 General elements
Definition Files
Definition Files are the key element of infrastructure as code. As we mentioned previously, by making changes to definitions and executing them, resources on a cloud computing provider are allocated. With the allocation of resources we mean different components of the infrastructure such as a server, a new instance, load balancing, a virtual private network, basically every component that we need for the system to function. The components are defined and configured in definition files. Infrastructure as code tools use these files and create the defined components on a cloud provider like Amazon, for example. Definition files can be written in JSON, YAML or XML format, but some tools define their own domain specific language (DSL). By storing the definition and configuration of infrastructure components in definition files, we can easily implement version control. For example, this makes it easier to trace the changes that were made or to roll back the changes if there were errors. Also, events can be triggered when a new definition file is committed, for example to execute it immediately.
We have mentioned that for a system to function there are multiple components that must be configured and allocated. A dynamic infrastructure platform provides the needed computing resources like servers, storage, and networking; most importantly, these can be programmatically allocated and configured. In the practical example we will use the infrastructure as a service provided by Amazon, but a virtual machine could also be used. It does not matter which dynamic infrastructure platform we use; it is only important that scripts and tools can be used to create the infrastructure components, but also to destroy them. Under the term programmable we understand that the infrastructure components are defined in the previously mentioned definition files and that, when these are executed, the components are either created, updated, or destroyed on the dynamic infrastructure platform. One more important feature of the platform is that it behaves as a self-service, which means that the resources can be changed and customized based on the user requirements. This means that the team should not be limited to provisioning only one of three kinds of servers: a web server, an application server, or a database server, which would greatly limit their possibilities. Instead, the team should be able to choose, for example, a different application server if the need arises.
Automation Tools
In the previous paragraphs, we mentioned that the dynamic infrastructure platform allows us to allocate the required resources and to configure them based on the requirements. Different automation tools can be used to achieve this. A difference between allocating resources and configuring resources should be made, and based on this difference there are two types of automation tools: provisioning tools and configuration tools.
Provisioning tools are used for the allocation of resources on a dynamic infrastructure platform. In the practical example we will use Terraform to manage the AWS infrastructure services (creating instances, the load balancer, the VPC, etc.).
Configuration tools, on the other hand, are used for configuring already created infrastructure components with the required dependencies and settings, for example defining to which database our application should connect by modifying a property file. Some of the tools that can help us achieve these functionalities are Puppet, Chef, and Ansible.
2.5.2 Provisioning tools
We mentioned that provisioning tools can be used to apply infrastructure as code for managing the infrastructure and for resource allocation on a given platform, for creating and destroying components, and so on. When we want to create a functional infrastructure, we start by installing the needed provisioning tools and setting up the fundamental infrastructure components. There are many provisioning tools to choose from, for example one of the open-source provisioning tools like Terraform or OpenStack Heat, or one of Google's, Microsoft's, or Amazon's resource managers. In this chapter we will discuss the Terraform provisioning tool, because it was used in the practical example implemented for this study.
Terraform
Terraform, developed by HashiCorp and written in the Go programming language, is an open-source tool that quickly became one of the most popular provisioning tools built for infrastructure automation, or in other words for configuring, provisioning, and managing infrastructure as code. There are multiple infrastructure providers, and Terraform allows us to easily plan and create infrastructure as code across these providers using the same workflow. For example, as a customer of a cloud provider we want to start some cloud components. We can achieve this by going into the web console of the cloud provider and launching new instances, but we must do all of that manually through a user interface. Every time we want to start, change, or delete an instance, we must repeat the same steps: log in to the web console, go through different menu options, find what we are looking for, fill in the required fields in the form, and execute the command. Using Terraform we can achieve the same result, but in code.
With terraform, we define the state of the infrastructure through code. For instance, if we want to create and start three EC2 instances on AWS, whenever we run terraform it will check and make sure that those three instances are running on the cloud platform. Even if we manually change some instances through the web console, running terraform will try to match the code with the actual infrastructure. For example, if we have three instances defined through code in terraform definition files, terraform will launch those instances. If we were to stop one of the instances through the web console, the next time we run terraform it will make sure that a new instance is created so that the actual infrastructure matches the defined state.
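For the scenario above, a minimal definition of the desired state could look like the fragment below; the AMI ID is a placeholder and the provider configuration is omitted. Every run of terraform apply compares this definition with the instances that actually exist and creates or replaces instances until both match.
# Desired state: exactly three identical application instances
resource "aws_instance" "app_server" {
  count         = 3
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "app-server-${count.index}"
  }
}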
We have mentioned that the instances are started using code written in a domain-specific language called HashiCorp Configuration Language (HCL). The code is written in files called terraform files, and they represent what we previously defined as definition files in infrastructure as code. This makes the infrastructure auditable, because we can understand what our infrastructure is made of just by looking at what is defined in the terraform files. Additionally, it allows us to keep the history of infrastructure changes in a version control system like Git or Subversion. That means that every change that is made to the infrastructure can be tracked in the version control system.
Unlike Ansible, Chef, or Puppet, which focus on automating the installation and configuration of software, terraform automates the provisioning of the infrastructure itself and keeps the machines compliant with a certain state. Terraform can also be combined with automation software like Ansible. For example, we can use terraform to start some instances on AWS, and then, when the instances become available, we can use Ansible to install and configure the software on those instances [23].
Code written using the HashiCorp Configuration Language must be saved in files with the file extension .tf so that terraform can interpret them. A simple example of a terraform file is given in the Code Listing below.
variable "test_var" {
type = "string"
default = "The value of the variable"
}
variable "test_map" {
type = map(string)
default = {
key = "some value"
}
}
The file from the above example is very simple, but it allows us to see what the HCL syntax looks like and how a .tf file is structured. The file has a variable which is of type string and has a default value. We can also see how a map variable is created. To execute terraform files we can use the terraform apply command. This command runs the terraform files and tries to create the infrastructure exactly as it is defined in them. Of course, if we were to execute the above file, nothing much would happen, because we have not specified a provider. Providers, whose creation we will see later in this study, are declared using the keyword provider and defining the required attributes for the provider. In addition to variables and providers, we can create resources by specifying their type and name.
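To make the last point concrete, the following sketch shows the general shape of a provider block and a resource block; the region, the AMI ID, and the names are assumptions and not values from the practical example.
provider "aws" {
  # The provider block tells terraform which platform it should manage
  region = "eu-central-1" # assumed region
}

# A resource is declared by its type ("aws_instance") and a local name ("example")
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}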
3. SYSTEM DESIGN
In this chapter we will discuss how the practical example is designed. We will take a closer look at the requirements of the system, what the different parts of the system are, and how they connect and interact with each other. We will also take a closer look at the tools which were used for each part of the system. In this chapter we discuss the system as a whole, and in later chapters we will show the architecture of the web application and the infrastructure.
The practical example represents a microservice-oriented architecture for managing addresses of persons and companies. The application was built using the Spring-boot framework for the backend and the Angular framework for the frontend application. The goal was to give a practical example and develop an application that allows the user to add new persons, companies, and addresses, and to connect the addresses with a person or company. The whole application is divided into microservices, each functioning independently from the others. The microservices will be deployed on the AWS Cloud. For the deployment and for the allocation of the resources we will use terraform.
We will also discuss how we could use Jenkins for automating the deployment and the management of the infrastructure.
With the use of terraform the required resources will be allocated. An ECS instance will be created, and it will host the docker containers with the corresponding applications. Each container will run on an EC2 instance. The Dynamo database will also be instantiated using terraform. In the next chapter the design of the infrastructure will be discussed in more detail.
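As a sketch of how the Dynamo database could be instantiated with terraform, the table name and hash key below mirror the Address entity used in the application, while the on-demand billing mode is an assumption made only for this example.
resource "aws_dynamodb_table" "address" {
  name         = "Address"
  billing_mode = "PAY_PER_REQUEST" # assumed on-demand capacity mode
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S" # the partition key is a string
  }
}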
Finally, we will use Jenkins for deployment and infrastructure automation. We will use Jenkins to build the docker images, push the images to AWS ECR, and run the images on AWS ECS. Jenkins will trigger a redeployment of a docker container if a newer version of a docker image is pushed to AWS ECR. Also, if any changes are made to the terraform files, Jenkins will automatically start the execution of the new terraform files so that the infrastructure is updated.
used the Dynamo database which, for the development phase, will be instantiated on the
local machine, but later, when we deploy the application on AWS, we will use terraform to
instantiate an instance of the Dynamo database on AWS. Each of the backend and frontend
applications will be running in a docker container. Basically, for each microservice a docker
image will be built. The docker images will be pushed to AWS ECR and from there they will
be deployed on AWS ECS. We will use only one EC2 instance which will host the ECS. We
will see how we can set up Jenkins and how we can automate the deployment process. The
main goal of the practical example was to present a simple system that offers the basic
CRUD (create, read, update, and delete) operations on entities, based on small independent
microservices, and to deploy the system on AWS cloud. The practical example should be
seen as an example of how the different technologies can be used together and how
terraform can help us with automating the management of the infrastructure.
3.2.1 Spring-boot
In the previous section and chapters, we mentioned that the system will consist of 5
microservices. One of them will provide the graphical user interface which the user can use
to interact with the system, and we will discuss it in the next section. The other four
microservices are implemented using the programming language Java and the Spring-Boot
framework. In this section we will discuss how spring-boot framework was used for creating
the backend part of the system.
Spring-boot is an open-source Java-based framework developed by a company called Pivotal. It is one of the most popular frameworks among Java developers for developing web applications, because it makes creating stand-alone Spring applications very easy. Only minimal configuration is needed to start a simple web application, unlike with plain Spring. There is a difference between Spring and Spring-boot: the basis of Spring-boot is the Spring Framework, which uses dependency injection to inject the required dependencies into our application [24].
For example, one such dependency would be "spring-boot-starter-web", which allows us to create an application that accepts HTTP requests from other applications. An example of how the dependencies are configured in the maven pom.xml file is given in Code Listing 2.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

<!-- Database dependencies -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
    <version>1.11.64</version>
</dependency>
<dependency>
    <groupId>com.github.derjust</groupId>
    <artifactId>spring-data-dynamodb</artifactId>
    <version>5.1.0</version>
</dependency>
In Code Listing 2 we can see that three dependencies are included in the project. As previously mentioned, the dynamo database will be used, so to be able to implement the features for creating, reading, updating, and deleting data in the database we must include the required dynamo dependencies.
The dependency "spring-boot-starter-web" is used for building RESTful web applications in Spring-boot. It also uses Tomcat as the default embedded container, which makes starting the application easy; it can be done with the following command: mvn spring-boot:run.
For the application to be accessible from other applications, we need to define some entry points which accept HTTP Get, Post, Put, or Delete requests. There are other types of requests, but only these four were used in the application.
@RestController
@RequestMapping("company")
public class CompanyController {

    Logger LOG = LoggerFactory.getLogger(CompanyModuleApplication.class);

    @Autowired
    private CompanyServiceImpl companyServiceImpl;

    @GetMapping
    public List<CompanyDto> getCompanies() {
        LOG.info("getCompanies(): Retrieving all persons from the database");
        return companyServiceImpl.getAllCompanies();
    }

    @GetMapping("/{id}")
    public CompanyDto findById(@PathVariable("id") String id) {
        LOG.info("findById: Searching for company with id: {}", id);
        return companyServiceImpl.findCompanyById(id);
    }

    @PostMapping
    public CompanyDto saveCompany(@RequestBody CompanyDto companyDto) {
        LOG.info("Person {} received from GUI Module", companyDto.toString());
        return companyServiceImpl.addCompany(companyDto);
    }

    @DeleteMapping("/{id}")
    public String deleteById(@PathVariable("id") String id) {
        companyServiceImpl.deleteCompanyById(id);
        return "Company with id: " + id + " deleted";
    }
}
By using the right annotation, like "@DeleteMapping", and passing it the id of an entity, the entity will be deleted from the database. POST requests are used for inserting data into the database, where it is assumed that the entity being saved does not yet exist. The PUT request is used for updating data in the database, and the GET request is used for reading data from the database. By defining a request mapping (@RequestMapping("company")) we specify under which path our endpoints are accessible. For example, if we assume that the application is running on our local machine, we can get all companies from the database by sending a GET request to the address http://localhost:8080/company.
The backend of the system consists of four such applications: three that are each responsible for saving a different type of entity, and one that redirects the requests coming from the frontend application, which we will discuss in the next section. Figure 7 presents an overview of these four applications.
As we can see in Figure 7, the DataManagerController is responsible for accepting requests and passing them on to the corresponding controller. For example, a request to add a new person is forwarded to the PersonController, as is any other operation on a person object. In the next section we will see how the frontend application is built and how it sends requests to the DataManagerController.
3.2.2 Angular
A Single Page Application (SPA) is an application in which the user, while interacting with the application and navigating between pages, stays on one web page; in the background only the individual components are exchanged and rendered. Angular is one of the frameworks that can be used to build such applications. Each component covers a different service of the application and is responsible for performing some operations on the given entity. For example, Figure 8 shows the component responsible for displaying a form in which the user can enter the data of the address being created.
Figure 8 Add Address Form
If we switch to the page responsible for displaying a list of all created addresses (Figure 9), we still stay on the same page; only the component responsible for creating an address is replaced by the component responsible for displaying the list of all addresses.
Every component, depending on the action it performs, either creating new addresses or displaying existing ones, sends a request to the DataManagerController to perform the required operation on the data object. For example, Code Listing 4 shows the request for retrieving all addresses.
getAll(): Observable<any> {
    return this.http.get(this.ADDRESS_API);
}
The variable this.ADDRESS_API holds the endpoint of the backend service, for example http://localhost:8080/address.
The data exchanged between the Angular and Spring Boot applications, and also between the Spring Boot applications themselves, is in JSON format (Code Listing 5).
[
    {
        "id": "61c36b15548d2d0d34ab4c25",
        "country": "Österreich",
        "city": "Wien",
        "street": "Teststraße",
        "houseNumber": "2"
    }
]
The Angular application is also deployed in a Docker container and represents one of the five microservices mentioned earlier. Figure 10 displays the interaction between all microservices. In later chapters we will go into more detail on how the microservices interact with each other and which data is transferred between them.
3.2.3 DynamoDB
@DynamoDBTable(tableName = "Address")
public class Address {

    @DynamoDBHashKey(attributeName = "id")
    private String id;

    @DynamoDBAttribute(attributeName = "Country")
    private String country;

    @DynamoDBAttribute(attributeName = "City")
    private String city;

    @DynamoDBAttribute(attributeName = "Street")
    private String street;

    @DynamoDBAttribute(attributeName = "HouseNumber")
    private String houseNumber;
}
In Code Listing 6 we can see that with the @DynamoDBTable annotation we can specify the name of the table in which the data is stored. We can also use the @DynamoDBHashKey annotation on a class attribute to define the key attribute of the table. For mapping the remaining attributes we can use the @DynamoDBAttribute annotation and pass the name of the column to which the data should be mapped in the table.
For the interaction with DynamoDB we can work with the DynamoDB APIs directly, because it is a web service accessible over HTTP(S), or, as I have done in the practical example, we can use the SDK provided by AWS. When interacting with DynamoDB, applications do not need to maintain a persistent network connection. For a microservice to be able to save data to the database, we must create a repository; an example of the address repository is given in Code Listing 7.
@EnableScan
public interface AddressRepository extends CrudRepository<Address, String> {
}
Code Listing 8 shows the configuration class that creates the AmazonDynamoDB client used by the repositories.
@Configuration
@EnableDynamoDBRepositories(basePackages = "com.fh.campus.wien.ma.address.repository")
public class DynamoDBConfig {

    @Bean
    public AmazonDynamoDB dynamoDB() {
        AWSCredentialsProvider credentials =
                new ProfileCredentialsProvider("profileName");
        return AmazonDynamoDBClientBuilder
                .standard()
                .withCredentials(credentials)
                .build();
    }
}
There are many other features of DynamoDB which are out of the scope of this paper, so I have only focused on the basic functionality and the required configuration to explain how DynamoDB can be used and how the models are created.
3.2.4 Docker
Docker is a Linux container management toolkit which lets users publish container images. A Docker image is a recipe for running a containerized process. In the practical example we build Docker images for each microservice, or in other words for each Spring Boot and Angular application that we have implemented. Docker also allows us to use images created by others; an example of such an image is the mongo database image.
By executing the command
docker run --name mongodb -d -p 27017:27017 mongo:latest
a Docker container named mongodb is started and exposed on port 27017. Docker first looks in the repository on the local machine, and if the image is not found locally it is pulled from Docker Hub. After the command has been executed, we can access the mongo database under localhost:27017.
This example shows how an already built Docker image can be deployed in a Docker container.
The Spring Boot and Angular applications are not yet built into images; to achieve that, we first must create a file called Dockerfile. An excerpt of the Dockerfile used for building the Docker image of the address application is given in Code Listing 9.
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
In the first stage of building the Docker image the Spring Boot application is built and the code is compiled. In the second stage the Spring Boot application is packaged so that it can be deployed in the container.
By executing the command
docker build -t address-controller-image .
Docker executes the instructions from the Dockerfile of the specific project and builds an image called address-controller-image. To deploy the image in a Docker container and to be able to access the application, the following command must be executed:
docker run --name address-app-container -d -p 8081:8081 address-controller-image:latest
Figure 11 shows the connections between the Docker containers in the system and their interaction.
Figure 11 Interaction between the docker containers
As we can see from Figure 11, the container running the single page application connects only to the DataManager container, which is responsible for forwarding the requests to either the Company container, the Person container, or the Address container.
In this section I wanted to show how the Docker images are built and deployed, and how the individual containers interact with each other.
3.2.5 Terraform
In Section 2.5.2 we discussed what Terraform is and when it is used. In this section I will discuss how Terraform was used in the practical example to automate the management of the AWS infrastructure required for the system.
To start allocating resources in AWS we first need a user. AWS offers a service called IAM which we can use to create the user that will then be used by Terraform. When creating a user with AWS IAM, we can specify the permissions that the user has and thereby control what the user is allowed to do in AWS. We can also select the access type for the user, which in this example is "Programmatic access". This means that the user is not going to access the AWS management console; it is only going to access AWS through the API. The programmatic access type provides the user with a pair of access key id and secret access key that can be used in Terraform so that the required resources can be allocated. As for the permissions, our user has "AdministratorAccess", which gives it full access to the AWS resources. We can also assign the user to a group, which in this example is called "terraform-administrators", and every user who belongs to that group has the same permissions. Figure 12 shows the user we will use in Terraform.
Figure 12 Created user in the AWS web console
In the Terraform file we can use the access and secret key of the user and launch an EC2 instance. Code Listing 10 shows the Terraform code required to launch the instance.
provider "aws" {
  access_key = "AKIASKCTXFDJUZMGSVS4"
  secret_key = "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"
  region     = "eu-central-1"
}
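In addition to the provider, a resource for the instance itself is needed. A minimal sketch of such a resource, with a placeholder AMI ID and an assumed instance type, could look like this:
resource "aws_instance" "example" {
  # Placeholder AMI ID; the actual ID depends on the image chosen in the selected region
  ami           = "ami-xxxxxxxxxxxxxxxxx"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}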
As we can see, we had to define a provider called "aws", insert the access and secret key, specify the region in which we want the instance to be launched, and create a resource which represents the actual instance. When configuring the resource for the instance, we had to specify the "ami" which should be used and the type of the instance. The first commands that always must be run are terraform init and then terraform apply.
terraform init initializes the provider plugins, which is in our case "aws". After the Terraform file has been executed, we can see in the AWS console that one instance was created.
With the command terraform plan we can see the changes that are planned to be executed; they will not be executed until we run the terraform apply command. If we want to destroy the resources allocated when a Terraform file was executed, we can use the command terraform destroy. In the next chapter I will discuss how the other AWS resources were configured and how they interconnect with each other.
3.2.6 Jenkins
In DevOps, Continuous Integration is the most important practice used to integrate the different DevOps stages. Jenkins is an open-source tool written in Java that provides plugins built for Continuous Integration purposes [26]. Jenkins helps us deliver newly implemented features to a system faster and more safely. Jenkins usually checks whether there are new changes in a version control system and starts the configured Jenkins jobs. Jenkins jobs can be configured for different purposes, like checking out the new code, building it, and executing the tests. For example, if the tests fail, Jenkins aborts the job and the new code is not deployed on the server because it is not working. Figure 14 shows how Jenkins was used in the practical example.
A developer makes some changes to the source code. The changes are committed and pushed to a version control system like Git or Subversion (SVN). Jenkins periodically checks whether there are new changes and executes the configured Jenkins jobs. Code Listing 11 shows an example of the commands that the Jenkins job executes to build a new image and push it to the AWS ECR.
Code Listing 11 Docker Commands for building and pushing the docker images
The docker build command builds the Docker image based on the Dockerfile which is pushed in the repository. Jenkins pulls the code on the server where it is running and then executes the commands. The GIT_COMMIT variable is an environment variable which I have used to give the Docker image a version, which also represents the application version. The second command is the "docker push" command, which pushes the built image to the repository.
After the build has finished successfully, it starts another Jenkins job; if there were errors in the build, the second Jenkins job will not be started. The second Jenkins job then deploys the Docker image on AWS ECS by executing the command terraform apply.
The workflow of how the resources are allocated on the AWS platform is shown in Figure 15. The images are pulled from the AWS container registry, AWS ECS executes the task for creating the containers, and each container runs on an EC2 instance.
4. INFRASTRUCTURE DESIGN
In the previous chapter we discussed the tools that were used for the implementation of the practical example and gave a short example of how the workflow, from changes implemented in the code to the deployment on AWS ECS, can be automated using Jenkins. We also discussed how Docker images are built from the individual applications and deployed into Docker containers. Figure 6 gives a rough overview of the whole system. We have also discussed the interaction between the individual Docker containers and how the microservice-oriented architecture is built.
The applications are hosted on the AWS cloud. There are many different components in AWS that must be configured so that the system can function flawlessly. Just as each application uses different frameworks and dependencies to be built, in AWS we must configure different components. For example, we have seen how the user which Terraform uses for the resource allocation is created and which permissions it has. In this chapter I am going to discuss how Terraform is used for running the Docker images from AWS ECR on AWS ECS, and how the load balancer and networking are configured. I will also explain how the different components interconnect with each other and discuss their role in the infrastructure architecture.
resource "aws_ecr_repository" "gui-app-repo" {
  name = "gui-app-repo"
}
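The listing above shows the repository for the GUI application. The remaining repositories can be declared in the same way, or more compactly with a for_each loop; the following sketch illustrates this, where the repository names other than gui-app-repo are assumptions:
variable "repository_names" {
  default = ["gui-app-repo", "data-manager-repo", "person-repo", "company-repo", "address-repo"]
}

resource "aws_ecr_repository" "repositories" {
  # One ECR repository is created per entry in the list
  for_each = toset(var.repository_names)
  name     = each.value
}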
After executing the Terraform file containing such a resource for each repository, if we now log in to the AWS web console, we should see the five created repositories, as shown in Figure 16.
To use the Docker images from the repository, an AWS ECS cluster must be started. It is responsible for managing the Docker containers. Once the ECS cluster is started, tasks and services can be started on the cluster. The cluster contains only one EC2 instance, which runs the ECS agent, and the ECS service manages the cluster. ECS uses task definitions to launch the Docker applications. The task definition describes which Docker containers should be run on the cluster by specifying the Docker image from the ECR. We can also specify the maximum CPU and memory usage, as well as whether the containers should be linked with other Docker containers. In our example we are linking the DataManagerContainer with the Person, Company, and Address containers so that they can communicate with each other. After the task definitions are created, a service definition must be defined, which is going to run a specific number of containers based on the task definition. In the practical example I have used only one instance. Code Listing 13 shows how the tasks are defined in Terraform.
resource "aws_ecs_cluster" "gui-cluster" {
  name = "gui-cluster"
}
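The resource above only creates the cluster itself. The task definition can then reference the JSON template from Code Listing 14; a minimal sketch of this, in which the template file path and variable wiring are assumptions, could look like this:
data "template_file" "gui_app_task" {
  # Renders the JSON container definition and fills in the repository URL
  template = file("templates/gui-app-task-definition.json")
  vars = {
    REPOSITORY_URL = aws_ecr_repository.gui-app-repo.repository_url
  }
}

resource "aws_ecs_task_definition" "gui-app-task-definition" {
  family                = "gui-app"
  container_definitions = data.template_file.gui_app_task.rendered
}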
The task definition "gui-app-task-definition" will create a container based on a template file containing all of the container's configuration. For the container configuration I have used a JSON template (Code Listing 14) which is loaded when the Terraform file is executed. In the example I am only showing how the task definition looks for the single page application, because the other containers are deployed similarly; only the ports on which they are running change. I have chosen this application for the example also because I have configured a load balancer for it.
[
  {
    "essential": true,
    "memory": 256,
    "name": "gui-app",
    "cpu": 256,
    "image": "${REPOSITORY_URL}:1",
    "workingDirectory": "/app",
    "portMappings": [
      {
        "containerPort": 4200,
        "hostPort": 4200
      }
    ]
  }
]
The service definition is shown in Code Listing 15. We must specify the cluster, the task definition, and the number of container instances that we want to run. If a container from the service fails or stops running, it will be restarted. A service can have one or more containers. We also define a load balancer, which will redirect the requests to the container gui-app and the port 4200 on which the container is running. Typically, we would run multiple instances of a container, deployed in different availability zones. If one container stops, the load balancer stops sending traffic to it. Running multiple instances behind a load balancer allows us to achieve high availability.
  load_balancer {
    elb_name       = aws_elb.gui-app-elb.name
    container_name = "gui-app-container"
    container_port = 4200
  }
}
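Only the load_balancer block of the service definition is shown above. A minimal sketch of the enclosing aws_ecs_service resource, as described in the following paragraph, could look like this; the names of the IAM role and its policy are assumptions:
resource "aws_ecs_service" "gui-app-service" {
  name            = "gui-app-service"
  cluster         = aws_ecs_cluster.gui-cluster.id
  task_definition = aws_ecs_task_definition.gui-app-task-definition.arn
  desired_count   = 1

  # Assumed role resources; the role must exist before the service is created
  iam_role   = aws_iam_role.ecs-service-role.arn
  depends_on = [aws_iam_role_policy.ecs-service-role-policy]

  load_balancer {
    elb_name       = aws_elb.gui-app-elb.name
    container_name = "gui-app-container"
    container_port = 4200
  }
}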
The service definition is defined as a resource, and we provide it with a name. We specify the cluster in which the service should run and the number of container instances that should run, which in the above example is one. The IAM role is used to give the service the required permissions, which are defined by AWS policies. The service is configured so that the role is created first and then the service. We also specify the load balancer, which points to the container gui-app-container and the container port 4200.
The configuration of the load balancer is presented in Code Listing 16. The load balancer created in the example is called "gui-app-elb". Because I have also configured a VPC, I had to define the subnets in which the load balancer should be placed. In the example below, the load balancer is placed in the public subnets, so it will have two IP addresses, one for each public subnet. The security group only allows HTTP traffic to port 80. The listener is configured to listen on port 80 and forward the traffic to the instance port 4200, where the application will be running. I have also configured the health check and set the threshold to 2, so before traffic can be sent to an instance there must be at least two successful health checks. If the health checks for an instance fail twice, the load balancer will stop sending traffic to that instance. A GET request is performed on port 4200 on "/" to check whether the website is healthy. The health checks are only performed on a specific page, so that page could also be "/health". With the interval attribute we specify how often the checks for an instance should be performed, in this example every 60 seconds.
resource "aws_elb" "gui-app-elb" {
  name = "gui-app-elb"

  listener {
    instance_port     = 4200
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 30
    target              = "HTTP:4200/"
    interval            = 60
  }

  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  subnets         = [aws_subnet.main-public-1-a.id, aws_subnet.main-public-2-a.id]
  security_groups = [aws_security_group.gui-app-elb-securitygroup.id]

  tags = {
    Name = "gui-app-elb"
  }
}
Code Listing 16 Terraform file for the creation of the Load Balancer
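The security group referenced above (gui-app-elb-securitygroup) also has to be defined; a minimal sketch that only allows inbound HTTP traffic on port 80, with an assumed open egress rule, could look like this:
resource "aws_security_group" "gui-app-elb-securitygroup" {
  vpc_id = aws_vpc.main.id

  # Allow incoming HTTP traffic on port 80 from anywhere
  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outgoing traffic (assumption)
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}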
For the database in the practical example, DynamoDB is used. Code Listing 17 shows the Terraform code for the configuration of a DynamoDB table.
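A minimal version of such a table definition, using the table name and hash key of the Address model and assumed read and write capacity values, looks like this:
resource "aws_dynamodb_table" "address" {
  name     = "Address"
  hash_key = "id"

  # Assumed capacity values for the provisioned billing mode
  read_capacity  = 5
  write_capacity = 5

  attribute {
    name = "id"
    type = "S"
  }
}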
Code Listing 17 Terraform file for creating the DynamoDB address table
The configuration of the DynamoDB table is very simple. We must specify the table name, the attribute that represents the hash key, and the other attributes. When we define the attributes, only the hash_key and range_key attributes must be declared. In the above example we only have the hash_key, which is the id of the address.
In this section we have discussed the configuration of the AWS ECR, AWS ECS, and the load balancer, and explained how these components can be configured using Terraform. We have seen how the Docker images are pushed to the ECR and how the ECS is configured to start the cluster with the EC2 instance and the Docker container for the frontend application. We have discussed the configuration of the load balancer and how health checks are performed so that we always know whether the instances, newly created or existing ones, are healthy. Finally, we have seen how the DynamoDB tables are created using Terraform.
The VPC covers the region eu-central-1, which is located in Frankfurt, and it has two availability zones called eu-central-1a and eu-central-1b, which are data centers in one region. The availability zones also have public and private subnets, and our instances will be launched in either a public or a private subnet. This VPC uses the 10.0.0.0/16 address space, which allows us to use the IP addresses that start with "10.0.". When we set up the VPC, all the IP addresses that we can use are private IP addresses. This means that the addresses cannot be accessed over the internet; they are accessible only from within the VPC. Every availability zone in the VPC has its own public and private subnet with their own IP ranges, so in the example above we have four subnets. We therefore have one region (eu-central-1), two availability zones (eu-central-1a and eu-central-1b), and every availability zone has two subnets, one private and one public. For example, instances that are launched in the eu-central-1b public subnet will have IP addresses starting with "10.0.2.". So, we have split our VPC into subnets and each of those subnets belongs to an availability zone. The public subnets are accessible over the internet and the private ones are not. There are ways for instances launched in a private subnet to connect to the internet, but not the other way around. Instances launched in the public subnets can reach instances in the private subnets, because they are all in the same VPC.
In Figure 18, a deployment diagram of the system is presented. We can see that every request from the internet must go through a load balancer. In front of the load balancer, as previously mentioned, we have an internet gateway defined so that our endpoints are available to the internet. The load balancer passes the request to the ECS instance. In Figure 18 only one availability zone is shown, but we have two. In each availability zone we have an ECS instance which manages the Docker containers. Each Docker container runs on an EC2 instance.
# Internet VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"
  tags = {
    Name = "main"
  }
}

# Subnets
resource "aws_subnet" "main-public-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-central-1a"
  tags = {
    Name = "main-public-1"
  }
}

resource "aws_subnet" "main-private-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-central-1a"
  tags = {
    Name = "main-private-1"
  }
}
We create a resource "aws_vpc" with the name "main" and set its IP address block to "10.0.0.0/16". The next step is to create the public and private subnets ("aws_subnet"). We create two public and two private subnets, one of each per availability zone, because we have two availability zones. When defining a private subnet, the only difference is that the attribute "map_public_ip_on_launch" is set to "false". To keep the code listing short, only the subnets of the availability zone "eu-central-1a" are shown, because the others are defined similarly.
In Code Listing 19 we define the internet gateway, because the public subnets need to communicate with the internet. In the route table we must define a route so that all IP addresses which do not match the VPC IP addresses (0.0.0.0/0) are routed over the internet gateway. The last step is to associate this route table with the public subnets. Now, if a request comes from the internet, it will go over the internet gateway.
# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "main"
  }
}

# route tables
resource "aws_route_table" "main-public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main-gw.id
  }
  tags = {
    Name = "main-public-1"
  }
}
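The association of the route table with the public subnets, mentioned above, can be expressed with an aws_route_table_association resource; a sketch for the first public subnet could look like this:
resource "aws_route_table_association" "main-public-1" {
  # Attach the public route table to the first public subnet
  subnet_id      = aws_subnet.main-public-1.id
  route_table_id = aws_route_table.main-public.id
}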
To control the access to the AWS resources, as mentioned in previous chapters, AWS offers the Identity and Access Management (IAM) service. Using IAM we can create groups, users, and roles. Users can belong to groups; for example, we earlier created a group called "terraform-administrators" and assigned it to the user we used to start the instance. Groups can have one or more permissions, which define what a user assigned to the group can do in AWS. Users can authenticate using their username and password, for better security also with multi-factor authentication software, or with an access key and a secret key, which were used in our example. Roles in IAM can give users and services access that they normally would not have. Roles can also be attached to an EC2 instance; from that instance, a user or a service can obtain access credentials. Using those access credentials, the user or service can assume the role, which gives them permission to do something. For example, we can create a role, assign it to an EC2 instance, and provide the role with some permissions. If we now log in on that instance, we are given temporary access credentials and do not have to use our own ones. Because we logged in with the temporary credentials, we are able to execute all the operations that were given to the role through its permissions; if we logged in with our own credentials, we would not be able to perform those same actions. It is important to mention that IAM roles only work on EC2 instances, and not for instances outside of AWS. Code Listing 20 shows an example of how groups are defined in Terraform and how policies can be attached to a group. In the example we create a group "administrators" and attach the AdministratorAccess policy to it, so only users added to this group will have this policy. We have also created one user and added the user to the group.
resource "aws_iam_group" "administrators" {
  name = "administrators"
}

# user
resource "aws_iam_user" "admin" {
  name = "admin"
}
Code Listing 20 Terraform file for creating the IAM Groups and Policies
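The policy attachment and the group membership described above can be expressed with two additional resources; a minimal sketch, in which the resource names are assumptions, could look like this:
# Attach the AWS managed AdministratorAccess policy to the group
resource "aws_iam_group_policy_attachment" "administrators-attach" {
  group      = aws_iam_group.administrators.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

# Add the admin user to the administrators group
resource "aws_iam_group_membership" "administrators-users" {
  name  = "administrators-users"
  group = aws_iam_group.administrators.name
  users = [aws_iam_user.admin.name]
}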
Security is a very large topic, and in this section I have only presented the measures for making our application more secure that were also used in the practical example. We have discussed how applications that should not be publicly accessible can be deployed in private subnets, and how the subnets can be configured and defined using Terraform. In most cases more than one developer is responsible for the infrastructure, or different automation tools such as Jenkins are in use; in these cases we can define user groups and restrict the access that each user has in AWS. For example, we could have a user for Jenkins that can only push images to the ECR and run them on the ECS. If we tried to use the same user for creating or changing an EC2 instance, we would run into an error. The goal of this section was also to give an overview of the region that I have used in the project and how the availability zones and subnets fit into the whole system.
5. VALIDATION
In this paper we have talked about the use of Terraform as a tool for managing the AWS infrastructure. We have seen how Terraform can be used for resource allocation on AWS and how a microservice-oriented application can be deployed on the AWS cloud using Docker containers.
In the validation process we compare the deployment of the application and the allocation of the resources on the AWS cloud using Terraform with doing every step manually through the AWS web console.
Figure 19 VPC Configuration in AWS
To deploy the application, like with Terraform, we first must build the Docker images and push them to the AWS ECR. The Docker images were not built again; instead, the same images used with Terraform were reused, because the process of building the images is the same in both cases. The next step is to create the ECR repository where we can push the Docker images. Figure 20 shows what the form for creating the ECR repository in the web console looks like.
As we can see, it is very easy to create the required repository using the web console. Pushing the Docker images into the created repository will not be covered again because it is the same as described in previous chapters.
Now that the images are pushed and ready to be deployed, we first must create an ECS cluster. On the AWS ECS page we can select the option for creating a new cluster. I have only provided the name of the cluster, as we did in Terraform.
After creating the cluster, we can create the task definitions, which take the Docker image from the AWS ECR and deploy it to a container. When creating the task definition, we have three options in the web console:
• Fargate
• EC2
• External
I will not discuss each of them, because that is out of the scope of this paper. In Terraform we have used the EC2 option, so that is what we will also use now. In the setup form we can give a name to the task and add a container. For the task size I have used the same values as in Terraform. When creating the container, we can specify the name, the image that should be deployed, the memory limits, and the port mappings for the container. After the task is created, it should be shown in the list of all tasks as an active task (Figure 21).
The next step is to create the load balancer, which will redirect every request to our container running on port 4200. Amazon again requires us to fill out a form to create the load balancer. In addition to the configuration shown in Figure 22, we can select the network mapping, where we specify our VPC and must select the availability zones, which in our case are eu-central-1a and eu-central-1b. Lastly, we specify the listener port of the load balancer, which in our example is port 80.
The goal of this section was to give a brief explanation of how the required resources can be created and configured from the AWS web console. The whole process of setting up the infrastructure is very complex, and it is out of the scope of this paper to show every single step required. The goal was to give an overview of what the configuration looks like in the web console, what the individual forms that have to be filled in look like, and how much data must be entered. In the practical example, in both cases, when using Terraform and when using the web console, only the basic configuration was implemented. For example, in a real-world project we would not create one user and give it all the permissions, but to keep the example simple and to be able to explain the differences and the required steps of each method for configuring the infrastructure, I have chosen to do so here.
The validation process was performed by comparing the infrastructure management using Terraform with the manual management using the AWS web console. The practical example was deployed on the AWS cloud using both methods, which served as the basis for the validation process.
5.2 Validation results
In this section, the results of the comparison of the two methods for AWS infrastructure management are presented. The validation of both processes was performed only by myself, so the results reflect my own assessment of both methods. The research results show how Terraform can be used to improve the setup and deployment of a microservice-oriented architecture on the AWS cloud. The results also describe how Terraform can be used with continuous delivery and continuous deployment.
Deploying and configuring the required AWS resources with Terraform first required learning the syntax of the HashiCorp Configuration Language used to write Terraform scripts. Depending on the level of experience, this learning effort must be considered when using Terraform.
Using Terraform allows for fast feedback loops, which ease the development process. When executing a script, an error is thrown if the script is invalid, for example if a resource is referenced that does not exist. If the script can be executed correctly, Terraform provides immediate feedback in the form of an execution plan describing the configuration changes that will be performed on the AWS platform. The execution plan helped me to always see what would change if the script were executed and to check whether that is really the change that I wanted to make.
A Terraform configuration file was written for the practical example of this paper, and the same file can also be used to reconfigure the infrastructure. By using modules, multiple components can be combined into reusable pieces.
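As an illustration, a module call in Terraform could look like the following sketch, where the module path and its input variables are assumptions:
module "gui_app" {
  # Hypothetical reusable piece bundling the repository, task definition, and service of one microservice
  source         = "./modules/ecs-service"
  repository_url = aws_ecr_repository.gui-app-repo.repository_url
  container_port = 4200
}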
Another useful feature of Terraform is the state file. When we execute a Terraform script, all the components that were created on AWS or any other cloud platform are recorded in a state file. The state file is by default saved on the local file system, which was fine because I was working alone on the project, but it can also be shared within a team so that everyone has the latest version of the file. This is important because, when working as a team, everyone should know which resources were allocated and how they were configured, so that they do not override each other's work.
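One common way to share the state is a remote backend; a minimal sketch using an S3 backend, with an assumed bucket name, could look like this:
terraform {
  backend "s3" {
    # Assumed bucket; the whole team reads and writes the same state file
    bucket = "example-terraform-state"
    key    = "microservices/terraform.tfstate"
    region = "eu-central-1"
  }
}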
While implementing the practical example, I noticed that while I was working on the Spring Boot and Angular applications, someone else could have worked on writing the Terraform scripts for preparing the required resources on AWS. By using Terraform, developers and operations engineers can therefore work in parallel and reduce the time to market for new features. Terraform integrates with development and deployment pipelines, and the Terraform files can even be stored in the same repository as the application code.
In the practical example the application was only deployed on AWS. Some systems are very complex, and different parts of the system can be deployed on different platforms, for example both Google Cloud and AWS. Terraform makes it easy to configure the required resources and allocate them on both platforms by executing one script. If we used only the Google Cloud and AWS web consoles, we would have to configure the resources manually on both platforms.
When performing the configuration of the required components using the AWS web console, we are guided on what has to be entered and which fields are required. This makes working with the web console easier for beginners with no prior experience with Terraform and AWS. I had never used Terraform or AWS before the practical example, so I first had to learn which AWS components I could use, and it was easier to experiment with AWS through the web console. After I had configured the required components through the web console, I was able to start writing the Terraform scripts. When starting with Terraform I first had to learn the syntax and how the resources are configured with Terraform. So, basically, there is a learning curve for both methods, and without knowing how the specific cloud platform works it is hard to write a Terraform script.
The sub-research question that I tried to answer, and whose process I have only briefly described, is how Terraform can be used with continuous delivery and continuous deployment. For implementing this process I have used Jenkins. As previously said, when using Terraform the code is written in Terraform scripts, which allows us to push the scripts to a version control system like Git so that multiple people can work on them. By using Git we can also create different branches for different configurations. Jenkins can be configured so that, when a branch is merged into the main branch, it executes the Terraform scripts. In the practical example I always built the Docker image and pushed it to the repository, and after that the terraform apply command was executed.
By checking the Terraform scripts into a version control system we can also manage the versions of the configuration and revert to a previous version if the new one contains errors and does not work. Using an automation tool like Jenkins allowed me to concentrate on writing the script; by committing and pushing the script, Jenkins would execute it.
Through this paper I first came in contact with Terraform and AWS; I had no previous experience with either of these tools. I first had to learn how to use the AWS cloud platform and how to deploy applications on it. After that I had to learn the Terraform syntax to be able to write the scripts necessary to configure the required AWS components. This is one of the things that should be considered when deciding whether Terraform or the AWS web console should be used in a project. Both methods require some time to learn, and depending on the experience of the developer the time needed may vary.
5.3 Discussion
Both methods come with a learning curve, and because of that the experience of the developer plays an important role with both technologies. I mention this because deciding which of the two methods should be used in an organization is very important in the role of a software consultant. Based on the experience of the team, or one's own, we must be able to give good advice and estimate how long it would take to implement each of the methods.
From my experience, I would say that having some experience with the AWS cloud platform can greatly improve the learning curve for Terraform, because Terraform does the same thing we would otherwise do by configuring the AWS components directly in the AWS web console. So, knowing what is needed when configuring the IAM users, the VPC, or how the AWS ECR and ECS work together, can make learning Terraform easier. On the other hand, knowing how to use Terraform, for example knowing the syntax, without having any experience with AWS does not help in learning both faster. In my experience, learning how Terraform functions, its commands, and its syntax was not difficult, but even though I learned that quickly, I still had trouble configuring the AWS components.
The time it would take a developer to learn both technologies is a valid point when deciding which method to use and when, but it is not the only one. Both setups, the configuration using Terraform and the manual configuration using the web console, take some time. In the practical example I have used both methods, and with enough experience both of them are fast to implement.
There is one more point to consider: if we have experience with the AWS platform but no experience with Terraform, we would still have to invest some time in learning Terraform. The question that arises is when we should use Terraform and when we should use the AWS web console, or in other words, when we should automate the deployment and when we should perform it manually.
If we do not perform deployments very often, then a migration to Terraform is not the best choice and we should keep executing the deployments manually. If the infrastructure components do not change often, investing in learning Terraform is not worthwhile. But if we have multiple environments, deployments to these environments after every commit pushed to the version control system, and an infrastructure that is often destroyed and needs to be rebuilt, then Terraform is a good choice. Terraform automatically recognizes dependencies between resources and executes as much work in parallel as possible. Another reason to consider Terraform over the AWS web console is that the interface of the web console changes often, so attempts to document the manual process become outdated very quickly. AWS is very complex and requires a lot of experience and understanding to use correctly, which often leads to dependencies on a few experts in the team. Terraform is also a good choice if multiple cloud platforms are in use which host different applications: the manual work to configure all the elements would take a very long time, and the use of Terraform in such a use case could prove useful.
Automating as many tasks as possible makes the work of a development team easier. With Terraform we also have the possibility to use tools for automating the deployment of our configuration or of the application itself. Jenkins is one example of such a tool; by using Jenkins it is possible to integrate Terraform into the CI/CD pipeline. In this paper I have only briefly described how I integrated Jenkins into the practical example, but there are other ways to do it.
We have seen how, by using Terraform, the very complex tasks of configuring the environment for our application, otherwise performed in the AWS web console, can be split into multiple scripts that can be executed through an automation tool like Jenkins. Terraform also gives us instant feedback on the changes that will be performed, or whether there are any changes at all. Terraform displays these changes as a plan, which can then even be documented and discussed with the team to confirm that these are really the modifications we want to perform.
6. CONCLUSION
In this paper we have discussed what Terraform is and how it can be used to automate the management of the AWS cloud platform. I have also shown how a microservice-oriented architecture can be implemented using different software development tools. We have seen how the system can be deployed on the AWS platform and how the required resources are allocated using Terraform. Lastly, an example was given of how Jenkins can be used together with Terraform to automate the deployment process.
In the validation phase of the paper I compared the deployment of such a system using Terraform with the manual deployment using the AWS web console and discussed how Terraform can improve the deployment of a system on AWS. I have also given a brief explanation of the benefits of using Terraform with an automation tool like Jenkins.
I have concluded that both technologies have their advantages and disadvantages, and that there is no single right answer as to when and how to use either of them. Some of the factors to consider when making the decision are:
• Experience of the development team
• How often are the deployments and the reconfigurations performed?
• Are multiple cloud providers or other servers in use in the organization?
Today, the most important thing is to react quickly to changes in the market and to always be ahead of the competition. Such requirements in software development increasingly lead to the automation of as many processes as possible. This does not only affect consumer-focused organizations; we can also see the effects on the biggest cloud providers like AWS and Google. Their cloud platform web interfaces constantly change to improve the user experience or to make it easier to perform specific tasks. These frequent interface changes, the complexity of the cloud platforms, and the configuration of the individual components have led to the rise of tools for infrastructure management and automation.
The goal of this paper was to analyze Infrastructure as Code tools and suggest how they can be used to improve the static IT infrastructure of an organization. Two different methods were presented for configuring the AWS components so that they interconnect with each other as one system. Based on the learnings from this paper, I have the following suggestions on how to use Terraform to improve the management of the infrastructure:
• Share the Terraform state files among the team, so that everyone has the latest
version of the infrastructure configuration. Otherwise, the members of the team will
override each other’s work
• Use the Terraform fast feedback loop to find and fix bugs early in the development
phase
• Use automation tools, like Jenkins, to automate as many tasks as possible, for example to perform the apply command on the updated Terraform scripts
• Terraform configuration scripts are human readable, so everyone can see the
configuration of the infrastructure, by just going through the scripts
• Use Terraform modules for creating reusable configuration scripts
Because of the rise of microservice-oriented architectures and the many new tools that make working with microservices easier, I decided to give a simple example of how such an architecture can be built and deployed on AWS. Today we try to automate as many tasks as possible, especially repetitive tasks like executing deployment scripts. That is why I also decided to dedicate a small part of this paper to explaining how this automation can be achieved when using Terraform and Jenkins.
Bibliography
[1] Anurag: Top 5 Cloud Platforms and Solutions to Choose From, 2020.
[2] Valér Orlovský: KubeSharper: An SDK for Building Kubernetes Operators in
C#, Denmark: Aalborg University, newgenapps, 2020.
[3] Tony Mauro: Adopting Microservices at Netflix: Lessons for Architectural
Design Lessons for Architectural Design, nginx blog, 2015.
[4] Carlos Shults: What is infrastructure as code? How it Works, Best Practices,
Tutorials, Stackify, 2019.
[5] L. Shklar and R. Rosen, Web Application Architecture: Principles, protocols and practices, England, 2003.
[6] Irakli Nadareishvili, Ronnie Mitra, Matt McLarty, Mike Amundsen, Microservice
Architecture: Aligning principles, practices, and culture, USA, 2016.
[7] Leon Shklar, Richard Rosen, Web Application Architecture: Principles,
protocols and practices, England
[8] Leonard Richardson, Sam Ruby, RESTful Web Services, USA, 2007
[9] Jon Penland, A Complete Guide and List of HTTP Status Codes, Kinsta Blog,
2021
[10] Mark Reed, Docker: The Ultimate Beginners Guide to Learn Docker Step-By-
Step, 2020
[11] James Turnbill, The Docker Book: Containerization Is the New Virtualization,
2019
[12] Mark Richards, Software Architecture Patterns, USA, 2015
[13] Rajkumar Buyya, James Broberg, Andrzej Goscinski, Cloud Computing
Principles and Paradigms, Canada, 2011.
[14] Mark Wilkins, Learning Amazon Web Services (AWS): A Hands-On Guide to
the Fundamentals of AWS Cloud, 2020
[15] Amazon AWS Regions and Availability Zones, Available:
https://aws.amazon.com/about-aws/global-infrastructure/regions_az/, [Accessed: 03.11.2021]
[16] Amazon Elastic Container Registry, Store and Manage Docker Containers,
Available: https://www.amazonaws.cn/en/ecr/, [Accessed: 03.11.2021]
[17] What is Amazon Elastic Container Service, Available:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome,
[Accessed: 04.11.2021]
[18] Jinesh Varia, Sajee Mathew, Overview of Amazon Web Services, 2014
[19] Jeff Barr, Attila Narin, Jinesh Varia, Building Fault-Tolerant Applications on
AWS, 2011
[20] Alber Anthony, Mastering AWS Security: Create and maintain a secure cloud
ecosystem, 2017
[21] Alex DeBrie, The DynamoDB Book, 2020
[22] Kief Morris, Infrastructure as Code: Managing Servers in the Cloud, 2016
[23] Kirill Shirinkin, Getting Started with Terraform: Manage production
infrastructure as a code, Birmingham - Mumbai, 2017
[24] Michiel Mulders, What is Spring Boot, Available: https://stackify.com/what-is-spring-boot/, 2019, [Accessed: 16.12.2021]
[25] Ashan Fernando, AWS DynamoDB for Serverless Microservices, Available:
https://enlear.academy/aws-dynamodb-for-serverless-microservices-2acbbbff1bca
[26] Yasas De Silva, CI - CD Automation with Jenkins, Available:
https://medium.com/@myasas/ci-cd-automation-with-jenkins-179fb674a80a
List of Figures
Figure 1: The Structure of Containers and Virtual Machines .................................. 6
Figure 2: Layered Architecture ................................................................................ 8
Figure 3 Microservice-oriented architecture .......................................................... 10
Figure 4 Amazon Web Services ............................................................................ 12
Figure 5 Regions, availability zones, and local zones ........................................... 13
Figure 6 System Overview .................................................................................... 23
Figure 7 Interaction of the individual Microservices .............................................. 27
Figure 8 Add Address Form .................................................................................. 28
Figure 9 List with all Addresses............................................................................. 28
Figure 10 Interaction between the GUI Container and other containers ............... 29
Figure 11 Interaction between the docker containers ........................................... 33
Figure 12 Created user in the AWS web console ................................................. 34
Figure 13 Created Instance in the AWS web console ........................................... 34
Figure 14 Jenkins Workflow for CI ........................................................................ 35
Figure 15 Terraform workflow for deploying docker containers ............................ 36
Figure 16 The created Repositories in the AWS web console .............................. 38
Figure 17 VPC Overview ....................................................................................... 42
Figure 18 Deployment diagram ............................................................................. 43
Figure 19 VPC Configuration in AWS ................................................................... 48
Figure 20 Creating the ECR for the GUI Application in AWS ................................ 48
Figure 21 List of created Tasks in AWS ................................................................ 49
Figure 22 Configuration of the Load Balancer in AWS.......................................... 49
Figure 23 Creating DynamoDB table for the Addresses ....................................... 50
List of Tables
Table 1 HTTP Request Methods ............................................................................. 5
Table 2 Response Status Codes............................................................................. 5
Code Listings
Code Listing 1 Terraform Syntax example ............................................................ 21
Code Listing 2 Spring-Boot dependencies example ............................................. 25
Code Listing 3 Company-Controller Endpoints ..................................................... 26
Code Listing 4 Get Request from the frontend application.................................... 29
Code Listing 5 Data in JSON Format sent to the Backend ................................... 29
Code Listing 6 Address Data Model Class ............................................................ 30
Code Listing 7 Repository for saving data in the database ................................... 31
Code Listing 8 Configuration of the connection with the DynamoDB .................... 31
Code Listing 9 Dockerfile for building the Address Docker image ........................ 32
Code Listing 10 Terraform file for launching an AWS Instance............................. 34
Code Listing 11 Docker Commands for building and pushing the docker images 35
Code Listing 12 Terraform file for creating the ECRs............................................ 38
Code Listing 13 Terraform file for creating the tasks............................................. 39
Code Listing 14 Json Template for container configuration .................................. 39
Code Listing 15 Terraform file for the service definition ........................................ 40
Code Listing 16 Terraform file for the creation of the Load Balancer .................... 41
Code Listing 17 Terraform file for creating the DynamoDB address table ............ 41
Code Listing 18 Terraform file for configuring the VPC ......................................... 44
Code Listing 19 Terraform file for creating the Internet Gateway.......................... 45
Code Listing 20 Terraform file for creating the IAM Groups and Policies ............. 46