DOI: 10.1002/spe.2956
RESEARCH ARTICLE
Programa de Pós-Graduação em Informática Aplicada, Universidade de Fortaleza, Fortaleza, Brazil

Correspondence
Américo Falcone Sampaio, Programa de Pós-Graduação em Informática Aplicada, Universidade de Fortaleza, Av. Washington Soares 1321, Edson Queiroz, 60811-905 Fortaleza, CE, Brazil.
Email: [email protected]

Abstract
Several organizations need to address the challenge of migrating traditional monolithic applications that are already in production to microservices, preferably without having to schedule maintenance windows that take the application offline. This article presents an approach for migrating to microservices with almost zero downtime and minimal changes to the monolithic code. The approach is based on the concepts of aspect-oriented programming (AOP) and reflection and uses around advices to intercept calls inside the monolith and transform them into service requests that invoke the newly built microservices. The aspects do the "dirty work" of deciding what will be refactored and which service to call, and practically "zero" code changes need to be made in the original monolithic code. This enables one key novel contribution of our migration approach: the ability to revert code and data changes without having to take the system offline. Two applications are used as proofs of concept to demonstrate that the proposed approach makes it possible to go "forward" or "backward" among different versions of the application with minimal code or data changes. An evaluation performed in the cloud demonstrates that this work does not introduce significant performance or cost overhead when compared to the current state of the art and to the original monolith.
KEYWORDS
aspect-oriented programming, microservice architecture, system migration
1 INTRODUCTION
Microservices is a recent approach to distributed software development that focuses on building a single application as a set of independent, autonomous, and scalable services.1-3 Each service is developed, scaled, and deployed separately as a different deployment unit, usually as a process running in a container such as Docker. Large-scale industrial projects, such as those at Netflix and Amazon, have already proven the value of this architectural style.
The motivation behind microservices was to overcome some limitations of traditional application development, in which applications are developed as a single large deployment unit, called a monolith. Some of these limitations are:
• Monolithic applications contain a single code base and tend to become complex and large over time;
• With a large code base, maintenance becomes more complex, and builds and deployments become costly;
• Monolithic apps scale by cloning the whole application, normally as a cluster behind a load balancer. There is no option to scale only the "modules" of the system that contain bottlenecks. Scalability can be positively affected by migrating a monolithic architecture to a microservices architecture, as shown in Reference 4.
• The database is also a large monolith that stores large amounts of data. If the database is not replicated, it can become
a single point of failure and compromise the application availability.
• Monoliths normally represent large enterprise applications that need to be maintained for many years.
Microservices proposes a more flexible design where each software module (i.e., functionality) is developed independently, making it simpler to develop and deploy as code bases become smaller and easier to maintain.1-3,5 Scalability can also be handled on a per-service basis, and the developer can define different rules for scaling different parts of the application.
Recent success stories of microservices adoption, such as Netflix, have increased the attention of industry and academia. One challenge many organizations face is how to migrate existing monolithic applications to a microservices style.5-7 The migration is a difficult task, especially for monoliths that are already in production and used in daily business activities. The monolithic code and data need to be refactored into services, which might cause planned interruptions in the application to release the new versions. In general, there are two types of migration approaches:8 a big bang rewrite, where major refactorings are applied at once to change the whole code to microservices, and a stepwise hybrid approach, where microservices are incorporated gradually by removing modules from the monolith, which remains in use in production alongside the newly built microservices, until the whole system is replaced. Even when the stepwise hybrid migration strategy is used, current approaches require invasive code changes in the monolith as well as planned interruptions for releasing new versions.6,9 Moreover, in case problems arise with the new implementation, substantial manual effort can be required to revert back to the previous stable version (code and data).10,11
This article presents a novel approach for microservices migration. We adopt the strategy of keeping the monolith up and running in production during the migration (hybrid) while we incrementally build new services (stepwise) in a totally decoupled and nonintrusive fashion (decoupled) using aspect-oriented programming (AOP)12 and reflection. AOP is a programming paradigm that improves the separation of concerns whose implementation becomes scattered (so-called crosscutting concerns) when using the main decomposition paradigm (e.g., logging code called from several methods and classes throughout an object-oriented program). The implementation of the aspect (a new decomposition unit) isolates all code logic related to the concern (e.g., logging) and inserts that behavior (using advices such as before, after, and around) at the appropriate places (e.g., before or after calls to methods of the logger class) in a completely noninvasive fashion. Therefore, whenever the developer needs to change the behavior of the crosscutting concern, s/he only needs to change the aspect code instead of changing several places in the code.
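For instance, a minimal Spring AOP sketch of such a logging aspect could look as follows; the class and package names are illustrative and not taken from the article.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

// Illustrative logging aspect (names are ours, not the article's): the logging
// concern is isolated in a single unit and woven in before every service-layer
// call, so no logging statements need to be scattered through the business classes.
@Aspect
@Component
public class LoggingAspect {

    @Before("execution(* com.example.app.service..*(..))") // hypothetical package
    public void logCall(JoinPoint jp) {
        System.out.println("Calling " + jp.getSignature().toShortString());
    }
}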
Around advices in AOP enable an aspect to intercept part of the code of a class (e.g., a call to a specific method) and replace the execution of that part with new behavior implemented in the aspect. In our approach, we use around advices to intercept calls inside the monolith and replace them with service requests that invoke the newly built microservices. The aspects decouple what will be refactored (the monolithic code selected for migration) from which service to call, and almost "zero" code changes need to be made in the original monolithic code. By doing this, our approach also enables a very fast reversion with practically zero downtime when someone needs to revert the code back to the previous version, as the monolithic code is not changed.
Moreover, we also tackle the challenging task of separating the data into different databases for the newly built microservices. In case a reversion to the previous version is applied, our approach also automatically reverts the changes in the database, keeping the data consistent and not losing the new updates. To the best of our knowledge, existing migration approaches do not handle these issues of minimal code changes, the ability to revert back with practically zero downtime, and keeping the data saved in microservices databases consistent. In order to evaluate our approach, we conducted a case study with a real industrial monolithic application (HRComercial) that is in production and used by customers such as retail and convenience stores. Our evaluation showed that our approach enables creating microservices from the most accessed (highest load) functionalities without having to take the monolith offline. We also performed reversions back to previous versions without downtime and with practically no code changes, while the data was updated automatically. A performance study shows that our approach introduces minimal response time overhead when performing load tests for different loads in the cloud (ranging from 25 to 75 concurrent users issuing several requests). Finally, we compare our implementation to one of the most relevant state-of-the-art approaches,13 showing that our approach does not introduce significant performance or cost overhead when compared to the current state of the art while offering a more decoupled process with fewer code changes.
The rest of this work is organized as follows. Section 2 describes the current state of the art related to microservices migrations. Section 3 describes the migration approach proposed in our work. Section 4 presents two case studies showing data on performance, costs, and coding effort, comparing our approach to the state of the art. Finally, Section 5 draws the conclusions and presents future work.
2 STATE OF THE ART

Migration approaches such as those in References 2,5,14 define a generic process describing which steps are relevant for microservices migrations. They start by describing how to identify which domain entities are good candidates for microservices and then define how to refactor them into microservices. These works present general guidelines and patterns for these steps that can be used by all migration approaches.
Another work15 describes the experience of migrating a commercial application to microservices based on the previous generic principles. It describes relevant steps, from a process perspective, that need to be addressed, such as architectural considerations, DevOps tools, team support and responsibilities, and so on. Most of the steps taken by the authors during the migration were either manual or supported by DevOps tools, such as those for continuous integration and deployment (CI/CD). Some works suggest strategies for selecting migration candidates based on parameters such as the size of the module (e.g., single class, group of classes, group of methods) and data dependencies.16
In general, there are two main approaches for migration of existing applications to microservices:1,3,5,6,8,17
• Stepwise migration: where the developer selects which modules of the monolithic code will be refactored to microser-
vices step by step. The functionality of the selected module of the monolith is replicated in a newly built microservice
and some sort of call redirection is applied. These approaches are inspired by the Strangler pattern18 where Fowler
suggests re-engineering legacy software by gradually building new parts that somehow strangle the old parts until the
system is completely changed.
• Big bang rewrite: where a complete refactoring of the monolithic code is applied. This approach is rarely used in practice
for monoliths being used by customers in a production environment.
Eisele17 defines a stepwise migration approach where the newly built microservices are kept running in a different
deployment stack in parallel with the monolithic code. No changes are performed in the monolithic code and a proxy
(or load balancer) is introduced to decide which requests are sent to the monolithic code or redirected to the newly
built microservices. Regarding data, this approach separates the microservices data into a new database that can only be
changed from the microservices and is kept "read only" for the monolithic code. In case the developer wants to revert back to the previous version, manual code and data changes need to be applied: the proxy has to be changed to stop redirecting calls to the microservice, and the data stored in the microservices database has to be manually updated in the monolithic database.
Another stepwise migration approach is the "Extract services strategy" proposed by Richardson.13 It consists in selecting modules in the monolithic code and gradually transforming them into microservices (Figure 1). The monolithic code is changed to transform internal module calls into REST service calls to the newly built microservices. The code changes are more invasive when compared to Eisele's approach, as there are significant changes in the monolithic code. Regarding data, each microservice contains its own database, similar to Eisele's approach. In case a reversion is applied, the developer needs to make more significant code changes, as the code changed in the monolith has to be recovered from the previous version. The data inserted in the microservices database needs to be manually updated in the monolithic database.
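In practice, this kind of change typically means replacing an in-process call inside the monolith with an explicit REST client call, roughly as sketched below; the names, route, and DTO are ours for illustration and are not the code from Reference 13.

import org.springframework.web.client.RestTemplate;

// Illustrative sketch of the kind of change implied by the "Extract services"
// strategy (our example, not Reference 13's code): an internal call to module Z
// is replaced in the monolith by a REST call to the newly extracted microservice.
public class ModuleZRestClient {

    private final RestTemplate rest = new RestTemplate();

    // Previously this was a plain in-process call: moduleZ.findById(id)
    public ModuleZDto findById(long id) {
        // Hypothetical route exposed by the extracted module-Z microservice
        return rest.getForObject("http://module-z-service/items/{id}", ModuleZDto.class, id);
    }

    // Minimal DTO used only for this illustration
    public record ModuleZDto(long id, String name) { }
}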
Knoche19 proposes a stepwise migration approach based on the Facade design pattern. The idea is to include a facade to isolate the monolithic module and direct the calls to microservices that are gradually created. It first isolates the external calls and later the dependencies on the monolithic database. The work does not provide details on how the persistence layer is migrated.
Muhammad20 applies unsupervised machine learning methods to analyze log files from the monolithic application in order to identify candidates for microservices migration. They use information about which parts of the application receive more requests (higher loads) and build new microservices for these functionalities, which are then reached through a load balancer and scaled automatically.
Furda21 presents an approach for identifying challenges (multi-tenancy, statefulness, and data consistency) in legacy
code and applying refactoring and architectural pattern-based migration techniques to deal with these challenges and
migrate to microservices architectures.
In view of the analysis of these migration approaches from monolithic applications to microservices, we elicited the
following requirements (MR—migration requirements):
• (MR01) Refactoring granularity: What level of refactoring needs to be applied to a monolithic system for it to be migrated to microservices? This requirement deals with the granularity of the code that needs to be selected for refactoring to microservices, and some guidance can be found in References 22-24. The selection can be based on a decomposition unit of small granularity, such as a single method of a class, a whole class, or a group of classes that form a cohesive unit such as a module. The developer considers whether the selected units of the monolith will be partially migrated2,15,25,26 or completely refactored (big bang rewrite). For example, in Figure 1, the developer selects Module Z as the unit of decomposition to be gradually migrated.
• (MR02) Code migration: After defining which refactoring strategy will be taken, the chosen unit of the monolithic code should be migrated to the microservice code. The different strategies can perform these code changes in a more or less invasive way, depending on how much change is required in the monolith. For example, in Figure 1, the developer completely removes Module Z from the monolith and implements it as a microservice in Step 2. The changes clearly impact the code of Module X, which will need to invoke the microservice as a REST client.
• (MR03) Migration dependencies: What type of dependency exists during and after the migration between what was refactored from the monolith and the corresponding microservice.17 In Figure 1, there is a clear dependency between Module X and Module Z. As the code refactoring performed for the migration completely removes Module Z from the monolith, if the microservice is offline the monolith will not be able to make requests to it.
• (MR04) Hybrid topology: During the migration process, a hybrid topology is used, consisting of the original monolith and its database alongside the new microservices and their corresponding databases. Figure 1 presents a hybrid topology where the monolith remains in use with its original database alongside the newly built microservice (Module Z) with its corresponding database.
• (MR05) Monolith downtime: Whether the monolithic application needs to be interrupted to perform the migrations. For example, in order to perform the migration of Module Z in Figure 1, the developer needs to stop the monolith to code the changes and to recompile and redeploy it.
• (MR06) Data migration: Besides code migration, as discussed previously, data migration is also a requirement, as the microservice encapsulates and is responsible for its own database. For example, in Figure 1, the new microservice calls its corresponding database.
• (MR07) "Backward" path: Migrating to a microservice corresponds to the "forward" path. However, if the migration presents problems, does not behave as expected, or is simply a temporary migration to meet a certain demand, the developer might need to return to the monolithic code. There is no guarantee that the entire monolith will be migrated or that what has been migrated will be definitive. In any case, the ability to return to the monolith must be predictable, so this requirement can be quite useful when deciding to undo a migration. For example, in Figure 1, reverting the code and data changes to perform the backward path requires the developer to update the monolithic code to restore Module Z and to update the monolithic database with the data from the microservice database. These changes can require substantial manual effort.
The last three requirements are only partially supported by existing migration approaches. In our work, in addition to migration without downtime (MR05), a "backward" reversion solution for the monolith is proposed that does not interrupt the application. Moreover, we also handle data changes performed in the microservices databases and consistently update them in the monolithic database in case a backward reversion is needed (MR06 and MR07). The following sections present the technical details of how we achieve this in our approach.
3 PROPOSED APPROACH
Figure 2 shows an overview of our approach from a process point of view. The process starts with the developer of the monolithic application choosing which module (e.g., classes, methods, packages) will be migrated to a microservice. For this step, our approach does not provide any specific guidance, as there are other works14,16 that focus on this decision. The developer might choose the module that is the easiest to migrate in terms of effort, or base the choice on a more relevant criterion, such as which module is the most accessed by users, so that it might scale better as a microservice.20 Once the module for migration is selected, the next step is to perform the forward migration, which consists of implementing the corresponding microservice and its database. In our approach, we use AOP and reflection (see details in Section 3.1) to perform the migration in a decoupled fashion with few code changes. One key contribution of our approach is the backward migration step, which enables the changes performed to be reverted with minimal (if any) code changes and without the need to stop the application (Section 3.2). This reversion step is optional, and the developer performs it in case a problem, such as an error, occurs during the previous forward migration. After a migration is conducted, the developer can restart the process by selecting the next module to migrate.
In order to use our approach in practice, the developer needs to observe the following technical issues:
• The input for the approach is a monolithic application. Since we use AOP and selected Spring AOP as our technology choice, the input must be a monolithic application implemented in Java. There are no restrictions in terms of application domain or whether the application implements a desktop or a web-based user interface.
• We recommend that the input application be implemented following good object oriented design practices such as
using the MVC and layer architectural patterns27 as well as design patterns. This will help with the migration coding
efforts and the use of AOP and reflection. The goal of our approach is to reduce the implementation effort required for
the migration and not to improve code that is poorly designed.
• The granularity of the module will depend on the functionality that the developer selects for migration. For example,
it can be a single method from one class (e.g., a method for registering a user) or a group of methods for this same class
or a group of classes.
• Regarding the database, we assume that the monolithic application uses a monolithic database—normally based on a
large relational database.
• Regarding the microservices implementation, we recommend the use of common infrastructural services such as an API Gateway28 and a discovery service29 for integrating the monolith with the implemented microservices (a minimal registration sketch is shown after this list). In Section 4, we show this integration in two different applications.
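As an illustration of the last point, a newly extracted microservice built with Spring Boot and Spring Cloud can register itself with a discovery service (e.g., Eureka) so that the monolith side can reach it through the API Gateway. The class name and setup below are ours, not the article's code, and assume the standard Spring Cloud starters are on the classpath.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Minimal sketch (assumed names): the extracted microservice registers itself
// with the discovery service under its spring.application.name, so requests can
// be routed to it via the API Gateway without hard-coded addresses.
@SpringBootApplication
@EnableDiscoveryClient
public class CashFlowMicroserviceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CashFlowMicroserviceApplication.class, args);
    }
}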
3.1 Forward migration cycle

Figure 3 shows the forward (from monolith to microservice) cycle of the proposed approach. In Step 1, the developer selects which monolithic module will be migrated by configuring a Boolean variable in a properties file, which is later read at runtime by the aspect's around advice to decide whether to continue with the monolithic execution or to make a call to the microservice. In Step 2, the developer implements the corresponding microservice code for the
monolithic module that was selected to be migrated. This implementation includes the business code (module) as well as the code that makes the calls to the database (DAO). The microservices database (Micro-DB) is also created in this step, and the microservices code and database are deployed. With the microservices up and running, the developer codes the aspect (Step 3) that will intercept the calls (Step 4) to the monolithic module and redirect them (via an around advice, Step 5) to the newly built microservice. Steps 6 and 7 show the data updates that are also sent to the monolithic database to keep the data consistent.

FIGURE 3 Monolithic to microservice hybrid migration cycle

FIGURE 4 Migration to microservice decoupled by aspect
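As an illustration of Step 1, the properties file can hold one Boolean flag per method selected for migration, plus an entry pointing to the class chosen for interception. The key names below are our assumption, since the article only states that a Boolean variable per method and an expressionAOP attribute are configured.

# Illustrative content of the DynamicProperties.AOP file (key names assumed)
expressionAOP=br.com.hrinfo.comercial.resource.FluxoCaixaResource
findByCodigo=True
someOtherMethod=False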
The use of aspects causes inversion of control (IoC): the aspects know the components of the system, but the opposite is not necessary. As mentioned in Reference 12, this property is also called obliviousness. To compose the system, a process called weaving is required, which can occur statically, applied at system build time, or dynamically, during execution.
According to Reference 12, the around advice is executed "around" the target joinpoint, as shown in Figure 4, line 1 (Around annotation). The DynamicProperties.expressionAOP attribute contains the address of the class selected for migration, and all methods contained in the package of this class are intercepted by the aspect and resolved by the migratingMicroservice method in line 2, which decides whether the intercepted method is contained in the list (Figure 5) of methods to be migrated (checkMigrationMicroservice method, line 5). If the method is included in the list and configured to be migrated ("=True"), reflection detects the correct type of the business-rule object of that method to send to the microservice (either as a REST GET or POST) (lines 6 and 11). The HTTP (HyperText Transfer Protocol) request is then forwarded to the microservice environment (line 20). In case the method is no longer to be migrated ("=False"), the call proceeds with the monolithic flow (line 18).
Therefore, the aspect's around advice can completely replace the behavior associated with the joinpoint. In the case of our approach, it is necessary to replace the behavior associated with the joinpoint and transfer it to the microservice's code. The code snippet in Figure 4 shows the aspect, in Java, that intercepts the method call to be migrated, compares it with the identified monolithic methods (Figure 5), and then redirects the call by making a request to the microservice. This technique provides the decoupled migration and changes a minimum of the original monolithic code. Following this strategy, the entire monolith can be gradually migrated to microservices by using aspects and reflection with minimal code changes in the monolith.
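As an illustration of the structure described above (assuming Spring AOP with runtime weaving), a migration aspect could look roughly as follows; the pointcut expression, helper names, property file location, and microservice route are ours and may differ from the code in Figure 4.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Minimal sketch of the migration aspect described above (assumed names, routes,
// and property file location; not the authors' exact Figure 4 code). The around
// advice intercepts calls to the module selected for migration and, if the method
// is flagged "True" in the properties file, forwards the call to the microservice
// via REST; otherwise the original monolithic flow proceeds unchanged.
@Aspect
@Component
public class MigrationAspect {

    private final RestTemplate rest = new RestTemplate();

    // Pointcut over the package of the class selected for migration (illustrative)
    @Around("execution(* br.com.hrinfo.comercial.resource..*(..))")
    public Object migratingMicroservice(ProceedingJoinPoint jp) throws Throwable {
        String method = jp.getSignature().getName();

        if (!isMigrated(method)) {
            return jp.proceed(); // "=False": keep the monolithic flow
        }

        // Simplified stand-in for the reflection step: the intercepted arguments
        // decide whether the call becomes a REST GET or POST to the microservice.
        Object[] args = jp.getArgs();
        String url = "http://api-gateway/ms-cashflow/" + method; // illustrative route
        return args.length == 0
                ? rest.getForObject(url, Object.class)
                : rest.postForObject(url, args[0], Object.class);
    }

    // Reads the per-method flag on each call so it can be flipped at runtime
    private boolean isMigrated(String method) throws IOException {
        Properties flags = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/DynamicProperties.AOP")) {
            if (in != null) {
                flags.load(in);
            }
        }
        return "True".equalsIgnoreCase(flags.getProperty(method));
    }
}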
3.2 Backward migration cycle

This section presents one of the main contributions of our work: enabling code and data changes to be reverted (i.e., undoing the migration) without having to take the system offline. Several reasons may motivate a decision to undo a migration: (i) a migration that did not work properly, so the developer needs to return to the previous version to fix errors; (ii) a temporary migration performed due to performance or other issues; or (iii) testing a migration to analyze its behavior and then reverting back to the monolith.

FIGURE 6 Migration environment: Backward process
Figure 6 shows the backward cycle:
• msMigration microservice: This component receives requests to change the state of the application (e.g., insert, update,
delete data) in the microservices database during the forward cycle described previously. Moreover, it puts these
requests in a queue and sends them back periodically to the monolithic side (Monolithic Migration API) to update the
same data in the monolithic database.
• Monolithic Migration API: Receives requests from the msMigration microservice to update the monolithic database
after changes in the microservices database.
This process takes place while the migration is in progress, and the application status remains transparent to the user. The msMigration microservice does not work in isolation: it continuously receives the requests that change the application state and sends them to the Monolithic Migration API to update the monolithic database. Therefore, in case a reversion is applied (i.e., by changing the properties file entry to "False" for a specific method), the aspect makes the monolithic flow proceed, and the new database requests are sent to the monolithic database, which has already been updated by the msMigration microservice. In other microservice migration approaches, the "backward" cycle would be entirely manual for both code and data changes. In our work, the reversion can be applied at runtime, without downtime and without having to manually change the code and database in the monolith (the only change is in the properties file).
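To make the data-synchronization idea concrete, the sketch below queues the state-changing requests received on the microservice side and periodically replays them against a monolith-side migration endpoint. The service name, endpoint path, DTO, and scheduling interval are our assumptions, not the authors' implementation, and @Scheduled requires scheduling to be enabled in the Spring configuration.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Sketch of the msMigration idea (assumed names and endpoint, not the paper's code):
// every insert/update/delete handled by the microservice is queued and periodically
// replayed against the Monolithic Migration API, so the monolithic database stays
// consistent and a backward reversion can be applied at any time without data loss.
@Service
public class MsMigrationReplayService {

    private final Queue<StateChange> pending = new ConcurrentLinkedQueue<>();
    private final RestTemplate rest = new RestTemplate();

    // Called by the microservice whenever it changes the state of its micro-database
    public void record(StateChange change) {
        pending.add(change);
    }

    // Periodically pushes the queued changes to the monolith-side Migration API
    @Scheduled(fixedDelay = 5000) // every 5 seconds (illustrative interval)
    public void replayToMonolith() {
        StateChange change;
        while ((change = pending.poll()) != null) {
            rest.postForObject("http://monolith-host/migration-api/sync", change, Void.class);
        }
    }

    // Hypothetical DTO describing one operation performed on the micro-database
    public record StateChange(String entity, String operation, String payloadJson) { }
}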
In summary, the proposed approach has the following characteristics:
• Stepwise migration: Parts of the monolith are migrated to microservices rather than the complete monolith (MR01). Gradually, one or more modules of the monolith are selected as candidates for migration, while others remain served directly by the monolith (MR02). Migrated modules can be deactivated and returned to the monolith (MR07, "backward" path) or migrated again on another demand.
• Decoupled migration: There is no need to significantly modify the monolithic code (MR03 and MR04) to perform the
migration, either to go “forward” or “backward.” Decoupling is achieved by using AOP and reflection to intercept the
calls to the monolithic modules and redirect them to the microservices.
• Almost zero downtime migration: During almost the entire migration process, the monolith does not need to be stopped
(MR05).
• Micro-Database: When migrating to a microservice, the entity and its data are extracted (decomposed) from the monolithic database and migrated to the micro-database of the corresponding microservice. State changes of that part of the application are maintained by the microservice after the migration. If the migration is undone, the data is updated in the monolithic database (MR06 and MR07).
4 EVALUATION
The evaluation of our approach consisted of two case studies. In the first case study, we used our approach in a real indus-
trial monolithic application that is in production. For the second case study, we selected an open source implementation
of a monolithic Java Spring Web Application. The following sections show the details of each study.
4.1 Case study 1: HRComercial

In order to evaluate our approach, we conducted a study with an industrial monolithic system (HRComercial, Figure 7) developed in Java using the Spring Web MVC framework.30 The criteria for choosing this system were that it is a monolith designed following good object-orientation practices and that it is a real system used by customers in production.
HRComercial is developed by the HRInfo company, which specializes in the development of small retail business software for small shops and convenience stores. The system currently supports 27 customers located in several neighborhoods of Fortaleza, Brazil, and five of these customers operate 24 hours a day, 7 days a week. An average of 870 users access the system daily. This section shows the evaluation of our approach for migrating some modules of the HRComercial system to microservices and compares it to a popular state-of-the-art approach (Richardson's).13
The HRInfo company decided to try out our approach as it is currently investigating possible benefits that might be
achieved from migrating the HRComercial system to a microservice architecture. These benefits include:
• Solving current scalability problems due to highly accessed functionalities in the system. They are considering
migrating these functionalities to microservices and having specific scaling rules for each one of them.
• Reducing the system’s release cycle by migrating modules that contain frequently changing business rules to microser-
vices, so that these modules can be better decoupled from the rest of the system.
FIGURE 7 Migration of the HRComercial system
• Improving the system’s response time and operational cost by partially migrating its most accessed modules to the
cloud, so as to take advantage of the cloud’s elastic infrastructure and economy of scale.
• Improving the system’s development process by taking advantage of agile DevOps practices (e.g., continuous inte-
gration/continuous deployment) and tools. The current monolith is a large system with a heavyweight integra-
tion/deployment process.
In addition to the above benefits, the company was particularly interested in our approach since the HRComer-
cial system is currently in production serving real customers and, therefore, they needed to perform the migration to
microservices with minimal operational impact.
Both implementations of the HRComercial system were deployed on the cloud for load testing. Section 4.1.2 describes
the topology of the application in the cloud environment. Moreover, a code analysis was conducted in order to compare
the quality of code in both approaches. The study aimed to answer the following research questions:
• (RQ1) Does the proposed approach introduce a response time overhead?
• (RQ2) How does the proposed approach consume CPU and memory resources?
• (RQ3) What is the effort involved during the "forward" and "backward" cycles?
• (RQ4) What are the cloud cost impacts of using the hybrid topology?
Regarding the migration requirements (MRs) previously defined, research questions (RQ1) and (RQ2) help to assess whether the code changes performed by our approach to satisfy MR01, MR02, and MR03 result in significant performance and infrastructure resource usage bottlenecks. Research question (RQ3) investigates how cumbersome the approach is in terms of effort for the forward and backward cycles regarding its topology (MR04) and data changes (MR06), also considering the backward reversion changes (MR07) and downtime (MR05). Finally, (RQ4) helps to answer the cloud cost impacts caused by using the hybrid topology (MR04). In Section 4.1.8, we provide a more detailed discussion of these issues for this case study.
Table 1 shows the list of operations (functionalities of the cash flow module) from the monolith that were selected for our study. During the tests, 80% of the requests are concentrated on the operations that are forwarded to the microservice, while the remaining 20% (GET_2 and GET_3 requests) remain on the monolith (which has not been migrated).
Load tests were applied for 5 minutes with both strategies. Each test started with 25 users concurrently submitting 10 HTTP requests (via REST) for 20 seconds against the monolith. These tests were repeated by scaling in steps of 25 users up to 75 users. A performance and cost assessment was carried out to compare the results of both strategies under the same tests.
TABLE 1 Definition of operations in the test

Operation type                        Request                          Target
Simple cash flow query                GET_0, GET_1, GET_4, GET_5       Microservice
Simple cash flow query                GET_2, GET_3                     Monolith
Insertion of new cost in cash flow    POST_0, POST_1, POST_2, POST_3   Microservice
During the load tests in the cloud, relevant information on computational performance, such as CPU consumption, memory consumption, and response time, was collected. In the last step of the evaluation, we performed a code analysis to compare the quality of our approach and Richardson's approach. The reason for selecting these particular loads (25, 50, and 75 simultaneous users) was that the company recommended testing with loads similar to and higher than its current peak, which is around 30 simultaneous users. Moreover, the machine types shown in Table 2 were selected so as to reflect the current production environment of the HRComercial system.
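For reference, a load scenario like the one described can be expressed with Gatling's Java DSL roughly as sketched below; the base URL, endpoint paths, and request bodies are placeholders rather than the actual simulation used in the study.

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Rough sketch of a Gatling load test like the one described above (Java DSL).
// URLs, paths, and payloads are placeholders; the study's actual scenario differs.
public class CashFlowLoadSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("http://monolith-host:8080");

    ScenarioBuilder scn = scenario("CashFlow 80/20 mix")
            .exec(http("GET_0").get("/fluxocaixa/1").check(status().is(200)))
            .exec(http("GET_2").get("/fluxocaixa/resumo").check(status().is(200)))
            .exec(http("POST_0").post("/fluxocaixa")
                    .body(StringBody("{\"descricao\":\"custo\",\"valor\":10.0}"))
                    .asJson());

    {
        // 25 users injected over 20 seconds; repeated at 50 and 75 users in separate runs
        setUp(scn.injectOpen(rampUsers(25).during(20))).protocols(httpProtocol);
    }
}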
The same developer (the main researcher and first author of this article) was responsible for both implementations and conducted the load tests in the cloud. Conducting user studies would require much more time, effort, and budget to perform real tests in the cloud as we did. We plan to conduct a user study with professional developers and postgraduate students as future work. However, for the purpose of our evaluation, which is to assess how our approach compares with the current state of the art and whether it introduces overhead, we believe that having the implementation and tests performed by the lead researcher is the best way to avoid using inexperienced professionals and favoring one approach over the other.
The cloud deployment environment for this application is shown in Table 2. The topology consists of four AWS virtual
machine instances31 with 11 containers and a database deployed directly on the virtual machine (without a container).
In the first virtual machine, we deploy two containers with the monolithic code implemented with our approach and with Richardson's approach. The second virtual machine contains five containers: two containers host supporting microservices, such as the API Gateway and the Discovery service, used by both strategies; the cash flow functionality is deployed in two containers, one for each implementation; and the msMigration service runs in a separate container and is used only by our approach, as described previously. The third virtual machine contains the Postgres database, installed directly on the virtual machine, and the pgAdmin tool inside a container. Finally, the fourth virtual machine contains three containers for the tools used to perform the load tests (Gatling) and to collect metrics (CPU, memory) (Prometheus). Figure 8 shows
the details of the cloud topology based on our approach, while Figure 9 shows the topology based on Richardson's. The main difference between the two is the inclusion, in our approach, of the migration API component (on the monolith machine), which contains the AOP and reflection code required to perform the migrations, and of the msMigration microservice (on the microservice instance).

FIGURE 8 HRComercial deployed on the AWS cloud based on our approach

FIGURE 9 HRComercial deployed on the AWS cloud based on Richardson's approach
This section investigates the first research question: [RQ1] Does the proposed approach introduce a response time overhead?
In order to answer this question we conducted a set of load tests with the Gatling tool. The tests execute the same
operations (shown in Table 1) for a range of loads (25, 50, and 75 concurrent users) for each approach. Each test is applied
for a single approach at a time and metrics are collected (e.g., response time, CPU, and memory consumption) in order to
analyze the performance. It is important to mention that we test each approach based on the same conditions (database,
loads, infrastructure) so that they are assessed fairly. The response time metric returned by the Gatling tool is defined as:34
Returns the response time of this request in milliseconds = the time between starting to send the request
and finishing to receive the response.
Figure 10 shows the response times (in milliseconds, ms) for the three loads (25, 50, and 75 users) for both approaches (presented side by side). The figure shows, in different colors, the response times obtained for different percentiles during the whole test (around 5 minutes). For 25 users (Figure 10(A,B)), it can be seen that for higher percentiles, such as 95% and 99%, our approach has a response time of around 2k milliseconds (i.e., two seconds) most of the time, with some peaks around 4k milliseconds (i.e., four seconds). For the same load of 25 users, Richardson's approach presents a similar response time pattern, however with much lower response times, around 500 ms most of the time and 1k milliseconds during peaks. This means that, for this load, Richardson's outperformed our approach by approximately four times during almost the whole test, as described in detail below (the fine-grained data described below are obtained from the Gatling tool reports and shown in Tables 3 and 4).
• Our approach has a response time of 3899 ms at the 95th percentile, which means that 95% of all requests were below 3899 ms. The 99th percentile data show that 99% of the requests were below 4302 ms.
• For the same percentiles, Richardson's results are 569 ms at the 95th percentile and 926 ms at the 99th percentile.
FIGURE 10 Response times in milliseconds for three different loads and various percentiles during the whole test
TABLE 3 Executions data with 25, 50, and 75 users load for our approach (O) and Richardson's (R): total requests, OK, KO, % KO, and requests/s per load
TABLE 4 Response times data with 25, 50, and 75 users load for our approach (O) and Richardson's (R), in milliseconds

# Users   Min (O/R)   95th pct (O/R)   99th pct (O/R)   Max (O/R)      Mean (O/R)   Std Dev (O/R)
25        21/16       3899/569         4302/926         4729/1444      1424/191     1148/195
50        19/18       8714/5251        9595/5970        10262/8333     4940/1854    2843/1660
75        28/26       15230/21490      16163/26895      18727/31657    9388/8508    5259/6613
• The previous response times are based on all operations included in the test and on all the requests made during the whole 5 minutes that the test was executed with 25 users. This means that the longest a user of the application would wait for a request would be around 4 seconds for our approach and 1 second for Richardson's, considering 99% of all observations. This is also confirmed by the maximum response times collected by Gatling, which were 4729 ms for our approach and 1444 ms for Richardson's.
For the 50 users load (Figure 10(C,D)), a similar response time pattern occurs. However, in this case, our approach improved relative to Richardson's but was still outperformed. The figure clearly shows that, most of the time, for the higher percentiles, our approach presents a response time of around 8k milliseconds (i.e., 8 seconds), with peaks around 10k milliseconds (10 seconds). Regarding Richardson's, the graph shows that, most of the time, for the same percentiles, it has a response time of around 5 seconds, with peaks around 7 seconds. More detailed data obtained from Gatling confirm this, as described below and shown in Tables 3 and 4:
• Our approach has a response time of 8714 ms at the 95th percentile, which means that 95% of all requests were below 8714 ms. The 99th percentile data show that 99% of the requests were below 9595 ms.
• For the same percentiles, Richardson's results are 5251 ms at the 95th percentile and 5970 ms at the 99th percentile.
• The previous response times are based on all operations included in the test and on all the requests made during the whole 5 minutes that the test was executed with 50 users. This means that the longest a user of the application would wait for a request would be around 9.5 seconds for our approach and 6 seconds for Richardson's, considering 99% of all observations. This is also confirmed by the maximum response times collected by Gatling, which were 10,262 ms for our approach and 8333 ms for Richardson's. Therefore, for this load, the gap between our approach and Richardson's was considerably reduced: from 4 times slower with 25 users, our approach improved to around 1.5 times slower at the higher percentiles with 50 users.
Regarding the load of 75 users (Figure 10(E,F)), the gap between our approach and Richardson's was reduced again. It can be seen that, for the higher percentiles of 95% and 99%, most of the time during the test the response times remain around 15k milliseconds (i.e., 15 seconds) for both approaches. Detailed data obtained from Gatling are shown in Tables 3 and 4:
• Our approach has a response time of 15,230 ms at the 95th percentile, which means that 95% of all requests were below 15,230 ms. The 99th percentile data show that 99% of the requests were below 16,163 ms.
• For the same percentiles, Richardson's results are 13,694 ms at the 95th percentile and 15,544 ms at the 99th percentile.
• The previous response times are based on all operations included in the test and on all the requests made during the whole 5 minutes that the test was executed with 75 users. This means that the longest a user of the application would wait for a request would be around 16 seconds for our approach and 15.5 seconds for Richardson's, considering 99% of all observations. This is also confirmed by the maximum response times collected by Gatling, which were 18,727 ms for our approach and 18,551 ms for Richardson's. Therefore, for this load, the gap between our approach and Richardson's was almost nonexistent, with very similar results at the higher percentiles. The main difference (observing the figure) is that the proportion of requests that reach these values is lower for Richardson's than for our approach.
The previous analysis focused on the worst-case scenarios, which make the user wait the longest for a response to his/her operations. As discussed previously, the worst results for our approach were obtained with the lowest tested load of 25 users. As the load was increased to 50 users, the gap was reduced considerably, and the results were similar with the 75 users load. After analyzing these data, we investigated the possible causes of the differences, as the technologies used were essentially the same in both implementations, the main difference being the use of reflection and AOP in our approach. After reviewing the literature on performance bottlenecks related to AOP and reflection in Java, we found that:
• Regarding reflection, the Java platform documentation notes that:

Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations can not be performed. Consequently, reflective operations have slower performance than their nonreflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications.
• Regarding AOP, the overhead depends on how the weaving of the aspects occurs.38,39 Compile-time weaving is done during compilation and practically does not affect runtime performance, while runtime weaving is done during execution, yielding worse performance results. Spring AOP,40 which was the technology chosen for our case study, uses runtime weaving, therefore also introducing performance overhead.
Therefore, revisiting (RQ1) Does the proposed approach introduce a response time overhead?, we can conclude that:
• The main response time differences occurred at the lowest load, mostly due to reflection and AOP issues. Even in these cases, the response times of the overall test with our approach were still acceptable for practical purposes and consistent with the real usage of the application by the company's clients in the case study.
• The overhead of reflection and AOP was not as significant at higher loads, probably because response times increase due to the heavier load of more requests being issued by more users. With more users, the response times of the applications tend to increase, so the cost of performing weaving and reflection has a proportionally lower impact on the response time.
• For higher loads, such as the 75 users tested, our approach reaches a level of response times similar to Richardson's pure Java Spring implementation in the worst cases. These data are very relevant for the company, as they show that the approach can scale similarly to a state-of-the-art approach while providing the benefits, shown later, of zero downtime and easy reversion of changes.
• For future work, we will investigate whether the use of AspectJ with compile-time weaving, as well as Java virtual machines tuned for reflection, can reduce the overhead of our approach even further.
This section investigates the second research question: (RQ2) How does the proposed approach consume CPU and memory
resources?
FIGURE 11 CPU consumption for 25, 50, and 75 users for both approaches

Figure 11 shows the CPU consumption during the whole test for both approaches with the three loads of 25, 50, and 75 users. The figure shows the CPU consumption of the application containers for each approach (according to the deployment topology discussed previously in Table 2). The red line is the CPU consumption of the monolith, while the other colors represent the CPU consumption of the microservices. All graphs show that the executed load tests demanded very high CPU consumption from the monolith, as all requests (in both approaches) are first served by the monolith and then (when applicable) forwarded to a microservice, by an aspect in our case or directly in Richardson's. The CPU consumption data are collected by a monitoring tool (Prometheus), which issues a request (client module) every second to each of the deployed containers (each container has a Prometheus probe installed in it to collect the CPU and memory consumption data, which are returned to the client module).
For the 25 users load, our approach (Figure 11(A)) presents more intensive CPU usage for the monolith, between 80% and 100% most of the time during the whole test. The peaks near 100% are very frequent, and they clearly cause
the Prometheus client to wait for a short period of time, as the graph shows some points of discontinuity where it breaks and comes back shortly after. For Richardson's approach (Figure 11(B)), the CPU consumption of the monolith is lower, ranging from 40% to 80% of CPU usage with no 100% peaks, therefore presenting a completely continuous graph until the end of the test. This behavior also confirms the differences in response times presented previously. As the CPU in our approach had to work harder due to aspect runtime weaving and reflection usage, the CPU consumption in the monolith is impacted more significantly than in Richardson's approach. This explains the 100% peaks in our approach, which cause requests to be queued, waiting to be served by the application server. This behavior becomes clear in two observations: the breaks in the Prometheus graphs of the monolith and the smaller number of requests replied to Gatling with the same number of users. For example, for 25 users, Gatling (see Table 3) issued 2915 total requests (with responses) in our approach, while for Richardson's it achieved 5574 requests (with responses) with the same number of users. As the tool simulates real users, when the operations included in the tests take more time to finish, the whole test takes more time to finish and fewer requests are submitted to the monolith (server side) in the same amount of time. Regarding the CPU consumption of the microservices' containers, Figure 11(A,B) shows a similar profile, where CPU usage is around 20% most of the time with some peaks around 40%, but with higher consumption for Richardson's, as more requests need to be served. For our approach, it is interesting to see some drops to near 0% when the monolith shows CPU near 100%, which does not happen in Richardson's. The reason is the high CPU stress that makes the monolith wait and therefore not forward anything to the microservices until it recovers.
When the load is increased to 50 (Figure 11(C,D)) and 75 (Figure 11(E,F)) users, the CPU profile becomes similar for both approaches. The stress on the monolith is high, with CPU near 100% most of the time for both approaches. As our approach on average has more requests with higher response times, the total number of requests served remains lower than in Richardson's, but with a smaller gap for these loads. For example, for 75 users, Gatling (see Table 3) issued 2166 total requests (with responses) in our approach, while for Richardson's it achieved 3146 requests (with responses) with the same number of users. As explained previously, because the CPU is at its limit, requests start being queued in the application server, and in both cases this starts affecting response time more than the overhead caused by reflection and AOP alone. Regarding the microservices' CPU consumption for these loads, the same profile of higher consumption for Richardson's remains, as it has to serve more requests.
The previous data show that the tested application is mostly CPU bound, as it requires high CPU consumption when the number of users is increased. Moreover, the relation between response time and CPU usage in our tests confirms what the literature on performance analysis discusses:
• The relation between CPU load and response time is nonlinear: as CPU consumption gets near 100% load, response time increases at a faster, nonlinear rate compared to low CPU loads, where it preserves a linear relationship.41,42
• As mentioned in Reference 41:
As the number of users increases so does the probability that multiple user processes will access the
same CPU simultaneously. When this happens, wait periods are incurred that prolong response times.
Therefore, the CPU load, or more precisely the CPU wait time, has a direct effect on the response time.
Regarding memory consumption, the tests did not show much difference for any of the loads, with the maximum usage occurring in the container of the monolith, at around 400 MB. Figure 12 shows the memory consumption with 75 users for both approaches and confirms that the application being tested is CPU bound, as shown previously; it did not present much difference in memory usage between the two approaches.
Therefore, revisiting (RQ2) How does the proposed approach consume CPU and memory resources?, we can conclude
that:
• For the 25 users load, our approach presents more intensive CPU usage than Richardson's, between 80% and 100% most of the time. The CPU sometimes peaked near 100%, causing requests to be queued and affecting the response time more significantly.
• When the load is increased to 50 and 75 users, the CPU profile becomes similar for both approaches. The stress on the monolith is high, with CPU near 100% most of the time for both approaches.
• Regarding memory consumption, the tests did not show much difference for any of the loads, with the maximum usage occurring in the container of the monolith, at around 400 MB.
• For future work, we will investigate whether using virtual machines with higher CPU capacity reduces the stress for the same loads we tested, whether they can handle higher loads, and what effect this has on response times.
FIGURE 12 Memory consumption for 75 users: (A) Our approach and (B) Richardson's approach
This section investigates the third research question: (RQ3) What is the effort involved during the “forward” and “backward”
cycles?
In order to answer this question, we selected a microservice candidate (CashFlow) from the HRComercial system and implemented one forward (from monolith to microservice) and one backward (from microservice to monolith) cycle using both approaches (ours and Richardson's). The original HRComercial monolith has 355,428 lines of code, 95,365 statements, 28,807 functions/operations, 1238 classes, and 1514 files, and is used in production by several clients, as explained previously.
Given the size and complexity of the HRComercial monolith, a CashFlow class (FluxoCaixaResource) was chosen in order to migrate three theoretically simple services (operation type column in Table 1). This process of choosing which service to migrate first is part of the stepwise migration method. The strategy used is thus to migrate in parts, starting from the less complex services in terms of code. For each migration approach, a microservice was developed: MS CashFlow (for our approach) and MS CashFlow-Richardson. Both receive the same parameters, perform the same operations, and deliver the same results as the original monolithic code.
Forward cycle
Starting from the initial state of the original monolith, HRComercial was instrumented with our approach to execute the migration to the MS CashFlow microservice. Figure 13 shows an excerpt of the original monolithic CashFlow (FluxoCaixaResource) code on the left and the changes performed with our approach on the right. Three services were selected for changes, and one line had to be changed in each service (method), as can be seen in line 16 for the findByCodigo() method. An additional line of code was required in line 4 to make the properties related to AOP (saved in the DynamicProperties.AOP file) available at runtime for each method. This was necessary due to the use of the Spring MVC framework in which the monolith is implemented; recent implementations we have done with other monoliths in Spring Boot and Spring Cloud do not require these changes. In any case, to have these three services listened to by the aspect and redirected at runtime, only 4 lines of code (4 LOC) had to be added to the original monolith, and no other changes were necessary.
Figure 14 shows the same changes when performed with Richardson's approach. It can be seen, for example, that the CashFlow (FluxoCaixaResource) class had to change quite a few more lines of code (lines 14 to 20) to perform the redirection that calls the microservice corresponding to the findByCodigo() method. This seems simple, as the method is short; however, in total the class had to change 25 lines of code for the three migrated services (25 LOC). Therefore, compared to our approach, the forward cycle required 21 more LOC (i.e., around 6 times more). If many classes and services were selected to be migrated, a significant number of LOC would be required with Richardson's approach.
Backward cycle
Regarding the "backward" cycle in our approach, as described in Section 3.2, there is no need to change anything in either code or data, and the reversion can be performed without stopping the application. The only change needed is to switch the flags of the services redirected by the aspect from "true" to "false", so that the calls are performed inside the monolith again. Zero code changes and zero downtime are a major contribution of our approach to the backward cycle.

FIGURE 13 Excerpt of the original monolithic code on the left and the changes made by our approach on the right

FIGURE 14 Excerpt of the original monolithic code on the left and the changes made by Richardson's approach on the right
Unlike the "backward" cycle in our approach, Richardson's approach requires much more effort, as the developer needs to manually change the code and the database. Thus, reverting according to Richardson requires:
• 1. Changing the CashFlow class in the monolith: undoing the code changes and reverting it back to the code version before the migration;
• 2. Stopping the monolith: turning off the monolith and deploying the code change;
• 3. Persistence layer: performing a dump of the microservice's micro-database (MS CashFlow-Richardson microservice) and then recovering the data (new and changed) in the monolithic database. Data deleted by the microservice also needs to be reflected in the monolithic database. All of these operations are done manually, by analyzing system logs and applying the changes through a SQL tool. In our approach, this entire process is automated at runtime (Section 3.2);
• 4. Starting the monolith again: once the code is reverted and the database is updated, the monolith is ready to start again.
For Richardson's "backward" cycle, we performed the changes in the CashFlow class, which in theory is simple code. This process took 10:08 minutes, plus a further 7:50 minutes for database recovery, totaling 17:58 minutes to complete the "backward" cycle in this case. With our approach, the change took only a few seconds to edit a properties file, and no application stoppage was required.
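To illustrate why this takes only seconds, the sketch below shows one way such a flag lookup can be implemented so that the file is re-read on every intercepted call; the file name reuses the DynamicProperties.AOP file mentioned earlier, while the class and key format are assumptions rather than our actual implementation.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative flag lookup: the properties file is re-read on every call, so editing
// a flag from "true" to "false" reverts a service to the monolith with no restart.
public final class MigrationFlags {

    private static final String FLAGS_FILE = "DynamicProperties.AOP";

    private MigrationFlags() {
    }

    public static boolean isEnabled(String serviceKey) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(FLAGS_FILE)) {
            props.load(in);
        } catch (IOException e) {
            // If the file cannot be read, keep the call inside the monolith.
            return false;
        }
        return Boolean.parseBoolean(props.getProperty(serviceKey, "false"));
    }
}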
Therefore, revisiting (RQ3) What is the effort involved during the “forward” and “backward” cycles?, we can
conclude that:
• The forward cycle requires only a few changes (if any) in our approach, while for Richardson's it is more cumbersome and, depending on the number of services to be changed, can require a significant amount of code changes.
• The backward cycle is even more straightforward in our approach, as the only action needed is a change in a properties file. The backward process in Richardson's requires several manual changes and needs the application and database to be stopped.
4.1.6 Costs
Table 5 shows the costs for operating the HRComercial application in the AWS cloud considering different deployment
topologies using various instance types.31
T A B L E 5 Costs for operating the monolith, our approach, and Richardson's in the AWS cloud with different configurations. Topology cost columns: Homogeneous (Monolith + Database), Our approach (Monolith + Microservice), and Richardson's approach (Monolith + Microservice); for each, the table lists the AWS Instance, Price/hour, and the Monthly and Yearly cost.
The homogeneous topology is comprised of two instances: one for the application server containing the HRComercial code and the other for the Database. The table shows the monetary costs in US dollars for running the application with this topology for a month and for the whole year with different instance types.
For example, the Homogeneous topology costs around $67.74 per month using a T2.medium instance type for the monolith application server and the database. The instance types shown in the table double in capacity and price per hour, and it is up to the architects and developers of the application to choose the configuration that best balances capacity and cost while meeting the demand of the application users. This is why it is important to run load tests such as the ones we performed, so that it is possible to know which machine types are the most cost-effective for a specific load. The table covers deployment scenarios ranging from a T2.medium, costing around $812 per year, up to a T2.xlarge, costing around $3251 per year, for operating HRcomercial in the Homogeneous topology.
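As a rough check on how these figures follow from hourly prices, and assuming an on-demand price of about $0.0464 per hour for a T2.medium instance (exact prices depend on region and date), the two homogeneous instances cost approximately 2 × 0.0464 × 730 ≈ $68 per month, or about $813 per year, which is consistent with the value reported in Table 5; a T2.xlarge, at roughly four times that hourly price, lands near the $3251 per year figure. Adding a third instance of the same type for the hybrid topology raises these values by about 50%, as discussed next.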
Revisiting (RQ4) How does maintaining a hybrid topology affect cloud infrastructure costs?, we provide the details as follows. The costs for the hybrid topology consider the homogeneous instances (Monolith and Database) plus an additional virtual machine for the microservices in both approaches. Our suggestion is that all microservices are hosted on the same virtual machine and can be placed in containers such as Docker. If the application deployed different services in separate virtual machines, then, depending on the instance type costs, the topology could become very expensive to operate. It is not our goal to assess many different instance types, nor to provide guidance on which type to select for a given application; for these issues, we recommend other works, including our own previous work.43,44 The important point here is that, for the hybrid topology, the additional virtual machine is a necessary cost (around 50% more expensive) that facilitates the migration from the monolith to the microservices architecture. Once the complete monolith has been migrated, its machine can be completely retired. Moreover, since the response time overhead of our approach was not very significant when compared to Richardson's, there is no extra need for more powerful machines, yielding similar cloud operating costs.
4.1.7 Threats to validity
As with any case study, the validity of our evaluation may be threatened by a few relevant limitations. A first limitation was the fact that the main researcher responsible for designing and developing our approach also implemented the HRComercial migrations based on Richardson's approach. The reason for this choice was to have the same developer (its most expert user, the first author of this article) apply both migration approaches, as his knowledge of the HRComercial system and of performing migrations would be useful in both cases. His implementation of Richardson's approach might of course not be the best possible, but we followed the documentation in Reference 13 as closely as possible. In the future, we may include studies such as controlled experiments, which require more resources and effort, involving external developers who use both approaches to perform migrations. The main purpose of our study was not to perform a usability assessment of our tool, but rather to validate our proposed approach by showing its use and benefits in practice. In that regard, having the approach developer conduct the evaluation was key for the purpose of our study.
Another threat was that we limited the number of migrations performed and load tests conducted, and simplified the cost analysis. This limitation was due to the time and costs involved in performing tests in the cloud. Therefore, given our limited resources, we focused on analyzing key issues such as performance, cost, and coding effort compared with a popular state-of-the-art approach. The results are promising and can be further explored in the future with other tests and evaluations.
Finally, our study was also limited by the fact that we only assessed the approach with one application, HRComercial. As explained previously, this is a large and complex industrial application used by real customers in production. This brings great value to our case study, as we tried to stay away from the toy examples commonly used in microservices studies. For future work, we plan to conduct more case studies with other industrial applications.
4.1.8 Discussion
This section provides a discussion on how the evaluation of our approach with the HRComercial application relates the
Research Questions (RQ1-RQ4) back to the migration requirements (MR01-MR07).
The decomposition units selected for migration to microservices were a few methods of the CashFlow class and not the complete monolith (MR01). Gradually, one or more methods of the monolith were selected as candidates for migration, while others continued to be requested directly in the monolith (MR02). These can be deactivated and returned to the monolith (MR07, "backward" path) or migrated again on another demand. We showed that these changes could be done with a scalable strategy that preserves response times similar to state-of-the-art approaches, especially for higher loads (RQ1). As the HRComercial application was based on an older version of the Spring MVC framework, there was a relatively moderate overhead on the CPU consumption of the monolith (RQ2) due to that version of Spring AOP and Reflection, especially for lower loads, which caused the response times to be slower in those cases (RQ1).
There was no need to significantly modify the monolithic code (MR03 and MR04) to perform the migration, either to
go “forward” or “backward.” Decoupling is achieved by using AOP and reflection to intercept the calls to the monolithic
modules and redirect them to the microservices. We showed that (RQ3) changing the HRComercial monolith required only a few line changes in the original CashFlow class. The backward cycle is even more straightforward
in our approach as the only action needed is a change in a properties file and the whole process can be done with zero
downtime (MR05).
While migrating to the microservice, the entity and data of the CashFlow module were extracted (decomposed) from the monolithic database and migrated to the micro-database of the corresponding microservice, and new state changes were kept in the microservice database after the migration. When the migration was undone, the data was transparently updated in the monolithic database (MR06 and MR07). We also showed that all these data reversion changes are done automatically (RQ3) by our approach.
Finally, we showed in terms of costs (RQ4) that we provide a hybrid (MR04) and stepwise migration strategy that needs to keep the monolith online, alongside the microservices machine, during the process. For this reason, an extra machine is needed during the migration, which is common practice for hybrid approaches, keeping our approach at a migration cost similar to the state of the art.
After assessing our approach in the previous case study, we decided to evaluate it with another application, to avoid being limited to a single system and to check whether we would obtain similar or completely different results.
For this study, we selected an open source Java Spring web application called Petclinic.45 The reasons for choosing Petclinic were that it has open source implementations available in both monolithic45 and microservice46 architectures and that it uses recent versions of the Spring Boot and Spring Cloud projects. The main idea was to select a few modules of the monolith to be migrated using our approach and to reuse the code of the microservices already available in the open source implementation. The goal was to investigate the same research questions as in the previous case study, now comparing against the application's original monolithic version.
F I G U R E 15 Test infrastructure of the Petclinic application in the AWS Cloud [Color figure can be viewed at wileyonlinelibrary.com]
In order to answer the previous research questions, we conducted load tests (with 100, 200, and 300 users) on the monolithic Petclinic application and compared the same tests with the Petclinic application deployed in a hybrid topology following our migration approach. As in the previous case study, the same developer (the main researcher and first author of this article) was responsible for the implementation and conducted the load tests in the cloud. Figure 15 shows the hybrid deployment topology of the Petclinic application and the testing tools used in our study, all deployed in the AWS Cloud. The test infrastructure comprises the following:
• Monolith instance server: Contains the Petclinic application deployed in a single AWS Instance (Virtual Machine) of
type T3.micro. According to Amazon,31 this instance type has a capacity of 2 virtual CPUs (based on a 2.5 GHz Intel
Scalable Processor) and 1 GB of memory. Petclinic is a Spring web application that follows the Model View Controller
architectural pattern, with the user interface implemented in the view layer, the business logic as Java components
(Owners, Vets, Visits) and the persistence layer using JPA. The Petclinic application also includes the Migration API
component provided by our approach that contains the AOP and reflection code required to perform the migrations.
All the previous components run as a single application in this server.
• Microservice infrastructure server: Contains the microservice selected for migration (owners) alongside common microservices infrastructure patterns, such as an API gateway and service discovery, as well as our data reversion migration microservice (an illustrative gateway route sketch is shown after this list). These services execute as separate processes on the same virtual machine of type T3.micro.
• Database server: The monolithic database and the micro databases are hosted in a MySQL DBMS managed by AWS
Relational Database Service (Amazon RDS). They are hosted together in a single T3.micro instance.
• Test environment server: Hosts the Gatling load test tool that is used to issue the requests for a set of operations related to
the Petclinic functionalities with different loads (100, 200, and 300 users) and collects response time data. This instance
also hosts the Prometheus monitoring tool that periodically collects metrics related to CPU and memory consumption
on the monolith and microservice servers.
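As an illustration of how the API gateway and discovery items above fit together, the sketch below defines a route with Spring Cloud Gateway's Java DSL; the route id, path, and service id are assumptions and not necessarily the configuration shipped with the open source project, whose gateway we reused.

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative route: requests for the migrated owners functionality are routed
// through the gateway to the owners microservice registered in the discovery
// server ("lb://" resolves the service id via discovery).
@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator petclinicRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("owners", r -> r.path("/api/owners/**").uri("lb://owners-service"))
                .build();
    }
}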
This section investigates the first and second research questions. Our goal was to check if the results for the Pet-
clinic application follow a similar trend as shown in the first case study. We performed load tests to compare the
F I G U R E 16 100 users load. Response times (1), CPU usage (2), and memory usage (3) for pure monolith (A) and our approach (B)
[Color figure can be viewed at wileyonlinelibrary.com]
pure monolith with the hybrid topology presented in the previous section following our approach. The tests consist
of requests issued by the Gatling load test tool that generate loads based on 100, 200, and 300 users performing sev-
eral operations on the Petclinic application. The tests were executed separately for each approach, and all variables that might influence the results, such as the virtual machine types used, the database, and the operating systems, were kept the same.
Figure 16 shows the results for response time, CPU, and memory consumption during the whole 5 minutes of the test execution with the 100-user load. Regarding response time, both approaches show very similar results for all percentiles in this case. For example, for the 99% percentile, the result for the pure monolith was 1359 milliseconds (ms), while for our approach it was 1637 ms. These response times are based on all operations included in the test and on all requests issued during the whole 5 minutes of the test with 100 users. Therefore, in this case, the usage of our approach in Petclinic presented practically the same response times (around 1.5 seconds) as the pure monolith. The figure also shows a similar CPU and memory usage pattern in the monolith for both approaches. During some moments of the test, as the microservices help with the processing in our approach, the monolith in the hybrid topology presents slightly lower resource consumption, since some tasks are executed by the microservices.
The results for the 200- and 300-user workloads followed a pattern very similar to that of the 100-user load. The response time gap between the two approaches was also nonexistent in practice. For example, with 200 users, the 99% percentile for the pure monolith was 4158 ms, while for our approach it was 4185 ms. Therefore, for both cases, the response time approximately doubled when we doubled the load. For 300 users and the same 99% percentile, the value for the pure monolith was 7013 ms, while for our approach it was 8097 ms. The CPU consumption remained similar, with CPU near 100% most of the time for the monolith, in a pattern very similar to that shown in Figure 16. As the application is mostly CPU bound, there were also no significant changes in memory consumption for the 200- and 300-user loads.
Revisiting (RQ1) Does the proposed approach introduce a response time overhead?, we can conclude that:
• The overhead caused by our approach for the Petclinic application in terms of response time was practically nonexistent. The main reason these results were better than those of the previous case study was the use of the latest Spring version in Petclinic, which shows less overhead from AOP and Reflection than the older Spring MVC version used with the HRcomercial application.
• For the Petclinic application, increasing the load did not present any significant difference in the response time results
obtained with the pure monolith or the hybrid topology based on our approach.
Revisiting (RQ2) How does the proposed approach consume CPU and memory resources?, we can conclude that:
• Both topologies showed intensive CPU usage, near 100% most of the time for the monolith. For some periods, the microservices in the hybrid topology help to alleviate the CPU consumption by processing some of the tasks.
• When the load was increased, there was no significant change in the CPU consumption profile.
• Regarding memory consumption, the tests did not show much difference for any load, with the maximum memory usage, of around 20 MB, observed on the monolith's virtual machine.
We selected a microservice candidate (OwnerController) from the Petclinic application and implemented one forward cycle (from monolith to microservice) and one backward cycle (from microservice to monolith) using our approach. The original Petclinic monolith has a size of 1003 lines of code, 179 statements, 92 functions/operations, 24 classes, and 25 files.
Figure 17 shows on the left the original OwnerController class, which was completely reused by our approach (shown on the right) without changing a single line of code. In this implementation, not even the small changes we previously had to make in the case of HRComercial were required, as we used a recent version of Spring Boot. Our aspect and reflection code was fully able to identify the OwnerController class and redirect the calls to the microservice, as shown previously in Figure 15. Naturally, for our approach to work, our Migration API component must be included as a library inside the monolith project. Regarding the microservice, we reused the implementation provided by the open source project. Had this not been available, we could also have built a new one from scratch based on the original monolithic code with low effort. Performing the backward cycle is also completely hassle free, as the only action required from the developer is to change a boolean flag in a properties file.
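To make the zero-change claim concrete, the sketch below shows how a pointcut can target OwnerController purely by expression, reusing the MigrationFlags sketch shown earlier; the package name follows the open source Petclinic project, while the flag key, gateway URL, and argument handling are simplifications and assumptions, not our actual Migration API code.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Sketch only: the pointcut expression selects OwnerController's methods, so the
// controller source inside the monolith stays untouched.
@Aspect
@Component
public class OwnerMigrationAspect {

    private final RestTemplate rest = new RestTemplate();

    @Around("execution(* org.springframework.samples.petclinic.owner.OwnerController.*(..))")
    public Object intercept(ProceedingJoinPoint pjp) throws Throwable {
        if (!MigrationFlags.isEnabled("owners")) {
            return pjp.proceed(); // flag off: the call stays inside the monolith
        }
        // Flag on: forward to the owners microservice through the API gateway. A real
        // redirection maps each intercepted method to its own endpoint and return type;
        // a single illustrative endpoint is used here.
        Object firstArg = pjp.getArgs().length > 0 ? pjp.getArgs()[0] : null;
        return rest.getForObject("http://api-gateway/api/owners/{id}", Object.class, firstArg);
    }
}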
Therefore, revisiting (RQ3) What is the effort involved during the “forward” and “backward” cycles?, we can conclude
that for the Petclinic application the only effort required in the forward cycle was to add our library in the monolith and to
implement the code of the newly created microservice reusing existing code. For the backward cycle, a single line change
in the properties file is the only action required for the developer.
4.2.5 Costs
Table 6 shows the costs for operating the Petclinic application in the AWS cloud considering different deployment topolo-
gies using various instance types.31
F I G U R E 17 Excerpt of the original monolithic code is on the left and the changes by our approach are on the right [Color figure can be viewed at wileyonlinelibrary.com]
T A B L E 6 Costs for operating the Petclinic application in the AWS cloud with different configurations. Topology cost columns: Homogeneous (Monolith + Database) and Hybrid (Monolith + Microservice); for each, the table lists the AWS Instance, Price/hour, and the Monthly and Yearly cost.
The homogeneous topology is comprised of two instances of the same type: one for the application server containing the Petclinic code and the other for the Database. The table shows the monetary costs
in US dollars for running the application with this topology for a month and for the whole year with different instance
types. For example, the homogeneous topology costs around $15 per month using a T3.micro instance type (2 virtual CPUs and 1 GiB of RAM). The instance types shown in the table double in capacity and price per hour, and it is up to the architects and developers of the application to choose the configuration that best balances capacity and cost while meeting the demand of the application users. This is why it is important to run load tests such as the ones we performed, so that it is possible to know which machine types are the most cost-effective for a specific load. The table covers deployment scenarios ranging from a simple T3.micro, costing around $182 per year, up to a T3.xlarge (8 virtual CPUs and 32 GiB of RAM), costing around $2915 per year, for operating Petclinic in the Homogeneous topology.
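The same back-of-the-envelope check applies here: assuming an on-demand price of roughly $0.0104 per hour for a T3.micro instance, two instances cost about 2 × 0.0104 × 730 ≈ $15 per month, or approximately $182 per year, consistent with the figures above, while a T3.xlarge, at about sixteen times that hourly price, accounts for the roughly $2915 per year of the largest configuration.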
Revisiting (RQ4) How does maintaining a hybrid topology affect costs?, we provide the details as follows. The costs for the hybrid topology consider the homogeneous instances (monolith and database) plus an additional virtual machine for the microservices. Our suggestion is that all microservices are hosted on the same virtual machine and can be placed in containers such as Docker. If the application deployed different services in separate virtual machines, then, depending on the instance type costs, the topology could become very expensive to operate. It is not our goal to assess many different instance types, nor to provide guidance on which type to select for a given application; for these issues, we recommend other works, including our own previous work.43,44 The important point here is that, for the hybrid topology, the additional virtual machine is a necessary cost (around 50% more expensive) that facilitates the migration from the monolith to the microservices architecture. Once the complete monolith has been migrated, its machine can be completely retired.
4.2.6 Threats to validity
• Internal and construct validity: In this case study, the developer of the approach and main researcher was the person who performed the migration. Since our goal was not to perform a user study involving different developers and comparing different approaches, we believe that the main expert in our approach is the best subject to perform the migration. For future work, we plan to involve postgraduate students and professional developers to conduct experimental studies at a larger scale, which will require more time and effort.
• Conclusion and external validity: Regarding the generalization of our results, this case study confirmed results similar to those of the first case study (e.g., response time overhead, resource consumption, effort, and cost) for the research questions, although we still cannot generalize the findings. The longer experimental study we plan to conduct in the future will help to validate these findings and assess other dimensions.
4.2.7 Discussion
This section provides a discussion on how the evaluation of our approach with the Petclinic application relates the
Research Questions (RQ1-RQ4) back to the migration requirements (MR01-MR07).
The decomposition units selected for migration to microservices were a few methods of the Customers class and not the complete monolith (MR01). Gradually, one or more methods of the monolith were selected as candidates for migration, while others continued to be requested directly in the monolith (MR02). These can be deactivated and returned to the monolith (MR07, "backward" path) or migrated again on another demand. We showed that these changes could be done with a scalable strategy and almost no overhead when compared to the original monolith (RQ1). Differently from HRComercial, Petclinic was based on the most recent version of the Spring Boot framework, with an improved Spring AOP and Reflection implementation, showing practically the same resource consumption (RQ2) as the monolith for all loads tested.
There was no need to significantly modify the monolithic code (MR03 and MR04) to perform the migration, either to
go "forward" or "backward." In the forward cycle for Petclinic, also due to the use of the latest Spring version, no code changes were required in the original customers class selected for migration (RQ3). Decoupling is achieved by
using AOP and reflection to intercept the calls to the monolithic modules and redirect them to the microservices. The
backward cycle remains straightforward in our approach as the only action needed is a change in a properties file and the
whole process can be done with zero downtime (MR05).
While migrating to the microservice, the entity and data of the customers module were extracted (decomposed) from the monolithic database and migrated to the micro-database of the corresponding microservice, and new state changes were kept in the microservice database after the migration. When the migration was undone, the data were transparently updated in the monolithic database (MR06 and MR07). We also showed that all these data reversion changes are done automatically (RQ3) by our approach.
Finally, we showed in terms of costs (RQ4) that we provide a hybrid (MR04) and stepwise migration strategy that needs to keep the monolith online, alongside the microservices machine, during the process. For this reason, an extra machine (costing around 50% more than the original monolith) is needed during the migration, which is common practice for hybrid approaches, keeping our approach at a migration cost similar to the state of the art.
5 CONCLUSIONS AND FUTURE WORK
The microservices approach has gained a lot of attention over recent years due to its successful use in large projects such as Netflix. As good results started to be reported, many companies considered migrating existing legacy systems (monoliths) to microservices with the goal of overcoming limitations related to large code bases, lack of scalability, and the presence of single points of failure.47 As the approach started to be used in practice, many projects found that migrating existing systems that are already in production is a challenging task requiring a careful stepwise approach in order to gradually move parts of the monolith to the newly built microservices.3,19,48 As some practical projects have shown, problems may arise during the migration journey, and some companies even reported the need to cancel the migration effort and completely revert the application back to its monolithic form.10,11
This article presented an approach based on AOP that enables not only a gradual, easy migration to microservices from the code and data perspectives but also a migration with very low effort, changing only a few lines of code (if any). Our approach presented promising results when compared to a state-of-the-art approach implementation, showing that it does not introduce significant performance overhead, maintains similar costs, and requires a lower coding effort. If the developers regret the migration and need to revert to the previous monolithic version, we showed that in our approach not a single line of code or data has to be changed manually and there is no need to stop the application, which is a key novel contribution.
For future work, we plan to investigate whether the use of compile-time weaving and of Java Virtual Machines tuned for reflection can improve the performance of our approach even further. Moreover, we want to test the approach with different virtual machine profiles and with other industrial applications to further strengthen our findings.
Therefore, as microservices projects continue to be used in practice, and similar to what happened with other technologies in the past, the approach is definitely very useful, but it is not a silver bullet49 that will fit any application development. We believe that our approach can be a good choice for risky migration projects that may need to be reverted if problems occur.
ORCID
Américo Falcone Sampaio https://orcid.org/0000-0003-0267-268X
REFERENCES
1. Richardson C, Smith F. Microservices Patterns: With Examples in Java. New York: Manning Publications; 2019.
2. Newman S. Building Microservices Designing Fine-Grained Systems. Vol 1. California: O’Reilly Media; 2015.
3. Balalaie A, Heydarnoori A, Jamshidi P. Microservices Migration Patterns. Technical Report TRSUT-CE-ASE-2015-01. Sharif University of
Technology: Automated Software Engineering Group, Sharif University of Technology; 2015.
4. Mazzara M, Dragoni N, Bucchiarone A, Giaretta A, Larsen ST, Dustdar S. Microservices: migration of a mission critical system. IEEE
Trans Serv Comput. 2018;1(1):1-14.
5. Balalaie A, Heydarnoori A, Jamshidi P. Migrating to cloud-native architectures using microservices: an experience report. Paper presented
at: Proceedings of the Advances in Service-Oriented and Cloud Computing - Workshops of ESOCC 2015. Taormina, Italy; 2015, https://
doi.org/10.1007/978-3-319-33313-7_15.
6. Fritzsch J, Bogner J, Zimmermann A, Wagner S. From monolith to microservices: a classification of refactoring approaches. Software Engi-
neering Aspects of Continuous Development and New Paradigms of Software Production and Deployment. Springer International Publishing.
2019;1(1):128–141. http://arxiv.org/abs/1807.10059.
7. Di Francesco P, Lago P, Malavolta I. Migrating towards microservice architectures: an industrial survey. Paper presented at: Proceedings
of the 2018 IEEE International Conference on Software Architecture (ICSA). Seattle, USA; 2018:2901-2909.
8. Lu N, Glatz G, Peuser D. Moving mountains - practical approaches for moving monolithic applications to microservices. Paper presented
at: Proceedings of the International Conference on Microservices (Microservices 2019). Dortmund, Germany; 2019.
9. Francesco PD, Lago P, Malavolta I. Migrating towards microservice architectures: an industrial survey; 2018. https://doi.org/10.1109/
ICSA.2018.00012.
10. Noonan A. Goodbye microservices: from hundreds of problem children to one superstar; 2018. https://dzone.com/articles/goodbye-
microservices-from-100s-of-problem-childre. Accessed February 03, 2020.
11. Little M. Why segment returned to a monolith from microservices; 2018. https://www.infoq.com/news/2018/07/segment-microservices/.
Accessed February 03, 2020.
12. Kiczales G, Lamping J, Mendhekar A, et al. Aspect-oriented programming. In: Akşit M, Matsuoka S, eds. Paper presented at: Proceedings
of the 11th European Conference ECOOP’97 - Object-Oriented Programming. Berlin, Heidelberg/Germany: Springer; 1997:220-242. https://
doi.org/10.1007/BFb0053381.
13. Richardson C. Refactoring a monolith into microservices; 2016. https://www.nginx.com/blog/refactoring-a-monolith-into-microservices.
Accessed December 18, 2018.
14. Richardson C, Smith F. Microservices From Design to Deployment. Vol 1. 1. USA: NGINX; 2016.
15. Balalaie A, Heydarnoori A, Jamshidi P. Microservices architecture enables devops: migration to a cloud-native architecture. IEEE Softw.
2016;33(3):42-52. https://doi.org/10.1109/MS.2016.64.
16. Levcovitz A, Terra R, Valente MT. Towards a technique for extracting microservices from monolithic enterprise systems. 3rd Brazilian
Workshop on Software Visualization, Evolution and Maintenance (VEM). 2016;1(1). http://arxiv.org/abs/1605.03175.
17. Eisele M. Modern Java EE Design Patterns: Building Scalable Architecture for Sustainable Enterprise Development. California: O’Reilly
Media; 2016.
18. Fowler M. The strangler pattern; 2004. https://martinfowler.com/bliki/StranglerFigApplication.html Accessed February 03, 2020.
19. Knoche H, Hasselbring W. Using microservices for legacy software modernization. IEEE Softw. 2018;35:44-49. https://doi.org/10.1109/
MS.2018.2141035.
20. Abdullah M, Iqbal W, Erradi A. Unsupervised learning approach for web application auto-decomposition into microservices. J Syst Softw.
2019;151:243-257. https://doi.org/10.1016/j.jss.2019.02.031.
21. Furda A, Fidge C, Zimmermann O, Kelly W, Barros A. Migrating enterprise legacy source code to microservices: on multi-tenancy,
statefulness and data consistency. IEEE Softw. 2018;35:63-72. https://doi.org/10.1109/MS.2017.440134612.
22. Balalaie A, Heydarnoori A, Jamshidi P, Tamburri DA, Lynn T. Microservices migration patterns. Softw Pract Exper. 2018;48(11):2019-2042.
https://doi.org/10.1002/spe.2608.
23. Gouigoux J, Tamzalit D. From monolith to microservices: lessons learned on an industrial migration to a web oriented architecture. Paper
presented at: Proceedings of the 2017 IEEE International Conference on Software Architecture Workshops (ICSAW). Gothenburg, Sweden;
2017:62-65.
24. Pigazzini I, Arcelli Fontana F, Maggioni A. Tool support for the migration to microservice architecture: an industrial case study. In: Bures T,
Duchien L, Inverardi P, eds. Software Architecture. Cham, Switzerland: Springer International Publishing; 2019:247-263.
25. Edson Y. Migrating to Microservice Databases from Relational Monolith to Distributed Data. Vol 1. California: O’Reilly Media; 2017.
26. Fowler M, Lewis J. Microservices: a definition of this new architectural term; 2014. www.martinfowler.com/articles/microservices.html.
Accessed December 18, 2018.
27. Bass L, Clements P, Kazman R. Software Architecture in Practice. 3rd ed. Boston: Addison-Wesley Professional; 2012.
28. Spring. Spring cloud gateway; 2017. https://spring.io/projects/spring-cloud-gateway. Accessed September 17, 2018.
29. Spring. Service registration and discovery; 2017. https://spring.io/guides/gs/service-registration-and-discovery/. Accessed September 17,
2018.
30. Spring. Web on servlet stack; 2018. https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html. Accessed Septem-
ber 18, 2018.
31. Amazon EC2 instance types. https://aws.amazon.com/ec2/instance-types/?nc1=h_ls. Accessed August 04, 2020.
32. Group TPGD. PostgreSQL: the world’s most advanced open source relational database. https://www.postgresql.org/. Accessed September
17, 2018.
33. Fenglc. pgadmin4. https://hub.docker.com/r/fenglc/pgadmin4. Accessed September 19, 2018.
34. Corp G. Gatling. https://gatling.io/. Accessed September 19, 2018.
35. Authors P. Prometheus. https://prometheus.io/. Accessed September 19, 2018.
36. Rouesnel W. Postgres exporter. https://github.com/wrouesnel/postgres_exporter. Accessed September 19, 2018.
37. The JavaTM Tutorials - reflection API. https://docs.oracle.com/javase/tutorial/reflect/index.html. Accessed June 04, 2020.
38. Kojarski S, Lieberherr K, Lorenz DH, Hirschfeld R. Aspectual reflection. In Software engineering Properties of Languages for Aspect
Technologies (SPLAT, AOSD). Boston; 2003.
39. Hilsdale E, Hugunin J. Advice weaving in AspectJ. Paper presented at: Proceedings of the 3rd International Conference on Aspect-Oriented
Software Development, AOSD ’04: 2004:26-35; Association for Computing Machinery, New York, NY, https://doi.org/10.1145/976270.
976276.
40. Spring AOP. https://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/aop.html. Accessed June 04, 2020.
41. Mißbach M, Gibbels P, Karnstädt J, Stelzel J, Wagenblast T. Adaptive Hardware Infrastructures for SAP. Quincy, MA: Galileo Press, 2005.
https://books.google.com.br/books?id=MczNAAAACAAJ.
42. Bolch G, Greiner S, de Meer H, Trivedi KS. Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer
Science Applications. New York, NY: Wiley-Interscience; 1998.
43. Cunha M, Mendonça NC, Sampaio A. Cloud Crawler: a declarative performance evaluation environment for infrastructure-as-a-service
clouds. Concurr Comput Pract Exp. 2017;29(1):29. https://doi.org/10.1002/cpe.3825.
44. Gonçalves M, Cunha M, Mendonça NC, Sampaio A. Performance inference: a novel approach for planning the capacity of IAAS cloud
applications. In: Pu C, Mohindra A, eds. Paper presented at: Proceedings of the 8th IEEE International Conference on Cloud Computing,
CLOUD; June 27 - July 2. New York, NY: IEEE Computer Society; 2015:813-820. https://doi.org/10.1109/CLOUD.2015.112.
45. Spring petclinic. https://github.com/spring-projects/spring-petclinic. Accessed August 04, 2020.
46. Spring petclinic microservice. https://github.com/spring-petclinic/spring-petclinic-microservices. Accessed August 04, 2020.
47. Bucchiarone A, Dragoni N, Dustdar S, Larsen ST, Mazzara M. From monolithic to microservices: an experience report from the banking
domain. IEEE Softw. 2018;35(3):50-55.
48. Jamshidi P, Pahl C, Mendonça NC, Lewis J, Tilkov S. Microservices: the journey so far and challenges ahead. IEEE Softw. 2018;35(3):24-35.
49. Brooks F. No silver bullet: essence and accidents of software engineering. Computer. 1987;20(4):10-19.
How to cite this article: Freire AFAA, Sampaio AF, Carvalho LHL, Medeiros O, Mendonça NC. Migrating
production monolithic systems to microservices using aspect oriented programming. Softw Pract Exper.
2021;51:1280–1307. https://doi.org/10.1002/spe.2956