
The InfoQ eMag / Issue #96 / July 2021

Building Microservices in Java

Spring Boot Tutorial: Building Microservices Deployed to Google Cloud
Project Helidon Tutorial: Building Microservices with Oracle's Lightweight Java Framework
Getting Started with Quarkus

FACILITATING THE SPREAD OF KNOWLEDGE AND INNOVATION IN PROFESSIONAL SOFTWARE DEVELOPMENT

Building Microservices in Java

IN THIS ISSUE

06  Spring Boot Tutorial: Building Microservices Deployed to Google Cloud
14  Getting Started with Quarkus
20  Project Helidon Tutorial: Building Microservices with Oracle's Lightweight Java Framework
29  Virtual Panel: the MicroProfile Influence on Microservices Frameworks
PRODUCTION EDITOR Ana Ciobotaru / COPY EDITORS Susan Conant DESIGN Dragos Balasoiu / Ana Ciobotaru
GENERAL FEEDBACK [email protected] / ADVERTISING [email protected] / EDITORIAL [email protected]
CONTRIBUTORS

Sergio Felix is a Software Engineer in Google Cloud, where he works in Cloud Engineering Productivity, an organization within Google Cloud that focuses on making development frictionless and improving Product & Eng Excellence.

Roberto Cortez is a passionate Java Developer with more than 10 years of experience. He is involved in the Open Source Community to help other individuals spread the knowledge about Java technologies. He is a regular speaker at conferences like JavaOne, Devoxx, Devnexus, JFokus, and others. He leads the Coimbra JUG and founded the JNation Conference in Portugal. When he is not working, he hangs out with friends, plays computer games, and spends time with family.

Cesar Hernandez is a Senior Software Engineer at Tomitribe with over 14 years of experience in Enterprise Java Applications. He is a Java Champion, Duke's Choice Award winner, Oracle Groundbreaker Ambassador, Open Source advocate, Apache and Eclipse Committer, teacher, and public speaker. When Cesar is away from a computer, he enjoys spending time with his family, traveling, and playing music with the Java Community Band, The Null Pointers. Follow Cesar on Twitter.

Emily Jiang is a Java Champion. She is Liberty Microservices Architect and Advocate, and a Senior Technical Staff Member (STSM) at IBM, based at Hursley Lab in the UK. Emily is a MicroProfile guru, has been working on MicroProfile since 2016, and leads the specifications of MicroProfile Config, Fault Tolerance and Service Mesh. She was a CDI Expert Group member. She is passionate about MicroProfile and Jakarta EE.

Otavio Santana is a passionate software engineer focused on Cloud and Java technology. He has experience mainly in polyglot persistence and high-performance applications in finance, social media, and e-commerce. Otavio is a member of several Expert Groups, an Expert Leader in several JSRs, and a member of the JCP executive committee. He works on several Apache and Eclipse Foundation projects such as Apache Tamaya, MicroProfile, and Jakarta EE.

Erin Schnabel is a Senior Principal Software Engineer and maker of things at Red Hat. She is a Java Champion with 20 years under her belt as a developer, technical leader, architect and evangelist, and she strongly prefers being up to her elbows in code.
A LETTER FROM THE EDITOR

Michael Redlich is a Senior Research Technician at ExxonMobil Research & Engineering in Clinton, New Jersey (views are his own) with experience in developing custom scientific laboratory and web applications for the past 30 years. He also has experience as a Technical Support Engineer at Ai-Logix, Inc. (now AudioCodes) where he provided technical support and developed telephony applications for customers. His technical expertise includes object-oriented design and analysis, relational database design and development, computer security, C/C++, Java, Python, and other programming/scripting languages. His latest passions include MicroProfile, Jakarta EE, Helidon, Micronaut and MongoDB.

Over the past few years, the Java community has been offered a wide variety of microservices-based frameworks to build enterprise, cloud-native and serverless applications. Perhaps you've been asking yourself questions such as: What are the benefits of building and maintaining a microservices-based application? Should I migrate my existing monolith-based application to microservices? Is it worth the effort to migrate? Which microservices framework should I commit to using? What are MicroProfile and Jakarta EE? What happened to Java EE? How does Spring Boot fit into all of this? What is GraalVM?

For those of us who may be old enough to remember, the concept of microservices emerged from the service-oriented architecture (SOA) that was introduced nearly 20 years ago. SOA applications used technologies such as the Web Services Description Language (WSDL) and Simple Object Access Protocol (SOAP) to build enterprise applications. Today, however, the Representational State Transfer (REST) protocol is the primary method for microservices to communicate with each other via HTTP.

Since 2018, we've seen three new open-source frameworks - Micronaut, Helidon and Quarkus - emerge to complement already existing Java middleware open-source products such as Open Liberty, WildFly, Payara and Tomitribe. We have also seen the emergence of GraalVM, a polyglot virtual machine and platform created by Oracle Labs that, among other things, can convert applications to native code.

In this eMag, you'll be introduced to some of these microservices frameworks, to MicroProfile, a set of APIs that optimizes enterprise Java for a microservices architecture, and to GraalVM. We've hand-picked three full-length articles and facilitated a virtual panel to explore these frameworks.

In the first article, Sergio Felix, senior software engineer at Google, provides a step-by-step tutorial on how to deploy applications to the Google Cloud Platform. Starting with a basic Spring Boot application, Sergio will then containerize the application and deploy it to Google Kubernetes Engine using Skaffold and the Cloud Code IntelliJ plugin.

In the second article, Roberto Cortez, principal software engineer at Red Hat, introduces you to Quarkus, explains the motivation behind its creation and demonstrates how it is different from the other frameworks. Dubbed "supersonic subatomic Java," Quarkus has received a significant amount of attention within the Java community since its initial release in 2019.

In the third article, I will introduce you to Project Helidon, Oracle's lightweight Java microservices framework. I will explain the differences between Helidon SE and Helidon MP, explore the core components of Helidon SE, show you how to get started, and introduce a movie application built on top of Helidon MP. I also demonstrate how to convert a Helidon application to native code with GraalVM.

And finally, we present a virtual panel featuring an all-star cast of Java luminaries. Cesar Hernandez, senior software engineer at Tomitribe, Emily Jiang, Liberty microservices architect and advocate at IBM, Otavio Santana, staff software engineer at xgeeks, and Erin Schnabel, senior principal software engineer at Red Hat, discuss the MicroProfile influence on microservices frameworks. There was also a discussion on how developers and organizations are reverting back to monolith-based application development.

We hope you enjoy this edition of the InfoQ eMag. Please share your feedback via [email protected] or on Twitter.
Spring Boot Tutorial: Building Microservices Deployed to Google Cloud

by Sergio Felix, Software Engineer

With the increasing popularity of microservices in the industry, there's been a boom in technologies and platforms from which to choose to build applications. Sometimes it's hard to pick something to get started with. In this article, I'll show you how to create a Spring Boot-based application that leverages some of the services offered by Google Cloud. This is the approach we've been using in our team at Google for quite some time. I hope you find it useful.

The Basics

Let's start by defining what we will build. We'll begin with a very basic Spring Boot-based application written in Java. Spring is a mature framework that allows us to quickly create very powerful and feature-rich applications.

We'll then make a few changes to containerize the application using Jib (which builds optimized Docker and OCI images for your Java applications without Docker) and a distroless version of Java 11. Jib works with both Maven and Gradle. We'll use Maven for this example.

Next, we will create a Google Cloud Platform (GCP) project and use Spring Cloud GCP to leverage Cloud Firestore. Spring Cloud GCP allows Spring-based applications to easily consume Google services like databases (Cloud Firestore, Cloud Spanner or even Cloud SQL), Google Cloud Pub/Sub, Stackdriver for logging and tracing, etc.

After that, we'll make changes in our application to deploy it to Google Kubernetes Engine (GKE). GKE is a managed, production-ready environment for deploying containerized Kubernetes-based applications.

Finally, we will use Skaffold and Cloud Code to make development easier. Skaffold handles the workflow for building, pushing and deploying your application. Cloud Code is a plugin for VS Code and IntelliJ that works with Skaffold and your IDE so that you can do things like deploy to GKE with the click of a button. In this article, I'll be using IntelliJ with Cloud Code.

Setting up our Tools

Before we write any code, let's make sure we have a Google Cloud project and all the tools installed.

Creating a Google Cloud Project

Setting up a GCP project is easy. You can accomplish this by following these instructions. This new project will allow us to deploy our application to GKE, get access to a database (Cloud Firestore) and will also allow us to have a place where we can push our images when we containerize the application.

Install Cloud Code

Next, we'll install Cloud Code. You can follow these instructions on how to install Cloud Code in IntelliJ. Cloud Code manages the installation of Skaffold and the Google Cloud SDK that we'll use later in the article. Cloud Code also allows us to inspect our GKE deployments and services. Most importantly, it also has a clever GKE development mode that continuously listens for changes in your code; when it detects a change, it builds the app, builds the image, pushes the image to your registry, deploys the application to your GKE cluster, starts streaming logs and opens a localhost tunnel so you can test your service locally. It's like magic!

In order to use Cloud Code and proceed with our application, let's make sure that you log in using the Cloud Code plugin by clicking on the icon that should show up on the top right of your IntelliJ window.

In addition, we'll run some commands to make sure that the application is running on your machine and can communicate with the services running on your project in Google Cloud. Let's make sure we are pointing to the correct project and authenticate you using:

gcloud config set project <YOUR PROJECT ID>
gcloud auth login

Next, we'll make sure your machine has application credentials to run your application locally:

gcloud auth application-default login

Enabling the APIs

Now that we have everything set up, we need to enable the APIs we will be using in our application:

• Google Container Registry API - This will allow us to have a registry where we can privately push our images.

• Cloud Firestore in Datastore mode - This will allow us to store entities in a NoSQL database. Make sure to select Datastore mode so that we can use Spring Cloud GCP's support for it.

You can manage the APIs that are enabled in your project by visiting your project's API Dashboard.

Creating our Dog Service

First things first! We need to get started with a simple application we can run locally. We'll create something important like a Dog microservice. Since I'm using IntelliJ Ultimate, I'll go to `File -> New -> Project…` and select "Spring Initializr". I'll select Maven, Jar, Java 11 and change the name to something important like `dog` as shown below.

Click next and add: Lombok, Spring Web and GCP Support.

If all went well, you should now have an application that you can run. If you don't want to use IntelliJ for this, use the equivalent in your IDE or use Spring's Initializr.

Next, we'll add a POJO for our Dog service and a couple of REST endpoints to test our application. Our Dog object will have a name and an age, and we'll use Lombok's @Data annotation to save us from writing setters, getters, etc. We'll also use the @AllArgsConstructor annotation to create a constructor for us. We'll use this later when we are creating Dogs.

@Data
@AllArgsConstructor
public class Dog {
    private String name;
    private int age;
}

We'll create a controller class for the Dog and the REST endpoints:

@RestController
@Slf4j
public class DogController {

    @GetMapping("/api/v1/dogs")
    public List<Dog> getAllDogs() {
        log.debug("->getAllDogs");
        return ImmutableList.of(new Dog("Fluffy", 5),
                new Dog("Bob", 6),
                new Dog("Cupcake", 11));
    }

    @PostMapping("/api/v1/dogs")
    public Dog saveDog(@RequestBody Dog dog) {
        log.debug("->saveDog {}", dog);
        return dog;
    }
}

The endpoints return a list of predefined dogs, and the saveDog endpoint doesn't really do much, but this is enough for us to get started.

Using Cloud Firestore

Now that we have a skeleton app, let's try to use some of the services in GCP. Spring Cloud GCP adds Spring Data support for Google Cloud Firestore in Datastore mode. We'll use this to store our Dogs instead of using a simple list. Users will now also be able to actually save a Dog in our database.

To start, we'll add the Spring Cloud GCP Data Datastore dependency to our POM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-data-datastore</artifactId>
</dependency>

Now, we can modify our Dog class so that it can be stored. We'll add an @Entity annotation and an @Id annotation to a value of type Long to act as an identifier for the entity:

@Entity
@Data
@AllArgsConstructor
public class Dog {
    @Id private Long id;
    private String name;
    private int age;
}

Now we can create a regular Spring Repository class as follows:

@Repository
public interface DogRepository extends DatastoreRepository<Dog, Long> {}

As usual with Spring Repositories, there is no need to write implementations for this interface since we'll be using very basic methods.
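If you later need more than the built-in CRUD operations, Spring Data's derived query methods also work with this repository. The method below is not part of the original tutorial - it is a minimal sketch, assuming you want to look dogs up by their name field:

@Repository
public interface DogRepository extends DatastoreRepository<Dog, Long> {

    // Spring Data derives the Datastore query from the method name;
    // no implementation is required.
    List<Dog> findByName(String name);
}

A controller could then call dogRepository.findByName("Fluffy") without writing any query code.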
containerize the application we created above. Let’s
We can now modify the controller class. We’ll inject instead use Jib. One of the things I like about Jib
the DogRepository into the DogController then is that it separates your application into multiple
modify the class to use the repository as follows: layers, splitting dependencies from classes. This
allows us to have faster builds so that you don’t
@RestController have to wait for Docker to rebuild your entire Java
@Slf4j
application - just deploy the layers that changed. In
@RequiredArgsConstructor
public class DogController { addition, Jib has a Maven plugin that makes it easy
to set up by just modifying the POM file in your
private final DogRepository application.
dogRepository;

@GetMapping(“/api/v1/dogs”) To start using the plugin, we’ll need to modify our


public Iterable<Dog> getAllDogs() { POM file to add the following:
log.debug(“->getAllDogs”);
return dogRepository.findAll(); <plugin>
} <groupId>com.google.cloud.
tools</groupId>
@PostMapping(“/api/v1/dogs”) <artifactId>jib-maven-plugin</
public Dog saveDog(@RequestBody Dog artifactId>
dog) { <version>1.8.0</version>
log.debug(“->saveDog {}”, dog); <configuration>
return dogRepository.save(dog); <from>
} <image>gcr.io/distroless/
} java:11</image>
</from>
Note that we are using Lombok’s @ <to>
RequiredArgsConstructor to create a <image>gcr.io/<YOUR_GCP_
REGISTRY>/${project.artifactId}</image>
constructor to inject our DogRepository. When
</to>
you run your application, the endpoints will call </configuration>
your Dog service that will attempt to use Cloud </plugin>
Firestore to retrieve or store the Dogs.

9
Notice we are using Google's distroless image for Java 11. "Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.

Restricting what's in your runtime container to precisely what's necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years. It improves the signal-to-noise of scanners (e.g. CVE) and reduces the burden of establishing provenance to just what you need.

Make sure to replace your GCP registry in the code above to match the name of your project.

After doing this, you can attempt to build and push the image of the app by running a command like:

$ ./mvnw install jib:build

This will build and test the application, create the image, and then finally push the newly created image to your registry.

NOTE: It's usually a common good practice to use a distro with a specific digest instead of using "latest". I'll leave it up to the reader to decide what base image and digest to use depending on the version of Java you are using.

Deploying the Dog Service

At this point, we are almost ready to deploy our application. In order to do this, let's first create a GKE cluster where we will deploy our application.

Creating a GKE Cluster

To create a GKE cluster, follow these instructions. You'll basically want to visit the GKE page, wait for the API to get enabled and then click on the button to create a cluster. You may use the default settings, but just make sure that you click on the "More options" button to enable full access to all the Cloud APIs.

This allows your GKE nodes to have permissions to access the rest of the Google Cloud services. After a few moments the cluster will be created (please check the image on page 11).

Applications living inside of Kubernetes

Kubernetes likes to monitor your application to ensure that it's up and running. In the event of a failure, Kubernetes knows that your application is down and that it needs to spin up a new instance. To do this, we need to make sure that our application is able to respond when Kubernetes pokes it. Let's add an actuator and Spring Cloud Kubernetes.

Add the following dependencies to your POM file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

If your application has an application.properties file inside the src/main/resources directory, remove it and create an application.yaml file with the following contents:

spring:
  application:
    name: dog-service

management:
  endpoint:
    health:
      enabled: true

This adds a name to our application and exposes the health endpoint mentioned above. To verify that this is working, you may visit your application at localhost:8080/actuator/health. You should see something like:

{
  "status": "UP"
}

Configuring to run in Kubernetes

In order for us to deploy our application to our new GKE cluster, we need to write some additional YAML. We need to create a deployment and a service. Use the deployment that follows. Just remember to replace the GCR name with the one from your project:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dog-service
spec:
  selector:
    matchLabels:
      app: dog-service
  replicas: 1
  template:
    metadata:
      labels:
        app: dog-service
    spec:
      containers:
      - name: dog-service
        image: gcr.io/<YOUR GCR REGISTRY NAME>/dog
        ports:
        - containerPort: 8080
        livenessProbe:
          initialDelaySeconds: 20
          httpGet:
            port: 8080
            path: /actuator/health
        readinessProbe:
          initialDelaySeconds: 30
          httpGet:
            port: 8080
            path: /actuator/health

Add a service.yaml file with the following:

apiVersion: v1
kind: Service
metadata:
  name: dog-service
spec:
  type: NodePort
  selector:
    app: dog-service
  ports:
  - port: 8080
    targetPort: 8080

The deployment contains a few changes to the readiness and liveness probes. This is so that Kubernetes uses these endpoints to poke the app to see if it's alive. The service exposes the deployment so that other services can consume it.

After doing this, we can now start using the Cloud Code plugin we installed at the beginning of this article. From the Tools menu, select: Cloud Code -> Kubernetes -> Add Kubernetes Support. This will automatically add a Skaffold YAML to your application and set up a few things for you so that you can deploy to your cluster by clicking a button.
To confirm that all this worked, you can inspect the configuration from the Run/Debug Configurations section in IntelliJ. If you click on the Develop on Kubernetes run configuration, it should have automatically picked up your GKE cluster and Kubernetes configuration files and should look something like this:

Click OK and then click on the green "Play" button at the top right.

After that, Cloud Code will build the app, create the image, deploy the application to your GKE cluster and stream the logs from Stackdriver into your local machine. It will also open a tunnel so that you can consume your service via localhost:8080.

You can also peek at the workloads page in the Google Cloud Console (Figure 8).

Conclusions

Congratulations if you made it this far! The application we built in this article showcases some key technologies that most microservices-based applications would use: a fast, fully managed, serverless, cloud-native NoSQL document database (Cloud Firestore); GKE, a managed, production-ready environment for deploying Kubernetes-based containerized applications; and finally a simple cloud-native microservice built with Spring Boot.

Along the way, we also learned how to use a few tools like Cloud Code to streamline your development workflow and Jib to build containerized applications using common Java patterns.

I hope you've found the article helpful and that you give these technologies a try. If you found this interesting, have a look at the codelabs that Google offers where you can learn about Spring and Google Cloud products.
TL;DR

• Using Google Kubernetes Engine (GKE) along with Spring Boot allows you to quickly and easily set up microservices.

• Jib is a great way to containerize your Java application. It allows you to create optimized images without Docker using Maven or Gradle.

• Google's Spring Cloud GCP implementation allows developers to leverage Google Cloud Platform (GCP) services with little configuration and using some of Spring's patterns.

• Setting up Skaffold with Cloud Code allows developers to have a nice development cycle. This is especially useful when starting to prototype a new service.

Figure 8
Getting Started with Quarkus

by Roberto Cortez, Java Developer

Quarkus created quite a buzz in the enterprise Java ecosystem in 2019. Like all other developers, I was curious about this new technology and saw a lot of potential in it. What exactly is Quarkus? How is it different from other technologies established in the market? How can Quarkus help me or my organization? Let's find out.

What is Quarkus?

The Quarkus project dubbed itself Supersonic Subatomic Java. Is this actually real? What does this mean? To better explain the motivation behind the Quarkus project, we need to look into the current state of software development.

From On-Premises to Cloud

The old way to deploy applications was to use physical hardware. With the purchase of a physical box, we paid upfront for the hardware requirements. We had already made the investment, so it wouldn't matter if we used all the machine resources or just a small amount. In most cases, we wouldn't care that much as long as we could run the application. However, the Cloud is now changing the way we develop and deploy applications.

In the Cloud, we pay exactly for what we use. So we have become pickier with our hardware usage. If the application takes 10 seconds to start, we have to pay for these 10 seconds even if the application is not yet ready for others to consume.

Java and the Cloud

Do you remember when the first Java version was released? Allow me to refresh your memory - it was in 1996. There was no Cloud back then. In fact, it only came into existence several years later. Java was definitely not tailored for this new paradigm and had to adjust. But how could we change a paradigm after so many years tied to a physical box where costs didn't matter as much as they do in the Cloud?
It's All About the Runtime

The way that many Java libraries and frameworks evolved over the years was to perform a set of enhancements during runtime. This was a convenient way to add capabilities to your code in a safe and declarative way. Do you need dependency injection? Sure! Use annotations. Do you need a transaction? Of course! Use an annotation. In fact, you can code a lot of things by using these annotations that the runtime will pick up and handle for you. But there is always a catch. The runtime requires a scan of your classpath and classes for metadata. This is an expensive operation that consumes time and memory.
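To make that point concrete, here is a small illustrative sketch (not from the original article) of the kind of annotation-driven code being described; the BookService name and its repository are hypothetical:

@ApplicationScoped
public class BookService {

    // The container discovers this bean, injects its dependency,
    // and wraps the method in a transaction - all driven by annotations.
    @Inject
    BookRepository bookRepository;

    @Transactional
    public void save(Book book) {
        bookRepository.save(book);
    }
}

In a traditional framework this wiring is resolved by scanning at startup; Quarkus's approach, described next, moves that work to build time.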
Quarkus Paradigm Shift

Quarkus addressed this challenge by moving expensive operations like Bytecode Enhancement, Dynamic ClassLoading, Proxying, and more to compile time. The result is an environment that consumes less memory, uses less CPU, and starts up faster. This is perfect for the use case of the Cloud, but also useful for other use cases. Everyone will benefit from less resource consumption overall, no matter the environment.

Maybe Quarkus is Not So New

Have you heard of or used technologies such as CDI, JAX-RS, or JPA? If so, the Quarkus stack is composed of these technologies that have been around for several years. If you know how to develop with these technologies, then you will know how to develop a Quarkus application.

Do you recognize the following code?

@Path("books")
@Consumes(APPLICATION_JSON)
@Produces(APPLICATION_JSON)
public class BookApi {
    @Inject
    BookRepository bookRepository;

    @GET
    @Path("/{id}")
    Response get(@PathParam("id") Long id) {
        return bookRepository.find(id)
                .map(Response::ok)
                .orElse(Response.status(NOT_FOUND))
                .build();
    }
}

Congratulations, you have your first Quarkus app!
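The Maven project shown later in this article also declares quarkus-junit5 and rest-assured as test dependencies. The original article doesn't include a test, but a minimal sketch of how those dependencies are typically combined to exercise an endpoint like the one above could look like this (the test class name and the expected status code are illustrative assumptions):

import static io.restassured.RestAssured.given;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

@QuarkusTest
public class BookApiTest {

    @Test
    public void getReturnsNotFoundForMissingBook() {
        // Quarkus starts the application for the test; REST Assured
        // then issues a real HTTP request against it.
        given()
            .when().get("/books/999")
            .then().statusCode(404);
    }
}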
For convenience, you can generate a skeleton
@Path(“books”) project in Quarkus starter page, and select which
@Consumes(APPLICATION_JSON) technologies you would like to use. Just import it
@Produces(APPLICATION_JSON)
public class BookApi { in your favorite IDE and you are ready to go. Here
@Inject is a sample Maven project to use JAX-RS with
BookRepository bookRepository; RESTEasy and JPA with Hibernate:

@GET

15
<?xml version="1.0"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>code-with-quarkus</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <properties>
    <compiler-plugin.version>3.8.1</compiler-plugin.version>
    <maven.compiler.parameters>true</maven.compiler.parameters>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <quarkus-plugin.version>1.3.0.Final</quarkus-plugin.version>
    <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id>
    <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
    <quarkus.platform.version>1.3.0.Final</quarkus.platform.version>
    <surefire-plugin.version>2.22.1</surefire-plugin.version>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-junit5</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.rest-assured</groupId>
      <artifactId>rest-assured</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-jsonb</artifactId>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-maven-plugin</artifactId>
        <version>${quarkus-plugin.version}</version>
        <executions>
          <execution>
            <goals>
              <goal>build</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${compiler-plugin.version}</version>
      </plugin>
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${surefire-plugin.version}</version>
        <configuration>
          <systemProperties>
            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
          </systemProperties>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
You might have noticed that most of the dependencies start with the groupId io.quarkus and that they are not the usual dependencies that you might find for Hibernate, RESTEasy, or JUnit.

Quarkus Dependencies

Now, you may be wondering why Quarkus supplies its own wrapper versions around these popular libraries. The reason is to provide a bridge between the library and Quarkus to resolve the runtime dependencies at compile time. This is where the magic of Quarkus happens and provides projects with fast start times and smaller memory footprints.

Does this mean that you are constrained to use only Quarkus-specific libraries? Absolutely not. You can use any library you wish. You run Quarkus applications on the JVM as usual, where you don't have limitations.

GraalVM and Native Images

Perhaps you have already heard about this project called GraalVM by Oracle Labs? In essence, GraalVM is a Universal Virtual Machine to run applications in multiple languages. One of the most interesting features is the ability to build your application into a Native Image and run it even faster! In practice, this means that you just have an executable to run, with all the required dependencies of your application resolved at compile time.

This does not run on the JVM - it is a plain executable binary file, but it includes all necessary components like memory management and thread scheduling from a different virtual machine, called Substrate VM, to run your application.

For convenience, the sample Maven project already has the required setup to build your project as native. You do need to have GraalVM in your system with the native-image tool installed. Follow these instructions on how to do so. After that, just build as any other Maven project but with the native profile: mvn verify -Pnative. This will generate a binary runner in the target folder that you can run as any other binary, with ./project-name-runner. The following is a sample output of the runner on my box:

[io.quarkus] (main) code-with-quarkus 1.0.0-SNAPSHOT (powered by Quarkus 1.3.0.Final) started in 0.023s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (main) Profile prod activated.
[io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, narayana-jta, resteasy, resteasy-jsonb]

Did you notice the startup time? Only 0.023s. Yes, our application doesn't have much, but it's still pretty impressive. Even for real applications, you will see startup times in the order of milliseconds. You can learn more about GraalVM on their website.

Developer Productivity

We have seen that Quarkus could help your company become Cloud Native. Awesome. But what about the developer? We all like new shiny things, and we are also super lazy. What does Quarkus do for the developer that cannot be done with other technologies?

Well, how about hot reloading that actually works without using external tools or complicated tricks? Yes, it is true. After 25 years, since Java was born, we now have a reliable way to change our code and see those changes with a simple refresh. Again, this is accomplished by the way Quarkus works internally.

Everything is just code, so you don't have to worry about the things that made hot reloading difficult anymore. It is a trivial operation.
To accomplish this, you have to run Quarkus in Development Mode. Just run mvn quarkus:dev and you are good to go. Quarkus will start up and you are free to make changes to your code and immediately see them. For instance, you can change your REST endpoint parameters, add new methods, and change paths. Once you invoke them, they will be updated, reflecting your code changes. How cool is that?
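As a concrete illustration (not from the original article), suppose you add a new method to the BookApi resource shown earlier while mvn quarkus:dev is running; the count() call on the repository is a hypothetical method:

@GET
@Path("/count")
public long count() {
    // Save the file and hit the endpoint: dev mode recompiles and
    // reloads the application on the next request, no restart needed.
    return bookRepository.count();
}

On the next request to /books/count, Quarkus picks up the change without a manual restart.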
Is Quarkus Production Ready?

All of this seems to be too good to be true, but is Quarkus actually ready for production environments? Yes, it is.

A lot of companies are already adopting Quarkus as their development/runtime environment. Quarkus has a very fast release cadence (every few weeks), and a strong Open Source community that helps every developer in the Java community, whether they are just getting started with Quarkus or are advanced users.

Check out this sample application that you can download or clone. You can also read some of the adoption stories in a few blog posts so you can have a better idea of user experiences when using Quarkus.

Conclusion

After a year of its official announcement, Quarkus is already on version 1.3.1.Final. A lot of effort is being put into the project to help companies and developers write applications that they can run natively in the Cloud.

We don't know how far Quarkus can go, but one thing is for certain: Quarkus shook the entire Java ecosystem in a space dominated by Spring. I think the Java ecosystem can only win by having multiple offerings that can push each other and innovate to keep themselves competitive.

Resources

• Quarkus Website
• Quarkus Start Page
• Sample Github Repo

TL;DR

• Quarkus is a new technology aimed at cloud development.

• With Quarkus, you can take advantage of smaller runtimes optimized for the cloud.

• You don't need to relearn new APIs. Quarkus is built on top of the best-of-breed technologies from the last decade, like Hibernate, RESTEasy, Vert.x, and MicroProfile.

• Quarkus is productive from day one.

• Quarkus is production ready.
SPONSORED ARTICLE

Feature-Driven Development: A Brief Overview

Feature-driven development (FDD) is a five-step Agile framework that organizes software development around making progress on features in one to two-week sprints. The five steps are:

• Develop an overall model
• Build a features list
• Plan by feature
• Design by feature
• Build by feature

A feature in FDD does not always refer to a product feature; it could refer to a small task or process a client or user wishes to complete. For example, "View all open tasks", "Pay on-line bill", or "Chat with my friends in the game." Features in FDD are similar to user stories in Scrum.

As with other Agile software development frameworks, the goal of feature-driven development is to iterate quickly to satisfy the needs of the customer. The five-step process of FDD assigns roles and utilizes a set of project management best practices to ensure consistency, making it easier for new team members to onboard.

Roles

Before we can dive into the methodology, it is important to understand the six primary roles involved in a feature-driven development team. Each role serves a specific function throughout the development process; there may be more than one person in a given role on a team.

The Project Manager is the leader of the whole project and coordinates all the moving parts, ensures deadlines get hit, identifies gaps in the workflow, and so on.

The Chief Architect creates the blueprint for the overall system. Part of their job is to educate the people on the team about the system's design so each person can effectively fit their individual tasks within the context of the whole project. The Chief Architect approaches the project from a holistic point of view.

The Development Manager coordinates all the teams, ensuring they complete their tasks on time and providing mentoring and leadership of programming activities.

The Chief Programmer is an experienced programmer that leads a small development team, helping with analysis and design to keep the project moving in the right direction.

The Class Owners are individual developers creating features on smaller development teams. Their responsibilities can include designing, coding, testing, and documenting the features or classes.

The Domain Expert has detailed knowledge of the user requirements and understands the problem customers want solved.

In addition to the six primary roles, there are eight supporting roles that may be needed:

• Release Manager
• Language guru
• Build engineer
• Tool-smith
• System administrator
• Tester
• Deployer
• Technical writer

Please read the full-length version of this article
Project Helidon Tutorial: Building Microservices with Oracle's Lightweight Java Framework

by Michael Redlich, Senior Research Technician at ExxonMobil Research & Engineering

Oracle introduced its new open-source framework, Project Helidon, in September 2018. Originally named J4C (Java for Cloud), Helidon is a collection of Java libraries for creating microservices-based applications. Within six months of its introduction, Helidon 1.0 was released in February 2019.

The current stable release is Helidon 1.4.4, but Oracle is well on their way to releasing Helidon 2.0, planned for late Spring 2020.

This tutorial will introduce Helidon SE and Helidon MP, explore the three core components of Helidon SE, show how to get started, and introduce a movie application built on top of Helidon MP. There will also be a discussion on GraalVM and what you can expect with the upcoming release of Helidon 2.0.

Helidon Landscape

Helidon, designed to be simple and fast, is unique because it ships with two programming models: Helidon SE and Helidon MP. In the graph below, you can see where Helidon SE and Helidon MP align with other popular microservices frameworks.
Helidon SE

Helidon SE is a microframework that features the three core components required to create a microservice -- a web server, configuration, and security -- for building microservices-based applications. It is a small, functional-style API that is reactive, simple and transparent, in which an application server is not required.

Let's take a look at the functional style of Helidon SE with this very simple example on starting the Helidon web server using the WebServer interface:

WebServer.create(
    Routing.builder()
        .get("/greet", (req, res) -> res.send("Hello World!"))
        .build())
    .start();

Using this example as a starting point, we will incrementally build a formal startServer() method, part of a server application for you to download, to explore the three core Helidon SE components.

Web Server Component

Inspired by NodeJS and other Java frameworks, Helidon's web server component is an asynchronous and reactive API that runs on top of Netty. The WebServer interface provides basic server lifecycle and monitoring, enhanced by configuration, routing, error handling, and building metrics and health endpoints.

Let's start with the first version of the startServer() method to start a Helidon web server on a random available port:

private static void startServer() {
    Routing routing = Routing.builder()
        .any((request, response) -> response.send("Greetings from the web server!" + "\n"))
        .build();

    WebServer webServer = WebServer
        .create(routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

First, we need to build an instance of the Routing interface that serves as an HTTP request-response handler with routing rules. In this example, we use the any() method to route the request to the defined server response, "Greetings from the web server!", which will be displayed in the browser or via the curl command.

In building the web server, we invoke the overloaded create() method designed to accept various server configurations. The simplest one, as shown above, accepts the instance variable, routing, that we just created to provide default server configuration.

The Helidon web server was designed to be reactive, which means the start() method returns an instance of the CompletionStage<WebServer> interface to start the web server. This allows us to invoke the toCompletableFuture() method. Since a specific server port wasn't defined, the server will find a random available port upon startup.

Let's build and run our server application with Maven:
$ mvn clean package
$ java -jar target/helidon-server.jar

When the server starts, you should see the following in your terminal window:

Apr 15, 2020 1:14:46 PM io.helidon.webserver.NettyWebServer <init>
INFO: Version: 1.4.4
Apr 15, 2020 1:14:46 PM io.helidon.webserver.NettyWebServer lambda$start$8
INFO: Channel '@default' started: [id: 0xcba440a6, L:/0:0:0:0:0:0:0:0:52535]
INFO: Server started at: http://localhost:52535

As shown on the last line, the Helidon web server selected port 52535. While the server is running, enter this URL in your browser or execute the following curl command in a separate terminal window:

$ curl -X GET http://localhost:52535

You should see "Greetings from the web server!"

To shut down the web server, simply add this line of code:

webServer.shutdown()
    .thenRun(() -> System.out.println("INFO: Server is shutting down... Good bye!"))
    .toCompletableFuture();

Configuration Component

The configuration component loads and processes configuration properties. Helidon's Config interface will read configuration properties from a defined properties file, usually, but not limited to, in YAML format.

Let's create an application.yaml file that will provide configuration for the application, server, and security.

app:
  greeting: "Greetings from the web server!"

server:
  port: 8080
  host: 0.0.0.0

security:
  config:
    require-encryption: false
  providers:
    - http-basic-auth:
        realm: "helidon"
        users:
          - login: "ben"
            password: "${CLEAR=password}"
            roles: ["user", "admin"]
          - login: "mike"
            password: "${CLEAR=password}"
            roles: ["user"]
    - http-digest-auth:

There are three main sections, or nodes, within this application.yaml file - app, server and security. The first two nodes are straightforward. The greeting subnode defines the server response that we hard-coded in the previous example. The port subnode defines port 8080 for the web server to use upon startup. However, you should have noticed that the security node is a bit more complex, utilizing YAML's sequence of mappings to define multiple entries. Separated by the '-' character, two security providers, http-basic-auth and http-digest-auth, and two users, ben and mike, have been defined. We will discuss this in more detail in the Security Component section of this tutorial.

Also note that this configuration allows for clear-text passwords, as the config.require-encryption subsection is set to false. You would obviously set this value to true in a production environment so that any attempt to pass a clear-text password would throw an exception.

Now that we have a viable configuration file, let's update our startServer() method to take advantage of the configuration we just defined.
private static void startServer() {
    Config config = Config.create();
    ServerConfiguration serverConfig = ServerConfiguration.create(config.get("server"));

    Routing routing = Routing.builder()
        .any((request, response) -> response.send(config.get("app.greeting").asString().get() + "\n"))
        .build();

    WebServer webServer = WebServer
        .create(serverConfig, routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

First, we need to build an instance of the Config interface by invoking its create() method to read our configuration file. The get(String key) method, provided by Config, returns a node, or a specific subnode, from the configuration file specified by key. For example, config.get("server") will return the content under the server node and config.get("app.greeting") will return "Greetings from the web server!".

Next, we create an instance of ServerConfiguration, providing immutable web server information, by invoking its create() method and passing in the statement, config.get("server").

The instance variable, routing, is built like the previous example except we eliminate hard-coding the server response by calling config.get("app.greeting").asString().get().

The web server is created like the previous example except we use a different version of the create() method that accepts the two instance variables, serverConfig and routing.
by Config, returns a node, or a specific subnode,
• Header Assertion
from the configuration file specified by key.
For example, config.get(“server”) will return • Google Login Authentication
the content under the server node and config.
• OpenID Connect
get(“app.greeting”) will return “Greetings from
the web server!”. • IDCS Role Mapping

Next, we create an instance You can use one of three approaches to implement
of ServerConfiguration, providing immutable security in your Helidon application:
web server information, by invoking
its create() method by passing in the • a builder pattern where you manually provide
statement, config.get(“server”). configuration

• a configuration pattern where you provide


The instance variable, routing, is built like the
configuration via a configuration file
previous example except we eliminate hard-coding
the server response by calling config.get(“app. • a hybrid of the builder and configuration
greeting”).asString().get(). patterns

The web server is created like the previous We will be using the hybrid approach to implement
example except we use a different version of security in our application, but we need to do some
the create() method that accepts the two instance housekeeping first.
variables, serverConfig and routing.

23
Let's review how to reference the users defined under the security node of our configuration file. Consider the following string:

security.providers.0.http-basic-auth.users.0.login

When the parser comes across a number in the string, it indicates there are one or more subnodes in the configuration file. In this example, the 0 right after providers will direct the parser to move into the first provider subnode, http-basic-auth. The 0 right after users will direct the parser to move into the first user subnode containing login, password and roles. Therefore, the above string will return the login, password and role information for the user, ben, when passed into the config.get() method. Similarly, the login, password and role information for the user, mike, would be returned with this string:

security.providers.0.http-basic-auth.users.1.login

Next, let's create a new class in our web server application, AppUser, that implements the SecureUserStore.User interface:

public class AppUser implements SecureUserStore.User {

    private String login;
    private char[] password;
    private Collection<String> roles;

    public AppUser(String login, char[] password, Collection<String> roles) {
        this.login = login;
        this.password = password;
        this.roles = roles;
    }

    @Override
    public String login() {
        return login;
    }

    @Override
    public boolean isPasswordValid(char[] chars) {
        return false;
    }

    @Override
    public Collection<String> roles() {
        return roles;
    }

    @Override
    public Optional<String> digestHa1(String realm, HttpDigest.Algorithm algorithm) {
        return Optional.empty();
    }
}

We will use this class to build a map of roles to users, that is:

Map<String, AppUser> users = new HashMap<>();

To accomplish this, we add a new method, getUsers(), to our web server application that populates the map using the configuration from the http-basic-auth subsection of the configuration file.

private static Map<String, AppUser> getUsers(Config config) {
    Map<String, AppUser> users = new HashMap<>();

    ConfigValue<String> ben = config.get("security.providers.0.http-basic-auth.users.0.login").asString();
    ConfigValue<String> benPassword = config.get("security.providers.0.http-basic-auth.users.0.password").asString();
    ConfigValue<List<Config>> benRoles = config.get("security.providers.0.http-basic-auth.users.0.roles").asNodeList();

    ConfigValue<String> mike = config.get("security.providers.0.http-basic-auth.users.1.login").asString();
    ConfigValue<String> mikePassword = config.get("security.providers.0.http-basic-auth.users.1.password").asString();
    ConfigValue<List<Config>> mikeRoles = config.get("security.providers.0.http-basic-auth.users.1.roles").asNodeList();

    users.put("admin", new AppUser(ben.get(), benPassword.get().toCharArray(), Arrays.asList("user", "admin")));
    users.put("user", new AppUser(mike.get(), mikePassword.get().toCharArray(), Arrays.asList("user")));

    return users;
}
Now that we have this new functionality built into our web server application, let's update the startServer() method to add security with Helidon's implementation of HTTP Basic Authentication:

private static void startServer() {
    Config config = Config.create();
    ServerConfiguration serverConfig = ServerConfiguration.create(config.get("server"));

    Map<String, AppUser> users = getUsers(config);
    displayAuthorizedUsers(users);

    SecureUserStore store = user -> Optional.ofNullable(users.get(user));

    HttpBasicAuthProvider provider = HttpBasicAuthProvider.builder()
        .realm(config.get("security.providers.0.http-basic-auth.realm").asString().get())
        .subjectType(SubjectType.USER)
        .userStore(store)
        .build();

    Security security = Security.builder()
        .config(config.get("security"))
        .addAuthenticationProvider(provider)
        .build();

    WebSecurity webSecurity = WebSecurity.create(security)
        .securityDefaults(WebSecurity.authenticate());

    Routing routing = Routing.builder()
        .register(webSecurity)
        .get("/", (request, response) -> response.send(config.get("app.greeting").asString().get() + "\n"))
        .get("/admin", (request, response) -> response.send("Greetings from the admin, " + users.get("admin").login() + "!\n"))
        .get("/user", (request, response) -> response.send("Greetings from the user, " + users.get("user").login() + "!\n"))
        .build();

    WebServer webServer = WebServer
        .create(serverConfig, routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

As we did in the previous example, we build the instance variables, config and serverConfig. We then build our map of roles to users, users, with the getUsers() method as shown above.

Using Optional for null type-safety, the store instance variable is built from the SecureUserStore interface as shown with the lambda expression. A secure user store is used for both HTTP Basic Authentication and HTTP Digest Authentication. Please keep in mind that HTTP Basic Authentication can be unsafe, even when used with SSL, as passwords are not required.

We are now ready to build an instance of HttpBasicAuthProvider, one of the implementing classes of the SecurityProvider interface. The realm() method defines the security realm name that is sent to the browser (or any other client) when unauthenticated. Since we have a realm defined in our configuration file, it is passed into the method.

The subjectType() method defines the principal type a security provider would extract or propagate. It accepts one of two SubjectType enumerations, namely USER or SERVICE. The userStore() method accepts the store instance variable we just built to validate users in our application.
accepts the store instance variable we just built to validate users in our application.

With our provider instance variable, we can now build an instance of the Security class used to bootstrap security and integrate it with other frameworks. We use the config() and addAuthenticationProvider() methods to accomplish this.

Please note that more than one security provider may be registered by chaining together additional addAuthenticationProvider() methods. For example, let's assume we defined instance variables, basicProvider and digestProvider, to represent the HttpBasicAuthProvider and HttpDigestAuthProvider classes, respectively. Our security instance variable may then be built as follows:

Security security = Security.builder()
    .config(config.get("security"))
    .addAuthenticationProvider(basicProvider)
    .addAuthenticationProvider(digestProvider)
    .build();

The WebSecurity class implements the Service interface, which encapsulates a set of routing rules and related logic. The instance variable, webSecurity, is built using the create() method by passing in the security instance variable, and the WebSecurity.authenticate() method, passed into the securityDefaults() method, ensures the request will go through the authentication process.

Our familiar instance variable, routing, that we've built in the previous two examples looks much different now. It registers the webSecurity instance variable and defines the endpoints, '/', '/admin', and '/user', by chaining together get() methods. Notice that the /admin and /user endpoints are tied to the users ben and mike, respectively.

Finally, our web server can be started! After all the machinery we just implemented, building the web server looks exactly like the previous example.

We can now build and run this version of our web server application using the same Maven and Java commands and execute the following curl commands:

• $ curl -X GET http://localhost:8080/ will return "Greetings from the web server!"

• $ curl -X GET http://localhost:8080/admin will return "Greetings from the admin, ben!"

• $ curl -X GET http://localhost:8080/user will return "Greetings from the user, mike!"
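If the requests come back with 401 Unauthorized, that is the new security machinery at work: every route is now behind WebSecurity.authenticate(), so HTTP Basic credentials for one of the configured users generally need to be supplied with each request. For example, using ben as the user name and substituting whatever password you defined for him in the configuration file:

$ curl -X GET -u ben:<password> http://localhost:8080/admin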
You can find a comprehensive server application that demonstrates all three versions of the startServer() method related to the three core Helidon SE components we just explored. You can also find more extensive Helidon security examples that will show you how to implement some of the other security providers.

Helidon MP
Built on top of Helidon SE, Helidon MP is a small, declarative-style API that is an implementation of the MicroProfile specification, a platform that optimizes enterprise Java for building microservices-based applications.

The MicroProfile initiative, formed in 2016 as a collaboration of IBM, Red Hat, Payara and Tomitribe, specified three original APIs - CDI (JSR 365), JSON-P (JSR 374) and JAX-RS (JSR 370) - considered the minimal set of APIs for creating a microservices application. Since then, MicroProfile has grown to 12 core APIs along with four standalone APIs to support reactive streams and GraphQL. MicroProfile 3.3, released in February 2020, is the latest version.

Helidon MP currently supports MicroProfile 3.2. For Java EE/Jakarta EE developers, Helidon MP is an excellent choice due to its familiar declarative approach with the use of annotations. There is no
deployment model and no additional Java EE packaging required.

Let's take a look at the declarative style of Helidon MP with this very simple example on starting the Helidon web server and how it compares to the functional style of Helidon SE.

public class GreetService {
    @GET
    @Path("/greet")
    public String getMsg() {
        return "Hello World!";
    }
}

Notice the difference in this style compared to the Helidon SE functional style.

Helidon Architecture
Now that you have been introduced to Helidon SE and Helidon MP, let's see how they fit together. Helidon's architecture can be described in the diagram shown below. Helidon MP is built on top of Helidon SE, and the CDI extensions, explained in the next section, extend the cloud-native capabilities of Helidon MP.

CDI Extensions
Helidon ships with portable Contexts and Dependency Injection (CDI) extensions that support integration of various data sources, transactions and clients to extend the cloud-native functionality of Helidon MP applications. The following extensions are provided:

• HikariCP, a "zero-overhead" production-ready JDBC connection pool data source
• Oracle Universal Connection Pool (UCP) data sources
• Jedis, a small Redis Java client
• Oracle Cloud Infrastructure (OCI) object storage clients
• Java Transaction API (JTA) transactions

Helidon Quick Start Guides
Helidon provides quick start guides for both Helidon SE and Helidon MP. Simply visit these pages and follow the instructions. For example, you can quickly build a Helidon SE application by executing the following Maven command in your terminal window:

$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.4.4 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se \
    -Dpackage=io.helidon.examples.quickstart.se

This will generate a small, yet working, application in the folder helidon-quickstart-se that includes a test and configuration files for the application (application.yaml), logging (logging.properties), building a native image with GraalVM (native-image.properties), containerizing the application with Docker (Dockerfile and Dockerfile.native), and orchestrating with Kubernetes (app.yaml).

Similarly, you can quickly build a Helidon MP application:

$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-mp \
    -DarchetypeVersion=1.4.4 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-mp \
    -Dpackage=io.helidon.examples.quickstart.mp

This is a great starting point for building more complex Helidon applications, as we will discuss in the next section.
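Either generated project can then be built and run with standard Maven and Java commands. Assuming the defaults shown above (so the JAR name follows the artifactId), something like the following should work:

$ cd helidon-quickstart-se
$ mvn package
$ java -jar target/helidon-quickstart-se.jar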
Movie Application
Using a generated Helidon MP quickstart application, additional classes - a POJO, a resource, a repository, a custom exception, and an implementation of ExceptionMapper - were added to build a complete movie application that maintains a list of Quentin Tarantino movies. The HelidonApplication class, shown below, registers the required classes.

@ApplicationScoped
@ApplicationPath("/")
public class HelidonApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> set = new HashSet<>();
        set.add(MovieResource.class);
        set.add(MovieNotFoundExceptionMapper.class);
        return Collections.unmodifiableSet(set);
    }
}

You can clone the GitHub repository to learn more about the application.
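To give a feel for the resource class registered above before you clone the repository, here is a minimal, hypothetical sketch of a JAX-RS resource. The class name MovieResource comes from the application shown above, but the hard-coded data and method below are illustrative only and are not copied from the repository, which uses a separate repository class and exception mapper:

import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@ApplicationScoped
@Path("/movies")
public class MovieResource {

    // In the real application the data comes from a repository class;
    // a hard-coded list keeps this sketch self-contained.
    private final List<String> movies = List.of(
            "Reservoir Dogs", "Pulp Fiction", "Jackie Brown");

    // Returns the list of movies serialized as JSON by the MicroProfile runtime
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getAllMovies() {
        return movies;
    }
}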
GraalVM
Helidon supports GraalVM, a polyglot virtual machine and platform that converts applications to native executable code. GraalVM, created by Oracle Labs, is comprised of Graal, a just-in-time compiler written in Java; SubstrateVM, a framework that allows ahead-of-time compilation of Java applications into executable images; and Truffle, an open-source toolkit and API for building language interpreters. The latest version is 20.1.0.

You can convert Helidon SE applications to native executable code using GraalVM's native-image utility, which is a separate installation using GraalVM's gu utility:

$ gu install native-image
$ export GRAALVM_HOME=/usr/local/bin/graalvm-ce-java11-20.1.0/Contents/Home

Once installed, you can return to the helidon-quickstart-se directory and execute the following command:

$ mvn package -Pnative-image

This operation will take a few minutes, but once complete, your application will be converted to native code. The executable file will be found in the /target directory.

The Road to Helidon 2.0
Helidon 2.0.0 is scheduled to be released in late Spring 2020, with Helidon 2.0.0.RC1 available to developers at this time. Significant new features include support for GraalVM in Helidon MP applications, new Web Client and DB Client components, a new CLI tool, and implementations of the standalone MicroProfile Reactive Messaging and Reactive Streams Operators APIs.

Until recently, only Helidon SE applications were able to take advantage of GraalVM due to the use of reflection in CDI 2.0 (JSR 365), a core MicroProfile API. However, due to customer demand, Helidon 2.0.0 will allow Helidon MP applications to be converted to a native image. Oracle has created this demo application for the Java community to preview this new feature.

To complement the original three core Helidon SE APIs - Web Server, Configuration and Security - a new Web Client API completes the set for Helidon SE. Building an instance of the WebClient interface allows you to process HTTP requests and responses related to a specified endpoint. Just like the Web Server API, Web Client may also be configured via a configuration file.
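As an illustration of the direction the Web Client API is taking, here is a minimal sketch based on the Helidon 2.0.0.RC1 API. Since 2.0.0 had not yet reached general availability at the time of writing, method names and packages may still change, and the target URL is only an example pointing at the web server built earlier:

import io.helidon.webclient.WebClient;

public class WebClientExample {

    public static void main(String[] args) {
        // Client configured against the local web server from the earlier examples
        WebClient webClient = WebClient.builder()
                .baseUri("http://localhost:8080")
                .build();

        // Issue an asynchronous GET request, print the response body,
        // and block until it completes so the JVM does not exit early
        webClient.get()
                .path("/")
                .request(String.class)
                .thenAccept(System.out::println)
                .toCompletableFuture()
                .join();
    }
}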
You can learn more details on what developers can expect in the upcoming GA release of Helidon 2.0.0.
Virtual Panel: the MicroProfile Influence on Microservices Frameworks
by Michael Redlich, Senior Research Technician at ExxonMobil Research & Engineering

Since 2018, several new microservices frameworks - including Micronaut, Helidon and Quarkus - have been introduced to the Java community and have made an impact on microservices-based and cloud-native application development.

The MicroProfile community and specification were created to enable the more effective delivery of microservices by enterprise Java developers. This effort has influenced how developers are currently designing and building applications.

MicroProfile will continue to evolve with changes to its current APIs and, most likely, the creation of new APIs.

Developers should familiarize themselves with Heroku's "Twelve-Factor App," a set of guiding principles that can be applied with any language or framework in order to create cloud-ready applications.

When it comes to the decision to build an application using either a microservices or monolithic style, developers should analyze the business requirements and technical context before choosing the tools and architectures to use.

In mid-2016, two new initiatives, MicroProfile and the Java EE Guardians (now the Jakarta EE Ambassadors), were formed as a direct response to Oracle's stagnating efforts with the release of Java EE 8.

The Java community felt that enterprise Java had fallen behind with the emergence of web services technologies for building microservices-based applications.
Introduced at Red Hat's DevNation conference on June 27, 2016, the MicroProfile initiative was created as a collaboration of vendors - IBM, Red Hat, Tomitribe, Payara - to deliver microservices for enterprise Java. The release of MicroProfile 1.0, announced at JavaOne 2016, consisted of three JSR-based APIs considered minimal for creating microservices: JSR 346 - Contexts and Dependency Injection (CDI); JSR 353 - Java API for JSON Processing (JSON-P); and JSR 339 - Java API for RESTful Web Services (JAX-RS).

By the time MicroProfile 1.3 was released in February 2018, eight community-based APIs, complementing the original three JSR-based APIs, had been created for building more robust microservices-based applications. A fourth JSR-based API, JSR 367 - Java API for JSON Binding (JSON-B), was added with the release of MicroProfile 2.0.

Originally scheduled for a June 2020 release, MicroProfile 4.0 was delayed so that the MicroProfile Working Group could be established as mandated by the Eclipse Foundation.

The working group defines the MicroProfile Specification Process and a formal Steering Committee composed of organizations and Java User Groups (JUGs), namely Atlanta JUG, IBM, Jelastic, Red Hat and Tomitribe. Other organizations and JUGs are expected to join in 2021. The MicroProfile Working Group was able to release MicroProfile 4.0 on December 23, 2020, featuring updates to all 12 core APIs and alignment with Jakarta EE 8.

The founding vendors of MicroProfile offered their own microservices frameworks, namely Open Liberty (IBM), WildFly Swarm/Thorntail (Red Hat), TomEE (Tomitribe) and Payara Micro (Payara), that ultimately supported the MicroProfile initiative.

In mid-2018, Red Hat renamed WildFly Swarm, an extension of Red Hat's core application server, WildFly, to Thorntail to provide their microservices framework with its own identity. However, less than a year later, Red Hat released Quarkus, a "Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best-of-breed Java libraries and standards."

Dubbed "Supersonic Subatomic Java," Quarkus quickly gained popularity in the Java community, to the point that Red Hat announced Thorntail's end-of-life in July 2020. Quarkus joined the relatively new frameworks, Micronaut and Helidon, that had been introduced to the Java community less than a year earlier. With the exception of Micronaut, all of these microservices-based frameworks support the MicroProfile initiative.

The core topics for this virtual panel are threefold: first, to discuss how microservices frameworks and building cloud-native applications have been influenced by the MicroProfile initiative; second, to explore the approaches to developing cloud-native applications with microservices and monoliths, along with the recent trend of reverting back to monolith-based application development; and third, to debate several best practices for building microservices-based and cloud-native applications.

Panelists
• Cesar Hernandez, senior software engineer at Tomitribe.
• Emily Jiang, Liberty microservice architect and advocate at IBM.
• Otavio Santana, developer relations engineer at Platform.sh.
• Erin Schnabel, senior principal software engineer at Red Hat.
InfoQ: How has the MicroProfile initiative, first introduced in 2016, influenced the way developers are building today's microservices-based and cloud-native applications?

Hernandez: MicroProfile has been allowing Java developers to increase their productivity during the creation of new distributed applications and, at the same time, has allowed them to boost their existing Jakarta EE (formerly known as Java EE) architectures.

Jiang: Thanks to the APIs published by MicroProfile, cloud-native applications using MicroProfile have become slim and portable. With these standard APIs, cloud-native application developers can focus on their business logic, and their productivity is significantly increased. The cloud-native applications not only work on the runtime they were originally developed against, they also work on different runtimes that support MicroProfile, such as Open Liberty, Quarkus, Helidon, Payara, TomEE, etc. The developers learn the APIs once and never need to worry about re-learning a complete set of APIs in order to achieve the same goal.

Santana: The term cloud-native is still a large gray area and its concept is still under discussion. If you, for example, read ten articles and books on the subject, all these materials will describe a different concept. However, what these concepts have in common is the same objective - get the most out of technologies within the cloud computing model. MicroProfile popularized this discussion and created a place for companies and communities to bring successful and unsuccessful cases. In addition, it promotes good practices with APIs, such as MicroProfile Config and the third factor of The Twelve-Factor App.

Schnabel: The MicroProfile initiative has done a good job of providing Java developers, especially those used to Java EE concepts, a path forward. Specific specs, like Config, Rest Client, and Fault Tolerance, define APIs that are essential for microservices applications. That's a good thing.
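To give a concrete sense of the specs mentioned here, the following is a minimal, illustrative sketch of MicroProfile Config and Fault Tolerance working together in a CDI bean; the class, property name, and values are hypothetical rather than taken from any panelist's project:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class GreetingService {

    // MicroProfile Config: resolved from config sources such as
    // microprofile-config.properties, environment variables, or system properties
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String greeting;

    // MicroProfile Fault Tolerance: retry a failing call, bound its duration,
    // and fall back to a locally produced answer if it still fails
    @Retry(maxRetries = 2)
    @Timeout(500)
    @Fallback(fallbackMethod = "localGreeting")
    public String greet(String name) {
        return callRemoteGreetingService(name);
    }

    String localGreeting(String name) {
        return greeting + ", " + name;
    }

    private String callRemoteGreetingService(String name) {
        // placeholder for a call to another microservice
        throw new IllegalStateException("downstream service unavailable");
    }
}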
InfoQ: Since 2018, the Java community has been introduced to Micronaut, Helidon and Quarkus. Do you see a continuation of this trend? Will there be more microservices frameworks introduced as we move forward?

Hernandez: Yes, I think new frameworks and platforms help in the synergy that produces innovation within the ecosystem. Over time, these innovations can lead to new standards.

Jiang: It is great to see different frameworks introduced to aid Java developers in developing microservices. It clearly means Java remains attractive and is still the top choice for developing microservices. The other trend I see is that the newly emerging frameworks adopt MicroProfile as the out-of-the-box solution for developing cloud-native microservices, as demonstrated by Quarkus, Helidon, etc. Personally, I think there will be more microservices frameworks introduced. I also predict they might wisely adopt MicroProfile as their cloud-native solution.

Santana: Yes, I strongly believe that there is a big trend with this type of framework, especially to explore AOT compilation and the benefits to the application's cold start.

The use of reflection by the frameworks has its trade-offs. For example, at application start and for in-memory consumption, the framework usually invokes the inner class ReflectionData within Class.java. It is instantiated as type SoftReference, which demands a certain time to leave memory. So, I feel that in the future, some frameworks will generate metadata with reflection and other frameworks will generate this type of information at compile time, like the Annotation Processing API or similar. We can see this kind of evolution already happening in CDI Lite, for example.
Another trend in this type of framework is to support a native image with GraalVM. This approach is very interesting when working with serverless; after all, if you have code that will run only once, code improvements like JIT and the Garbage Collector don't make sense.

Schnabel: I don't see the trend stopping. As microservices become more specialized, there is a lot more room to question what is being included in the application. There will continue to be a push to remove unnecessary dependencies and reduce application size and memory footprint for microservices, which will lead either to new Java frameworks, or to more microservices in other languages that aren't carrying 20+ years of conventions in their libraries.

InfoQ: With a well-rounded set of 12 core MicroProfile APIs, do you see the need for additional APIs (apart from the standalone APIs) to further improve on building more robust microservices-based and cloud-native applications?

Hernandez: Eventually, we will need more than the 12 core APIs. The ecosystem goes beyond MicroProfile and Java; the tooling, infrastructure, and other stakeholders greatly influence creating new APIs.

Jiang: The MicroProfile community adapts itself and stays agile. It transforms together with the underlying framework or cloud infrastructure. For example, due to the newly established CNCF project OpenTelemetry (OpenTracing + OpenCensus), MicroProfile will need to realign MicroProfile OpenTracing with OpenTelemetry. Similarly, previously adopted technologies might not be mainstream any more; MicroProfile will need to align with the new trend. For example, with the widely adopted metrics framework Micrometer, which provides a simple facade over the instrumentation clients for the most popular monitoring systems, the MicroProfile community is willing to work with Micrometer. Other areas where I think MicroProfile can improve are to provide some guidance, such as how to do logging, to require all MicroProfile supporters to support CORS, etc. If any readers have any other suggestions, please start up a conversation on the MicroProfile Google Group. Personally, I would like to open up the conversation in early 2021 on the new initiatives and gaps MicroProfile should focus on.

Santana: Yes, and also improvements that need to and can be made to the existing ones. Many things need to be worked on together with the Jakarta EE team. The first point would be to embrace The Twelve-Factor App even more in the APIs - for example, making it easier for those who need credentials, such as a username and password, in an application through the MicroProfile Config API. I could use JPA and JMS as examples.

Another point would be how to think about the integration of CDI events with Event-Driven Design and increasingly explore the functional world within databases and microservices.
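As a minimal illustration of the kind of MicroProfile Config usage Santana refers to, credentials can be injected rather than hard-coded, with the values supplied by environment variables or another config source at deployment time; the class and property names below are purely illustrative:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class DataSourceSettings {

    // Resolved from config sources such as environment variables
    // (e.g. DATASOURCE_USERNAME / DATASOURCE_PASSWORD), system properties,
    // or microprofile-config.properties - the third factor of the Twelve-Factor App
    @Inject
    @ConfigProperty(name = "datasource.username")
    String username;

    @Inject
    @ConfigProperty(name = "datasource.password")
    String password;
}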
Schnabel: There is still a lot of evolution to come in the reactive programming space. We all understand REST, but even that space is seeing some change as pressure grows to streamline capabilities. There is work to do to align with changing industry practices around tracing and metrics, and I think there will continue to be some changes as the capabilities that used to be provided by an application server move out into the infrastructure, be that raw Kubernetes, a serverless PaaS-esque environment, or whatever comes next as we try to make Kubernetes more friendly.

InfoQ: How would building cloud-native applications be different if it weren't for microservices? For example, would it be more difficult to build a monolith-based cloud-native application?

Hernandez: When we remove microservices from the cloud-native equation, I immediately think of
monolith uber-jar deployments. I feel that the early adopters of container-based infrastructures started taking advantage of cloud-native features using SOAP architectures instead of JAX-RS-based microservices.

Jiang: I don't see a huge difference between building cloud-native applications vs. microservices. Cloud-native applications include modular monoliths and microservices. Microservices are normally small, while a monolith is large by tradition. MicroProfile APIs can be used by either monoliths or microservices. The important bit is that cloud-native applications might contain a few modules, which share the same release cadence and are interconnected. Cloud-native applications should have their own dedicated database.

Santana: So, as Java didn't die, the monoliths won't die anytime soon either! Many of the best practices we use today in the cloud-native environment with microservices can be applied with monoliths. It is worth remembering that The Twelve-Factor App, for example, is based on the book Patterns of Enterprise Application Architecture, which is from 2003, and the popularization of the cloud environment occurred mainly in 2006 with Amazon AWS.

In general, the best practices you need to build a microservices architecture are also needed in the monolith environment - for example, CI/CD, test coverage, The Twelve-Factor App, etc.

Since monoliths need fewer physical layers, usually two or three, such as a database and an application, they tend to be easier to configure, monitor and deploy in the cloud environment. The only caveat is that their scalability is generally vertical, which often causes the limit that your cloud provider can offer to be exceeded.

Schnabel: In my opinion, the key characteristics for cloud-native applications are deployment flexibility and frequency: as long as your application does not depend on the specifics of its environment (e.g., it has credentials for backing services injected and avoids special code paths), you're in good shape. If your monolith can be built, tested, and deployed frequently and consistently (hello, automation!), you're ok.

InfoQ: Despite the success of microservices, recent publications have been discussing the pitfalls of microservices with recommendations to return to monolith-based application development. What are your thoughts on this subject? Should developers seriously consider a move back to monolith-based application development?

Hernandez: Developers should analyze the business requirements and technical context before choosing the tools and architectures to use.

As developers, we get excited to test new frameworks and tools. Still, we need to understand and analyze the architecture for a particular scenario. The challenge then turns into finding the balance between the pros and cons of adopting or moving back to existing architectures.

Jiang: Monolith is not evil. Sometimes a modular monolith performs as well as microservices. Microservices are not the only choice for cloud-native solutions. Choosing monolith or microservices depends on the company team structure and culture. If you have a small team and all of the applications release at the same cadence, a monolith is the right choice. On the other hand, if you have a distributed team and they operate independently, microservices fit in well with the team structure and culture.

If moving towards microservices causes the team to move slower, moving back to a monolith-based application is wise.

In summary, there is no default answer. You have to make your own choice based on your company setting and culture. Choose whatever suits the
company best. You should not measure success based on whether you have microservices or on the number of microservices.

Santana: As developers, we need to understand that there are no silver bullets in any architectural solution. All solutions have their respective trade-offs. The problem was the herd effect, which led to the community thinking that you, as a developer, are wrong if you are not on microservices. In technology, this was neither the first nor the last buzzword that will flood the technological community.

Overall, I see this with great eyes; I feel that the community is mature enough to understand that microservices cannot be applied to all possible things, that is, common sense and pragmatism are still the best tools for the developer/architect.

Schnabel: I agree with the sentiment: there is no need to start with microservices, especially if you're starting something new. I've found that new applications are pretty fluid in the beginning. What you thought you were going to build isn't quite what you needed in the end. Using a monolithic approach for that initial phase saves a lot of effort: coordinating the development and deployment of many services in the early days of an application is more complicated, and there isn't a compelling reason to start there. Sound software practices inside the monolith can set you up for later refactoring into microservices once your application starts to stabilize. I've found that obvious candidates emerge pretty naturally.

InfoQ: What are some of your recommended best practices for building microservices-based and cloud-native applications?

Hernandez: Analyze first if the problem to solve fits within the microservice approach and the constraints or opportunities the particular project has. All the benefits of the cloud-native approach depend on how well you apply the base architecture.

Jiang: The base practices for building microservices-based and cloud-native applications are to adopt The Twelve-Factor App, which ensures your microservices perform well in the cloud. Check out my 12 Factor App talk at QCon London 2020 on how to use MicroProfile and Kubernetes to develop cloud-native applications. The golden tip is to use an open and well-adopted standard for your cloud-native applications, and then you can rest assured they will work well in the cloud.

Santana: In addition to the widely popularized practices, such as DDD, The Twelve-Factor App, etc., I consider myself as having good sense and being a pragmatic, professional, excellent practitioner. You don't need to be afraid to apply something simple if it meets the requirements of your business, so escape the herd effect of technology. In the recent book I'm reading, 97 Things Every Cloud Engineer Should Know: Collective Wisdom from the Experts, among several tips I will quote two:

The first is that there is no problem if you don't use Kubernetes; as I mentioned, don't feel bad about not using a technology when it's not needed.

The second is the possibility of using services that increase abstraction and decrease risk, such as PaaS and DBaaS offerings. It is important to understand that this type of service brings the cloud provider's whole know-how and makes operations such as backup/restore and updating services such as databases much simpler. For example, I can mention Platform.sh, which allows deploying cloud-native applications straightforwardly in a GitOps-centric way.
Schnabel: Avoid special code paths. They are hard to test and can be harder to debug. Make sure environment-specific dependencies (hosts, ports, credentials for backing services) are injected in a consistent way regardless of the environment your application runs in.

If you're going to use microservices, use microservices. Understand that they are completely decoupled, independent entities with their own evolutionary lifecycle. APIs and contract testing should be on your radar. Do your best to wrap your head around the ramifications of microservices: if you try to ensure that everything works together in a staging environment and then deploy a set of versioned services that work together, you've gone back to a monolithic deployment model. If you have to consistently release two or more services at the same time because they depend on each other, you are dealing with a monolith that has unnecessary moving parts.

Think about application security up front. Every exposed API is a risk. Every container you build will need to be maintained to mitigate discovered vulnerabilities. Plan for that. When you're moving to a cloud-native environment, you should take it as a given that "making an application work and then forgetting about it and letting it run forever" is no longer an option, and ensure you have the tools in place for ongoing maintenance.

Conclusions
In this virtual panel, we asked the expert practitioners to discuss MicroProfile and microservices frameworks. We even asked for their thoughts on the recent trend to return to monolith-based application development.

Our experts agreed that MicroProfile has influenced the way developers are building microservices-based and cloud-native applications. The Java community should expect new microservices frameworks to emerge as MicroProfile itself will evolve with either new APIs or changes to existing ones.

Our panelists also recommended that all developers familiarize themselves with Heroku's "Twelve-Factor App," a set of guiding principles that can be applied with any language or framework in order to create cloud-ready applications.

With the advent of microservices, monolith-based application development had seemingly been characterized as "evil." However, our experts warned against subscribing to such a characterization. Along with describing the pros and cons, they explained situations in which monolith-based application development would be beneficial over microservices-based application development. The challenging part now is for you to take this wisdom and apply it to your specific context and set of challenges.
Read recent issues

Kubernetes and Cloud Architectures
Does it feel to you like the modern application stack is constantly shifting, with new technologies and practices emerging at a blistering pace? We've hand-picked a set of articles that highlight where we're at today. With a focus on cloud-native architectures and Kubernetes, these contributors paint a picture of what's here now, and what's on the horizon.

Re-Examining Microservices after the First Decade
We have prepared this eMag for you with content created by professional software developers who have been working with microservices for quite some time. If you are considering migrating to a microservices approach, be ready to take some notes about the lessons learned, mistakes made, and recommendations from those experts.

Java Innovations That Are on Their Way
In this eMag we want to talk about "Innovations That Are On Their Way". This includes massive, root-and-branch changes such as Project Valhalla as well as some of the more incremental deliveries coming from Project Amber, such as Records and Sealed Types.