Microservices Notes and Practice
Microservices are cloud enabled. That means I would be able to have multiple instances of each of
these microservices. You can see that at this moment there are two instances of MicroService1 and
four instances of MicroService3. This should not involve a lot of configuration: I should be able to
bring up an instance of MicroService3 easily. One of the things we'll look at in the sections of this
course is how to cloud enable them, how to set up an architecture such that it would be able to
dynamically adjust, bring new instances up, and take the older instances down.
To quickly build some of the common patterns in distributed systems, and for the typical problems which
are present for distributed systems in the cloud, Spring Cloud provides a range of solutions.
The most important thing that you need to understand is that Spring Cloud is not really one project as such.
There is a wide variety of projects under the umbrella of Spring Cloud. On the Spring Cloud homepage,
if you scroll down, you'll be able to see a huge variety of projects which are related to Spring Cloud.
Spring Cloud Bus enables the microservices and the infrastructure components (things like the Config
Server and the API Gateway) to talk to each other. In this course we'd be using the Finchley M2 release.
1) Configuration management
This would mean that there is a lot of configuration for these microservices that the operations team
needs to manage.
Spring Cloud Config Server provides an approach where you can store all the configuration for all the
microservices in one place. So you can store the configuration for different environments of different
microservices in just one centralized location, and Spring Cloud Config Server can be used to expose
that configuration to all the microservices.
The next challenge we talked about was the dynamic scale up and scale down.
We'll also use Feign in this course. We would use Spring Cloud Sleuth to assign an ID to each request
across multiple components.
One of the important things about microservices is that these microservices have a lot of common
features: for example logging, security, analytics, and things like that. You don't want to implement
all these common features in every microservice.
API Gateways provide great solutions to these kinds of challenges. We will use a Netflix Zuul API
gateway in this course.
72
When they build applications as a combination of microservices which can communicate with each
other
using simple messages each of these microservices can be built in different technologies.
Typical more monolith applications people would not have that flexibility.
For example Microservice one might be java MicroService two might be Nodejs Microservice three
might be
written in Kotlin and tomorrow there might be a language X Y Z which is really doing well and which
provides a lot of benefits to you and you can easily create a micro service in that specific language.
And also for the new Microservices that we create we can bring in new processors as well.
Advantage 2:
Dynamic scaling
Consider an online shopping application like Amazon. During Black Friday there might be a huge amount
of load on the application, and during the rest of the year there might not be so much load.
If your microservices are cloud enabled, they can scale dynamically: you can procure hardware when the
load goes up and release it when the load goes down.
Advantage 3:
Faster release cycles, that is, bringing new features faster to market
This means that you can bring new features faster to market, and that's a big advantage to have in the
modern world.
====================================
1. What's NEW in V2
As part of the latest versions of Spring Boot and Spring Cloud, you'd be using Spring Cloud
LoadBalancer instead of Ribbon. In the earlier versions of Spring Boot and Spring Cloud, Ribbon was
used as the load balancer; however, we would be using Spring Cloud LoadBalancer as the load balancer in
this specific section.
We would be using Spring Cloud Gateway instead of Zuul, and we will be using Resilience4j instead of
Hystrix as the circuit breaker.
In the next section, we will be using Docker to containerize microservices; we will be running all the
microservices using Docker and Docker Compose. In the section after that, we would be jumping into
Kubernetes; we will be orchestrating all our microservices with Kubernetes.
So here is the section on Microservices with Spring Cloud V2, with Docker and Kubernetes sections following it.
===============================
10. Step 06 - Connect Spring Cloud Config Server to Local Git Repository - V2:
Let's now connect Spring Cloud Config Server to the Git repository.
The way we can do that is to go to the application.properties of the Spring Cloud Config Server and
configure the location of the Git repository.
What is the folder where we created our repository? Let's do a pwd (present working directory) to find out.
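As a rough sketch (the application name and the folder path are illustrative; the path is whatever pwd showed on your machine, and 8888 is the port used later in this course), the Config Server's application.properties could look like this:

spring.application.name=spring-cloud-config-server
server.port=8888
# Point the Config Server at the local Git repository that holds the property files
spring.cloud.config.server.git.uri=file:///path/to/git-localconfig-repo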
Connect Limits Service to Spring Cloud Config Server (I had connected this before the tutor's explanation):
How do we connect the Limits Service to the Spring Cloud Config Server?
We need the client project (dependency) that enables the Limits Service to talk to the centralized configuration.
The next thing that we would want to do is to configure the URL of the Cloud Config Server in the
application.properties.
What is the URL of the Cloud Config Server? The URL of the Cloud Config Server is http://localhost:8888,
so this is the value that we would want to configure as the URL of the Cloud Config Server.
Make sure that you are configuring this in the application.properties of limits-service.
So, what we want to do is import the configuration for the application from the Cloud Config Server.
limits-service is the application name that we have configured, and the profile that is being used is default.
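A minimal sketch of the relevant entries in the limits-service application.properties (assuming the Spring Boot 2.4+ config import syntax used in this course):

spring.application.name=limits-service
# Import the configuration for this application from the Spring Cloud Config Server
spring.config.import=optional:configserver:http://localhost:8888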
One thing we already saw is that the values in the local application.properties have lower priority
compared to the values coming from the Config Server.
We have now connected the Limits Microservice, through the Spring Cloud Config Server, to the Git repo.
If I want to add configuration for another microservice, how do I do that?
All that I'd need to do is to copy limits-service.properties and, let's say, call it microservice-x.properties.
The biggest advantage of going with this approach is that all the configuration related to your
applications is now centralized.
And if I would want separate properties for microservice-x in dev and QA, I can create dev and QA
files as well, as shown below.
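For example, the property files in the Git repository might look like this (microservice-x is just the illustrative name used above):

limits-service.properties
microservice-x.properties
microservice-x-dev.properties
microservice-x-qa.properties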
By separating out the configuration from your applications, you are making your operations easier.
Your operations team can control all configuration related to all microservices for multiple
environments in a single location.
So, whenever they want to make a change to an application's configuration, they make a change in the
Git repository.
When I'm calling the Currency Exchange Microservice from the Currency Conversion Microservice, it would
help us identify whether our load balancers and the naming servers are working properly if we can track
which instance of Currency Exchange is providing the response back. To do that, we'll add in a variable
in here. The variable I would want to add in is a String variable, and I'll call it environment.
In this environment variable, we'll send a few environment details back in the response of the REST API.
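A minimal sketch of how the port could be read from Spring's Environment and sent back in the response. The CurrencyExchange bean with its environment field, its constructor, and the controller name are assumptions following the naming used in these notes:

import java.math.BigDecimal;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CurrencyExchangeController {

    @Autowired
    private Environment environment;

    @GetMapping("/currency-exchange/from/{from}/to/{to}")
    public CurrencyExchange retrieveExchangeValue(@PathVariable String from, @PathVariable String to) {
        CurrencyExchange currencyExchange = new CurrencyExchange(1000L, from, to, BigDecimal.valueOf(65));
        // Send back the port this instance is running on, so we can see which instance answered
        String port = environment.getProperty("local.server.port");
        currencyExchange.setEnvironment(port);
        return currencyExchange;
    }
}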
Let's now make it fetch some details from the database. We would be using an in-memory database called
H2, and we will be using JPA to talk to the in-memory database. To be able to talk to the database, we
need to add the appropriate dependencies.
It gives you a good introduction to JPA, Hibernate, and Spring Data JPA.
To be continued after the JPA section.
The image below shows that Spring configured the H2 dialect through its auto-configuration, because we
added the H2 dependency in pom.xml.
We would not want to use a random database URL like jdbc:h2:mem:baa68bb2-8fcd-4c1c-b8a6-0df42e3b7a4e.
# Ask Spring to create the schema from the entities through Hibernate (not using schema.sql) before running the data script
spring.jpa.defer-datasource-initialization=true
https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto.data-initialization
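A rough application.properties sketch for the H2 + JPA setup described above (the fixed database name testdb is an illustrative choice, so the URL does not change on every restart):

spring.datasource.url=jdbc:h2:mem:testdb
spring.h2.console.enabled=true
spring.jpa.show-sql=true
# Let data.sql run only after Hibernate has created the tables from the entities
spring.jpa.defer-datasource-initialization=true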
21. Step 14 - Create a JPA Repository - V2:
What I would want to do is to search the table by from and to. So, based on the from column and the to
column, I'd want to retrieve the matching row.
To be able to do that, you can extend the repository interface and add in a few methods:
you can add findBy, followed by the column names you want to find by, FromAndTo.
So, you'd want to add a findByFromAndTo method, as shown below.
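A minimal sketch of such a repository, assuming a CurrencyExchange entity with from and to columns and a Long id:

import org.springframework.data.jpa.repository.JpaRepository;

public interface CurrencyExchangeRepository extends JpaRepository<CurrencyExchange, Long> {
    // Spring Data JPA derives the query from the method name: WHERE from = ? AND to = ?
    CurrencyExchange findByFromAndTo(String from, String to);
}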
Step 15 - Setting up Currency Conversion Microservice - V2:
25. Step 17 - Invoking Currency Exchange from Currency Conversion
Microservice - V2:
I can see that there is a response coming back: the quantity is 15, the conversionMultiple is 65, the
totalCalculatedAmount is 975, and the environment (the port where it's getting the response back from)
is 8000.
26. Step 18 - Using Feign REST Client for Service Invocation - V2:
we had to write a lot of tedious code around RestTemplate to get the Currency
Conversion service to talk with the Currency Exchange Microservice.
To make a simple REST API call, we need to write about 20 lines of code. Imagine what would happen if,
in a microservice architecture, you have hundreds of microservices calling each other; you'd need to
repeat this kind of code everywhere. And that's where Spring Cloud provides you with a framework called
Feign.
Feign makes it really, really easy to call other microservices. To make use of Feign, we need to add
the Open Feign starter dependency: we can copy the starter-config dependency entry and change the
artifactId to spring-cloud-starter-openfeign (groupId org.springframework.cloud).
@FeignClient is an annotation for interfaces declaring that a REST client with that interface should be
created (e.g. for autowiring into another component).
The service name, host, and port number are part of the proxy class, and the remaining URL resource
path is part of the method definition in the Feign proxy class, as shown below.
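A sketch of such a Feign proxy, following the naming used in these notes (CurrencyExchangeProxy, CurrencyConversion); the hard-coded url is removed later once Eureka is in place, and @EnableFeignClients also needs to be added on the Spring Boot application class:

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Service name, host and port live on the proxy; the resource path lives on the method
@FeignClient(name = "currency-exchange", url = "localhost:8000")
public interface CurrencyExchangeProxy {

    @GetMapping("/currency-exchange/from/{from}/to/{to}")
    CurrencyConversion retrieveExchangeValue(@PathVariable String from, @PathVariable String to);
}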
27. Step 19 - Understand Naming Server and Setting up Eureka Naming Server - V2:
So, if I want the Currency Conversion Service to talk to a different instance of the Currency Exchange
Service, I need to go here and change the URL to 8001, 8002, and so on.
The state we would want to be able to get to is we'd want to be able to dynamically launch Currency
Exchange instances and distribute load between them.
Right now, let's say this instance is on port 8000. this is on 8001, and 8002.
As instances come up and go down, we'd want to be able to automatically
discover them and load balance between them.
Even if, let's say, Feign provided an option where you can hard code multiple URLs in here, that would
not be a good solution because, let's say, 8000 went down and a new instance was brought up on 8002.
Then you would have to change the configuration of this application, or the code of this application,
all the time.
And that's the reason why we go for something called a Service Registry or a
Naming Server.
What would happen is that, in a microservice architecture, all the instances of all the microservices
would register with the Service Registry.
So, the Currency Conversion Microservice would register with the Service Registry, the
Currency Exchange Microservice would register with the Service Registry and all the other
microservices also would register with the Service Registry.
And let's say the Currency Conversion Microservice wants to talk to the Currency Exchange
Microservice.
It would ask the Service Registry, hey, what are the addresses of the Currency Exchange
Microservice?
The service registry would return those back to the Currency Conversion Microservice and
then the Currency Conversion Microservice can send the request out to the Currency
Exchange Microservice.
So, all the instances would register with the Naming Server or the Service Registry and
whenever Currency Conversion Microservice wants to find out what are the active instances,
it asks the Naming Server, gets the instances, and load balances between them.
What we are creating in here is a Eureka server, and we don't want it to register with itself. To do
that, there are a couple of properties that we would want to add in.
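A sketch of those properties for the Eureka server (8761 is the conventional Eureka port; the application name is illustrative):

spring.application.name=naming-server
server.port=8761
# This application is the registry itself, so it should not register with or fetch from Eureka
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false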
All that we needed to do was a very, very simple thing: add a dependency. So, add the Eureka client
dependency and that's all that is needed for you to connect with Eureka.
I'd typically also configure the naming server URL directly in the application.properties.
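For example, each client microservice could point at the naming server like this (assuming Eureka runs on the default port 8761):

eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka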
31. Step 22 - Load Balancing with Eureka, Feign & Spring Cloud LoadBalancer -
V2:
Now, what I would want to do is go to the currency exchange proxy, and all that I would need to do is
remove the hard-coded URL. What we want is for the Feign client to talk to Eureka, pick up the
instances of currency-exchange, and load balance between them.
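In other words, the proxy would only carry the Eureka service name, roughly like this (a sketch; the rest of the interface stays as before):

// No url attribute: Feign asks Eureka for instances of currency-exchange
// and the load balancer distributes requests between them
@FeignClient(name = "currency-exchange")
public interface CurrencyExchangeProxy {
    // ... same @GetMapping method as before ...
}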
If you look at the dependency hierarchy which is present in here and just type in load (l-o-a-d,
standing for load balancer), you'd see that there is a load balancer, spring-cloud-starter-loadbalancer,
which is brought into the classpath by spring-cloud-starter-netflix-eureka-client. This is the load
balancer framework that is used by Feign to distribute the load among the multiple instances which are
returned by Eureka.
This is client-side load balancing, and it comes for free for you.
You would see that typically within 15 to 30 seconds, all the changes are
reflected and the load balancing
will be done between all the available active instances at that particular point in
time.
Imagine there are different URLs for different microservices. At the end, after deploying these in
production, some issues come into the picture and we need to refactor the code.
To fix those issues, the user service should be split into two microservices: a user-profile
microservice and a session microservice.
But hang on, the UI guy is using it. So you walk up to him and say, "Hey listen, that user microservice
you're calling, you need to update the API." "Why?" "Because I'm refactoring the code."
Then you bring in a microservice that will act as a gateway. You can either write your own or use one
of the available technologies. There's really nothing special to this: you're basically creating a
bunch of APIs in accordance with the public API that you've decided to expose, but all each one does
internally is call one of your existing microservices and pass along the response.
This technique is also called API composition: you're composing an API out of other existing APIs.
There is a lot you can do to take advantage of it. For example:
1) You can add some kind of monitoring system that measures how many requests are coming in, how long
they're taking, and all that stuff; this is great for operations and support teams.
2) You can authenticate users here and pass security tokens like JWT.
3) You can implement security measures and prevent things like denial-of-service attacks.
4) You can prevent access for certain users and IPs.
What are the disadvantages of using the API gateway pattern?
Disadvantages:
1) You've added a network hop here, so things are going to be a little bit slower. There's really
nothing you can do about it, since the pattern kind of requires it.
2) How many of these API gateways do you need? Do you just create one? Well, one could be a problem.
You build microservices with fault tolerance and redundancy in mind, so that even if some of these
instances were to go down, the system will still function. What if your one API gateway goes down?
Since it's a single entry point, your whole system goes down as a result. That's not good. So, yes, you
can create multiple API gateways and split your incoming calls across them using things like load
balancers and elastic IPs.
3) The other problem with the API gateway is that it can technically get a little too complicated. So,
let's say you have a web client for your microservices... Some people call this pattern the
'backend for frontend' (BFF).
In the older versions of Spring Cloud, the popular API gateway to use was Zuul.
Spring Cloud has moved on and now the recommended option as an API gateway is Spring
Cloud Gateway. In this step, let's get started with implementing a Spring Cloud Gateway and
look at the features it brings in.
34. Step 23 - Enabling Discovery Locator with Eureka for Spring Cloud Gateway:
In the previous step, we launched up our Spring Cloud API gateway. In this step, let's start playing with
it.
One of the things we did earlier is we registered the API gateway with the Eureka naming server.
What we are doing in here is passing in the name that currency-exchange is registered with, and we'd
want the API gateway to talk to Eureka with that name, find the server, and then execute requests to
that URL.
If you'd want to implement things like authentication, you can implement them on API Gateway and
you can only allow those things which are authenticated in your microservices.
So, all the authentication logic can be implemented on the API gateway.
In this step, we learned how we can proxy requests through the API gateway to other microservices using
Eureka and the Spring Cloud Gateway Discovery Locator feature.
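The Discovery Locator behaviour is switched on through properties in the gateway's application.properties; a sketch (lower-case-service-id lets us use lowercase service names in the URL):

spring.cloud.gateway.discovery.locator.enabled=true
spring.cloud.gateway.discovery.locator.lower-case-service-id=true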
In the previous step, we learned how to build routes through the Eureka Naming Server and redirect
all the requests through the API Gateway.
In this step, let's look at how you can build custom routes.
One of the options to build custom routes is to create a configuration class.
Let's call this builder; using this builder, we can customize the routes which we would want to use.
What we want to do is to create the route locator mapping and return it back.
So, return builder.routes().build(); this is when we are not customizing any routes in here.
I can start customizing it: .route(...), and over here I can define a function defining the route.
If the path of the request is /get, then what do we want to do?
http://httpbin.org:80 is a simple HTTP request & response service and it is a public site.
You can see that you are getting a response back from a specific server: it sends back a few headers
and the origin.
So, I can say filters and configure a lambda function for the filter as well: f, then the lambda body.
What do we want to do as part of the filter? We'd want to, let's say, add a request header or a request
parameter.
So, I would rather have a URL of this kind than this kind.
What I would want to do is redirect it using the naming server, and I would also want to do load
balancing. So, if a request starts with /currency-exchange, what I would want to do over here is talk
to Eureka, find the location of this service, and load balance between the instances which are returned.
//if we want to redirect currency-conversion-new to currency-conversion-feign
http://localhost:8765/currency-conversion-feign/from/USD/to/INR/quantity/10
But what we would need to do is define a regular expression identifying the next part of the path as a
segment: (?<segment>.*). This is the thing which we have added in, within parentheses, and following
this, over here, we'd want to use the same segment.
(?<segment>.*) means: capture any sequence of characters (zero or more) into a group named segment,
which we can then reference as ${segment} in the rewritten path.
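Putting the pieces above together, a sketch of such a configuration class (the header and parameter values and the bean name are illustrative; the lb:// scheme tells the gateway to resolve the name through Eureka and load balance):

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ApiGatewayConfiguration {

    @Bean
    public RouteLocator gatewayRouter(RouteLocatorBuilder builder) {
        return builder.routes()
                // /get goes to the public httpbin service, with an extra header and parameter added
                .route(p -> p.path("/get")
                        .filters(f -> f.addRequestHeader("MyHeader", "MyURI")
                                       .addRequestParameter("Param", "MyValue"))
                        .uri("http://httpbin.org:80"))
                // /currency-exchange/** is resolved through Eureka and load balanced
                .route(p -> p.path("/currency-exchange/**")
                        .uri("lb://currency-exchange"))
                // Rewrite /currency-conversion-new/... to /currency-conversion-feign/... using the named segment
                .route(p -> p.path("/currency-conversion-new/**")
                        .filters(f -> f.rewritePath("/currency-conversion-new/(?<segment>.*)",
                                                    "/currency-conversion-feign/${segment}"))
                        .uri("lb://currency-conversion"))
                .build();
    }
}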
In the last step, we saw how you can actually add custom filters on specific paths.
let's say, I would want to log every request that goes through the API gateway.
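One way of doing that is a global filter; a sketch (the class name LoggingFilter and the log message are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class LoggingFilter implements GlobalFilter {

    private Logger logger = LoggerFactory.getLogger(LoggingFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Log the path of every request that passes through the gateway, then continue the chain
        logger.info("Path of the request received -> {}", exchange.getRequest().getPath());
        return chain.filter(exchange);
    }
}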
1. The concerns representing single and specific functionality for primary requirements are known as core concerns.
OR
Primary functionality of the system is known as core concerns.
For example: Business logic
2. The concerns representing functionalities for secondary requirements are referred to as crosscutting concerns or system-wide concerns.
OR
The crosscutting concern is a concern which is applicable throughout the application and it affects the entire application.
For example: logging, security and data transfer are the concerns which are needed in almost every module of an application, hence they are cross-cutting concerns.
This figure represents a typical application that is broken down into modules. Each module's main concern is to provide services for its particular domain. However,
each of these modules also requires similar ancillary functionalities, such as security, logging, and transaction management. An example of crosscutting concerns is
"logging," which is frequently used in distributed applications to aid debugging by tracing method calls. Suppose we do logging at both the beginning and the end
of each function body. This will result in crosscutting all classes that have at least one function.
If you'd want to implement things like authentication for all the requests, then this might be the
right place to implement that as well.
In the last few steps, we discussed a lot about Spring Cloud Gateway. Spring Cloud Gateway is an
awesome way to route to your APIs and implement cross-cutting concerns such as
a) security
b) monitoring/metrics.
These are the best things that you can implement in a Spring Cloud Gateway.
This is built on top of Spring WebFlux and that's the reason why we needed to use the reactive
approach.
Some of the important features of Spring Cloud Gateway are it can match requests on any request
attribute.
Earlier, we saw how you can actually match based on the path.
In addition to that, actually, you can match based on a lot of different things.
So, if you scroll down, you can see that you can match based on headers, you can match based on host,
and more. You can match your request based on a lot of different parameters.
so, you can match routes on any request attribute and you can define predicates and filters.
Over here, what we are defining is how to match, and that's what is called a predicate.
We saw that Spring Cloud Gateway integrates with the Spring Cloud Discovery Client as well;
We also looked at how you can do a path rewriting using Spring Cloud Gateway.
We looked at several examples related to services which are proxied through Cloud Gateway.
We define a handler mapping and based on the handler mapping, the appropriate handler is called.
And as part of the handler, we have a number of filters which would be executed and finally, the
request would be sent out to the service, which needs to handle that request.
And when the response comes back, you can also do a lot of things with it and you can send it back to
the client.
We have talked about the fact that in a microservices architecture there is a complex call chain.
As shown in the example here, a microservice can call another microservice, that microservice might be
dependent on another microservice, and so on and so forth.
And what would happen if one of these services is down or is very slow?
If Microservice4 is down, then Microservice3 will also go down, and Microservice2 will also go down,
because they all depend on Microservice4.
Even if it's just slow, there is a corresponding impact on the other microservices too: because this
microservice is slow, the entire chain also gets impacted.
So, the questions are:
Q1) Can we return a fallback response if a service is down? If I see that Microservice4 is down, can I
return a fallback response from Microservice3?
Ans) This might not always be possible. For example, in the case of a credit card transaction or
something similar, a fallback does not make sense. But in the case of a shopping application, instead
of returning the live set of products, you might return a default set of products.
Q2) The other question to consider is: can we implement a circuit breaker pattern to reduce the load?
Ans) If I see that Microservice4 is down, instead of repeatedly hitting it and causing it to go down
further, can I return a default response back without even hitting the microservice?
Q3) Can we retry requests in case of temporary failures? If there is a temporary failure, can I retry
a few times and return a default response only when it has failed multiple times?
Q4) Can we implement rate limiting? I'd want to allow only a certain number of calls to a specific
microservice in a specific period of time.
If you are using Spring Boot, then there is a Circuit Breaker framework which is available, which is
called Resilience4j.
If you do a Google search for "Resilience4j getting started", you would land on this specific page.
In the previous versions of Spring Boot and Spring Cloud, Netflix Hystrix was the recommended circuit
breaker framework.
However, with the evolution of Java 8 and functional programming, Resilience4j has become the
recommended framework.
You'd see that we already have starter-actuator in here.
Let's start with a very, very simple feature of circuit breaker frameworks, which is called Retry.
So, what we'll do is add some logic in here which causes failures.
Let's make this fail and then focus on retry.
What I would do is create a new RestTemplate and just call some dummy API.
This URL will not work because we don't have anything launched up there.
So, let's say we are calling another API and returning its response as is.
And now, this would fail, right? So, if I just keep the code as it is, wait for the application to
restart and pick up the change, and run it, the Sample API fails: it returns an error, saying
"I/O error on GET request" for this URL.
Now, let's say this microservice which we are calling is down temporarily
and sometimes you know that if you invoke the same thing multiple times, it might give you a
response back.
So, if you add @Retry and do a Ctrl + Shift + O to organize the imports, you should see the appropriate
import being added.
What we will also add in is a simple logger:
private Logger logger = LoggerFactory.getLogger(CircuitBreakerController.class);
And if you go back and see, "Sample API received" three times.
So, what you are seeing in here is that the retry is happening three times.
So, in this method, when it's executing, if there's an exception, it would be retried thrice, and only
if the retry fails all three times would it return an error back.
Now, you might be wondering: how can I configure a specific number of retries and the retry intervals?
What we'd do is create our own specific retry configuration.
So, one of the attributes on the annotation is fallback and you can configure what is the fallback
method.
You'll get a problem, we'll see what the problem is, very shortly.
It says, UndeclaredThrowableException.
What's happening?
CircuitBreakerController.hardcodedResponse: we have a method hardcodedResponse, but it does not accept
a Throwable as an argument.
You can also have different responses returned back for different kinds of exceptions; so, you can have
different fallback methods for different kinds of exceptions.
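A sketch of what the controller ends up looking like with a retry and a fallback method (the dummy URL is deliberately one that is not up, so the call fails; the retry name and fallback response are illustrative):

import io.github.resilience4j.retry.annotation.Retry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class CircuitBreakerController {

    private Logger logger = LoggerFactory.getLogger(CircuitBreakerController.class);

    @GetMapping("/sample-api")
    @Retry(name = "sample-api", fallbackMethod = "hardcodedResponse")
    public String sampleApi() {
        logger.info("Sample API received");
        // Call a dummy URL that is not up, so that the call fails and gets retried
        ResponseEntity<String> entity = new RestTemplate()
                .getForEntity("http://localhost:8080/some-dummy-url", String.class);
        return entity.getBody();
    }

    // Fallback method: accepts the exception as a parameter and returns a default response
    public String hardcodedResponse(Exception ex) {
        return "fallback-response";
    }
}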
So, for example, I can configure what should be the interval between retries.
If an API call is failing, I would wait for a little while and then make the API call again.
I can configure how much time to wait for; the configuration is waitDuration, and you can configure,
say, 1 second. Now you'd see that the response takes a little bit more time.
The third retry happened after almost two and a half seconds,
and the next retry took almost four seconds.
So, you can see that each subsequent retry is taking longer and longer, and that's what is called
ExponentialBackoff.
So, whenever you are working on the Cloud, for example AWS, and you are working with some API,
most of the APIs use exponential backoff.
So, if I'm calling an API and the first attempt is failing, what it would do is it would make the next
attempt after, let's say, one second. The subsequent attempt, that's the third attempt, will be made
after two seconds.
The attempts after that would be made after four, eight, 16 seconds, and so on;
that's one of the features that the Resilience4j Retry module also supports.
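A sketch of the kind of retry configuration that can go into application.properties (property names follow the resilience4j Spring Boot conventions; the values are illustrative):

# Retry the sample-api method up to 5 times
resilience4j.retry.instances.sample-api.max-attempts=5
# Wait 1 second between retries
resilience4j.retry.instances.sample-api.wait-duration=1s
# Make each subsequent wait longer (exponential backoff)
resilience4j.retry.instances.sample-api.enable-exponential-backoff=true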
In this step, we looked at the retry features which are present in resilience4j.
Use case:
You'd just give the service a little bit of time and then call it again.
That's the kind of scenario where we would go for the Circuit Breaker pattern.
In the previous step, we played with Retry. In this Step, let's play with Circuit Breaker.
I'll start with the default configuration without any specific configuration as such.
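To switch from retry to a circuit breaker, the annotation on the sample API is changed; a sketch inside the same CircuitBreakerController as before (using the default configuration, hence the name "default"):

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;

@GetMapping("/sample-api")
@CircuitBreaker(name = "default", fallbackMethod = "hardcodedResponse")
public String sampleApi() {
    // Same failing RestTemplate call as before; the circuit breaker watches the failures
    return new RestTemplate()
            .getForEntity("http://localhost:8080/some-dummy-url", String.class)
            .getBody();
}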
Now, to see the magic of CircuitBreaker, what we would need to do is to fire a lot of requests to this
specific API.
You can even manually fire these requests as well by just refreshing the screen.
What I'm doing is running the watch command with curl, pasting in the URL. Let's see what happens.
You can see that it's actually sending a request every two seconds.
I want to be able to send 10 requests per second, so what I'm saying in here is: send a request every
0.1 seconds.
It's not really important for you to replicate this in your local machine.
If you just refresh it 100 times, then you should be able to see it in action too.
and after that, you'd see that there are requests going on.
The CircuitBreaker is returning the response back without even calling this method.
Let's see that in action again. So, you can see that there is no log being generated in here.
the fallback method is being called and the response is returned back.
However, you can see that there are requests being fired.
It will break the circuit and it will directly return a response back.
Now, you might be wondering, how do I know if the microservice is back up and I can call it again?
And to be able to answer that, we'd need to understand how a CircuitBreaker works.
If you go to the Resilience4j documentation that we looked at earlier and click CircuitBreaker, you'd
see that a CircuitBreaker can be in one of three states:
1) CLOSED,
2) OPEN, and
3) HALF_OPEN.
CLOSED is when I am calling the dependent microservice continuously; in a closed state, I'll always be
calling the dependent microservice.
In an OPEN state, the CircuitBreaker will not call the dependent microservice at all; it directly
returns the hardcoded or fallback response.
In a HALF_OPEN state, the CircuitBreaker sends only a percentage of requests to the dependent
microservice and, for the rest of the requests, it returns the hardcoded or fallback response.
For example, when you start the application up, the CircuitBreaker is typically in a closed state.
Let's say I'm calling the dependent microservice 10,000 times and I see that all of them are failing,
or that 90 percent of them are failing; then the CircuitBreaker switches to an open state.
Once it switches to an open state, it waits for a little while (there's a wait duration that you can
configure) and then moves to the half-open state.
During the half-open state, the CircuitBreaker tries to see if the dependent microservice is up.
It sends a percentage of the requests (you can configure how much, let's say 10% or 20%) to the
dependent microservice, and if it gets proper responses for those, it goes back to the closed state.
If it does not get proper responses, it goes back to the open state.
After a certain number of failures, the CircuitBreaker went into the open state, and when it goes into
an open state, you can see that there is a gap between these two log intervals: almost a 40-second gap.
After that, you can see that it's making fewer requests, only about 10 requests per minute.
If the service is back up, it would go back to the closed state.
So, what it does is wait for about one more minute and then retry again.
You can see that at 14:44 it's again trying to make a few requests; it sees that the service is still
down, and at 14:45 it tries again.
So, every minute or so (you can configure the duration), the CircuitBreaker retries a few requests and
sees if it's getting a response back.
If you scroll further down in the documentation, you can see that there is a wide range of things you
can configure on the CircuitBreaker. failureRateThreshold configures the failure rate threshold in
percentage: when the failure rate is equal to or greater than the threshold, the CircuitBreaker
transitions to open. slowCallRateThreshold defaults to 100.
You can also configure a specific slow call duration: any call which takes more than 60 seconds (this
is the default value; you can configure your own value) is treated as a slow call, and if 100% of the
calls, by default, are slow, then the circuit again becomes open.
So, you can see that there is a wide range of configurations that are possible in here.
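For example, some of these could be tuned from application.properties (a sketch; the values are illustrative, not the defaults):

# Open the circuit when 90% of the recent calls have failed
resilience4j.circuitbreaker.instances.default.failure-rate-threshold=90
# Treat any call slower than 2 seconds as a slow call
resilience4j.circuitbreaker.instances.default.slow-call-duration-threshold=2s
# Open the circuit when 100% of the recent calls are slow
resilience4j.circuitbreaker.instances.default.slow-call-rate-threshold=100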
I would recommend you to spend some time looking at all the different configurations that are
possible.
If you would want to see how this can be configured for Spring Boot, the best place to look at would
be the Spring Boot 2 documentation.
42. Step 29 - Exploring Rate Limiting and BulkHead Features of Resilience4j - V2:
In this step, let's try and look at the Rate Limiting and the BulkHead features.
Let's start with the rate limiting.
Basically, rate limiting is all about saying: in 10 seconds, I want to allow only 10,000 calls to the
sample API. So, we are setting a time period, and during that time period we only want to allow a
specific number of calls. For this specific API method, I only want to allow 10,000 calls.
For all the API methods which are present in here, you can set different rate limits; you can have
different names for each of these APIs, as shown in the sketch below.
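A sketch of rate limiting the sample API (the annotation name ties the method to the matching configuration entry; 10s / 10000 mirrors the example above):

import io.github.resilience4j.ratelimiter.annotation.RateLimiter;

@GetMapping("/sample-api")
@RateLimiter(name = "default")
public String sampleApi() {
    return "sample-api";
}

# Allow at most 10000 calls every 10 seconds
resilience4j.ratelimiter.instances.default.limit-for-period=10000
resilience4j.ratelimiter.instances.default.limit-refresh-period=10s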
I'll comment out the RestTemplate call and just return "sample-api" back.
I was running the watch command earlier and I would now run it with -n 1, so I'm sending a request
every second to the sample API.
You can see that some of the requests are successful (they return "sample-api" back), and if I go to
the browser and fire a few requests, one succeeds and the next one gets an error.
It says: RateLimiter 'default' does not permit further calls. So, you can see that the RateLimiter is
cutting the service off.
In addition to the RateLimiter, you can also configure how many concurrent calls are allowed; that's
called a Bulkhead. For each of the APIs inside a microservice, you can configure a bulkhead, as
sketched below.
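A sketch of a bulkhead on the same sample API (max-concurrent-calls is the resilience4j property that caps concurrency; the value is illustrative):

import io.github.resilience4j.bulkhead.annotation.Bulkhead;

@GetMapping("/sample-api")
@Bulkhead(name = "sample-api")
public String sampleApi() {
    return "sample-api";
}

# Allow at most 10 concurrent calls into this method
resilience4j.bulkhead.instances.sample-api.max-concurrent-calls=10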