Interview 3
What is ESB?
Mule, the runtime engine of Anypoint Platform, is a lightweight Java-based enterprise service
bus (ESB) and integration platform that allows developers to connect applications together
quickly and easily, enabling them to exchange data.
Below is a quick ESB selection checklist. To read a much more comprehensive take on when to select
an ESB, read this article written by MuleSoft founder and VP of Product Strategy Ross Mason: To ESB
or not to ESB.
● Do you need message routing capabilities such as forking and aggregating message flows, or content-based routing?
Why Mule?
Mule is lightweight but highly scalable, allowing you to start small and connect more applications
over time. The ESB manages all the interactions between applications and components
transparently, regardless of whether they exist in the same virtual machine or over the Internet, and
regardless of the underlying transport protocol used.
There are currently several commercial ESB implementations on the market. However, many of
these provide limited functionality or are built on top of an existing application server or messaging
server, locking you into that specific vendor. Mule is vendor-neutral, so different vendor
implementations can plug in to it. You are never locked in to a specific vendor when you use Mule.
● Mule components can be any type you want. You can easily integrate anything from a "plain
old Java object" (POJO) to a component from another framework.
● Mule and the ESB model enable significant component reuse. Unlike other frameworks,
Mule allows you to use your existing components without any changes. Components do not
require any Mule-specific code to run in Mule, and there is no programmatic API required.
The business logic is kept completely separate from the messaging logic.
● Messages can be in any format from SOAP to binary image files. Mule does not force any
design constraints on the architect, such as XML messaging or WSDL service contracts.
● You can deploy Mule in a variety of topologies, not just ESB. Because it is lightweight and embeddable, Mule can dramatically decrease time to market and increase productivity, delivering secure, scalable applications that are adaptive to change and can scale up or down as needed.
● Mule's staged event-driven architecture (SEDA) makes it highly scalable. A major financial
services company processes billions of transactions per day with Mule across thousands of
Mule servers in a highly distributed environment.
Each of these API personas has multiple tasks associated with it, and those tasks define the characteristics of the API (see Figure 1).
Let's take a look at the API lifecycle in another dimension (see Figure 2) and understand the different
stages of the lifecycle. Here, you can see how the various personas deal with APIs in different
stages.
Figure 2: API lifecycle and personas.
The API manager, API publisher, and API consumer are personas who may be represented by people
with various designations in an organization.
For the sections below, the numbers represent the steps as given in Figure 2.
The API manager (a persona who could be represented by people with designations such as API
product manager or API architect) prepares an overall plan on how to expose an enterprise’s digital
assets using APIs. The plan can include identifying the list of APIs, their design (including parameters
and types), visibility scope, etc. Last but not least, the API manager also makes sure that the API's documentation is comprehensive. There is a popular saying in the API world: "An API is only as good as its documentation." Hence, API documentation is a task that should be done with utmost clarity.
Once the API plan is ready, the API publisher (who can be represented by people with designations such as software developer or software architect) brings APIs to life by creating them as part of the core app development process. Note that, quite often, the responsibilities of the API manager and the API publisher are carried out by one person, such as an enterprise architect or someone in a similar role.
3. Versions
The API undergoes further tweaks to its design configuration (i.e., path or query parameters) as per the requirements. It's tested, versioned, and published to a private enterprise DevPortal. Whether an API is private or public is defined by the API's visibility settings and the governance semantics around it. You normally see vendors using API frameworks such as Swagger, and the tooling around them, to do the above tasks easily.
4. Features
The API manager can apply more management configurations based on requirements. For instance, the API manager may limit free API invocations to a finite number and make the consumer pay for any invocations beyond the configured limit. He or she can also configure analytic reports on various APIs and can create plans that can be subscribed to by the consumer. At this point, all APIs are assumed to be published and available for consumption through an API DevPortal (check out PayPal's API dev portal). APIs can be internal or external, and similarly, an API DevPortal can be a public portal like PayPal's or a private portal accessible only within an enterprise network.
The above four steps (as seen in Figure 2) explored the API lifecycle from the perspective of an API publisher and an API manager.
Now, we will explore the API lifecycle from an API consumer perspective. The API consumer persona
is usually an app developer or an app architect who consumes or integrates the APIs as a part of the
app development process.
In the API DevPortal, the consumer has the ability to search for the desired APIs and subscribe to a
particular API or an API plan associated with a group of APIs as defined by the API manager. The API
consumer creates a DevPortal account and registers his or her app to consume the needed APIs.
Usually, an API key is generated per app and is used during the runtime to authenticate the app
accessing the API. Consuming the API is the last step in the lifecycle of the API. When you are able to
successfully integrate the API into your app and get the desired results, the cycle is complete.
Note: A few API management vendors use another lifecycle step of engage and promote where the
APIs are promoted in various forums to the end consumers (who are developers). I have not touched
that aspect of the lifecycle, since it may not be that relevant from the ADD perspective.
API-led connectivity not only depends on three categories of reusable APIs to compose new services and capabilities, but also decentralizes and democratizes access to enterprise data. Central IT produces reusable assets, and in the process unlocks key systems, including legacy applications, data sources, and SaaS apps. Central IT and other teams can then reuse these API assets and compose process-level information. Then, app developers can discover and self-serve on all of these reusable assets, creating the experience layer of APIs and ultimately the end applications. This API-led approach to integration increases agility, speed, and productivity.
Feasibility: to use some other system, we should have a suitable architecture in place.
Q: I have 2 GB of data for a transaction. How will you implement it? If you implement it with batch, why are you using that instead of the alternatives? And if I have 1,000 GB of data, in that case how do batch jobs load it into the process phase?
Batch processing
Transactions in Mule
Transactions are operations in a Mule app for which the result cannot remain indeterminate.
When a series of steps in a flow must succeed or fail as one unit, Mule uses a transaction to
demarcate that unit.
Go to the queue assignment in the platform and check the log console; there we will find the issue.
Q: I initiated two transactions at a time, one for an update and one for checking the balance. How do you manage this scenario?
A: Transactional scope.
Q: Batch processing is asynchronous. If I don't want asynchronous processing, how will you process transactions?
A: Multiple batch processes can be used; for a single transaction, a batch process is not needed.
Career growth
Q: What are synchronous and asynchronous processing?
A: Synchronous uses a single thread; asynchronous uses multiple threads for performance, as in batch processing.
Q: What challenges have you faced so far in what you implemented, e.g., in RAML?
A: The fragments and traits I used had a lot of issues; the traits used schemas, which also had some problems.
APIkit console >> resources created >> GET, PUT, POST created and all the data implemented.
Software developer
After the work is done, we update and assign it; Jenkins builds it, and after building we hand it over to the testing teams for testing.
Agile methodology
Q: What are the integration patterns you used?
Integration Styles
Messaging Systems
Message Channel: Mule provides a message channel that connects the message processors in a flow.
Pipes and Filters: A flow implements a pipes-and-filters architecture.
Message Router: Message routers.
Message Translator: Message transformers.
Message Endpoint: Message sources and operations.
Messaging Channels
Message Construction
Message Routing
Message Transformation
Messaging Endpoints
22: What is Mule Cache Scope and what are its storage types?
A: Caching in Mule ESB can be done with the Mule Cache Scope. The Mule Cache Scope has
3 storage types.
In-memory: This stores the data in system memory. Data stored in-memory is non-persistent, which means that in case of an API restart or crash, the cached data will be lost.
Configuration Properties:
● Store Name
● Maximum number of entries
● TTL (Time to live)
● Expiration Interval
Managed-store: This stores the data in a place defined by a ListableObjectStore. Data stored in a managed store is persistent, which means that in case of an API restart or crash, the cached data will not be lost.
Configuration Properties:
● Store Name
● Maximum number of entries
● TTL (Time to live)
● Expiration Interval
● Persistence (true/false)
Simple-text-file-store: This stores the data in a file. Data stored with the simple-text-file-store configuration is persistent, which means that in case of an API restart or crash, the cached data will not be lost.
Configuration Properties:
● Store Name
● Maximum number of entries
● TTL (Time to live)
● Expiration Interval
● Persistence (true/false)
● Name and location of the file
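As a rough illustration, a Mule 3 caching strategy is typically wired up in XML along these lines; the store name, TTL values, listener config, and flows are illustrative, not taken from the text above:

<ee:object-store-caching-strategy name="Caching_Strategy">
    <!-- In-memory store: non-persistent, entries expire after 60 s -->
    <in-memory-store name="myInMemoryStore"
                     maxEntries="500"
                     entryTTL="60000"
                     expirationInterval="30000"/>
</ee:object-store-caching-strategy>

<flow name="cachedLookupFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/lookup"/>
    <!-- Everything inside the cache scope is skipped on a cache hit -->
    <ee:cache cachingStrategy-ref="Caching_Strategy">
        <flow-ref name="expensiveLookup"/>
    </ee:cache>
</flow>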
28. What is the difference between the SOAPKit Router and the APIkit Router, and when to use which?
A: The SOAPKit Router is used when a SOAP service is to be developed from a WSDL. The SOAPKit Router reads the configured WSDL, validates the incoming request message (if configured to), and sends it to the appropriate flow for processing based on the operations configured in the WSDL.
The APIkit Router is used when a REST-based service is developed from a RAML. The APIkit Router reads the configured RAML, validates the incoming request message (if configured to), and sends it to the appropriate flow for processing based on the HTTP methods in the RAML.
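As a sketch, an APIkit router wiring in Mule 4 XML might look like this; the RAML file, listener config, and /articles resource are assumptions, and APIkit dispatches to flows named after the RAML method and resource:

<apikit:config name="api-config" api="api.raml"/>

<flow name="api-main">
    <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
    <!-- Routes each request to the flow matching its method and resource -->
    <apikit:router config-ref="api-config"/>
</flow>

<!-- Backing flow for GET /articles, following the APIkit naming convention -->
<flow name="get:\articles:api-config">
    <set-payload value='[{"id": 1}]' mimeType="application/json"/>
</flow>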
29. What is the use of the Scatter-Gather flow control?
A: Parallel activities/flows can be executed in Mule by using the Scatter-Gather flow control component.
The routing message processor Scatter-Gather sends a request message to multiple
targets concurrently. It collects the responses from all routes, and aggregates them
into a single message.
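A minimal Mule 4 sketch, assuming two illustrative flows to call in parallel:

<scatter-gather>
    <route>
        <flow-ref name="callInventoryService"/>
    </route>
    <route>
        <flow-ref name="callPricingService"/>
    </route>
</scatter-gather>
<!-- The aggregated result is an object keyed by route index; each entry
     holds that route's message, e.g. payload.'0'.payload -->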
30. How do you implement and use First Successful?
A: The First Successful router iterates through its list of child routes, each containing
a set of components, until the first successful execution. If none succeed, an
exception is thrown.
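A minimal Mule 4 sketch, assuming a primary and a backup endpoint; the URLs and HTTP_Request_config (declared elsewhere) are illustrative:

<first-successful>
    <route>
        <http:request config-ref="HTTP_Request_config" method="GET"
                      url="https://primary.example.com/data"/>
    </route>
    <route>
        <!-- Tried only if the first route fails -->
        <http:request config-ref="HTTP_Request_config" method="GET"
                      url="https://backup.example.com/data"/>
    </route>
</first-successful>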
31. What is the use of Round Robin flow control?
A: The Round Robin router iterates through a list of two or more routes in order, but
it only routes to one of the routes each time it is executed. It keeps track of the
previously selected route and never selects the same route consecutively. For
example, the first time Round Robin executes, it selects the first route. The next
time, it selects the second route. If the previously selected route is the last route in
the list, Round Robin jumps to the first route.
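A minimal Mule 4 sketch, alternating between two illustrative worker flows on successive executions:

<round-robin>
    <route>
        <flow-ref name="workerOne"/>
    </route>
    <route>
        <flow-ref name="workerTwo"/>
    </route>
</round-robin>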
32. What is the use of the Until Successful flow control?
A: The until-successful scope processes a message through the processors within its scope until the operation succeeds. In Mule 3, until-successful processing occurs asynchronously from the main flow: after passing a message into the until-successful scope, the main flow immediately regains control of the thread. (In Mule 4, by contrast, the scope runs synchronously.)
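A minimal Mule 4 sketch, retrying an illustrative request up to 5 times, one second apart; the URL and request config are assumptions:

<until-successful maxRetries="5" millisBetweenRetries="1000">
    <http:request config-ref="HTTP_Request_config" method="POST"
                  url="https://flaky.example.com/submit"/>
</until-successful>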
2. Flow in Mule:
A main flow is a message-processing block that has its own processing strategy and exception-handling strategy. It covers data processing, connecting applications, event processing, etc.
Sub Flow:
Sub Flow:
1. A sub-flow always processes messages synchronously (relative to the flow that triggered its execution).
2. A sub-flow executes in the same thread as the calling process. The calling process triggers the sub-flow, waits for it to complete, and resumes once it has.
3. A sub-flow inherits its processing strategy and exception-handling strategy from the parent/calling flow.
4. It can be used to split out common logic that can be reused by other flows.
A private flow does not have a source defined. It can be synchronous or asynchronous based on the processing strategy selected. Private flows also have their own exception-handling strategy and allow you to define a different threading profile.
Private flows differ from sub-flows in terms of threading and exception handling. They have their own exception handling: an exception will be trapped by the local exception handler and will not be propagated to the main flow. This means that if an exception occurs in a private flow and is handled properly, when the call returns to the main calling flow, the next message processors continue executing.
Private flows receive the same message structure from the main calling flow, but they create a new execution context.
Contrast that with a private flow chained via flow-refs: if an exception occurs in the called flow, even though we have a rollback strategy configured, it will NOT be executed, because there is no internal queue involved.
Use a flow-ref by default, but don't hesitate to use VM transports if you need redelivery of messages.
3. What is difference between flow reference and VM?
Flow Ref routes the Mule event to another flow or sub-flow and back within a same
Mule application. This lets you treat the entire referenced flow like a single
component in your current flow.
Flow Ref breaks up the Mule app into discrete and potentially reusable units. For
example, a flow that lists files on a regular basis might reference another flow that
processes the output of the List operation. Instead of appending all the processing
steps, you can append a Flow Ref that points to the processing flow.
The VM connector creates a transport barrier in the flow: on a transport barrier, your
Mule message goes through an entire serialization and deserialization process,
which results in a new Mule message with the same payload.
VM endpoints enable message redelivery strategies to be configured in your
exception handling blocks, which is not possible with flow-refs. VMs are able to
do this because they internally use a queue to hold messages while flow-refs are
similar to simple method calls.
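A sketch of the contrast in Mule 4 XML, assuming an illustrative ordersQueue and helper flows:

<vm:config name="VM_Config">
    <vm:queues>
        <vm:queue queueName="ordersQueue"/>
    </vm:queues>
</vm:config>

<flow name="mainFlow">
    <!-- In-memory call: same event, no serialization, like a method call -->
    <flow-ref name="enrichOrder"/>
    <!-- Queue-backed call: the message is serialized onto ordersQueue and a reply is awaited -->
    <vm:publish-consume queueName="ordersQueue" config-ref="VM_Config"/>
</flow>

<flow name="ordersConsumer">
    <vm:listener queueName="ordersQueue" config-ref="VM_Config"/>
    <flow-ref name="processOrder"/>
</flow>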
1. Manage change:
90% of respondents reported that implementing Agile improved their ability to manage changing requirements.
2. Communicate better:
A key success metric for projects is communication. Agile fosters self-organizing, self-managing teams that are in daily communication with one another, solving complex issues.
The terms PATCH, PUT, and POST are often confused with each other. Many
resources promote the concept of CRUD (create, read, update, delete) applications
and tie HTTP verbs into a single part of the process. The reality is far more complex
than a short acronym, especially as you run into overlapping functionality and other
complications.
The differences between them are subtle but can make significant changes in the
way you create and update resources. It's time to shed some light on PATCH, PUT,
and POST, and when you should use each.
12. PUT vs POST vs PATCH?
POST
You use POST to create a resource and instruct the server to make a Uniform
Resource Identifier (URI) for it. For example, when you want to create a new article
you would POST to /articles to make the file and get the URI, so you end up
with /articles/1234/.
PUT
PUT also creates resources, but it does so for a known URI. So, you can PUT
to /articles/1234/. If the article doesn't exist, PUT creates it. If it does exist, this HTTP
verb updates it. While PUT seems nearly identical to POST, the difference between
the two comes down to idempotence.
Idempotence means that one request and many repeated identical requests produce the same side effects. PUT has this characteristic, while repeating a POST creates new resources each time. In general, POST works best for resource creation, while PUT handles updates.
PATCH
So, where does PATCH come into the picture? This HTTP verb partially updates a resource without sending its complete representation. When you're working with complicated objects and resources, having to update more than you need to is a big deal.
With this example...

{ "first_name": "Claude", "last_name": "Elements", "email": "claude@cloud-elements.com", "favorite_color": "blue" }

...PATCH allows you to change the color with just the JSON { "favorite_color": "purple" }.
13. MUnit?
A: MUnit is a Mule application testing framework that allows you to build automated tests for your Mule integrations and APIs. MUnit is very well integrated with Anypoint Studio.
Various features available with Mule MUnit:
● Create and build Mule tests by writing Mule code.
● Create and build Mule tests by writing Java code.
● Verify Message Processor calls.
● Mock Message Processor.
● Mock outbound endpoints.
● Disable outbound endpoints.
● Disable flow inbound connectors.
● Disable inbound endpoints.
● Debug the tests.
● View coverage reports.
● Various asserts available like Assert Equals, Assert Not Equals, Assert
Payload, Assert False, Assert True, Assert Null Payload, Assert Not Null
Payload.
● Spy Message Processor.
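For illustration, a minimal MUnit 2 test might look like the following; the flow under test (getArticlesFlow) and the mocked http:request processor are assumptions:

<munit:test name="getArticlesFlowTest" description="Asserts the flow returns a non-null payload">
    <munit:behavior>
        <!-- Mock the outbound HTTP call so the test runs in isolation -->
        <munit-tools:mock-when processor="http:request">
            <munit-tools:then-return>
                <munit-tools:payload value='#[{"id": 1}]'/>
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <flow-ref name="getArticlesFlow"/>
    </munit:execution>
    <munit:validation>
        <munit-tools:assert-that expression="#[payload]" is="#[MunitTools::notNullValue()]"/>
    </munit:validation>
</munit:test>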
Mule 3 Vs Mule 4:
The message structure of Mule 4 is completely changed now. Mule 4 has simplified Mule Event and
Mule Message to make it easier to work with properties and variables. There are no inbound or
outbound properties in Mule 4.
MuleSoft introduced attributes. Also, Mule 4 unified the variable concept: a single kind of variable replaces the separate session, flow, and record variables that Mule 3 dealt with individually.
In Mule 3, a flow starts processing a message when Mule receives a request: the Mule client retrieves an external event, triggered by some external system such as a message received on a queue or a file copied to a directory. The event is converted into a Mule message, and the flow starts processing it when it is received by an inbound endpoint in the flow.
In Mule 4, flows triggered by an event are generated when a trigger reaches the event source of
flow.
In Mule 4, each component interacts with Mule Event, then the Mule event travels sequentially
through the components of the flow.
Mule 3 message structure:
Mule Message: The Mule message is the data that is processed throughout an application via one or more flows.
Variables: Variables are used to store per-event values for use within a flow of a Mule app.
We have three types of variables: session variables, flow variables, and record variables.
In Mule 4, the Event and Message structures are simplified to make applications easier to learn and develop.
Mule Event: The container that travels through a flow; it holds the Mule message and the variables.
Message: The part of the event that carries the payload and the attributes.
Payload:
the body of the Mule message. You can access the payload using the keyword payload and its fields by using the DataWeave scripting language, e.g., payload.<fieldname>. DataWeave is the default expression language in Mule 4, as MEL (the Mule 3 expression language) was removed to avoid the stress of converting payloads into Java objects.
Accessing payload data:
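For instance, in a logger (the customerName field is illustrative):

<logger level="INFO" message="#[payload]"/>
<logger level="INFO" message="#[payload.customerName]"/>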
Attributes:
Attributes contain message metadata that can consist of headers and properties received or returned by a connector, as well as other metadata populated by the connector. The metadata available to the message varies depending on the connector. We can access attributes by using the keyword attributes. Letters, underscores, and numbers are valid in attribute names.
When any Source or Operation produces a new Message because of its execution, both parts of the
Message (the payload and attributes) are replaced with the new values, and the previous attributes
are lost. If you need to preserve any information from the attributes across operation invocations,
you can either store the previous attributes in a variable or set the target parameter in the invoked
operation.
Accessing attributes:
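For instance, with an HTTP listener as the source (the query parameter is illustrative):

<logger level="INFO" message="#[attributes.queryParams.id]"/>
<logger level="INFO" message="#[attributes.headers.'user-agent']"/>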
Variables:
Variables hold arbitrary user information such as operation results, auxiliary values, and so on.
The stored data can be any supported data type like strings, objects, and numbers.
A variable can also store the current message using the message keyword, the current message payload using the payload keyword, and the current message attributes using the attributes keyword.
Delete/remove variables:
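A minimal sketch of creating and then removing a variable (the orderId name is illustrative):

<set-variable variableName="orderId" value="#[payload.id]"/>
<remove-variable variableName="orderId"/>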
Record variables were removed because, in Mule 4, batch behaves like a scope. So, instead, we can use ordinary variables: variables created in the process phase are automatically tied to the record being processed and stored throughout the processing phase. Variables created during the process phase exist during the process phase only, while variables created before or after the batch scope exist throughout the process phase and on completion of the phase.
Accessing Variables:
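For instance, reading a variable back (using the illustrative orderId from above):

<logger level="INFO" message="#[vars.orderId]"/>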
One of the major advantages of the Mule 4 Message structure is dealing with multiple payloads.
In Mule 3, components that returned multiple payloads used a special structure called the Mule
Message Collection.
In Mule 4, any component that needs to deal with multiple messages can simply set the payload of
the message to a List of Mule Messages.
You can then iterate over these messages using DataWeave, For Each, or other components.
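As a sketch, extracting the bodies from such a list with DataWeave, assuming the payload is a List of Mule Messages:

%dw 2.0
output application/java
---
// Each item is a full Mule Message; select its body
payload map ((item) -> item.payload)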
In the Mule Batch series, we looked at the batch processing capabilities of Mule ESB 3.
At the time of writing this post, Mule 4 was already available as a Release Candidate
version.
Mule 4 offers huge improvements and changes over Mule 3, such as the introduction of DataWeave 2.0, reusable streaming, improved processing strategies, operation-based connectors, and much more. Mule 4 also changes the way batch jobs were implemented in Mule 3. In this post, we will look at batch processing in Mule 4.
If you missed the batch processing capabilities in Mule 3, then I recommend reading the articles below from the earlier series.
Input Phase: This is an optional part of the batch job that can be used to retrieve the source
data using any inbound connector.
Load and Dispatch: This is an implicit phase and Mule runtime takes care of it. In this phase,
the payload generated in the Input phase or provided to the Batch from the Caller flow is turned
into a collection of records.
Process: This is the required phase, where the actual processing of every record occurs asynchronously. Records are processed through each step as they move back and forth between steps and the intermediate queue.
On Complete: In this final but optional phase, a summary of batch execution is made available
to possibly generate reports or any other statistics. The payload in this phase is available as an
instance of BatchJobResult object. It holds information such as the number of records loaded,
processed, failed, succeeded. It can also provide details of exceptions occurring in the steps.
In Mule 4, this is changed. Batch is now a Scope like for-each, transactional, and async, and it can
be added into the flow itself. No more having different elements!
Batch Phases
There are only 3 phases in a Mule 4 Batch job compared to 4 phases in Mule 3 Batch job.
The Input phase does not exist in Mule 4: since Batch is a scope, it does not need any special Input phase. The payload of the flow is passed to the batch.
Load and Dispatch: This is still there, still implicit, and does the same thing as in Mule 3. It creates the job instance, converts the payload into a collection of records, and then splits the collection into individual records for processing.
Process: This is still a required phase and essentially the same as it was in Mule 3. It processes all records asynchronously. Batch steps in this phase still allow you to filter records using acceptExpression and/or acceptPolicy configurations.
On Complete: The last and optional phase functions the same as in Mule 3. That means you still do not get the processed results back in the calling flow; they are available as an instance of BatchJobResult in this phase only.
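As a rough illustration, a Mule 4 batch scope embedded in a flow; the scheduler, job name, and step name are assumptions:

<flow name="batchDemoFlow">
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="60000"/>
        </scheduling-strategy>
    </scheduler>
    <batch:job jobName="recordsBatchJob">
        <batch:process-records>
            <batch:step name="transformStep">
                <!-- Each record's payload passes through here asynchronously -->
                <logger level="INFO" message="#[payload]"/>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <!-- The payload here is a BatchJobResult -->
            <logger level="INFO" message="#[payload.successfulRecords]"/>
        </batch:on-complete>
    </batch:job>
</flow>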
In Mule 4, flow variables have been enhanced to work efficiently during batch processing, just like record variables did. Flow variables created in batch steps are now automatically tied to the processing record and stay with it throughout the processing phase. Record variables are no longer needed.
This also removes all special treatment we had to give to record variables during MUnit
testing.
In this test batch application, the 100K objects are created with DataWeave 2.0 and then
passed to the batch job for processing.
%dw 2.0
output application/java
---
(1 to 100000) map {
    id: $,
    "Name": "Name-" ++ ($ as String)
}
Step 2 and Step 3 then have a validation component that compares the flow variable
value with payload.id. If you run this application, not a single record fails due to
validation, which means all 100K flow variables stayed with their records!
Any flow variables created during the process phase exist during the Process phase only. Flow variables that existed before and outside the batch scope are available throughout the process phase as well as in the On Complete phase.
In the Mule 4 batch job, this has been replaced with the Batch Aggregator component. The underlying behavior and functionality of the Batch Aggregator is the same as Batch Commit. A grouped collection in the batch aggregator is a mutable collection, i.e., it allows you to modify individual records in groups or the variables associated with them. You can aggregate records for processing in two ways: in fixed-size groups or by streaming all records, as sketched below.
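A minimal sketch of the fixed-size form, assuming an illustrative step name and a group size of 100:

<batch:step name="writeStep">
    <batch:aggregator size="100">
        <!-- Receives the group of 100 records as a list, e.g. for a bulk insert -->
        <logger level="INFO" message="#[sizeOf(payload)]"/>
    </batch:aggregator>
</batch:step>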
Conclusion
Mule 4 comes with lots of improvements and enhancements. Batch processing in Mule 3 was already powerful; in Mule 4, it has been simplified for developers and is thus easier to implement.