
CTS: Interview Questions

What is ESB?

What is Mule ESB?

Mule, the runtime engine of Anypoint Platform, is a lightweight Java-based enterprise service
bus (ESB) and integration platform that allows developers to connect applications together
quickly and easily, enabling them to exchange data.

Why do we need Mule?


Mule and other ESBs offer real value in scenarios where there are at least a few integration points or
at least 3 applications to integrate. They are also well suited to scenarios where loose coupling,
scalability and robustness are required.

Below is a quick ESB selection checklist. To read a much more comprehensive take on when to select
an ESB, read this article written by MuleSoft founder and VP of Product Strategy Ross Mason: To ESB
or not to ESB.

● 1. Are you integrating 3 or more applications/services?

● 2. Will you need to plug in more applications in the future?

● 3. Do you need to use more than one type of communication protocol?

● 4. Do you need message routing capabilities such as forking and aggregating message flows,
or content-based routing?

● 5. Do you need to publish services for consumption by other applications?

Why Mule?

Mule is lightweight but highly scalable, allowing you to start small and connect more applications
over time. The ESB manages all the interactions between applications and components
transparently, regardless of whether they exist in the same virtual machine or over the Internet, and
regardless of the underlying transport protocol used.

There are currently several commercial ESB implementations on the market. However, many of
these provide limited functionality or are built on top of an existing application server or messaging
server, locking you into that specific vendor. Mule is vendor-neutral, so different vendor
implementations can plug in to it. You are never locked in to a specific vendor when you use Mule.

Mule provides many advantages over competitors, including:

● Mule components can be any type you want. You can easily integrate anything from a "plain
old Java object" (POJO) to a component from another framework.

● Mule and the ESB model enable significant component reuse. Unlike other frameworks,
Mule allows you to use your existing components without any changes. Components do not
require any Mule-specific code to run in Mule, and there is no programmatic API required.
The business logic is kept completely separate from the messaging logic.

● Messages can be in any format from SOAP to binary image files. Mule does not force any
design constraints on the architect, such as XML messaging or WSDL service contracts.

● You can deploy Mule in a variety of topologies, not just ESB. Because it is lightweight and
embeddable, Mule can dramatically decrease time to market and increase productivity for
projects, providing secure, scalable applications that are adaptive to change and can scale
up or down as needed.

● Mule's staged event-driven architecture (SEDA) makes it highly scalable. A major financial
services company processes billions of transactions per day with Mule across thousands of
Mule servers in a highly distributed environment.

What is your project architecture?

What is the API lifecycle?

In the API lifecycle, there are three primary personas:

1. API publisher: Creates and deploys the API.

2. API manager: Manages and monetizes the API.


3. API consumer: Discovers and integrates with the APIs.

Each of these API personas has multiple tasks associated with them, and those tasks define the
characteristics of the API (see Figure 1).

Figure 1: API lifecycle, personas, and tasks.

Let's take a look at the API lifecycle in another dimension (see Figure 2) and understand the different
stages of the lifecycle. Here, you can see how the various personas deal with APIs in different
stages.
Figure 2: API lifecycle and personas.

The API manager, API publisher, and API consumer are personas who may be represented by people
with various designations in an organization.

For the sections below, the numbers represent the steps as given in Figure 2.

1. Plan and Strategy

The API manager (a persona who could be represented by people with designations such as API
product manager or API architect) prepares an overall plan on how to expose an enterprise’s digital
assets using APIs. The plan can include identifying the list of APIs, their design (including parameters
and types), visibility scope, etc. Last but not least, the API manager also makes sure that the
documentation of the API is comprehensive. There is a popular saying in the API world: "An API is as
good as its documentation." Hence, API documentation is one of the tasks that should be done with
utmost clarity.

2. Create, Design, Test, and Publish

Once the API plan is ready, the API publisher (who can be represented by people with designations
such as software developer or software architect) gives birth to APIs by creating APIs as a part of the
core app development process. Note that a lot of times, you find the responsibilities of API manager
and API publisher being delivered by someone like an enterprise architect or similarly designated
persons.

3. Versions

The API undergoes further tweaking to its design configuration (i.e., path or query parameters) as per the
requirements. It's tested, versioned, and published to a private enterprise DevPortal. Whether an
API is private or public is defined by the API's visibility settings and the governance semantics around
it. You normally see vendors using API frameworks such as Swagger, and tooling around them, to do the
above tasks easily.

4. Features
The API manager can apply more management configurations based on requirements. For instance,
the API manager may limit free API invocations to a finite number and require the consumer to pay
for any invocations beyond the configured limit. He or she can also configure analytics reports on
various APIs and can create plans that can be subscribed to by the consumer. At this point, all APIs
are assumed to be published and available for consumption through an API DevPortal (check out
PayPal's API dev portal). APIs can be internal or external; similarly, an API DevPortal can be a public
portal like PayPal's or a private portal accessible only within an enterprise network.

The above four steps (as seen in Figure 2) explored the API lifecycle from the perspective of an API
publisher and an API manager.

Now, we will explore the API lifecycle from an API consumer perspective. The API consumer persona
is usually an app developer or an app architect who consumes or integrates the APIs as a part of the
app development process.

5, 6, and 7. Discovery, Authentication, and Creation

In the API DevPortal, the consumer has the ability to search for the desired APIs and subscribe to a
particular API or an API plan associated with a group of APIs as defined by the API manager. The API
consumer creates a DevPortal account and registers his or her app to consume the needed APIs.
Usually, an API key is generated per app and is used during the runtime to authenticate the app
accessing the API. Consuming the API is the last step in the lifecycle of the API. When you are able to
successfully integrate the API into your app and get the desired results, the cycle is complete.

Note: A few API management vendors use another lifecycle step of engage and promote where the
APIs are promoted in various forums to the end consumers (who are developers). I have not touched
that aspect of the lifecycle, since it may not be that relevant from the ADD perspective.

What is API-led connectivity, and how did you implement it in your project?


API-led connectivity is a methodical way to connect data to applications through
reusable and purposeful APIs. These APIs are developed to play a specific role:
unlocking data from systems, composing data into processes, or delivering an
experience.
When the entire organization adopts what is known as API-led connectivity,
everyone in the business is empowered to access their best capabilities in delivering
applications and projects through discovery, self-service, and reuse.

API-led connectivity not only depends on three categories of reusable APIs to compose
new services and capabilities, but also decentralizes and democratizes access to
enterprise data. Central IT produces reusable assets, and in the process unlocks key
systems, including legacy applications, data sources, and SaaS apps. Central IT and other
teams can then reuse these API assets and compose process-level information. Then, app
developers can discover and self-serve on all of these reusable assets, creating the
experience layer of APIs and ultimately the end applications. This API-led approach to
integration increases agility, speed, and productivity.

7. What are the APIs that enable API-led connectivity?


Ans: The APIs used in an API-led approach to connectivity fall into three categories:
Experience APIs – Experience APIs are the means by which data can be reconfigured so
that it is most easily consumed by its intended audience, all from a common data source,
rather than setting up separate point-to-point integrations for each channel. An
experience API is usually created with API-first design principles, where the API is
designed with the specific user experience in mind.
Process APIs – These APIs interact with and shape data within a single system or across
systems (breaking down data silos) and are created without a dependence on the
source systems from which that data originates or the target channels through
which that data is delivered.
System APIs – These usually access the core systems of record and provide a means of
insulating the user from the complexity of, or any changes to, the underlying systems. Once
built, many users can access data without any need to learn the underlying systems and
can reuse these APIs in multiple projects.
By building and organizing your APIs this way, and then making them discoverable and
available for the business to self-serve, API-led connectivity makes your business
composable, allowing teams throughout the business to compose, recompose, and adapt
these APIs to address the changing needs of the business.
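
As an illustration only (the flow names, configs, URLs, and fields below are hypothetical), a process API in Mule might orchestrate two system APIs roughly like this:

<!-- Hypothetical process API flow calling two system APIs over HTTP -->
<flow name="order-process-api-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/orders/{orderId}"/>
  <!-- Call the orders system API to fetch the raw order record -->
  <http:request method="GET" config-ref="Orders_System_API"
                path="/orders/#[attributes.uriParams.orderId]" target="order"/>
  <!-- Call the customers system API to fetch the related customer -->
  <http:request method="GET" config-ref="Customers_System_API"
                path="/customers/#[vars.order.customerId]" target="customer"/>
  <!-- Compose a process-level response for experience APIs to consume -->
  <ee:transform>
    <ee:message>
      <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{ order: vars.order, customer: vars.customer }]]></ee:set-payload>
    </ee:message>
  </ee:transform>
</flow>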

Role of MuleSoft in API-led connectivity

The Anypoint Platform provides a very nice integration framework as well as an API
management platform. Using the Anypoint Platform as a whole, one can achieve API-led
connectivity in a smoother way.
In this article, I have given a brief overview of API-led connectivity. In the next article, I
will try to show how to achieve this with some examples using the Anypoint Platform.
Why do we need to follow that architecture?

Feasibility: to integrate with other systems, we need a common architecture in place.

I have 2 GB of data for a transaction; how will you implement it? If you implement batch, why are you using
that instead of something else? And if I have 1000 GB of data, in that case how do batch jobs load the data into the Process phase?

Batch processing.

Why are you using multiple batch steps?

Batch Commit, and a queue is assigned for failures.

What is the difference between For Each and Batch Commit?

For Each iterates records one by one.

Batch Commit is used to commit, insert, or update records into the DB in groups.

How did you implement transactions in your project?

Transactions in Mule:

Transactions are operations in a Mule app for which the result cannot remain indeterminate.
When a series of steps in a flow must succeed or fail as one unit, Mule uses a transaction to
demarcate that unit.
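
A minimal Mule 4 sketch of demarcating such a unit with the Try scope, assuming a generic Database config (the SQL, fields, and config names are illustrative):

<flow name="transfer-funds-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/transfer"/>
  <!-- Both DB operations succeed or fail together inside one local transaction -->
  <try transactionalAction="ALWAYS_BEGIN" transactionType="LOCAL">
    <db:update config-ref="Database_Config">
      <db:sql>UPDATE accounts SET balance = balance - :amount WHERE id = :fromId</db:sql>
      <db:input-parameters><![CDATA[#[{amount: payload.amount, fromId: payload.fromId}]]]></db:input-parameters>
    </db:update>
    <db:update config-ref="Database_Config">
      <db:sql>UPDATE accounts SET balance = balance + :amount WHERE id = :toId</db:sql>
      <db:input-parameters><![CDATA[#[{amount: payload.amount, toId: payload.toId}]]]></db:input-parameters>
    </db:update>
    <error-handler>
      <!-- Propagating the error rolls the transaction back -->
      <on-error-propagate/>
    </error-handler>
  </try>
</flow>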

If one transaction fails, how do you troubleshoot it?

The failed message is assigned to a queue; we go to the platform and find the issue in the console logs.

I initiated two transactions at a time, one for an update and one for checking the balance; how do you
manage this scenario?

We use the Transactional scope to handle this scenario.

Did you handle this yourself, or is there a Mule-provided palette component for it?

The Transactional scope.

Batch processing is asynchronous. If I don't want asynchronous processing, how will you process the transactions?

Multiple batch processes can be used; for a single transaction, a batch process is not needed.

Second round in CTS:

Explain the project architecture of your last two projects.


Kony is a good company; why are you leaving?

Career growth.

What are synchronous and asynchronous processing?

Synchronous uses a single thread; asynchronous uses multiple threads for better performance, as in batch processing.

What is the threading mechanism in Java?

Multitasking and multithreading; we use multithreading in Java.

How many APIs have you created so far?

Created 4 APIs; consumed multiple APIs.

Any challenging tasks you have faced so far, for example with what you implemented in RAML?

I used fragments and traits, which had a lot of issues; the schemas also had some problems.

API design and implementation end to end?

APIkit console >> resources created >> GET, PUT, POST methods created and implemented for all the data.

What is your team size?

In the project, what is your role?

Software developer.

Where does the integration developer come in?

After development, we build the work with Jenkins; after the build, we hand it over to the testing
teams for testing.

How many APIs have you created till now?

Created 4 APIs; consumed multiple APIs.

What interfaces have you worked on?

APIs and all the transactions.

For transactions, what approach did you use?

Batch jobs were used.

What methodologies were used for project development?

Agile methodology.
What are the integration patterns you used?

Transactions, messaging, etc.

What about Mule patterns?

Enterprise Integration Patterns Using Mule

Enterprise Integration Patterns are accepted solutions to recurring problems within a given
context. The patterns provide a framework for designing and building messaging and integration
systems, as well as a common language for teams to use when architecting solutions.
Mule supports most of the patterns shown in the Enterprise Integration Patterns book written by
Gregor Hohpe and Bobby Woolf.
Mule reduces the effort required when building integrations by implementing the patterns that you
use to design solutions. You can then simply configure and use these same patterns in Mule.

Mapping Enterprise Integration Patterns into Mule Objects

Review the following list of Enterprise Integration Patterns that can be mapped directly to Mule
objects (Pattern -> Mapping to a Mule Object):

Integration Styles

File Transfer -> File Connector.
Shared Database -> Database Connector.
Remote Procedure Invocation -> Mule APIs are meant to work like this procedure, or even doing requests to external APIs.
Messaging -> Mule is all about messaging.

Messaging Systems

Message Channel -> Mule provides a message channel that connects the message processors in a flow.
Pipes and Filters -> A flow implements a pipes-and-filters architecture.
Message Router -> Message Routers.
Message Translator -> Message Transformer.
Message Endpoint -> Message Sources and Operations.

Messaging Channels

Point-to-Point Channel -> The default channel within a flow.
Message Bus -> Mule is a message bus.
Guaranteed Delivery -> Using Reliability Patterns.

Message Construction

Event Message -> Mule transmits events from different applications or processors.
Request Reply -> Mule uses connectors that facilitate request-reply style operations, or uses Reliability Patterns.

Message Routing

Content-Based Router -> Choice Router.
Message Filter -> Validation Module.
Dynamic Routing -> Message Routers.
Scatter Gather -> Scatter-Gather Router.
Splitter -> For Each Scope, Parallel For Each, and Batch.
Aggregator -> Aggregator Module.

Message Transformation

Content Enricher -> Target Variables.

Messaging Endpoints

Polling Consumer -> Message Sources.
Transactional Client -> Transaction Management.
Idempotent Receiver -> Redelivery Policy.

Extra Questions: Just for a glance

What is a REST web service?


A: REST stands for Representational State Transfer; a service built in this style is a RESTful web service.
REST is a client-server architecture, which means each unique URL is a representation of
some object or resource. Any REST API developed uses HTTP methods explicitly
and in a way that's consistent with the protocol definition. This basic REST design
principle establishes a one-to-one mapping between create, read, update, and
delete (CRUD) operations and HTTP methods. According to this mapping:
• To create a resource on the server, use POST.
• To retrieve a resource, use GET.
• To change the state of a resource or to update it, use PUT.
• To remove or delete a resource, use DELETE.
Example:
If we want to create a REST service that fetches the record of a customer, then our
URI will be:
– GET http://tutorialsAtoZ.com/customer/{customerID}
where the HTTP method is GET, the resource is customer, and the URI parameter is
customerID, which indicates the resource for which we want to fetch the records.
13: What are the advantages of RESTful web services?
A: Advantages of RESTful Web Services:
Fast: RESTful web services are fast because there is no strict specification like
SOAP. They consume less bandwidth and fewer resources.
Language and platform independent: RESTful web services can be written in any
programming language and executed on any platform.
Can use SOAP: RESTful web services can use SOAP web services as the
implementation.
Permits different data formats: RESTful web services permit different data formats
such as plain text, HTML, XML, and JSON.
14: What is a SOAP Web Service?
A: SOAP stands for Simple Object Access Protocol. It is an XML-based protocol for
accessing web services. SOAP is a W3C recommendation for communication
between two applications.
SOAP is platform independent and language independent. By using SOAP, you will
be able to interact with applications written in other programming languages.
15. How can we create and consume a SOAP service in Mule?
A: Creating a SOAP service – We can create a SOAP service the same way we create a
Mule project with RAML; the only change is that instead of a RAML we need to import
the concerned WSDL.
Consuming a SOAP service – We can use the Web Service Consumer or CXF
component in our Mule flow to access/consume a SOAP service.
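
A rough Mule 4 sketch of consuming a SOAP service with the Web Service Consumer (the WSDL location, service, port, and operation names are made up):

<wsc:config name="Web_Service_Consumer_Config">
  <wsc:connection wsdlLocation="http://example.com/orders?wsdl"
                  service="OrderService" port="OrderPort"
                  address="http://example.com/orders"/>
</wsc:config>

<flow name="consume-order-service-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/order"/>
  <!-- The payload should already be the SOAP body expected by the operation -->
  <wsc:consume config-ref="Web_Service_Consumer_Config" operation="GetOrder"/>
</flow>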
16: What are the advantages and disadvantages of SOAP web services?
A: Advantages of SOAP Web Services:
WS Security: SOAP defines its own security known as WS Security.
Language and platform independent: SOAP web services can be written in any
programming language and executed on any platform.
Disadvantages of SOAP Web Services:
Slow: SOAP uses the XML format, which must be parsed to be read. It defines many
standards that must be followed while developing SOAP applications, so it is
slow and consumes more bandwidth and resources.
WSDL dependent: SOAP uses WSDL and doesn't have any other mechanism to
discover the service.
17: What is the difference between SOAP and REST?
A: SOAP -> REST

1. SOAP is a protocol. -> REST is an architectural style.
2. SOAP stands for Simple Object Access Protocol. -> REST stands for REpresentational State Transfer.
3. SOAP can't use REST because it is a protocol. -> REST can use SOAP web services because it is a concept and can use any protocol like HTTP or SOAP.
4. SOAP uses service interfaces to expose the business logic. -> REST uses URIs to expose business logic.
5. SOAP defines standards to be strictly followed. -> REST does not define as many standards as SOAP.
6. SOAP requires more bandwidth and resources than REST. -> REST requires less bandwidth and fewer resources than SOAP.
7. SOAP defines its own security. -> RESTful web services inherit security measures from the underlying transport.
8. SOAP permits XML data format only. -> REST permits different data formats such as plain text, HTML, XML, JSON, etc.
9. SOAP is less preferred than REST. -> REST is more preferred than SOAP.
18. What are various types of Exception Handling?
A: 1. Choice Exception Handling.
2. Catch Exception Handling.
3. Rollback Exception Handling.
4. Global Exception Handling.
5. Default Exception Handling.
19. What are the different types of variables in Mule ESB?
A: The different types of variables in Mule ESB are:
Flow Variable
Session Variable
Record Variable
20. What are the Flow Processing Strategies?
⮚ Synchronous Flow Processing Strategy
⮚ Queued Flow Processing Strategy
⮚ Asynchronous Flow Processing Strategy
⮚ Thread Per Processor Processing Strategy
⮚ Queued Asynchronous Flow Processing Strategy
⮚ Non-blocking Flow Processing Strategy
⮚ Custom Processing Strategy

21. What is caching and why do we use it?


A: Caching is a concept which is used to store frequently used data in memory, the file
system, or a database, which saves processing time and load compared to accessing the data
from the original source location every time.

22: What is Mule Cache Scope and what are its storage types?
A: Caching in Mule ESB can be done with the Mule Cache Scope. The Mule Cache Scope has
3 storage types:
In-memory: This stores the data inside system memory. The data stored with In-memory
is non-persistent, which means in case of an API restart or crash, the data being
cached will be lost.
Configuration Properties:
Store Name
Maximum number of entries
TTL (Time to live)
Expiration Interval
Managed-store: This stores the data in a place defined by ListableObjectStore. The
data stored with Managed-store is persistent, which means in case of an API restart or
crash, the data being cached will not be lost.
Configuration Properties:
Store Name
Maximum number of entries
TTL (Time to live)
Expiration Interval
Persistence (true/false)
Simple-text-file-store: This stores the data in a file. The data stored with the Simple-text-
file-store configuration is persistent, which means in case of an API restart or crash, the
data being cached will not be lost.
Configuration Properties:
● Store Name
● Maximum number of entries
● TTL (Time to live)
● Expiration Interval
● Persistence (true/false)
● Name and location of the file
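
A rough Mule 3-style sketch tying these properties together (element and attribute names may vary slightly by version; the store name and flow are made up):

<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
  <!-- Managed store: persistent, so cached entries survive a restart -->
  <managed-store storeName="customerCache" maxEntries="500"
                 entryTTL="60000" expirationInterval="30000" persistent="true"/>
</ee:object-store-caching-strategy>

<flow name="get-customer-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/customers"/>
  <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
    <!-- Expensive lookup whose response is cached for repeated requests -->
    <flow-ref name="lookup-customer-from-backend"/>
  </ee:cache>
</flow>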

23. What is a Web service API?


A: An API (Application Programming Interface) is the means by which third parties
can write code that interfaces with other code. A Web Service is a type of API, one
that almost always operates over HTTP (though some, like SOAP, can use alternate
transports, like SMTP).

24. What is RAML?


A: RAML (RESTful API Modeling Language) is a YAML-based language for
describing RESTful APIs. It provides all the information necessary to describe
RESTful, or practically RESTful, APIs.
RAML is similar to WSDL; it contains the endpoint URL, request/response schema,
HTTP methods, and query and URI parameters.

25. Why we use RAML?


A: RAML helps the client know what the service is and how all the operations can be
invoked. RAML helps the developer in creating the initial structure of the API. RAML
can also be used for documentation purposes.

26. What are the different types of flows?


A: Sub Flow – A sub-flow is always synchronous. It executes in the same thread as the
calling process. The calling process triggers the sub-flow, waits for it to complete, and
resumes once the sub-flow has completed.
Synchronous Flow – Same as a sub-flow, the only difference being that a synchronous
flow needs its own separately defined exception strategy; it does not inherit the
exception strategy of its calling flow.
Asynchronous Flow – With a sub-flow or synchronous flow, the calling process triggers
the flow and waits for it to complete; with an asynchronous flow, the calling process
triggers the flow and moves ahead to its next activity. An asynchronous flow executes
in parallel to its calling/parent flow in a different thread. An asynchronous flow does
not return its output to its parent/calling flow.
Private Flow – A flow that does not have an inbound connector as its source. This means a
private flow cannot start on its own on receiving an inbound message, as it does not
have any inbound connector. A private flow can only be called using a flow-ref, the same
as a sub-flow.
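
A minimal Mule 4-style sketch contrasting these flow types (names and processors are illustrative):

<flow name="main-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
  <!-- Sub-flow: same thread, inherits the caller's error handling -->
  <flow-ref name="common-validation-subflow"/>
  <!-- Async scope: the wrapped processors run on another thread; the caller does not wait -->
  <async>
    <flow-ref name="audit-private-flow"/>
  </async>
  <logger level="INFO" message="Main flow continues without waiting for the async branch"/>
</flow>

<sub-flow name="common-validation-subflow">
  <logger level="INFO" message="Reusable validation logic"/>
</sub-flow>

<!-- Private flow: no source, so it can only be invoked via flow-ref -->
<flow name="audit-private-flow">
  <logger level="INFO" message="#[payload]"/>
</flow>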
************************
27. What are inbound and Outbound properties?
A: Inbound properties are immutable, are automatically generated by the message
source and cannot be set or manipulated by the user. They contain metadata
specific to the message source. A message retains its inbound properties only for
the duration of the flow; when a message passes out of a flow, its inbound properties
do not follow it.
Outbound properties are mutable; can be set/modified in a flow and can become
inbound properties when the message passes from the outbound endpoint of one
flow to the inbound endpoint of a different flow via a transport. They contain
metadata similar to that of an inbound property, but an outbound property is applied
after the message enters the flow. Note that if the message is passed to a new flow
via a flow-ref rather than a connector, the outbound properties remain outbound
properties rather than being converted to inbound properties.

28. What is the difference between SOAPkit Router and APIkit Router, and
when to use which?
A: SOAPkit Router is used when a SOAP service is to be developed via a WSDL.
SOAPkit Router will read the configured WSDL, validate the incoming request message
(if configured), and send it to the appropriate flow for processing based on
the operations configured in the WSDL.
APIkit Router is used when a REST-based service is created/developed via a
RAML. APIkit Router will read the configured RAML, validate the incoming request message
(if configured), and send it to the appropriate flow for processing based on
the HTTP methods and resources in the RAML.
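
A rough APIkit sketch for the REST case (the RAML file name and resources are hypothetical; in some APIkit versions the config attribute is raml rather than api):

<apikit:config name="orders-api-config" api="orders.raml"/>

<flow name="orders-api-main">
  <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
  <!-- Routes each request to the handler flow that matches the RAML resource and method -->
  <apikit:router config-ref="orders-api-config"/>
</flow>

<!-- Handler flow generated for GET /orders in the RAML -->
<flow name="get:\orders:orders-api-config">
  <set-payload value='#[output application/json --- [{id: 1}]]'/>
</flow>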
29. What is the use of the Scatter-Gather flow control?
A: Parallel activities/flows can be executed in Mule by using the Scatter-Gather flow
control component.
The routing message processor Scatter-Gather sends a request message to multiple
targets concurrently. It collects the responses from all routes and aggregates them
into a single message.
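
A minimal Mule 4 sketch (the request URLs are placeholders):

<scatter-gather doc:name="Scatter-Gather">
  <route>
    <http:request method="GET" url="http://inventory.example.com/stock"/>
  </route>
  <route>
    <http:request method="GET" url="http://pricing.example.com/price"/>
  </route>
</scatter-gather>
<!-- The payload is now a map of route index to each route's result -->
<logger level="INFO" message="#[payload]"/>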
30. How can you implement and use First Successful?
A: The First Successful router iterates through its list of child routes, each containing
a set of components, until the first successful execution. If none succeed, an
exception is thrown.
31. What is the use of Round Robin flow control?
A: The Round Robin router iterates through a list of two or more routes in order, but
it only routes to one of the routes each time it is executed. It keeps track of the
previously selected route and never selects the same route consecutively. For
example, the first time Round Robin executes, it selects the first route. The next
time, it selects the second route. If the previously selected route is the last route in
the list, Round Robin jumps to the first route.
32. What is the use of the Until Successful flow control?
A: The until-successful scope processes messages through the processors within its
scope until the operation succeeds. Until-successful’s processing occurs
asynchronously from the main flow. After passing a message into the until-
successful scope, the main flow immediately regains control of the thread.
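
Minimal Mule 4-style sketches of these three routers (endpoints and retry values are illustrative; note that the exact threading behavior of Until Successful differs between Mule 3 and Mule 4):

<first-successful>
  <route>
    <http:request method="GET" url="http://primary.example.com/data"/>
  </route>
  <route>
    <http:request method="GET" url="http://backup.example.com/data"/>
  </route>
</first-successful>

<round-robin>
  <route>
    <http:request method="GET" url="http://node1.example.com/data"/>
  </route>
  <route>
    <http:request method="GET" url="http://node2.example.com/data"/>
  </route>
</round-robin>

<!-- Retries the wrapped processors up to 5 times, waiting 1 second between attempts -->
<until-successful maxRetries="5" millisBetweenRetries="1000">
  <http:request method="POST" url="http://flaky.example.com/submit"/>
</until-successful>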

2. Flow in Mule:
Main Flow is a message processing block that has its own processing strategy and
exception handling strategy. It has data processing, connecting applications, event
processing, etc.

Sub Flow:

1. Sub flow always processes messages synchronously (relative to the flow that
triggered its execution).
2. Sub flow executes in the same thread of the calling process. Calling process
triggers the sub-flow and waits for it to complete and resumes once the sub-
flow has completed.
3. Sub flow inherits processing strategy and exception handling strategy from
the parent/calling flow.
4. It can be used to split common logic and be reused by other flows.

A subflow can be called by the flow-reference element of Mule. When the
main (calling) flow calls the subflow using the flow-reference element, it
passes the whole message structure (message properties, payload,
attachments, etc.) and the context (session, transaction, etc.). In the same way,
when the processing of the message is done in the subflow, the complete
message and context are returned to the main calling flow. In other words,
everything in the subflow behaves as if it were in the same flow. It's important
to note that the message is executed in the same thread.

A private flow does not have a source defined. It can be synchronous or asynchronous
based on the processing strategy selected. Also, private flows have their own exception
handling strategy and allow you to define a different threading profile.

Private flows are different from subflows in terms of Threading and Exception
Handling. They have their own Exception Handling. The exception will be trapped by
the local Exception Handling and it will not be propagated to the main flow. It means
that if an exception occurs in a Private flow and it is handled properly when the call
goes back to the main calling flow, the next message processors continue
executing.

Private flows also receive the same message structure from the main calling flow.
But Private flows create a new execution context.

Note: VM endpoints enable message redelivery strategies to be configured in your
exception handling blocks, which is not possible with flow-refs. VMs are able to
do this because they internally use a queue to hold messages, while flow-refs are
similar to simple method calls.

Contrast that to a private flow and chaining with flow-refs—if an exception occurs in
the called flow, even though we have a rollback strategy configured, it will NOT be
executed because there is no internal queue involved.

Use a flow-ref by default, but don't hesitate to use VM transports if you need
redelivery of messages.
3. What is the difference between Flow Reference and VM?

Flow Ref routes the Mule event to another flow or sub-flow and back within a same
Mule application. This lets you treat the entire referenced flow like a single
component in your current flow.
Flow Ref breaks up the Mule app into discrete and potentially reusable units. For
example, a flow that lists files on a regular basis might reference another flow that
processes the output of the List operation. Instead of appending all the processing
steps, you can append a Flow Ref that points to the processing flow.
VM connector creates a transport barrier in the flow: on a transport barrier, your
Mule message goes through an entire serialization and deserialization process,
which results in a new Mule message with the same payload.
VM endpoints enable message redelivery strategies to be configured in your
exception handling blocks, which is not possible with flow-refs. VMs are able to
do this because they internally use a queue to hold messages while flow-refs are
similar to simple method calls.

4. What is the difference between using private flow versus VM transport?


One case would be that VM endpoints enable message redelivery strategies to be
configured in your exception handling blocks – this is not possible with flow-refs.
VMs can do this because they internally use a queue to hold messages while flow-
refs are similar to simple method calls.
Look at the sample flow below; here the message will be redelivered five times, which
is enabled by the use of a VM inbound endpoint.
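
Since the original sample flow image is not reproduced here, a rough Mule 3-style sketch of the idea (queue names and processors are made up):

<flow name="dispatch-flow">
  <!-- Hand the message off through a VM queue instead of a flow-ref -->
  <vm:outbound-endpoint path="processQueue" exchange-pattern="one-way"/>
</flow>

<flow name="processing-flow">
  <vm:inbound-endpoint path="processQueue" exchange-pattern="one-way"/>
  <!-- Processors that may throw an exception go here -->
  <rollback-exception-strategy maxRedeliveryAttempts="5">
    <on-redelivery-attempts-exceeded>
      <logger level="ERROR" message="Redelivery attempts exhausted"/>
    </on-redelivery-attempts-exceeded>
  </rollback-exception-strategy>
</flow>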

10. What are the new changes in Mule 4?

Lightweight – design, build, and deploy in weeks, not years!


Powerful – handles your most complex system to system data needs.
Scalable – grows to meet your needs, thousands of transactions per second.
Embeddable – speeds your own application development time.
Built with modern languages and systems architecture – stay on the leading
edge and keep your technology relevant.
Open-source community maintained software – features added, problems
discovered, and bugs resolved with much more regularity by thousands of
contributors, worldwide.
Enterprise features – take advantage of management tools, performance
monitoring, and high-availability features to keep your systems running smoothly.
Vendor support from MuleSoft – service level agreements that let you go right to
the source for issues.

11. Why we are using Agile methodology in our projects?


The main attraction to Agile for me is that studies show Agile projects
succeed three times as often as waterfall projects, in terms of budget, schedule, and
feature delivery.
Managing scope, budget, and time is challenging; ask any project manager. But
using an Agile framework helps projects deliver on time, on budget, and with
the priority features the customer values.
Other reasons we cite for using Agile include:

1. Manage change:
90% of respondents reported that implementing Agile improved their ability to
manage changing requirements.

2. Communicate better:
A key success metric for projects is communication. Agile fosters self-organizing,
self-managing teams that are in daily communication with one another, solving
complex issues.

3. Get in sync with the business:

Agile overcomes the issue, reported by 78% of respondents, that the business is
usually or always out of sync with project requirements (Geneca 2010-2011 –
http://calleam.com/WTPF/?page_id=1445).

4. Value and Quality:

Agile's focus is on delivering value to the customer, productivity by the team,
and quality software products, supported by an inspect-and-adapt mentality for constant
improvement.

The terms PATCH, PUT, and POST are often confused with each other. Many
resources promote the concept of CRUD (create, read, update, delete) applications
and tie HTTP verbs into a single part of the process. The reality is far more complex
than a short acronym, especially as you run into overlapping functionality and other
complications.
The differences between them are subtle but can make significant changes in the
way you create and update resources. It's time to shed some light on PATCH, PUT,
and POST, and when you should use each.
12. PUT vs POST vs PATCH?
POST
You use POST to create a resource and instruct the server to make a Uniform
Resource Identifier (URI) for it. For example, when you want to create a new article
you would POST to /articles to make the file and get the URI, so you end up
with /articles/1234/.
PUT
PUT also creates resources, but it does so for a known URI. So, you can PUT
to /articles/1234/. If the article doesn't exist, PUT creates it. If it does exist, this HTTP
verb updates it. While PUT seems nearly identical to POST, the difference between
the two comes down to idempotence.
Idempotence is a property whereby an operation produces identical side effects whether it
is performed once or many times. PUT has this characteristic, while POST creates a new
resource on every call. In general, POST works best for resource creation, while PUT handles
updates.
PATCH
So, where does PATCH come into the picture? This HTTP verb partially updates
resources without sending the complete representation of it. When you're working
with complicated objects and resources, it's a big deal to update more than you need
to.
With this example...
{ "first_name": "Claude", "last_name": "Elements", "email": "claude@cloud-elements.com", "favorite_color": "blue" }
...PATCH allows you to change the color with the JSON: { "favorite_color": "purple" }.

13. MUnit?
MUnit is a Mule application testing framework which allows you to build automated
tests for your Mule integrations and APIs. MUnit is very well integrated with Anypoint
Studio.
Various features available with Mule MUnit:
● Create and build Mule tests by writing Mule code.
● Create and build Mule tests by writing Java code.
● Verify Message Processor calls.
● Mock Message Processor.
● Mock outbound endpoints.
● Disable outbound endpoints.
● Disable flow inbound connectors.
● Disable inbound endpoints.
● Debug the tests.
● View coverage reports.
● Various asserts available like Assert Equals, Assert Not Equals, Assert
Payload, Assert False, Assert True, Assert Null Payload, Assert Not Null
Payload.
● Spy Message Processor.
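
A small MUnit 2-style sketch of a test (the flow, mocked processor, and values are hypothetical):

<munit:test name="get-customer-flow-test" description="Mocks the outbound HTTP call and asserts the payload">
  <munit:behavior>
    <!-- Mock the HTTP request so the test never calls the real endpoint -->
    <munit-tools:mock-when processor="http:request">
      <munit-tools:then-return>
        <munit-tools:payload value="#[{id: 1}]"/>
      </munit-tools:then-return>
    </munit-tools:mock-when>
  </munit:behavior>
  <munit:execution>
    <flow-ref name="get-customer-flow"/>
  </munit:execution>
  <munit:validation>
    <munit-tools:assert-that expression="#[payload.id]" is="#[MunitTools::equalTo(1)]"/>
  </munit:validation>
</munit:test>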

Mule 3 Vs Mule 4:

The message structure of Mule 4 is completely changed now. Mule 4 has simplified Mule Event and
Mule Message to make it easier to work with properties and variables. There are no inbound or
outbound properties in Mule 4.

MuleSoft introduced the attributes. Also, Mule 4 came up with the variable concepts; it can
simultaneously deal with all types of variables instead of dealing with the different types of variables
like session, flow, and record variables individually in Mule 3.

In Mule 3, a flow starts processing a message when a request is received by Mule. The Mule client
retrieves an external event, which is triggered by some external system, such as a message received
on a queue or a file copied to a directory. The event is then converted into a Mule message, and the
flow starts processing it when it reaches an inbound endpoint in the flow.

In Mule 4, a flow is triggered when an event reaches the event source of the flow.
In Mule 4, each component interacts with the Mule Event, and the Mule Event travels sequentially
through the components of the flow.
Mule 3 message structure:

Mule Message: A Mule Message is the data that is processed throughout an application via one or more
flows.

Messages contain mainly two parts:


1. Header: contains metadata about the message. Metadata consists of properties that provide useful
information about the message. Properties contain inbound (automatically created by message
source, we cannot modify) and outbound (set by user in a flow or automatically created by Mule)
properties.
2. Payload: contains actual business data.

Variables: Variables are used to store per-event values for use within a flow of a Mule app.
We have three types of variables:

1. Flow Variables: To store data within the flow.


2. Session Variables: To store data throughout the application.
3. Record Variables: To store data within batch processing.
Mule 4 Message Structure :
Mule Message:

In Mule 4, the Event and Message have been simplified, making it easier to learn and develop applications.

Mule Event:

Mule event contains “Message” and “Variables.”

Message:

Mule Message contains payload and attributes.

Payload:
the body of the mule message, you can access payload using the keyword, “payload,” and fields by
using Data Weave scripting language like ‘payload. <fieldname>’. Data Weave expression language is
default scripting language in Mule 4 as MEL is removed to in Mule 3 avoid the stress of converting
into Java objects.
Accessing payload data:
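
For example (the field names are hypothetical), the payload and its fields can be referenced from any DataWeave expression:

<logger level="INFO" message="#[payload]"/>
<logger level="INFO" message="#[payload.customerId]"/>
<set-payload value="#[output application/json --- { fullName: payload.firstName ++ ' ' ++ payload.lastName }]"/>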

Attributes:

Attributes contain message metadata that can consist of headers and properties received or
returned by a connector, as well as other metadata that is populated by the connector. The
metadata available to the message varies depending on the connector. We can access attributes by
using the keyword "attributes". Letters, underscores, and numbers are the valid characters for
attribute names.
When any Source or Operation produces a new Message because of its execution, both parts of the
Message (the payload and attributes) are replaced with the new values, and the previous attributes
are lost. If you need to preserve any information from the attributes across operation invocations,
you can either store the previous attributes in a variable or set the target parameter in the invoked
operation.

Accessing attributes:
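
For example, with an HTTP Listener source the attributes expose the request metadata (customerId below is a hypothetical query parameter):

<logger level="INFO" message="#[attributes.method ++ ' ' ++ attributes.requestPath]"/>
<logger level="INFO" message="#[attributes.queryParams.customerId]"/>
<logger level="INFO" message="#[attributes.headers.'content-type']"/>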
Variables:

Variables hold arbitrary user information such as operation results, auxiliary values, and so on.
The stored data can be any supported data type like strings, objects, and numbers.
It can also store current messages using the “message” keyword and also store current message
payload using the “payload” keyword and current message attributes using the “attributes”
keyword.

Variables can be accessed using the keyword "vars".

Ways to create variables:

1. You can create variables using Set Variable component.


2. Using target variable from within an operation to the connector.
3. Using Data Weave.
4. Using scripting component

Delete/remove variables:

Using the “remove variable” component.

Session variables are removed as transport barriers no longer exist in Mule 4.

Record variables are removed; batch processing behaves like a scope in Mule 4.

So, instead, we can use variables. Variables that are created in the process phase are automatically
tied with the processing record and stored throughout the processing phase.

Variables created during the processing phase will exist during the processing phase only and
variables created before or after batch processing will exist throughout the processing phase and on
completion of the phase.
Accessing Variables:
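
A small sketch of setting, reading, and removing a variable, and of using a target parameter (the names and URL are illustrative):

<set-variable variableName="customerId" value="#[payload.id]"/>
<logger level="INFO" message="#[vars.customerId]"/>
<!-- Store an operation result directly into a variable via the target parameter -->
<http:request method="GET" url="http://example.com/orders" target="orders"/>
<remove-variable variableName="customerId"/>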

Comparison between mule 3 structure and mule 4 structure


Message Collections:

One of the major advantages of the Mule 4 Message structure is dealing with multiple payloads.

In Mule 3, components that returned multiple payloads used a special structure called the Mule
Message Collection.

In Mule 4, any component that needs to deal with multiple messages can simply set the payload of
the message to a List of Mule Messages.
You can then iterate over these messages using DataWeave, For Each, or other components.

In the Mule Batch series, we looked at the batch processing capabilities of Mule ESB 3.
At the time of writing this post, Mule 4 was already available as a Release Candidate
version.

Mule 4 offers huge improvements and changes to Mule 3, such as the introduction of
DataWeave 2.0, Reusable Streaming, improved processing strategies, operation based
connectors, and much more. Mule 4 also has some changes to the way batch jobs were
implemented in Mule 3. In this post, we will look at batch processing in Mule 4.

If you have missed the batch processing capabilities in Mule 3, then I recommend
reading the below articles from the earlier series -

Part 1: Mule Batch Processing - Introduction


Part 2: Mule Batch Processing - MUnit Testing

Part 3: Mule Batch Processing - MUnit Testing (The Conclusion)

Refreshing Memory - Batch in Mule 3.x


Batch processing in Mule 3 is divided into four phases - Input, Load and Dispatch, Process, and
On Complete. A special scoped variable set, called Record Variables, is used to store any
variables at the record level. These are different from flow variables, which, in Mule 3, could not
be used during record processing.

Figure 1.A: Mule Batch Phases [Source: Mule Docs].

Input Phase: This is an optional part of the batch job that can be used to retrieve the source
data using any inbound connector.

Load and Dispatch: This is an implicit phase and Mule runtime takes care of it. In this phase,
the payload generated in the Input phase or provided to the Batch from the Caller flow is turned
into a collection of records.

Process: This is the required phase where actual processing of every record occurs
asynchronously. Records are processed through each step while they move back-and-forth
between steps and the intermediate queue.

On Complete: In this final but optional phase, a summary of batch execution is made available
to possibly generate reports or any other statistics. The payload in this phase is available as an
instance of BatchJobResult object. It holds information such as the number of records loaded,
processed, failed, succeeded. It can also provide details of exceptions occurring in the steps.

Batch Job in Mule 4


In Mule 3.x, a Batch Job was a top-level element and existed independent of flows/subflows. It
could be called inside a flow using the Batch Execute (similar to flow-ref) component.

In Mule 4, this is changed. Batch is now a Scope like for-each, transactional, and async, and it can
be added into the flow itself. No more having different elements!

This is how a batch job looks in Mule 4:
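
Roughly, a Mule 4 batch job sits inside a flow like this (the flow, job, and step names are illustrative):

<flow name="customer-batch-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/load"/>
  <batch:job jobName="customerBatchJob">
    <batch:process-records>
      <batch:step name="transform-step">
        <!-- Per-record processing -->
      </batch:step>
      <batch:step name="load-step" acceptPolicy="NO_FAILURES">
        <!-- Write successfully processed records to the target system -->
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <!-- The payload here is the BatchJobResult -->
      <logger level="INFO" message="#['Successful records: $(payload.successfulRecords)']"/>
    </batch:on-complete>
  </batch:job>
</flow>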


Let’s compare the Mule 4 Batch with Mule 3 Batch and learn about the changes.

Batch Phases
There are only 3 phases in a Mule 4 batch job, compared to 4 phases in a Mule 3 batch job.
The Input phase does not exist in Mule 4: since Batch is a scope, it does not need any
special Input phase. The payload of the flow is passed to the batch.

Load and Dispatch: This phase is still there; it is implicit and does the same thing as in Mule 3. It
will create the job instance, convert the payload into a collection of records, and then split the
collection into individual records for processing.

Process: This is still a required phase and essentially the same as it was in Mule 3. It will
process all records asynchronously. Batch steps in this phase still allow you to filter
records using acceptExpression and/or acceptPolicy configurations.

On Complete: The last and optional phase functions the same as in Mule 3. That
means you still do not get the processed results back in the calling flow; they are
available as an instance of BatchJobResult in this phase only.

Flow Variables Instead of Record Variables


Batch jobs in Mule 3 have a special variable set called Record Variables. These
variables, unlike flow variables, only existed during the Process phase.

In Mule 4, flow variables have been enhanced to work efficiently during batch
processing, just like the record variables. Flow variables created in batch steps are now
automatically tied to the processing record and stay with it throughout the processing
phase. Record variables are no longer needed.

This also removes all special treatment we had to give to record variables during MUnit
testing.
In this test batch application, the 100K objects are created with DataWeave 2.0 and then
passed to the batch job for processing.
%dw 2.0
output application/java
---
(1 to 100000) map {
    id: $,
    "Name": "Name-" ++ $
}

In batch step 1, it sets a flow variable recordId with value of id.


<set-variable variableName="recordId" value="#[payload.id]" doc:name="Set Variable"/>

Step 2 and Step 3 then have a validation component that compares the flow variable
value with payload.id. If you run this application, not a single record fails due to
validation, which means all 100K flow variables stayed with their records!
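
The validation in those steps might look roughly like this (using the Validation module; the variable name matches the snippet above):

<batch:step name="compare-step">
  <!-- Fails the record if its flow variable does not match its own payload -->
  <validation:is-true expression="#[vars.recordId == payload.id]" doc:name="Record variable stayed with record"/>
</batch:step>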

Any flow variables created during the Process phase exist during the Process phase only.
Flow variables that existed before and outside the batch scope are available throughout
the Process phase as well as in the On Complete phase.

Batch Aggregator Instead of Batch Commit


In a Mule 3 batch job, you would use Batch Commit to group records to send over
an outbound connector such as a database, Salesforce, etc. Instead of calling a database
insert for each record, you can specify the group size to perform a batch commit.

In Mule 4 Batch job, this has been replaced with the Batch Aggregator component. The
underlying behavior and functionality of Batch Aggregator is the same as Batch Commit.

A grouped collection in the batch aggregator is a mutable collection, i.e. it allows you to
modify individual records in groups or variables associated with those. You can
aggregate records to process in two ways -

1. Aggregate fixed amount of records


2. Streaming all records
The important difference regarding mutability between the two options is that streaming provides a
one-read, forward-only iterator of records, while the fixed-size option allows you to access records
randomly as well as sequentially.
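
A rough sketch of both aggregator variants inside batch steps (the inner processors are placeholders):

<batch:step name="load-step">
  <!-- Fixed-size groups: records arrive here as a (mutable) list of 100 -->
  <batch:aggregator size="100">
    <!-- e.g. a bulk insert to a database or an upsert to Salesforce -->
  </batch:aggregator>
</batch:step>

<batch:step name="stream-step">
  <!-- Streaming: all records, exposed as a forward-only iterator -->
  <batch:aggregator streaming="true">
    <!-- e.g. write all records out to a single file -->
  </batch:aggregator>
</batch:step>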

Conclusion
Mule 4 comes with lots of improvements and enhancements. Batch processing in Mule 3
was already powerful; in Mule 4 it has been simplified for developers and is thus easier to
implement.
