Performance Testing With JMeter

This document outlines the key concepts and activities involved in performance testing. It discusses identifying the test environment, acceptance criteria, planning and designing tests, configuring the test environment, implementing tests, executing tests, analyzing results and retesting if needed. Additional concepts covered include endurance testing, which tests performance over an extended period, and latency, which refers to the time to complete a request. The overall goal of performance testing is to determine if a system meets requirements for speed, scalability and stability under anticipated workloads.

Uploaded by Mayur Baghla
© All Rights Reserved

Objective

 This course gives an introduction to performance testing concepts and its implementation using the Apache JMeter tool.

After completing this course, you will be able to:


 Identify what performance testing is.
 Identify what information needs to be gathered for performance testing.
 Identify the components of JMeter.
 Apply the workflow recommended for creating a basic/advanced JMeter scenario.
 Assign scripts, run-time settings, load generators and Vusers to a JMeter scenario based on your load testing goals.
 Load test your application by running a scenario.
 Monitor hardware resources during the load run and identify performance bottlenecks.
Course Outline
• What is Performance Testing
• Functional vs Performance Testing
• What is Load/Stress/Capacity Testing
• Why load test an application
• General Bottlenecks
• JMeter – An Introduction
• Requirements & Installing JMeter
• What is a Test Plan?
• Elements of a Test Plan
• Thread Group
• Controllers
• Samplers
• Logical Controllers
• Listeners
• Timers
• Assertions
• Configuration Elements
• Pre-Processor Elements
• Post-Processor Elements
• Execution Order
• Scoping Rules
• Data Parameterization
• Regular Expressions
• Correlations
• Assertions Types
• Preprocessors Types
• User Parameters
• Post-Processors Types
• Properties and Variables
• Building a Database Test Plan
• Listeners Types
• JMeter Functions and User Variables
• Reference Variables and Functions
• The Function Helper Dialog
• Remote & Distributed Testing
• Using Distributed Testing with Load Balancers
• Building a Web Service Test Plan
• Resource Monitoring
• Load Testing a Web App
• Tips and Tricks
• Best Practices
What do major organisations want to avoid?
Performance Testing
 Performance testing is performed to determine how fast a system
performs under a particular workload.
 Its purpose is to determine speed, scalability and stability.
Functional vs. Performance Testing
Functional test
• Objective: Functionality. Example: Do business processes function properly after implementation?

Performance test
• Objective: Stability. Example: Will 2,000 concurrent hits crash the server?
• Objective: Performance. Example: Is response time acceptable according to specifications?
• Objective: Functionality under load. Example: Do business processes function properly under heavy load?
Load Testing
 This subcategory of performance testing is focused on
determining or validating performance characteristics of the
system or application under test when subjected to workloads
and load volumes anticipated during production operations.
 This is done within the expected workload.
Stress Testing
 Stress testing is focused on determining an application’s
robustness, availability, and reliability under extreme conditions
like heavy loads, high concurrency and limited computational
resources.
 These tests are designed to determine under what conditions an
application will fail, how it will fail, and what indicators can be
monitored to warn of an impending failure.
Capacity Testing
 To determine how many users and/or transactions a given system
will support and still meet performance goals.
 Capacity testing is conducted in conjunction with capacity
planning, which you use to plan for future growth, such as an
increased user base or increased volume of data. For example, to
accommodate future loads, you need to know how many
additional resources (such as processor capacity, memory usage,
disk capacity, or network bandwidth) are necessary to support
future usage levels.
 Capacity testing helps you to identify a scaling strategy in order to
determine whether you should scale up or scale out.
Speed
 User Expectations
– Experience
– Psychology
– Usage

 System Constraints
– Hardware
– Network
– Software

 Costs
– Speed can be expensive!
Scalability
 How many users…
– before it gets “slow”?
– before it stops working?
– will it sustain?
– do I expect today?
– do I expect before the next upgrade?

 How much data can it hold?


– Database capacity
– File Server capacity
– Back-up Server capacity
– Data growth rates
Stability
 What happens if…
– there are more users than we expect?
– all the users do the same thing?
– a user gets disconnected?
– there is a Denial of Service Attack?
– the web server goes down?
– we get too many orders for the same thing?
Confidence
 If you know what the performance is…
– you can assess risk.
– you can make informed decisions.
– you can plan for the future.
– you can sleep the night before go-live day.
- The peace of mind that it will work on go-live day alone justifies the cost of
performance testing.
Performance Testing
 “Performance testing is the process by which software
is tested to determine the current system performance.
This process aims to gather information about current
performance, but places no value judgments on the
findings.”
 Primarily used for…
– determining capacity of existing systems.
– creating benchmarks for future systems.
– evaluating degradation with various loads and/or configurations.
Performance Engineering
 “Performance engineering is the process by which
software is tested and tuned with the intent of realizing
the required performance. This process aims to
optimize the most important application performance
trait, user experience.”
 Primarily used for…
– new systems with pre-determined requirements.
– extending the capacity of old systems.
– “fixing” systems that are not meeting requirements/SLAs.
Performance Testing Activities

Activity 1. Identify the Test Environment.

 Identify the physical test environment and the production environment, as well as the tools and resources available to the test team.
 The physical environment includes hardware, software,
and network configurations.
 Having a thorough understanding of the entire test
environment helps you identify testing challenges early in
the project.
 In some situations, this process must be revisited
periodically throughout the project’s life cycle.

Activity 2. Identify Performance Acceptance Criteria.

 Identify the response time, throughput, and resource utilization goals and constraints.
 Response time is a user concern, throughput is a business
concern, and resource utilization is a system concern.
 Identify project success criteria that may not be captured
by those goals and constraints; for example, using
performance tests to evaluate what combination of
configuration settings will result in the most desirable
performance characteristics.

Activity 3. Plan and Design Tests.

 Identify key scenarios.
 Determine variability among representative users and how to simulate that variability.
 Define test data.
 Establish metrics to be collected.

Activity 4. Configure the Test Environment.

 Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Activity 5. Implement the Test Design.

 Develop the performance tests in accordance with the test design.

Activity 6. Execute the Test.

 Run and monitor your tests. Validate the tests, test data,
and results collection. Execute validated tests for analysis
while monitoring the test and the test environment.

Activity 7. Analyze Results, Report, and Retest.

 Consolidate and share results data.
 Analyze the data both individually and as a cross-functional team.
 Reprioritize the remaining tests and re-execute them as
needed.
 When all of the metric values are within accepted limits,
none of the set thresholds have been violated, and all of the
desired information has been collected, you have finished
testing that particular scenario on that particular
configuration.

Additional Concepts / Terms
Endurance test
 An endurance test is a type of performance test
focused on determining or validating performance
characteristics of the product under test when
subjected to workload models and load volumes
anticipated during production operations over an
extended period of time. Endurance testing is a subset
of load testing.
Latency
 Latency is the time to complete the execution of a request. Latency may also represent the sum of the latencies of several subtasks.
Performance thresholds
Maximum acceptable values for the metrics identified for
your project, in terms of
 response time,
 throughput (transactions per second),
 resource-utilization levels.
Resource-utilization levels include
 processor capacity,
 memory,
 disk I/O,
 network I/O
Scenarios
 In the context of performance testing, a scenario is a
sequence of steps in your application. A scenario can
represent a use case or a business function such as
searching a product catalog, adding an item to a
shopping cart, or placing an order.
Smoke test
 A smoke test is the initial run of a performance test to
see if your application can perform its operations
under a normal load.
Spike test
 Spike testing is done by suddenly increasing
the load generated by a very large number of users,
and observing the behavior of the system.
 Spike testing is a subset of stress testing.
Baselines
 Creating baseline is the process of running a set of tests to
capture performance metric data.
 Its purpose is to evaluate the effectiveness of performance-improving changes to the system or application.
 With respect to Web applications, you can use a baseline to
determine whether performance is improving or declining
and to find deviations across different builds and versions.
 Within the same build and version, you can also create baselines with a single user and find deviations with multiple users.
Benchmarking
 Benchmarking is the process of comparing your
system’s performance against a baseline that you have
created internally or against an industry standard
endorsed by some other organization.
 You need to play by the rules.
 Because you play by the rules, you can be transparent.
 In the case of a Web application, you would run a set of tests to compare against an industry benchmark.
Throughput
 Throughput is the number of transactions per second your application can handle; it is a measure of the capacity that a website or application can sustain.
 When presenting performance test results, throughput
performance is often expressed as transactions per second,
or TPS.
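As a back-of-the-envelope sketch (the counts and duration below are invented for illustration), TPS is simply completed transactions divided by elapsed seconds:

```python
def transactions_per_second(completed: int, elapsed_seconds: float) -> float:
    """Throughput expressed as TPS: completed transactions / elapsed time."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return completed / elapsed_seconds

# 1,200 transactions completed during a 60-second steady-state window.
print(transactions_per_second(1200, 60.0))  # 20.0
```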
Response Time
(Diagram: a Load Injector drives requests against the App Server, which queries the DB Server.)
 Response time is the time in which the system responds to a particular transaction request.
Think Time
 Think time is the time taken by users to think or to navigate to different pages in the application.
 Think times are used to simulate the human behavior that causes people to wait between interactions with a Web site.
 Think time is the time taken to select a new transaction after the response for the previous transaction has been received.

(Diagram: Transaction 1, then Think Time = 10 seconds, then Transaction 2.)
Transaction Mix
(Diagram: transactions such as Account Opening, Credit, Debit, Loans, and Query Account Info, each executed at a different frequency.)
 A transaction mix is the varying execution frequency of the different transactions in a workload.
Bottleneck
 A bottleneck is a slowdown, not a stoppage; a stoppage is a failure.
 Identifying bottlenecks is an incremental process whereby
alleviating one bottleneck can lead to the discovery of the next
one.
 Some bottlenecks can be addressed by upgrading hardware. Hardware can be upgraded either by scaling up (more CPUs, memory or cache) or by scaling out (additional servers).
General Bottlenecks in DB Application
 Bottlenecks in DB servers
 Inferior Hardware
 Low Network Bandwidth
 Large volume of data
 Too much use of Join functions
 Memory Leaks
General Bottlenecks in Web Application
 Bottlenecks in Application Server
 Inferior Hardware
 Low Network Bandwidth
 Inappropriate choice of data structure & algorithms
 Memory Leaks
Manual Testing Is Problematic
Do you have the testing resources?
• Testing personnel
• Client machines
How do you synchronize users? (“All of you, click the GO button again”)
How do you collect and analyze results?
How do you achieve test repeatability?
(Diagram: a coordinator directs a room of testers who generate load against the system under test, a web server and a database server, with the results left to manual analysis.)
The Tool Solution
Overcomes resource limitations:
• Replaces testers with “Virtual Users” (Vusers)
• Runs many Vusers on few machines
• The JMeter master machine manages the Vusers
• Meaningful results with analysis tools
• Repeats tests with scripted actions
(Diagram: a JMeter master machine drives Vuser hosts that generate load against the system under test, a web server and a database server.)
Conclusion
 Performance testing helps to identify bottlenecks in a system,
establish a baseline for future testing, support a performance tuning
effort, and determine compliance with performance goals and
requirements. Including performance testing very early in your
development life cycle tends to add significant value to the project.
 For a performance testing project to be successful, the testing must be relevant to the context of the project, which helps you to focus on the items that are truly important.
 If the performance characteristics are unacceptable, you will
typically want to shift the focus from performance testing to
performance tuning in order to make the application perform
acceptably. You will likely also focus on tuning if you want to reduce
the amount of resources being used and/or further improve system
performance.
Conclusion – cont.
 Performance, load, and stress tests are subcategories of
performance testing, each intended for a different purpose.
 Creating a baseline against which to evaluate the
effectiveness of subsequent performance-improving
changes to the system or application will generally increase
project efficiency. Though it may seem counterintuitive at
first to slow your deployment for performance test
planning and execution, the payoff in time, money, and
quality will be big and will come soon.
Pre-Testing Activities:
Find Answers to these Questions:
 What is our anticipated average number of users (normal
load) ?
 What is our anticipated peak number of users ?
 When is a good time to load-test our application (i.e. off-hours or weekends), bearing in mind that this may very well crash one or more of our servers?
 What is the testing intended to achieve?
 Is the test environment in accordance with the production
environment?
 Which metrics are we concerned about, and how do we get them?
JMeter
Features of JMeter
 100% pure Java, open-source desktop application
 Originally developed by Stefano Mazzocchi
 Designed for functional/load/performance/stress testing
 Extensible: write your own tests
 Simulates heavy load (application, server and network)
 Gives instant visual feedback
 Distributed testing
 Various protocols: HTTP, FTP, JDBC, JMS, LDAP, SOAP
 Multi-platform
 Full multithreading framework
Requirements
 Java Version:
 JMeter requires a fully compliant JVM 1.7 or higher.
 Latest JVM recommended.

 Operating Systems:
JMeter has been tested and works under:
 Unix (Solaris, Linux, etc)
 Windows (98, NT, XP, etc)
 OpenVMS Alpha 7.3+
Installation
 Download the latest production release from http://jakarta.apache.org/site/downloads/downloads_jmeter.cgi
 To install a release build, simply unzip the zip/tar file
into the directory where you want JMeter to be
installed.
 It is recommended that most users should run the
latest release.
 Do not rename the sub-folders. You can rename the parent folder.
Running JMeter
 To run JMeter, run the jmeter.bat (for Windows) or jmeter (for Unix) file present
in the bin directory. After a short pause, the JMeter GUI should appear.
Running JMeter from the Command Line
To run JMeter in non-GUI mode, use the following command options:
-n This specifies JMeter is to run in non-gui mode
-t [name of JMX file that contains the Test Plan].
-l [name of JTL file to log sample results to].
-j [name of JMeter run log file].
Example : jmeter -n -t my_test.jmx -l log.jtl -H my.proxy.server -P 8000

Using a Proxy Server
If you are testing from behind a firewall/proxy server, run the jmeter.bat/jmeter file from the command line using:
-H [proxy server hostname or ip address]
-P [proxy server port]
-N [nonproxy hosts] (e.g. *.apache.org|localhost)
-u [username for proxy authentication - if required]
-a [password for proxy authentication - if required]
Example : jmeter -H my.proxy.server -P 8000 -u username -a password -N localhost

JMeter's Classpath
JMeter automatically finds classes from jars in the
following directories:
 JMETER_HOME/lib - used for utility jars
 JMETER_HOME/lib/ext - used for JMeter components
and add-ons
Testing Process
(Flow: Create Test Plan → Create Thread Group → Create Test Script → Run Test Plan → Analyze Results)
Building a Test Plan
 A test plan describes a series of steps JMeter will execute when run.
• Test plans consist of:
1. Thread groups: organize threads of execution
2. Samplers: sends requests to a server
3. Logical controllers : control flow of test plan (loops, conditionals,
ordering, etc.)
4. Listeners: record, summarize and display response data
5. Timers: introduce delays in test plan
6. Assertions: assert facts about responses.
7. Configuration Elements: used to modify requests
8. Pre-Processor Elements: executes some action prior to a Sampler
Request being made
9. Post-Processor Elements: A Post-Processor executes some action after a
Sampler Request has been made
Elements of a Test Plan
(Diagram: a Thread Group containing Configuration Elements, Timers, Pre-Processors, Assertions, Post-Processors, Samplers, and Listeners.)
Getting Familiar with the New Tours Application

Defining the Application Performance Requirements
 Now that you are familiar with Mercury Tours, imagine that you are the
performance engineer responsible for signing off that Mercury Tours
meets the needs of your business. Your project manager has given you 4
criteria for release:

1. Mercury Tours must successfully handle 3 concurrent travel agents.
2. Mercury Tours must be able to process 3 simultaneous flight bookings with response time not exceeding 20 seconds.
3. Mercury Tours must be able to handle 3 travel agents running simultaneous itinerary checks with response time not exceeding 30 seconds.
4. Mercury Tours must be able to handle 3 agents signing in and signing out of the system with response time not exceeding 10 seconds.

Demonstration of Record and Play
and Creation of Web Test Plan
What is a Test Plan?

• A test plan describes a series of steps JMeter will execute when run.

Elements of a Test Plan

• A complete test plan will consist of one or more Thread Groups, logic controllers, sample
generating controllers, listeners, timers, assertions, and configuration elements.
Thread Group
• Thread group elements are the beginning points of any test plan.
• All controllers and samplers must be under a thread group. Other elements, e.g. Listeners, may be placed directly under the test plan, in which case they will apply to all the thread groups.
• In the Thread Group, No of Threads is the Number of users to simulate.
• Ramp-up Period: How long JMeter should take to get all the threads started. If there are 10 threads and a ramp-up time of 100 seconds, then each thread will begin 10 seconds after the previous thread started, for a total time of 100 seconds to get the test fully up to speed. Ramp-up needs to be long enough to avoid too large a workload at the start of a test, and short enough that the last threads start running before the first ones finish. Start with ramp-up = number of threads and adjust up or down as needed.
• Loop Count: Number of times to perform the test case. Alternatively, "forever" can be selected causing the
test to run until manually stopped.
• Delay Thread creation until needed: If selected, threads are created only when the appropriate proportion of
the ramp-up time has elapsed. This is most appropriate for tests with a ramp-up time that is significantly longer
than the time to execute a single thread. I.e. where earlier threads finish before later ones start.
• Scheduler: If selected, enables the scheduler
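The ramp-up arithmetic above can be sketched in Python (this helper is illustrative, not part of JMeter):

```python
def thread_start_times(num_threads: int, ramp_up_seconds: float) -> list[float]:
    """Offsets (in seconds) at which each thread starts during ramp-up.

    With N threads and a ramp-up of R seconds, thread i starts at i * R / N,
    so consecutive threads are spaced R / N seconds apart.
    """
    spacing = ramp_up_seconds / num_threads
    return [i * spacing for i in range(num_threads)]

# The slide's example: 10 threads, 100-second ramp-up.
# One new thread every 10 seconds; the last thread starts at 90 s.
print(thread_start_times(10, 100))
```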
Samplers
• Samplers tell JMeter to send requests to a server and wait for a response.
• Processed in the order they appear in the tree.
• Controllers can be used to modify the number of repetitions of a sampler.
• FTP Request
• HTTP Request
• JDBC Request
• Java object request
• LDAP Request
• SOAP/XML-RPC Request
• WebService (SOAP) Request
Logical Controllers
Logical Controllers let you customize the logic that JMeter uses to decide when to send requests; they determine the order in which Samplers are processed.

• Simple Controller
• Loop Controller
• Once Only Controller
• Interleave Controller
• Random Controller
• Random Order Controller
• Throughput Controller
• Runtime Controller
• If Controller
• While Controller
• Switch Controller
• ForEach Controller
• Module Controller
• Include Controller
• Transaction Controller
• Recording Controller
Simple Controller
• It organizes your Samplers and other Logic Controllers.
• It provides no functionality beyond grouping.
Loop Controller

• It loops the elements inside it a certain number of times.
• If the loop count in the Loop Controller is 2 and the loop count in the Thread Group is 5, then JMeter sends 10 HTTP requests.
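As a quick sanity check of the multiplication above (a sketch; the function name is made up for illustration):

```python
def total_requests(threads: int, thread_group_loops: int,
                   controller_loops: int, samplers: int = 1) -> int:
    """Samples sent = threads x Thread Group loop count x Loop Controller
    loop count x number of samplers inside the controller."""
    return threads * thread_group_loops * controller_loops * samplers

# The slide's example: 1 thread, Thread Group loop = 5, Loop Controller = 2,
# one HTTP sampler inside the controller -> 10 requests.
print(total_requests(1, 5, 2))  # 10
```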
Once Only Controller
• It tells JMeter to process the controller(s) inside it only once.
• It always executes during the first iteration of any looping parent controller.
Interleave Controller
• It will alternate among each of the other controllers for each loop iteration.

• Ignore sub-controller blocks: if checked, the Interleave Controller will treat sub-controllers like single request elements and only allow one request per controller at a time.
Random Controller
• Similar to the Interleave Controller
• Instead of going in order it picks one at random at each pass.
Random Order Controller
• It's like a Simple Controller in that it will execute each child element at most once.
• The order of execution of the nodes will be random.
If Controller
• To control whether the test elements below it (its children) are run or not.

• If this option is selected, then the condition must be an expression that evaluates to "true"; for example, ${JMeterThread.last_sample_ok}.
• Should the condition be evaluated for all children? If not checked, the condition is only evaluated on entry.

While Controller
• Runs its children until the condition is "false".
Switch Controller
• Runs the element defined by the switch value.
ForEach Controller
• Loops through the values of a set of related variables.
Recording Controller

• It’s a placeholder indicating where the proxy server should record samples.
• All recorded samples will by default be saved under the Recording Controller.
Transaction Controller
• measures the overall time taken to perform the nested test elements
Logic Controllers
• Allows customization of the logic that JMeter uses to decide when to send requests.
• Logic Controllers can change the order of requests coming from their child elements.
Timers

• By default, a JMeter thread sends requests without pausing between each request.
• It is recommended that you specify a delay by adding one of the available timers to your Thread Group.
• If not, JMeter could overwhelm your server by making too many requests in a very short amount of time.
• If you choose to add more than one timer to a Thread Group, JMeter takes the sum of the timers and pauses
for that amount of time before executing the samplers to which the timers apply.
• The timer will cause JMeter to delay a certain amount of time before each sampler which is in its scope.
• To provide a pause at a single place in a test plan, one can use the Test Action Sampler. This is similar to pacing as used in LoadRunner.

• Timers are processed before each sampler in the scope in which they are found; if there are several timers in
the same scope, all the timers will be processed before each sampler
Constant Timer
• If you want to have each thread pause for the same amount of time between requests, use this timer.
Gaussian Random Timer
• This timer pauses each thread request for a random amount of time, with most of the time intervals occurring near a particular value.
• The total delay is the sum of the Gaussian distributed value (with mean 0.0 and standard deviation 1.0) times
the deviation value you specify, and the offset value.
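The delay formula above can be sketched in Python, with random.gauss(0, 1) playing the role of the standard-normal sample; the function name and the millisecond values are illustrative:

```python
import random

def gaussian_timer_delay(deviation_ms: float, offset_ms: float) -> float:
    """Gaussian Random Timer sketch: a standard-normal sample (mean 0.0,
    standard deviation 1.0) scaled by the deviation, plus the constant offset."""
    return random.gauss(0.0, 1.0) * deviation_ms + offset_ms

# With a 100 ms deviation and a 300 ms offset, most delays cluster near 300 ms
# (roughly 68% fall between 200 ms and 400 ms).
print(gaussian_timer_delay(100.0, 300.0))
```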
Uniform Random Timer
• This timer pauses each thread request for a random amount of time, with each time interval having the same
probability of occurring. The total delay is the sum of the random value and the offset value.
Synchronizing Timer
• The purpose of the SyncTimer is to block threads until X number of threads have been blocked, and then they
are all released at once. A SyncTimer can thus create large instant loads at various points of the test plan.
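The release-all-at-once behavior is the same idea as a thread barrier. A minimal Python sketch (the arrival delays and barrier size are invented for illustration):

```python
import threading
import time

BARRIER_SIZE = 3                      # like the timer's grouping threshold
barrier = threading.Barrier(BARRIER_SIZE)
release_times = []

def virtual_user(arrival_delay: float) -> None:
    time.sleep(arrival_delay)         # users arrive at different moments...
    barrier.wait()                    # ...block until all have arrived...
    release_times.append(time.monotonic())  # ...then all proceed together

threads = [threading.Thread(target=virtual_user, args=(d,))
           for d in (0.0, 0.05, 0.1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The spread between release timestamps is tiny compared with the arrival spread.
print(f"release spread: {max(release_times) - min(release_times):.4f}s")
```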
Assertions
• Assertions allow you to assert facts about responses received from the server being tested.
• Using an assertion, you can essentially "test" that your application is returning the results you expect.
• To view the assertion results, add an Assertion Listener to the Thread Group
It is like Checkpoints in
QTP/LoadRunner
Assertions Types
• Assertions are used to perform additional checks on samplers, and are processed after every sampler in the
same scope. To ensure that an Assertion is applied only to a particular sampler, add it as a child of the
sampler.
• Response Assertion
• Duration Assertion
• Size Assertion
• XML Assertion
• Bean Shell Assertion
• MD5Hex Assertion
• HTML Assertion
• XPath Assertion
• XML Schema Assertion
• BSF Assertion
• JSR223 Assertion
• Compare Assertion
• SMIME Assertion
Response Assertion
• The response assertion control panel lets you add pattern strings to be compared against various fields of the
response.
Duration Assertion
• The Duration Assertion tests that each response was received within a given amount of time.
• Any response that takes longer than the given number of milliseconds (specified by the user) is marked as a
failed response.
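The pass/fail rule is simple enough to sketch (the response times below are invented for illustration):

```python
def duration_assertion(elapsed_ms: float, max_ms: float) -> bool:
    """Pass when the response arrived within the configured duration;
    anything slower is marked as a failed sample."""
    return elapsed_ms <= max_ms

print(duration_assertion(1800, 2000))  # True  (within the limit)
print(duration_assertion(2300, 2000))  # False (too slow, sample fails)
```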
Size Assertion
• The Size Assertion tests that each response contains the right number of bytes in it.
Listeners

• Listeners provide access to the information JMeter gathers about the test cases while JMeter runs.
• It can direct the data to a file for later use.
• Each listener provides a Configuration button, which can be used to choose which fields to save and whether to use CSV or XML format.
• Listeners can be added anywhere in the test, including directly under the test plan.
Listeners Types
• A listener is a component that shows the results of the samples. The results can be shown in a tree, tables,
graphs or simply written to a log file.
• Listeners can use a lot of memory if there are a lot of samples

(Screenshot: the Sample Result Save Configuration dialog.)
Configuration Elements
• A configuration element works closely with a Sampler.
• It does not send requests itself, but it can add to or modify requests.

• A configuration element is accessible only from inside the tree branch where you place the element.
• In the slide's example, the Cookie Manager is accessible to the HTTP requests "Web Page 1" and "Web Page 2", but not to "Web Page 3".
Pre-Processor Elements
• A Pre-Processor executes some action prior to a Sampler Request being made.
• If a Pre-Processor is attached to a Sampler element, then it will execute just prior to that sampler element
running.
• A Pre-Processor is most often used to modify the settings of a Sample Request just before it runs, or to update
variables that aren't extracted from response text.

Post-Processor Elements
• A Post-Processor executes some action after a Sampler Request has been made.
• If a Post-Processor is attached to a Sampler element, then it will execute just after that sampler element runs.
• A Post-Processor is most often used to process the response data, often to extract values from it. See the
scoping rules for more details on when Post-Processors are executed.
Execution order
1. Configuration elements
2. Pre-Processors
3. Timers
4. Sampler
5. Post-Processors (unless SampleResult is null)
6. Assertions (unless SampleResult is null)
7. Listeners (unless SampleResult is null)

• Timers, Assertions, Pre- and Post-Processors are only processed if there is a sampler to which they apply.
• Logic Controllers and Samplers are processed in the order in which they appear in the tree.
• Other test elements are processed according to the scope in which they are found, and the type of test
element
Scoping Rules
• The JMeter test tree contains elements that are both hierarchical and ordered.
• Some elements in the test tree are strictly hierarchical (Listeners, Config Elements, Post-Processors, Pre-Processors, Assertions, Timers), and some are primarily ordered (controllers, samplers).

• In this example (from the slide's tree diagram), the requests are named to reflect the order in which they will be executed.
• Timer #1 will apply to Requests Two, Three, and Four (notice how order is irrelevant for hierarchical elements).
• Assertion #1 will apply only to Request Three.
• Timer #2 will affect all the requests.
Data Parameterization
• To search for different search terms one by one in justdial, we can use Data Parameterization.
• First identify the data parameters in the samplers.
• In this sampler we will parameterize the values, e.g. mercury for username.
Data Parameterization
• Open a new Excel sheet or Notepad file.
• Enter all the values.
• Name the sheet SearchTerms and save it in CSV format.
• Go to JMeter.
• Right click on the loop controller > Add > Config Element > CSV Data Set Config
• Click on CSV Data Set Config and name it as SearchTerms Data
• In BaseLocation_DataFile CSV Data Set Config file enter the following details as shown below.
• Name : BaseLocation_DataFile
• Filename : <File path of your CSV file>
• File encoding : <leave it as blank>
• Variable Names (comma-delimited) : username,password
• Delimiter (use T for tab) :,
• Allow quoted data? : False
• Recycle on EOF? : True
• Stop thread on EOF ? : False
• Sharing mode : All threads
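With the settings above (Variable Names are supplied in the config, so the file itself has no header row), the CSV file might look like this; the credentials below are invented examples:

```csv
mercury,mercury
agent1,pass123
agent2,pass456
```

Each thread iteration reads the next line; with Recycle on EOF set to True, JMeter wraps back to the first line when the file is exhausted.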
Data Parameterization
• Now click on respective sampler.
• Select userName and password.
• Edit the value mercury and mercury and enter $<CSV Data Set Config’s Variable Names> i.e. ${username}
and ${password}
• Edit the loop as 2 under Thread Group.
• Save the script.
• Add Listeners (View Results Tree).
• Run the script. You could see the two different values in your request and response.
1) Demo: Data Parameterization
2) Exercise: Parameterize Username and Password
User Parameters
• User Variables can also be specified in the Test Plan, but these are not specific to individual threads.
• Values can be accessed in any test component in the same thread group, using the function syntax: ${variable}.
Post-Processors Types
• As the name suggests, Post-Processors are applied after samplers.
• Note that they are applied to all the samplers in the same scope, so to ensure that a post-processor is applied
only to a particular sampler, add it as a child of the sampler.
Regular Expression Extractor
• Allows the user to extract values from a server response using a Perl-type regular expression.
• As a post-processor, this element will execute after each Sample request in its scope, applying the regular expression, extracting the requested values, generating the template string, and storing the result into the given variable name.
Regular Expressions
• JMeter includes the pattern matching software Apache Jakarta ORO
• Suppose you want to match the following portion of a web-page:
• name="file" value="readme.txt">
and you want to extract readme.txt .
• A suitable regular expression would be:
• name="file" value="(.+?)">
• The special characters above are:
• ( and ) - these enclose the portion of the match string to be returned
• . - match any character
• + - one or more times
• ? - don't be greedy, i.e. stop when first match succeeds
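JMeter's extractor uses the Jakarta ORO engine, but for a Perl-style pattern like this one, Java's standard `java.util.regex` behaves the same, so the extraction can be tried out directly (the HTML snippet is the one from the slide):

```java
import java.util.regex.*;

// Apply the slide's pattern to the sample response and pull out group 1.
public class RegexExtractSketch {
    public static void main(String[] args) {
        String response = "<input name=\"file\" value=\"readme.txt\">";
        // ( ) capture the value; .+? matches any characters, non-greedily.
        Pattern p = Pattern.compile("name=\"file\" value=\"(.+?)\">");
        Matcher m = p.matcher(response);
        if (m.find()) {
            System.out.println(m.group(1)); // readme.txt
        }
    }
}
```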
Regular Expressions
• Extract multiple strings
• Suppose you want to match the following portion of a web-page:
name="file.name" value="readme.txt" and you want to extract both file.name and readme.txt .
• A suitable regular expression would be:
name="(.+?)" value="(.+?)"
This would create 2 groups, which could be used in the JMeter Regular Expression Extractor template
as $1$ and $2$.
• The JMeter Regex Extractor saves the values of the groups in additional variables.
For example, assume:
Reference Name: MYREF
Regex: name="(.+?)" value="(.+?)"
Template: $1$$2$
• This sets the variables MYREF (the template result), MYREF_g0 (the whole match), MYREF_g1 (group 1) and MYREF_g2 (group 2).
• Regular expressions use certain characters as meta characters - these characters have a special meaning to
the RE engine.
• Such characters must be escaped by preceding them with \ (backslash) in order to treat them as ordinary
characters.
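Both points above — multi-group extraction with a $1$$2$-style template, and backslash-escaping of meta characters — can be checked in plain Java regex, which for these patterns behaves like JMeter's engine:

```java
import java.util.regex.*;

public class RegexGroupsSketch {
    public static void main(String[] args) {
        // Two capture groups, as in the slide's example.
        String response = "name=\"file.name\" value=\"readme.txt\"";
        Matcher m = Pattern.compile("name=\"(.+?)\" value=\"(.+?)\"").matcher(response);
        if (m.find()) {
            // JMeter's template $1$$2$ would concatenate these two groups.
            System.out.println(m.group(1)); // file.name
            System.out.println(m.group(2)); // readme.txt
        }

        // Escaping: an unescaped . is a meta character (matches any char);
        // \. matches only a literal dot.
        System.out.println("file.name".matches("file\\.name")); // true
        System.out.println("fileXname".matches("file\\.name")); // false
        System.out.println("fileXname".matches("file.name"));   // true - . matched the X
    }
}
```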
Correlations
• Correlation is the process of capturing dynamic values in the responses, which change at every run, and passing them into the subsequent requests.
• To correlate in JMeter we need to add Regular Expression Extractor as a Post Processor.
• Consider any e-commerce or banking application. Once you log in, or during checkout, a secure session gets created for safe transactions.
• In the URL you can observe long, random and dynamic session ids.
E.g. name=userSession jsessionid=gj22T5DXE4a0rAfJmt26Bw.1337343363927.1
• In this scenario, we are going to store the jsessionid in a variable using the Regular Expression Extractor. The variable reference could be ${jsessionid}.
Correlations - How to identify dynamic value
• Record your actions and design it appropriately.
• Add View Result Tree Listener
• Execute it once
• You will likely get a few errors.
• In the View Results Tree, click on the samplers that produced errors during execution.
• Then go to the respective sampler in the tree view and navigate to the respective Parameters and Post Body as shown above.
• You will be able to check the dynamic value of osCsid.
• Now go back to Response Data in the tree view for the respective sampler (the one for which you got an error).
Correlations - How to identify dynamic value
• You can see the response to the request you sent in text format; to view it as HTML, select HTML.
• Search for osCsid.
• In Notepad, capture the unique first occurrence of the Left Boundary and Right Boundary values.
• To correlate, add a Post-Processor (Regular Expression Extractor) on the respective sampler.
Correlations
• Name the Regular Expression Extractor osCsid.
• Enter Reference Name as osCsid.
• Enter Regular Expression as
osCsid="(.+?)" />
((.+?) tells JMeter to capture any value between the left boundary and the right boundary)
• Enter Template as $1$ (this is for grouping).
• Enter Match No. as 1.
• Leave Default Value blank. (To pass a default value, enter a string here.)
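What the extractor does with these settings can be sketched in plain Java regex (this is an illustration, not JMeter's internal code; the session-id value and the `extract` helper are invented):

```java
import java.util.regex.*;

public class CorrelationSketch {
    // Apply the regex, keep match number matchNo (template $1$ = group 1),
    // and fall back to the default value when there is no such match.
    static String extract(String response, String regex, int matchNo, String defaultValue) {
        Matcher m = Pattern.compile(regex).matcher(response);
        for (int i = 0; i < matchNo; i++) {
            if (!m.find()) return defaultValue;
        }
        return m.group(1);
    }

    public static void main(String[] args) {
        // Invented response containing a dynamic session id.
        String response = "<input type=\"hidden\" osCsid=\"gj22T5DXE4a0rAfJ\" />";
        System.out.println(extract(response, "osCsid=\"(.+?)\" />", 1, ""));
        // the captured value is what later samplers reference as ${osCsid}

        // No match: the configured default value is returned instead.
        System.out.println(extract("<html>no session</html>", "osCsid=\"(.+?)\" />", 1, "NOT_FOUND"));
    }
}
```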
Correlations
• You have captured the dynamic value, which changes frequently.
• The next step is to pass this dynamic value, i.e. osCsid, in the next sampler request.
• Double click on the osCsid value.
• Delete the stale value and enter ${<Reference Name>}, i.e. ${osCsid}.
Demo & Exercise :
Create Script Using Regular
Expression Extractor
XPath Extractor
• This test element allows the user to extract value(s) from structured response - XML or (X)HTML - using
XPath query language.
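The same kind of extraction can be tried with the JDK's own XPath support. This sketch is not JMeter code — the XML document and the query are invented — but it shows what the XPath Extractor does: evaluate a query against a structured response and store the string result.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.*;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Invented example: extract a value from an XML response with an XPath query.
public class XPathExtractSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<order><item><name>book</name><price>12.50</price></item></order>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // In JMeter this string would be stored under the configured reference name.
        String price = xpath.evaluate("/order/item/price", doc);
        System.out.println(price); // 12.50
    }
}
```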
Result Status Action Handler
• This test element allows the user to stop the thread or the whole test if the relevant sampler failed.
JMeter – Distributed Testing
• Consists of a Master system (the GUI) which controls remote slave systems (running jmeter-server).
• Master
the system running the JMeter GUI, which controls the test
• Slave
the system running jmeter-server, which takes commands from the GUI and sends requests to the target system(s)
• Target
the webserver we plan to test
• The JMeter GUI is a multi-threaded Java class running Java Swing interfaces.
• It communicates with the remote injectors through the Java RMI Registry service.
• By default, remote servers listen on port 1099.
Image courtesy of jmeter.apache.org
Remote & Distributed Testing
• In the event that your JMeter client machine is unable, performance-wise, to simulate enough users to stress
your server, an option exists to control multiple, remote JMeter engines from a single JMeter GUI client.
• By running JMeter remotely, you can replicate a test across many low-end computers and thus simulate a
larger load on the server.
• One instance of the JMeter GUI client can control any number of remote JMeter instances, and collect all the
data from them. This offers the following features:
• Saving of test samples to the local machine
• Management of multiple JMeterEngines from a single machine
• No need to copy the test plan to each server - the client sends it to all the servers
• Things to check
• JMeter should be installed on all the systems.
• The firewalls on the systems are turned off.
• All the clients are on the same subnet.
• The server is in the same subnet if 192.x.x.x or 10.x.x.x IP addresses are used. If the server doesn't use a 192 or 10 IP address, there shouldn't be any problems.
• Make sure JMeter can access the server.
• Make sure you use the same version of JMeter on all the systems. Mixing versions may not work correctly.
• Using different versions of Java may work - but is best avoided.
Remote & Distributed Testing
On the master system acting as the console, open Windows Explorer and go to the jmeter/bin directory:
1. Open jmeter.properties in a text editor.
2. Edit the line "remote_hosts=127.0.0.1".
3. Add the IP addresses. For example, if I have jmeter-server running on 192.168.0.10, 11, 12, 13, and 14, the entry would look like this:
remote_hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13,192.168.0.14
To run the test:
• Run jmeter.bat on the master machine.
• Run jmeter-server.bat on the slave machines.
Exercise : Implementing Distributed
Testing
Building a Web Service Test Plan
Web Service
 A web service can be defined as an application which does not have any GUI. A web service has an engine; this engine takes input in the form of XML, processes the data, and provides output again in the form of XML.

XML (Extensible Markup Language)

 XML can be used as a data source to hold some data.
 XML can be used to communicate data between systems.
 In XML we can define our own tags; ultimately, we are defining our own protocol to transfer data between systems.
8/29/2016 Ver 1.0 109
Web Services
WSDL (Web Services Description Language)
 When we are working with a web service, we have no idea:
 what the format of the input XML that the web service takes is,
 what the format of the output XML generated by the web service is,
 what different services are exposed by the web service.
 All this information is given in an XML-formatted document called the WSDL. We use this WSDL while testing web services.
Building a Web Service Test Plan
• You will create two users that send web service requests.
• Also, you will tell the users to run their tests three times. So, the total number of requests is (2 users) x (2 requests) x (repeat 3 times) = 12 SOAP/XML-RPC requests.
• Add Thread Group as Web Service Users
• Add a SOAP/XML-RPC Request sampler. In the SOAP/XML-RPC Data field, add the request XML.
Demo: Creating Web service Plan
Exercise :
JMeter Functions and User Variables
• JMeter functions are special values that can populate fields of any Sampler or other element in a test tree. A
function call looks like this:
${__functionName(var1,var2,var3)}
• For example ${__threadNum}.
• If a function parameter contains a comma, then be sure to escape this with "\":
${__time(EEE\, d MMM yyyy)}
• Variables are referenced as follows:
${VARIABLE}
JMeter Functions and User Variables
Type of function | Name | Comment
Information | threadNum | get thread number
Information | samplerName | get the sampler name (label)
Information | machineIP | get the local machine IP address
Information | machineName | get the local machine name
Information | time | return current time in various formats
Information | log | log (or display) a message (and return the value)
Information | logn | log (or display) a message (empty return value)
Input | StringFromFile | read a line from a file
Input | FileToString | read an entire file
Input | CSVRead | read from CSV delimited file
Input | XPath | use an XPath expression to read from a file
Calculation | counter | generate an incrementing number
Calculation | intSum | add int numbers
Calculation | longSum | add long numbers
JMeter Functions and User Variables
Type of function | Name | Comment
Calculation | Random | generate a random number
Calculation | RandomString | generate a random string
Scripting | BeanShell | run a BeanShell script
Scripting | javaScript | process JavaScript (Mozilla Rhino)
Scripting | jexl, jexl2 | evaluate a Commons Jexl expression
Properties | property | read a property
Properties | P | read a property (shorthand method)
Properties | setProperty | set a JMeter property
Variables | split | split a string into variables
Variables | V | evaluate a variable name
Variables | eval | evaluate a variable expression
Variables | evalVar | evaluate an expression stored in a variable
String | regexFunction | parse previous response using a regular expression
String | char | generate Unicode char values from a list of numbers
String | unescape | process strings containing Java escapes (e.g. \n & \t)
Reference variables and functions
• Referencing a variable in a test element is done by bracketing the variable name with '${' and '}'.
• Functions are referenced in the same manner, but by convention, the names of functions begin with "__" to
avoid conflict with user value names.
The Function Helper Dialog
• Using the Function Helper, you can select a function from the pull down, and assign values for its arguments.
The left column in the table provides a brief description of the argument, and the right column is where you
write in the value for that argument. Different functions take different arguments.
• Once you have done this, click the "Generate" button, and the appropriate string is generated for you to copy-paste into your test plan wherever you like.
The Function Helper Dialog
• __counter generates a new number each time it is called, starting with 1 and incrementing by +1 each time.
• __threadNum simply returns the number of the thread currently being executed.
• __intSum can be used to compute the sum of two or more integer values.
• __longSum can be used to compute the sum of two or more long values.
• __StringFromFile can be used to read strings from a text file
• __machineName function returns the local host name
• __machineIP function returns the local IP address.
• __javaScript executes a piece of JavaScript (not Java!) code and returns its value.
• __Random function returns a random number that lies between the given min and max values.
• __RandomString function returns a random String of length using characters in chars to use.
• __CSVRead function returns a string from a CSV file.
• __property returns the value of a JMeter property
• __P which is intended for use with properties defined on the command line
jmeter -Jgroup1.threads=7 -Jhostname1=www.realhost.edu
Fetch the values:
${__P(group1.threads)} - return the value of group1.threads
${__P(group1.loops)} - return the value of group1.loops
${__P(hostname,www.dummy.org)} - return value of property hostname or www.dummy.org
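The lookup-with-default behaviour of __P can be sketched in plain Java, with Java system properties (set via -D) standing in for JMeter properties (set via -J). `PropertyDefaultSketch` and its `p()` helper are invented names for illustration:

```java
// Sketch of __P semantics: a property set on the command line wins;
// otherwise the supplied default value is returned.
public class PropertyDefaultSketch {
    static String p(String name, String defaultValue) {
        String value = System.getProperty(name);
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        // Stands in for running: jmeter -Jgroup1.threads=7
        System.setProperty("group1.threads", "7");
        System.out.println(p("group1.threads", "1"));        // property was set
        System.out.println(p("hostname", "www.dummy.org"));  // default used
    }
}
```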
The Function Helper Dialog
• __log logs a message, and returns its input string
${__log(Message)} - written to the log file as "...thread Name : Message"
${__log(Message,OUT)} - written to console window
${__log(${VAR},,,VAR=)} - written to log file as "...thread Name VAR=value"
• __logn logs a message, and returns the empty string.
${__logn(VAR1=${VAR1},OUT)} - write the value of the variable to the console window
• The BeanShell function evaluates the script passed to it, and returns the result.
${__BeanShell(123*456)} - returns 56088
${__BeanShell(source("function.bsh"))} - processes the script in function.bsh
• __split splits the string passed to it according to the delimiter, and returns the original string.
Define VAR="a||c|" in the test plan.
${__split(${VAR},VAR,|)}
This will return the contents of VAR, i.e. "a||c|" and set the following variables:
VAR_n=4 (3 in JMeter 2.1.1 and earlier)
VAR_1=a
VAR_2=?
VAR_3=c
VAR_4=? (null in JMeter 2.1.1 and earlier)
VAR_5=null (in JMeter 2.1.2 and later)
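The VAR_n=4 result above can be reproduced outside JMeter with Java's own String.split (a rough sketch, not JMeter code): a negative limit keeps trailing empty strings, and the empty parts correspond to the ? values JMeter stores.

```java
// Sketch of __split on "a||c|" with delimiter |.
public class SplitSketch {
    public static void main(String[] args) {
        String[] parts = "a||c|".split("\\|", -1);  // -1: keep trailing empties
        System.out.println(parts.length);           // 4 -> VAR_n=4
        for (int i = 0; i < parts.length; i++) {
            // JMeter stores "?" for empty parts, matching VAR_2=? and VAR_4=?.
            String value = parts[i].isEmpty() ? "?" : parts[i];
            System.out.println("VAR_" + (i + 1) + "=" + value);
        }
    }
}
```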
Building a Database Test Plan
• You will create ten users that send two SQL requests to the database server.
• Also, you will tell the users to run their tests three times. So, the total number of requests is (10 users) x (2 requests) x (repeat 3 times) = 60 JDBC requests.
• Add Thread Group as JDBC Users
Building a Database Test Plan - JDBC Connection Configuration
• Variable name bound to pool. This needs to uniquely identify the configuration. It is used by the JDBC
Sampler to identify the configuration to be used.
• Database URL: jdbc:mysql://localhost:3306/test
• JDBC Driver class: com.mysql.jdbc.Driver
• Username: guest
• Password: password for guest
Building a Database Test Plan - Adding JDBC Requests
• Add --> Sampler --> JDBC Request
• For example, a SELECT query for Eastman Kodak stock.
Demo: BeanShell using Justdial
Resource Monitoring(Using External JMeter Plugin)
• Download the JMeter Plugins from http://jmeter-plugins.org/downloads/all/
• Also download the ServerAgent-X.X.X.zip.
• Copy JMeterPlugins.jar to JMETER_HOME/lib/ext.
• You will see a bunch of extra items in the Add context menu beginning with jp@gc - these are the extra plugins we just added via JMeterPlugins.jar.
• Unzip the ServerAgent-X.X.X.zip somewhere on the server. Then launch the agent using the startAgent.sh script on Unix, or the startAgent.bat script on Windows.
Resource Monitoring
• Now add Listener jp@gc - PerfMon Metrics
Collector.
• Add an additional listener for each metric you wish
to capture.
• Run your tests and see server side info.
• It should be able to work with most systems, since its server-side agent is built on top of SIGAR - System Information Gatherer And Reporter.
Resource Monitoring
• A quick benchmark of memory usage indicates a buffer of 1000 data points for 100 servers would take roughly 10 MB of RAM.
• On a 1.4 GHz Centrino laptop with 1 GB of RAM, the monitor should be able to handle several hundred servers.
• Possible metric types that could be captured with this Plugin are :
cpu
memory
swap
disks
network
tcp
tail
exec
jmx
• Fields corresponding to each metric type are described at http://jmeter-plugins.org/wiki/PerfMonMetrics/
Tips and Tricks
• Always use the latest build of JMeter.
• JMeter is very sensitive, navigate slowly.
• To get Help for any elements, click on that element, press CTRL + H. E.g. To know more about Thread Group,
click on Thread Group, press CTRL + H.
• To add similar kind of elements, use universal keys CTRL + C and CTRL + V.
• JMeter provides utility to change the Look and Feel by clicking Options > Look and Feel .
• To change the language, go to Options > Choose Language, then select your language.
• To enable Debugging, press CTRL + SHIFT + D and to disable CTRL + D.
• To expand all the items press CTRL + SHIFT + Minus, to collapse CTRL + Minus.
• Do not close the JMeter console (command prompt) window at any time.
• To Disable/Enable any elements, right click on the element, then select Disable/Enable.
• JMeter is capable of saving any element as a PNG file: click Edit -> Save As Image.
Best Practices
1) Use non-GUI mode: jmeter -n -t test.jmx -l test.jtl
2) Use as few Listeners as possible. Typically, only an Aggregate/Summary listener would serve the
purpose.
3) Rather than using lots of similar samplers, use the same sampler in a loop, and use variables
(CSV Data Set) to vary the sample.
4) Don't use functional mode (a checkbox present in the Test Plan).
5) Use CSV output rather than XML
6) Use as few Assertions as possible
7) If your test needs large amounts of data - particularly if it needs to be randomized - create the
test data in a file that can be read with CSV Dataset. This avoids wasting resources at run-time.
8) View results tree listener should be disabled during the load run.
9) If you need large-scale load testing, consider running multiple non-GUI JMeter instances on
multiple machines. This can be achieved by JMeter distributed testing.
10) Filter out all requests you aren't interested in. For instance, there's no point in recording image
requests.
Best Practices
11) Parameterization should also be done for server names, number of users, loops / test duration, etc., so that the same script can be run in different environments.
12) Before recording, clear the cookies and cache of the browser.
13) Close JMeter window after the test is over and results have been saved. This would refresh the
jmeter.log when it is opened next time.
14) Use naming conventions always for all elements.
15) Check the Scoping Rules and design accordingly.
16) Be sure your load generator has enough hardware resources
17) Use HTTP Request Defaults to set the server name you are testing and leave this information
out of the requests. You can change one setting instead of modifying every request.
Exercise : Executing Performance
test run and Monitoring Servers
References
• Image courtesy of http://jmeter.apache.org
• Self-learning and my experiences in various exercises.
• http://wiki.apache.org/jmeter/JMeterLinks
• http://jmeter-plugins.org/
Q&A
 Thank You !