
Performance Testing with JMeter

By Bayo Erinle
About this book

Publication date: April 2015
Publisher: Packt
Pages: 164
ISBN: 9781784394813

 

Chapter 1. Performance Testing Fundamentals

 

Software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.

 
 --Wikipedia

Let's consider a case study. Baysoft Training Inc. is an emerging start-up focused on redefining how software can help get more people trained in various fields of the IT industry. The company achieves this goal by providing a suite of products, including online courses, on-site training, and off-site training. One of its flagship products, TrainBot, is a web-based application focused solely on registering individuals for courses of interest that will help them attain their career goals. Once registered, clients can go on to take a series of interactive online courses.

 

The incident


Up until recently, traffic on TrainBot was light, as it was open to only a handful of clients while still in closed beta. Everything was fully operational and the application as a whole was very responsive. Just a few weeks ago, TrainBot was opened to the public, and all was still good and dandy. To celebrate the launch and promote its online training courses, Baysoft Training Inc. offered 75 percent off all training courses. However, this promotional offer caused a sudden influx of traffic to TrainBot, far beyond what the company had anticipated. Web traffic shot up by 300 percent and suddenly things took a turn for the worse. Network resources weren't holding up well, server CPUs and memory were at 90 to 95 percent utilization, and database servers weren't far behind due to high I/O and contention. As a result, most web requests began to see slower response times, making TrainBot totally unresponsive for most of its first-time clients. It didn't take long after that for the servers to crash and for the support lines to get flooded.

 

The aftermath


It was a long night at the Baysoft Training Inc. corporate office. How did this happen? Could this have been avoided? Why were the application and the system not able to handle the load? Why weren't adequate performance and stress tests conducted on the system and application? Was it an application problem, a system resource issue, or a combination of both? These were the questions that management demanded answers to from the group of engineers gathered in the war room, which comprised software developers, network and system engineers, Quality Assurance (QA) testers, and database administrators. There sure was a lot of finger pointing and blame to go around the room. After a little brainstorming, it wasn't long before the group decided what needed to be done: the application and its system resources needed to undergo extensive and rigorous testing. This included all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Such a test would help all involved parties discover exactly where the bottlenecks were and address them accordingly.

 

Performance testing


Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It could also be defined as a process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite, as this allows easy, repeatable simulation of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also serve as a diagnostic aid in locating bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing, a process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions.

Why bother? Using Baysoft's case study, it should be obvious why companies go to great lengths to conduct performance testing. The disaster could have been minimized, if not totally avoided, had effective performance testing been conducted on TrainBot prior to opening it up to the masses. As we proceed through this chapter, we will continue to explore the many benefits of effective performance testing.

At a very high level, performance testing is almost always conducted to address one or more risks related to expenses, opportunity costs, continuity, and/or corporate reputation. Conducting such tests gives insight into software application release readiness, the adequacy of network and system resources, infrastructure stability, and application scalability, to name a few. Gathering estimated performance characteristics of application and system resources prior to launch helps address issues early and provides valuable feedback to stakeholders, helping them make key strategic decisions.

Performance testing covers a whole lot of ground, including areas such as:

  • Assessing application and system production readiness

  • Evaluating against performance criteria (for example, transactions per second, page views per day, registrations per day, and so on)

  • Comparing performance characteristics of multiple systems or system configurations

  • Identifying the source of performance bottlenecks

  • Aiding with performance and system tuning

  • Helping to identify system throughput levels

  • Acting as a testing tool

Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the following core activities in conducting performance tests:

  • Identifying acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, identifying the response time, throughput, and resource utilization goals and constraints. How long should the end user wait before rendering a particular page? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria are usually driven by stakeholders and it is important to continuously involve them as the testing progresses, as the criteria may need to be revised.

  • Identifying the test environment: Becoming familiar with the physical test and production environments is crucial for a successful test run. Knowing things such as the hardware, software, and network configurations of the environment helps to derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle.

  • Planning and designing tests: Know the usage pattern of the application (if any), and come up with realistic usage scenarios including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations happen all at once, at the same time, or are they spaced out? How many people frequent the landing page of the application within an hour? Questions such as these help put things in perspective and design variations in the test plan. Having said that, there may be times where the application under test is new and so no usage pattern has been formed yet. At such times, stakeholders should be consulted to understand their business process and come up with as close to a realistic test plan as possible.

  • Preparing the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools; while another team may be responsible for configuring other aspects such as resource monitoring. In other organizations, a single team may be responsible for setting up all aspects.

  • Preparing the test plan: Using a test tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial, that do the job quite well, each with its own pros and cons.

    Such tools include HP LoadRunner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, LoadImpact, Rational Performance Tester, Testing Anywhere, OpenSTA, Loadstorm, The Grinder, Apache Benchmark, HttpPerf, and so on. Some of these are commercial, while others are not as mature, portable, or extensible as JMeter. HP LoadRunner, for example, is a bit pricey and limits the number of simulated threads to 250 without the purchase of additional licenses, though it does offer a much better graphical interface and monitoring capability. Gatling is the new kid on the block; it is free and looks rather promising. It is still in its infancy and aims to address some of the shortcomings of JMeter, including an easier testing DSL (domain-specific language) versus JMeter's verbose XML, and better, more meaningful HTML reports, among others. Having said that, it still has only a tiny user base compared to JMeter, and not everyone may be comfortable with building test plans in Scala, its language of choice, although programmers may find it appealing.

    In this book, our tool of choice will be Apache JMeter to perform this step. This shouldn't be a surprise considering the title of the book.

  • Running the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to simulate more realistic data (more on this in later chapters), also validate the test data. Another aspect to pay careful attention to during test plan execution is the server logs. This can be achieved through the resource monitoring agents set up to monitor the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, can be an indication that something is wrong with the test scripts, application under test, system resource, or a combination of all these.

  • Analyzing results, report, and retest: Examine the results of each successive run and identify areas of bottleneck that need to be addressed. These could be related to system, database, or application. System-related bottlenecks may lead to infrastructure changes, such as increasing memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, reconfiguring network settings, and so on. Database-related bottlenecks may lead to analyzing database I/O operations, top queries from the application under test, profiling SQL queries, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. Finally, application-related changes might lead to activities such as refactoring application components, reducing application memory consumption and database round trips, and so on. Once the identified bottlenecks are addressed, the test(s) should then be rerun and compared with previous runs. To help better track what change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results are compared with a previous run to see whether the change made had any improved or worsened effect on results. This process repeats till the performance goals of the project have been met.

The performance testing core activities are displayed as follows:

Performance testing core activities

Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to effectively gather accurate and valuable results when conducting tests. Monitoring network utilization, database I/O and waits, top queries, and invocation counts helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.

 

Performance testing and tuning


There is a strong relationship between performance testing and tuning, in the sense that one often leads to the other. Often, end-to-end testing unveils system or application bottlenecks that are regarded as unacceptable against the project's target goals. Once those bottlenecks are discovered, the next step for most teams is a series of tuning efforts to make the application perform adequately.

Such efforts normally include, but are not limited to:

  • Configuring changes in system resources

  • Optimizing database queries

  • Reducing round trips in application calls; sometimes leading to redesigning and re-architecting problematic modules

  • Scaling out application and database server capacity

  • Reducing application resource footprint

  • Optimizing and refactoring code, including eliminating redundancy and reducing execution time

Tuning efforts may also commence if the application has reached acceptable performance but the team wants to reduce the amount of system resources being used, decrease the volume of hardware needed, or further increase system performance.

After each change (or series of changes), the test is re-executed to see whether performance has improved or declined as a result. The process continues until the performance results reach acceptable goals. The outcome of these test-tuning cycles normally produces a baseline.

Baselines

Baselining is the process of capturing performance metric data for the sole purpose of evaluating the efficacy of successive changes to the system or application. It is important that all characteristics and configurations, except those specifically being varied for comparison, remain the same in order to make effective comparisons as to which change (or series of changes) is driving results toward the targeted goal. Armed with baseline results, subsequent changes can be made to the system configuration or application, and the test results can be compared to see whether such changes were relevant or not. Some considerations when generating baselines include:

  • They are application-specific

  • They can be created for system, application, or modules

  • They are metrics/results

  • They should not be overgeneralized

  • They evolve and may need to be redefined from time to time

  • They act as a shared frame of reference

  • They are reusable

  • They help identify changes in performance

Load and stress testing

Load testing is the process of putting demand on a system and measuring its response, that is, determining how much volume the system can handle. Stress testing is the process of subjecting the system to unusually high loads far beyond its normal usage pattern to determine its responsiveness. These are different from performance testing whose sole purpose is to determine the response and effectiveness of a system, that is, how fast the system is. Since load ultimately affects how a system responds, performance testing is almost always done in conjunction with stress testing.

 

JMeter to the rescue


In the previous section, we covered the fundamentals of conducting a performance test. One of the areas performance testing touches on is testing tools: which testing tool do you use to put the system and application under load? There are numerous testing tools available to perform this operation, from free to commercial solutions. However, our focus in this book will be on Apache JMeter, a free, open source, cross-platform desktop application from the Apache Software Foundation. JMeter has been around since 1998 according to the historic change logs on its official site, making it a mature, robust, and reliable testing tool. Cost may also have played a role in its wide adoption. Small companies usually may not want to foot the bill for commercial testing tools, which often place restrictions, for example, on how many concurrent users one can spin up. My first encounter with JMeter was a direct result of this. I worked in a small shop that had paid for a commercial testing tool, but during the course of testing, we outgrew the licensing limits on how many concurrent users we needed to simulate for realistic test plans. Since JMeter was free, we explored it and were quite delighted with the offerings and the sheer number of features we got for free.

Here are some of its features:

  • Performance testing of different server types, including web (HTTP and HTTPS), SOAP, database, LDAP, JMS, mail, and native commands or shell scripts

  • Complete portability across various operating systems

  • Full multithreading framework allowing concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups

  • Graphical User Interface (GUI)

  • HTTP proxy recording server

  • Caching and offline analysis/replaying of test results

  • High extensibility

  • Live view of results as testing is being conducted

JMeter allows multiple concurrent users to be simulated on the application, allowing you to work toward most of the target goals discussed earlier in this chapter, such as attaining a baseline, identifying bottlenecks, and so on.

It will help answer questions such as:

  • Will the application still be responsive if 50 users are accessing it concurrently?

  • How reliable will it be under a load of 200 users?

  • How much of the system resources will be consumed under a load of 250 users?

  • What will the throughput look like with 1000 users active in the system?

  • What will be the response time for the various components in the application under load?

JMeter, however, should not be confused with a browser (more on this in Chapter 2, Recording Your First Test and Chapter 3, Submitting Forms). It doesn't perform all the operations supported by browsers; in particular, JMeter does not execute the JavaScript found in HTML pages, nor does it render HTML pages the way a browser does. It does, however, give you the ability to view request responses as HTML through many of its listeners, but the timings are not included in any samples. Furthermore, there are limits to how many users can be spun up on a single machine. These vary depending on the machine specifications (for example, memory, processor speed, and so on) and the test scenarios being executed. In our experience, we have mostly been able to successfully spin up 250 to 450 users on a single machine with a 2.2 GHz processor and 8 GB of RAM.

 

Up and running with JMeter


Now, let's get up and running with JMeter, beginning with its installation.

Installation

JMeter comes as a bundled archive, so it is super easy to get started with it. Those working in corporate environments behind a firewall or on machines with non-admin privileges appreciate this even more. To get started, grab the latest binary release by pointing your browser to http://jmeter.apache.org/download_jmeter.cgi. At the time of writing this, the current release version is 2.12. The download site offers the bundle as both a .zip file and a .tgz file. In this book, we go with the .zip file option, but feel free to download the .tgz file if that's your preferred way of grabbing archives.

Once downloaded, extract the archive to a location of your choice. Throughout this book, the location you extracted the archive to will be referred to as JMETER_HOME.

Provided you have a JDK/JRE correctly installed and a JAVA_HOME environment variable set, you are all set and ready to run!
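For reference, the whole install on a Unix-like system boils down to a couple of commands. The following is just a sketch; the archive name, version, and target folder are examples rather than requirements:

    # extract the downloaded archive to a folder of your choice (names are examples)
    unzip apache-jmeter-2.12.zip -d ~/tools

    # this extracted folder is what the book refers to as JMETER_HOME
    export JMETER_HOME=~/tools/apache-jmeter-2.12

    # confirm a JDK/JRE is available before launching JMeter
    java -version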

The following screenshot shows a trimmed down directory structure of a vanilla JMeter install:

JMETER_HOME folder structure

The following are some of the folders in apache-jmeter-2.12, as shown in the preceding screenshot:

  • bin: This folder contains executable scripts to run and perform other operations in JMeter

  • docs: This folder contains a well-documented user guide

  • extras: This folder contains miscellaneous items, including samples illustrating the usage of the Apache Ant build tool (http://ant.apache.org/) with JMeter, and BeanShell scripting

  • lib: This folder contains utility JAR files needed by JMeter (you may add additional JARs here to use from within JMeter—more on this will be covered later)

  • printable_docs: This is the printable documentation

Installing Java JDK

Follow these steps to install Java JDK:

  1. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html.

  2. Download Java JDK (not JRE) compatible with the system that you will use to test. At the time of writing, JDK 1.8 (update 20) was the latest and that is what we use throughout this book.

  3. Double-click on the executable and follow the on-screen instructions.

    Note

    On Windows systems, the default location for the JDK is under Program Files. While there is nothing wrong with this, the issue is that the folder name contains a space, which can sometimes be problematic when attempting to set PATH and run programs such as JMeter depending on the JDK from the command line. With this in mind, it is advisable to change the default location to something such as C:\tools\jdk.

Setting up JAVA_HOME

Here are the steps to set up the JAVA_HOME environment variable on Windows and Unix operating systems.

On Windows

For illustrative purposes, assume that you have installed Java JDK at C:\tools\jdk:

  1. Go to Control Panel.

  2. Click on System.

  3. Click on Advanced system settings.

  4. Click on Environment Variables and add a new system variable with the following values:

    • Variable name: JAVA_HOME

    • Variable value: C:\tools\jdk

  5. Locate Path (under system variables, bottom half of the screen).

  6. Click on Edit.

  7. Append %JAVA_HOME%\bin to the end of the existing path value (if any).

On Unix

For illustrative purposes, assume that you have installed Java JDK at /opt/tools/jdk:

  1. Open up a terminal window.

  2. Run export JAVA_HOME=/opt/tools/jdk.

  3. Run export PATH=$PATH:$JAVA_HOME/bin.

It is advisable to set this in your shell profile settings such as .bash_profile (for bash users) or .zshrc (for zsh users) so you won't have to set it for each new terminal window you open.
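As a minimal sketch, assuming the /opt/tools/jdk location used above and a bash shell, the lines you would add to ~/.bash_profile are:

    # ~/.bash_profile -- persist JAVA_HOME and PATH for new terminal sessions
    export JAVA_HOME=/opt/tools/jdk
    export PATH=$PATH:$JAVA_HOME/bin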

Running JMeter

Once installed, the bin folder under the JMETER_HOME folder contains all the executable scripts that can be run. Based on the operating system that you installed JMeter on, you either execute the shell scripts (.sh file) for operating systems that are Unix/Linux flavored, or their batch (.bat file) counterparts on operating systems that are Windows flavored.

Tip

JMeter files are saved as XML files with a .jmx extension. We refer to them as test scripts or JMX files in this book.

These scripts include:

  • jmeter.sh: This script launches JMeter GUI (the default)

  • jmeter-n.sh: This script launches JMeter in non-GUI mode (takes a JMX file as input)

  • jmeter-n-r.sh: This script launches JMeter in non-GUI mode remotely

  • jmeter-t.sh: This opens a JMX file in the GUI

  • jmeter-server.sh: This script starts JMeter in server mode (this will be kicked off on each slave node when testing with multiple machines remotely; more on this in Chapter [x])

  • mirror-server.sh: This script runs the mirror server for JMeter

  • shutdown.sh: This script gracefully shuts down a running non-GUI instance

  • stoptest.sh: This script abruptly shuts down a running non-GUI instance

To start JMeter, open a terminal shell, change to the JMETER_HOME/bin folder and run the following command on Unix/Linux:

./jmeter.sh

Run the following command on Windows:

jmeter.bat

A short moment later, you will see the JMeter GUI (shown later in the Configuring a proxy server section). Take a moment to explore the GUI. Hover over each icon to see a short description of what it does. The Apache JMeter team has done an excellent job with the GUI. Most icons are very similar to what you are used to, which helps ease the learning curve for new adopters. Some of the icons, for example, stop and shutdown, are disabled until a scenario/test is run. In the next chapter, we will explore the GUI in more detail as we record our first test script.

On the terminal window, you might see some warnings from Java 8 that certain Java options (PermSize and MaxPermSize) will be ignored. Do not be alarmed. JDK 8 came with better memory management, and some of the default Java options used to start JMeter are no longer required, so it ignores them. You can read more about this at the following links:

http://java.dzone.com/articles/java-8-permgen-metaspace

http://www.infoq.com/news/2013/03/java-8-permgen-metaspace

Tip

The environment variable JVM_ARGS can be used to override JVM settings in the jmeter.bat or jmeter.sh script. Consider the following example:

export JVM_ARGS="-Xms1024m -Xmx1024m -Dpropname=propvalue"
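For instance, you could export the variable and then launch JMeter from the same shell so the overridden settings apply to that run; the heap values shown are illustrative, not recommendations:

    # give the JMeter JVM a fixed 1 GB heap for this run (example values)
    export JVM_ARGS="-Xms1024m -Xmx1024m"
    ./jmeter.sh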

Command-line options

Running JMeter with incorrect options provides you with usage info. The options provided are as follows:

./jmeter.sh -
-h, --help
    print usage information and exit
-v, --version
    print the version information and exit
-p, --propfile <argument>
    the jmeter property file to use
-q, --addprop <argument>
    additional JMeter property file(s)
-t, --testfile <argument>
    the jmeter test (.jmx) file to run
-l, --logfile <argument>
    the file to log samples to
-j, --jmeterlogfile <argument>
    jmeter run log file (jmeter.log)
-n, --nongui
    run JMeter in nongui mode

This is a snippet (a non-exhaustive list) of what you might see if you did the same. We will explore some, but not all, of these options as we go through the book.

JMeter's Classpath

Since JMeter is 100 percent pure Java, it comes packed with enough functionality to get most test cases scripted. However, there may come a time when you need to pull in functionality provided by a third-party library, or one developed by yourself, that is not present by default. For this reason, JMeter provides two directories where such third-party libraries can be placed so that they are automatically discovered on its classpath (see the sketch after the following list):

  • JMETER_HOME/lib: This is used for utility JARs.

  • JMETER_HOME/lib/ext: This is used for JMeter components and add-ons. All custom-developed JMeter components should be placed in the lib/ext folder, while third-party libraries (JAR files) should reside in the lib folder.
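As a quick sketch (the JAR names below are hypothetical), dropping libraries into these folders is all that is needed; JMeter picks them up the next time it starts:

    # hypothetical JAR names, shown only to illustrate where each kind of file goes
    cp my-utility-library.jar $JMETER_HOME/lib/
    cp my-custom-sampler.jar $JMETER_HOME/lib/ext/
    # restart JMeter afterwards so the new JARs are discovered on the classpath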

Configuring a proxy server

If you are working from behind a corporate firewall, you may need to configure JMeter to work with it, providing it with the proxy server host and port number. To do so, supply additional command-line parameters to JMeter when starting it up. Some of them are as follows:

  • -H: This command-line parameter specifies the proxy server hostname or IP address

  • -P: This specifies the proxy server port

  • -u: This specifies the proxy server username if it is secure

  • -a: This specifies the proxy server password if it is secure, for example:

    ./jmeter.sh -H proxy.server -P 7567 -u username -a password
    

On Windows, run the jmeter.bat file instead.
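For completeness, the Windows counterpart of the preceding command would look something like the following (same placeholder proxy host, port, and credentials):

    jmeter.bat -H proxy.server -P 7567 -u username -a password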

Tip

Do not confuse the proxy server mentioned here with JMeter's built-in HTTP Proxy Server, which is used to record HTTP or HTTPS browser sessions. We will be exploring this in the next chapter when we record our first test scenario.

The screen is displayed as follows:

JMeter GUI

Running in non-GUI mode

As described earlier, JMeter can run in non-GUI mode. This is needed when you run remotely, or want to optimize your testing system by not taking the extra overhead cost of running the GUI. Normally, you will run the default (GUI) when preparing your test scripts and running light load, but run the non-GUI mode for higher loads.

To do so, use the following command-line options:

  • -n: This command-line option indicates to run in non-GUI mode

  • -t: This command-line option specifies the name of the JMX test file

  • -l: This command-line option specifies the name of the JTL file to log results to

  • -j: This command-line option specifies the name of the JMeter run log file

  • -r: This command-line option runs the test in the remote servers specified by the remote_hosts JMeter property

  • -R: This command-line option runs the test in the specified remote servers (for example, -Rserver1,server2)

In addition, you can also use the -H and -P options to specify the proxy server host and port as we saw earlier (a fuller sketch follows the basic command below):

./jmeter.sh -n -t test_plan_01.jmx -l log.jtl
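Building on that basic command, a heavier run might combine several of these options. The following is only a sketch, with placeholder server names, and it assumes remote JMeter servers and a proxy as described above:

    # non-GUI run pushed to two (placeholder) remote servers, from behind a proxy
    ./jmeter.sh -n -t test_plan_01.jmx -l log.jtl -j run.log \
        -Rserver1,server2 -H proxy.server -P 7567

    # from another terminal in JMETER_HOME/bin, stop the run gracefully if needed
    ./shutdown.sh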
Running in server mode

This is used when performing distributed testing, that is, using multiple testing servers to generate additional load on your system. JMeter will be kicked off in server mode on each remote server (slave), and then a GUI on the master server will be used to control the slave nodes. We will discuss this in detail when we dive into distributed testing in Chapter 6, Distributed Testing:

./jmeter-server.sh

Note

Specify the server.exitaftertest=true JMeter property if you want the server to exit after a single test is completed. It is set to false by default.
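A minimal sketch, assuming the property is simply passed on the command line with the -J flag covered later in the Overriding properties section (it can equally be set in jmeter.properties):

    # start JMeter in server mode and have it exit after a single test completes
    ./jmeter-server.sh -Jserver.exitaftertest=true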

Overriding properties

JMeter provides two ways to override Java, JMeter, and logging properties. One way is to directly edit the jmeter.properties file, which resides in the JMETER_HOME/bin folder. I suggest that you take a peek into this file to see the vast number of properties you can override. This is one of the things that makes JMeter so powerful and flexible. On most occasions, you will not need to override the defaults, as they have sensible values.

The other way to override these values is directly from the command line when starting JMeter.

The options available to you include:

  • Defining a Java system property value:

    -D<property name>=<value>
    
  • Defining a local JMeter property:

    -J<property name>=<value>
  • Defining a JMeter property to be sent to all remote servers:

    -G<property name>=<value>
  • Defining a file containing JMeter properties to be sent to all remote servers:

    -G<property file>
  • Overriding a logging setting, setting a category to a given priority level:

    -L<category>=<priority>

    Consider the following example:

    ./jmeter.sh -Duser.dir=/home/bobbyflare/jmeter_stuff \
        -Jremote_hosts=127.0.0.1 -Ljmeter.engine=DEBUG

Tip

Since command-line options are processed after the logging system has been set up, any attempt to use the -J flag to update the log_level or log_file properties will have no effect.

Tracking errors during test execution

JMeter keeps track of all errors that occur during a test in a log file, named jmeter.log by default. The file resides in the folder from which JMeter was launched. The name of this log file, like most things, can be configured in jmeter.properties or via the command-line parameter -j <name_of_log_file>. When running the GUI, the error count is indicated in the top-right corner, to the left of the number of threads running for the test, as shown in Figure 1.4. Clicking on it reveals the log file contents directly at the bottom of the GUI. The log file provides insight into exactly what is going on in JMeter while your tests are being executed and helps determine the cause of errors when they occur.

Figure 1.4: JMeter GUI error count/indicator
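For example, when running the earlier test plan in non-GUI mode, you might name the run log explicitly and follow it from a second terminal; the file names here are just examples:

    # write the run log to a custom file instead of the default jmeter.log
    ./jmeter.sh -n -t test_plan_01.jmx -l log.jtl -j my_test_run.log

    # follow the run log from another terminal to spot warnings and errors early
    tail -f my_test_run.log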

Configuring JMeter

Should you need to customize JMeter default values, you can do so by editing the jmeter.properties file in the JMETER_HOME/bin folder, or making a copy of that file, renaming it to something different (for example, my-jmeter.properties), and specifying that as a command-line option when starting JMeter.
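A minimal sketch of that second approach might look like this, with my-jmeter.properties being the hypothetical copy mentioned above:

    # copy the defaults, customize the copy, then point JMeter at it with -p
    cp $JMETER_HOME/bin/jmeter.properties $JMETER_HOME/bin/my-jmeter.properties
    ./jmeter.sh -p my-jmeter.properties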

Some options you can configure include:

  • xml.parser: This specifies a custom XML parser implementation. The default value is org.apache.xerces.parsers.SAXParser. It is not mandatory. If you find the provided SAX parser buggy for some of your use cases, this gives you the option to override it with another implementation. For example, you can use javax.xml.parsers.SAXParser, provided that the right JARs exist on your JMeter instance's classpath.

  • remote_hosts: This is a comma-delimited list of remote JMeter hosts (or host:port if required). When running JMeter in a distributed environment, list the machines where you have JMeter remote servers running. This will allow you to control those servers from this machine's GUI. This applies only to distributed testing and is not mandatory. More on this will be discussed in Chapter 6, Distributed Testing.

  • not_in_menu: This is a list of components you do not want to see in JMeter's menus. Since JMeter has quite a number of components, you may wish to restrict it to show only components you are interested in or those you use regularly. You may list their classname or their class label (the string that appears in JMeter's UI) here, and they will no longer appear in the menus. The defaults are fine and in our experience, we have never had to customize them, but we list it here so that you are aware of its existence. It is not mandatory.

  • user.properties: This specifies the name of the file containing additional JMeter properties. These are added after the initial property file, but before the -q and -J options are processed. This is not mandatory. User properties can be used to provide additional classpath configurations such as plugin paths via the search_paths attribute, and utility JAR paths via the user_classpath attribute. In addition, these properties files can be used to fine-tune JMeter components' log verbosity.

    • search_paths: This specifies a list of paths (separated by ;) that JMeter will search for JMeter add-on classes; for example additional samplers. This is in addition to any of the JARs found in the lib/ext folder. This is not mandatory. This comes in handy, for example, when extending JMeter with additional plugins that you don't intend to install in the JMETER_HOME/lib/ext folder. You can use this to specify an alternate location on the machine to pick up the plugins. Refer to Chapter 4, Managing Sessions.

    • user.classpath: In addition to JARs in the lib folder, use this attribute to provide additional paths that JMeter will search for utility classes. It is not mandatory.

  • system.properties: This specifies the name of the file containing additional system properties for JMeter to use. These are added before the -S and -D options are processed. This is not mandatory. This typically provides you with the ability to fine-tune various SSL settings, key stores, and certificates.

    • ssl.provider: This specifies the custom SSL implementation if you don't want to use the built-in Java implementation. This is not mandatory. If, for some reason, the default built-in Java implementation of SSL, which is quite robust, doesn't meet your particular usage scenario, this allows you to provide a custom one. In our experience, the default has always been sufficient.

The command-line options are processed in the following order:

  • -p propfile: This specifies a custom JMeter properties file to be used. If present, it is loaded and processed. This is optional.

  • jmeter.properties file: This is the default configuration file for JMeter and is already populated with sensible default values. It is loaded and processed after any user-provided custom properties file.

  • -j logfile: This is optional. It specifies the jmeter logfile. It is loaded and processed after the jmeter.properties file that we discussed previously.

  • Logging is initialized.

  • user.properties: is loaded (if any).

  • system.properties: is loaded (if any).

  • All other command-line options are processed.

 

Summary


In this chapter, we covered the fundamentals of performance testing and discussed the key concepts and activities surrounding it. In addition, we installed JMeter, you learned how to get it fully running on a machine, and we explored some of its available configurations. We also looked at some of the things that make JMeter a great tool of choice for your next performance testing assignment: it is free, mature, and open source; it is easily extensible and customizable; it is completely portable across various operating systems; it has a great plugin ecosystem and a large user community; and it offers a built-in GUI plus the recording and validating of test scenarios, among other features. In comparison with other performance testing tools, JMeter holds its own.

In the next chapter, we will record our first test scenario and dive deeper into JMeter.

About the Author
  • Bayo Erinle

    Bayo Erinle is a senior software engineer with over nine years' experience in designing, developing, testing, and architecting software. He has worked in various spectrums of the IT field, including government, finance, and health care. As a result, he has been involved in the planning, development, implementation, integration, and testing of numerous applications, including multi-tiered, standalone, distributed, and cloud-based applications. He is always intrigued by new technology and enjoys learning new things. He currently resides in Maryland, US, and when he is not hacking away at some new technology, he enjoys spending time with his wife Nimota and their three children, Mayowa, Durotimi, and Fisayo.
