Software performance testing is used to determine the speed or effectiveness of a computer, network, software program, or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. - Wikipedia
Let's consider a case study. Baysoft Training Inc. is an emerging start-up focused on redefining how software will help get more people trained in various fields of the IT industry. The company achieves this goal by providing a suite of products, including online courses and on-site and off-site training. One of its flagship products, TrainBot, is a web-based application focused solely on registering individuals for courses of interest that will aid them in attaining their career goals. Once registered, a client can go on to take a series of interactive online courses.
Until recently, traffic on TrainBot was light, as it was still in closed beta and open to only a handful of clients. Everything was fully operational, and the application as a whole was very responsive. Just a few weeks ago, TrainBot was opened to the public, and all was still fine and dandy. To celebrate the launch and promote its online training courses, Baysoft Training Inc. offered 75 percent off all training courses. However, the promotion caused a sudden influx of traffic to TrainBot, far beyond what the company had anticipated. Web traffic shot up by 300 percent and, suddenly, things took a turn for the worse.
Network resources weren't holding up well, server CPU and memory utilization sat at 90-95 percent, and the database servers weren't far behind due to high I/O and contention. As a result, response times for most web requests slowed, rendering TrainBot totally unresponsive for most of its first-time clients. It didn't take long for the servers to crash and for the support lines to be flooded.
It was a long night at the Baysoft Training Inc. corporate office. How did this happen? Could it have been avoided? Why were the application and system unable to handle the load? Why weren't adequate performance and stress tests conducted on the system and application? Was it an application problem, a system resource issue, or a combination of both? Management demanded answers to all these questions from the group gathered in the meeting room: software developers, network and system engineers, Quality Assurance (QA) testers, and database administrators. There was a lot of finger-pointing and blame going around. After a little brainstorming, the group agreed on what needed to be done: the application and its system resources had to undergo extensive and rigorous testing. This included all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Such testing would help all parties involved discover exactly where the bottlenecks were and address them accordingly.
Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It can also be defined as the process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite, as this allows easy, repeatable simulation of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also serve as a diagnostic aid in locating bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing: the process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions.
Why bother? Using Baysoft's case study, it should be obvious why companies go to great lengths to conduct performance testing. The disaster could have been minimized, if not totally prevented, if effective performance testing had been conducted on TrainBot prior to opening it up to the masses. As we proceed through this chapter, we will continue to explore the benefits of effective performance testing.
At a very high level, performance testing is mostly conducted to address one or more risks related to expenses, opportunity costs, continuity, and/or corporate reputation. Conducting such tests helps give insights into software application release readiness, adequacy of network and system resources, infrastructure stability, and application scalability, to name a few. Gathering estimated performance characteristics of application and system resources prior to launch helps address issues early and provides valuable feedback to stakeholders, helping them make key and strategic decisions.
Performance testing covers a whole lot of ground, including areas such as the following:
- Assessing application and system production readiness
- Evaluating against performance criteria (for example, transactions per second, page views per day, and registrations per day)
- Comparing performance characteristics of multiple systems or system configurations
- Identifying the source of performance bottlenecks
- Aiding with performance and system tuning
- Helping identify system throughput levels
- Acting as a testing tool
Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the following core activities in conducting performance tests:
- Identifying acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, we need to identify the response time, throughput, and resource utilization goals and constraints. How long should the end user wait for a particular page to render? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria are usually driven by stakeholders, and it is important to involve them continuously as testing progresses, as the criteria may need to be revised.
- Identifying the test environment: Becoming familiar with the physical test and production environments is crucial for a successful test run. Knowing things such as the hardware, software, and network configurations of the environment helps derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle.
- Planning and designing tests: Know the usage pattern of the application (if any), and come up with realistic usage scenarios, including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations happen all at once, at the same time, or are they spaced out? How many people frequent the landing page of the application within an hour? Questions such as these help put things in perspective and design variations in the test plan. Having said that, there may be times when the application under test is new, and so, no usage pattern has been formed yet. At such times, stakeholders should be consulted to understand their business process and come up with as close to a realistic test plan as possible.
- Preparing the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools, while another team may be responsible for configuring other aspects, such as resource monitoring. In other organizations, a single team may be responsible for setting up all aspects.
- Preparing the test plan: Using a test tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial, that do the job quite well, with each having their pros and cons.
- Such tools include HP LoadRunner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, Load Impact, Rational Performance Tester, Testing Anywhere, OpenSTA, LoadStorm, The Grinder, Apache Benchmark, httperf, and so on. Some of these are commercial, while others are not as mature, portable, or extensible as JMeter. HP LoadRunner, for example, is a bit pricey and limits the number of simulated threads to 250 unless additional licenses are purchased, although it does offer a much better graphical interface and monitoring capability. Gatling is the new kid on the block; it is free and looks rather promising. Still in its infancy, it aims to address some of the shortcomings of JMeter, including an easier-to-write domain-specific language (DSL) versus JMeter's verbose XML, and better, more meaningful HTML reports, among others. Having said that, it still has only a tiny user base compared to JMeter, and not everyone may be comfortable building test plans in Scala, its language of choice, though programmers may find it appealing.
- In this book, our tool of choice for this step will be Apache JMeter. This shouldn't come as a surprise, considering the title of the book.
- Running the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to simulate more realistic data (more on this in subsequent chapters), validate the test data as well. Another aspect to pay careful attention to during test plan execution is the server logs, which can be watched through the resource monitoring agents set up on the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, can indicate that something is wrong with the test scripts, the application under test, the system resources, or a combination of these.
- Analyzing results, reporting, and retesting: Examine the results of each successive run and identify the bottlenecks that need to be addressed. These can be related to the system, the database, or the application. System-related bottlenecks may lead to infrastructure changes, such as increasing the memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, and reconfiguring network settings. Database-related bottlenecks may lead to analyzing database I/O operations, profiling the top queries issued by the application under test, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. Finally, application-related bottlenecks might lead to activities such as refactoring application components and reducing application memory consumption and database round trips. Once the identified bottlenecks are addressed, the test(s) should be rerun and compared with previous runs. To better track which change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results are compared with a previous run to check whether the change improved or worsened the results. This process is repeated until the performance goals of the project have been met.
The performance testing core activities are displayed as follows:

Performance testing core activities
Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to effectively gather accurate and valuable results when conducting tests. Monitoring network utilization, database I/O and waits, top queries, and invocation counts helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.
There is a strong relationship between performance testing and tuning, in the sense that one often leads to the other. Often, end-to-end testing unveils system or application bottlenecks that are regarded as unacceptable for project target goals. Once those bottlenecks are discovered, the next step for most teams is a series of tuning efforts to make the application perform adequately.
Such efforts normally include, but are not limited to, the following:
- Configuring changes in system resources
- Optimizing database queries
- Reducing round trips in application calls, sometimes leading to redesigning and re-architecting problematic modules
- Scaling out application and database server capacity
- Reducing application resource footprint
- Optimizing and refactoring code, including eliminating redundancy and reducing execution time
Tuning efforts may also commence if the application has reached acceptable performance but the team wants to reduce the amount of system resources being used, decrease the volume of hardware needed, or further increase system performance.
After each change (or series of changes), the test is re-executed to see whether performance has improved or declined as a result. The process continues until the performance results reach acceptable goals. The outcome of these test-tune cycles normally produces a baseline.
Baselining is the process of capturing performance metric data for the sole purpose of evaluating the efficacy of successive changes to the system or application. It is important that all characteristics and configurations, except those specifically being varied for comparison, remain the same, in order to make effective comparisons as to which change (or series of changes) is driving results toward the targeted goal. Armed with baseline results, subsequent changes can be made to the system configuration or application, and the new test results can be compared against the baseline to see whether the changes helped. Some considerations when generating baselines include the following:
- They are application-specific
- They can be created for systems, applications, or modules
- They are metrics/results
- They should not be over generalized
- They evolve and may need to be redefined from time to time
- They act as a shared frame of reference
- They are reusable
- They help identify changes in performance
Load testing is the process of putting demand on a system and measuring its response, that is, determining how much volume the system can handle. Stress testing is the process of subjecting the system to unusually high loads, far beyond its normal usage pattern, to determine its responsiveness. These are different from performance testing, whose sole purpose is to determine the response and effectiveness of a system, that is, how fast the system is. Since load ultimately affects how a system responds, performance testing is mostly done in conjunction with stress testing.
In the last section, we covered the fundamentals of conducting a performance test. One of the areas performance testing touches on is testing tools: which tool do you use to put the system and application under load? There are numerous testing tools available for this, from free to commercial solutions. Our focus in this book, however, is Apache JMeter, a free, open source, cross-platform desktop application from the Apache Software Foundation. According to the historic change logs on its official site, JMeter has been around since 1998, making it a mature, robust, and reliable testing tool. Cost may also have played a role in its wide adoption. Small companies usually don't want to foot the bill for commercial testing tools, which often place restrictions on, for example, how many concurrent users one can simulate. My first encounter with JMeter was a result of exactly this: I worked in a small shop that had paid for a commercial testing tool, but during the course of testing, we outgrew the license limit on the number of concurrent users we needed to simulate for realistic test plans. Since JMeter was free, we explored it and were quite delighted with the sheer number of features we got at no cost.
Here are some of its features:
- Performance tests of different server types, including web (HTTP and HTTPS), SOAP, database, LDAP, JMS, mail, and native commands or shell scripts
- Complete portability across various operating systems
- Full multithreading framework allowing concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups
- Full-featured test IDE that allows fast test plan recording, building, and debugging
- Dashboard report for detailed analysis of application performance indexes and key transactions
- Built-in integration with real-time reporting and analysis tools, such as Graphite, InfluxDB, and Grafana, to name a few
- Complete dynamic HTML reports
- Graphical User Interface (GUI)
- HTTP proxy recording server
- Caching and offline analysis/replaying of test results
- High extensibility
- Live view of results as testing is being conducted
JMeter allows multiple concurrent users to be simulated against the application, letting you work toward most of the target goals discussed earlier in this chapter, such as establishing baselines and identifying bottlenecks.
It will help answer questions, such as the following:
- Will the application still be responsive if 50 users are accessing it concurrently?
- How reliable will it be under a load of 200 users?
- How much of the system resources will be consumed under a load of 250 users?
- What will the throughput look like with 1000 users active in the system?
- What will be the response time for the various components in the application under load?
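As a taste of what's ahead, once you have a recorded test plan, questions like these can be explored by varying the simulated load from the command line. A minimal sketch, assuming a plan whose Thread Group reads its user count from a property via JMeter's ${__P(users,50)} function (the plan file name and property name here are illustrative):

./jmeter.sh -n -t trainbot_plan.jmx -Jusers=200 -l results_200.jtl    # simulate 200 concurrent users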
JMeter, however, should not be confused with a browser (more on this in Chapter 2, Recording Your First Test, and Chapter 3, Submitting Forms). It doesn't perform all the operations supported by browsers; in particular, JMeter does not execute the JavaScript found in HTML pages, nor does it render HTML pages the way a browser does. It does give you the ability to view request responses as HTML through many of its listeners, but the timings are not included in any samples. Furthermore, there are limits to how many virtual users can be spun up on a single machine; these vary with the machine's specifications (for example, memory and processor speed) and the test scenarios being executed. In our experience, we have mostly been able to spin up 250-450 users successfully on a single machine with a 2.2 GHz processor and 8 GB of RAM.
Now, let's get up and running with JMeter, beginning with its installation.
JMeter comes as a bundled archive, so it is super easy to get started with it. Those working in corporate environments behind a firewall, or on machines without admin privileges, will appreciate this even more. To get started, grab the latest binary release by pointing your browser to http://jmeter.apache.org/download_jmeter.cgi. At the time of writing, the current release version is 3.2. The download site offers the bundle as both a .zip file and a .tgz file. In this book, we go with the .zip option, but feel free to download the .tgz file if that's your preferred way of grabbing archives.
Once downloaded, extract the archive to a location of your choice. Throughout this book, the location you extracted the archive to will be referred to as JMETER_HOME.
Provided you have a JDK/JRE correctly installed and a JAVA_HOME environment variable set, you are all set and ready to run!
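A quick sanity check before proceeding, assuming a Unix-like shell (the Windows equivalents are covered later in this chapter):

java -version      # should print your installed JDK/JRE version
echo $JAVA_HOME    # should print the JDK install location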
The following screenshot shows a trimmed down directory structure of a vanilla JMeter install:

JMETER_HOME folder structure
The following are some of the folders in Apache-JMeter-3.2, as shown in the preceding screenshot:
- bin: This folder contains executable scripts to run and perform other operations in JMeter
- docs: This folder contains a well-documented user guide
- extras: This folder contains miscellaneous items, including samples illustrating the usage of the Apache Ant build tool (http://ant.apache.org/) with JMeter, and BeanShell scripting
- lib: This folder contains utility JAR files needed by JMeter (you may add additional JARs here to use from within JMeter; we will cover this in detail later)
- printable_docs: This folder contains the printable documentation
Follow these steps to install Java JDK:
- Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html.
- Download Java JDK (not JRE) compatible with the system that you will use to test. At the time of writing, JDK 1.8 (update 131) was the latest, and that's what we use throughout this book.
- Double-click on the executable and follow the onscreen instructions.
Note
On Windows systems, the default install location for the JDK is under Program Files. While there is nothing wrong with this, the folder name contains a space, which can sometimes be problematic when setting the PATH and running programs that depend on the JDK, such as JMeter, from the command line. With this in mind, it is advisable to change the install location to something like C:\tools\jdk.
Here are the steps to set up the JAVA_HOME environment variable on Windows and Unix operating systems.
For illustrative purposes, assume that you have installed the Java JDK at C:\tools\jdk:
- Go to Control Panel.
- Click on System.
- Click on Advanced System settings.
- Add an environment variable with the name JAVA_HOME and the value C:\tools\jdk.
- Locate Path (under system variables, in the bottom half of the screen).
- Click on Edit.
- Append %JAVA_HOME%\bin to the end of the existing path value (if any).
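To verify the setup, open a new Command Prompt and run the following; the first should echo the JDK location and the second the installed version:

echo %JAVA_HOME%
java -version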
For illustrative purposes, assume that you have installed the Java JDK at /opt/tools/jdk:
- Open up a Terminal window.
- Run export JAVA_HOME=/opt/tools/jdk.
- Run export PATH=$PATH:$JAVA_HOME/bin.
It is advisable to set these in your shell profile settings, such as .bash_profile (for Bash users) or .zshrc (for Zsh users), so that you won't have to set them for each new Terminal window you open.
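A minimal sketch of the relevant profile lines, assuming the same /opt/tools/jdk install location:

# ~/.bash_profile (or ~/.zshrc): make the JDK available in every new shell
export JAVA_HOME=/opt/tools/jdk
export PATH=$PATH:$JAVA_HOME/bin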
Once installed, the bin folder under the JMETER_HOME folder contains all the executable scripts that can be run. Based on the operating system on which you installed JMeter, you either execute the shell scripts (.sh files) on Unix/Linux-flavored operating systems, or their batch (.bat) counterparts on Windows.
Note
JMeter files are saved as XML files with a .jmx extension. We refer to them as test scripts or JMX files in this book.
These scripts include the following:
- jmeter.sh: This script launches the JMeter GUI (the default)
- jmeter-n.sh: This script launches JMeter in non-GUI mode (it takes a JMX file as input)
- jmeter-n-r.sh: This script launches JMeter in non-GUI mode remotely
- jmeter-t.sh: This script opens a JMX file in the GUI
- jmeter-server.sh: This script starts JMeter in server mode (this will be kicked off on the slave nodes when testing with multiple machines remotely; more on this in Chapter 6, Distributed Testing)
- mirror-server.sh: This script runs the mirror server for JMeter
- shutdown.sh: This script gracefully shuts down a running non-GUI instance
- stoptest.sh: This script abruptly shuts down a running non-GUI instance
To start JMeter, open a Terminal shell, change to the JMETER_HOME/bin folder, and run the following command on Unix/Linux:
./jmeter.sh
Alternatively, run the following command on Windows:
jmeter.bat
A short moment later, you will see the JMeter GUI (shown in the screenshot in the proxy server configuration section later in this chapter). Take a moment to explore the GUI. Hover over each icon to see a short description of what it does. The Apache JMeter team has done an excellent job with the GUI. Most icons are similar to what you are already used to, which helps ease the learning curve for new adopters. Some icons, for example, stop and shutdown, remain disabled until a test is underway. In the next chapter, we will explore the GUI in more detail as we record our first test script.
Note
The JVM_ARGS environment variable can be used to override JVM settings in the jmeter.bat or jmeter.sh script. Consider the following example:
export JVM_ARGS="-Xms1024m -Xmx1024m -Dpropname=propvalue"
To see all the options available when starting JMeter, run the JMeter executable with the -? option. The options provided are as follows:
./jmeter.sh -?
    -?                      print command line options and exit
    -h, --help              print usage information and exit
    -v, --version           print the version information and exit
    -p, --propfile <argument>
                            the jmeter property file to use
    -q, --addprop <argument>
                            additional JMeter property file(s)
    -t, --testfile <argument>
                            the jmeter test(.jmx) file to run
    -l, --logfile <argument>
                            the file to log samples to
    -j, --jmeterlogfile <argument>
                            jmeter run log file (jmeter.log)
    -n, --nongui            run JMeter in nongui mode
    ...
    -J, --jmeterproperty <argument>=<value>
                            Define additional JMeter properties
    -G, --globalproperty <argument>=<value>
                            Define Global properties (sent to servers)
                            e.g. -Gport=123 or -Gglobal.properties
    -D, --systemproperty <argument>=<value>
                            Define additional system properties
    -S, --systemPropertyFile <argument>
                            additional system property file(s)
This is a snippet (non-exhaustive list) of what you might see if you did the same. We will explore some, but not all, of these options as we go through the book.
Since JMeter is 100 percent pure Java, it comes packed with enough functionality to script most test cases. However, there might come a time when you need to pull in functionality provided by a third-party library, or one developed by yourself, that is not present by default. As such, JMeter provides two directories where such third-party libraries can be placed to be automatically discovered on its classpath:
- JMETER_HOME/lib: This is used for utility JARs.
- JMETER_HOME/lib/ext: This is used for JMeter components and add-ons.
All custom-developed JMeter components should be placed in the lib/ext folder, while third-party libraries (JAR files) should reside in the lib folder.
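For example, to make a database driver available to JMeter's JDBC samplers, you might copy its JAR into the lib folder. A minimal sketch, assuming JMETER_HOME is set as a shell variable pointing at your install, with an illustrative driver file name:

cp mysql-connector-java-5.1.41.jar $JMETER_HOME/lib/    # utility JARs go in lib, not lib/ext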
If you are working from behind a corporate firewall, you may need to configure JMeter to work with it, providing it with the proxy server host and port number. To do so, supply additional command-line parameters to JMeter when starting it up. Some of them are as follows:
- -H: This specifies the proxy server hostname or IP address
- -P: This specifies the proxy server port
- -u: This specifies the username if the proxy server is secured
- -a: This specifies the password if the proxy server is secured
Consider the following example:
./jmeter.sh -H proxy.server -P 7567 -u username -a password
On Windows, run the jmeter.bat file instead.
Note
Do not confuse the proxy server mentioned here with JMeter's built-in HTTP(S) Test Script Recorder, which is used to record HTTP or HTTPS browser sessions. We will be exploring this in the next chapter when we record our first test scenario.
The screen is displayed as follows:

JMeter GUI
As described earlier, JMeter can run in non-GUI mode. This is needed when you run tests remotely, or when you want to optimize the testing system by avoiding the extra overhead of running the GUI. Normally, you will use the default GUI mode when preparing your test scripts and running light loads, and non-GUI mode for higher loads.
To do so, use the following command-line options:
- -n: This command-line option indicates running in non-GUI mode
- -t: This specifies the name of the JMX test file
- -l: This specifies the name of the JTL file to log results to
- -j: This specifies the name of the JMeter run log file
- -r: This runs the test on the servers specified by the remote_hosts JMeter property
- -R: This runs the test on the specified remote servers (for example, -Rserver1,server2)
In addition, you can use the -H and -P options to specify the proxy server host and port, as we saw earlier:
./jmeter.sh -n -t test_plan_01.jmx -l log.jtl
Server mode is used when performing distributed testing, that is, using multiple testing servers to generate additional load on your system. JMeter will be kicked off in server mode on each remote server (slave), and then a GUI on the master server will be used to control the slave nodes. We will discuss this in detail when we dive into distributed testing in Chapter 6, Distributed Testing:
./jmeter-server.sh
JMeter provides two ways to override Java, JMeter, and logging properties. One way is to directly edit jmeter.properties, which resides in the JMETER_HOME/bin folder. I suggest you take a peek into this file to see the vast number of properties you can override. This is one of the things that makes JMeter so powerful and flexible. On most occasions, you will not need to override the defaults, as they have sensible values.
The other way to override these values is directly from the command line when starting JMeter.
The options available to you include the following ones:
- Defining a Java system property value: -D<property name>=<value>
- Defining a local JMeter property: -J<property name>=<value>
- Defining a JMeter property to be sent to all remote servers: -G<property name>=<value>
- Defining a file containing JMeter properties to be sent to all remote servers: -G<property file>
- Overriding a logging setting, setting a category to a given priority level: -L<category>=<priority>
Consider the following example:
./jmeter.sh -Duser.dir=/home/bobbyflare/jmeter_stuff -Jremote_hosts=127.0.0.1 -Ljmeter.engine=DEBUG
JMeter keeps track of all errors that occur during a test in a log file, named jmeter.log by default, which resides in the folder from which JMeter was launched. The name of this log file, like most things, can be configured in jmeter.properties or via the command-line parameter -j <name_of_log_file>. When running the GUI, the error count is indicated in the top-right corner, to the left of the number of running threads, as shown in the following screenshot. Clicking on it reveals the log file contents directly at the bottom of the GUI. The log file provides insight into exactly what is going on in JMeter while your tests are being executed and helps determine the cause of any errors that occur:

JMeter GUI error count/indicator
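For example, to give each non-GUI run its own samples log and run log, you might launch JMeter as follows; the file names are illustrative:

./jmeter.sh -n -t test_plan_01.jmx -l results_01.jtl -j run_01.log    # -l for samples, -j for the run log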
Should you need to customize JMeter's default values, you can do so by editing the jmeter.properties file in the JMETER_HOME/bin folder, or by making a copy of that file, renaming it (for example, my-jmeter.properties), and specifying it as a command-line option when starting JMeter.
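A copied and renamed properties file is supplied at startup with the -p option we saw earlier; the file name here is illustrative:

./jmeter.sh -p my-jmeter.properties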
Some options you can configure include the following:
- xml.parser: This specifies a custom XML parser implementation. The default value is org.apache.xerces.parsers.SAXParser; it is not mandatory. If you find the provided SAX parser buggy for some of your use cases, this gives you the option to override it with another implementation, for example, javax.xml.parsers.SAXParser, provided the right JARs exist on your JMeter classpath.
- remote_hosts: This is a comma-delimited list of remote JMeter hosts (or host:port if required). When running JMeter in a distributed environment, list the machines where you have JMeter remote servers running; this will allow you to control those servers from this machine's GUI. It applies only to distributed testing and is not mandatory. More on this will be discussed in Chapter 6, Distributed Testing.
- not_in_menu: This is a list of components you do not want to see in JMeter's menus. Since JMeter has quite a number of components, you may wish to restrict it to show only the components you are interested in or use regularly. You may list their class names or class labels (the strings that appear in JMeter's UI) here, and they will no longer appear in the menus. The defaults are fine, and in our experience we have never had to customize this, but we list it here so that you are aware of its existence; it is not mandatory.
- user.properties: This specifies the name of a file containing additional JMeter properties. These are added after the initial property file, but before the -q and -J options are processed. This is not mandatory. User properties can be used to provide additional classpath configuration, such as plugin paths via the search_paths attribute and utility JAR paths via the user.classpath attribute. In addition, these files can be used to fine-tune the log verbosity of JMeter components.
- search_paths: This specifies a list of paths (separated by ;) that JMeter will search for add-on classes, for example, additional samplers. This is in addition to any JARs found in the lib/ext folder and is not mandatory. It comes in handy, for example, when extending JMeter with additional plugins that you don't intend to install in the JMETER_HOME/lib/ext folder; you can use it to specify an alternate location on the machine to pick up the plugins. Refer to Chapter 4, Managing Sessions.
- user.classpath: In addition to JARs in the lib folder, use this attribute to provide additional paths that JMeter will search for utility classes. It is not mandatory.
- system.properties: This specifies the name of a file containing additional system properties for JMeter to use. These are added before the -S and -D options are processed. This is not mandatory; it typically gives you the ability to fine-tune various SSL settings, key stores, and certificates.
- ssl.provider: This specifies a custom SSL implementation if you don't want to use the built-in Java implementation; it is not mandatory. If, for some reason, the default built-in Java implementation of SSL, which is quite robust, doesn't meet your particular usage scenario, this allows you to provide a custom one. In our experience, the default has always been sufficient.
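As a concrete illustration, a user.properties file that picks up plugins and utility JARs from non-standard locations might contain lines like the following; the paths are illustrative:

search_paths=/opt/jmeter-plugins/lib/ext
user.classpath=/opt/jmeter-utils/lib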
The command-line options and properties files are processed in the following order:
- -p propfile: This specifies a custom JMeter properties file to be used. If present, it is loaded and processed. It is optional.
- jmeter.properties: This is the default configuration file for JMeter, already populated with sensible default values. It is loaded and processed after any user-provided custom properties file.
- -j logfile: This is optional; it specifies the JMeter run log file. It is processed after the jmeter.properties file that we discussed earlier.
- Logging is initialized.
- user.properties is loaded (if any).
- system.properties is loaded (if any).
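Since user.properties is loaded before the -J options are processed, a property supplied on the command line overrides the same property set in the file. A minimal sketch, with an illustrative host value in each place:

echo "remote_hosts=192.168.0.10" >> user.properties
./jmeter.sh -Jremote_hosts=192.168.0.20    # the -J value wins, per the processing order above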
In this chapter, we covered the fundamentals of performance testing and discussed the key concepts and activities surrounding it. In addition, we installed JMeter, got it fully running on a machine, and explored some of its configuration options. We also explored some of the qualities that make JMeter a great tool of choice for your next performance testing assignment: it is free, mature, and open source; it is easily extensible and customizable; it is completely portable across operating systems; it has a great plugin ecosystem and a large user community; and it offers a built-in GUI, test recording, and test-scenario validation, among other features. In comparison with other performance testing tools, JMeter holds its own.
In the next chapter, we will record our first test scenario and dive deeper into JMeter.