Baysoft Training Inc. is an emerging startup focused on redefining how software can help get more people trained in various fields of the IT industry. The company pursues this goal by providing a suite of products, including online courses, onsite training, and offsite training. One of its flagship products, TrainBot, a web-based application, is focused solely on registering individuals for courses of interest that will aid them in attaining their career goals. Once registered, a client can then go on to take a series of interactive online courses.
Until recently, traffic on TrainBot had been light, as it had only been opened to a handful of clients while still in closed beta. Everything was fully operational and the application as a whole was very responsive. Just a few weeks ago, TrainBot was opened to the public, and all was still good and dandy. To celebrate the launch and promote its online training courses, Baysoft Training Inc. recently offered 75 percent off all training courses. That promotional offer, however, caused a sudden influx of traffic to TrainBot, far beyond what the company had anticipated. Web traffic shot up by 300 percent and suddenly things took a turn for the worse. Network resources weren't holding up well, server CPUs and memory were at 90-95 percent utilization, and database servers weren't far behind due to high I/O and contention. As a result, most web requests began to see slower response times, making TrainBot totally unresponsive for most of its first-time clients. It didn't take long after that for the servers to crash and for the support lines to get flooded.
It was a long night at the Baysoft Training Inc. corporate office. How did this happen? Could this have been avoided? Why were the application and system not able to handle the load? Why weren't adequate performance and stress tests conducted on the system and application? Was it an application problem, a system resource issue, or a combination of both? These were the questions management demanded answers to from the group of engineers gathered in the war room, which comprised software developers, network and system engineers, quality assurance (QA) testers, and database administrators. There sure was a lot of finger pointing and blame to go around the room. After a little brainstorming, it didn't take the group long to decide what needed to be done: the application and its system resources would need to undergo extensive and rigorous testing. This would include all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Such testing would help all the parties involved discover exactly where the bottlenecks are and address them accordingly.
Performance testing is a type of testing intended to determine the responsiveness, reliability, throughput, interoperability, and scalability of a system and/or application under a given workload. It can also be defined as the process of determining the speed or effectiveness of a computer, network, software application, or device. Testing can be conducted on software applications, system resources, targeted application components, databases, and a whole lot more. It normally involves an automated test suite, as this allows for easy, repeatable simulation of a variety of normal, peak, and exceptional load conditions. Such forms of testing help verify whether a system or application meets the specifications claimed by its vendor. The process can compare applications in terms of parameters such as speed, data transfer rate, throughput, bandwidth, efficiency, or reliability. Performance testing can also serve as a diagnostic aid in locating bottlenecks and single points of failure. It is often conducted in a controlled environment and in conjunction with stress testing, which is the process of determining the ability of a system or application to maintain a certain level of effectiveness under unfavorable conditions.
Why bother? Using Baysoft's case study mentioned earlier, it should be obvious why companies go to great lengths to conduct performance testing. The disaster could have been minimized, if not entirely avoided, if effective performance testing had been conducted on TrainBot prior to opening it up to the masses. As we proceed through this chapter, we will continue to explore the many benefits of effective performance testing.
At a very high level, performance testing is almost always conducted to address one or more risks related to expense, opportunity costs, continuity, and/or corporate reputation. Conducting such tests helps give insight into software application release readiness, the adequacy of network and system resources, infrastructure stability, and application scalability, just to name a few. Gathering estimated performance characteristics of the application and system resources prior to launch helps address issues early and provides valuable feedback to stakeholders, helping them make key strategic decisions.
Performance testing covers a whole lot of ground including areas such as:
Assessing application and system production readiness
Evaluating against performance criteria
Comparing performance characteristics of multiple systems or system configurations
Identifying source of performance bottlenecks
Aiding with performance and system tuning
Helping to identify system throughput levels
Testing tool
Most of these areas are intertwined with each other, each aspect contributing to attaining the overall objectives of stakeholders. However, before jumping right in, let's take a moment to understand the core activities in conducting performance tests:
Identify the test environment: Becoming familiar with the physical test and production environments is crucial to a successful test run. Knowing things such as the hardware, software, and network configurations of the environment helps derive an effective test plan and identify testing challenges from the outset. In most cases, these will be revisited and/or revised during the testing cycle.
Identify acceptance criteria: What is the acceptable performance of the various modules of the application under load? Specifically, identify the response time, throughput, and resource utilization goals and constraints. How long should the end user wait for a particular page to render? How long should the user wait to perform an operation? Response time is usually a user concern, throughput a business concern, and resource utilization a system concern. As such, response time, throughput, and resource utilization are key aspects of performance testing. Acceptance criteria are usually driven by stakeholders, and it is important to involve them continuously as testing progresses, since the criteria may need to be revised.
Plan and design tests: Know the usage pattern of the application (if any), and come up with realistic usage scenarios, including variability among the various scenarios. For example, if the application in question has a user registration module, how many users typically register for an account in a day? Do those registrations happen all at once, or are they spaced out? How many people frequent the landing page of the application within an hour? Questions such as these help put things in perspective and inform variations in the test plan. Having said that, there may be times when the application under test is new and no usage pattern has formed yet. At such times, stakeholders should be consulted to understand their business processes and come up with as close to a realistic test plan as possible.
Prepare the test environment: Configure the test environment, tools, and resources necessary to conduct the planned test scenarios. It is important to ensure that the test environment is instrumented for resource monitoring to help analyze results more efficiently. Depending on the company, a separate team might be responsible for setting up the test tools, while another may be responsible for configuring other aspects such as resource monitoring. In other organizations, a single team is responsible for setting up all aspects.
Record the test plan: Using a testing tool, record the planned test scenarios. There are numerous testing tools available, both free and commercial, that do the job quite well, each with its own pros and cons.
Such tools include HP Load Runner, NeoLoad, LoadUI, Gatling, WebLOAD, WAPT, Loadster, LoadImpact, Rational Performance Tester, Testing Anywhere, OpenSTA, Loadstorm, and so on. Some of these are commercial, while others are not as mature, portable, or extendable as JMeter. HP Load Runner, for example, is a bit pricey and limits the number of simulated threads to 250 unless additional licenses are purchased, though it does offer a much nicer graphical interface and monitoring capability. Gatling is the new kid on the block; it is free and looks rather promising. It is still in its infancy and aims to address some of the shortcomings of JMeter, including an easier testing DSL (domain-specific language) in place of JMeter's verbose XML and nicer, more meaningful HTML reports, among others. Having said that, it still has only a tiny user base compared with JMeter, and not everyone may be comfortable building test plans in Scala, its language of choice, although programmers may find it more appealing.
In this book, our tool of choice will be Apache JMeter to perform this step. That shouldn't be a surprise considering the title of the book.
Run the tests: Once recorded, execute the test plans under light load and verify the correctness of the test scripts and output results. In cases where test or input data is fed into the scripts to simulate more realistic data (more on that in later chapters), also validate the test data. Another aspect to pay careful attention to during test plan execution is the server logs. This can be achieved through the resource monitoring agents set up to monitor the servers. It is paramount to watch for warnings and errors. A high rate of errors, for example, could indicate that something is wrong with the test scripts, the application under test, the system resources, or a combination of these.
Analyze results, report, and retest: Examine the results of each successive run and identify areas of bottleneck that need addressing. These could be system, database, or application related. System-related bottlenecks may lead to infrastructure changes, such as increasing the memory available to the application, reducing CPU consumption, increasing or decreasing thread pool sizes, revising database pool sizes, and reconfiguring network settings. Database-related bottlenecks may lead to analyzing database I/O operations, profiling the top queries issued by the application under test, introducing additional indexes, running statistics gathering, changing table page sizes and locks, and a lot more. Finally, application-related bottlenecks might lead to activities such as refactoring application components and reducing application memory consumption and database round trips. Once the identified bottlenecks are addressed, the test(s) should then be rerun and compared with previous runs. To help better track which change or group of changes resolved a particular bottleneck, it is vital that changes are applied in an orderly fashion, preferably one at a time. In other words, once a change is applied, the same test plan is executed and the results compared with the previous run to see whether the change improved or worsened the results. This process repeats until the performance goals of the project have been met.

Performance testing core activities
Performance testing is usually a collaborative effort between all parties involved. Parties include business stakeholders, enterprise architects, developers, testers, DBAs, system admins, and network admins. Such collaboration is necessary to effectively gather accurate and valuable results when conducting testing. Monitoring network utilization, database I/O and waits, top queries, and invocation counts, for example, helps the team find bottlenecks and areas that need further attention in ongoing tuning efforts.
There is a strong relationship between performance testing and tuning, in the sense that one often leads to the other. Often, end-to-end testing unveils system or application bottlenecks that are regarded as incompatible with project target goals. Once those bottlenecks are discovered, the next step for most teams is a series of tuning efforts to make the application perform adequately.
Such efforts normally include but are not limited to:
Making configuration changes to system resources
Optimizing database queries
Reducing round trips in application calls, sometimes leading to redesigning and re-architecting problematic modules
Scaling out application and database server capacity
Reducing application resource footprint
Optimizing and refactoring code, including eliminating redundancy and reducing execution time
Tuning efforts may also commence if the application has reached acceptable performance but the team wants to reduce the amount of system resources being used, decrease the amount of hardware needed, or further increase system performance.
After each change (or series of changes), the test is re-executed to see whether performance has improved or declined as a result of the changes. The process continues until the performance results reach acceptable goals. The outcome of these test-tuning cycles normally produces a baseline.
Baselining is the process of capturing performance metric data for the sole purpose of evaluating the efficacy of successive changes to the system or application. It is important that all characteristics and configurations, except those specifically being varied for comparison, remain the same in order to make effective comparisons as to which change (or series of changes) is driving results toward the targeted goal. Armed with such baseline results, subsequent changes can be made to the system configuration or application and the testing results compared to see whether such changes were relevant or not. Some considerations when generating baselines include:
They are application specific
They can be created for system, application, or modules
They are metrics/results
They should not be over generalized
They evolve and may need to be redefined from time to time
They act as a shared frame of reference
They are reusable
They help identify changes in performance
Load testing is the process of putting demand on a system and measuring its response; that is, determining how much volume the system can handle. Stress testing is the process of subjecting the system to unusually high loads, far beyond its normal usage pattern, to determine its responsiveness. These are different from performance testing, whose sole purpose is to determine the response and effectiveness of a system; that is, how fast the system is. Since load ultimately affects how a system responds, performance testing is almost always done in conjunction with stress testing.
In the previous section, we covered the fundamentals of conducting a performance test. One of the areas performance testing covers is testing tools. Which testing tool do you use to put the system and application under load? There are numerous testing tools available to perform this operation, from free to commercial solutions. However, our focus in this book will be on Apache JMeter, a free, open source, cross-platform desktop application from The Apache Software Foundation. JMeter has been around since 1998, according to the historic change logs on its official site, making it a mature, robust, and reliable testing tool. Cost may also have played a role in its wide adoption. Small companies usually don't want to foot the bill for commercial testing tools, which often still place restrictions on, for example, how many concurrent users one can spin off. My first encounter with JMeter was exactly the result of this. I worked in a small shop that had paid for a commercial testing tool, but during the course of testing we overran the licensing limits on how many concurrent users we could simulate for realistic test plans. Since JMeter was free, we explored it and were quite delighted with the offerings and the sheer number of features we got for free.
Here are some of its features:
Performance test of different server types including web (HTTP and HTTPS), SOAP, database, LDAP, JMS, mail, and native commands or shell scripts
Complete portability across various operating systems
Full multithreading framework allowing concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups
GUI (Graphical User Interface)
HTTP proxy recording server
Caching and offline analysis/replaying of test results
Highly extensible
Live view of results as testing is being conducted
JMeter allows multiple concurrent users to be simulated on the application, allowing you to work toward most of the target goals mentioned earlier in the chapter, such as attaining a baseline, identifying bottlenecks, and so on.
It will help answer questions such as:
Will the application still be responsive if 50 users are accessing it concurrently?
How reliable will it be under a load of 200 users?
How much system resources will be consumed under a load of 250 users?
What is throughput going to look like when 1000 users are active in the system?
What is the response time for the various components in the application under load?
JMeter, however, should not be confused with a browser (more on that in Chapter 2, Recording Your First Test and Chapter 3, Submitting Forms). It doesn't perform all the operations supported by browsers; in particular, JMeter does not execute the JavaScript found in HTML pages, nor does it render HTML pages the way a browser does. It does give you the ability to view request responses as HTML through one of its many listeners, but the timings are not included in any samples. Furthermore, there are limits on how many users can be spun up on a single machine. These vary depending on the machine specifications (for example, memory and processor speed) and the test scenarios being executed. In our experience, we have mostly been able to successfully spin up 250-450 users on a single machine with a 2.2 GHz processor and 8 GB of RAM.
Now let's get up and running with JMeter, beginning with its installation.
JMeter comes as a bundled archive, so it is super easy to get started with it. Those working in corporate environments behind a firewall or on machines with non-admin privileges appreciate this even more. To get started, grab the latest binary release by pointing your browser to http://jmeter.apache.org/download_jmeter.cgi. At the time of writing, the current release version is 2.9. The download site offers the bundle as both ZIP and TGZ archives. In this book, we will use the ZIP option, but feel free to download the TGZ if that's your preferred way of grabbing archives.
Once downloaded, extract the archive to a location of your choice. Throughout this book, the location you extracted the archive to will be referred to as JMETER_HOME.
Provided you have a JDK/JRE correctly installed and the JAVA_HOME environment variable set, you are all set and ready to run!

The JMETER_HOME folder structure
The following are some of the folders in the apache-jmeter-2.9 folder, as shown in the preceding screenshot:
bin: This folder contains the executable scripts used to run and perform other operations in JMeter
extras: This folder contains miscellaneous items, including samples illustrating the use of the Apache Ant build tool (http://ant.apache.org/) with JMeter, and Bean Shell scripting
lib: This folder contains the utility JAR files needed by JMeter (you may add additional JARs here to use from within JMeter — more on that later)
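As a quick sanity check after extraction, listing the JMETER_HOME folder should show these directories (the extraction path below is only an example; use wherever you unzipped the bundle):
cd /opt/tools/apache-jmeter-2.9    # example extraction location, not a required path
ls                                 # expect to see bin, extras, and lib, among other folders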
Follow these steps to install Java JDK:
Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html.
Download Java JDK (not JRE) compatible with the system you will be using to test.
Double-click on the executable and follow the on-screen instructions.
Note
On Windows systems, the default location for the JDK is under Program Files. While there is nothing wrong with that, the issue is that the folder name contains a space, which can sometimes be problematic when attempting to set the PATH and run programs such as JMeter from the command line. With that in mind, it is advisable to change the default location to something such as C:\tools\jdk.
The steps to set up the JAVA_HOME environment variable for Windows and Unix operating systems are explained next.
For illustrative purposes, we assume you have installed the Java JDK at C:\tools\jdk:
Go to Control Panel.
Click on System.
Click on Advanced System Settings.
Add an environment variable with the name JAVA_HOME and the value C:\tools\jdk.
Locate Path (under System variables; bottom half of the screen).
Click on Edit.
Append %JAVA_HOME%\bin to the end of the existing path value (if any).
For illustrative purposes, we assume you have installed the Java JDK at /opt/tools/jdk:
Open a terminal window.
Export JAVA_HOME=/opt/tools/jdk.
Export PATH=$PATH:$JAVA_HOME/bin.
It is advisable to set this in your shell profile settings, such as .bash_profile (for Bash users) or .zshrc (for zsh users), so you won't have to set it for each new terminal window you open.
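For example, assuming the same install location used above, the following lines could be added to your .bash_profile (adjust the path to match your actual JDK location):
export JAVA_HOME=/opt/tools/jdk     # path from the illustration above; change as needed
export PATH=$PATH:$JAVA_HOME/bin    # puts the java executable on your PATH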
Once installed, the bin folder under the JMETER_HOME folder contains all the executable scripts that can be run. Based on which operating system you installed JMeter on, you execute either the shell scripts (.sh) on Unix/Linux flavored operating systems or their batch (.bat) counterparts on Windows operating systems.
Tip
JMeter files are saved as XML files with a .jmx extension. We refer to them as test scripts or JMX files in this book.
These scripts include:
jmeter-n.sh: This script launches JMeter in non-GUI mode (it takes a JMX file as input; see the example following this list)
jmeter-n-r.sh: This script launches JMeter in non-GUI mode, remotely
jmeter-server.sh: This script starts JMeter in server mode (this will be started on the slave nodes when testing with multiple machines remotely; more on that in Chapter 6, Distributed Testing)
mirror-server.sh: This script runs the mirror server for JMeter
shutdown.sh: This script gracefully shuts down a running non-GUI instance
stoptest.sh: This script abruptly shuts down a running non-GUI instance
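For instance, a previously recorded test plan could be run in non-GUI mode with the first of these scripts (the test plan file name here is just a placeholder):
./jmeter-n.sh my_test_plan.jmx    # placeholder JMX file; runs the plan without launching the GUI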
To start JMeter, open a terminal shell, change to the JMETER_HOME\bin folder, and run the following:
On Unix/Linux:
./jmeter.sh
On Windows:
jmeter.bat
After a short moment, you should see the JMeter GUI (as shown in the following screenshot). Take a moment to explore the GUI. Hover over each icon to see a short description of what it does. The Apache JMeter team has done an excellent job with the GUI. Most icons are very similar to what you are used to, which helps ease the learning curve for new adopters. Some of the icons, for example, stop and shutdown, are disabled until a scenario/test is being conducted. In the next chapter, we will explore the GUI in more detail as we record our first test script.

The Apache JMeter GUI
Running JMeter with an incorrect option provides you with usage information. The options provided are as follows:
./jmeter.sh –
-h, --help                      print usage information and exit
-v, --version                   print the version information and exit
-p, --propfile <argument>       the jmeter property file to use
-q, --addprop <argument>        additional JMeter property file(s)
-t, --testfile <argument>       the jmeter test (.jmx) file to run
-l, --logfile <argument>        the file to log samples to
-j, --jmeterlogfile <argument>  jmeter run log file (jmeter.log)
-n, --nongui                    run JMeter in nongui mode
The previous code snippet (non-exhaustive list) is what you might see if you did the same. We will explore some, but not all of these options as we go through the book.
Since JMeter is 100 percent pure Java, it comes packed with functionality to get most test cases scripted. However, there might come a time when you need to pull in functionality provided by a third-party library, or one developed by yourself, that is not present by default. As such, JMeter provides two directories where such third-party libraries can be placed to be autodiscovered on its classpath.
JMETER_HOME\lib: This is used for utility JARs
JMETER_HOME\lib\ext: This is used for JMeter components and add-ons
All custom-developed JMeter components should be placed in the lib\ext folder, while third-party libraries (JAR files) should reside in the lib folder (see the sketch after this list).
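As a minimal sketch, assuming JMETER_HOME points at your extraction folder and using made-up JAR names purely for illustration:
cp my-custom-sampler.jar $JMETER_HOME/lib/ext    # hypothetical custom JMeter component/add-on
cp some-utility.jar $JMETER_HOME/lib             # hypothetical third-party utility JAR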
If you are working from behind a corporate firewall, you may need to configure JMeter to work with it by providing the proxy server host and port number. To do so, supply additional command-line parameters to JMeter when starting it up. Some of them are as follows:
-H: Specifies the proxy server hostname or IP address
-P: Specifies the proxy server port
-u: Specifies the proxy server username, if required
-a: Specifies the proxy server password, if required
For example:
./jmeter.sh -H proxy.server -P 7567 -u username -a password
On Windows, run jmeter.bat instead.
Do not confuse the proxy server mentioned here with JMeter's built-in HTTP Proxy Server, which is used for recording HTTP or HTTPS browser sessions. We will be exploring that in the next chapter when we record our first test scenario.
As described earlier, JMeter can run in non-GUI mode. This is needed when you are running remotely, or when you want to optimize your testing system by not taking on the extra overhead of running the GUI. Normally, you will run in the default GUI mode when recording your test scripts and running a light load, but run in non-GUI mode for higher loads.
To do so, use the following command-line options:
-n: This command-line option indicates to run in non-GUI mode
-t: This command-line option specifies the name of the JMX test file
-l: This command-line option specifies the name of the JTL file to log results to
-j: This command-line option specifies the name of the JMeter run log file
-r: This command-line option runs the test on the servers specified by the JMeter property remote_hosts
-R: This command-line option runs the test on the specified remote servers (for example, -Rserver1,server2)
In addition, you can also use the -H and -P options to specify the proxy server host and port, as we saw earlier:
./jmeter.sh -n -t test_plan_01.jmx -l log.jtl
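Combining these with the proxy options shown earlier, a non-GUI run from behind a corporate firewall might look like the following (the host, port, credentials, and file names are placeholders):
./jmeter.sh -n -t test_plan_01.jmx -l log.jtl -j run.log -H proxy.server -P 7567 -u username -a password    # all values are placeholders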
This is used when performing distributed testing; that is, using more testing servers to generate additional load on your system. JMeter will be kicked off in server mode on each remote server (slaves) and then a GUI on the master server is used to control the slave nodes. We will discuss this in detail when we dive into distributed testing in Chapter 6, Distributed Testing.
./jmeter-server.sh
JMeter provides two ways to override Java, JMeter, and logging properties. One way is to directly edit jmeter.properties, which resides in the JMETER_HOME\bin folder. We suggest you take a peek into this file to see the vast number of properties you can override. This is one of the things that makes JMeter so powerful and flexible. On most occasions you will not need to override the defaults, as they are sensible values.
The other way to override these values is directly from the command line when starting JMeter.
The options available to you include:
Define a Java system property value: -D<property name>=<value>
Define a local JMeter property: -J<property name>=<value>
Define a JMeter property to be sent to all remote servers: -G<property name>=<value>
Define a file containing JMeter properties to be sent to all remote servers: -G<property file>
Override a logging setting by setting a category to a given priority level: -L<category>=<priority>
For example:
./jmeter.sh -Duser.dir=/home/bobbyflare/jmeter_stuff \
 -Jremote_hosts=127.0.0.1 -Ljmeter.engine=DEBUG
JMeter keeps track of all errors that occur during a test in a logfile named jmeter.log by default. The file resides in the folder from which JMeter was launched. The name of this log file, like most things, can be configured in jmeter.properties or via a command-line parameter (-j <name_of_log_file>). When running the GUI, the error count is indicated in the top-right corner, to the left of the number of threads running for the test. Clicking on it reveals the log file contents directly at the bottom of the GUI. The log file provides insight into exactly what is going on in JMeter when your tests are being executed and helps determine the cause of errors when they occur.

The JMeter GUI error count/indicator
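For instance, to send the run log somewhere other than the default jmeter.log, the -j option mentioned above can be supplied at start-up (the file name is only an example):
./jmeter.sh -j /tmp/trainbot_run.log    # example log file name; defaults to jmeter.log in the launch folder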
Should you need to customize the default values for JMeter, you can do so by editing the jmeter.properties file in the JMETER_HOME\bin folder, or by making a copy of that file, renaming it to something different (for example, my-jmeter.properties), and specifying that as a command-line option when starting JMeter.
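A minimal sketch of that workflow, using the -p option from the usage listing earlier (the copied file name is illustrative):
cd $JMETER_HOME/bin                          # assuming JMETER_HOME points at your extraction folder
cp jmeter.properties my-jmeter.properties    # copy, then edit my-jmeter.properties as needed
./jmeter.sh -p my-jmeter.properties          # start JMeter with the custom properties file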
Some options you can configure include:
xml.parser: This specifies a custom XML parser implementation. The default value is org.apache.xerces.parsers.SAXParser. It is not mandatory. If you find the provided SAX parser buggy for some of your use cases, this gives you the option to override it with another implementation. You could, for example, use javax.xml.parsers.SAXParser, provided the right JARs exist on your instance of the JMeter classpath.
remote_hosts: This is a comma-delimited list of remote JMeter hosts (or host:port if required). When running JMeter in a distributed environment, list the machines where you have JMeter remote servers running. This will allow you to control those servers from this machine's GUI. It applies only while doing distributed testing and is not mandatory. More on this in Chapter 6, Distributed Testing.
not_in_menu: This is a list of components you do not want to see in JMeter's menus. Since JMeter has quite a number of components, you may wish to restrict it to show only the components you are interested in or those you use regularly. You may list their classname or their class label (the string that appears in JMeter's UI) here, and they will no longer appear in the menus. The defaults are fine, and in our experience we have never had to customize this, but we list it here so that you are aware of its existence. It is not mandatory.
user.properties: This specifies the name of a file containing additional JMeter properties. These are added after the initial property file, but before the -q and -J options are processed. It is not mandatory. User properties can be used to provide additional classpath configuration, such as plugin paths via the search_paths attribute and utility JAR paths via the user.classpath attribute (see the sketch following this list). In addition, this properties file can be used to fine-tune the log verbosity of JMeter components.
search_paths: This specifies a list of paths (separated by ;) that JMeter will search for JMeter add-on classes, for example, additional samplers. It is in addition to any JARs found in the lib\ext folder and is not mandatory. It comes in handy, for example, when extending JMeter with additional plugins that you don't intend to install in the JMETER_HOME\lib\ext folder. You could use this to specify an alternate location on the machine to pick up the plugins. See Chapter 5, Resource Monitoring.
user.classpath: In addition to JARs in the lib folder, use this attribute to provide additional paths JMeter will search for utility classes. It is not mandatory.
system.properties: This specifies the name of a file containing additional system properties for JMeter to use. These are added before the -S and -D options are processed. It is not mandatory. It typically provides you with the ability to fine-tune various SSL settings, key stores, and certificates.
ssl.provider: This specifies a custom SSL implementation, if you don't want to use the built-in Java implementation. It is not mandatory. If, for some reason, the default built-in Java implementation of SSL, which is quite robust, doesn't meet your particular usage scenario, this allows you to provide a custom one. In our experience, the default has always been sufficient.
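As an illustration of the user.properties entries described above, the following appends hypothetical plugin and utility paths (both paths are made up for this example and should be replaced with your own):
cd $JMETER_HOME/bin                # user.properties lives alongside jmeter.properties
cat >> user.properties <<'EOF'
search_paths=/home/bobbyflare/jmeter-plugins/custom-samplers.jar
user.classpath=/home/bobbyflare/libs
EOF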
The command-line options are processed in the following order of precedence:
-p profile is optional. If present, it is loaded and processed.
jmeter.properties is loaded and processed after any user-provided custom properties file.
-j logfile is optional. If present, it is loaded and processed after the jmeter.properties file.
Logging is initialized.
user.properties is loaded (if any).
system.properties is loaded (if any).
All other command-line options are processed.
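To make the ordering concrete, a start-up command such as the following (all file names are placeholders) would be resolved with the -p file first, then jmeter.properties, then the -j log file, then user.properties and system.properties, and finally the remaining options:
./jmeter.sh -p my-jmeter.properties -q extra.properties -j run.log -Jremote_hosts=127.0.0.1    # placeholder file names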
In this chapter, we covered the fundamentals of performance testing and learned the key concepts and activities surrounding performance testing in general. In addition, we installed JMeter, learned how to get it fully running on a machine, and explored some of the configurations available with it. We explored some of the qualities that make JMeter a great tool of choice for your next performance testing engagement: it is free, mature, open source, easily extensible and customizable, and completely portable across various operating systems, and it offers a great plugin ecosystem, a large user community, a built-in GUI, and the ability to record and validate test scenarios, among others. In comparison with other performance testing tools, JMeter holds its own. In the next chapter, we will record our first test scenario and dive deeper into JMeter.