About this book

A bad response time on a website can drive away visitors and prospective customers. To measure what a website can handle, there should be a way to simulate and analyze different load scenarios—this is where a load-testing tool like JMeter comes in. JMeter is a powerful desktop performance tool from the Apache Jakarta project, written in Java, for load-testing web pages, web applications, and other static and dynamic resources including databases, files, Servlets, Perl scripts, Java Objects, FTP Servers, and more.

JMeter works by acting as the "client side" of an application and measuring response time. As such, it is one half of the testing arsenal; the other half is a tool that watches server-side metrics such as thread counts, CPU load, resource usage, and memory usage. Although it cannot behave like a browser and execute rich client-side logic such as JavaScript or applets, JMeter certainly measures the performance of the target server from the client's point of view. JMeter is able to capture test results that help you make informed decisions and benchmark your application.

This book introduces you to JMeter (version 2.3) and test automation, providing a step-by-step guide to testing with JMeter. You will learn how to measure the performance of a website using JMeter.

While it discusses test automation generally, the bulk of this book gives specific, vivid, and easy-to-understand walkthroughs of JMeter's testing tools showing what they can do, and when and how to use them.

Publication date:
June 2008


Chapter 1. Automated Testing

Really, what is test automation? Is it something like pressing a button to put testing on auto-pilot? To an extent, yes, you can have that, and more. According to Wikipedia (http://en.wikipedia.org/wiki/Test_automation):

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control, and test reporting functions.

Simply put, it is the process of using software to automate the manual testing process currently in use. Note that this definition goes further than simply writing up tests in some word-processor software.

This chapter will give you a quick overview of what test automation is all about and its significance in the testing process, and ultimately, the software process. It aims to help you decide whether test automation is the way to go for testing applications. It will also describe the cost-effectiveness of test automation in comparison with manual testing or no testing at all.

As you begin to ponder if test automation is what you need, some questions may be lingering in your mind:

  • Why do I need to automate software testing?

  • How do I decide whether to automate or not?

  • How much would test automation add to the total cost of testing?

This chapter will answer your questions.


Why Automate Testing?

Some software project managers hold strongly to the myth that testing costs too much, takes too much time, does not help them build the product, and can create hostility between the tester(s) and the development team. You will find these are the very people who would spend the least on testing.

On the other hand, there are smarter software managers who understand that testing is an investment in quality. Hence, they tend to spend more on testing. Efficient test project management produces a positive return, fits within the overall project schedule, has quantifiable findings, and is seen as a definite contributor to the project.

However, as software development overruns its schedule, as it commonly does, time is at a premium. As you may know or have experienced, 'manual' testing, especially regression testing, can be exhausting. A time-consuming and tedious process, it is inefficient and conflicts with today's shorter application development cycles. As a result, applications are not tested as thoroughly as they should be, allowing critical bugs to slip through undetected. What's more, manual tests are prone to human error and inconsistencies that can distort test results.

Can we do without automation? Yes, of course, if time is abundant and your client (or boss) is NOT on your tail for the application's next release. Most of the time, however, this is not the case. In software testing, time is a determining factor, and the effective use of automation CAN help improve testing speed.

On the other hand, despite the appeal of test automation, we need to bear in mind that it may be suitable for only parts of the software testing process. Automated testing IS NOT a total replacement for manual testing. Certain aspects of testing an application rely more on the human tester than on test automation. The ultimate testers are still the humans themselves; where applicable, test automation only complements manual testing. Test automation may not test any better than a human tester, but, implemented wisely, it can certainly help the tester test faster. Since parts of the testing can be automated, the tester can spend more quality time on the more important and critical aspects of testing. Ultimately, the tester can test better and more effectively.
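
As a minimal illustration of the kind of check a machine can repeat tirelessly while the human tester focuses on judgment-heavy work, consider this sketch of an automated regression suite. The function under test and its expected values are hypothetical, invented for the example:

```python
# Minimal sketch of a repeatable automated regression check.
# apply_discount and its expected outputs are hypothetical examples.

def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# A regression suite is essentially a table of known input/expected-output
# pairs; software can re-run it on every build without fatigue or error.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

def run_regression():
    """Re-run every recorded case; return the list of failures."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

if __name__ == "__main__":
    # An empty list means no regression was detected.
    print(run_regression())
```

Each new build simply re-runs the same table of cases; the tester only intervenes when the failure list is non-empty.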


To Automate or Not to Automate—Some Hints

The previous paragraph cautions against using automation to replace manual testing, putting you, the reader (or the tester), in an awkward predicament. However, let us consider an average-case scenario: you are pressed against a tight budget and schedule, and you are sure that fully regression testing the application by hand would leave you and your team physically and mentally exhausted. Would automation help you test, if not any better, at least faster? Some hints may help you decide:

  • Pick a good time to start automating:

Automation is best used after the tester has grasped the fundamental testing skills and concepts through manual testing experience. Another good time is when tests are going to be repeated or simulated, as is typical in regression testing and performance testing, respectively. That said, not every testing approach justifies the use of automation.

Rex Black, in his article Investing in Software Testing: Manual or Automated?, concludes that the decision to automate testing comes from the need to repeat tests numerous times or to reduce the cycle time for test execution, while higher per-test costs and the need for human skill, judgment, and interaction incline towards a decision to test manually.

  • Not all testing approaches are suitable to automate:

Suitable: Acceptance, Compatibility, Load, Volume or Capacity, Performance, Reliability, Structural, Regression, and Exception or Negative testing.

    Type of Testing: Description (adapted from http://www.istqb.org)

    Acceptance testing: Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether a system satisfies or does not satisfy the acceptance criteria, and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

    Compatibility testing: The process of testing to determine the interoperability of a software product.

    Load testing: A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.

    Volume/Capacity testing: Testing where the system is subjected to large volumes of data.

    Performance testing: The process of testing to determine the performance of a software product.

    Reliability testing: The process of testing to determine the reliability of a software product.

    Structural testing: Testing based on an analysis of the internal structure of the component or system (also known as white-box testing).

    Regression testing: Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made.

    Exception testing: Testing the behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or due to an internal failure.

    Negative testing: Tests aimed at showing that a component or system does not work.

    Not suitable: Installation and setup, Configuration and Compatibility, Documentation and help, Error handling and Recovery, Localization, Usability, and any other approach that relies heavily on human judgment.

    Type of Testing: Description (adapted from http://www.istqb.org)

    Installation and setup testing: Testing that focuses on what customers will need to do to install and set up the new software successfully.

    Configuration testing: The process of testing the installability or configurability of a software product.

    Compatibility testing: Testing to evaluate the application's compatibility with the computing environment.

    Documentation testing: Testing the quality of the documentation, e.g. a user guide or installation guide.

    Error handling testing: Testing to determine the ability of an application system to properly process incorrect transactions.

    Recovery testing: Testing how well the software is able to recover from crashes, hardware failures, and other similar problems.

    Localization testing: Testing that focuses on the internationalization and localization aspects of adapting a globalized application to a particular culture/locale.

    Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to users under specified conditions.

    A point worth noting is that some tests may justify the use of both manual and automated testing. These include functionality testing, user interface testing, date and time handling, and use cases (user scenarios).

  • Make automation only a supplement to a testing project:

    In many cases, when a test requires human judgment, automation can merely support the tester; it cannot replace them. For example, in usability testing of an application whose user interface is designed for visually impaired users, no automated test can judge better than a human tester which page element sounds, sizes, or colors would benefit the application's target users. Meanwhile, other aspects of the same application, load testing or performance testing for example, can be automated.

  • Do some comparison of Automated vs. Manual Testing:

    Manual: Running (and re-running) tests manually can be very time-consuming.
    Automated: Cost-effective if you have to repeat tests numerous times.

    Manual: All required tests must be rerun each time there is a new build, which eventually becomes mundane and tiresome and wears out the tester.
    Automated: Lets you run tests in a timely manner against code that evolves frequently; well suited to testing code within an Agile software development framework.

    Manual: Manual tests have to be run sequentially.
    Automated: Automated tests can be run simultaneously on different machines.

    Manual: Time-consuming, tedious, and highly error-prone when testing a large test matrix.
    Automated: Aids in testing a large test matrix.

    Manual: If a test case runs only a couple of times per coding milestone, it should most likely remain a manual test; that costs less than automating it.
    Automated: It costs more to set up and configure a test automation framework and its test cases.

    Manual: Better suited to testing UIs.
    Automated: Cannot verify visual information; more suited to non-UI tests.

    Manual: Allows the tester to perform more ad hoc (random) testing, which increases the odds of finding real user bugs.
    Automated: Automation test tools are software themselves, and there is no 'perfect' software; bugs may surface in these tools too.

    Manual: The tester can do testing without automation.
    Automated: Only suitable for portions of the testing process.
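
Load testing, identified earlier as well suited to automation, is essentially this kind of parallel repetition: many "virtual users" issue the same request at once while response times are recorded. The sketch below illustrates the idea only; the simulated 0.01-second request is a hypothetical stand-in for a real HTTP call, which a tool like JMeter issues and measures at scale:

```python
import threading
import time
from statistics import mean

# Sketch of what a load-testing tool automates: N virtual users issue the
# same request in parallel and response times are collected for analysis.

def serve_request():
    """Stand-in for a real server call; the delay is hypothetical."""
    time.sleep(0.01)
    return "OK"

def virtual_user(samples, lock, requests_per_user=5):
    """One simulated user: time each request and record the elapsed time."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        serve_request()
        elapsed = time.perf_counter() - start
        with lock:
            samples.append(elapsed)

def run_load_test(users=10):
    """Run all virtual users concurrently and return the collected samples."""
    samples, lock = [], threading.Lock()
    threads = [threading.Thread(target=virtual_user, args=(samples, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return samples

if __name__ == "__main__":
    samples = run_load_test()
    print(len(samples), "samples; mean response time:", round(mean(samples), 4))
```

A real tool adds what this sketch omits: configurable ramp-up, assertions on responses, and aggregated reporting across many machines.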


How Much Does it Cost?

The total cost must account for the numerous resources a testing project consumes. These resources generally include:

  • Person hours to test—time to set up and perform automation

  • Testing environment—testing infrastructure or environment

  • Testing software—testing technology/tools

As our main focus is on the cost of testing software, note that it can range from as much as six or seven figures per license to as little as $0 (free of charge, normally in the form of freeware or open-source tools). However, as testing software relies on the tester and the environment in which the tests are executed, the total cost amounts to more than the price of the tool alone.

Rex Black's article provides a hypothetical scenario summarizing the cost of testing under three options: no testing, manual testing, and automated testing. It rests on an undisputed fact that any software project manager is aware of: bugs found by customers are much more expensive to fix than the same bugs found during development. Using a hypothetical example, the table below indicates that automation gives the client a higher return on investment (ROI) than manual testing, while no testing at all brings no benefit in the long haul. I have taken the liberty of extending Rex's table to include the ROI when using open-source testing software such as JMeter, as you will find in the last column.

Testing Investment Options: ROI Analysis

(Adapted from : http://www.compaid.com/caiinternet/ezine/cost_of_quality_1.pdf)


[Table figures omitted. Columns: No Formal Testing; Manual Testing; Automated Testing (from Vendor); Automated Testing (Open Source, free of charge). Rows: Total Investment; Must-Fix Bugs Found; Fix Cost (Internal Failure); Customer Support; Fix Cost (External Failure); Cost of Quality; Total COQ; Return on Investment.]

Consequently, an effective combination of automated and manual testing may, in the long run, result in cost-effective and efficient testing, as it helps shorten the time to a return on investment (ROI) for a software project.
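
To make the ROI arithmetic concrete, here is a small sketch using purely hypothetical figures (not taken from Rex Black's article): cost of quality is the testing investment plus the cost of internal and external failures, and ROI is the saving relative to doing nothing, per dollar invested.

```python
# Illustrative only: all money figures below are hypothetical, not taken
# from Rex Black's article. The arithmetic shows how a cost-of-quality
# comparison yields a return on investment (ROI) for testing.

def cost_of_quality(investment, internal_fix_cost, external_fix_cost):
    """Testing investment + cost of bugs fixed before release (internal
    failure) + cost of bugs fixed after release (external failure)."""
    return investment + internal_fix_cost + external_fix_cost

# Scenario A: no formal testing -- every must-fix bug escapes to customers,
# where it is most expensive to fix.
coq_none = cost_of_quality(investment=0, internal_fix_cost=0,
                           external_fix_cost=1_000_000)

# Scenario B: automated testing -- most bugs are caught (and are cheaper
# to fix) before release, at the price of an up-front investment.
invest_b = 100_000
coq_auto = cost_of_quality(investment=invest_b, internal_fix_cost=150_000,
                           external_fix_cost=100_000)

# ROI of scenario B relative to doing nothing: saving per dollar invested.
roi = (coq_none - coq_auto) / invest_b
print(coq_none, coq_auto, roi)  # 1000000 350000 6.5
```

With these invented numbers, every dollar invested in automated testing saves $6.50 in failure costs; the point is the structure of the calculation, not the specific figures.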



How effective test automation is for a testing project depends heavily on whether automation really is what the testing team needs. Given a testing team that is comfortable with the idea of automating their tests (or ideally, part of their tests), automation can work wonders. Used effectively at the right points in a testing project, it:

  • Saves time

  • Saves money

  • Saves pride (normally hurt when you simply could not honor the deadlines)

The next chapter will begin your experience with a freely distributed application that is one of the most widely used open-source testing applications on earth—JMeter. This application has been stable for many years, and its design is extensible, so an advanced user is free to use its source code to build his or her own customized version. Since it is an open-source project, anyone can contribute to its development. You, too, can contribute.

About the Author

  • Emily H. Halili

    Since graduating in Computer Science from California State University in 1998, Emily H. Halili has taken numerous roles in the IT/software industry, namely as Software Engineer, Network Engineer, Lecturer, and Trainer. Currently a QA Engineer at CEO Consultancy-Malaysia with a great passion for testing, she has two years of experience in software testing and managing QA activities. She is an experienced manual tester and has practical knowledge of various open-source automation tools and frameworks, including JMeter, Selenium, JProfiler, Badboy, Sahi, Watij, and many more.

