Introduction to Test Automation
Quality is non-negotiable when building and delivering any product; end users will not settle for anything less. In this chapter, we will dive deep into how testing and test automation help achieve this level of quality in a software product. The first few pages introduce the reader to testing and test automation; later in the chapter, we dive deeper into the subject of test automation.
Quality is everyone’s responsibility in a team, and this chapter provides various practical pointers to help establish that collaboration. Additionally, we will see how automated tests add another layer of complexity to a project, and we will cover the best practices for accomplishing coherent test automation. We will also look at how development and test automation processes align to provide a reliable and bug-free product experience for customers.
Here are the key topics that you will learn by the end of this chapter:
- Getting familiar with software testing
- Introducing test automation
- Exploring the roles in quality engineering
- Familiarizing yourself with common terminologies and definitions
Getting familiar with software testing
Testing as an activity is critical to delivering a trustworthy product and forms a strong foundation for building reliable test automation. So, being a good tester makes you a more effective test automation engineer.
Software testing is an indispensable task in any software development project that is mainly done with the goals of validating the specified product requirements and finding bugs. These bugs can be functional or non-functional in nature. Functional bugs include deviations from the specified requirements or product specifications. Usually, non-functional issues are performance-based or security-based. The primary goals of testing are usually interwoven at multiple levels but can be broken down at a high level as follows:
- Functional: Checking the business functionalities of the software system:
- Compliance: Regulatory agencies, government agencies, and more
- Portability: Cross-browser testing, mobile support, and more
- Usability: Accessibility for users with disabilities, ease of use, and more
- Maintainability: Vendor support
- Non-functional: Checking the non-functional aspects of the software system, which are not covered by functional testing:
- Reliability: Operational up time, failovers, business continuity, and more
- Security: Vulnerability, penetration testing, and more
- Performance: Load, stress testing, and more
Now that we have seen a quick introduction to testing, let us understand why testing is so critical.
Knowing the importance of testing
Even though software’s quality is initially laid out by the design and architecture, testing is the core activity that gives stakeholders much-needed confidence in the product. By verifying the behavior of the product against documented test cases, the activity of testing helps uncover bugs and address other design issues promptly. By preventing and identifying bugs early in the software development life cycle, testing helps both the engineering and business teams to increase customer satisfaction and reduce overall operating costs. The valuable insights that the team derives from the repeated testing of the product can be further used to improve the efficiency of the software development process. Therefore, testing is imperative to achieving the goals of a successful product launch.
Tasks involved in testing
- Discussion with product and business about the acceptance criteria
- Creation of a test plan and strategy
- Review of the test plan with the engineering and product teams
- Cross-team collaboration for the test environment setup
- Creation of test cases and test data
- Execution of test cases
- Reporting of test results and associated quality metrics
Depending on the team size, capacity, and structure, some or all of these activities must be performed for a successful release of a software product.
In the following diagram, we get a comprehensive view of all the deliverables involved in software testing:
Figure 1.1 – Testing deliverables
As you can see in the preceding diagram, testing encompasses a wide variety of deliverables resulting from many cross-functional activities. Next, let’s look at some unique demands for testing in an Agile world.
Testing in an Agile world
Unlike a traditional waterfall model, in the Agile world, it is recommended that each user story or feature has the right balance of manual and automated tests before calling it done. This arguably slows down the team, but it also drastically reduces technical debt and improves code quality. You will notice that scaling the software becomes easier, and there is a significant decrease in rework as the automated tests increase. This saves huge amounts of time for the engineering team in the long run.

Another common occurrence in an Agile setup is developers writing automated tests. Usually, developers own unit tests, and quality engineers write other types of tests. Sometimes, it is productive for the developers to write automated test code while the quality engineers focus on other testing tasks. One of the most important takeaways from the Agile world is that quality is everyone’s responsibility on the team. Test engineers keep the communication and feedback going constantly between the product and the software engineers, which, in turn, results in an excellent team culture. This also opens up early discussions on the necessary tools and infrastructure needed for testing.
Since the Agile environment aims to deliver the simplest possible product to the customer as quickly as possible, the test engineers focus on the most necessary set of test cases for that product to be functionally correct. In each increment, the test engineers create and execute a basic set of test cases for the features being delivered to the customer. Test engineers are constantly working with the product, developers, and in some cases, with customers to keep the stories as simple and concise as feasible. Additionally, the Agile landscape opens doors to automating the entire software development life cycle using the principles of continuous integration, which demands a major shift in the test engineer’s mindset. This demands excellent communication skills and fluent collaboration from the test engineers.
Often, a test engineer’s output is measured by the number and quality of defects they find in the software application. Let us examine some key aspects of defect management next.
Defect management in testing
A defect is primarily a deviation from the business requirement, and it does not necessarily mean there is an error in the code. The analyzing, documenting, and reporting of defects in the product is a core activity for the testing team. It is essential to set and follow standard templates for reporting defects. There are a variety of tools in the market that help with test case and defect management. Maintaining a good rapport with all the engineers on the team goes a long way in making the defect reporting and resolving process much smoother. Usually, testers have to put in as much information as possible in the defect log to assist developers in reproducing the defects in various environments as needed. This also helps in categorizing the defects correctly. There are cases when the defect still cannot be reproduced consistently, and the engineer and tester have to pair up in order to spot the failure in the application precisely. It is important to file high-quality functional defects and follow up when necessary to get them fixed on time. So, the testing team is primarily responsible for the defect management process and keeping the higher management updated on any high-severity defects that might affect the product release.
Defect versus bug
A defect is a deviation from the expected behavior; it can be a missing, incorrect, or extra implementation. In comparison, a bug is a programming error that causes the software to crash, produce incorrect results, or perform poorly.
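A standard defect template, as recommended above, can be sketched as a structured record. This is a minimal illustration only; the field names and severity levels below are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3


@dataclass
class DefectReport:
    """An illustrative defect log entry; all fields are hypothetical."""
    title: str
    severity: Severity
    environment: str                      # e.g., "staging" or "production"
    steps_to_reproduce: list = field(default_factory=list)
    expected_behavior: str = ""
    actual_behavior: str = ""

    def is_release_blocking(self) -> bool:
        # High-severity defects are flagged for management visibility
        return self.severity is Severity.CRITICAL


report = DefectReport(
    title="Loan eligibility check rejects valid applicants",
    severity=Severity.CRITICAL,
    environment="staging",
    steps_to_reproduce=["Create applicant", "Submit loan application"],
    expected_behavior="Applicant with qualifying income is approved",
    actual_behavior="Applicant is rejected with an error",
)
print(report.is_release_blocking())  # True
```

Capturing the environment and reproduction steps up front is what allows a developer to reproduce the defect without the back and forth described above.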
Besides the number of bugs being identified, it is also crucial to know when a defect is identified in the project. Next, let’s explore the effects of shifting testing early or late in the development life cycle.
Shift-Left and Shift-Right propositions
It is evident that there are a lot of benefits that can be reaped by shifting the testing and quality efforts, in general, earlier on in the development life cycle. This is termed the Shift-Left approach. In this approach, quality engineers are involved right from the inception of the project in early discussions about the design and architecture. They start working on deliverables such as the test plan and test cases in parallel with developers. This approach builds a quality-first mindset for the entire team and ensures quality is built right into the vein of every activity in the development life cycle.
The Shift-Right approach is an extension of Shift-Left, where the testing team’s responsibilities are stretched further into the release of the product and maintenance. Test engineers, with their deep knowledge of the product, assist with the implementation and help with testing and monitoring in production. Additionally, the Shift-Right approach has some test engineers support performance, load, and security testing. There is also good scope for test automation to address quality issues under real-world conditions within this approach. Both these approaches are equally important. While the push for Shift-Left has been happening for quite some time now, the Shift-Right approach is relatively new and is extending the tester’s involvement in obtaining and testing actionable production data in safer ways.
This leads us to the question of how to incorporate quality early on, and DevOps processes have helped enormously in that regard. Next, let’s look at how quality and DevOps blend together to deliver organizational value.
Quality and DevOps
The previous section about the Shift-Right approach leads us right into the area of DevOps and how it has accelerated product development and changed the quality process. DevOps strives continuously to align high-performing engineering teams to business value in the most efficient way. Quality is a core component in each of the DevOps processes. DevOps attempts to automate each and every task in the product delivery, right from building the code to deploying the application to production for customers to use. This adds further emphasis on quality. Since the whole process is automated, feedback at every checkpoint is crucial. A predetermined set of unit, functional, and integration tests executed at the right times in the deployment pipeline acts as a gate to the production deployment. At times, the manual involvement of test engineers will be required for debugging test failures and for specific types of testing.
In the DevOps world, it is essential to maintain nimbleness in testing activities and deploy the right type of testing resources when and where needed. We will dive deep into how test automation helps in continuous integration and continuous delivery in Chapter 9, CI, CD, and Test Automation. For now, it is good to grasp the significant role that testing plays in ensuring a successful DevOps implementation.
Challenges in testing
Before diving into the world of test automation, it is vital to understand the common challenges faced in the testing world. Building a solid manual testing foundation is paramount to developing a sound test automation strategy. A clear and concise testing process acts as a strong pillar for building reliable automation scripts. Some of the challenges faced regularly by the testing team are outlined as follows:
- The most common challenge faced by Agile teams is the changing requirements. Various reasons such as late feedback from customers or design challenges lead to a change in the requirements and cause major rework for test engineers.
- Test engineers are required to interface constantly with the various teams to get the job done. The lack of proper communication here could lead to major blockers for the project.
- Having stories or requirements with incomplete information is another area that test engineers struggle to cope with. They constantly have to reach out to the product team for additional information, which causes a lot of back and forth.
- Not having the right technical skills for performing all kinds of testing required for a feature poses a great challenge for test engineers.
- The lack of quality-related metrics hurts fast-growing teams where velocity is expected to increase with team size. Without such metrics, the engineering team cannot perceive patterns in code issues.
- Inadequate test environments are a major bottleneck to testing efforts and directly affect the confidence in the product delivery. The lack of diverse test data in the test environment leads to an inadequately tested product.
- The absence of standard test processes increases both the development and testing times, thereby delaying the project timelines. It is good to be cautious about partially implemented test processes as they, sometimes, hurt more than they help.
Next, let’s look at how testing early and often helps overcome some of the obstacles in the testing journey.
Test early, test often
Usually, an enterprise software application contains hundreds, if not thousands, of components. Software testing has to ensure the reliability, accuracy, and availability of each of these components. So, it is impossible to overemphasize the common quality industry maxim, Test early and test often:
Table 1.1 – Importance of testing early and testing often
The following excerpt highlights the cost of fixing bugs later in the product development cycle. As Kent Beck explains in his book, Extreme Programming Explained, “Most defects end up costing more than it would have cost to prevent them. Defects are expensive when they occur, both the direct costs of fixing the defects and the indirect costs because of damaged relationships, lost business, and lost development time.”
The common arguments for not testing early are “We don’t have the resources” or “We don’t have enough time.” Both of these are results of not examining the risks of the project thoroughly. Experience has repeatedly shown that the total amount of effort is greater when testing is introduced later in the project. On the tester’s part, it is important to reduce redundancy and constantly think about increasing the efficiency of their tests. Test-Driven Development (TDD) is a common approach to practicing Test Early, Test Often. Testing processes have to be fine-tuned and adhered to strictly for this approach to be successful. Testing can have a strategic impact on the quality of the product if introduced early and done often and efficiently. The Agile test pyramid (which is discussed, in detail, in Chapter 2) can act as a guide to strategically categorize and set up different types of tests.
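As a minimal illustration of the TDD rhythm mentioned above, the tests for a hypothetical interest calculator are authored first; the implementation then exists only to make them pass. The function and its formula here are invented for illustration:

```python
import unittest


def monthly_interest(principal: float, annual_rate: float) -> float:
    """Written AFTER the tests below, with just enough logic to pass them."""
    return principal * annual_rate / 12


class TestMonthlyInterest(unittest.TestCase):
    # In TDD, these tests come first and initially fail (red),
    # driving the minimal implementation above (green).
    def test_basic_rate(self):
        self.assertAlmostEqual(monthly_interest(1200.0, 0.10), 10.0)

    def test_zero_rate(self):
        self.assertEqual(monthly_interest(1000.0, 0.0), 0.0)


if __name__ == "__main__":
    unittest.main()
```

The point is the ordering, not the arithmetic: because the test encodes the requirement before any code exists, the feature is testable by construction and the test suite grows with every feature.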
So far, we have dealt with a range of concepts that help us acquire a well-rounded knowledge of testing. We looked at the importance of testing and how crucial it is to integrate it with every facet of the software development process. With this background, let us take on the primary topic of this book, test automation.
Understanding test automation
Imagine an engineering team adding one or two software engineers every quarter. This team wants to delight the customer by delivering more features every sprint. Even though they have one or two quality engineers on the team to test all of the new features, they notice that the faster they try to deliver code, the higher the number of regression bugs introduced. The manual testing of the features just isn’t enough. They want certain core test scenarios to be executed repeatedly whenever changes are introduced. This is where test automation helps tremendously. It is always better to get started with test automation before feeling the agony of a high-severity bug in production or a catastrophic incident caused by the lack of timely testing.
Testing is definitely not a one-time activity and must be done any time and every time a change is introduced into the software application. The longer we go without testing an application, the higher the chances of failure. So, continuous testing is not an option but an absolute necessity in today’s Agile software engineering landscape. Manual testing involves a tester executing the predefined test cases against the system under test as a real user would. Automated testing has the same tests run by a script, thereby saving the tester’s valuable time so that they can focus on usability or exploratory testing. When done right, automated tests are more reliable than manual testing. More importantly, automation provides more time for the tester to draw valuable insights from the results of the automated tests, which further aids in increasing the test coverage of the software application as a whole.
Test automation is one of the chief ways to set up and achieve quality in an orderly fashion. The core benefit of test automation lies in identifying software bugs and fixing them as early and as close as possible to the developer’s coding environment. This helps subdue the damaging effects of late defect detection and also keeps the feedback cycle going between the engineering and product teams. Even though the upfront investment in automated tests might seem large, analysis has shown that, over time, it pays for itself. Test automation enables teams to deliver new features quickly and with superior quality. Figure 1.2 shows how various components of a software application can be interdependent, and continuously testing each of them asserts their behavior. It is necessary to validate these components not only as standalone units but also as an integrated system:
Figure 1.2 – Continuous testing
Consider, for example, a loan processing workflow composed of three APIs:
- Creating the applicant
- Creating a loan application
- Determining loan eligibility
Each of the APIs has to be tested in an isolated manner at first to validate the accuracy of the business functionality. Later when the APIs are integrated, the business workflow should be tested to confirm the system behavior. Here, a test automation suite can be built to automate both the test cases for individual APIs and the whole system behavior. Also, when the applicant creation API is reused in another application, the automation suite can be reused, thus enabling reusability and portability in testing. A similar implementation can be done for user interface components, too.
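The layering described above can be sketched with an in-memory stand-in for the loan service. The class, method names, and eligibility rule are all invented for illustration; in a real suite, the calls would go over HTTP to the actual APIs. Each operation is first asserted in isolation, then as one end-to-end workflow:

```python
class LoanService:
    """Hypothetical in-memory stand-in for the three loan APIs."""

    def __init__(self):
        self.applicants = {}
        self.applications = {}

    def create_applicant(self, applicant_id: str, income: float) -> dict:
        self.applicants[applicant_id] = {"id": applicant_id, "income": income}
        return self.applicants[applicant_id]

    def create_application(self, applicant_id: str, amount: float) -> dict:
        assert applicant_id in self.applicants, "unknown applicant"
        self.applications[applicant_id] = {"applicant_id": applicant_id,
                                           "amount": amount}
        return self.applications[applicant_id]

    def is_eligible(self, applicant_id: str) -> bool:
        # Toy rule: the loan amount must not exceed 5x the annual income
        income = self.applicants[applicant_id]["income"]
        amount = self.applications[applicant_id]["amount"]
        return amount <= 5 * income


# Isolated test: applicant creation alone, validated on its own
svc = LoanService()
assert svc.create_applicant("a1", income=50_000)["id"] == "a1"

# Integrated test: the full create -> apply -> eligibility workflow
svc.create_application("a1", amount=200_000)
assert svc.is_eligible("a1") is True
```

Because the isolated checks and the workflow check live in one suite, the same tests can be reused wherever the applicant creation API is reused, which is the reusability and portability benefit noted above.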
Test automation is a very collaborative activity involving the commitment of business analysts, software engineers, and quality engineers/software development engineers in test (SDETs). It unburdens the whole team from the overwhelming effects of repetitive manual tests, thus enabling the achievement of quality at speed. While peer code reviews and internal design reviews act as supplemental activities to identify defects early, test automation puts the team in a fairly good position to start testing the product with end users. It is a common misconception that automated tests undercut human interaction with the system under test. While it is true that the tester does not interact with the system as often as they would in manual testing, the very activity of developing and maintaining the automated tests brings the whole team together through reviews of and comments on the test code and design. Automated tests open a new channel of communication within the team for improving the quality of the system and preventing bugs.
Agile test automation
Having the right selection of automated tests at the right spots in the deployment makes a ton of difference in the quality of the delivered software. In an Agile environment, there has to be a constant discovery of the right tests for the current iteration, and the ratio between the manual and automated tests needs to be tweaked as and when necessary. Since the focus is on delivering a feature as quickly as possible to the customer, it is important that developers, testers, and the product manager are aware of what is being built, tested, and shipped. Collaboration becomes crucial and is the primary driver of the success of test automation in the Agile environment. Some of the important considerations for test automation in an Agile environment are as follows:
- Start small and build iteratively on the automated test scripts. This applies to both functional test coverage and the complexity of the test automation framework.
- Be extremely cautious about what tests are selected for automation. These tests will act as a gate to production deployment. Make every effort to avoid false positives and false negatives.
- Make sure the automated tests are considered when deciding on the acceptance criteria for a feature or a story. It is critical to allocate necessary time and resources for completing and executing the automated tests as part of a feature.
- Get frequent feedback from other engineers on the quality and performance of the automated tests.
- Do not be afraid to adapt and change test automation. Constant innovation and improvement are core activities in an Agile environment.
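The advice above about being selective with gating tests can be sketched with a simple tag-based runner. The tagging scheme here is invented for illustration; in practice, pytest markers or JUnit tags serve the same purpose:

```python
GATE_TESTS = []  # only tests registered here can block a deployment


def gate(test_func):
    """Mark a test as part of the production-deployment gate."""
    GATE_TESTS.append(test_func)
    return test_func


@gate
def test_login_smoke():
    # Placeholder for a real, fast, deterministic login check
    assert 1 + 1 == 2


def test_rarely_used_report():
    # Deliberately NOT gated: useful coverage, but too slow or
    # too flaky to block every deploy
    assert True


def run_gate() -> bool:
    """Run only the gating tests; any failure blocks the deployment."""
    for test in GATE_TESTS:
        try:
            test()
        except AssertionError:
            return False
    return True


assert run_gate() is True
assert len(GATE_TESTS) == 1  # only the explicitly gated test blocks deploys
```

Keeping the gate small and deterministic is what guards against the false positives and false negatives called out above: a flaky test in the gate either blocks good builds or erodes trust in real failures.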
As we have seen so far, test automation is an intricate pursuit. Like all other complex things in the software world, test automation comes with its own set of challenges. Let’s examine some of them in the next section.
Test automation challenges
There is never enough test automation, and this is a constant challenge faced by test engineers. Test engineers are always under time constraints to finish the manual or automated tests and to get the completed feature out the door. Just as with manual testing, there are a variety of challenges that test engineers face on a daily basis. Some of them are listed as follows:
- As mentioned earlier, not having enough test automation is always a challenge in the Agile world. Often, testers have to compromise coverage for speed, and at times, the consequence can be adverse.
- Not enough planning before beginning test automation work often leads to duplication and exhaustion.
- Upfront investment in test automation is heavy, and the constant need to convince the stakeholders of the benefits puts a lot of pressure on testing teams.
- The lack of collaboration between developers and test engineers while designing and developing automated tests could lead to poor-quality scripts or complicated frameworks. This affects the quality of the build pipeline.
- The absence of skilled test engineers has a detrimental effect on quality.
- Not aligning the testing processes with the development processes could cause release delays. It is extremely important to time the code completion of the feature with automated test readiness for on-time delivery.
- Sometimes, test engineers do not understand the requirements and assume the expected behavior instead of validating it with the product team. Such faulty assertions are extremely hard to identify and eliminate once they have been built into the test scripts.
- The test automation framework is not scalable or portable. This hinders the execution speed and results in a lack of test coverage.
Creating and maintaining the test automation infrastructure is a separate project in itself. Every detail should be thought through and discussed with the concerned stakeholders to address these challenges.
Finding and handling regression bugs
One of the chief purposes of test automation is finding regression bugs. Usually, regression means the quality gets worse after a change has been introduced to an already tested product. Well-written automated tests are tremendously helpful in identifying regression bugs. But an important thing to remember about regression issues is that even though more automated tests help find them, the root cause of regression issues still has to be addressed by the management team. Test engineers can provide awareness about the inherent regression issues through automated test results. It is important to educate the team about fixing the underlying lapses. Some of the most common slippages that cause regression bugs include the following:
- Code review standards are below par or non-existent.
- The project schedule is no longer realistic to push good quality code through.
- There is a huge disconnect between developers and the product teams regarding the feature being developed.
- Various integration points are not being considered appropriately when designing the product.
- The product has become too complex over time.
Test engineers can be a key component to break the pattern of regression issues and steer the quality of the product in the right direction by keeping everyone on the team informed. In the next section, let’s survey some of the top metrics used in the test automation world.
Test automation metrics
Why and what to measure in test automation is a constant question lingering in the minds of the testing team. Test engineers are curious to know how effective their scripts are and how they are performing across different conditions. On the other hand, the management team will be interested to know the return on the investment made in test automation rather than just hoping that it delivers value in the long run. Let’s look at some key metrics that can help in gauging the value of test automation that is already in place:
- Test automation effectiveness: This is a key metric that provides visibility into how effective the automated tests are in finding bugs. When broken down logically by scripts/test environments, this metric gives direction on where to focus our future efforts:
Test Automation Effectiveness = (# of defects found by automation/Total number of defects found) * 100
- Test automation coverage: This metric provides the test coverage for the automated tests. It is important to be aware of what is not covered by your test automation suite to clearly direct the manual testing efforts:
Test Automation Coverage = (# of test cases automated/Total number of test cases) * 100
- Test automation stability: This metric provides clarity on how well the automated scripts perform in the long run. This metric, along with test automation effectiveness, is a key indicator of how flaky the automated tests are; note that, as defined here, a lower value indicates more stable tests. It is good to have this metric in an Agile environment to monitor the health of the deployment pipelines:
Test Automation Stability = (# of test failures/Total number of test runs) * 100
- Test automation success or failure rate: This metric provides a quick idea of the health of the current build. In the long term, this metric gives you a good view of how the build qualities have changed:
Test Automation Success Rate = (# of test cases passed/Total number of test cases run) * 100
- Equivalent manual test effort: This metric, usually expressed in person-hours, is an ROI indicator that helps determine whether a particular feature is worth automating:
Equivalent Manual Test Effort = Effort to test a specific feature * Number of times the feature is tested in a test cycle
- Test automation execution time: As the name indicates, this metric shows you how long or short your test runs are. Comparing this with the overall deployment times to production gives a good idea of how much time is spent executing the automated tests over time. This is key in the Agile setup where the speed to the customer is considered a key factor:
Test Automation Execution Time = Test Automation End Time - Test Automation Start Time
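The formulas above translate directly into code. A minimal sketch follows; the function and parameter names are my own, and the example figures are invented:

```python
def automation_effectiveness(defects_by_automation: int, total_defects: int) -> float:
    """Percentage of all defects that were found by automated tests."""
    return defects_by_automation / total_defects * 100


def automation_coverage(automated_cases: int, total_cases: int) -> float:
    """Percentage of test cases that are automated."""
    return automated_cases / total_cases * 100


def automation_failure_rate(failures: int, total_runs: int) -> float:
    """The stability formula above: lower means more stable tests."""
    return failures / total_runs * 100


def automation_success_rate(passed: int, total_run: int) -> float:
    """Percentage of executed test cases that passed in a build."""
    return passed / total_run * 100


def equivalent_manual_effort(hours_per_pass: float, passes_per_cycle: int) -> float:
    """Person-hours of manual testing replaced per test cycle."""
    return hours_per_pass * passes_per_cycle


# Example: 30 of 40 defects found by automation; 150 of 200 cases automated;
# a feature that takes 2.5 hours to test manually, run 4 times per cycle
print(automation_effectiveness(30, 40))   # 75.0
print(automation_coverage(150, 200))      # 75.0
print(equivalent_manual_effort(2.5, 4))   # 10.0
```

Computing these from the test-report data already produced by a pipeline is usually trivial; the hard part, as the note below stresses, is interpreting them together rather than in isolation.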
An important note regarding test automation metrics is that these numbers, by themselves, do not produce much value. A combination of these metrics properly assessed in the context of project delivery provides valuable insights to the product and engineering teams.
So far, we have seen how testing and test automation blend well with every activity in the software development life cycle. It is evident that test automation is a multidimensional undertaking, and the people performing test automation have to meet the needs of this demanding role. In the next section, let’s survey a couple of important roles in the quality engineering space.
Exploring the roles in quality engineering
There are three main roles in the testing world, not counting people management. Different companies may define or name these roles differently, and quality-related roles also vary depending on the size and structure of the organization. For example, smaller companies might combine all of these into a single role and call it SDET. The main differences stem from their technical expertise and overall experience in the software and quality engineering domains:
- Traditional manual tester (or quality assurance analyst)
- Test automation engineer (or software quality engineer)
- Software Development Engineer in Test (SDET)
The traditional manual tester role is not as prevalent as it used to be, as every role in quality is expected to perform some level of test automation in the Agile world. So, here, we will be focusing only on the latter two. This does not mean that manual testing is not done anymore. The responsibilities of a manual tester are shared by test engineers, SDETs, business analysts, and product owners/managers.
There are organizations where software engineers/developers perform their own testing. The engineer developing the feature is responsible for performing the required testing and delivering a high-quality product. Often, small- to mid-sized companies use this approach when they are still not able to build a separate quality organization or hire test engineers. An important downside of this is that there is a lot of context-switching that needs to be done by the developers. Also, consistently maintaining the test coverage at high levels becomes a challenge. A single developer will be focused on providing good coverage for their own feature, and it is very easy to miss the integration or system-level tests needed for a business workflow. Some developers, plain and simple, do not like to perform tests other than unit tests due to the tedious and repetitive nature of testing. It is also hard to keep track of the various details involved in testing different parts of an application.
Next, we will look at the most common testing roles that are prevalent in the market.
Test automation engineer
Test automation engineers are the core members of the quality organization within the software engineering teams. They can either be embedded into the engineering team or be part of a team of quality engineers reporting to a quality manager. The main responsibilities of a test automation engineer include the following:
- Test planning and test strategy development for product features (both manual and automated)
- Preparation of test cases and test data
- Setting up the test environment (usually with help from other engineers)
- Creation, execution, and maintenance of automated tests for product features
- Use of the existing test automation infrastructure to build a sound test automation strategy
- Collaboration with product and implementation teams to achieve good test coverage
- Reporting and retesting of bugs
- Coordination of bug fixes across teams, if necessary
- Streamlining of test processes
To sum it up, on a daily basis, test engineers have to partner with most of the members of the team and stay on top of the user stories. They take ownership of the product quality of the team they are embedded into. Next, let’s see how an SDET is different from a test automation engineer.
Software development engineer in test (SDET)
An ideal candidate for the SDET position exhibits sound technical skills and deep expertise in testing methodologies. For all practical purposes, an SDET is technically as capable as a software engineer, with extensive knowledge of the quality engineering space. An SDET is involved throughout the development life cycle, from unit test creation to production release validation, and always strives to enhance the productivity of both the software engineers and quality engineers on the team. The main responsibilities of an SDET include the following:
- Setting clear objectives for test automation
- Creating and improving the test automation infrastructure
- Owning the test automation strategy
- Liaising with software engineers on the team and across teams (if needed) to build and maintain the automation framework
- Being involved in the design and architectural discussions
- Acting as a mentor for the quality engineers in the team
- Interfacing with the DevOps team to ensure testing happens at every stage of the development pipeline
- Adapting and implementing the latest technological developments in the quality engineering domain
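A concrete example of the infrastructure an SDET builds is a library of shared utilities that every test author in the team reuses. The sketch below shows one such utility, a small polling helper of the kind found in most automation frameworks (all names here are invented for this example); it waits for an asynchronous condition to become true instead of relying on brittle fixed-length sleeps:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Returns the truthy value, or raises TimeoutError. Helpers like this let
    test authors check asynchronous outcomes without hard-coded sleeps.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(interval)

# Example usage: wait for a (simulated) background job to finish.
job_state = {"done": False}

def finish_job():
    job_state["done"] = True

finish_job()  # in a real test, this would happen asynchronously
assert wait_until(lambda: job_state["done"]) is True
```

In a real framework, helpers like this sit alongside retry decorators, API clients, and environment setup code, so that the individual tests stay short and declarative.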
| Test automation engineer | SDET |
| --- | --- |
| Creates and executes automated and manual tests | Creates and maintains the test automation framework |
| Collaborates with the product and implementation teams | Collaborates with software engineers and DevOps teams |
| Experts in testing, either manually or by automation | Highly skilled in programming, with testing skills |
| Uses test automation tools | Develops test automation tools |

Table 1.2 – Test automation versus SDET
Even though their roles demand different responsibilities, quality engineers and SDETs are equally accountable for the release of a bug-free product to the customer. Both should be in the meetings with product stakeholders to make a final decision on every feature release. A quality engineer is good at test case creation, while an SDET specializes in choosing the right ones to automate in the best possible way. There are times when quality engineers and SDETs have to work in tandem to keep upper management informed and educated about the capabilities of test automation and the effort it takes to achieve a return on investment (ROI). Also, the relationship between software engineers and quality engineers/SDETs is of utmost importance to the success of any test automation work. It is vital to get continuous feedback from software engineers on the test automation code and design, and software engineers should in turn be educated, when necessary, about the various benefits derived from test automation.
In the next section, let’s get ourselves familiarized with some commonly used definitions in the world of testing and test automation.
Familiarizing yourself with common terminologies and definitions
In this section, we will look at some of the most commonly used terms in the quality engineering space. Since quality engineering is part of the software engineering practice, you would notice quite a few familiar terms if you’re a software engineer:
- A/B testing: Testing to compare two versions of a web page for performance and/or usability to decide which one better engages the end users.
- Acceptance testing: A testing technique to establish whether a feature/product meets the acceptance criteria of business users or end users.
- Agile methodology: This is an iterative approach to software development that puts collaboration and communication at the forefront. Essentially, it is a set of ideas to deliver software to customers quickly and efficiently.
- Behavior-Driven Development (BDD): BDD is a common Agile practice where critical business scenarios are first documented and then implemented to make sure the end product evolves continuously with shared understanding. We will look at an implementation of the BDD framework in Chapter 7.
- Black-box testing: This testing technique focuses on the output of a product, ignoring the design and implementation details.
- Data-driven testing: A common automated testing approach where a reusable logic, often part of a test script, is run over a collection of test data to compare the actual and expected results.
- End-to-end testing: A testing technique to ensure that the integrated components of a product work as expected. This is a very important testing type to verify critical business flows.
- Exploratory testing: A manual testing approach where the product is tested in an investigative and inquisitive manner without documented steps with the main goal of finding bugs.
- Integration testing: A testing technique in which the communication logic of the individual software components or services are combined and tested as a group.
- Load testing: A type of testing to evaluate the performance of a software application by simulating the real-world traffic on the system. It can be done with a software system as a whole or just an API or database. We will consider the setup for load testing in Chapter 8.
- Penetration testing: This is a type of security testing where a tester attempts to find and exploit the vulnerabilities of a software application.
- Security testing: A testing type to ensure that all the required defenses are in place against various types of cyber threats.
- Stress testing: This is a type of performance testing where the system is subject to heavy loads with the intention of breaking it. Stress testing is mainly used to determine the performance limits of a software application. We will look into the setup for stress testing in Chapter 8.
- API testing: This is a testing type focused on verifying the API’s logic, build, and structure. The main goal is to validate the functional logic of the APIs. We will look at an implementation of API testing in Chapter 7.
- Smoke testing: A testing technique that is primarily used to check the core features of a product when there is a change introduced or before releasing it to a wider audience.
- System testing: A testing type used to evaluate the software system as a whole to make sure the integration of all the components is working as expected.
- Test case: This is a collection of steps, data, and expected results to test a piece of code or a functional component within a software application.
- Test plan: This is a document that outlines a methodical approach to testing a software application. It provides a detailed perspective on testing the various parts of the application in its parts and as a whole.
- Test-Driven Development (TDD): This is an approach mainly used in the Agile world where a test is written first followed by just enough code to fulfill the test. The refactoring of the code continues to follow the tests, thus keeping the emphasis on specifications.
- Test suite/test automation suite: A collection of automated test scripts and associated dependencies used to run against the software application.
- Unit testing: A type of testing in which the logic of the smallest components or objects of software is tested. This is predominantly the first type of testing performed in the development life cycle.
- Usability testing: This is a testing technique used to evaluate a product (usually, a web application) with the aim of assessing the difficulties in using the product.
- Validation: This is the process of evaluating a product to ensure it actually meets the needs of its users. It answers this question: Are we building the right product?
- Verification: This is the process of checking whether the product fulfills the specified requirements. Verification answers this question: Are we building the product right?
- Waterfall model: This is a project management methodology in which the software development life cycle is divided into multiple phases, and each phase is executed in a sequential order.
- White-box testing: A testing technique to validate the features of a product through low-level design and implementation.
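Some of the terms above are easiest to grasp in code. The sketch below illustrates data-driven testing: one piece of reusable test logic run over a table of inputs and expected results (the `normalize_username` function and the test data are invented for illustration):

```python
# Hypothetical function under test: normalize a raw username.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# The "data" in data-driven testing: (input, expected output) pairs.
TEST_DATA = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

def run_data_driven_tests():
    """Reusable test logic applied to every row of test data.

    Returns a list of (input, expected, actual) tuples for failing rows;
    an empty list means every row passed.
    """
    failures = []
    for raw, expected in TEST_DATA:
        actual = normalize_username(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

assert run_data_driven_tests() == []  # all rows pass
```

Adding a new test case is then just a matter of adding a row to the data table, which is what makes this approach so common in automated suites.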
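Test-Driven Development is likewise clearer in code. The sketch below (using the classic `fizzbuzz` exercise, not anything from this chapter) compresses one red-green cycle into a single snippet: the test is written first, then just enough code to make it pass:

```python
# Step 1 (red): the test exists before the implementation, so it
# initially fails because fizzbuzz is not yet defined.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): just enough code to satisfy the test.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test; it now passes. Refactoring would follow,
# re-running the test after every change.
test_fizzbuzz()
```

The discipline of writing the failing test first is what keeps the emphasis on specifications, as noted in the definition above.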
We have equipped ourselves with the knowledge of a wide array of terms used in the quality industry. Now, let’s quickly summarize what we have seen in this chapter.
In this chapter, we looked at the importance of testing and test automation. We offered practical guidance on how test automation work interfaces with the different teams and members of an engineering organization. Additionally, we looked at the roles in the test automation world, along with their specifics and similarities. We saw how test automation helps achieve quality at every stage of the product development life cycle. Finally, we looked at the different terminologies used in test automation.
In the next chapter, we will examine what a thorough test automation strategy entails and how to define one. Additionally, we will review a few common test automation design patterns.
Take a look at the following questions to test your knowledge of the concepts learned in this chapter:
- What is software testing, and why do you have to do it?
- What are some important deliverables in software testing?
- What is test automation, and how does it help testing?
- What are the challenges faced in testing and test automation?
- What are the common roles in quality engineering?