Do we, as human beings, make mistakes? The answer is an overwhelming yes. There are examples of failures in quality control and decision-making that have shaken the world and caused huge losses to the companies involved. For example, we all remember the tragic Challenger space shuttle accident, in which the shuttle broke apart shortly after launch. Was this a case of simple oversight, or could the systems have been tested adequately enough to control the threat of failure and avoid the disaster?
To get to the bottom of such incidents, we need to learn from the very people who are involved in the design and production of such systems. Mistakes are generally unavoidable and can happen at any stage of production, for reasons such as weak or unclear requirements, hurrying to meet deadlines, or insufficient knowledge of a system. What we can do, however, is follow a process that helps reduce the introduction of new errors while preventing known errors from being repeated. This calls for a change in thought processes and a reliance on crafting standard practices in order to produce more successful products. Let's first understand what quality means before we embark on our journey to rewire ourselves to create sustainable and repeatable best practices for delivering defect-free software.
In this chapter, we'll be covering the following topics:
- What is quality?
- How do we ensure quality?
- Software testing thought process
- Quality Management Systems
- Software Development Life Cycle versus Software Testing Life Cycle
- Types of testing
- Preparing test data and managing test artifacts
Quality, just like any other measure, requires a frame of reference or standards for us to compare against customer needs. These standards can help us to maintain and promote the consistency of the products developed, minimize the amount of rework required, and produce a customer-oriented product.
There are seven quality management principles (defined by ISO 9000) that revolve around making a good-quality product:
- Customer focus
- Leadership
- Engagement of people
- Process approach
- Improvement
- Evidence-based decision-making
- Relationship management
The quality model presented by ISO (ISO/IEC 25010:2011) is useful to assess the quality of products. Adoption of this model can guide organizations on how to improve the quality of software. This model describes the quality characteristics and sub-characteristics that software should possess to qualify as production-ready before it can be released to end users. Let's take a closer look at these characteristics and sub-characteristics.
The product quality model relates to the static properties of software and the dynamic properties of a computer system:
As you can see in the preceding diagram, there are eight product quality characteristics, which I will explain to you:
- FUNCTIONAL SUITABILITY: Characterizes the functional potential and abilities of the software by sub-categorizing it into three different categories:
- Completeness: The measurement of the set of functions implemented and covered for all the specified requirements to satisfy user goals
- Correctness: The measurement of deviation from the specified requirements and the measurement of the precision of the generation of end results
- Appropriateness: The measurement of the generation of suitable and relevant results that can facilitate achieving specified tasks and objectives
- PERFORMANCE EFFICIENCY: Takes three main factors into consideration:
- Time-behavior: Measures the response times, processing times, and tolerance of the throughput of an application against the specified load
- Resource utilization: A measurement of the utilization of the amount and types of resources while performing specified tasks
- Capacity: Checks maximum tolerance and limitations to meeting the required goal
- COMPATIBILITY: This checks whether the system can work efficiently in different environments by examining the following two factors:
- Co-existence: Verifies that software or a product can perform its tasks effectively by sharing common resources and environments with other software/hardware
- Interoperability: Ensures the exchange of information between two separate products or components is smooth and has no impact on the intended results
- USABILITY: Examines the ease of use of software by considering the following aspects:
- Appropriateness recognizability: The degree to which users can recognize whether the product or service is appropriate for their needs
- Learnability: The extent to which a product or service facilitates users' learning of its usage effectively and efficiently
- Operability: Deals with knowing how easy it is to operate, control, and use the system product or service effectively
- User error protection: Measures the degree to which the system can prevent users from making errors
- User interface aesthetics: Checks how the user interface of the system can yield user satisfaction and a pleasing experience
- Accessibility: Ensures that users with the widest range of characteristics and capabilities can use the system hassle-free and effectively, without compromising its ability to achieve the specified goals or purpose set for the system or product
- RELIABILITY: The extent to which end users can rely on the system or products to perform specific tasks or activities. It consists of four sub-categories:
- Maturity: The extent to which the system or its components meets customers' needs in terms of reliability when functioning normally
- Availability: The measurement of the availability and accessibility of the software or product whenever users want to use it
- Fault-tolerance: The degree to which the system continues to operate as intended and produce the expected results despite undesirable conditions or faults
- Recoverability: Checks how quickly the system can recover from interruptions or failures without losing information
- SECURITY: Everyone wants their data to be secure when it comes to using software products or services. Security is needed to control the unauthorized use of data. In order to meet security needs, it has been sub-categorized into the following categories:
- Confidentiality: Ensures that data is accessible only to users who are authorized to access it
- Integrity: The capability to prevent unauthorized access to, or modification of, data by invalid users, products, or other services that could potentially cause harm
- Non-repudiation: Refers to the degree to which actions and/or events can be ascertained to have occurred, so that they cannot be disputed or repudiated later
- Accountability: Ensures that activities or actions performed by an actor in the system can be traced back uniquely to having been performed by the same actor
- Authenticity: Ensures that the actor or person is uniquely identifiable in the system, which can be proven to match the identity as claimed
- MAINTAINABILITY: Deals with the maintenance of the software product or service to fulfill customer needs and to continue to perform efficiently. It has been sub-categorized into the following:
- Modularity: Measures the effect on the other parts or components of the system when one part undergoes a change. High cohesion and low coupling are what we strive to achieve. Thus, the code for a particular module should be closely related within the module but each module should function independently from other modules.
- Reusability: Refers to the degree to which specific parts or components of software can be reused.
- Analyzability: Deals with checking the ease of analyzing the software or product in order to detect failures, deficiencies, and/or the impact of modifications.
- Modifiability: A measurement of the extent to which the software product or service is modifiable without affecting its current efficiency and functionality.
- Testability: A measurement of the ease with which test criteria can be established for the software and tests can be performed to determine whether those criteria have been met.
- PORTABILITY: Defines the ability of the system to be transferred to, and perform efficiently in, different software, hardware, or operational environments:
- Adaptability: A measurement of the extent to which the software system or product can be adapted to changes in its environment without affecting its efficiency
- Installability: A measurement of the capacity of a product or software to be installed or uninstalled in a stipulated environment
- Replaceability: The ability of software to be replaced with other software to perform the same set of tasks in the same environment
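Several of these sub-characteristics can be checked automatically. As a minimal illustration, here is a Python sketch of a time-behavior check for the performance efficiency characteristic; the function names and the 0.5-second threshold are illustrative assumptions, not part of ISO/IEC 25010:

```python
import time

def measure_time_behavior(task, max_seconds, runs=5):
    """Run `task` several times and check its average execution
    time against a specified tolerance (a time-behavior check)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    average = sum(timings) / runs
    return average <= max_seconds, average

# Example: verify that sorting 100,000 integers stays within 0.5 seconds.
passed, avg = measure_time_behavior(
    lambda: sorted(range(100_000), reverse=True), max_seconds=0.5
)
```

In practice, such checks would be run against realistic workloads and specified load conditions rather than a toy sorting task.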
The QUALITY IN USE model applies to the complete human-computer interaction and has five characteristics. Let's look at each of these characteristics in detail:
- EFFECTIVENESS: A measure of the accuracy and completeness of results generated by the component or functions of the software product or service.
- EFFICIENCY: A measurement of the utilization of resources needed to produce complete and accurate results.
- SATISFACTION: There are four ways to test user satisfaction with software:
- Usefulness: Makes sure that the software satisfies customer needs and functions as expected
- Trust: The degree to which users are confident that the software will behave as intended
- Pleasure: A measurement of how much pleasure users derive from using the software product or service to meet their needs
- Comfort: The degree to which the user is satisfied with the software product and feels comfortable using it
- FREEDOM FROM RISK: There are three main ways to analyze the level to which a system can reduce potential risks, as follows:
- Economic Risk Mitigation: Analyzes how much the system can mitigate potential risks that could have a severe financial, commercial, or reputational impact, or that could disrupt the efficiency of the software product or service
- Health and Safety Risk Mitigation: Identifies the level to which software can mitigate potential risks to end users
- Environmental Risk Mitigation: Identifies how much software can mitigate the potential risk to property or the environment
- CONTEXT COVERAGE: The level to which the system meets the specified context can be measured by the following:
- Context Completeness: Verifies that software meets specified objectives and can be used efficiently without any risk in all specified contexts of use
- Flexibility: A measurement of the degree to which the software can be used beyond the specified requirements
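Effectiveness and efficiency, as defined above, lend themselves to simple metrics. The following Python sketch shows one possible simplified way to compute them from test-session data; the formulas are an illustrative reading of the definitions, not normative:

```python
def effectiveness(correct_results, total_tasks):
    """Share of tasks completed with accurate and complete results."""
    return correct_results / total_tasks

def efficiency(correct_results, resource_units_spent):
    """Accurate results produced per unit of resource consumed."""
    return correct_results / resource_units_spent

# Example: 18 of 20 user tasks succeeded, consuming 36 person-minutes.
eff = effectiveness(18, 20)       # fraction of tasks done correctly
per_minute = efficiency(18, 36)   # correct results per person-minute
```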
It takes a lot of work to establish a brand and even more work to continue to build it and to sustain trust in the brand. To survive in today's competitive market and to maintain a good reputation, organizations incorporate testing phases and dedicate time to testing and debugging software products in the Software Development Life Cycle. Building quality products reduces the risk involved and boosts performance. A well-designed product can decrease the level of user dissatisfaction and frustration. It also increases the product's reliability and improves the end user's experience, resulting in happy customers.
Products and services have a direct impact on their customer base, since they are released on the market to solve a problem that customers face. Thus, it is imperative that organizations that provide such services or products are responsible for their quality both before and after they hit the market. Organizations need to consider both internal and external environmental factors that can affect a product. This requires proper planning and delegation to dedicate teams and resources to each facet of the product. Usually, teams consist of the following roles:
- Product managers
- Project managers
- Quality Assurance (QA) managers
- Business analysts
- Software developers
- QA engineers/testers
This team works toward the defined goal together and delivers the product. There are other focus areas in which we need to perform some groundwork to help organizations effectively manage the delivery of quality products, but the focus should always be on improving products and services. Knowing your customer is the first step in enhancing the quality and standards of the products or services they receive; sustaining that quality is the key to a successful product. In the next section, we will discuss the process of developing sustainable, high-quality products and services.
Quality assurance is the key to the success of any business. The software development process goes through various phases, and ensuring quality at every step is a must. In the previous section, we saw why it's important to deliver a quality product. In this section, we'll learn how we can deliver quality products.
Delivering a project with a defined scope within a specified amount of time, with a set budget, and with certain quality standards expected by the customer are key factors in making a project successful. However, reaching a reasonable trade-off between these factors is necessary to get to market quickly and to remain competitive.
For example, if the scope of the project increases while the resources and time remain the same, quality is affected directly, since the team has to deliver more within the same stipulated time frame. Since their work hours do not change, the team might have to cut testing time or reduce test coverage to deliver on time. The following diagram depicts the Iron Triangle:
The Iron Triangle
The objectives of the triangle, also referred to as the Iron Triangle, help us to deliver projects successfully. To ensure quality, we need to satisfy the Iron Triangle's objectives. A traditional project management triangle consists of the following:
- Scope: The features and work that need to be delivered
- Time: The schedule within which the work must be delivered
- Cost: The budget and resources available for the project
The Iron Triangle helps project managers to analyze and understand the trade-offs while catering to these factors. A proper balance must be achieved to ensure the desired levels of quality to produce a successful product.
Software products are the result of a multidisciplinary team coming together to make a concrete product that serves customer needs. Although the team is formed of several roles, such as managers, analysts, developers, and testers, each role is essential to deliver a suitable and robust product. This requires each of these contributors to be a part of the quality process.
If every role has a part in ensuring quality, why do we need a separate role for testers? One simple reason is to introduce a fresh set of eyes. While it is possible for a developer to test their own code or software, it requires a different mindset to ensure quality. A developer's mindset is to prove that their software works, but a tester's mindset is to make the product fail to work. Thus, the tester's role is more about finding defects in the software.
Now that we understand the difference in the thought process required for software testing, let's discuss the key skills an effective tester should possess:
- Analytical thinking: There are various ways to approach a problem; however, solutions based on the analysis of data tend to be more accurate and optimal. Hence, a tester should analyze complex problems first and then design steps to resolve them. This helps a tester plan and come up with various scenarios to compare against the business requirements.
- Observability: Good observational skills and attention to detail are always important when trying to identify defects; simply observing and identifying what's not working makes the job easier. These skills help testers design end-to-end workflows, compare throughput, and validate UI design and functionality.
- Logical thinking: Pattern analysis is another aspect of noticing what is going wrong and where the problem could lie. Recognizing patterns, connecting the dots, and the ability to perform root-cause analysis help testers stay relevant in the age of automation.
- Reasoning: It's always better to know an application in depth, not only its functionalities and user interface but also the logic built in the code. Supporting your reasoning with artifacts or proof is essential. Artifacts, such as logs and screenshots, are good sources of proof that testers can provide along with a defect. Not only does this help to reproduce defects, it also helps developers to debug code and provides a place to start debugging.
- Test-to-break attitude: You should have a mindset that is bent on breaking the application. In order to do that, you need to first understand how the specific function or application being tested works. This results in generating ideas for scenarios that can help to find hidden and latent defects.
- Broadening perspective: Domain knowledge adds value when it comes to verifying a product, as, sometimes, it's essential for testers to think from a broader perspective. Testing is not limited to verifying the part or component of the software where the changes have been made; it also deals with testing the parts where nothing has changed. Verifying whether the developed product works in a different environment, with different software, and with a whole different set of parameters makes it more reliable.
- Understanding end user's perspective: Testing is not always about breaking; it's also about ensuring that the software works as expected and satisfies the intended purpose. Hence, when validating software, testing from an end user's perspective helps testers see different dimensions to perform verification effectively.
- Good communication skills: This plays a key role in team environments—it helps testers collaborate better and build a rapport as a team. Testers play a very delicate role, which is to effectively find flaws in developers' code. Hence, it is imperative for them to ensure the team focuses collectively to improve the software rather than blaming the developers.
- Thinking outside the box: Out-of-the-box thinking results in finding pathways that have probably not been explored before. This helps testers to find hidden and not-so-obvious defects.
- Learning attitude: As technology keeps changing, it is a must for a tester to keep themselves abreast with the newest technology. Learning new skills is crucial for them to be able to survive and adapt.
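To make the test-to-break attitude concrete, here is a small Python sketch. The `is_valid_age` function is purely hypothetical; the point is the boundary and negative cases a tester probes beyond the obvious happy path:

```python
# Hypothetical function under test: validates an age field.
def is_valid_age(age):
    # bool is a subclass of int in Python, so it must be excluded explicitly
    return isinstance(age, int) and not isinstance(age, bool) and 0 <= age <= 120

# A test-to-break mindset targets boundaries and invalid inputs,
# not just typical values.
cases = {
    -1: False,    # just below the lower boundary
    0: True,      # lower boundary
    120: True,    # upper boundary
    121: False,   # just above the upper boundary
    "30": False,  # wrong type: string instead of int
}
for value, expected in cases.items():
    assert is_valid_age(value) is expected, f"unexpected result for {value!r}"
```

A developer verifying their own code would typically try 25 or 30 and move on; the boundary cases are where hidden defects tend to live.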
As we have seen so far, there are various ways we can ensure quality in our projects, but how do we evaluate whether the quality system we pick is effective? This becomes more of a concern if one organization needs to contract its work to another and needs to know whether the contractor will be able to provide quality services and products. This need for the quality system to be auditable necessitates the use of a Quality Management System (QMS).
A QMS is a set of standards that defines how an organization can meet the requirements of its customers and other stakeholders. Quality standards are a set of guidelines, rather than rigid rules, that have been widely accepted in the software industry, with defined processes and evaluation metrics to help improve the quality of software. The selection of a standard is left to the business and its management to decide. Once an organization is certified, it is imperative to have a quality plan in place based on the chosen certification.
All quality standards have the same underlying principles:
- Well-defined processes to develop software
- Aligning people with processes to synergize and promote commitment to the quality-improvement program
- Enforcing the requirement to produce documentation for each process
Thus, processes should be used as facilitators for quality improvement rather than a hindrance. It is the management's responsibility to foster a culture within the organization that works within the well-defined framework for development while promoting incentives to drive quality at every step of the development process.
There are several software-engineering standards that have been developed by major standardization and certification bodies. The ISO 9000 and Capability Maturity Model Integration (CMMI) are the most widely-used international standards in software-engineering and product-development organizations. Let's look at them in detail to understand how implementing standards can help an organization to ensure quality.
ISO 9000 is a set of standards defined by ISO. An organization seeking certification today would certify against the latest standard, ISO 9001:2015, which replaced the previous version, ISO 9001:2008. ISO 9001:2015 provides guidelines that drive continual improvement for an organization.
The ISO 9001:2015 standard specifies 10 clauses, as summarized in the following points:
- Clause 1 (Scope): Explains what the standard is for and what it encompasses. The scope clause covers the following aspects:
- The goals and objectives of the standard to understand the expectations of the certifying organization
- The approach and reference to customer requirements
- The approach and reference to regulatory or statutory requirements
- The applicability of the standard requirements, since they are applicable to all sorts of organizations, regardless of their type, size, or the products and services being provided
- Clause 2 (Normative references): Includes the terms, principles, fundamental concepts, and vocabulary that are essential for the application of the ISO 9001 standard. It also provides references to other documentation to assist in complying with the requirements of the ISO 9001 standard.
- Clause 3 (Terms and definitions): Specifies that the terms and definitions given in ISO 9000:2015 apply to ISO 9001. This clause helps to clarify unfamiliar terms and resolves unnecessary disputes or conflicts.
- Clause 4 (Context of the organization): Establishes the context of the QMS. The organization achieves this by doing the following:
- Identifying relevant external factors (such as market-driven, local, or global environments, and competition) and internal factors (such as the values, culture, or performance of the organization) that can affect the quality of the product being delivered.
- Establishing the requirements and expectations of all stakeholders.
- Determining the scope of the QMS; whether it needs to be implemented organization-wide or for relevant business functions.
- Establishing, maintaining, and continually improving the QMS using a process approach.
- Clause 5 (Leadership): Dictates the activities required from top management for the success of a QMS, as follows:
- Being actively engaged in the operation of the QMS and ensuring that it is embedded in the organization’s processes
- Directing and establishing a quality policy that aligns with the business strategy to formalize the goals and commitment required from all parties
- Ensuring that roles, responsibilities, and authorities are defined for all employees and that everyone involved is made aware of them
- Clause 6 (Planning): Focuses on creating an action plan to address risks and opportunities. It requires the organization to do the following:
- Understand the risks and opportunities relevant to the scope of the organization, as required in clause 4
- Establish clear, measurable, and documented quality objectives with an action plan to monitor, control, and communicate risks and opportunities effectively
- Create a change-management plan to carry out changes to the system in a systematic way
- Clause 7 (Support): Stresses the basic high-level structure (HLS) requirements of bringing in the right resources, the right people, and the right infrastructure, as follows:
- Ensure adequate resources are provisioned, including employees, equipment, and IT systems
- Assess existing competence and fill any gaps with training and documentation
- Ensure all personnel are aware of the quality policy, understand the relevance of their roles, and know the implications of non-conformance
- Plan and implement an effective communication process, since communication, both external and internal, is key to the success of the system
- Document information to demonstrate compliance in any format that suits the organization, while implementing appropriate access controls for information security
- Clause 8 (Operation): Focuses on enabling the organization to meet customer requirements by executing plans and processes, as follows:
- Establish appropriate performance monitoring for the continual improvement of all functions
- Understand customer requirements for products and services through effective communication
- Create a design plan that includes all customer specifications, budget, drawings, and so on
- Select, evaluate, and re-evaluate all external entities sourced for procuring processes, products, or services
- Clarify product specifications and evaluation criteria to monitor whether the processes, products, or services provided by an external entity conform to the customer's requirements
- Systematically plan and execute all production operations to ensure quality control and to demonstrate the capability to deliver consistently to customer expectations
- Monitor and measure products and/or services to verify conformance to customer requirements, and have the evidence duly documented by authorized personnel before the product or service is released to the customer
- Control nonconforming output before it is released to the customer, and establish a course of action to handle nonconforming deliveries
- Clause 9 (Performance Evaluation): Details ways to measure and evaluate the QMS to ensure it is effective and sustainable:
- Utilize simple analysis methods, such as bar charts, or complex statistical process controls to analyze collected data to identify opportunities for improvement and to measure the effectiveness of the management system
- Establish a clear and consistent internal audit program to audit processes at regular frequencies to find nonconformities and trigger preventive measures for improvement
- Require top management to be involved in reviewing the quality-management system to ensure continuing suitability, adequacy, and effectiveness
- Clause 10 (Improvement): Requires the organization to determine and identify what improvement means with regard to the following cases:
- Establish means of improvement by reviewing processes, products, or services, and analyzing the results from the management system
- Begin corrective actions to prevent the recurrence of non-conformities by using root-cause analysis, problem-solving methods, and providing training to improve capabilities
- Build a feedback mechanism that requires the management system to utilize input such as corrective actions, internal audits, management reviews, and customer feedback for continual improvement
These clauses can be grouped in relation to Plan-Do-Check-Act (PDCA), since it is the operating principle of the ISO 9001 process approach, which drives continuous improvement in the organization. The PDCA principle combines planning, implementing, controlling, and improving the operations of a QMS, as shown in the following diagram:
Let's look at each stage of the PDCA cycle:
- Plan: Establish objectives and define the processes necessary to deliver the desired results
- Do: Implement the plan and execute the processes
- Check: Measure the results and compare them against the expected outcomes
- Act: Take corrective action on any differences to improve the process before the next cycle
Here's an example of the PDCA cycle for an SQA team. If the team wanted to increase the number of defects detected in each release sprint by 20%, it would first create a plan for making changes to its processes, after which the changes would be made and the process executed. After execution, checking the results shows a defect-detection ratio of only 15%, which is then acted on to make further changes. These changes are taken up in the next planning phase, and the cycle repeats until the goal of 20% is reached.
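The defect-detection example can be sketched as a simple loop; the 2% improvement per cycle and the starting ratio are illustrative assumptions:

```python
def pdca_cycle(current_ratio, target_ratio, improvement_per_cycle):
    """One Plan-Do-Check-Act pass: apply a process change (Do),
    measure the result (Check), and decide whether another cycle
    is needed (Act)."""
    measured = min(current_ratio + improvement_per_cycle, target_ratio)
    return measured, measured >= target_ratio

# Starting from an observed 15% defect-detection ratio, iterate
# until the 20% target is met.
ratio, done, cycles = 15.0, False, 0
while not done:
    ratio, done = pdca_cycle(ratio, 20.0, improvement_per_cycle=2.0)
    cycles += 1
```

In reality, each cycle's improvement is measured, not assumed, which is why the Check stage feeds directly into the next Plan.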
CMMI is a set of guidelines that enables organizations to produce good-quality software and improve their performance. CMMI was developed mainly to assess an organization's ability to take on large development projects for the US Department of Defense.
CMMI released version 2.0 of the model in March 2018, as an update from version 1.3. CMMI v2.0 is divided into 4 categories and 10 capability areas, with 25 practice areas.
Now let's understand the categories and the practice areas:
- Doing: This category deals with designing and developing high-quality products that adhere to customer needs while reducing supply-chain risks. The Doing category includes four capability areas with 10 practice areas, as follows:
- Ensuring quality (ENQ):
- Developing and managing requirements: Obtaining requirements, ensuring the mutual understanding of stakeholders, and aligning requirements, plans, and work products.
- Process quality assurance: Verifying and enabling the improvement of the quality of the processes performed and the resulting products.
- Verification and validation: Processes for this practice area should do the following:
- Verify that the selected solutions and components meet their requirements
- Validate that the selected solutions and components fulfill their intended use in their target environments
- Peer review: Utilize subject matter experts (SMEs) and peers to review the product to identify and address issues.
- Engineering and Developing Products (EDP):
- Delivering and Managing Services (DMS):
- Selecting and Managing Suppliers (SMS):
- Managing: This category deals with improving staff productivity while managing disruptions from the Porter’s Five Forces model to achieve speed-to-market. This category includes three capabilities with seven practice areas, as follows:
- Planning and Managing Work (PMW):
- Estimating: Forecasting the Iron Triangle factors needed to produce a quality product or solution.
- Planning: Developing plans describing delivery processes based on the standards and constraints of the organization. This includes budget, schedule, and resources, as well as stakeholders and the development team.
- Monitor and Control: Tracking the project's progress in order to apply appropriate controls if the project deviates from the plan.
- Managing Business Resilience (MBR):
- Risk management and opportunity management: Identifying, recording, and managing potential risks and opportunities
- Incident resolution and prevention: Analyzing nonconformance to find the root cause and create a plan to prevent the event from recurring
- Continuity: Establishing contingency plans for sustaining operations during emergencies
- Managing the Workforce (MWF):
- Enabling: This category deals with securing stakeholder buy-in and assuring product integrity. It includes one capability with three practice areas, as follows:
- Supporting Implementation (SI):
- Causal analysis and resolution: Understanding the root cause of all results and acting to prevent the recurrence of nonconformities and/or acting to ensure conformities
- Decision analysis and resolution: Making and recording decisions using a recorded process that analyzes alternatives
- Configuration management: Managing the integrity of deliveries using version control, change control, and appropriate audit mechanisms
- Improving: This category deals with ensuring that performance goals support business needs while establishing sustainable efficiencies. It includes two capabilities and five practice areas, as follows:
- Improving Performance (IMP):
- Process management: Managing and implementing the continuous improvement of processes and infrastructure to identify the most beneficial process improvements that support accomplishing business objectives in a sustainable way
- Process asset development: Recording and maintaining the list of processes used to perform the work
- Managing performance and measurement: Managing performance using measurement and analysis to achieve business objectives
- Sustaining Habit and Persistence (SHP):
To learn more and get updates about the new CMMI v2.0, please visit https://www.cmmiinstitute.com/cmmi/model-viewer.
The previously mentioned categories and practice areas are essentially factors for improving an organization's business performance. The rank an organization achieves, based on how it has implemented those practice areas, is called its maturity level.
The following diagram shows the levels of software-process maturity. Based on software-process maturity, an organization can be at one of these six maturity levels:
Let's look at these maturity levels in detail:
- Maturity Level 0 (Incomplete): Organizations at this level do not have any defined processes. These organizations usually work on ad hoc procedures, and any positive outcomes are the result of chance. Work in these organizations may or may not get completed.
- Maturity Level 1 (Initial): Organizations at this level are characterized by last-minute chaos in terms of delivery due to a lack of clarity. Work in these organizations gets completed but its success is dependent on one or a few highly competent people. In most cases, work is often delayed and over-budget.
- Maturity Level 2 (Managed): Organizations at this level follow a well-defined process at the project level. Every project is planned and executed in a systematic way. Every activity is measured and controlled to be managed and improved upon later.
- Maturity Level 3 (Defined): Organizations at this level have a well-defined process across the organization, and processes at the project level are derived from ones defined at the organizational level. These organizations have clearer definitions of processes compared to Level 2 organizations, and targets to achieve performance objectives at both the project and organization levels. Efforts are also made to measure and continuously improve process definitions.
- Maturity Level 4 (Quantitatively Managed): Organizations at this level build on Level 3 practices, and use statistical and other quantitative techniques to understand process performance and product quality. Utilizing scientific quantitative tools helps the organization identify and predict variations, which provides agility to improve and achieve quality and performance objectives.
- Maturity Level 5 (Optimizing): Organizations at Level 5 build on Level 4 practices and utilize quantitative techniques to continuously optimize their process and product performance. These organizations are flexible and able to pivot, thus providing a platform for agility and innovation.
To learn more about maturity levels and the adoption of the CMMI in your organization, make sure to check out https://consulting.itgonline.com/cmmi-consulting/cmmi-v2/.
There are various ways in which continuous improvement can be achieved in either CMMI or ISO 9001:2015 implementations. You can also integrate business improvement processes, such as Six Sigma quality control or the Consortium for IT Software Quality (CISQ) quality model. A clear understanding of business processes, and alignment with the company's goals and objectives, are necessary for the success of the system.
The Software Development Life Cycle (SDLC) is a process to develop and deliver software products or services that details the end-to-end phase, from designing, coding, and testing, to maintaining the product after release. The Software Testing Life Cycle (STLC) is a subset of the SDLC. Let's explore both the SDLC and STLC in detail.
The SDLC is a planned and organized process that divides software development tasks into various phases. These phases help the team to build a product that adheres to the factors of scope, time, cost, and quality. It also helps the project manager to monitor and control project activities at each stage and perform risk analysis effectively.
Any traditional SDLC comprises the following basic, but critical, phases:
- Requirement analysis: A software product exists to solve a problem for the customer. Understanding customer needs is hence essential to building one. Requirement analysis is the phase where this is achieved. This is the stage where we try to answer the question, what do we want to build and why?
- We create formal documentation (for example, a Business Requirement Document (BRD)) with customer needs, wants, and wish lists.
- We also identify the objectives, goals, risks, resources, and the technology being used, as well as its limitations.
- We need to specify what is within and what is out of scope for the selected iteration or version of the software that has been committed to for the customer. Usually, a team of client managers, business analysts, and project managers work together to prepare the final version of the business requirement document. Once it's ready and approved, the team moves on to the designing phase.
- Designing: Designing is done based on the requirement documents.
- In this phase, the team prepares high-level and low-level design documents, to further narrow down the broad requirements
- These documents help to establish a logical relationship between different components of the application, and define its architecture in detail, including a format, look and feel, and a UI mockup
- Once everything is ready, the design moves on to the team of developers, who start the actual coding
- Coding: The end of the designing phase kicks off the start of the coding phase, where developers start to build actual applications.
- In this phase, developers implement every component and the logical relationships between them, and build the architecture as specified in the high-level and low-level design documents. The main goal here is to produce an actual, workable software product or service, as designed in the mockup.
- Developers make sure to meet the customer requirements mentioned in the requirement documents.
- Developers also perform unit testing, a method for verifying that individual functions return the desired results when passed different input parameters.
- Once the code looks good and it's ready for the testers to verify, the developers deploy it in the test environment and make it available for the testers to start testing.
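The unit testing mentioned above can be sketched with Python's `unittest` module. The `calculate_discount` function and its discount rule are hypothetical, for illustration only:

```python
import unittest

def calculate_discount(order_total):
    """Hypothetical business rule: 10% off orders of $100 or more."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.10 if order_total >= 100 else 0.0

class TestCalculateDiscount(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

    def test_discount_at_threshold(self):
        self.assertEqual(calculate_discount(100), 0.10)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-5)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

A developer would run tests like these locally before deploying the build to the test environment.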
- Testing: This is where testers verify the application to confirm that it meets customer requirements. The main goal is to determine whether the solution works for customer needs without any issues or defects.
- As a part of the testing process, testers verify critical paths, verify all the necessary workflows, and perform happy path testing. However, they also try to break the application by passing invalid parameters in the form of negative testing.
- Using different testing types, they confirm whether the product or service is acceptable to the user, and think from the end user's perspective when validating every single text field, checkbox, links, and buttons—in short, every single UI component of the application.
- Testing the application under stress to see how it reacts under extreme conditions, and measuring how it performs as load is added, are parts of performance testing. Once the software has been thoroughly tested and all known defects are either closed or deferred, it's shipped to the end users.
- Maintenance: This is where errors reported by end users after the product or service goes live are fixed, and where their suggestions and/or enhancement requests are implemented. Releasing patches or upgrading the current version of the software or service is also part of the maintenance phase.
- Requirement analysis: Once the project gets initiated, the team actively starts working on gathering customer requirements. In this phase, testers, business analysts, and developers take a closer look at each specification requested by users. For requirement analysis in STLC, testers can do the following things:
- Testers need to break down broader and more complex requirements into smaller pieces to understand the testable requirements, the scope of the testing, and key verification points, and to identify the gaps in the requirements
- They can clarify their doubts regarding technology or software requirements, limitations, dependencies, and so on with the developers and business analysts, and can suggest improvements or highlight missing information that needs to be added to the requirements
- Testers can also highlight risks and develop risk-mitigation strategies before proceeding to the test-planning phase
- Test planning: This is where testers (usually lead testers or managers) plan testing activities and milestones based on various factors, such as time, scope, and resources that help them to track the progress of the project. Let's check out some activities that the tester performs during test planning:
- In this phase, testers plan test activities and strategies that can be used effectively during the subsequent testing phases
- Also, the scope of testing needs to be identified and parts out of scope should be marked as well
- They also need to decide on the testing techniques and types that will be implemented during the test-execution phase based on the current product requirements
- Along with that, an understanding of the tool's requirements and the number of resources required with their skill level can help them plan tasks better
Considering these factors and the timelines for the selected project, a tester can prepare an effective test plan that will fit into the project budget and help the team to create a quality product.
- Test designing: This is where the test team starts to break down each requirement and convert it into test scenarios. These test scenarios cover the happy path, positive testing, the critical paths that need to be verified, and functions that need to be verified with different sets of parameters. They also include negative scenarios, acceptance tests, and scenarios based on user-interaction workflows and data flows.
- Based on the type of application and the types of testing listed during the requirement analysis phase, testers can work on creating automated test scripts. Adding scenarios for stress, load, and performance testing can help testers to test the application better and find more defects.
- Once the scenarios are ready and reviewed, testers move on to preparing the test cases or test scripts (in the case of automation testing) in order to list the detailed steps.
- One scenario can have one or more test cases, whereas a requirement can be linked to one or more scenarios. This mapping is helpful when creating a Requirement Traceability Matrix (RTM).
- Environment setup: Establishing a separate test environment is always good practice. Keeping the test environment distinct from the development environment helps both testers and developers debug the code in a specific version and get to the root cause more quickly. It also gives developers a chance to make bug fixes in their own copy of the code and verify them in their environment, confirming that a fix works before sending it to the testers. This saves the time and effort needed to log defects and collect artifacts.
- When setting up the environment, testers need to ensure that they have configured the required version of the tool, the software, the hardware, and the test data.
- They also need to make sure that they have authorization to access the environment with the required roles to test the application, databases, and other tools required. The testing environment should mimic the end user's environment. This results in documenting the known behavior of the product and helps to manage expectations after delivery.
- Test execution: Once the code is ready and unit tested by the developers, it's deployed in the test environment so that testers can initiate the test-execution phase.
- The first test that testers perform is a smoke test to validate whether the software product or service caters to the basic requirements
- After the software passes the smoke test, testers can continue with the validation process, following the types of testing as planned during the test-planning phase
- During the execution phase, testers log an undesirable result as a defect. Once the defects have been fixed, testers need to retest the parts that have been changed and the part of the application that has not been changed, as part of regression testing
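The smoke-test gate described above can be sketched as follows; the check names and the lambda placeholders are hypothetical stand-ins for real validations:

```python
def run_suite(smoke_tests, full_tests):
    """Run smoke tests first; continue to the full suite only if every
    smoke test passes (the gate described in the test-execution phase)."""
    for name, test in smoke_tests:
        if not test():
            return f"smoke test failed: {name} -- build rejected"
    # Full validation run; failed checks become defects to log and retest
    return {name: test() for name, test in full_tests}

# Hypothetical checks standing in for real validations
smoke = [("app_loads", lambda: True), ("login_works", lambda: True)]
full = [("checkout_flow", lambda: True), ("report_export", lambda: False)]

print(run_suite(smoke, full))
```

Rejecting a build at the smoke stage saves the team from executing a full cycle against software that fails its most basic requirements.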
- Test reporting: It is very important for testers, leads, and managers to track and monitor the progress of the project consistently so that obstacles or risks can be identified early. It also helps the team stay agile in providing solutions and resolving problems.
- Reporting the test helps the stakeholders to know the status of the test execution after each iteration or test cycle.
- It also helps defect managers to identify blocked test cases that are dependent on a defect.
- Accordingly, its priority or severity can be changed so that it can help to progress test execution.
- At the end of all iterations, a final report is prepared with the number of defects found during the test execution phase, the number of defects closed or marked as deferred, and the number of test cases passed or marked N/A. Along with this report, all the artifacts are validated to make sure they have been added wherever needed.
- Closure: During the closure phase, test managers or test leads make sure that all the tests completed successfully, as per the schedule.
- Team leads or managers make sure that all the required deliverables and closure documents are approved and accepted as per the evaluation criteria, and signed off as part of the closure phase
We will be learning more about each phase in the STLC, along with its practical implementation in Jira and using its plugin, in the following chapters.
To ensure the quality of the product, we need to understand our application and its testing needs to make it more robust and bug-free. Based on the customer requirements and the type of product we are developing, we can come up with a list of the types of testing that are needed during the test planning phase of STLC.
In this section, we will be learning about the different testing types that can be used during the test-execution phase:
- Black-box testing: Pays attention to external behavior, specifications, and desirable end results produced by the application by passing a set of known input parameters, rather than to the internal structure of the code. The main goal here is to verify the software the way the end user will use it, without any knowledge of the internal workings of the system under test. Black-box testing helps testers to identify whether the software meets all stated and unstated requirements and behaves as the end user expects. There are various techniques that can be used in this testing type:
- Analysis of requirements specification: Confirms that the software behaves as specified in the requirement specification document, that it is reachable and available for end users to use, and that it behaves consistently and accurately. Testers prepare a traceability matrix, in which they confirm that their test scenarios cover all the stated requirements. We will be covering requirement traceability in detail in the following chapters.
- Positive testing and negative testing: Positive testing refers to validating all the positive scenarios; in short, happy path testing. It verifies whether the end-to-end workflows, or parts of the workflows, function as expected. Negative testing is the reverse of positive testing, where the intent is to show where the application does not behave as expected. In this case, testers must come up with a set of input parameters or conditions that the application cannot withstand, causing it to break. This is a very effective way to find loopholes in an application.
- Boundary-value analysis: When testing is done at the boundary level, or at the extreme limits (edges), it is referred to as boundary-value analysis. It is a very effective technique for finding defects. It's a condition where the limitations of the application's functions are identified and adding testing around those limitations gives positive or negative results. If it works around those conditions, that means precaution has been taken by developers, and if not, testers log it as a defect.
An example of boundary value would be a password field that accepts letters (A-Z) and numbers (0-9) with a minimum length of 6 and a maximum of 14 (that is, the validating condition: if length < 6 or length > 14, then throw an error). In this case, testers can try to test this field by creating a password with the following:
- 5 characters
- 6 characters
- 7 characters
- 13 characters
- 14 characters
- 15 characters
This helps testers to identify whether the application allows the user to create a password below or above the specified boundary range.
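A minimal sketch of this boundary-value check in Python, assuming the valid range of 6 to 14 alphanumeric characters described above (`is_valid_password` is a hypothetical validator, not a real API):

```python
def is_valid_password(password):
    """Hypothetical validator for the rule above: 6-14 characters,
    letters and digits only."""
    if not (6 <= len(password) <= 14):
        return False
    return password.isalnum() and password.isascii()

# Boundary-value inputs: just below, on, and just above each edge
pool = "aB3dE5gH8jK1mNp"  # 15 valid characters to slice from
for length in (5, 6, 7, 13, 14, 15):
    print(length, is_valid_password(pool[:length]))
```

Running this prints `False` for lengths 5 and 15, and `True` for 6, 7, 13, and 14, exactly the pattern a tester expects when probing each edge of the range.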
- Equivalence partitioning: Involves creating a small set of input values that can help generate a different set of output results. This helps with test coverage and reduces the work of the tester by verifying every single input value. This partition can consist of a set of the same values, different values, or a set of values with extreme conditions.
For example, an insurance company has three types of subscription offers based on the users' age: the price is $100 per month if they're under 18, $250 if they're aged in the range 19-40, and $150 if they're older than 41. In this case, the input set of values can consist of test data for users aged in the ranges 0-18, 18-20, 19-39, 35-40, 40-42, and above 41. It can also include some invalid input parameters, where the age is 0, -1, a set of letters (ABCD), a decimal value (33.45), or a three- or four-digit value (333 or 5,654), and so on.
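As a sketch, the pricing rule and its partitions might look as follows in Python. Note that the stated rule leaves ages 18 and 41 ambiguous, so this hypothetical `subscription_price` function assumes 18 falls in the first band and 41 in the third:

```python
def subscription_price(age):
    """Hypothetical monthly price by age band; assumes ages 18 and 41
    close the gaps in the stated rule: <=18 -> $100, 19-40 -> $250, >40 -> $150."""
    if not isinstance(age, int) or age <= 0:
        raise ValueError(f"invalid age: {age!r}")
    if age <= 18:
        return 100
    if age <= 40:
        return 250
    return 150

# One representative input per partition stands in for the whole class
partitions = {10: 100, 18: 100, 19: 250, 40: 250, 41: 150}
for age, expected in partitions.items():
    assert subscription_price(age) == expected

# Invalid partitions should all be rejected
for bad in (0, -1, "ABCD", 33.45):
    try:
        subscription_price(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
```

Testing one representative per partition, plus the boundaries between them, gives broad coverage without verifying every possible age.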
- White-box testing: This is done at the code level for any software application. It involves verifying functions, loops, statements, the code's structure, the flow of data, and expected output results based on a specified set of input values, as well as the internal design. A part of it is covered during the code-review process and unit testing to ensure code coverage as per the specified requirements. Statement coverage, path coverage, condition coverage, and function coverage are all components of code coverage that help the reviewer to review every aspect of the code. With the help of white-box testing, we can identify the following things:
- Unreachable (dead) parts of the code
- Variables (local or global) that have never been used or that store invalid values
- Memory leaks, where memory allocation and deallocation for variables or pointers has not been properly taken care of
- Whether a function returns values in the right type and expected format
- Whether all the required variables, pointers, classes, and objects are initialized as expected
- Whether the code is readable and follows the organization's coding conventions
- Whether the newly-added code functions as expected with the existing part of the code
- Whether the data flow is sequential and accurate
- Its efficiency and performance to optimize the code
- Resource utilization
- Whether all the configuration requirements have been met and include all the dependencies to run the component or the entire application
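Dead code and branch coverage, two of the checks listed above, can be illustrated with a toy example (the `classify` function is hypothetical):

```python
def classify(n):
    """Toy function whose branches a white-box suite must all exercise."""
    if n > 0:
        return "positive"
    elif n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "unreachable"  # dead branch: the three checks are exhaustive for integers

# Branch coverage: one input per reachable branch...
assert classify(5) == "positive"
assert classify(-5) == "negative"
assert classify(0) == "zero"
# ...but no integer input can reach the final branch, so a white-box review
# (or a coverage tool such as coverage.py) flags it as dead code.
```

Black-box testing alone would never reveal the dead branch, because it produces no externally visible behavior; only inspecting the code exposes it.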
- Integration testing: Any piece of software is made of different modules or components and/or is used along with other software. In order to make sure that two or more individual and independent units or components work together seamlessly, testers perform integration testing. This confirms that data flows smoothly across the different components of a system, or between two separate systems. An example of integration testing would be an online shopping website where you select the item that you want to purchase and pay online using the internet banking option, using your bank credentials to make the payment.
- Performance testing: The performance of an application is directly proportional to its business growth and value. Slow-performing applications are usually avoided by customers, which is why performance testing is important. It focuses on the factors that affect the performance of an application, product, or service, such as response time to perform any transaction or even load a page, throughput, and availability when a number of people are accessing it at the same time. On the other hand, if there are other jobs depending on one particular job that becomes slow or unresponsive, it delays all the dependent jobs and makes the situation even worse. Requirement specification documents should specify acceptable performance, limitations, and breaking situations. Performance testing can further be categorized into two components:
- Stress Testing: Stress testing involves testing the system under test (SUT) under stress and reaching its breaking point. This helps testers to know under what circumstances the system will break and become unresponsive.
- Load Testing: Load testing involves testing the SUT under a specified heavy load, in order to confirm that it can withstand the load and function as expected. An example would be a website that functions properly when 1,000 users access it simultaneously to upload photos of up to 2 GB each, but breaks when more than 1,100 users access the website or when the uploaded data exceeds 2 GB. In this case, testers can create sets of concurrent users to access the website simultaneously and upload data greater than 2 GB, for example, using 1,110 users, 1,200 users, and so on. The point at which the system becomes unresponsive and stops working is its breaking point; verifying the range over which it can still respond and work is part of load testing.
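A load test along these lines can be sketched with concurrent workers. The `upload` stub and the 1,000-user capacity below are hypothetical stand-ins for a real system under test, which would normally be driven by a dedicated tool such as Apache JMeter:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_USERS = 1000  # hypothetical capacity from the example above

def upload(user_id, active_users, size_gb):
    """Stand-in for a real upload request against the SUT."""
    if active_users > MAX_CONCURRENT_USERS or size_gb > 2:
        return (user_id, "failed")
    return (user_id, "ok")

def load_test(n_users, size_gb):
    """Simulate n_users concurrent uploads and summarize the failures."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        futures = [pool.submit(upload, u, n_users, size_gb) for u in range(n_users)]
        results = [f.result() for f in futures]
    failed = sum(1 for _, status in results if status == "failed")
    return f"{n_users} users, {size_gb} GB: {failed} failures"

print(load_test(1000, 2))  # within capacity
print(load_test(1100, 2))  # past the breaking point
```

Ramping `n_users` upward in steps, as in the 1,110 and 1,200 user runs mentioned above, is how testers locate the breaking point empirically.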
- Regression testing: The main point of regression testing is to verify that newly-developed code or an updated version of code has no adverse effects on the existing and functioning part of the application. Sometimes, a newly designed part of an application or feature works perfectly but it breaks existing working functions. This is where regression testing comes into the picture.
Regression testing is mostly done at the end of test cycles to ensure that the entire application, after multiple rounds of code changes due to bug fixes or upgrades of any component of the code or database, still gives the desired results. Most of the time, testers use automated scripts to perform regression testing repetitively on the application. Tools such as HP-UFT, TestComplete, Eggplant, or Selenium with JUnit or NUnit are very useful for this type of testing.
- Acceptance testing: Confirms whether the software product or service is acceptable and functions as per the end user's expectations. Most organizations have user acceptance testing (UAT) as a separate phase of testing, which is generally conducted by a small group of end users or clients. The goal is to verify that the software product functions and meets customer needs, is safe to use, and has no ill effects on end users. It gives the development team an opportunity to incorporate any missing features or enhancement requests before releasing the product to a wider audience. At this stage, the client can still reject the product or its features. When testing is carried out within the organization, mimicking a real-world environment setup, it's referred to as alpha testing. When the acceptance test is carried out by end users in their own environments, it's referred to as beta testing. In this type of testing, the development team is not involved with the actual end users. It is good practice to share a beta version of a product with a relatively small group of actual end users so that they can verify the product, its functionality, and its features.
However, when releasing a beta version, it's important to list the hardware or software requirements. Along with that, a dedicated team of support executives should be made available to address customers' queries. Also, this version of the software could be made available for free for a limited time (generally, two weeks to a month) to encourage more people to participate in the actual test.
In software testing, verifying test scenarios with valid or invalid parameters, and different sets of input values is crucial to make sure that it behaves as per the designed test. In order to validate end-to-end scenarios and happy path workflows, we need to create test data. However, sometimes, it's a requirement of the test to bring the system to the initial level from where testing can begin. All these things can be done as a part of the test data preparation phase.
Depending on system requirements, testers can create different sets of authorized and unauthorized users with different roles, such as admin, or customer support executive, all of whom have different sets of permissions to access the application. Creating a concurrent set of users to access the application is also part of test data preparation.
Testers may also have to use different types of files, such as .jpeg files, to import data in order to make sure that the application works, or doesn't work, as defined in the test case. In these files, they can add valid or invalid users, leave some fields blank, or add unacceptable values that will break the application or throw an error.
Testers also use these files as an input for their automated test scripts, which, in turn, do the job of test validation by inserting test data read from these input files.
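Preparing such an input file can be sketched as follows. The usernames, roles, and validation rule are hypothetical, and an in-memory buffer stands in for a real file on disk:

```python
import csv
import io

# Hypothetical test-data rows: valid users mixed with a blank field and an
# unacceptable value, to exercise both positive and negative paths.
rows = [
    {"username": "alice", "role": "admin",    "age": "34"},
    {"username": "bob",   "role": "support",  "age": "29"},
    {"username": "",      "role": "customer", "age": "41"},    # blank field
    {"username": "eve",   "role": "customer", "age": "ABCD"},  # invalid age
]

buffer = io.StringIO()  # stands in for a real file on disk
writer = csv.DictWriter(buffer, fieldnames=["username", "role", "age"])
writer.writeheader()
writer.writerows(rows)

# An automated test script would later read this file back as its input data
buffer.seek(0)
test_data = list(csv.DictReader(buffer))
invalid = [r for r in test_data if not r["username"] or not r["age"].isdigit()]
print(f"{len(test_data)} rows loaded, {len(invalid)} expected to fail validation")
```

Keeping the expected-to-fail rows clearly marked in the data file helps the script distinguish a genuine defect from a deliberately invalid input.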
Managing test artifacts involves storing and managing the evidence that has been generated as a part of the test execution phase, or it can also be a set of deliverables generated after any phase of the SDLC.
These artifacts are very useful when managed properly:
- Artifacts generated during defect logging and retesting save time for both developers and testers, preventing them from having to debug every part of the code or reproduce tests using the specified test data, build version, and environment. Such artifacts include error log files, screenshots, database queries with their result sets, input parameters, the URL of the application used during testing, the environment, the date tested on, the build number, and so on.
- Deliverables generated after the execution of each phase in the SDLC, such as project charters, BRDs, test plans, RTMs, and test execution reports, often serve as input to the subsequent phase and help teams to focus on the objective and track the progress of a project.
- Other types of artifacts, such as code review or inspection reports, project performance reports, and lessons-learned reports, can be useful across the organization and can be used by other team members to make changes to their current strategy. These documents can be part of an organization's knowledge base.
- Training documents and templates (for project management plans, project charters, or requirement specifications) are also part of knowledge management; they serve the training needs of new recruits, helping them learn more about the organization, its products, and the standards it follows.
- The end user's manual or product-specification documents are usually shared with end users, by the organization, to help them use software or services effectively.
The involvement of testers is not limited to preparing test data; they also help prepare and build the knowledge base of an organization, making sure that all the information in the previously listed documents is up to date and accurate.
In this chapter, we discussed software quality assurance in detail. Let's summarize the important points—a quality product refers to products that meet customer requirements. The ISO/IEC 25010:2011 quality model enumerates 13 characteristics that help us to assess the quality of products. Producing quality products requires a combination of complementary skills and roles as part of the product-development team. Scope, time, cost, and quality are intertwined, and hence a balance between them is essential when developing a product that caters to an organization's capabilities as well as customer satisfaction. A test-to-break attitude is necessary for a tester to be successful in their career. We looked at the thought process a tester needs to bring to the table to be proficient at the job. A quality management system addresses the processes to be followed to develop quality products. We discussed ISO 9001:2015 and CMMI v2.0 in detail. We looked at the five stages of the SDLC and learned how the STLC fits into the picture. We discussed the seven types of testing that a tester can utilize when planning tests based on customer and product needs. In the final section, we learned about how test data and artifacts are prepared, managed, retained, and shared for effective test management.
In the next chapter, we will look at project organization in Jira and explore the Zephyr, Test Management, and synapseRT plugins, which will be used to implement test management in Jira.