The following is a conceptual overview of some fundamental testing terminology and principles used in day-to-day testing activities.
A test case is a scenario that is executed by a tester or by an automation tool, such as Test Studio, for a software testing purpose, such as uncovering potential errors in the system. It contains:
- Test case identifier: This identifier uniquely distinguishes a test case.
- Priority: The priority holds a value indicating the importance of a test case, so that the most important ones are executed first.
- Preconditions: The preconditions describe the initial application state in which the test case is to be executed. They include actions that need to be completed before starting the execution of the test case, such as performing certain configurations on the application, or other relevant details about the application's state.
- Procedure: The procedure of a test case is the set of steps that the tester or automated testing tool needs to follow.
- Expected behavior: It is important to set an expected behavior resulting from the procedure. How else would you verify the functionality you are testing? The expected behavior of a test case is specified before running a test, and it describes the correct response of the system to your input. When you compare the actual response of the system to the preset expected behavior, you determine whether the test case was a success or a failure.
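The fields listed above can be captured in a simple record. The following sketch is purely illustrative and tool-independent; the class and field names are our own, not part of any testing framework:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str                  # uniquely distinguishes the test case
    priority: int                    # lower number = higher importance
    preconditions: list = field(default_factory=list)
    procedure: list = field(default_factory=list)  # ordered steps to follow
    expected_behavior: str = ""

# Example test case mirroring the ATM scenario used later in this article
tc = TestCase(
    identifier="ATM-398",
    priority=1,
    preconditions=["User account balance is $1000"],
    procedure=["User inserts a card",
               "User enters the pin",
               "User attempts to withdraw a sum of $500"],
    expected_behavior="Operation is allowed",
)
print(tc.identifier)  # → ATM-398
```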
Executing a test case
When executing a test case, you add at least one field to its description: the actual behavior, which logs the response of the system to the procedure. If the actual behavior deviates from the expected behavior, an incident report is created. This incident report is further analyzed and, if a flaw is identified in the system, a fix is provided to solve the issue. An incident report includes the details of the test case in addition to the actual behavior describing the anomalous events. The following example demonstrates the basic fields found in a sample incident report. It describes a transaction carried out at a bank's ATM:
- Incident report identifier: ATM-398
- Preconditions: User account balance is $1000
- Procedure: It includes the following steps:
- User inserts a card.
- User enters the pin.
- User attempts to withdraw a sum of $500.
- Expected behavior: Operation is allowed
- Actual behavior: Operation is rejected, insufficient funds in account!
- Procedure results: Fail
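The pass/fail decision above boils down to comparing the actual behavior against the expected one. A minimal sketch of that comparison, with an illustrative function name of our own choosing:

```python
def procedure_result(expected: str, actual: str) -> str:
    """Return 'Pass' when the actual behavior matches the expected one,
    'Fail' otherwise; a 'Fail' would trigger an incident report."""
    return "Pass" if actual == expected else "Fail"

# Values taken from the ATM-398 incident report above
expected = "Operation is allowed"
actual = "Operation is rejected, insufficient funds in account!"
print(procedure_result(expected, actual))  # → Fail
```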
The exit criteria
The following definition appears in the ISTQB (International Software Testing Qualification Board) glossary:
"The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task, which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]"
The pesticide paradox
Software testing is governed by a set of principles out of which we list the pesticide paradox. The following definition appears in the ISTQB glossary:
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this, "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Element recognition is a pillar of automated test execution, as the tool used cannot perform an action on an object unless it recognizes it and knows how to find it. Element identification is important in making the automated scripts less fragile during execution. This topic is covered later in this article.
The following set of fundamental testing phases is based on their definition by ISTQB. Other organizations might name them differently or include different activities in them.
- Test planning and control: Test objectives and activities are set during test planning and a test plan is created. It can include:
- Test strategy: The general approach to testing the application
- Test tools: Reporting tools, automated testing tool, and so on
- Test techniques: These will be discussed in the next section
- Human resources: The personnel needed to carry out the testing
As for test control, it should be exercised during all the phases to monitor progress and amend the test plan as needed.
- Test analysis and design: During this phase, the system specifications are analyzed and test cases, along with their data, are designed. They are also prioritized and the testing environment is identified.
- Test implementation and execution: When implementing your tests and before executing them, you should set up your environment, generate the detailed test cases, run them, and then log and report the results of your findings.
- Evaluating the exit criteria and reporting: Evaluating exit criteria is important in order to know when to stop testing. Occasionally, we find that more tests are needed if the risk in one or more application areas hasn't been fully covered. Once it is decided to stop test implementation and execution, reports are generated and submitted to the stakeholders involved.
- Test closure activities: The test closure activities are designed to facilitate reusing of the test data across different versions and products, as well as to promote evaluating and enhancing the testing process. These activities include saving all the test data and testware in a secure repository, evaluating the testing process, and logging suggested amendments.
Ranging from easy and straightforward to complex and machine-computed, many testing techniques guide the design and generation of your test cases. In this section, we will describe the most basic of these techniques based on the ISTQB standards:
- Equivalence classes: By definition, an equivalence class is a single class of inputs generating an equivalent output. Conversely, it could be a single class of outputs generated from equivalent inputs. For example, imagine you need to test a simple numeric field that accepts values from 0 to 100. During your testing, you cannot possibly exhaust all the values, so you would identify one valid equivalence partition and three invalid partitions as follows:
For valid partitions:
- Values between 0 and 100 inclusive
For invalid partitions:
- Values less than zero
- Values greater than 100
- Nonnumeric inputs
As a result, you now choose tests from the four equivalence classes instead of testing all the options. The value of equivalence classes analysis lies in the reduction of testing time and effort.
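The four partitions above can be made concrete in code. The following sketch, with an illustrative function name of our own, classifies a raw input for the 0-to-100 numeric field into its equivalence class, so that one representative value per class suffices for testing:

```python
def equivalence_class(raw: str) -> str:
    """Classify a raw input for a numeric field accepting 0..100
    into one of the four partitions described above."""
    try:
        value = float(raw)
    except ValueError:
        return "invalid: non-numeric"
    if value < 0:
        return "invalid: less than zero"
    if value > 100:
        return "invalid: greater than 100"
    return "valid: 0..100"

# One representative test input per partition instead of every value:
for sample in ["50", "-3", "250", "abc"]:
    print(sample, "->", equivalence_class(sample))
```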
- Boundary values: When choosing boundary value analysis, you study the limits of your system input. Typically, they are the logical minimum and maximum values in addition to technical or computational limits, such as register sizes, buffer sizes, or memory availability. After determining your logical and technical limits, you would test the system by inputting the actual boundary, the boundary decremented by the smallest possible unit, and the boundary incremented by the smallest possible unit.
Assuming our system is an application form where you need to enter your first name in one of the fields, you can proceed with a boundary value analysis on the length of the first name string. Considering that the smallest input is one character, and the largest input is one hundred, our boundary values analysis will lead to a test for strings having the following number of characters: zero (empty input), one, two, ninety-nine, one hundred, and one hundred and one.
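The boundary values for the first-name example can be derived mechanically. This is a small sketch under the assumptions stated above (minimum length 1, maximum length 100, smallest unit 1); the helper name is our own:

```python
def boundary_values(minimum: int, maximum: int, step: int = 1):
    """Return the boundary-value test inputs for a range: each boundary,
    plus the boundary decremented and incremented by the smallest unit."""
    return sorted({minimum - step, minimum, minimum + step,
                   maximum - step, maximum, maximum + step})

# First-name length: 1..100 characters
print(boundary_values(1, 100))  # → [0, 1, 2, 99, 100, 101]
```

Note that 0 corresponds to the empty input mentioned above.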
- Decision tables: In certain systems, many rules may be interacting with each other to produce the output, such as a security matrix. For instance, let's assume your system is a document management system. The possible factors determining whether a user will have view rights or not are as follows:
- Belonging to user groups with a permission set for each group
- Having an individual permission for each user
- Having access to the documents' file path
These factors are called the conditions of the decision table, where the actions might be reading, editing, or deleting a document. A decision table would allow you to test and verify every combination of the listed conditions. Certain rules might simplify your table, but they are outside the scope of this article. The resulting decision table for the previous example of document management system is illustrated as follows:
Decision table for user rights
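A decision table can be exercised in code by enumerating every combination of the conditions. In the following sketch, the rule itself is an assumption made for illustration only (viewing requires file-path access together with either a group or an individual permission); your actual system's rule may differ:

```python
from itertools import product

def view_allowed(group_perm: bool, individual_perm: bool,
                 path_access: bool) -> bool:
    # Assumed rule for illustration: viewing requires file-path access
    # together with either a group or an individual permission.
    return path_access and (group_perm or individual_perm)

# Enumerate every combination of the three conditions, as a decision
# table would, to make sure each rule is tested:
for combo in product([True, False], repeat=3):
    print(combo, "->", "view" if view_allowed(*combo) else "no view")
```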
- State transition diagram: In some systems, not only do the actions performed determine the output and the routing of the application, but also the state in which the system was in before these actions. For such systems, a state transition diagram is used to generate test cases.
- Firstly, the state transition diagram is drawn with every state as a circle and every possible action as an arrow. Conditions are written between square brackets and the output is preceded by a forward slash.
- Secondly, each action represented in the diagram is attempted from an initial state.
- Thirdly, test cases are generated by looping around the state transition diagram and by choosing different possible paths while varying the conditions.
The expected behavior in state transition test cases comprises both the output of the system and the transition to the next expected state. In the following sample diagram, you will find the state transition diagram of a login module:
State transition diagram for user authentication to the system
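A state transition diagram can be encoded as a table mapping (state, action) pairs to (next state, output) pairs, which makes it easy to generate and check test paths. The states, actions, and outputs below are hypothetical stand-ins for a login module, not taken from the diagram above:

```python
# Hypothetical transition table: states are the circles in the diagram,
# (state, action) pairs are the arrows, and each entry yields the next
# state together with the output.
TRANSITIONS = {
    ("LoggedOut", "valid_pin"):   ("LoggedIn",  "welcome screen"),
    ("LoggedOut", "invalid_pin"): ("LoggedOut", "error message"),
    ("LoggedIn",  "logout"):      ("LoggedOut", "goodbye screen"),
}

def run_path(start, actions):
    """Walk one path through the diagram; the expected behavior of each
    step is both the output and the next state."""
    state, outputs = start, []
    for action in actions:
        state, output = TRANSITIONS[(state, action)]
        outputs.append(output)
    return state, outputs

print(run_path("LoggedOut", ["invalid_pin", "valid_pin", "logout"]))
# → ('LoggedOut', ['error message', 'welcome screen', 'goodbye screen'])
```

Test cases are then generated by choosing different paths through the table while varying the conditions, as described above.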
About Test Studio
This section lists the features provided by Test Studio:
- Functional test automation: The Test Studio solution to functional test automation is explored through the following topics: building automated tests, using translators and inserting verifications, adding coded steps, executing tests and logging, adding custom logging, inserting manual steps, assigning and reading variables in tests, debugging errors, and integrating automated test creation with Visual Studio.
- Data-driven architecture: Test Studio offers built-in integration with data sources, allowing you to apply the data-driven architecture during test automation. This feature includes binding tests to SQL, MS Excel, XML, and local data sources, creating data-driven verification, and integrating data-driven architecture with normal automated execution contexts.
- Element recognition: Element recognition is a powerful feature in Test Studio from which it derives additional test reliability. Element recognition topics will be covered through Test Studio Find expressions for UI elements, element repository consolidation and maintenance, and specialized Find chained expressions.
- Manual testing: In addition to automated testing, Test Studio guides the manual testing process. Manual testing includes creating manual test steps, integrating with MS Excel, converting manual tests to hybrid, and executing these two types of tests.
- Organizing the test repository and source control: Tests within the Test Studio project can be organized and reorganized using the features embedded in the tool. Its integration with external source control systems also adds to this management process. The underlying topics are managing tests under folders, setting test properties, and binding your test project to source control from both Test Studio and Visual Studio.
- Test suites execution and reporting: Grouping tests under test suites is achievable through the Test Studio test lists. This feature comprises creating static and dynamic test lists, executing them, logging their execution result, viewing standard reports, and extending with custom reports.
- Extended libraries: Extending testing framework automation functionalities for Test Studio is an option available through the creation of Test Studio plugin libraries.
- Performance testing: In Test Studio, nonfunctional testing is firstly addressed with performance testing. This feature covers developing performance tests, executing them, gathering performance counters, and analyzing and baselining execution results.
- Load testing: Nonfunctional testing in Test Studio is augmented with another type of test, which is load testing. This topic covers configuring Test Studio load testing services, developing load tests, recording HTTP traffic, creating user profiles and workloads, monitoring machines, gathering performance metrics, executing load tests, and creating custom charts.
- Mobile testing: Test Studio is extended with a version specialized in testing iOS web, native, and hybrid apps. It includes preparing applications for testing within Test Studio, creating automated tests, inserting verifications on UI elements, registering applications on the web portal, syncing test projects, sending and viewing built-in feedback messages, sending and viewing crash reports, and managing and monitoring registered applications through web portals.
While reading this article, you will find a problem-based approach to automating tests with Test Studio.
The following general approach might vary slightly between the different examples:
- General problem: We will start by stating the general problem that you may face in real-life automation
- Real-life example: We will then give a real-life example based on our previous experience in software testing
- Solutions using the Test Studio IDE: Having described the problem, a solution using the Test Studio IDE will be provided
- Solutions using code: Finally, some solutions will be provided by writing code.
Setting up your environment
You will get a list of files with this article to help you try the examples properly. The following is an explanation of how to set up the environment to practice the automation examples against the applications under test.
The File Comparer application
To configure this application environment, you need to:
- Run the FC_DB-Database Scripts.sql file in SQL Server Management Studio.
- Open the settings.xml file from the solution bin and edit the ConnectionString parameter.
The data source files for these reports can be found in the ODCs folder. In order to properly display the charts in the workbook:
- Edit the ConnectionString parameter inside the ODC extension files.
- Bind the pivot tables inside the Excel workbook to these files as follows:
- The Execution Metrics for Last Run sheet to the FC_DB-L-EMLR.odc file
- The Execution Metrics over Time sheet to the FC_DB-MOT.odc file
- The Feature Coverage sheet to the FC_DB-FC.odc file
- The Test Execution Duration sheet to the FC_DB-TED.odc file
The following are the additional files used in this article:
- The Test Studio Automated Solutions folder contains the Test Studio automated solution for the examples in the article
- The TestStudio.Extension folder is a Visual Studio solution and it corresponds to the Test Studio extension library.
Other reference sources
Refer to Telerik online documentation for:
- Test Studio standalone and VS plugin editions found at http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/test-execution/test-list-settings.aspx
- Mobile testing using Test Studio extension for iOS testing found at http://www.telerik.com/automated-testing-tools/support/documentation/mobile-testing/testing.aspx
Also, for software testing and automation concepts you can refer to:
- ISTQB-BCS Certified Tester Foundation Level book, Foundations of Software Testing by Dorothy Graham, Erik Van Veenendaal, Isabel Evans, and Rex Black
- ISTQB glossary of testing terms 2.2
This article briefly introduced Test Studio and its features, giving you a basic understanding of what Test Studio is along with some useful links.