Editing load tests
A load test can contain one or more scenarios, and these scenarios can be edited at any time during the design. To edit a scenario, select the scenario you want to edit and right-click to edit the test mix, browser mix, or network mix in the existing scenario, or to add a new scenario to the load test. The context menu provides different editing options, as shown here:
The Add Scenario... option opens the same wizard we used earlier to add a scenario to the load test. We can keep adding as many scenarios as we need. The scenario properties window also lets us modify properties such as the Think Profile, the Think Time Between Test Iterations, and the scenario Name.
The Add Tests... option is used for adding more tests to the test mix from the tests list in the project. We can add as many tests as required to be part of the test mix.
The Edit Test Mix... option is used for editing the test mix in the selected scenario. This option will open a dialog with the selected tests and distribution.
Using this Edit Test Mix window we can:
- Change the test mix model listed in the drop-down.
- Add new tests to the list and modify the distribution percentage.
- Select an initial test that executes before the other tests for each virtual user. The browse option next to it opens a dialog showing all the tests in the project, from which we can select the initial test.
- Similar to the initial test, we can choose a final test to run at the end of the test execution. The same browse option is used here to select the test from the list of available tests.
The Edit Browser Mix... option opens the Edit Browser Mix dialog, from where you can select new browsers to include in the browser mix and delete or change the browsers already selected in the mix.
The Edit Network Mix... option opens the Edit Network Mix dialog, from where you can add new network types to the list and modify the distribution percentages. We can also change or delete the existing entries in the network mix.
To change the existing load pattern, select the Load Pattern under Scenarios and open the Properties window, which shows the current pattern's properties. You can change the properties or choose any pattern from the available patterns in the list, as shown in the following screenshots:
A load test can have multiple run settings, but only one can be active at any time. To make a run setting active, select it, right-click, and select Set as Active. The properties of the run settings can be modified directly in the Properties window; they include results storage, SQL tracing, test iterations, timings, and the web test connections.
Adding context parameters
Web tests can have context parameters added to them. A context parameter is used in place of a value that is common to multiple requests in the web test. For example, if every request uses the same web server name, that name can be replaced by a context parameter. Then, whenever the web server changes, we can simply change the context parameter value, and the new server name is applied to all the requests.
We know that a load test can contain both web tests and unit tests. If the web server for the load test differs from the one used by the web tests, we would end up modifying the context parameter values in every web test used in the load test. Instead, we can add a context parameter to the load test with the same name as the one used in the web tests. The context parameter added to the load test overrides the one with the same name in the web tests. To add a new context parameter to the load test, select the Run Settings and right-click to choose the Add Context Parameter option, which adds a new context parameter. For example, the context parameter used in the web test has the web server value as
Now, to override this in the load test, add a new context parameter with the same name, as shown below:
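For illustration only, a minimal sketch of what the override looks like in the underlying XML is shown below. The parameter name WebServerUrl and the server URLs are hypothetical, and the exact element layout can vary between Visual Studio versions, so use the IDE to add the parameter rather than editing the files by hand.

```xml
<!-- In the .webtest file: the original context parameter (hypothetical values) -->
<ContextParameters>
  <ContextParameter Name="WebServerUrl" Value="http://oldserver" />
</ContextParameters>

<!-- In the .loadtest run settings: a parameter with the same name overrides it -->
<ContextParameters>
  <ContextParameter Name="WebServerUrl" Value="http://newserver" />
</ContextParameters>
```

Because the names match, every web test in the load test resolves WebServerUrl to the load test's value at run time.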
All the information collected during a load test run is stored in a central result store. The load test result store contains the data collected by the performance counters, along with the violation information and any errors that occurred during the load test. The result store is a SQL Server database created using the script loadtestresultsrepository.sql, which contains all the SQL queries to create the objects required for the result store.
If no controllers are involved in the test and it is a local test, we can create the result store database using SQL Express. Running the script once on the local machine is enough to create the result store, which serves as a global central store for all load tests on that machine. To create the store, open the Visual Studio command prompt and change to the folder below, substituting the actual drive where you have installed Visual Studio.
cd "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE"
In the same folder, run the following command, which creates the database store:
SQLCMD /S localhost\sqlexpress -i loadtestresultsrepository.sql
If you have any other SQL Server and if you want to use that to have the result store then you can run the script on that server and use that server in connection parameters for the load test. For example, if you have the SQL Server name as SQLServer1 and if the result store has to be created in that store, then run the command as below:
SQLCMD /S SQLServer1 -U <user name> -P <password> -i loadtestresultsrepository.sql
All these commands create the result store database in the SQL Server. If you look at the tables created in the store, it would look like this:
If you are using a controller for the load tests, the controller installation itself takes care of creating the result store on the controller machine. The controller can be installed using the Visual Studio 2008 Team Test Load Agent product.
To connect to the SQL Server result store database, select the Test menu in the Visual Studio IDE and then open the Administer Test Controller window. This option is available only on the controller machine. If the result store is on the controller machine or on a different machine, select the controller from the list; select <Local-No controller> if the store is on the local machine without any controller. Then select the Load Test Results store using the browse button and close the Administer Test Controller window.
Controllers are used to administer the agent computers; together, the controller and agents form a rig. Multiple agents are required to simulate a large load from different locations. The performance data collected from all these agents is saved in the central result store at the controller, or in any global store configured at the controller.
Running the load test
Load tests are run like any other test in Visual Studio. Visual Studio also provides multiple options for running the load test.
One is through the Test View window, where all the tests are listed. We can select the load test, right-click, and choose the Run Selection option, which starts the load test run.
The second option is to use the Test List Editor. Select the load test from the test list in the Test List Editor and choose the option to run the selected tests from the Test List Editor toolbar.
The third option is the built-in run option in the load test editor toolbar. Select the load test from the project and open the load test. This opens the load test in the load test editor. The toolbar for this load test editor has the option to run the currently opened load test.
The fourth option is through the command line. The MSTest command-line utility is used for running the test; it is installed along with Visual Studio Team System Test Edition. Open the Visual Studio command prompt and, from the folder where the load test resides, run the following command to start the load test.
In the first three cases, the load test editor shows the progress during the test run. The command-line option, however, does not show the progress; instead, it stores the result in the result store repository, which can be loaded later to view and analyze the test result. You can follow these steps to open the last run tests:
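A sketch of the MSTest invocation is shown below; the file name LoadTest1.loadtest is only an assumed example, so substitute the name of your own load test file.

```shell
mstest /testcontainer:LoadTest1.loadtest
```

The /testcontainer switch points MSTest at the file containing the tests to run; the run results are written to the result store for later analysis.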
- Open the menu option Test | Windows | Test Runs.
- From the Connect drop-down, select the location for the test results store. On selecting this, you can see the trace files of the last run tests getting loaded in the window.
- Select the test run name from the list and double-click to open the test results for the selected run.
- Double-click the test result shown in the Results window; this connects to the store repository, fetches the data for the selected test result, and presents it in the load test window.
The end result of the load test editor window will look like the one shown in the following screenshot with all the performance counter values and the violation points.
More details about the graph are given under the Graphical View subsection.
Running load test using rig
Rigs are used to simulate load tests with a large number of computers at different locations. A rig consists of one central controller that controls multiple agent computers at different locations; together, these computers form the rig. The rig is used to generate a heavy load from multiple machines instead of load testing with a single machine. Whenever we want to increase the load test simulation, we can simply add more agents to the controller. All the agents added to the rig are controlled by a single controller. While the controller and agents are running the load test and collecting the data, the client monitors and presents the data to the user.
The agents run the tests; the controller controls all the agents, collects the data, and stores it in the central repository, from which the client fetches the data and presents it to the user. The controller also takes care of sending messages to the agents about when to start the load test.
The controller and agents are configured using the controller and agent tabs in the configuration window. Before configuration, the controller and the agents should be installed on the computers. This can be done only through the Microsoft Visual Studio 2008 Team Test Load Agent disc.
After the controller and agents are installed on their respective machines, the client has to be configured with the controller and agents for the load test. This is done through the Test menu and the Administer Test Controllers... option in Visual Studio. This option opens a window in which you select the controller and add agents to its list.
The Administer Test Controller window provides options to add, remove, and restart agents and to set their properties. Select the Add... option, which opens a dialog for adding a new agent, and set the agent properties such as the name and the attributes for the agent.
- Name: This is the system name of the machine that will serve as one of the agents for the test.
- Weighting: The weighting determines the share of the tests to be run by this agent. For example, let's assume we have two agents and the total number of tests to run is 100. If the weighting for the agent System1 is 70, then System1 takes care of running 70 of the 100 tests.
- Enable IP Switching: This option helps us test requests to multiple backend web servers in a web farm behind a load balancer. It allows the agent to send requests using a range of IPv4 addresses during the load test.
- NIC: This is the Network Interface Card to be used for the IPv4 addresses.
- Base Address: Enter the first three segments of the base IPv4 address, such as 192.168.0.
- Start Range: Enter the starting number to be used in the IPv4 addresses. For example, if the starting number is 15, then the first IPv4 address is the base address followed by the start range number, which is 192.168.0.15.
- End Range: This is the final, or end, number for the IPv4 addresses. For example, if the end range is 20, then the agent generates addresses from the start range to the end range, the last address being 192.168.0.20.
- Subnet Mask: Enter the subnet mask here.
- Attributes: The attributes are properties of the agent, used when selecting a suitable agent for a test if any constraints are specified. Each attribute is a name-value pair: the name identifies the attribute and the value is the value given to it. The following are some examples of attributes:
- Attribute Name: OS
- Attribute Value: WindowsXP
- Attribute Name: RAM
- Attribute Value: 2GB
These attributes are used when the test configuration specifies constraints for selecting the agents for the tests. The configuration file will contain properties such as those shown in the following screenshot:
When selecting where to run the tests, the controller considers only the agents whose attributes match the name-value pairs specified for the agents in the configuration.
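As a rough sketch of the arithmetic behind the weighting and IP switching settings described above (the weights, base address, and range values are simply the example numbers from the bullets, not anything read from a real configuration):

```shell
# Weighting: an agent's share of the total tests is proportional to its weight.
# Two agents with weights 70 and 30, 100 tests in total.
total=100; w_system1=70; w_system2=30
echo "System1 runs $(( total * w_system1 / (w_system1 + w_system2) )) of $total tests"
# prints: System1 runs 70 of 100 tests

# IP switching: addresses are the base address plus each number
# from Start Range (15) to End Range (20).
base="192.168.0"; start=15; end=20
for i in $(seq "$start" "$end"); do
  echo "$base.$i"
done
# prints 192.168.0.15 through 192.168.0.20
```

In other words, the agent claims one address per number in the range, and the controller distributes work in proportion to the configured weights.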
Working with and analyzing test results
The result that we get out of the load test contains a lot of information about the test. All of these details get stored in the results repository store. The graph and indicators shown during the test run contain only the important cached results. The actual detailed information will get stored to the store. We can load the test result later from the store and analyze it.
There are different ways to see the test results, using the options in the load test editor toolbar. At any time, we can switch between the views to look at the results. What follows is the graphical view of the test results; the graphical view window contains different graphs shown for different counters.
The graphical view of the result gives a high-level view of the test result, but the complete test result data is stored in the repository. By default, there are four different graphs provided with different readings. We can select the drop-down and choose any other counter reading for the graphical view.
- Key Indicators: This graph shows the data collected for average response time, Just-In-Time (JIT) percentage, threshold violations per second, errors per second, and the user load. Details about the graph are given in the legend below the graphs, which describes the actual counter data collected during the test.
- Page Response Time: This graph shows how long the response for each requested URL took. The details are given below the graphs.
- System Under Test: This graph presents data about the different computers or agents used in the test, including readings such as the available memory and the processor time.
- Controller and Agents: This graph presents details about the systems or machines involved in running the load test. The data collected includes the processor time and the available memory in MB.
- Transaction Response Time: This indicates the average time taken by the transactions in the load test.
For all the graphs, more detailed information on each counter is shown in the legend grid below, with color legends. The details include the counter name, category, range, and the min, max, and average readings for each counter. The legend grid can be shown or hidden using the option in the toolbar.
For example, in the above image you can see the Key Indicators graph at the top left of the graphs. Different types of readings are plotted in different colors. The counters from this counter set are also presented in the table below the graphs, showing all the counters used in the graph and their corresponding colors.
We can add a new graph and the counters to the graphical view. Right-click on any graph area and select the option Add Graph, which adds a new graph with the given name. Now expand the counter sets and drag-and-drop the required counters on the new graph so that the readings are shown in the graph as shown in the following sample graph:
The New Custom Graph is the new graph added to the result and counters are added to the graph. The counters and readings are listed in the table below the graphs.
The summary view option in the load test editor window toolbar presents more information on the overall load testing.
The most important information here is the list of the top five slowest pages and tests. The tests are ordered by the average time taken for each test, and the pages by the time taken for each page request.
- Test Results: This section gives the status information such as the number of tests conducted on each test selected for load testing. For example, out of 100 web tests selected for load testing, what is the number of passed tests and failed tests?
- Page Results: This section gives information about the test conducted for each URL used in the test. This result shows the number of times the page is requested, and the average time taken for each request. The detail includes the test name to which the URL belongs.
- Transaction Results: A transaction is a set of tasks in the test. This section of the summary view gives information such as the scenario name, the test name, the elapsed time for the transaction, the number of times the transaction was tested, and so on.
- System under Test resources and controller and agents resources: This section gives the information about the systems involved in testing, the processor time for the test, and the amount of memory available at the end of test completion.
- Errors: This section details the list of errors that occurred during the test. It gives information such as the error type, the sub type, and the number of times the same error occurred during the test, the last message from the error stack, and so on.
We have seen the Summary view and Graphical view and how to customize the Graphical view by adding custom graphs and counters to it. The tool bar provides a third view to the results, which is the tabular view.
In this tabular view, you can see the summarized result information in table format. By default, two tables are shown on the right pane. The top table lists the tests run and their details, such as the scenario name, the total number of tests run, the numbers of tests passed and failed, and the time. The second table shows the errors that occurred while testing; the details shown are the type of exception, the subtype of the exception, the number of exceptions raised, and the detailed error message.
Both table headers are drop-downs. If you select a drop-down, you can see different options such as Tests, Errors, Pages, Requests, Thresholds, and Transactions, and you can select an option to choose which results are displayed in the table. For example, the following screenshot shows the tabular view of the threshold violations and the number of transactions for a sample test. You can see the summary of the threshold violations in the header just below the toolbar.
The Threshold violation table shows detailed information about each violation that occurred during the test: the counter category, the counter name, the instance type, and a detailed message explaining the reason for the violation, showing the actual value and the threshold value set for the counter.
The other details provided by the tabular view are:
- The Requests table view shows the different pages requested during the tests with their statuses and the response time and content length for each request during the test.
- The Pages table view shows the number of pages involved during the test. The result table shows information on the page names, scenario, test names, total pages for the entire test, and page time.
Exporting to Excel
We can export part of the results view to Excel using the Export to Excel option in the toolbar of the load test editor. You can select a particular graph from the four different graphs shown here and then select the option to Export. All counter information shown in the selected graph gets exported to Excel as shown in the screenshot below for the Key Indicators graph.
The above spreadsheet shows the actual counter values, collected every five seconds of elapsed time, exported from the graph.
This article explained the steps involved in creating a load test using sample web tests. We have seen the details of each step and the parameter values to set in each step of the load test creation wizard. There is always a chance of going back to edit the test to change the parameters or to add additional counters and scenarios, which was explained in this article. Creating custom performance counters, including those for load testing different systems, setting threshold rules for counters, and creating rules in different ways are some of the other topics we covered. After creating the load test, we saw the different methods of running the tests and collecting the test results. Finally, this article explained the multiple ways of looking at the results, such as the Summary view, Graphical view, and Tabular view, and how useful they are in analyzing the test results. Having all these results in the test repository may not always serve our purpose, so we may have to export the results for detailed analysis. From this article, we have also gotten some idea of how to export the test results.