Load testing an application helps the development and management teams understand how the application performs under various conditions. A load test can be configured with different parameter values and conditions to exercise the application and measure its performance.
Each load test can simulate a number of users, network bandwidths, combinations of different web browsers, and different configurations. In the case of web applications, the application must be tested with different sets of users and browsers to simulate multiple simultaneous requests to the server. The following figure shows a sample real-time situation in which multiple users access the web site over different networks and with different browsers from multiple locations.
Load testing is not meant just for web applications. We can also run unit tests under load to check the performance of data access from the server. A load test helps identify application performance at various capacities: under light loads for a short duration, under heavy loads, and over different durations.
Load testing uses a set of computers consisting of a controller and agents; together these are called a rig. The agents are computers at different locations used for simulating different user requests. The controller is the central computer that controls the multiple agent computers. The Visual Studio 2008 Test Load Agent on the agent computers generates the load, and the Visual Studio 2008 Test Controller on the central computer controls these agents. This article explains the details of creating test scenarios and load testing the application.
Creating a load test
Load tests are created using the Visual Studio Load Test wizard. We first create the test project and then add a new load test, which opens the wizard that guides us through creating the test. We can edit the test parameters and configuration later, if required.
Before we create the test project and look at the parameters, let us consider a couple of web applications. Web applications or sites are accessed by a large number of users from different locations at the same time, and we need to simulate this situation to check the application's performance. The first application is a simple web page that displays the orders placed by the current user. The other is a coded web test that retrieves the orders for the current user, similar to the first one.
Using the above examples, we will look at the different load testing features provided by Visual Studio. The following sections describe creating a load test, setting its parameters, and load testing the application.
Load test wizard
The load test wizard helps us create the load test for the web tests and unit tests. There are different steps to provide the required parameters and configuration information for creating the load test. There are different ways of adding load tests to the test project:
- Select the test project and then select the option Add | Load Test...
- Select the Test menu in Visual Studio 2008 and select New Test, which opens the Add New Test dialog. Select the load test type from the list, provide a test name, and select the test project to which the new load test should be added.
Both the above options open the New Load Test Wizard shown as follows:
The wizard contains four different sections with multiple pages, which collect the parameter and configuration information for the load test.
The welcome page explains the different steps involved in creating a load test. On selecting a step such as Scenario, Counter Sets, or Run Settings, the wizard displays the section that collects the parameter information for the selected option. We can click on an option directly, or keep clicking Next and set the parameters. Once we click Finish, the load test is created. To open the load test, expand Solution Explorer and double-click the load test, LoadTest1. We can also open the load test from the Test View window in the Test menu by double-clicking the load test's name in the list, which opens the Test Editor. For example, the following is a sample load test:
The following detailed sections explain setting the parameters in each step:
Scenarios are used for simulating actual user tests. For example, any web application has different kinds of end users. For a public web site, the end users could be anywhere, and there could be any number of them. The connection bandwidth and the browsers they use also differ: some users might be on a high-speed connection, others on a slow dial-up connection. If the application is an intranet application, however, the end users are limited to the LAN, and the connection speed is mostly constant; the number of users and the browsers used are the two main things that differ in this case. Scenarios are created from whichever of these combinations are required for the application under test. Enter the name for the scenario in the wizard page.
We can add any number of scenarios to the test. For example, we might want to load test WebTest3 at 40 tests per user per hour and WebTest11Coded at 20 tests per user per hour.
Now, let us create a new load test and set the parameters for each scenario.
The think time is the time a user takes before navigating to the next web page. Including it helps the load test simulate real usage accurately.
We can set the load test to use the actual think times recorded by the web test, or we can give a specific think time for each test. Another option is to use a normal distribution of think times between requests; the time then varies slightly between requests but remains realistic. The third option is not to use think times between requests at all.
The think times can also be modified for the scenario after creating the load test. Select the scenario and right-click and then open Properties to set the think time.
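The effect of these three options can be sketched in a few lines of Python, used here purely as illustrative pseudocode. The 20% standard deviation for the normal-distribution option is an assumption for the sketch, not the value VSTS actually uses:

```python
import random

def think_time(recorded_seconds, mode="normal"):
    """Return a simulated think time for one of the three wizard options."""
    if mode == "recorded":
        # Use the think time captured during web test recording as-is.
        return recorded_seconds
    if mode == "normal":
        # Vary realistically around the recorded value; the 20% standard
        # deviation is an illustrative assumption, not a VSTS setting.
        return max(0.0, random.gauss(recorded_seconds, 0.2 * recorded_seconds))
    # Any other mode: do not use think times between requests.
    return 0.0

print(think_time(5.0, "recorded"))  # 5.0
print(think_time(5.0, "off"))       # 0.0
print(think_time(5.0, "normal"))    # close to 5.0, but varies per request
```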
Now once the properties are set for the scenario, click Next in the New Load Test Wizard to set parameters for Load Pattern.
The load pattern is used for controlling the user load during the test. The pattern varies based on the type of test. For a simple intranet web application test or a unit test, we might want a minimum number of users for a constant period of time. But for a public web site, the number of users differs from time to time, so we might want to increase the user count from a small number to a large one at a set interval. For example, we might start with a user load of 10 and increase it by 10 every 10 seconds until the count reaches 100. At the 90th second the user count reaches 100, the increment stops, and the load stays at 100 users until the test completes.
The constant load pattern starts with the specified user count and ends with the same number.
User Count: This specifies the number of users to simulate.
The step load pattern starts with the specified minimum number of users, and the count increases at the specified interval until it reaches the specified maximum.
- Start user count: This specifies the number of users to start with
- Step duration: This specifies the time between increases in the user count
- Step user count: This specifies the number of users added at each step
- Maximum user count: This specifies the maximum number of users
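The step pattern's arithmetic can be sketched as follows. This is an illustrative Python sketch, not a VSTS API; the defaults mirror the 10-users-every-10-seconds example above:

```python
def user_count(elapsed_seconds, start=10, step_duration=10,
               step_users=10, maximum=100):
    """Compute the simulated user count at a point in time for a step load pattern."""
    steps = elapsed_seconds // step_duration   # completed step intervals so far
    return min(start + steps * step_users, maximum)

# Starting at 10 users and adding 10 every 10 seconds, the load reaches
# the 100-user maximum at the 90th second and stays there.
print(user_count(0))    # 10
print(user_count(45))   # 50
print(user_count(90))   # 100
print(user_count(300))  # 100
```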
We have set the parameters for the Load Pattern. The next step in the wizard is to set the parameter values for Test Mix Model and Test Mix.
Test mix model and test mix
The test mix model determines how accurately the load simulates the distribution of tests across the end users. Before selecting the test mix, the wizard provides a configuration page to choose the test mix model from three options: based on the total number of tests, based on the number of virtual users, and based on user pace.
The next page in the wizard provides the option to select the tests and provide the distribution percentage, or the option to specify the tests per user per hour for each test for the selected model. The mix of tests is based on the percentages specified or the test per user specified for each test.
Based on the total number of tests
The next test to run is selected so that the number of times each test has run matches the specified distribution. For example, if the distribution is like the one shown in the image on the previous page, the next test selected for the run is based on those percentage distributions.
Based on the number of virtual users
This model determines which tests run based on the percentage of virtual users assigned to each test. At any point, the number of users running a particular test matches the assigned distribution.
Based on user pace
This option runs each test the specified number of times per user per hour. This model is helpful when we want the virtual users to run the tests at a regular pace.
The test mix contains the different web tests, each with a different number of tests per user. The number of users is defined using the load pattern. The next step in the wizard is to specify the Browser Mix, explained in the next section.
We have set the number of users and the number of tests, but all users may not use the same browser. To mix different browser types, go to the next step in the wizard, select the browsers from the list, and give a distribution percentage for each browser type.
The test does not actually use the specified browser; it sets the header information in the request to simulate the same request coming through that browser. Just as browsers differ, network speed also differs with user location, which is addressed in the next step of the wizard.
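Simulating a browser therefore amounts to attaching the appropriate User-Agent header to each simulated request. A minimal Python sketch of the idea, with illustrative User-Agent strings and an assumed 60/40 split (the real load test engine maintains its own browser templates):

```python
import random

# Illustrative User-Agent strings and percentages; not the exact strings VSTS sends.
BROWSER_MIX = [
    ("Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)", 60),   # IE 7: 60%
    ("Mozilla/5.0 (Windows; U; Windows NT 5.1) Firefox/2.0", 40), # Firefox 2: 40%
]

def pick_user_agent():
    """Choose a User-Agent header according to the browser mix percentages."""
    agents = [agent for agent, _ in BROWSER_MIX]
    weights = [weight for _, weight in BROWSER_MIX]
    return random.choices(agents, weights=weights, k=1)[0]

# Each simulated request carries the chosen header instead of a real browser.
headers = {"User-Agent": pick_user_agent()}
```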
Click on Next in the wizard to specify the Network Mix, to simulate the actual network speed of the users. The speed differs based on user location and the type of network they use. It could be a LAN network, or cable or wireless, or a dial-up. This step is useful in simulating the actual user scenario.
The next step in the wizard is to set the Counter Sets parameters, which is explained in the next sections.
Load testing covers not only application-specific performance but also environmental factors, so that we know the performance of the other services required for running the load test or for accessing the application under test. For example, a web application makes use of IIS, the ASP.NET process, and SQL Server. VSTS provides an option to track the performance of these services during the load test using counter sets. The load test collects the counter set data during the test and presents it as a graph for better understanding. The same data is also saved locally so that we can load it again and analyze the results. The counter sets apply to all the scenarios in the load test.
Counter set data is collected for the controller and agents by default. We can also add other systems that are part of the load testing. These counter set results help us understand how the services are used during the test; most of the time, application performance is affected by the common or system services it uses.
The load test creation wizard provides the option to add the performance counters. The wizard includes the current system by default and the common counter set for the controllers and the agents. The following screenshot shows the sample for adding systems to collect the counter sets during the load test:
A default list of counter sets is shown for each system. We can select any of these for which we want data collected during the load test. For example, the above screenshot shows that we want data collected for ASP.NET, .NET Application, and IIS from System1. Using the Add Computer... option, we can keep adding the computers on which we run the tests and choose the counter sets for each system.
Once we are done selecting the counter sets, we have almost all the parameters for the test. A few run-specific parameters remain, which are set in the next step of the wizard.
These settings are basically for controlling the load test run to specify the maximum duration for the test and the time period for collecting the data about the tests. The screenshot below shows the options and the sample setting.
There are two options for ending the test run: a maximum time limit, or a maximum number of test iterations. The test run stops once it reaches the maximum, as per the option selected. For example, the screenshot below shows the option to run the test for 5 minutes.
The Details section specifies the rate at which the test data should be collected (Sampling rate), the Description, the Maximum error details to show, and the Validation level. The validation level is the option to specify the rules that should be considered during the test. This is based on the level that is set while creating the rules.
Now we are done with setting all the parameters required for load testing. Finish the wizard by clicking the Finish button, which actually creates the test with all the parameters from the wizard and shows the load test editor. The load test editor would look like the one shown here:
The actual run settings for the load test contain the counter sets selected for each system and the common run settings provided in the last wizard section. To see exactly what these counter sets contain and what options each offers, select a counter set from the Counter Sets folder under the load test, right-click, and select Manage Counter Sets... to choose more counters or add additional systems. This option displays the same window we saw as the last window in the wizard.
We can also add additional counters to the existing default list.
For example, the following screenshot shows the default list of categories under the .NET Application counter set once the wizard completes.
To add more counter categories, just right-click on the Counter Categories and select the Add Counters option and then choose the category you wish to add from the Performance category list. After selecting the category, select the counters from the list for the selected category.
The previous screenshot shows the .NET CLR Exceptions category selected, with counters such as the number of exceptions thrown, the number of exceptions thrown per second, the number of filters executed per second, and the number of finally blocks executed per second. After selecting the additional counters, click OK, which adds the selected counters to the existing list for the test.
What we have seen above applies to the existing counter sets. What if we want to create a custom performance counter set and add it to the run settings for the test? We can create one by choosing the Add Custom Counter option in the context menu that opens when we right-click on the Counter Sets folder. The screenshot below shows a new custom performance counter added to the list.
Now select the custom counter, right-click and choose the Add Counters option and select the category, and pick the counters required for the custom counter set. For example, we might want to collect the Network Interface counters such as the number of bytes sent and received per second and the current bandwidth during the test. Select these counters for the counter set.
Once we are ready with the custom counter set, we need to add it as part of the run settings on all the systems that are part of the test. Select the Run Settings folder, right-click and choose the Manage Counter Sets option from the context menu, and select the custom performance counter Custom1 for the available systems. The final list of Run Settings would look like this:
Keep adding all the custom counters and counter sets and select them for the systems used for running the test.
The main use of these counters is to collect data during the test, but we can also use them to monitor the readings. The load test has an option to track the counter data and indicate when it crosses threshold values, by adding rules to it. This is explained in the coming section.
The main use of the counters and counter sets is to identify the actual performance of the application under test, its memory usage, and the processor time taken. There is another major advantage in collecting this counter data: we can set threshold limits for each value collected during the test. For example, we may want a warning when system memory is almost full, or a notification when a process takes longer than the expected maximum time, so that we can act on it immediately. These threshold rules can be set for each performance counter.
Select a performance counter and choose the Add Threshold Rule option, which opens a dialog for adding the rules.
There are two different types of rules that can be added. One is to compare with constant values and the other is to compare the value with the other performance counter value. The following rules explain different ways of setting the threshold values:
- Compare Constant: This compares the performance counter value with a constant value. For example, we may want a warning violation if the percentage of processor time is more than 70, and a critical message if it crosses 90. The Alert If Over option can be set to true or false: true means the violation is generated if the counter value is greater than the specified threshold value, and false means the violation is generated if the counter value is less than the specified threshold value.
- Compare Counters: This compares the performance counter value with another performance counter's value. The functionality is the same as with constants, but performance counter values are compared instead of constants.
In the above screenshot, the threshold constant value is set to 70 to trigger the warning violation and the threshold value is set to 90 for the critical violation message.
The above screenshot shows the options for adding Compare Counters to the counter set. The warning and critical threshold values are constants, which are multiplied by the dependent counter value and then compared with the current counter value. For example, if the dependent counter value is 50 and the warning threshold constant is set to 1.25, then a violation is raised when the current counter value goes above 62.5.
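Both rule types reduce to simple comparisons. A minimal Python sketch of the evaluation logic, using the 70/90 processor example and the 1.25 warning factor from the text (the function names are illustrative, not part of VSTS):

```python
def check_compare_constant(value, warning, critical, alert_if_over=True):
    """Evaluate a Compare Constant rule; return the violation level or None."""
    if alert_if_over:
        if value > critical:
            return "critical"
        if value > warning:
            return "warning"
    else:
        # Alert If Over = false: violate when the value falls BELOW the threshold.
        if value < critical:
            return "critical"
        if value < warning:
            return "warning"
    return None

def check_compare_counters(value, dependent_counter, warning_factor, critical_factor):
    """Compare against another counter's value scaled by constant factors."""
    return check_compare_constant(value,
                                  dependent_counter * warning_factor,
                                  dependent_counter * critical_factor)

# Processor time rule from the text: warn above 70%, critical above 90%.
print(check_compare_constant(75, warning=70, critical=90))   # warning
# Dependent counter at 50 with a 1.25 warning factor: violation above 62.5.
print(check_compare_counters(63, dependent_counter=50,
                             warning_factor=1.25, critical_factor=1.5))  # warning
```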
The screenshot below shows an example of a threshold violation raised whenever the value exceeds the constant defined in the rule. The test shows as aborted because it was stopped before completing.
You can see from the graph that three different threshold warning messages were raised during the load test run, as shown in the summary information at the top of the test. The graph also indicates when the counter value rose above the constant defined in the rule: the value reached 25.13204, which is higher than the constant 25 defined in the rule. A value above the warning level is indicated in yellow, and a value above the critical threshold in red. These rules do not fail the test; they only provide alerts when the values cross the thresholds set.