Sunday, May 4, 2014

Performance Testing


Load Testing Process:

You use load testing to verify application behavior under normal and peak load conditions. You incrementally increase the load from normal to peak load to see how your application performs with varying load conditions. You continue to increase the load until you cross the threshold limit for your performance objectives. For example, you might continue to increase the load until the server CPU utilization reaches 75 percent, which is your specified threshold. The load testing process lets you identify application bottlenecks and the maximum operating capacity of the application.

Input
Input may include the following:
• Performance objectives from your performance model. For more information about Performance Modeling, see Chapter 2, “Performance Modeling.”
• Application characteristics (scenarios).
• Workload characteristics.
• Performance objectives for each scenario.
• Test plans.

Steps
The load testing process involves the following six steps:
1. Identify key scenarios. Identify application scenarios that are critical for performance.

2. Identify workload. Distribute the total application load among the key scenarios identified in step 1.

3. Identify metrics. Identify the metrics that you want to collect about the application when running the test.

4. Create test cases. Create the test cases where you define steps for executing a single test along with the expected results.

5. Simulate load. Use test tools to simulate load according to the test cases and to capture the result metrics.

6. Analyze the results. Analyze the metric data captured during the test.
The next sections describe each of these steps.

Step 1 – Identify Key Scenarios
-----------------------------------------
Start by identifying your application’s key scenarios. Scenarios are anticipated user paths that generally incorporate multiple application activities. Key scenarios are those for which you have specific performance goals or those that have a significant performance impact, either because they are commonly executed or because they are resource intensive. 

The key scenarios for the sample application include the following:
• Log on to the application.
• Browse a product catalog.
• Search for a specific product.
• Add items to the shopping cart.
• Validate credit card details and place an order.

Step 2 – Identify Workload
-------------------------------------
Identify the performance characteristics or workload associated with each of the defined scenarios. For each scenario you must identify the following:
• Numbers of users. The total number of concurrent and simultaneous users who access the application in a given time frame. For a definition of concurrent users, see “Testing Considerations,” later in this chapter.
• Rate of requests. The requests received from the concurrent load of users per unit time.
• Patterns of requests. A given load of concurrent users may be performing different tasks using the application. Patterns of requests identify the average load of users, and the rate of requests for a given functionality of an application.
For more information about how to create a workload model for your application, see “Workload Modeling,” later in this chapter.
After you create a workload model, begin load testing with a total number of users distributed against your user profile, and then start to incrementally increase the load for each test cycle. Continue to increase the load, and record the behavior until you reach the threshold for the resources identified in your performance objectives. You can also continue to increase the number of users until you hit your service level limits, beyond which you would be violating your service level agreements for throughput, response time, and resource utilization.
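The incremental ramp described above can be sketched as a simple loop. This is a minimal illustration only: the user counts, the 0.3-percent-per-user CPU model, and `measure_cpu_percent` are hypothetical stand-ins for real server monitoring.

```python
def measure_cpu_percent(users):
    """Hypothetical stand-in for real server monitoring: assume CPU
    utilization grows roughly 0.3 percent per concurrent user."""
    return min(100.0, users * 0.3)

def ramp_load(start_users=50, step=50, cpu_threshold=75.0):
    """Increase the user load each test cycle and record behavior until
    the CPU threshold from the performance objectives is crossed."""
    results = []
    users = start_users
    while True:
        cpu = measure_cpu_percent(users)
        results.append((users, cpu))
        if cpu >= cpu_threshold:
            break  # threshold crossed: this cycle ends the ramp
        users += step
    return results

history = ramp_load()
print(history[-1])  # the cycle that crossed the 75 percent threshold
```

The same loop structure applies when the stopping condition is a service level limit (throughput or response time) instead of CPU utilization.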

Step 3 – Identify Metrics
---------------------------------
Identify the metrics that you need to measure when you run your tests. When you simulate load, you need to know which metrics to look for and where to gauge the performance of your application. Identify the metrics that are relevant to your performance objectives, as well as those that help you identify bottlenecks. Metrics allow you to evaluate how your application performs in relation to performance objectives — such as throughput, response time, and resource utilization.
As you progress through multiple iterations of the tests, you can add metrics based on your analysis of the previous test cycles. For example, if you observe that your ASP.NET worker process shows a marked increase in the Process\Private Bytes counter during a test cycle, in the next iteration you might add further memory-related counters (such as counters for the garbage collection generations) to monitor the worker process's memory usage more precisely.
For more information about the types of metrics to capture for an ASP.NET application, see “Metrics,” later in this chapter.
To evaluate the performance of your application in more detail and to identify the potential bottlenecks, monitor metrics under the following categories:
• Network-specific metrics. This set of metrics provides information about the overall health and efficiency of your network, including routers, switches, and gateways.
• System-related metrics. This set of metrics helps you identify the resource utilization on your server. The resources are CPU, memory, disk I/O, and network I/O.
• Platform-specific metrics. Platform-specific metrics are related to software that is used to host your application, such as the .NET Framework common language runtime and ASP.NET-related metrics.
• Application-specific metrics. These include custom performance counters inserted in your application code to monitor application health and identify performance issues. You might use custom counters to determine the number of concurrent threads waiting to acquire a particular lock or the number of requests queued to make an outbound call to a Web service.
• Service level metrics. Service level metrics can help to measure overall application throughput and latency, or they might be tied to specific business scenarios as shown in Table 16.1.
Table 16.1: Sample Service Level Metrics for the Sample Application
Metric                       Value
Orders / second              70
Catalogue Browse / second    130
Number of concurrent users   100
For a complete list of the counters that you need to measure, see “Metrics,” later in this chapter.
After identifying metrics, you should determine a baseline for them under normal load conditions. This helps you decide on the acceptable load levels for your application. Baseline values help you analyze your application's performance at varying load levels. An example is shown in Table 16.2.
Table 16.2: Acceptable Load Levels
Metric                                                 Accepted level
% CPU Usage                                            Must not exceed 60%
Requests / second                                      100 or more
Response time (TTLB) for client on 56 Kbps bandwidth   Must not exceed 8 seconds
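Comparisons against accepted levels like those in Table 16.2 are easy to automate. The following is a minimal sketch; the metric names and the sample measurements are illustrative, not part of the guide.

```python
# Accepted levels modeled on Table 16.2; each entry maps a metric name
# to a predicate that returns True when the measured value is acceptable.
accepted = {
    "cpu_percent":      lambda v: v <= 60,   # must not exceed 60%
    "requests_per_sec": lambda v: v >= 100,  # 100 or more
    "ttlb_seconds":     lambda v: v <= 8,    # 56 Kbps client TTLB limit
}

def evaluate(measured):
    """Return the names of the metrics that violated their accepted levels."""
    return [name for name, ok in accepted.items()
            if not ok(measured[name])]

sample = {"cpu_percent": 55, "requests_per_sec": 120, "ttlb_seconds": 9.5}
print(evaluate(sample))  # -> ['ttlb_seconds']
```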

Step 4 – Create Test Cases
----------------------------------------------------
Document your various test cases in test plans for the workload patterns identified in Step 2. Two examples are shown in this section.
Test Case for the Sample E-Commerce Application
The test case for the sample e-commerce application used for illustration purposes in this chapter might define the following:
• Number of users: 500 simultaneous users
• Test duration: 2 hours
• Think time: Random think time between 1 and 10 seconds in the test script after each operation
Divide the users into various user profiles based on the workload identified in step 2. For the sample application, the distribution of load for various profiles could be similar to that shown in Table 16.3.
Table 16.3: Load Distribution
User scenarios   Percentage of users   Users
Browse           50                    250
Search           30                    150
Place order      20                    100
Total            100                   500
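The distribution in Table 16.3 is simply the workload percentages from Step 2 applied to the total user count, which can be computed as follows (a small helper written for illustration):

```python
def distribute_users(total_users, profile_percentages):
    """Split the total user load across scenario profiles according to
    the workload percentages identified in Step 2 (must sum to 100)."""
    assert sum(profile_percentages.values()) == 100
    return {name: total_users * pct // 100
            for name, pct in profile_percentages.items()}

# The sample application's profiles from Table 16.3.
profiles = {"Browse": 50, "Search": 30, "Place order": 20}
print(distribute_users(500, profiles))
# -> {'Browse': 250, 'Search': 150, 'Place order': 100}
```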
Expected Results
The expected results for the sample application might be defined as the following:
• Throughput: 100 requests per second (ASP.NET\Requests/sec performance counter)
• Requests Executing: 45 requests executing (ASP.NET\Requests Executing performance counter)
• Avg. Response Time: 2.5-second response time (TTLB on a 100 megabits per second [Mbps] LAN)
• Resource utilization thresholds:
  Processor\% Processor Time: 75 percent
  Memory\Available MBytes: 25 percent of total physical RAM

Step 5 – Simulate Load
----------------------------------------------------
Use tools such as ACT to run the identified scenarios and to simulate load. In addition to handling common client requirements such as authentication, cookies, and view state, ACT allows you to run multiple instances of the test at the same time to match the test case.
Note: Make sure the client computers you use to generate load are not overly stressed; resource utilization such as processor and memory should remain well below the utilization threshold values.
For more information about using ACT for performance testing, see “How To: Use ACT to Test Performance and Scalability” in the “How To” section of this guide.
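To see the pattern that a tool like ACT automates, consider the following toy sketch: several concurrent virtual users each run a scenario with a random think time after every operation, as specified in the test case. The think time is scaled down so the sketch finishes quickly, and the request itself is a stand-in; a real test would issue HTTP requests against the application.

```python
import random
import threading
import time

results = []
results_lock = threading.Lock()

def run_scenario(user_id, operations=3, think_scale=0.001):
    """One virtual user: perform each operation, then pause for a random
    1-10 second think time (scaled down here for a quick demonstration)."""
    for _ in range(operations):
        # ... a real test would issue the request for this step here ...
        time.sleep(random.uniform(1, 10) * think_scale)
        with results_lock:
            results.append(user_id)  # record a completed operation

# Five concurrent virtual users, standing in for the 500 in the test case.
threads = [threading.Thread(target=run_scenario, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 5 users x 3 operations = 15 completed operations
```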

Step 6 – Analyze the Results
-----------------------------------------------
Analyze the captured data and compare the results against the accepted levels for each metric. The data you collect helps you analyze your application with respect to its performance objectives:
• Throughput versus user load.
• Response time versus user load.
• Resource utilization versus user load.
Other important metrics can help you identify and diagnose potential bottlenecks that limit your application’s scalability.
To generate the test data, continue to increase load incrementally for multiple test iterations until you cross the threshold limits set for your application. Threshold limits may include service level agreements for throughput, response time, and resource utilization. For example, the threshold limit set for CPU utilization may be set to 75 percent; therefore, you can continue to increase the load and perform tests until the processor utilization reaches around 80 percent.
The analysis report that you generate at the end of various test iterations identifies your application behavior at various load levels. For a sample report, see the “Reporting” section later in this chapter.
If you continue to increase load during the testing process, you are likely to ultimately cause your application to fail. If you start to receive “server too busy” responses, your server's request queue is full and it has started to reject requests; these appear as HTTP 503 error codes in the ACT stress tool.
Another example of application failure is the ASP.NET worker process recycling on the server, either because memory consumption has reached the limit defined in the Machine.config file, or because the worker process has deadlocked and exceeded the time duration specified through the responseDeadlockInterval attribute in the Machine.config file.
You can identify bottlenecks in the application by analyzing the metrics data. At this point, you need to investigate the cause, fix or tune your application, and then run the tests again. Based upon your test analysis, you may need to create and run special tests that are very focused.
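One such focused analysis is deriving the maximum operating capacity from the per-iteration results: the highest user load that still meets a service level objective. A minimal sketch follows; the sample data and the 2.5-second response-time objective are illustrative.

```python
def max_operating_capacity(samples, response_time_sla=2.5):
    """samples: list of (users, avg_response_seconds) pairs, one per
    test iteration. Returns the highest load that met the SLA."""
    passing = [users for users, rt in samples if rt <= response_time_sla]
    return max(passing) if passing else 0

# Illustrative results from five successive test iterations.
iterations = [(100, 1.1), (200, 1.6), (300, 2.4), (400, 3.8), (500, 7.2)]
print(max_operating_capacity(iterations))  # -> 300
```

The same comparison can be repeated against throughput and resource utilization objectives, taking the most restrictive result as the application's overall capacity.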
Output
The various outputs of the load testing process are the following:
• Updated test plans.
• Behavior of your application at various load levels.
• Maximum operating capacity.
• Potential bottlenecks.
• Recommendations for fixing the bottlenecks.
