Monday, July 18, 2011

Nuts and Bolts of Performance Testing


Performance testing has always been a part of software testing, whether explicitly stated or not. The world of software testing is becoming smarter and more mature. The changing landscape of computing and an industry-wide shift towards SOA/web-based applications have brought performance testing to the forefront, and the expectations sought from this kind of testing are measurable numbers. I have tried to extract the essence in minimal words, to cut short the learning time. Performance testing measures the performance of an application under test (AUT), whether the application is an embedded, desktop, or distributed enterprise application; however, enterprise applications and architectures are where the prime focus of performance testing lies. The expectation from the AUT is to measure its performance numbers and ensure that they conform to the stated requirements.

Goals of Performance Testing

The business goal of performance testing is to measure application performance and ensure that the numbers conform to the Service Level Agreements (SLAs). Goals can be internal (if the application is an in-house project) or external (when the SLAs define the objectives).

External Goals: Conforming to the SLAs. At the highest level, the SLAs consist of the following parameters (whether or not explicitly stated):

- Application response time.
- Application throughput.
- Maximum number of concurrent users.
- Resource utilization in terms of various performance counters, for example CPU, RAM, network I/O, and disk I/O (a monitoring sketch follows this list).
- Soak tests¹ under varied workload patterns, including normal load conditions, excessive load conditions, and conditions in between. This can include increases in the number of users, the amount of data, and so on.
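To make the resource-utilization counters above concrete, here is a minimal sampling sketch in Python. It is only an illustration: it relies on the third-party psutil package, and the interval and duration values are arbitrary choices of mine.

# Minimal resource-utilization sampler (sketch; assumes `pip install psutil`).
import time
import psutil

def sample_counters(duration_s=60, interval_s=5):
    """Print CPU, RAM, disk-I/O and network-I/O counters at a fixed interval."""
    psutil.cpu_percent()  # prime the CPU counter; the first reading is meaningless
    disk_prev = psutil.disk_io_counters()
    net_prev = psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        time.sleep(interval_s)
        disk_now = psutil.disk_io_counters()
        net_now = psutil.net_io_counters()
        print("CPU %5.1f%%  RAM %5.1f%%  disk R/W %d/%d B  net S/R %d/%d B" % (
            psutil.cpu_percent(),
            psutil.virtual_memory().percent,
            disk_now.read_bytes - disk_prev.read_bytes,
            disk_now.write_bytes - disk_prev.write_bytes,
            net_now.bytes_sent - net_prev.bytes_sent,
            net_now.bytes_recv - net_prev.bytes_recv,
        ))
        disk_prev, net_prev = disk_now, net_now

if __name__ == "__main__":
    sample_counters()

In a real test run you would feed these samples into your load-test tool's reporting rather than printing them, so they can be correlated with response times.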

Internal Goals:

- Application crash. A crash translates into a condition where the application either hangs or stops responding to requests. Symptoms of the breaking point include 503 errors with a "Server Too Busy" message, and errors in the application event log indicating that the ASP.NET worker process recycled because of potential deadlocks.
- Symptoms and causes of application failure under stress conditions.
- Recoverability: whether or not the application recovers after a crash. More importantly, ensuring that there is no data loss when the application crashes and that it recovers gracefully (a simple availability probe is sketched after this list).
- Known issues/bugs in the AUT.

¹ Soak testing measures application performance over long test runs, typically of the duration one would expect in a real production/live environment.
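As a rough illustration of detecting the crash symptoms described above, the following sketch polls the AUT and distinguishes HTTP errors (such as a 503 "Server Too Busy") from outright hangs. The health-check URL, timeout, and polling interval are hypothetical values; only the Python standard library is used.

# Availability probe for crash/recoverability checks (sketch).
import time
import urllib.error
import urllib.request

URL = "http://localhost:8080/health"  # hypothetical health endpoint

def probe(url=URL, interval_s=10):
    """Poll the AUT; report HTTP errors and hangs, and watch for recovery."""
    while True:
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                print("OK", resp.status)
        except urllib.error.HTTPError as exc:
            # A 503 under load is a classic symptom of the breaking point.
            print("HTTP error:", exc.code, exc.reason)
        except Exception as exc:
            # Timeouts and refused connections suggest a hang or crash;
            # keep probing to see whether the application recovers.
            print("Unreachable:", exc)
        time.sleep(interval_s)

if __name__ == "__main__":
    probe()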

Performance Objectives

Most performance tests depend on a set of predefined, documented, and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient: you evaluate your application's performance by comparing it with those objectives. One should by no means just run ad-hoc tests randomly without any specific objectives (the old principle: how many bugs were discovered merely by executing the test cases?).
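To show what comparing measured numbers against agreed-upon objectives can look like in practice, here is a small sketch; every objective and measured value in it is invented purely for illustration.

# Compare measured results against documented performance objectives (sketch).
# All numbers below are invented for illustration.
OBJECTIVES = {
    "avg_response_time_s": 2.0,   # upper bound
    "throughput_tps": 100.0,      # lower bound
    "max_cpu_percent": 75.0,      # upper bound
}

measured = {"avg_response_time_s": 1.4, "throughput_tps": 112.0, "max_cpu_percent": 81.0}

def evaluate(measured, objectives=OBJECTIVES):
    for name, target in objectives.items():
        value = measured[name]
        # Throughput is a lower bound; the other objectives are upper bounds.
        ok = value >= target if name == "throughput_tps" else value <= target
        print(f"{name}: measured {value} vs objective {target} -> {'PASS' if ok else 'FAIL'}")

evaluate(measured)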

As a rule of thumb, the following are the performance expectations from the application under test:

I. Application Response Time. This is the most fundamental parameter and should ideally be second nature to the performance tester. Application response time is the amount of time taken to respond to a request. You can measure response time at the server or at the client, as follows (a measurement sketch follows this list):

- Response time at the server. This is the time taken by the server to complete the execution of a request. It does not include the client-to-server latency, which adds the time for the request and response to cross the network.
- Response time at the client. The latency measured at the client includes the request queue, the time taken by the server to complete the execution of the request, and the network latency. You can measure this latency in various ways; two common approaches are the time taken by the first byte of the response to reach the client (time to first byte, TTFB) and the time taken by the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.
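Here is a minimal client-side sketch of measuring TTFB and TTLB using only the Python standard library. The host and path are hypothetical, and TTFB is approximated as the arrival of the first body byte (the headers have already been received by then).

# Measure time-to-first-byte (TTFB) and time-to-last-byte (TTLB) (sketch).
import http.client
import time

def measure(host="example.com", path="/"):  # hypothetical target
    conn = http.client.HTTPConnection(host, timeout=30)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)                 # first byte of the body has arrived
    ttfb = time.perf_counter() - start
    resp.read()                  # drain the rest of the response
    ttlb = time.perf_counter() - start
    conn.close()
    print(f"status={resp.status}  TTFB={ttfb * 1000:.1f} ms  TTLB={ttlb * 1000:.1f} ms")

if __name__ == "__main__":
    measure()

In practice you would repeat the measurement many times and report percentiles rather than a single sample.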

By measuring latency, you can gauge whether your application takes too long to respond to client requests.

II. Application Throughput. Throughput is the number of requests that the application under test can serve per unit of time. It is typically measured in transactions per second or orders per second. Throughput varies largely with the type and volume of load applied; examples include credit card transactions, the number of concurrent users, download volumes, and so on. The network connection is another large factor. For example, suppose there are 1,000 users, each requesting an average page of 5 KB every 5 minutes. The required network throughput would be 1,000 x (5 x 1024 x 8) / (5 x 60) ≈ 136,533 bits per second, or roughly 133 Kbps.
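The same worked example, expressed as a short calculation:

# Network throughput required by the worked example above.
users = 1000        # concurrent users
page_kb = 5         # average page size in KB
period_s = 5 * 60   # each user requests one page every 5 minutes

bits_per_second = users * (page_kb * 1024 * 8) / period_s
print(f"{bits_per_second:,.0f} bits/s  (~{bits_per_second / 1024:.0f} Kbps)")
# -> 136,533 bits/s (~133 Kbps)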




Abhinav Vaid



