Introduction to Performance Testing


Why Performance testing?
Performance testing has proved itself to be crucial to the success of a business. Not only does a poorly performing site face financial losses, it can also lead to legal repercussions at times.
No one wants to put up with a slow, unreliable site when purchasing, taking an online test, paying bills, etc. With the internet so widely available, the alternatives are plentiful. It is easier to lose clientele than to gain them, and performance is a key game changer.
Therefore, performance testing is no longer a token checkpoint before going live. It is a comprehensive and detailed stage that determines whether the performance of a site or an application meets the needs.
Introduction
The purpose of performance testing is to understand how the application behaves under load, particularly under a realistic number of concurrent users.

Types of Performance Testing

Load Testing
Load testing is a type of performance test in which the application is tested for its performance under normal and peak usage. The application is checked for its response to user requests and its ability to respond consistently, within an accepted tolerance, at different user loads.
The key considerations are:
  1. What is the maximum load the application can handle before it starts behaving unexpectedly?
  2. How much data can the database handle before slowness or a crash is observed?
  3. Are there any network-related issues to be addressed?
Stress Testing
Stress testing is performed to find ways to break the system. The test also gives an idea of the maximum load the system can hold.
Stress testing generally takes an incremental approach in which the load is increased gradually. The test starts at a load for which the application has already been tested; then more load is slowly added to stress the system, and the point at which the servers stop responding to requests is considered the break point.
During this test all the functionality of the application is exercised under heavy load; on the back end these functions may be running complex queries, handling large volumes of data, etc.
The following questions are to be addressed:
  • What is the maximum load the system can sustain before it breaks down?
  • How does the system break down?
  • Is the system able to recover once it has crashed?
  • In how many ways can the system break, and which are the weak nodes when handling unexpected load?
Volume Testing
Volume testing verifies that the application's performance is not affected by the volume of data it handles. To execute a volume test, a huge volume of data is generally entered into the database. The test can be incremental or steady; in the incremental form, the volume of data is increased gradually.
Generally, as an application is used, its database grows, so it is necessary to test the application against a heavy database. A good example is the website of a new school or college: it stores little data initially, but after 5-10 years the data stored in its database is far greater.
The most common recommendation from this test is tuning of the DB queries that access the database. In some cases the response time of a query against a large database is high, so the query needs to be rewritten, or indexes, joins, etc. need to be added.
Capacity Testing
=> Is the application capable of meeting business volume under both normal and peak load conditions?
Capacity testing is generally done for future prospects.  Capacity testing addresses the following:
  1. Will the application be able to support the future load?
  2. Is the environment capable of standing up to the upcoming increased load?
  3. What additional resources are required to make the environment capable enough?
Capacity testing is used to determine how many users and/or transactions a given web application can support while still meeting performance goals. During this testing, resources such as processor capacity, network bandwidth, memory usage, disk capacity, etc. are considered and altered to meet the goal.
Online Banking is a perfect example of where capacity testing could play a major part.
Reliability/Recovery Testing
Reliability or Recovery Testing verifies whether the application is able to return to its normal state after a failure or abnormal behavior, and how long it takes to do so (in other words, a time estimation).
Consider an online trading site that experiences a failure where users cannot buy or sell shares at a certain point of the day (peak hours) but can do so again after an hour or two. In that case we can say the application is reliable, i.e. it recovered from the abnormal behavior.
In addition to the above sub-forms of performance testing, there are some more fundamental ones that are prominent:
Smoke Test:
  • How does the new version of the application perform compared to previous versions?
  • Is any performance degradation observed in any area of the new version?
  • Which area should developers focus on next to address performance issues in the new version of the application?
Component Test:
  • Is the component responsible for the performance issue?
  • Is the component doing what is expected, and has it been optimized?
Endurance Test:
  • Will the application be able to perform well enough over a long period of time?
  • Are there any potential reasons that could slow the system down over time?
  • Are there third-party tool and/or vendor integrations, and could those interactions make the application slower?
How does Functional Testing differ from Performance Testing?
Functional vs Performance Testing: functional testing verifies that each feature behaves correctly against its requirements, typically with a single user and without regard to speed, while performance testing measures how fast, stable and scalable those same features are under realistic and peak user loads.

Identification of components for testing

In an ideal scenario, all components should be performance tested. However, due to time and other business constraints, that may not be possible. Hence, identifying the components to test is one of the most important tasks in performance testing.
The following components must be included in performance testing:
#1. Functional, business critical features
Components that have a Customer Service Level Agreement or those having complex business logic (and are critical for the business’s success) should be included.
Example:  Checkout and Payment for an E-commerce site like eBay.
#2. Components that process high volumes of data
Components, especially background jobs, are to be included for sure. Example: the upload and download features of a file-sharing website.
#3. Components which are commonly used
A component that is frequently used by end-users, jobs scheduled multiple times in a day, etc.
Example: Login and Logout.
#4. Components interfacing with one or more application systems
In a system involving multiple applications that interact with one another, all the interface components must be deemed critical for performance testing.
Example: e-commerce sites interface with online banking sites for payments, which are external third-party applications. This should definitely be part of performance testing.

Tools for performance testing

Sure, you could set up a million computers with a million different credentials and have all of them log in at once while you monitor the performance. Obviously that is not practical, and even if we did it we would still need some sort of monitoring infrastructure.
The way this situation is handled is through virtual users (VUs). For all our tests, the VUs behave just the way real users would.
Performance testing tools are employed to create as many VUs as required and to simulate real-world conditions. They are also used to test peak load usage, the breakdown point, long-term usage, etc.
Tools make all of this possible quickly, with limited resources, and with reliable results. There is a variety of tools available in the market: licensed, freeware and open source.
A few such tools are:
  • HP LoadRunner,
  • Jmeter,
  • Silk Performer,
  • NeoLoad,
  • Web Load,
  • Rational Performance Tester (RTP),
  • VSTS,
  • Loadstorm,
  • Web Performance,
  • LoadUI,
  • Loadster,
  • Load Impact,
  • OpenSTA,
  • QEngine,
  • Cloud Test,
  • Httperf,
  • App Loader,
  • Qtest,
  • RTI,
  • Apica LoadTest,
  • Forecast,
  • WAPT,
  • Monitis,
  • Keynote Test Perspective,
  • Agile Load, etc.
The tool selection depends on budget, technology used, purpose of testing, nature of the applications, performance goals being validated, infrastructure, etc.
HP LoadRunner captures the majority of the market due to:
  1. Versatility – it can be used for Windows as well as web-based applications, and it works with many kinds of technologies.
  2. Test results – it provides in-depth insights that can be used for tuning the application.
  3. Easy integrations – it works with diagnostics tools like HP SiteScope and HP Diagnostics.
  4. Analysis – the Analysis utility provides a variety of features that help in deep analysis.
  5. Robust reports – LoadRunner has a good reporting engine and provides a variety of reporting formats.
  6. It also comes with an Enterprise package.
The only flip side is its license cost, which is a little on the expensive side. That is why other open-source or affordably licensed tools, often specific to a technology or protocol and with limited analysis and reporting capabilities, have emerged in the market.
Still, the HP LoadRunner is a clear winner.

Future in Performance Testing Career

Performance testing is easy to learn but needs a lot of dedication to master. It is like a mathematics subject where you have to build your concepts. Once the concepts are in place, they can be applied to almost any tool; even though the scripting language, the logic and the look and feel differ from tool to tool, the approach to performance testing is almost always the same.
I would highly recommend this hot and booming technology, and that you enhance your skills by learning it. Mastering performance testing could be just what you are looking for to move ahead in your software testing career.

Conclusion

In this article we have covered most of the information required to build a base for moving ahead and understanding performance testing. In the next article we will apply these concepts and walk through the key activities of performance testing.
LoadRunner is going to be our vehicle on the journey, but the destination we want to reach is an understanding of everything about performance testing.

Performance Testing Tools

How to choose a value randomly from the list:


To demonstrate the example, we will use the sample application that ships with LoadRunner (the HP Web Tours application), which lets us book flight tickets.
The depart and arrive fields are drop-down lists of cities.
web_submit_data("reservations.pl",
        "Action=http://127.0.0.1:1080/cgi-bin/reservations.pl",
        "Method=POST",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/reservations.pl?page=welcome",
        "Snapshot=t4.inf",
        "Mode=HTML",
        ITEMDATA,
        "Name=advanceDiscount", "Value=0", ENDITEM,
        "Name=depart", "Value=London", ENDITEM,
        "Name=departDate", "Value=03/28/2014", ENDITEM,
        "Name=arrive", "Value=Paris", ENDITEM,
        "Name=returnDate", "Value=03/29/2014", ENDITEM,
        "Name=numPassengers", "Value=1", ENDITEM,
        "Name=roundtrip", "Value=on", ENDITEM,
        "Name=seatPref", "Value=Aisle", ENDITEM,
        "Name=seatType", "Value=Coach", ENDITEM,
        "Name=.cgifields", "Value=roundtrip", ENDITEM,
        "Name=.cgifields", "Value=seatType", ENDITEM,
        "Name=.cgifields", "Value=seatPref", ENDITEM,
        "Name=findFlights.x", "Value=45", ENDITEM,
        "Name=findFlights.y", "Value=6", ENDITEM,
        LAST);

LoadRunner recorded the script above when the depart city was selected as “London” and the arrive city as “Paris”.
Now we want to pick a random value for depart and arrive from the list of available values.

Solution:

The simple solution is to capture the values and perform parameterization.
But let us do it by capturing the values at runtime using correlation and then selecting a random value programmatically:

1.     Capture the list of values.
2.     Get a random value from the array.
3.     Use that value in the next web_submit_data or web_url step.
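As a side note, newer versions of LoadRunner also provide the lr_paramarr_random function, which returns a random element from a parameter array saved with "ORD=ALL" in a single call. A minimal sketch, assuming the same "places" parameter array that is captured by the correlation in the full solution below:

    // Sketch only: pick a random captured city in one call.
    // Assumes web_reg_save_param(..., "ORD=ALL", ...) has already saved the
    // parameter array "places", as in the full solution that follows.
    lr_save_string( lr_paramarr_random("places"), "depart" );
    lr_output_message( "Random depart city: %s", lr_eval_string("{depart}") );

The manual approach below is still worth walking through, since it shows what such helper functions do under the hood.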

If we check the Code Generation Log, we can see the list of values available for depart and arrive.



Instead of discussing this theoretically, let us go through the action below.
Explanatory comments are included alongside the statements where required.
Solution:

Action()
{

    int place_count,i;
    char Place[100];
    web_reg_save_param("places","LB=<option value=\"","RB=\">","ORD=ALL",LAST);
/*web_reg_save_param should be placed just above the request
Here we want to exclude the double quotes, so we used \" in both LB and RB.
Also we have used ORD=ALL to capture all the options
*/

    lr_start_transaction("Flights");

    web_url("welcome.pl",
        "URL=http://127.0.0.1:1080/cgi-bin/welcome.pl?page=search",
        "Resource=0",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/nav.pl?page=menu&in=home",
        "Snapshot=t3.inf",
        "Mode=HTML",
        LAST);

    lr_end_transaction("Flights",LR_AUTO);
  

//Capturing the Number of places found using correlation

    place_count=atoi(lr_eval_string("{places_count}"));
    lr_output_message("Number of places= %d",place_count);
    
// output: Action.c(47): Number of places= 18
//Here I have used lr_output_message, the output in the Replay Log is shown along with the Line number.

    for(i=1;i<=place_count;i++)
    {
        sprintf (Place,"{places_%d}",i );
      // save the value of Place to the parameter "city"
      lr_save_string( lr_eval_string(Place), "city" );
      lr_message( lr_eval_string("{city}") );

    }

/*
Output obtained from above For Loop:

Frankfurt
London
Los Angeles
Paris
Portland
San Francisco
Seattle
Sydney
Zurich
Frankfurt
London
Los Angeles
Paris
Portland
San Francisco
Seattle
Sydney
Zurich
*/
   
/*  As we can see in the above output, the values are duplicated. In total there are 9 cities in the list, but we have captured 18: 9 from depart and 9 from arrive. Since the two lists are the same, we will select from only half of the list, as shown below.
*/
    //code to select random value
    //(place_count)/2 will select only 9 out of 18.

    sprintf (Place,"{places_%d}",1 + rand() % (place_count/2) );

      // save the value of Place to the parameter "depart"
      lr_save_string( lr_eval_string(Place), "depart" );
      lr_message( "City Selected for Depart : %s", lr_eval_string("{depart}") );

      //Output: City Selected for Depart : Seattle
//Here I have used lr_message, so in the Replay log the message is displayed without the Line number


    sprintf (Place,"{places_%d}",1 + rand() % (place_count/2) );
      // save the value of Place to the parameter "arrive"
      lr_save_string( lr_eval_string(Place), "arrive" );
      lr_message( "City Selected for Arrival : %s", lr_eval_string("{arrive}") );

      // Output: City Selected for Arrival : Portland
      
//Parameterizing the depart date as today's date
    lr_save_datetime("%m/%d/%Y", DATE_NOW, "departDate");
    lr_output_message("Depart Date is %s",lr_eval_string("{departDate}"));

    //Output: Action.c(103): Depart Date is 03/29/2014

// Parameterizing the return date as today's date + 3 days

    lr_save_datetime("%m/%d/%Y", DATE_NOW + ONE_DAY * 3, "returnDate");
    lr_output_message("Return Date is %s",lr_eval_string("{returnDate}"));

    //Output: Action.c(106): Return Date is 04/01/2014
    
//Parameterizing and passing the values of depart, departDate, arrive and returnDate
    lr_start_transaction("Find Flight");

    web_submit_data("reservations.pl",
        "Action=http://127.0.0.1:1080/cgi-bin/reservations.pl",
        "Method=POST",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/reservations.pl?page=welcome",
        "Snapshot=t4.inf",
        "Mode=HTML",
        ITEMDATA,
        "Name=advanceDiscount", "Value=0", ENDITEM,
        "Name=depart", "Value={depart}", ENDITEM,
        "Name=departDate", "Value={departDate}", ENDITEM,
        "Name=arrive", "Value={arrive}", ENDITEM,
        "Name=returnDate", "Value={returnDate}", ENDITEM,
        "Name=numPassengers", "Value=1", ENDITEM,
        "Name=roundtrip", "Value=on", ENDITEM,
        "Name=seatPref", "Value=Aisle", ENDITEM,
        "Name=seatType", "Value=Coach", ENDITEM,
        "Name=.cgifields", "Value=roundtrip", ENDITEM,
        "Name=.cgifields", "Value=seatType", ENDITEM,
        "Name=.cgifields", "Value=seatPref", ENDITEM,
        "Name=findFlights.x", "Value=45", ENDITEM,
        "Name=findFlights.y", "Value=6", ENDITEM,
        LAST);

    lr_end_transaction("Find Flight",LR_AUTO);
    return 0;
}
In the script above we could also add a check that the randomly selected depart and arrive cities are not the same.
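A minimal sketch of such a check, reusing the Place buffer, place_count and the "places" parameter array from the script above: keep re-selecting the arrive city until it differs from the chosen depart city.

    // Sketch only: re-pick the arrive city until it differs from depart.
    // Assumes "depart", "arrive", Place and place_count already exist as above.
    while ( strcmp( lr_eval_string("{arrive}"), lr_eval_string("{depart}") ) == 0 )
    {
        sprintf( Place, "{places_%d}", 1 + rand() % (place_count/2) );
        lr_save_string( lr_eval_string(Place), "arrive" );
    }
    lr_output_message( "Final arrival city: %s", lr_eval_string("{arrive}") );

This loop would be placed after the two random selections and before the web_submit_data step.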


LoadRunner Runtime Settings

When running a training or mentoring session, I am often asked what runtime settings should be used, as if there were a magical list of settings that is always correct for any testing situation. Obviously you select runtime settings that are appropriate for what you are trying to achieve with your test, but the funny thing is that there is actually a small list of settings that are suitable for most situations. Read on…
General: Run Logic
Whenever I am using a vuser type that allows multiple actions in a single script, I will create a separate action for each business process and put appropriate percentage weightings on each action. It is very unusual to have to do anything more complicated than this. I don’t usually use the “sequential” option or create blocks unless I need to have fractional percentage weightings for a business process – percentages must be integer values, so to run a business process 0.1% of the time you could create a block that runs 1% of the time, and put an action in the block that runs 10% of the time.
It is also rare to set a script in a scenario to run for a specified number of iterations (duration is usually controlled by the scenario schedule, or the script is set to run indefinitely). Generally “number of iterations” is only used when running the script in VuGen.
General: Pacing
  • “As soon as the previous iteration ends” is used when running in VuGen or when loading/verifying data. Do not use this for load testing
  • I have never seen the point of the “After the previous iteration ends” option. Why would you want to run an unknown number of transactions per hour against the system?
  • Don’t use “At fixed intervals”. If something causes your users to get “in step”, they will tend to stay that way and continue to all hit the server at the same time.
  • “At random intervals” is definitely the way to go. Obviously, for your users to create a certain number of orders per hour, the iteration time must average 3600 divided by the number of iterations per hour (e.g. 90 iterations per hour per user means an average pacing of 40 seconds). Do not make the lower boundary value any smaller than the maximum time it takes to complete the business process, or you will end up creating fewer transactions per hour than you intend to.
General: Log
  • Logging creates additional overhead on your load generators, and can create huge log files.
  • I log absolutely everything when debugging in VuGen.
  • When running the script as part of a scenario, I leave extended logging on but change the logging to “Send messages only when an error occurs”. This gives a little more information than turning logging off entirely, and won’t create any additional overhead while everything is running smoothly (and if the system is not running smoothly you are going to need to stop the test and investigate anyway).
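If you only need extra detail around a suspect step, logging can also be raised and lowered from within the script using lr_set_debug_message. A hedged sketch (the request shown is just a placeholder):

    // Temporarily switch on extended logging with parameter substitution
    // around a step we want to investigate, then switch it off again.
    lr_set_debug_message( LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS, LR_SWITCH_ON );

    web_url("suspect_step",                 // placeholder step name
        "URL=http://127.0.0.1:1080/cgi-bin/welcome.pl",
        "Mode=HTML",
        LAST);

    lr_set_debug_message( LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS, LR_SWITCH_OFF );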
General: Think Time
  • Just like the pacing setting, I think that it is a good idea to put some randomness in your think times.
  • I use a random percentage of 50-150% of recorded think times.
  • Use “Ignore think time” if you are debugging in VuGen or if you are loading/verifying data.
General: Additional Attributes
  • This option is ignored by most people. It is used to create a parameter with a given value without having to edit the script (as runtime settings can be overridden in the Controller).
  • For example, you can create a parameter named ServerName holding the address of the test environment. If you were testing in more than one test environment at a time, this would save some time.
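A short sketch of how such an attribute is typically read inside a script. ServerName is the example attribute named above; the default host and the guard against a missing value are assumptions, not part of the original discussion:

Action()
{
    char *server;

    // "ServerName" is assumed to be defined under Run-time Settings >
    // General > Additional Attributes (it can be overridden per script
    // in the Controller).
    server = lr_get_attrib_string("ServerName");
    if (server == NULL || strlen(server) == 0)
        server = "test1.example.com";        // hypothetical default host

    lr_save_string(server, "ServerName");

    web_url("home",
        "URL=http://{ServerName}/",          // placeholder request
        LAST);

    return 0;
}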
General: Miscellaneous
  • Continue on error is generally only going to be used if you have written code to do something when you encounter an error; usually the default behaviour of ending the current iteration and starting the next one is sufficient. I don’t advise anyone to try to write a script that handles errors in the same way as a real user would, because it creates a lot of additional work for very little benefit, but doing something simple like writing some useful information to the logs and then calling lr_exit(LR_EXIT_ACTION_AND_CONTINUE, LR_FAIL) can be useful (see the sketch after this list).
  • “Fail open transactions on lr_error_message” should always be ticked. If you are raising an error, you should fail the transaction step that you are performing.
  • “Generate snapshot on error” is useful. If it is a web script, any error messages should be added to your content check rules.
  • Run your virtual users as threads unless you have code that is not thread-safe or there is some other reason to run them as processes. The overall memory footprint on your load generators will be higher if you run as a process.
  • I never use the “Define each action as a transaction” option. If I want a transaction in my script I will add it myself with lr_start_transaction.
  • I never use “Define each step as a transaction” either. If it is a web script, I can use the transaction breakdown graph to get this information, otherwise I will add the transactions myself.
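As mentioned under “Continue on error” above, a minimal, hedged sketch of that simple error-handling pattern might look like the following; the transaction name, request and the {UserName} parameter are placeholders:

Action()
{
    lr_start_transaction("Submit Order");            // placeholder transaction

    web_url("submit_order",                          // placeholder request
        "URL=http://127.0.0.1:1080/cgi-bin/welcome.pl",
        "Mode=HTML",
        LAST);

    // If the step failed, log some useful context, fail the transaction and
    // skip the rest of this Action. Reaching this code after a failed request
    // relies on "Continue on error" being enabled.
    if (web_get_int_property(HTTP_INFO_RETURN_CODE) >= 400)
    {
        lr_error_message("Order submission failed for user %s",
                         lr_eval_string("{UserName}"));    // hypothetical parameter
        lr_end_transaction("Submit Order", LR_FAIL);
        lr_exit(LR_EXIT_ACTION_AND_CONTINUE, LR_FAIL);
    }

    lr_end_transaction("Submit Order", LR_AUTO);
    return 0;
}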
Network: Speed Simulation
  • Not all vuser types have this option available.
  • Most of the time my virtual users will use the maximum bandwidth.
  • If I want to emulate users with bandwidth constraints, I will do this in a separate scenario.
  • Google Calculator is handy for calculating bitrates if your bitrate is not available from the drop-down list, e.g. “256 Kbps in bps”.
All of the following settings only apply to web-based scripts. Each vuser type will have its own runtime setting options. It is important to know what they mean and how they will influence your test results before running any tests that you plan to report on.
Browser: Browser Emulation
  • Some people get confused by the User-Agent (browser to be emulated) setting. If 90% of your users use Internet Explorer 6.0 and the rest use Firefox 1.5, you don’t have to change the runtime settings for your users to match this. All it changes is the string that is sent in the “User-Agent” field of your HTTP requests. This is completely pointless unless your application has been written to serve different content to different browsers based on the User-Agent field.
  • TODO
Internet Protocol: Proxy
  • Generally people won’t be using your web applications through your proxy server, so it shouldn’t be part of your test either.
  • If you start getting errors that are due to a proxy server rather than the system under test, it will just confuse the people who have to fix the problem.
  • A proxy server will also make IP-based load balancing ineffective.
  • If it’s an intranet application and everyone will be using the application through the company’s proxy, then the proxy server should be explicitly declared to be in scope for your load test. You should make sure that you have an identical proxy server in your test environment, or that you have permission to generate load on a piece of production infrastructure.
Internet Protocol: Preferences
  • TODO
Internet Protocol: Preferences - Options
  • These settings are default values specified by Mercury, rather than being inherited from the web browser that is installed on your workstation. Generally you will not need to change them, but be aware that they are here.
Internet Protocol: Download Filters
  • Download filters are a quick way of preventing your scripts from downloading content from certain URLs or hosts/domains.
  • I generally use this feature when the web application in the test environment contains third-party images used for tracking website usage (e.g. images from Webtrends or Red Sheriff etc).
  • I think it is better to specify which hosts your script is allowed to connect to, rather than which hosts your script can’t connect to (because it’s easy to miss one accidentally, or the application may change and refer to a new third-party domain).
  • Use web_add_auto_filter if you want to specify this in your script rather than in your runtime settings (see the sketch below).
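For example, a one-line filter that stops the script requesting content from a (hypothetical) third-party tracking host, placed before the first request, e.g. in vuser_init:

    // Exclude requests to an illustrative third-party tracking host.
    web_add_auto_filter("Action=Exclude",
                        "Host=stats.example-tracker.com",
                        LAST);

Following the “allow only known hosts” preference above, the same function can instead be used with "Action=Include" and the hosts that the script is allowed to connect to.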
Internet Protocol: ContentCheck
  • I have talked about Content Check rules before; I think that if you aren’t using them already, then you are not getting the most out of the LoadRunner feature-set. A minimal scripted equivalent using web_reg_find is sketched below.
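A minimal sketch of the same idea expressed in the script with web_reg_find; the error and confirmation strings are placeholders for whatever your application actually returns:

    // Fail the next step if an application error page comes back, and also
    // check that the expected confirmation text is present.
    web_reg_find("Text=An error has occurred",       // placeholder error string
                 "Fail=Found",
                 LAST);
    web_reg_find("Text=Booking confirmed",           // placeholder success string
                 "Fail=NotFound",
                 LAST);

    web_url("confirm_booking",                       // placeholder request
        "URL=http://127.0.0.1:1080/cgi-bin/welcome.pl",
        "Mode=HTML",
        LAST);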