Save a Dynamic Parameter Value in a Text File Using LoadRunner Scripting

I have a value which is dynamic for each iteration. I have captured that value using the web_reg_save_param (correlation) function. Any enhancement to the script is most welcome.
Action_Main_URL()
{
	int i;
	char Length[100];
	long file;
	char * filename = "c:\\Session.txt";

	if ((file = (long)fopen(filename, "a+")) == NULL)
	{
		lr_output_message("Unable to create %s", filename);
		return -1;
	}

	web_reg_save_param("Cor_Session_Id", "LB= value='", "RB='", "Ord=6", "IgnoreRedirections=Yes", "Search=Body", "RelFrameId=1", LAST);
	web_url("Workplace",
		"URL=http://server/Workplace",
		"Resource=0",
		"RecContentType=text/html",
		"Referer=",
		"Snapshot=t1.inf",
		"Mode=HTML",
		LAST);

	lr_start_transaction("TS_Main_URL_Login");

	web_reg_find("Text=Record Search and Check-In", "SaveCount=Value_Count", LAST);

	web_submit_data("WcmSignIn.jsp",
		"Action=http://server/Workplace/WcmSignIn.jsp?eventTarget=signInModule&eventName=SignIn",
		"Method=POST",
		"RecContentType=text/html",
		"Referer=http://server/Workplace/WcmSignIn.jsp?targetUrl=WcmDefault.jsp&targetBase=http%3A%2F%2Fserver%2FWorkplace&sessionId={Cor_Session_Id}&originIp=10.x.x.x&originPort=",
		"Snapshot=t2.inf",
		"Mode=HTML",
		ITEMDATA,
		"Name=targetBase", "Value=http://server/Workplace", ENDITEM,
		"Name=originPort", "Value=", ENDITEM,
		"Name=targetUrl", "Value=Default.jsp", ENDITEM,
		"Name=encodedSessionId", "Value=null", ENDITEM,
		"Name=originIp", "Value=10.x.x.x", ENDITEM,
		"Name=sessionId", "Value={Cor_Session_Id}", ENDITEM,
		"Name=browserTime1", "Value=Sat Jan 1 05 EST 2011", ENDITEM,
		"Name=browserTime2", "Value=Wed Jun 15 05 EDT 2011", ENDITEM,
		"Name=browserOffset1", "Value=300", ENDITEM,
		"Name=browserOffset2", "Value=240", ENDITEM,
		"Name=clientTimeZone", "Value=", ENDITEM,
		"Name=appId", "Value=Workplace", ENDITEM,
		"Name=userId", "Value=userid", ENDITEM,
		"Name=password", "Value=password", ENDITEM,
		EXTRARES,
		"Url=images/web/common/Banner.jpg", "Referer=http://server/Workplace/HomePage.jsp?mode=reset", ENDITEM,
		LAST);
	if (atoi(lr_eval_string("{Value_Count}")) > 0)
	{
		lr_output_message("Page found successfully.");
	}
	else
	{
		lr_error_message("Page is not found.");
		fclose(file);	// close the file before aborting the iteration
		lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE, LR_FAIL);
		return 0;
	}

	lr_end_transaction("TS_Main_URL_Login", LR_AUTO);

	sprintf(Length, "\n%s,", lr_eval_string("{Cor_Session_Id}"));
	i = fwrite(Length, strlen(Length), 1, file);	// write only the formatted string, not the whole 100-byte buffer
	if (i > 0)
		lr_output_message("Successfully wrote %d record", i);
	fclose(file);
	return 0;
}
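A slightly simpler variant (a sketch only, assuming the same Cor_Session_Id parameter and file path) avoids the fixed-size buffer by writing the evaluated parameter directly with fprintf:

Action_Main_URL_Alt()
{
	long file;
	char * filename = "c:\\Session.txt";

	// Open the file in append mode; abort the iteration if it cannot be opened.
	if ((file = (long)fopen(filename, "a+")) == NULL)
	{
		lr_error_message("Unable to open %s", filename);
		return -1;
	}

	// ... web_reg_save_param / web_url / web_submit_data steps as in the script above ...

	// Write the correlated value followed by a newline, with no padding bytes.
	fprintf(file, "%s\n", lr_eval_string("{Cor_Session_Id}"));
	fclose(file);
	return 0;
}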

Requirement Gathering for Performance test Project

Here are the ideal requirements to be included while developing a performance test plan.

• Deadlines available to complete performance testing, including the scheduled deployment date.
• Whether to use internal or external resources to perform the tests. This will largely depend on time scales and in-house expertise (or lack thereof).
• Test environment design agreed upon. Remember that the test environment should be as close an approximation of the live environment as you can achieve and will require longer to create than you estimate.
• Ensuring that a code freeze applies to the test environment within each testing cycle.
• Ensuring that the test environment will not be affected by other user activity. Nobody else should be using the test environment while performance test execution is taking place; otherwise, there is a danger that the test execution and results may be compromised.
• All performance targets identified and agreed to by appropriate business stakeholders. This means consensus from all involved and interested parties on the performance targets for the application. 
• The key application transactions identified, documented, and ready to script. Remember how vital it is to have correctly identified the key transactions to script. Otherwise, your performance testing is in danger of becoming a wasted exercise.
• Which parts of transactions (such as login or time spent on a search) should be monitored separately. This will be used in Step 3 for “checkpointing.”
• Identify the input, target, and runtime data requirements for the transactions that you select. This critical consideration ensures that the transactions you script run correctly and that the target database is realistically populated in terms of size and content. Data is critical to performance testing. Make sure that you can create enough test data of the correct type within the time frames of your testing project. You may need to look at some form of automated data management, and don’t forget to consider data security and confidentiality.
• Performance tests identified in terms of number, type, transaction content, and virtual user deployment. You should also have decided on the think time, pacing, and injection profile for each test transaction deployment.
• Identify and document server, application server, and network KPIs. Remember that you must monitor the application landscape as comprehensively as possible to ensure that you have the necessary information available to identify and resolve any problems that occur.
• Identify the deliverables from the performance test in terms of a report on the test’s outcome versus the agreed performance targets. It’s a good practice to produce a document template that can be used for this purpose.
• A procedure defined for submission of performance defects discovered during testing cycles to the development team or application vendor. This is an important consideration that is often overlooked. What happens if, despite your best efforts, you find major application-related problems? You need to build contingency into your test plan to accommodate this possibility. There may also be the added complexity of involving offshore resources in the defect submission process. If your plan is to carry out the performance testing in-house, you will also need to address additional points relating to the testing team.

Important aspects of Performance test project

For a performance testing project to be successful, both the approach to testing performance and the testing itself must be relevant to the context of the project. Without an understanding of the project context, performance testing is bound to focus on only those items that the performance tester or test team assumes to be important, as opposed to those that truly are important, frequently leading to wasted time, frustration, and conflicts.

The project context is nothing more than those things that are, or may become, relevant to achieving project success. This may include, but is not limited to: 
  • The overall vision or intent of the project 
  • Performance testing objectives 
  • Performance success criteria 
  • The development life cycle 
  • The project schedule 
  • The project budget 
  • Available tools and environments 
  • The skill set of the performance tester and the team 
  • The priority of detected performance concerns 
  • The business impact of deploying an application that performs poorly 
Some examples of items that may be relevant to the performance-testing effort in your project context include: 
Project vision: Before beginning performance testing, ensure that you understand the current project vision. The project vision is the foundation for determining what performance testing is necessary and valuable. Revisit the vision regularly, as it has the potential to change as well. 
Purpose of the system: Understand the purpose of the application or system you are testing. This will help you identify the highest-priority performance characteristics on which you should focus your testing. You will need to know the system’s intent, the actual hardware and software architecture deployed, and the characteristics of the typical end user. 
Customer or user expectations. Keep customer or user expectations in mind when planning performance testing. Remember that customer or user satisfaction is based on expectations, not simply compliance with explicitly stated requirements. 
Business drivers: Understand the business drivers – such as business needs or opportunities – that are constrained to some degree by budget, schedule, and/or resources. It is important to meet your business requirements on time and within the available budget. 
Reasons for testing performance. Understand the reasons for conducting performance testing very early in the project. Failing to do so might lead to ineffective performance testing. These reasons often go beyond a list of performance acceptance criteria and are bound to change or shift priority as the project progresses, so revisit them regularly as you and your team learn more about the application, its performance, and the customer or user. 
Value that performance testing brings to the project. Understand the value that performance testing is expected to bring to the project by translating the project- and business-level objectives into specific, identifiable, and manageable performance testing activities. Coordinate and prioritize these activities to determine which performance testing activities are likely to add value. 
Project management and staffing: Understand the team’s organization, operation, and communication techniques in order to conduct performance testing effectively. 
Process. Understand your team’s process and interpret how that process applies to performance testing. If the team’s process documentation does not address performance testing directly, extrapolate the document to include performance testing to the best of your ability, and then get the revised document approved by the project manager and/or process engineer. 
Compliance criteria: Understand the regulatory requirements related to your project. Obtain compliance documents to ensure that you have the specific language and context of any statement related to testing, as this information is critical to determining compliance tests and ensuring a compliant product. Also understand that the nature of performance testing makes it virtually impossible to follow the same processes that have been developed for functional testing. 
Project schedule: Be aware of the project start and end dates, the hardware and environment availability dates, the flow of builds and releases, and any checkpoints and milestones in the project schedule. 

Difference between an Application server and a Web server?

A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.

The Web server:
A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser. 


Understand that a Web server's delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging. 
While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability such as load balancing, caching, and clustering—features oftentimes erroneously assigned as features reserved only for application servers. 

The application server:
As for the application server, an application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world). 

Such application server clients can include GUIs (graphical user interface) running on a PC, a Web server, or even other application servers. The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls and not static HTML, the client can employ the exposed business logic however it wants.

In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques.

Let's look at the example below:
As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I'll show you one scenario that doesn't use an application server and another that does. Seeing how these scenarios differ will help you to see the application server's function. 

Scenario 1: Web server without an application server
In the first scenario, a Web server alone provides the online store's functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser.
To summarize, a Web server simply processes HTTP requests by responding with HTML pages.

Scenario 2: Web server with an application server
Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response. 
In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client. 
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.
Note: Recently, XML Web services have blurred the line between application servers and Web servers. By passing an XML payload to a Web server, the Web server can now process the data and respond much as application servers have in the past.
Additionally, most application servers also contain a Web server, meaning you can consider a Web server a subset of an application server. While application servers contain Web server functionality, developers rarely deploy application servers in that capacity. Instead, when needed, they often deploy standalone Web servers in tandem with application servers. Such a separation of functionality aids performance (simple Web requests won't impact application server performance), deployment configuration (dedicated Web servers, clustering, and so on), and allows for best-of-breed product selection.

LR function web_reg_save_param() in LoadRunner (detailed description)

This function registers a request to save dynamic data information to a parameter. 

Syntax for C language:

int web_reg_save_param( const char *ParamName, <List of Attributes>, LAST ); 

Syntax for Java language:

int object.reg_save_param( String ParamName, String[] attributeList ); 

The web_reg_save_param function is a Service function used for correlating HTML statements in Web scripts.


Object: An expression evaluating to an object of type WebApi, usually web for Java and Visual Basic. See also Function and Constant Prefixes.

ParamName: A null-terminated string indicating the name of the parameter to create.

List of Attributes: Attribute value strings (e.g., "Search=all") are not case-sensitive.

LAST: A marker that indicates the end of the argument list.

Note: Service functions perform customization tasks, such as setting proxies, authorization information, user-defined headers, and so forth. These functions do not make any change in the Web application context.

Many of the service functions specify run-time settings for a script. A setting that is set with a service function always overrides the corresponding setting made in the Run-time Settings dialog box.
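
For example (a sketch; the length value is illustrative), web_set_max_html_param_len is a service function often used alongside web_reg_save_param, and the value set in the script overrides the equivalent run-time setting:

web_set_max_html_param_len("2048");	// allow saving matched values up to 2048 characters, overriding the Run-time Settings value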

General Information
web_reg_save_param is a registration type function. It registers a request to find and save a text string within the server response. The operation is performed only after executing the next action function, such as web_url.

web_reg_save_param is only recorded when correlation during recording is enabled (see VuGen's Recording Options). VuGen must be in either URL–based recording mode, or in HTML–based recording mode with the A script containing explicit URLs only option checked (see VuGen's Recording Options).

This function registers a request to retrieve dynamic information from the downloaded page and save it to a parameter. For correlation, enclose the parameter name in braces (e.g., "{param1}") in ensuing function calls that use the dynamic data. The request registered by web_reg_save_param looks for the characters between (but not including) the specified boundaries and saves the information that begins at the byte after the left boundary and ends at the byte before the right boundary.
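
A minimal usage sketch (the URL, boundaries, and parameter names are placeholders, not from the original text) showing the register-before-request ordering and the braces syntax in a later call:

// Register the search BEFORE the step whose server response contains the value.
web_reg_save_param("SessionId", "LB=sessionId=", "RB=&", "Ord=1", "Search=Body", LAST);

// The next action function (web_url here) triggers the search on its response.
web_url("Login", "URL=http://server/login", "Mode=HTML", LAST);

// Use the saved value in a subsequent request by enclosing the parameter name in braces.
web_url("Home", "URL=http://server/home?sessionId={SessionId}", "Mode=HTML", LAST);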

If you expect leading and trailing spaces around the string and you do not want them in the parameter, add a space at the end of the left boundary, and at the beginning of the right boundary. For example, if the Web page contains the string, "Where and when do you want to travel?", the call:

web_reg_save_param("When_Txt", "LB=Where and ", "RB= do",LAST );

With a space after "and" and before "do", this call will result in "when" as the value of When_Txt. However,

web_reg_save_param("When_Txt", "LB=Where and", "RB=do",LAST );

without the spaces, will result in a value of " when ".

Embedded boundary characters are not supported. web_reg_save_param results in a simple search for the next occurrence after the most recent left boundary. For example, if you have defined the left boundary as the character "{" and the right boundary as the character "}", then with the following buffer, c is saved:

{a{b{c}

The left and right boundaries have been located. Since embedded boundaries are not supported, the "}" is matched to the most recent "{", which appears just before the c. The ORD attribute is 1. There is only one matching instance.

The web_reg_save_param function also supports array type parameters. When you specify ORD=All, all the occurrences of the match are saved in an array. Each element of the array is represented by the ParamName_index. In the following example, the parameter name is A:

web_reg_save_param("A", "LB/ic=", "Ord=All", LAST );

The first match is saved as A_1, the second match is saved as A_2, and so forth. You can retrieve the total number of matches by using the following term: ParamName_count. For example, to retrieve the total number of matches saved to the parameter array, use:

TotalNumberOfMatches=atoi(lr_eval_string("{A_count}"));
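
A short sketch (using the parameter name A from the example above) that loops over all the saved matches:

int i, count;
char pname[32];

count = atoi(lr_eval_string("{A_count}"));
for (i = 1; i <= count; i++) {
	// Build the parameter reference {A_1}, {A_2}, ... and print its value.
	sprintf(pname, "{A_%d}", i);
	lr_output_message("Match %d: %s", i, lr_eval_string(pname));
}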

This function is supported for all Web scripts, and for WAP scripts running in HTTP or Wireless Session Protocol (WSP) replay mode.

List of Attributes
Convert: The possible values are:
HTML_TO_URL: convert HTML–encoded data to a URL–encoded data format
HTML_TO_TEXT: convert HTML–encoded data to plain text format
This attribute is optional. 
IgnoreRedirections: If "IgnoreRedirections=Yes" is specified and the server response is redirection information (HTTP status code 300-303, 307), the response is not searched. Instead, after receiving a redirection response, the GET request is sent to the redirected location and the search is performed on the response from that location.
This attribute is optional. The default is "IgnoreRedirections=No".

LB: The left boundary of the parameter or the dynamic data. If you do not specify an LB value, it uses all of the characters from the beginning of the data as a boundary. Boundary parameters are case–sensitive and do not support regular expressions. To further customize the search text, use one or more text flags. This attribute is required. See the Boundary Arguments section.

NOTFOUND: The handling option when a boundary is not found and an empty string is generated.
"Notfound=error", the default value, causes an error to be raised when a boundary is not found.
"Notfound=warning" ("Notfound=empty" in earlier versions), does not issue an error. If the boundary is not found, it sets the parameter count to 0, and continues executing the script. The "warning" option is ideal if you want to see if the string was found, but you do not want the script to fail.
Note: If Continue on Error is enabled for the script, then even when NOTFOUND is set to "error", the script continues when the boundary is not found, but an error message is written to the Extended log file.
This attribute is optional.

ORD: Indicates the ordinal position or instance of the match. The default instance is 1. If you specify "All," it saves the parameter values in an array. This attribute is optional.
Note: The use of Instance instead of ORD is supported for backward compatibility, but deprecated. 
RB: The right boundary of the parameter or the dynamic data. If you do not specify an RB value, it uses all of the characters until the end of the data as a boundary. Boundary parameters are case–sensitive and do not support regular expressions. To further customize the search text, use one or more text flags. This attribute is required. See the Boundary Arguments section.

RelFrameID: The hierarchy level of the HTML page relative to the requested URL. The possible values are ALL or a number. See the RelFrameID Attribute documentation for a detailed description. This attribute is optional.
Note: RelFrameID is not supported in GUI level scripts.

SaveLen: The length of a sub–string of the found value, from the specified offset, to save to the parameter. This attribute is optional. The default is –1, indicating to save to the end of the string. 
SaveOffset: The offset of a sub–string of the found value, to save to the parameter. The offset value must be non–negative. The default is 0. This attribute is optional. 

Search: The scope of the search (where to search for the delimited data). The possible values are Headers (search only the headers), Body (search only body data, not headers), Noresource (search only the HTML body, excluding all headers and resources), or ALL (search body, headers, and resources). The default value is ALL. This attribute is optional.
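
Putting several of these attributes together, a hedged example (the boundaries, ordinal, and parameter name are illustrative only) might look like this:

web_reg_save_param("OrderId",
	"LB=orderId value='",
	"RB='",
	"Ord=2",                 // take the second occurrence
	"Search=Body",           // search only the response body
	"NotFound=warning",      // do not fail the script if the boundaries are missing
	"SaveOffset=0",
	"SaveLen=-1",            // save from the offset to the end of the matched string
	LAST);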

Use Perfmon to monitor servers and find bottlenecks

What and When to Measure
Bottlenecks occur when a resource reaches its capacity, causing the performance of the entire system to slow down. Bottlenecks are typically caused by insufficient or misconfigured resources, malfunctioning components, and incorrect requests for resources by a program.
There are five major resource areas that can cause bottlenecks and affect server performance: physical disk, memory, process, CPU, and network. If any of these resources are overutilized, your server or application can become noticeably slow or can even crash. I will go through each of these five areas, giving guidance on the counters you should be using and offering suggested thresholds to measure the pulse of your servers.
Since the sampling interval has a significant impact on the size of the log file and the server load, you should set the sample interval based on the average elapsed time for the issue to occur so you can establish a baseline before the issue occurs again. This will allow you to spot any trend leading to the issue.
Fifteen minutes will provide a good window for establishing a baseline during normal operations. Set the sample interval to 15 seconds if the average elapsed time for the issue to occur is about four hours. If the time for the issue to occur is eight hours or more, set the sampling interval to no less than five minutes; otherwise, you will end up with a very large log file, making it more difficult to analyze the data.
Hard Disk Bottleneck
Since the disk system stores and handles programs and data on the server, a bottleneck affecting disk usage and speed will have a big impact on the server’s overall performance.
Please note that if the disk objects have not been enabled on your server, you need to use the command-line tool Diskperf to enable them. Also, note that % Disk Time can exceed 100 percent and, therefore, I prefer to use % Idle Time, Avg. Disk sec/Read, and Avg. Disk sec/write to give me a more accurate picture of how busy the hard disk is. You can find more on % Disk Time in the Knowledge Base article available at support.microsoft.com/kb/310067.
Following are the counters the Microsoft Service Support engineers rely on for disk monitoring.
LogicalDisk\% Free Space This measures the percentage of free space on the selected logical disk drive. Take note if this falls below 15 percent, as you risk running out of free space for the OS to store critical files. One obvious solution here is to add more disk space.
PhysicalDisk\% Idle Time This measures the percentage of time the disk was idle during the sample interval. If this counter falls below 20 percent, the disk system is saturated. You may consider replacing the current disk system with a faster disk system.
PhysicalDisk\Avg. Disk Sec/Read This measures the average time, in seconds, to read data from the disk. If the number is larger than 25 milliseconds (ms), that means the disk system is experiencing latency when reading from the disk. For mission-critical servers hosting SQL Server® and Exchange Server, the acceptable threshold is much lower, approximately 10 ms. The most logical solution here is to replace the current disk system with a faster disk system.
PhysicalDisk\Avg. Disk Sec/Write This measures the average time, in seconds, it takes to write data to the disk. If the number is larger than 25 ms, the disk system experiences latency when writing to the disk. For mission-critical servers hosting SQL Server and Exchange Server, the acceptable threshold is much lower, approximately 10 ms. The likely solution here is to replace the disk system with a faster disk system.
PhysicalDisk\Avg. Disk Queue Length This indicates how many I/O operations are waiting for the hard drive to become available. If the value here is larger than two times the number of spindles, that means the disk itself may be the bottleneck.
Memory\Cache Bytes This indicates the amount of memory being used for the file system cache. There may be a disk bottleneck if this value is greater than 300MB.
Memory Bottleneck
A memory shortage is typically due to insufficient RAM, a memory leak, or a memory switch placed inside the boot.ini. Before I get into memory counters, I should discuss the /3GB switch.
More memory reduces disk I/O activity and, in turn, improves application performance. The /3GB switch was introduced in Windows NT® as a way to provide more memory for the user-mode programs.
Windows uses a virtual address space of 4GB (independent of how much physical RAM the system has). By default, the lower 2GB are reserved for user-mode programs and the upper 2GB are reserved for kernel-mode programs. With the /3GB switch, 3GB are given to user-mode processes. This, of course, comes at the expense of the kernel memory, which will have only 1GB of virtual address space. This can cause problems because Pool Non-Paged Bytes, Pool Paged Bytes, Free System Page Tables Entries, and desktop heap are all squeezed together within this 1GB space. Therefore, the /3GB switch should only be used after thorough testing has been done in your environment.
This is a consideration if you suspect you are experiencing a memory-related bottleneck. If the /3GB switch is not the cause of the problems, you can use these counters for diagnosing a potential memory bottleneck.
Memory\% Committed Bytes in Use This measures the ratio of Committed Bytes to the Commit Limit—in other words, the amount of virtual memory in use. This indicates insufficient memory if the number is greater than 80 percent. The obvious solution for this is to add more memory.
Memory\Available Mbytes This measures the amount of physical memory, in megabytes, available for running processes. If this value is less than 5 percent of the total physical RAM, that means there is insufficient memory, and that can increase paging activity. To resolve this problem, you should simply add more memory.
Memory\Free System Page Table Entries This indicates the number of page table entries not currently in use by the system. If the number is less than 5,000, there may well be a memory leak.
Memory\Pool Non-Paged Bytes This measures the size, in bytes, of the non-paged pool. This is an area of system memory for objects that cannot be written to disk but instead must remain in physical memory as long as they are allocated. There is a possible memory leak if the value is greater than 175MB (or 100MB with the /3GB switch). A typical Event ID 2019 is recorded in the system event log.
Memory\Pool Paged Bytes This measures the size, in bytes, of the paged pool. This is an area of system memory used for objects that can be written to disk when they are not being used. There may be a memory leak if this value is greater than 250MB (or 170MB with the /3GB switch). A typical Event ID 2020 is recorded in the system event log.
Memory\Pages per Second This measures the rate at which pages are read from or written to disk to resolve hard page faults. If the value is greater than 1,000, as a result of excessive paging, there may be a memory leak.
Processor Bottleneck
An overwhelmed processor can be due to the processor itself not offering enough power or it can be due to an inefficient application. You must double-check whether the processor spends a lot of time in paging as a result of insufficient physical memory. When investigating a potential processor bottleneck, the Microsoft Service Support engineers use the following counters.
Processor\% Processor Time This measures the percentage of elapsed time the processor spends executing a non-idle thread. If the percentage is greater than 85 percent, the processor is overwhelmed and the server may require a faster processor.
Processor\% User Time This measures the percentage of elapsed time the processor spends in user mode. If this value is high, the server is busy with the application. One possible solution here is to optimize the application that is using up the processor resources.
Processor\% Interrupt Time This measures the time the processor spends receiving and servicing hardware interruptions during specific sample intervals. This counter indicates a possible hardware issue if the value is greater than 15 percent.
System\Processor Queue Length This indicates the number of threads in the processor queue. The server doesn’t have enough processor power if the value is more than two times the number of CPUs for an extended period of time.
Network Bottleneck
A network bottleneck, of course, affects the server’s ability to send and receive data across the network. It can be an issue with the network card on the server, or perhaps the network is saturated and needs to be segmented. You can use the following counters to diagnosis potential network bottlenecks.
Network Interface\Bytes Total/Sec This measures the rate at which bytes are sent and received over each network adapter, including framing characters. The network is saturated if you discover that more than 70 percent of the interface is consumed. For a 100-Mbps NIC, 70 percent utilization corresponds to roughly 8.75 MB/sec (100 Mbps = 12.5 MB/sec; 12.5 MB/sec * 70 percent ≈ 8.75 MB/sec). In a situation like this, you may want to add a faster network card or segment the network.
Network Interface\Output Queue Length This measures the length of the output packet queue, in packets. There is network saturation if the value is more than 2. You can address this problem by adding a faster network card or segmenting the network.
Process Bottleneck
Server performance will be significantly affected if you have a misbehaving process or non-optimized processes. Thread and handle leaks will eventually bring down a server, and excessive processor usage will bring a server to a crawl. The following counters are indispensable when diagnosing process-related bottlenecks.
Process\Handle Count This measures the total number of handles that are currently open by a process. This counter indicates a possible handle leak if the number is greater than 10,000.
Process\Thread Count This measures the number of threads currently active in a process. There may be a thread leak if the difference between the minimum and maximum number of threads is more than 500.
Process\Private Bytes This indicates the amount of memory that this process has allocated that cannot be shared with other processes. If the difference between the minimum and maximum values is greater than 250MB, there may be a memory leak.
Wrapping Up
Now you know what counters the Service Support engineers at Microsoft use to diagnose various bottlenecks. Of course, you will most likely come up with your own set of favorite counters tailored to suit your specific needs. You may want to save time by not having to add all your favorite counters manually each time you need to monitor your servers. Fortunately, there is an option in the Performance Monitor that allows you to save all your counters in a template for later use.
You may still be wondering whether you should run Performance Monitor locally or remotely. And exactly what will the performance hit be when running Performance Monitor locally? This all depends on your specific environment. The performance hit on the server is almost negligible if you set intervals to at least five minutes.
You may want to run Performance Monitor locally if you know there is a performance issue on the server, since Performance Monitor may not be able to capture data from a remote machine when it is running out of resources on the server. Running it remotely from a central machine is really best suited to situations when you want to monitor or baseline multiple servers.

Performance Test Plan

A performance test plan contains:
-Project requirements:
-functional requirements
-functionality of application
e.g. Login functionality
The user should be able to log in to the application with a valid username and password. If the user enters an invalid username/password, the application has to give an error message.
-non-functional requirements
-performance requirements
-Performance of application
e.g. how much time the user takes to log in to the application
50 concurrent users login, time < 100 sec
100 concurrent users login, time < 200 sec
Types of test:
-load test:
-testing an application with the requested no. of users. The main objective is to check whether the application can sustain that no. of users within the time frame.
50 concurrent users login, time < 100 sec
100 concurrent users login, time < 200 sec
no. of users (vs) response time
    -stress test:
-testing an application with the requested no. of users over a time period. The main objective is to check the stability/reliability of the application
-Capacity test:
-to check max. user load. How many users the application can sustain.
ramp up scenario
Components in loadrunner:
-virtual user generator
-Create ONE virtual user (vUser: simulation of a real user)
-record script
-enhancements (see the scripted sketch after this outline):
-parametrization:
-test with more data sets
-check points
-to verify expected result
-correlation (regular expressions)
-to handle dynamic objects
-controller
-how many such users you need e.g. 50
-design load test scenario
-manual scenario
-no. of users (vs) response time
-no. of users is defined
-goal oriented scenario
-define a goal
- 20 hits per sec
- 10 transactions per sec
-run test scenario
-monitor test scenario
-generate dynamic graphs
-analysis
-prepare load test reports
-send it to project stakeholders
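
The enhancements listed in the outline above map to a handful of VuGen functions. A minimal sketch (the transaction name, parameter names, and URL are placeholders):

Action()
{
	// Checkpoint: verify that the expected text appears in the next response.
	web_reg_find("Text=Welcome", "SaveCount=Welcome_Count", LAST);

	lr_start_transaction("TS_Login");

	// Parameterization: {UserName} and {Password} are taken from a data file.
	web_submit_data("login",
		"Action=http://server/login",
		"Method=POST",
		ITEMDATA,
		"Name=userId", "Value={UserName}", ENDITEM,
		"Name=password", "Value={Password}", ENDITEM,
		LAST);

	if (atoi(lr_eval_string("{Welcome_Count}")) > 0)
		lr_end_transaction("TS_Login", LR_PASS);
	else
		lr_end_transaction("TS_Login", LR_FAIL);

	return 0;
}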

Client certificates in TruClient

When certificates are missing in TruClient's Firefox, it is common to receive the following message during replay of the scripts, after which the replay fails:
Certificate Error "Untrusted Connection…".
In such cases, the below procedure should resolve the problem:
1. Verify that the security certificate is present in the Firefox settings. Open the stand-alone Firefox installed on your system or the one in %vugen_path%\bin\firefox\firefox.exe.
2. Go to Tools > Options > Advanced > Encryption. The Certificate Manager window should appear.
3. Use the tabs to navigate through the certificate store that you would like to view. If certificates are missing, they need to be imported.
4. Close Firefox.
5. Go to Firefox's profile directory. On Windows 7 machines with stand-alone Firefox installed, the profiles are usually located in "C:\Users\Administrator\Appdata\Roaming\Mozilla\Firefox\Profiles\.default".
6. Copy key3.db and cert8.db to the %vugen_path%\dat\LrwebToMasterProfile folder.
7. Create a new script.
This should overcome the Untrusted Connection message and allow the script to continue executing.

Ajax TruClient Protocol in LoadRunner

I tried a workaround to execute a script recorded with the LoadRunner Ajax Click and Script protocol in the Controller without an Ajax Click and Script license.

Record a script with the Ajax Click and Script protocol, then open the .usr file in Notepad. You will see the information below; there is no need to buy a new protocol license.

[General]
Type=WebAjax
RecordedProtocol=WebAjax
DefaultCfg=default.cfg
AppName=
BuildTarget=
ParamRightBrace=}
ParamLeftBrace={
NewFunctionHeader=1
LastActiveAction=Action
CorrInfoReportDir=
LastResultDir=
DevelopTool=Vugen

Change the entries in the [General] section above to the text below. Then you can run your script successfully in the Controller.

[General]
Type=Multi
AdditionalTypes=QTWeb
ActiveTypes=QTWeb
GenerateTypes=QTWeb
RecordedProtocols=
DefaultCfg=default.cfg
AppName=
BuildTarget=
ParamRightBrace=}
ParamLeftBrace={
NewFunctionHeader=1
LastActiveAction=vuser_init

To disable the popups from appearing during recording:
  1. In the url address field in Firefox, enter about:config.
  2. Click the I’ll be careful, I promise! button.
  3. In the Filter field, enter disable_open_during_load.
  4. Right click disable_open_during_load and select Toggle. This changes the value to false.
  5. Try to record the initial navigation step again.

Error: "Error -27279: Internal Error - Report initialization failed, error code=-2147467259 (0x80004005)" when replaying a script in VuGen on an ALM Performance Center Host

When replaying a Web (HTTP/HTML) script in VuGen 11.51 on an ALM Performance Center (PC) Host, the following error message is displayed at the beginning of the replay log:
Error -27279: Internal Error - Report initialization failed, error code=-2147467259 (0x80004005)
The script replay then continues normally and finishes successfully. However, when trying to view the replay results from the Replay -> Test Results… menu, a blank window opens and the results are not displayed.
During installation of the ALM PC Host the "logger.dll" DLL file was not registered.
Follow these steps to register the DLL file:
1. Navigate to the command prompt launch file, e.g. C:\Windows\system32\cmd.exe.
2. Start the command prompt as Administrator, i.e. right-click on cmd.exe and select "Run as Administrator".
3. Navigate to the bin folder in the ALM PC Host installation folder, e.g. "\bin".
4. Execute the following command:
regsvr32 logger.dll
5. Replay the script again.

Error -26547: Authentication required, please use web_set_user in LoadRunner

Action.c(12): Error -26547: Authentication required, please use web_set_user, e.g. web_set_user("domain\\user", "password", "host:port"); [MsgId: MERR-26547]
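A typical attempt (the domain, user, password, and host:port values are placeholders) places web_set_user before the first request:

web_set_user("MYDOMAIN\\testuser", "secret", "appserver:80");
web_url("Home", "URL=http://appserver/", "Mode=HTML", LAST);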
After adding the web_set_user() as suggested in the error message a further error is received:
Error -26630: HTTP Status-Code=401 (Access Denied) for "http://" [MsgId: MERR-26630]
LoadRunner does not currently support Microsoft Distributed Password Authentication (DPA).
To determine whether DPA is being used, look in the generation log at the header of the server response to the initial GET. The following is an example server response:
HTTP/1.1 401 Access Denied\r\n
Server: Microsoft-IIS/5.0\r\n
Date: Fri, 15 Jan 2010 11:07:04 GMT\r\n
WWW-Authenticate: DPA\r\n
Content-Type: text/html\r\n
Cache-Control: proxy-revalidate\r\n
Content-length: 616\r\n
Proxy-Connection: Keep-Alive\r\n
Connection: Keep-Alive\r\n
Proxy-support: Session-based-authentication\r\n
Note the WWW-Authenticate: DPA header. There is currently no workaround available.