SCOM (System Center Operations Manager) Monitoring tool

System Center Operations Manager 2012 – the complete application monitoring solution

For many years Operations Manager has delivered infrastructure monitoring, providing a strong foundation on which we can build to deliver application performance monitoring. It is important to understand that in order to provide application-level performance monitoring, we must first have a solid infrastructure monitoring solution in place. After all, if an application is having a performance issue, we must first establish whether the issue is due to an underlying platform problem or within the application itself.

A key value of Operations Manager 2012 is that it uses the same tools to deliver visibility across infrastructure AND applications.

To deliver application performance monitoring, we provide four key capabilities in Operations Manager 2012:
  • Infrastructure monitoring – network, hardware and operating system
  • Server-side application monitoring – monitoring the actual code that is executed and delivered by the application
  • Client-side application monitoring – end-user experiences related to page load times, server and network latency, and client-side scripting exceptions
  • Synthetic transactions – pre-recorded test paths through the application that highlight availability, response times, and unexpected responses

Configuring application performance monitoring

So it must be hard to configure all this, right? Lots of things to know: application domain knowledge, settings, configurations? Rest assured, this is not the case! We make it incredibly easy to enable application performance monitoring!

1. Define the application to monitor.



2. Enable server-side monitoring and set your performance thresholds



3. Enable client-side monitoring and set your performance thresholds



And that’s it, you’re now set to go. Of course, setting the threshold levels is the most important part of this, and that is the one thing we can’t do for you… you know your application and what its acceptable performance level is.
Configuring an application performance dashboard in 4 steps

It’s great that we make the configuration of application performance monitoring so easy, but making that information available in a concise, impactful manner is just as important.

We have worked hard to make the creation of dashboards incredibly easy, with a wizard-driven experience. You can create an application-level dashboard in just 4 steps:

1. Choose where to store the dashboard



2. Choose your layout structure. There are many different layouts available.



3. Specify which information you want to be part of your dashboard.



4. Choose who has access to the dashboard. As you will see a little later in this article, publishing information through web and SharePoint portals is very easy.



And just like that, you’ve created and published an application performance monitoring dashboard!


Anyone who has worked in IT or owned an application knows the conversations and finger-pointing that can go on when users complain about poor performance. Is it the hardware, the platform, a code issue, or a network problem?

This is where the complete solution from Operations Manager 2012 really shines. It’s great that an application and its associated resources are highly available, but availability does not equal performance. Indeed, an application can be highly available (the ‘5 nines’) yet still perform below the required performance thresholds.

The diagram below shows an application dashboard that I created, using the 4 steps above, for a sample application. You can see that the application is available and ‘green’ across the board, yet the end users are having performance issues. This is highlighted by the client-side alerts about performance.






Deep Insight into application performance

Once you know that there is an issue, Operations Manager 2012 provides the ability to drill into the alert down to the code level to see exactly what is going on and where the issue is.




Reporting and trending analysis

An important aspect of application performance monitoring is to be able to see how your applications are performing over time, and to be able to quickly gain visibility into common issues and problematic components of the application.

The report shown below quickly highlights the areas of the application we need to focus on, and also shows how those components relate to other parts of the application and may be causing flow-on effects.




Easily make information available
With Operations Manager 2012, we have made it very easy to delegate and publish information across multiple content access solutions. Operations staff have access to the Operations Manager console, and we can now easily publish delegated information to the Silverlight-based Operations web console and also to SharePoint web parts.

lr_paramarr_random function in LoadRunner

In performance testing, it is really important to simulate a realistic user path through an application, for example randomly selecting an image link from a gallery or a share from a share list. In such situations, you can use the LoadRunner lr_paramarr_random function to select a random value from a captured parameter array. Alternatively, you can write your own code to do the same.

Before you use the above function, you will need to use the web_reg_save_param function to capture all the ordinal values. This is achieved by passing "ORD=ALL" to the function.

The following code demonstrates the use of the lr_paramarr_random function. It saves the link IDs using the web_reg_save_param function and then uses lr_paramarr_random to pick one of the captured values at random.


Example:

This example shows how to get a random value from a parameter array.

char * FlightVal;

web_reg_save_param("outFlightVal",
    "LB=outboundFlight value=", "RB=>",
    "ORD=ALL",
    "SaveLen=18",
    LAST );

web_submit_form("reservations.pl",
    "Snapshot=t4.inf",
    ITEMDATA,
    "Name=depart", "Value=London", ENDITEM,
    "Name=departDate", "Value=11/20/2003", ENDITEM,
    "Name=arrive", "Value=New York", ENDITEM,
    "Name=returnDate", "Value=11/21/2003", ENDITEM,
    "Name=numPassengers", "Value=1", ENDITEM,
    "Name=roundtrip", "Value=", ENDITEM,
    "Name=seatPref", "Value=None", ENDITEM,
    "Name=seatType", "Value=Coach", ENDITEM,
    "Name=findFlights.x", "Value=83", ENDITEM,
    "Name=findFlights.y", "Value=16", ENDITEM,
    LAST );

/*
The result of the web_reg_save_param having been called before the web_submit_form is:

Notify: Saving Parameter "outFlightVal_1 = 230;378;11/20/2003"
Notify: Saving Parameter "outFlightVal_2 = 231;337;11/20/2003"
Notify: Saving Parameter "outFlightVal_3 = 232;357;11/20/2003"
Notify: Saving Parameter "outFlightVal_4 = 233;309;11/20/2003"
Notify: Saving Parameter "outFlightVal_count = 4"
*/

FlightVal = lr_paramarr_random("outFlightVal");
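Since lr_paramarr_random returns a pointer to the selected value, you typically save it back into a parameter so later steps can reference it. A minimal sketch of that pattern (the parameter name randFlight and the follow-up form step are assumptions for illustration, not part of the original example):

// Save the randomly selected value into a parameter...
lr_save_string( lr_paramarr_random("outFlightVal"), "randFlight" );

// ...and reference it in a later parameterized request:
web_submit_form("reservations.pl_2",
    ITEMDATA,
    "Name=outboundFlight", "Value={randFlight}", ENDITEM,
    LAST );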

Base64 Encode/Decode for LoadRunner

Code:

#include "base64.h"
vuser_init()
{
int res;
// ENCODE
lr_save_string("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789","plain");
b64_encode_string( lr_eval_string("{plain}"), "b64str" );
lr_output_message("Encoded: %s", lr_eval_string("{b64str}") );

// DECODE
b64_decode_string( lr_eval_string("{b64str}"), "plain2" );
lr_output_message("Decoded: %s", lr_eval_string("{plain2}") );

// Verify decoded matches original plain text
res = strcmp( lr_eval_string("{plain}"), lr_eval_string("{plain2}") );
if (res==0) lr_output_message("Decoded matches original plain text");

return 0;
}

 base64.h include file

/*
Base 64 Encode and Decode functions for LoadRunner
==================================================
This include file provides functions to Encode and Decode
LoadRunner variables. It's based on source codes found on the
internet and has been modified to work in LoadRunner.

Created by Kim Sandell / Celarius - www.celarius.com
*/
// Encoding lookup table
char base64encode_lut[] = {
'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q',
'R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f','g','h',
'i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y',
'z','0','1','2','3','4','5','6','7','8','9','+','/','='};

// Decode lookup table
char base64decode_lut[] = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0,62, 0, 0, 0,63,52,53,54,55,56,57,58,59,60,61, 0, 0,
0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14,
15,16,17,18,19,20,21,22,23,24,25, 0, 0, 0, 0, 0, 0,26,27,28,
29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,
49,50,51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, };

void base64encode(char *src, char *dest, int len)
// Encodes a buffer to base64
{
int i=0, slen=strlen(src);
for(i=0;i<slen && i<len;i+=3,src+=3)
{ // Encode the next 3 source bytes as 4 base64 characters
*(dest++)=base64encode_lut[(*src&0xFC)>>0x2];
*(dest++)=base64encode_lut[(*src&0x3)<<0x4|(*(src+1)&0xF0)>>0x4];
*(dest++)=((i+1)<slen)?base64encode_lut[(*(src+1)&0xF)<<0x2|(*(src+2)&0xC0)>>0x6]:'=';
*(dest++)=((i+2)<slen)?base64encode_lut[*(src+2)&0x3F]:'=';
}
*dest='\0'; // Append terminator
}

void base64decode(char *src, char *dest, int len)
// Decodes a base64 buffer to plaintext
{
int i=0, slen=strlen(src);
for(i=0;i<slen && i<len;i+=4,src+=4)
{ // Store next 4 chars in vars for faster access
char c1=base64decode_lut[*src], c2=base64decode_lut[*(src+1)], c3=base64decode_lut[*(src+2)], c4=base64decode_lut[*(src+3)];
// Decode to 3 chars
*(dest++)=(c1&0x3F)<<0x2|(c2&0x30)>>0x4;
*(dest++)=(c3!=64)?((c2&0xF)<<0x4|(c3&0x3C)>>0x2):'\0';
*(dest++)=(c4!=64)?((c3&0x3)<<0x6|(c4&0x3F)):'\0';
}
*dest='\0'; // Append terminator
}

int b64_encode_string( char *source, char *lrvar )
// ----------------------------------------------------------------------------
// Encodes a string to base64 format
//
// Parameters:
// source Pointer to source string to encode
// lrvar LR variable where base64 encoded string is stored
//
// Example:
//
// b64_encode_string( "Encode Me!", "b64" )
// ----------------------------------------------------------------------------
{
int dest_size;
int res;
char *dest;
// Allocate dest buffer
dest_size = 1 + ((strlen(source)+2)/3*4);
dest = (char *)malloc(dest_size);
memset(dest,0,dest_size);
// Encode & Save
base64encode(source, dest, dest_size);
lr_save_string( dest, lrvar );
// Free dest buffer
res = strlen(dest);
free(dest);
// Return length of dest string
return res;
}

int b64_decode_string( char *source, char *lrvar )
// ----------------------------------------------------------------------------
// Decodes a base64 string to plaintext
//
// Parameters:
// source Pointer to source base64 encoded string
// lrvar LR variable where decoded string is stored
//
// Example:
//
// b64_decode_string( lr_eval_string("{b64}"), "Plain" )
// ----------------------------------------------------------------------------
{
int dest_size;
int res;
char *dest;
// Allocate dest buffer
dest_size = strlen(source);
dest = (char *)malloc(dest_size);
memset(dest,0,dest_size);
// Decode & Save
base64decode(source, dest, dest_size);
lr_save_string( dest, lrvar );
// Free dest buffer
res = strlen(dest);
free(dest);
// Return length of dest string
return res;
}

Oracle Goldengate Technology

After Oracle Corp. acquired GoldenGate Software, there has been a lot of buzz about Oracle GoldenGate, and it was one of the hot topics at Oracle OpenWorld 2010. Oracle GoldenGate can be used as a replication tool, an ETL tool, and even as a DR solution.

Oracle GoldenGate (Golden Gate) is probably the best replication software, and it is very easy to configure and deploy in large-scale environments. Here are some of the things you need to be aware of:
  • All Golden Gate configuration files are plain ASCII text files. They are very easy to change, but that also makes them prone to human error in an environment with many DBAs working on it (see the illustrative parameter file after this list).
  • In order to use parallel apply threads, Golden Gate breaks a database transaction down into multiple transactions based on the hashing key defined for the range split of the data. Transactional consistency is therefore not guaranteed in real time (although there won't be any data loss), so make sure your application can tolerate this.
  • If no primary key or unique index exists on a table, Golden Gate will use all the columns as the supplemental logging key pair for both extracts and replicats. But if you define key columns in the Golden Gate extract parameter file and supplemental logging is not enabled on that combination of key columns, Golden Gate will treat the missing key column data as "NULL", which is a huge deal: it introduces logical data corruption on the target.
  • Golden Gate started supporting bulk data loads with the 11.1 release, but any NOLOGGING data changes will be silently ignored without any warning.
  • Golden Gate doesn't support compression on the source database.
  • Golden Gate does support DDL replication, but selective DDL replication is not easy; it replicates every DDL statement that happens on the source database, which is not desirable for some customers.
  • Tables being replicated to on the target can still be written to by any other application or DBAs.
  • Golden Gate supports ignoring data conflicts for updates after the first instantiation of the target database until it catches up. But it is very easy to forget to turn that parameter off, and Golden Gate will not alert you to any updates being lost.
  • Golden Gate still works by reverse engineering the Oracle redo log. This may not be totally true with Golden Gate 11, but I expect Golden Gate to interpret Oracle redo more directly in later versions of 11 or 12.
  • Golden Gate dynamically decides to change the key columns that form the supplemental logging based on the state of the primary key (i.e. VALIDATED or NON-VALIDATED), which can introduce data corruption on the target databases because the expected key column data is missing in the trail files and is set to NULL. A patch is now available for this: you can set the "_USEALLKEYCOLUMNS" and "ALLOWNONVALIDATEDKEYS" parameters in the GLOBALS file to work around the problem.
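To make the first bullet concrete, here is a minimal sketch of an extract parameter file. The process name, credentials, trail path, and table are all hypothetical, so treat this purely as an illustration of the plain-text format:

EXTRACT ext1
-- hypothetical login; real deployments should use a credential store
USERID ggadmin, PASSWORD ggadmin
-- local trail that the extract writes captured changes to
EXTTRAIL ./dirdat/aa
-- table to capture; note the mandatory trailing semicolon
TABLE scott.emp;

A single mistyped line in a file like this changes behavior silently, which is exactly why the editable ASCII format cuts both ways.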

mongo DB

MongoDB is a document database that provides high performance, high availability, and easy scalability.

Document Database

  • Documents (objects) map nicely to programming language data types.
  • Embedded documents and arrays reduce need for joins.
  • Dynamic schema makes polymorphism easier.
High Performance
  • Embedding makes reads and writes fast.
  • Indexes can include keys from embedded documents and arrays.
  • Optional streaming writes (no acknowledgments).
High Availability
  • Replicated servers with automatic master failover.
Easy Scalability
  • Automatic sharding distributes collection data across machines.
  • Eventually-consistent reads can be distributed over replicated servers.

MongoDB Data Model

A MongoDB deployment hosts a number of databases. A database holds a set of collections. A collection holds a set of documents. A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema means that documents in the same collection do not need to have the same set of fields or structure, and common fields in a collection’s documents may hold different types of data.
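For instance, these two hypothetical documents (field names invented for illustration) could sit in the same collection even though their shapes differ:

{ "name": "Alice", "age": 30, "tags": ["admin", "dev"] }
{ "name": "Bob", "email": "bob@example.com", "address": { "city": "Austin" } }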

MongoDB Queries

MongoDB provides a set of query operators that define how the find() method selects documents from a collection, based on a query specification document that combines exact equality matches with conditionals built from those operators.
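For example, in the mongo shell (assuming a hypothetical users collection), an equality match and a conditional operator can be combined like this:

// Find active users older than 21
db.users.find({ status: "active", age: { $gt: 21 } })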

Deployment Architectures

Although MongoDB supports “standalone” or single-instance operation, production MongoDB deployments are distributed by default. Replica sets provide high-performance replication with automated failover, while sharded clusters make it possible to partition large data sets over many machines transparently to the users. MongoDB users combine replica sets and sharded clusters to provide high levels of redundancy for large data sets, transparently to applications.

MongoDB Design Philosophy


MongoDB wasn’t designed in a lab. We built MongoDB from our own experiences building large scale, high availability, robust systems. We didn’t start from scratch, we really tried to figure out what was broken, and tackle that. So the way I think about MongoDB is that if you take MySQL, and change the data model from relational to document based, you get a lot of great features: embedded docs for speed, manageability, agile development with schema-less databases, easier horizontal scalability because joins aren’t as important. There are lots of things that work great in relational databases: indexes, dynamic queries and updates to name a few, and we haven’t changed much there. For example, the way you design your indexes in MongoDB should be exactly the way you do it in MySQL or Oracle, you just have the option of indexing an embedded field.

—Eliot Horowitz, MongoDB CTO and Co-founder
  • New database technologies are needed to facilitate horizontal scaling of the data layer, easier development, and the ability to store order(s) of magnitude more data than was used in the past.
  • A non-relational approach is the best path to database solutions which scale horizontally to many machines.
  • It is unacceptable if these new technologies make writing applications harder. Writing code should be faster, easier, and more agile.
  • The document data model (JSON/BSON) is easy to code to, easy to manage (dynamic schema), and yields excellent performance by grouping relevant data together internally.
  • It is important to keep deep functionality to keep programming fast and simple. While some things must be left out, keep as much as possible – for example secondary indexes, unique key constraints, atomic operations, multi-document updates.
  • Database technology should run anywhere, being available both for running on your own servers or VMs, and also as a cloud pay-for-what-you-use service.

Key MongoDB Features

MongoDB focuses on flexibility, power, speed, and ease of use:

Flexibility

MongoDB stores data in JSON documents (which we serialize to BSON). JSON provides a rich data model that seamlessly maps to native programming language types, and the dynamic schema makes it easier to evolve your data model than with a system with enforced schemas such as an RDBMS.

Power

MongoDB provides a lot of the features of a traditional RDBMS such as secondary indexes, dynamic queries, sorting, rich updates, upserts (update if document exists, insert if it doesn’t), and easy aggregation. This gives you the breadth of functionality that you are used to from an RDBMS, with the flexibility and scaling capability that the non-relational model allows.
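As a quick illustration of an upsert in the mongo shell (the collection and field names are hypothetical), the call below increments a counter and inserts the document the first time it runs:

// Increment the view counter for "home"; create the document if it doesn't exist yet
db.counters.update(
    { _id: "home" },
    { $inc: { views: 1 } },
    { upsert: true }
)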
Speed/Scaling

By keeping related data together in documents, queries can be much faster than in a relational database where related data is separated into multiple tables and then needs to be joined later. MongoDB also makes it easy to scale out your database. Autosharding allows you to scale your cluster linearly by adding more machines. It is possible to increase capacity without any downtime, which is very important on the web when load can increase suddenly and bringing down the website for extended maintenance can cost your business large amounts of revenue.

Ease of use

MongoDB works hard to be very easy to install, configure, maintain, and use. To this end, MongoDB provides few configuration options, and instead tries to automatically do the “right thing” whenever possible. This means that MongoDB works right out of the box, and you can dive right into developing your application, instead of spending a lot of time fine-tuning obscure database configurations.

Operations


MongoDB is a server process that runs on Linux, Windows and OS X. It can be run as either a 32-bit or 64-bit application. We recommend running in 64-bit mode, since MongoDB is limited to a total data size of about 2GB for all databases in 32-bit mode.

The MongoDB process listens on port 27017 by default (note that this can be set at start time - please see mongod options for more information).

Clients connect to the MongoDB process, optionally authenticate themselves if security is turned on, and perform a sequence of actions, such as inserts, queries and updates.

MongoDB stores its data in files (default location is /data/db/), and uses memory mapped files for data management for efficiency.
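Putting those defaults together, a minimal illustrative way to start the server with an explicit data directory and port is:

mongod --dbpath /data/db --port 27017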

MongoDB can also be configured for data replication.

Additionally, the MongoDB Management Service (MMS) is an application for managing MongoDB clusters through a simple user interface. MMS provides backup and monitoring. MMS is available to all users in the cloud and on-premises as part of MongoDB Standard and Enterprise Subscriptions.

Agile and Scalable

MongoDB (from "humongous") is an open-source document database, and the leading NoSQL database. Written in C++, MongoDB features:
  • Document-Oriented Storage – JSON-style documents with dynamic schemas offer simplicity and power.
  • Full Index Support – Index on any attribute, just like you're used to.
  • Replication & High Availability – Mirror across LANs and WANs for scale and peace of mind.
  • Auto-Sharding – Scale horizontally without compromising functionality.
  • Querying – Rich, document-based queries.
  • Fast In-Place Updates – Atomic modifiers for contention-free performance.
  • Map/Reduce – Flexible aggregation and data processing.
  • GridFS – Store files of any size without complicating your stack.
  • MongoDB Management Service – Manage MongoDB on the cloud infrastructure of your choice.
  • MongoDB Enterprise – The best way to run MongoDB in production. Secured. Supported. Certified.
  • Production Support – Our experts at your fingertips. Get access to our global support organization 24x365.

Wireshark

Source:https://www.wireshark.org/download.html

Wireshark is the world's foremost network protocol analyzer. It lets you see what's happening on your network at a microscopic level. It is the de facto (and often de jure) standard across many industries and educational institutions.

Wireshark development thrives thanks to the contributions of networking experts across the globe. It is the continuation of a project that started in 1998.

Features:
  1. Deep inspection of hundreds of protocols, with more being added all the time
  2. Live capture and offline analysis
  3. Standard three-pane packet browser
  4. Multi-platform: Runs on Windows, Linux, OS X, Solaris, FreeBSD, NetBSD, and many others
  5. Captured network data can be browsed via a GUI, or via the TTY-mode TShark utility
  6. The most powerful display filters in the industry
  7. Rich VoIP analysis
  8. Read/write many different capture file formats: tcpdump (libpcap), Pcap NG, Catapult DCT2000, Cisco Secure IDS iplog, Microsoft Network Monitor, Network General Sniffer® (compressed and uncompressed), Sniffer® Pro, and NetXray®, Network Instruments Observer, NetScreen snoop, Novell LANalyzer, RADCOM WAN/LAN Analyzer, Shomiti/Finisar Surveyor, Tektronix K12xx, Visual Networks Visual UpTime, WildPackets EtherPeek/TokenPeek/AiroPeek, and many others
  9. Capture files compressed with gzip can be decompressed on the fly
  10. Live data can be read from Ethernet, IEEE 802.11, PPP/HDLC, ATM, Bluetooth, USB, Token Ring, Frame Relay, FDDI, and others (depending on your platform)
  11. Decryption support for many protocols, including IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2
  12. Coloring rules can be applied to the packet list for quick, intuitive analysis
  13. Output can be exported to XML, PostScript®, CSV, or plain text

Capture, Filter and Inspect Packets using Wireshark Tool

Here is the demo..



Wireshark, a network analysis tool formerly known as Ethereal, captures packets in real time and displays them in a human-readable format. Wireshark includes filters, color-coding and other features that let you dig deep into network traffic and inspect individual packets.

This tutorial will get you up to speed with the basics of capturing packets, filtering them, and inspecting them. You can use Wireshark to inspect a suspicious program’s network traffic, analyze the traffic flow on your network, or troubleshoot network problems.

Getting Wireshark


You can download Wireshark for Windows or Mac OS X from its official website. If you’re using Linux or another UNIX-like system, you’ll probably find Wireshark in its package repositories. For example, if you’re using Ubuntu, you’ll find Wireshark in the Ubuntu Software Center.

Just a quick warning: Many organizations don’t allow Wireshark and similar tools on their networks. Don’t use this tool at work unless you have permission.


Capturing Packets:

After downloading and installing Wireshark, you can launch it and click the name of an interface under Interface List to start capturing packets on that interface. For example, if you want to capture traffic on the wireless network, click your wireless interface. You can configure advanced features by clicking Capture Options, but this isn’t necessary for now.



As soon as you click the interface’s name, you’ll see the packets start to appear in real time. Wireshark captures each packet sent to or from your system. If you’re capturing on a wireless interface and have promiscuous mode enabled in your capture options, you’ll also see the other packets on the network.



Click the stop capture button near the top left corner of the window when you want to stop capturing traffic.



Color Coding
You’ll probably see packets highlighted in green, blue, and black. Wireshark uses colors to help you identify the types of traffic at a glance. By default, green is TCP traffic, dark blue is DNS traffic, light blue is UDP traffic, and black identifies TCP packets with problems — for example, they could have been delivered out-of-order.



Sample Captures
If there’s nothing interesting on your own network to inspect, Wireshark’s wiki has you covered. The wiki contains a page of sample capture files that you can load and inspect.

Opening a capture file is easy; just click Open on the main screen and browse for a file. You can also save your own captures in Wireshark and open them later.



Filtering Packets
If you’re trying to inspect something specific, such as the traffic a program sends when phoning home, it helps to close down all other applications using the network so you can narrow down the traffic. Still, you’ll likely have a large amount of packets to sift through. That’s where Wireshark’s filters come in.

The most basic way to apply a filter is by typing it into the filter box at the top of the window and clicking Apply (or pressing Enter). For example, type “dns” and you’ll see only DNS packets. When you start typing, Wireshark will help you autocomplete your filter.
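A few other common display filters follow the same pattern (the address and port values here are just placeholders):
  • ip.addr == 192.168.1.1 – packets to or from a specific host
  • tcp.port == 443 – TCP traffic on port 443
  • http.request – only HTTP requests
  • dns && ip.src == 10.0.0.5 – DNS packets from one source address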



You can also click the Analyze menu and select Display Filters to create a new filter.



Another interesting thing you can do is right-click a packet and select Follow TCP Stream.



You’ll see the full conversation between the client and the server.



Close the window and you’ll find a filter has been applied automatically — Wireshark is showing you the packets that make up the conversation.



Inspecting Packets

Click a packet to select it and you can dig down to view its details.



You can also create filters from here — just right-click one of the details and use the Apply as Filter submenu to create a filter based on it.



Wireshark is an extremely powerful tool, and this tutorial just scratches the surface of what you can do with it. Professionals use it to debug network protocol implementations, examine security problems, and inspect network protocol internals.

Parameterization in LoadRunner

Replacing hard coded values in the script with different values is called Parameterization.


Parameterization is used for:
  1. Reducing script size
  2. Avoiding cache effects

Type of Parameters


1. Date/Time – Whenever we have to replace a date value with a parameter, the Date/Time parameter is used. Any post with a past date is not valid; to keep it updated, the Date/Time parameter provides the flexibility to get the current or a future date. If a past date is needed, it handles that too.

2. Group Name – We can generate a parameter based on the group selected for the script on the Controller during execution. This parameter only works while running the script on the Controller.

3. Iteration Number – This replaces the parameter with the current iteration number. It is generally used to build logic, for example when we want some code in the script to be executed on alternate iterations: we use the iteration number to check whether it is even or odd, and execute the function for one of the two conditions.

4. Load Generator Name – We can also generate a parameter based on the name of the load generator on which the script is running. This parameter only works while running the script on the Controller.

5. Vuser ID – When we run the script on the Controller, it assigns a unique ID to each virtual user emulated during the execution. This parameter type is used:
To print the Vuser ID to an external file for script-debugging purposes.
To segregate transaction volume based on Vuser ID.

6. File – Sometimes we want to pass specific values into the script. In such cases, we use a file and enter the values we want to use during execution. LoadRunner provides options to run the script through the provided list sequentially or to pick randomly on each iteration (see the snippet after this list).
In a few cases we want to use a set of related values in the script. In such cases, we can use the same file for the other parameter values as well.

7. Random Number – As needed, VuGen also generates a random value from the provided range.

8. Unique Value – In some situations, the script is not allowed to pass any duplicate values. In such cases, a unique parameter is used to avoid failures due to duplicate values.

9. User Defined Function – Such a parameter calls a function whose return value replaces the parameter name.

10. XML – XML parameter types are used for multiple-valued data contained in an XML structure. XML parameters are widely used with Web Service scripts and with SOA services.
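To show how a parameter is actually consumed in a script, here is a minimal sketch; the parameter name userName and the URL are assumptions for illustration:

// The recorded hard-coded value is replaced with the {userName} parameter,
// which VuGen substitutes on each iteration according to the parameter settings.
web_url("login",
    "URL=http://server/login?user={userName}",
    LAST );

// The current value of a parameter can also be read in C code:
lr_output_message("Current user: %s", lr_eval_string("{userName}"));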

What is a HAR File and what is the use of HAR?

HAR stands for HTTP Archive. 

This is a common format for recording HTTP tracing information. This file contains a variety of information, but for our purposes, it has a record of each object being loaded by a browser. Each of these objects’ timings is recorded.

The HAR file format is still an evolving standard, and the information contained within is both flexible and extensible. You should expect the HAR file to include a breakdown of timings including:
  • how long it takes to fetch the DNS information
  • how long each object takes to be requested
  • how long it takes to connect to the server
  • how long each object takes to transfer from the server to the browser
  • whether the object is blocked or not
The data is stored as a JSON document, and extracting meaning from the low-level data is not always easy. But with practice, a HAR file can quickly help you identify the key performance problems with a web page, which in turn will help you efficiently target your development towards the areas that will deliver the greatest return on your efforts.
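As a rough sketch of where those timings live (the values are invented, real HAR files contain many more fields, and the inline comments are only annotations since strict JSON does not allow comments):

{
  "entries": [
    {
      "request": { "method": "GET", "url": "http://example.com/logo.png" },
      "timings": {
        "blocked": 12,   // waiting for a connection to become available
        "dns":     20,   // DNS lookup
        "connect": 35,   // establishing the TCP connection
        "send":     1,   // sending the request
        "wait":    80,   // waiting for the first byte of the response
        "receive": 25    // transferring the object to the browser
      }
    }
  ]
}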

HTTP WATCH

Why do you need an HTTP Viewer or Sniffer?
All web applications make extensive use of the HTTP protocol (or HTTPS for secure sites). Even simple web pages require the use of multiple HTTP requests to download HTML, graphics and javascript. The ability to view the HTTP interaction between the browser and web site is crucial to these areas of web development:
  • Troubleshooting
  • Performance tuning
  • Verifying that a site is secure and does not expose sensitive information
How can HttpWatch Help?
HttpWatch integrates with Internet Explorer and Firefox browsers to show you exactly what HTTP traffic is triggered when you access a web page. If you access a site that uses secure HTTPS connections, HttpWatch automatically displays the decrypted form of the network traffic.

Conventional network monitoring tools just display low-level data captured from the network. In contrast, HttpWatch has been optimized for displaying HTTP traffic and allows you to quickly see the values of headers, cookies, and query strings.

HttpWatch also supports non-interactive examination of HTTP data. When log files are saved, a complete record of the HTTP traffic is saved in a compact file. You can even examine log files that your customers and suppliers have recorded using the free Basic Edition.
Why HttpWatch?

Seven reasons to use HttpWatch rather than other HTTP monitoring tools:
  1. Easy to Use - start logging after just a couple of mouse clicks in Internet Explorer or Firefox. No other proxies, debuggers or network sniffers have to be configured
  2. Productive - quickly see cookies, headers, POST data and query strings without having to manually decode raw HTTP packets
  3. Robust - reliably log thousands of HTTP transactions for hours or days while tracking down intermittent problems
  4. Accurate - HttpWatch has minimal impact on the normal interaction of the browser with a web site. No extra network hops are added, allowing you to measure real world HTTP performance
  5. Flexible - HttpWatch only requires client-side installation and will work with any server side technology that renders HTML pages in Internet Explorer or Firefox. No special server-side permissions or configurations are required - ideal for use against production servers on the Internet or Intranet
  6. Comprehensive - works with HTTP compression, redirection, SSL encryption & NTLM authentication. A complete automation interface provides access to recorded data and allows HttpWatch to be controlled from most popular programming languages.
  7. Professional Support - updates and bug fixes are provided free of charge on our website and technical support is available by email, phone or fax.

Uses of HttpWatch:
  1. Testing a web application to ensure that it is correctly issuing or setting headers that control page expiration
  2. Finding out how other sites work and how they implement certain features
  3. Checking the information that the browser is supplying when you visit a site
  4. Verifying that a secure web site is not issuing sensitive data in cookies or headers
  5. Tuning the performance of a web site by measuring download times, caching or the number of network round trips
  6. Learning about how HTTP works (useful for programming and web design classes)
  7. Allowing webmasters to fine tune the caching of images and other content
  8. Performing regression testing on production servers to verify performance and correct behavior

How to run Ajax Click n Script in Controller?

AJAX (Asynchronous JavaScript and XML) is a technique for creating interactive Web applications. With AJAX, Web pages exchange small packets of data with the server, instead of reloading an entire page. This reduces the amount of time that a user needs to wait when requesting data. It also increases the interactive capabilities and enhances the usability.
Using AJAX, developers can create fast Web pages using JavaScript and asynchronous server requests. The requests can originate from user actions, timer events, or other predefined triggers. AJAX components, also known as AJAX controls, are GUI-based controls that use the AJAX technique: they send a request to the server when a trigger occurs.

For example, a popular AJAX control is a Reorder List control that lets you drag components to a desired position in a list. VuGen’s support for AJAX implementation is based on Microsoft’s ASP.NET AJAX Control Toolkit formerly known as Atlas.

AJAX Supported Frameworks

The supported frameworks for AJAX functions are:
Atlas 1.0.10920.0/ASP.NET AJAX—All controls
 Scriptaculous 1.8—Autocomplete, Reorder List, and Slider

VuGen supports the following frameworks at the engine level. This means that VuGen will create standard Web Click and Script steps, but not AJAX-specific functions:
 Prototype 1.6
 Google Web Toolkit (GWT) 1.4

AJAX Example Script

VuGen uses the control handler layer to create the effect of an operation on a GUI control. During recording, when encountering one of the supported AJAX controls, VuGen generates a function with an ajax_xxx prefix. In the following example, a user selected item number 1 (index=1) in an
Accordion control. VuGen generated an ajax_accordion function.


web_browser("Accordion.aspx",

DESCRIPTION,
ACTION,
"Navigate=http://labm1app08/AJAX/Accordion/Accordion.aspx",
LAST);
lr_think_time(5);
ajax_accordion("Accordion",
DESCRIPTION,
"Framework=atlas",
"ID=ctl00_SampleContent_MyAccordion",
ACTION,
"UserAction=SelectIndex",
"Index=1",
LAST);
web_edit_field("free_text_2",
"Snapshot=t18.inf",
DESCRIPTION,
"Type=text",
"Name=free_text",
ACTION,
"SetValue=FILE_PATH",
LAST);
  

Note: When you record an AJAX session, VuGen generates standard Web (Click and Script) functions for objects that are not one of the supported AJAX controls. In the example above, the word FILE_PATH was typed into an edit box.

"The requested operation cannot be completed because the Terminal connection is currently busy processing a connect operation" Error solved

This issue arises when a user has disconnected from a remote server instead of logging off, taking up one of the Remote Desktop sessions. We then get the error "The terminal server has exceeded the maximum number of allowed connections". This can easily be corrected by logging into the server in console mode and manually logging off the user.

Whenever we try to connect we will keep getting the same error, because the user was disconnected from the remote machine instead of logging off. The user needs to log in to the system and log off the session that was left open. You can use the commands below to kill the user's session on the remote machine.




c:\>sc \\THESERVERNAME query TermService


SERVICE NAME: TermService

DISPLAY_NAME: Terminal Services
TYPE               : 20  WIN32_SHARE_PROCESS
STATE              : 4  RUNNING (NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE    : 0  (0x0)
SERVICE_EXIT_CODE  : 0  (0x0)
CHECKPOINT         : 0x0
WAIT_HINT          : 0x0

Terminal Services was running, and it can't be restarted on Server 2003, so we take a look at the running processes:

C:\>tasklist /s MYSERVERNAME /u MYUSERNAME /p MYPASSWORD
(Output truncated to highlight relevant processes) 

Image Name           PID      Session Name   Session#  Mem Usage

==================== ======== ============== ========= ============

System Idle Process  0                       0                 28 K

csrss.exe            4140     Console        7              2,684 K

winlogon.exe         4220     Console        7              5,840 K

logon.scr            4500     Console        7              1,580 K

Looking at the processes above, I recalled an issue that could sometimes arise with the logon.scr process on Virtual Machines. 

Thinking that logon.scr (Process ID 4500) might be the culprit, I decided to try killing the process:

C:\>taskkill /s MYSERVERNAME /u MYUSERNAME /p MYPASSWORD /PID 4500

SUCCESS: The process with PID 4500 has been terminated.

After seeing that the process was successfully killed, I tried logging in again and could do so successfully!
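For completeness: logging off the stale session directly is often quicker than hunting processes. A sketch of that route using the standard Remote Desktop tooling (the server name and session ID are placeholders):

REM Connect to the server's console session instead of a regular RDP session
mstsc /v:THESERVERNAME /console

REM Or list the sessions remotely and log off the disconnected one by its ID
query session /server:THESERVERNAME
logoff 2 /server:THESERVERNAME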