Wednesday

When to Automate the Application with QTP

Source : LinkedIn Discussions

I am posting this topic as a discussion

With QTP, you need a stable front end (GUI) on the app before you start automation testing. If the GUI is undergoing significant change, you are likely to run into maintenance issues with automated tests as the objects on the GUI will be changing.

That said, just like with any sort of testing, there is nothing to stop you planning your automation testing ahead of time.

In terms of working out what to automate, when I teach a QTP class I usually suggest to my students that, as a rule of thumb, if a test is going to need to be run more than 50 times, then it is a good candidate for automation, as the ROI will increase.

Regards,

Michael Weinstock
Melbourne, Australia

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Automation can start from the very beginning of the project. If we have a prototype of the web application, the automation engineer can start work on automation. Test automation involves various activities: feasibility analysis, tool selection, skill-set development, training on the tool, framework design, script development, script verification, script execution, and bug reporting.
- Automation is basically driven by business needs, and by business needs I basically mean costs and benefits. One needs to ask the following questions:
- Does the project demand a reduction in the lifecycle cost of the software product?
- Does the project include kinds of testing that cannot be run manually (memory leak, concurrency, load)?
- Does the project/product span multiple releases, where the tester will have to run round after round of regression tests? In that case the test engineer's work is likely to become mundane, and he or she could be used more effectively in other, more complex areas.

If the answer to all of these questions is yes, then there is a need for test automation in the project.

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Vijay, thank you for your answer. Could you please give an example? Below are a few more questions:

1) Take a scenario like "A web page/Win application has 3 text boxes, 2 buttons, and 1 label control." My question is: can we automate this scenario? If yes, then how can we do that?

2) As per Michael's comment, can we automate tests that will be run more than 50 times as well as tests run fewer than 50 times? If yes, can you please explain with an example?

3) What is the major difference between automation and manual testing?

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Following are my views on the questions:

1) It can definitely be automated, but I suggest you follow these steps (a minimal sketch follows the list):
- Do a feasibility analysis on the existing manual test cases so that you can decide which test cases you are going to automate.
- Get buy-in from the lead/manager/management before you start the automation.
- Decide on the tool that you want to use (open source or a purchased tool).
- Pick a scripting language that you/your team are comfortable with.
- Build a framework.
- Script the case and verify it.
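
To make this concrete, here is a minimal QTP sketch for the page described in question 1. The object names (MyApp, MyForm, FirstName, StatusLabel, and so on) are hypothetical placeholders; in a real script they would come from your object repository.

' Fill the three text boxes, click a button, and verify the label text.
Browser("MyApp").Page("MyForm").WebEdit("FirstName").Set "John"
Browser("MyApp").Page("MyForm").WebEdit("LastName").Set "Smith"
Browser("MyApp").Page("MyForm").WebEdit("Email").Set "john@example.com"
Browser("MyApp").Page("MyForm").WebButton("Submit").Click
' The label control is checked through its innertext property.
If Browser("MyApp").Page("MyForm").WebElement("StatusLabel").GetROProperty("innertext") = "Saved" Then
    Reporter.ReportEvent micPass, "Submit check", "Label shows the expected text"
Else
    Reporter.ReportEvent micFail, "Submit check", "Unexpected label text"
End If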

2) If your project is going to deliver a cost and time benefit, then it definitely should be automated. The best example: suppose you have started automation on a banking project which needs to be tested for a lot of scenarios (say 100) and the life span of the project is one and a half years. Then it is a good candidate for automation.

3) The major advantages I see of automation testing over manual testing are:
- Tests can be run unattended.
- It can cover a lot of scenarios that would be tedious to run manually.
- Some tests cannot be run manually and have to be automated (such as concurrency, load, stress, volume, and memory-leak tests).
- Many combinations of data can be tested.
- It can generate test results that need minimal analysis.
These are some of the differences; there are a lot more. :)

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Q: When exactly should automation start in a project?
My answer: Before starting to automate, you must know whether or not you really need to automate the application. For that, you have to meet the actors of the project (a tester, a business analyst, an IT developer, etc.) who know the application very well and who lead the implementation of the test plan, the test strategy, and so on. During the meeting you have to answer the following criteria:
1. Are all expected results reproducible or predictable? (Desired answer: yes)
2. Is the insurance application stable, with the expected behaviour? (A: stable)
3. Is human intervention necessary? (A: no)
4. How long does a test take to run? (A: not a long time)
5. Which tests are hard to execute manually? .....
6. What technologies were used to build the insurance application? (A: the technologies must be compatible with the QTP add-ins you have)
7. Are the tests predictable and reusable? (A: yes, yes)

So when you have all the answers, you can decide whether to automate.
Regarding a previous post on the number of executions, the formula to calculate the ROI for a test is:

ROI = [Nb of executions x (time to execute the test manually - time to execute it with automation) - time spent to design the test] / time spent to design the test

The aim is to have an ROI much greater than 1, which is possible when the number of executions is very large.
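
As a rough illustration of the formula (all of the numbers below are assumed, not from a real project): a 30-minute manual test that takes one day (480 minutes) to automate and replays in 2 minutes would, over 50 executions, come out ahead.

' Hypothetical ROI calculation following the formula above (times in minutes).
Function AutomationROI(executions, manualTime, autoTime, designTime)
    AutomationROI = (executions * (manualTime - autoTime) - designTime) / designTime
End Function

WScript.Echo AutomationROI(50, 30, 2, 480)   ' (50*28 - 480)/480 = approx. 1.92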


Kind Regards,
Amine

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Hi All,

With regards to my comment about "50 times", it is simply a rule of thumb that I teach my QTP students as a starting point when considering automation ROI. It usually does not make sense to automate everything. So where do you start? I make sure my students grasp the reality: you need to be smart about what you automate. It does not necessarily make sense to automate a manual test that might only be run a couple of times and has high complexity. You might end up spending days automating a test that takes perhaps 20 minutes to run. Where is the ROI in automating 40 minutes' worth of manual testing when it might take 2 days to set up the QTP automation for the same?

The example of "50 times" gets my students thinking about where the value resides in automation. If a manual test is going to be run 50 times, then chances are there is a very good ROI to spend the time automating the test.

Of course, being a rule of thumb, there are exceptions... this is not a hard and fast rule, but it does get my students thinking about automation ROI and grasping the concept of selective, smart automation, rather than "let's automate everything because we have a really cool automation tool".

Regards,

Michael Weinstock
Test Specialist
Melbourne, Australia

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------



Tuesday

How to send an email from QTP

Source : QTP Google groups

' Send an email through Outlook. SendTo, Subject, Body, and Attachment
' are supplied by the caller; pass an empty string for no attachment.
Sub SendMail(SendTo, Subject, Body, Attachment)
    Dim ol, Mail
    Set ol = CreateObject("Outlook.Application")
    Set Mail = ol.CreateItem(0)          ' 0 = olMailItem
    Mail.To = SendTo
    Mail.Subject = Subject
    Mail.Body = Body
    If Attachment <> "" Then
        Mail.Attachments.Add Attachment
    End If
    Mail.Send
    ol.Quit                              ' close Outlook when done
    Set Mail = Nothing
    Set ol = Nothing
End Sub
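
For example, assuming Outlook is installed and configured on the machine, the routine could be called like this (the address and attachment path are placeholders):

SendMail "qa-team@example.com", "Nightly QTP run", "See the attached log.", "C:\results\log.txt"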

--------------------------------------------------------------
' Launch QTP through its Automation Object Model, run a saved test,
' and write the run results to C:\results.
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True

qtApp.Open "scriptname", False          ' path to the saved test
'qtApp.Open "C:\test", False

Set qtResultsOpt = CreateObject("QuickTest.RunResultsOptions")
qtResultsOpt.ResultsLocation = "C:\results"    ' folder for the run results
Set qtTest = qtApp.Test
qtTest.Run qtResultsOpt                 ' run the open test with these options

qtTest.Close
qtApp.Quit
Set qtApp = Nothing
Set qtResultsOpt = Nothing
Set qtTest = Nothing
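
As a possible follow-up (assuming the SendMail routine from the snippet above is defined in the same script), you could notify the team once the run has finished; the address is a placeholder:

' Results were written to C:\results by the run above.
SendMail "qa-team@example.com", "QTP run complete", _
         "Run finished. Results were saved to C:\results.", ""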

Saturday

How to merge values in two columns into one column

I just refreshed my memory on how to merge the data in two columns into one column using a macro in Excel 2010.

Here is the sample I worked on:

I have first names and last names in columns A and B, and now I want column C to hold the full name (first name and last name).


How to write the macro:

1. Open an Excel workbook.
2. Make sure the first worksheet is named Sheet1.
3. Enter some names in columns A and B, starting at A1 and B1.
4. Press ALT+F11 (this opens the VBA code window).
5. Use the code below, or type it into the code window.
6. Run the macro and check column C (the full names should appear in column C).


Sub mergenames()

    ' Walk down column C: for each row, join the first name (column A,
    ' two cells to the left) and the last name (column B, one cell to
    ' the left) with a space. Stop at the first empty cell in column A.
    Sheets("Sheet1").Select
    Range("C1").Select

    Do Until Selection.Offset(0, -2).Value = ""
        Selection.Value = Selection.Offset(0, -2).Value & " " & Selection.Offset(0, -1).Value
        Selection.Offset(1, 0).Select
    Loop

    Range("A1").Select
End Sub




Load Testing Metrics

Source : http://loadstorm.com

There are many measurements that you can use when load testing. The following metrics are key performance indicators for your web application or web site.

  • Average Response Times
  • Peak Response Times
  • Error Rates
  • Throughput
  • Requests per Second
  • Concurrent Users


Average Response Time

When you measure every request and every response to those requests, you will have data for the round trip of what is sent from a browser and how long it takes the target web application to deliver what was needed.

For example, one request will be a web page...let's say the home page of the web site. The load testing system will simulate the user's browser in sending a request for the "home.html" resource. On the target's side, the request is received by the web server, it makes further requests of the application to dynamically build the page, and when the full HTML document is compiled, the web server returns that document along with a response header.

The Average Response Time takes into consideration every round trip request/response cycle up until that point in time of the load test and calculates the mathematical mean of all response times.

The resulting metric is a reflection of the speed of the web application being tested - the BEST indicator of how the target site is performing from the users' perspective. The Average Response Time includes the delivery of HTML, images, CSS, XML, Javascript files, and any other resource being used. Thus, the average will be significantly affected by any slow components.

Response times can be measured as either:

  • Time to First Byte
  • Time to Last Byte

Some people like to know when the first byte of the response is received by the load generator (simulated browser). This shows how long the request took to get there and how long the server took to start replying. However, that is only part of the real equation. It seems to be much more valuable to know the entire cycle of response that encompasses the duration of download for the resource. Meaning, why would I want to know only part of the response time? What is most important is what the user experiences, and that includes the delivery of the full payload from the server. A user wants to see the HTML page - which requires receipt of the full document. So the Time to Last Byte would be preferred as a Key Performance Indicator (KPI) over Time to First Byte.
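
As a rough sketch of what a single round-trip measurement looks like (the URL is a placeholder; a synchronous WinHTTP request only returns once the full response has arrived, so this approximates Time to Last Byte rather than Time to First Byte):

' Time one full request/response cycle from a script (run with cscript).
Set http = CreateObject("WinHttp.WinHttpRequest.5.1")
startTime = Timer                               ' seconds since midnight
http.Open "GET", "http://example.com/", False   ' False = synchronous
http.Send
body = http.ResponseText                        ' full payload is now in hand
WScript.Echo "Round trip took " & (Timer - startTime) & " seconds"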

Peak Response Time

Similar to the previous metric, Peak Response Time measures the round trip of a request/response cycle. However, the peak tells us the LONGEST cycle observed at this point in the test.

For example, if we are looking at a graph that is showing 5 minutes into the load test that the Peak Response Time is 12 seconds, then we now know one of our requests took that long. The average may still be sub-second because our other resources had speedy response.
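
A quick sketch with made-up numbers shows how the two metrics diverge (run with cscript):

' Hypothetical response-time samples in seconds; one slow outlier.
samples = Array(0.3, 0.4, 0.2, 0.5, 12.0)
total = 0 : peak = 0
For Each s In samples
    total = total + s
    If s > peak Then peak = s
Next
WScript.Echo "Average: " & total / (UBound(samples) + 1) & "s, Peak: " & peak & "s"
' With hundreds of fast samples, the average would stay low even though
' the 12-second peak still exposes the slow request.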

The Peak Response Time shows us that at least one of our resources is potentially problematic. It can reflect an anomaly in the application where a specific request was mishandled by the target system. More often, though, there is an "expensive" database query involved in fulfilling a certain request, such as a page, that makes it take much longer, and this metric is great for exposing those issues.

Typically images and stylesheets are not the slowest (although they can be when a mistake is made like using a BMP file). In a web application, the process of dynamically building the HTML document from application logic and database queries is usually the most time intensive part of the system. It is less common, yet occurs more often with open source apps, to have very slow Javascript files because of their enormous size. Large files can produce slow responses that will show up in Peak Response Time, so be careful when using big images or calling big JS libraries. Many times, you really only need less than 20% of the Javascript inside those libraries. Lazy coders won't take the trouble to clean out the other 80%, and that will hurt their system performance.

Error Rate

It is to be expected that some errors may occur when processing requests, especially under load. Most of the time you will see errors begin to be reported when the load has reached a point that exceeds the web application's ability to deliver what is necessary.

The Error Rate is the mathematical calculation that produces a percentage of problem requests to all requests. The percentage reflects how many responses are HTTP status codes indicating an error on the server, as well as any request that never gets a response.

The web server will return an HTTP Status Code in the response header. Normal codes are usually 200 (OK) or something in the 3xx range indicating a redirect on the server. A common error code is 500, which means the web server knows it has a problem with fulfilling that request. That of course doesn't tell you what caused the problem, but at least you know that the server knows there is a definitive technical defect in the functioning of the system somewhere.

It is much trickier to measure something you never receive, so an error code can be reported by the load testing tool for a condition not indicated by the server. Specifically, the tool must wait for some period of time before it quits "listening" for a response. The tool must determine when it will "give up" on a request and declare a timeout condition. A timeout is not a code received from a web server, so the tool must choose a code such as 408 to represent the timeout error.

Other errors can be hard to describe because they do not occur at the HTTP level. A good example is when the web server refuses a connection at the TCP network layer. There is no way to receive an HTTP Status Code for this, thus the load testing tool must choose some error code to use for reporting this condition back to you in the load testing results. A code of 417 is what LoadStorm reports.
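
Putting those pieces together, the calculation itself is simple; the tallies below are invented for illustration, using the 408 and 417 conventions described above (run with cscript):

' Hypothetical request tallies from a load-test run.
totalRequests = 10000
serverErrors = 230      ' HTTP 5xx responses from the server
timeouts = 45           ' no response at all; tool reports 408
refusedConns = 25       ' TCP connection refused; tool reports 417
errorRate = (serverErrors + timeouts + refusedConns) / totalRequests * 100
WScript.Echo "Error Rate: " & errorRate & "%"   ' 3% in this example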

Error Rate is a significant metric because it measures "performance failure" in the application. It tells you how many failed requests are occurring at a particular point in time of your load test. The value of this metric is most evident when you can easily see the percentage of problems increase significantly as the higher load produces more errors. In many load tests, this climb in Error Rate will be drastic. This rapid rise in errors tells you where the target system is stressed beyond its ability to deliver adequate performance.

No one can define the tolerance for Error Rate in your web application. Some testers consider less than 1% Error Rate successful if the test is delivering greater than 95% of the maximum expected traffic. However, other testers consider any errors to be a big problem and work to eliminate them. It is not uncommon to have a few errors in web applications - especially when you are dealing with thousands of concurrent users.

Throughput

Throughput is the measurement of bandwidth consumed during the test. It shows how much data is flowing back and forth from your servers.

Throughput is measured in units of Kilobytes Per Second.

Requests per Second

RPS is the measurement of how many requests are being sent to the target server. It includes requests for HTML pages, CSS stylesheets, XML documents, JavaScript libraries, images and Flash/multimedia files.

RPS will be affected by how many resources are called from the site's pages. Some sites can have 50-100 images per page, and as long as these images are small in size (e.g. under 25 KB), the server can usually deliver them quickly.

Concurrent Users

Concurrent users is the most common way to express the load being applied during a test. This metric is measuring how many virtual users are active at any particular point in time. It does not equate to RPS because one user can generate a high number of requests, and each vuser will not constantly be generating requests.

A virtual user does what a "real" user does as specified by the scenarios and steps that you have created in the load testing tool. If there are 1,000 vusers, then there are 1,000 scenarios running at that particular time. Many of those 1,000 vusers may be spawning requests at the same time, but there are many vusers that are not because of "think time". Simply put, think time is the pause between vuser actions that simulates what happens with a real user as he or she reads the page received before clicking again.
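
As a toy illustration of the idea (this is generic VBScript, not any load tool's actual API), a virtual user's loop alternates between an action and a randomized pause:

' Each simulated step is followed by a 3-7 second "think time" pause.
Randomize
For stepNum = 1 To 5
    ' ... the vuser action for this step would run here ...
    thinkSeconds = 3 + Int(Rnd * 5)   ' random integer from 3 to 7
    WScript.Sleep thinkSeconds * 1000
Next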

Other Thoughts on Load Testing Metrics

On SOA Testing blog, they list the most important load testing metrics in their context as:

* Response time: It's the most important parameter to reflect the quality of a Web Service. Response time is the total time it takes after the client sends a request till it gets a response. This includes the time the message remains in transit on the network, which can't be measured exclusively by any load-testing tool. So we're restricted to testing Web Services deployed on a local machine. The result will be a graph measuring the average response time against the number of virtual users.
* Number of transactions passed/failed: This parameter simply shows the total number of transactions passed or failed.
* Throughput: It's measured in bytes and represents the amount of data that the virtual users receive from the server at any given second. We can compare this graph to the response-time graph to see how the throughput affects transaction performance.
* Load size: The number of concurrent virtual users trying to access the Web Service at any particular instance in an interval of time.
* CPU utilization: The amount of CPU time used by the Web Service while processing the request.
* Memory utilization: The amount of memory used by the Web Service while processing the request.
* Wait Time (Average Latency): The time it takes from when a request is sent until the first byte is received.

Tuesday

Siebel Scripting -2

Source: http://software-qe.blogspot.com/2008/01/siebel-7x-record-and-replay-for.html


Siebel 7.x Record and Replay for LoadRunner 8.x

Preparation before record – Internet Explorer Settings

To avoid problems, verify and change the following Internet Explorer’s settings before you try to record:

1. Enable all ActiveX controls and plug-ins. This option is available in Internet Explorer > Tools > Internet Options > Security > Custom Level.

2. Enable the "Use HTTP 1.1 through proxy connection" option. This option is available in Internet Explorer > Tools > Internet Options > Advanced, under the "HTTP 1.1 settings" section.

Recording Siebel script with Auto-Correlation option

Correlation is the mechanism by which VuGen saves dynamic values to parameters during record and replay, for use at a later point in the script. For general information about correlation, you can refer to Problem ID 11806 - What is correlation and how is it done

For Siebel script, you can instruct VuGen to automatically apply correlation during recording using one of the following methods:

· VuGen Native Siebel Correlation

The native, built-in rules work at a low level, allowing you to debug your script and understand the correlations in depth.

· Siebel Correlation Library

The Siebel correlation library automatically correlates most of the dynamic values, creating a concise script that you can replay easily. Note that this is only available for Siebel 7.7 and the library is distributed by Siebel.

How to record with VuGen Native Siebel Correlation

VuGen's native built-in rules for the Siebel server detect the Siebel server variables and strings, automatically saving them for use at a later point within the script. This is available in the VuGen recording options by default; you do not need to have any additional components installed.

Steps to record with Native Siebel correlation:

1. From the ‘New Multiple Protocol Script’ window, add ‘Siebel-Web’ and click OK.

2. Set the following Recording Options:

a. Internet Protocol: Recording:

· Select 'HTML based script'

· Click on 'HTML Advanced' and select the following:

i. Script Type: a script containing explicit URLs only

ii. Non HTML-generated elements: Do not record

b. Internet Protocol: Advanced:

i. Clear the 'Reset context for each action' option.

ii. Select 'Support Charset' then select 'UTF-8'

c. Internet Protocol: Correlation:

i. Make sure that 'Enable Correlation during recording' is selected

ii. Make sure that 'Siebel' is selected. You can expand the list to see the details about each rule if you wish.

d. Leave other options as default.

3. Record in the following way:

· Record the login in the vuser_init section

· Record the Business Process in Action1

· Record the logout in the vuser_end section

How to record with Siebel Correlation Library

Siebel has released a correlation library file, ssdtcorr.dll, as part of Siebel Application Server version 7.7. This library is available only through Siebel and can be found in the siebsrvr\bin directory on Windows.

Note: The Siebel Correlation API is supported on Windows 2000 and Windows XP only. This implies that if you correlate the script with this method, you cannot replay the script on a UNIX platform. If you need to run the script on a UNIX platform, use the VuGen Native Siebel Correlation method.

The library file, ssdtcorr.dll, must be available to all machines where a Load Generator, Controller, or Tuning Console resides.

Steps to record with Siebel correlation library:

1. Copy ssdtcorr.dll into the \bin directory of the Controller / Tuning Console and ALL Load Generator machines.

2. From the ‘New Multiple Protocol Script’ window, add ‘Siebel-Web’ and click OK.

3. Set the following Recording Options:

a. Internet Protocol: Recording:

· Select 'HTML based script'

· Click on 'HTML Advanced' and select the following:

i. Script Type: a script containing explicit URLs only

ii. Non HTML-generated elements: Do not record

b. Internet Protocol: Advanced:

i. Clear the 'Reset context for each action' option.

ii. Select 'Support Charset' then select 'UTF-8'

c. Internet Protocol: Correlation:

i. Delete the default 'Siebel' correlation rule

ii. Click on 'Import'; the 'Import correlation Settings from a file' window opens.

iii. Navigate to the \dat\webrulesdefaultsetting directory, select 'WebSiebel77Correlation.cor' and click 'Open'

iv. On the 'Confirm Rule Replacement' window, select 'Overwrite'

Note: To revert back to the default correlation, delete all of the Siebel rules and click 'Use Defaults'.

d. Leave other options as default.

4. Record in the following way:

· Record the login in the vuser_init section

· Record the Business Process in Action1

· Record the logout in the vuser_end section

Note: After using the Siebel Correlation Library, a script may throw errors when run for multiple iterations. This is because the server caches some of the data after the first iteration, and on the second iteration some of the data does not return from the server. The solution is to record the business process twice: first record it into vuser_init, and then into Action. This is actually the way Siebel uses LoadRunner, and it is the recommended way to script Siebel.

Replaying Siebel script – Run-Time Settings

Make sure that “Simulate a new user on each iteration” is not selected in the Browser Emulation options.

Common Replay Errors

Error: “We detected an Error which may have occurred for one or more of the following reasons: We are unable to process your request. This is most likely because you used the browser BACK or REFRESH button to get to this point.”

Diagnosis: An HTTP request has been sent twice to the server. This could be an individual web_url request or part of the resources being downloaded by another request. When the second request is sent, the Siebel 7.x server detects the duplicate request and issues the above error.

Example:

The following is a sample HTML-based script. Even though "start.swe3" is a frame within step "start.swe2", an additional request is generated for "start.swe3" because of the "wait.html" step. On replay, the server may reject the second request, "start.swe3", since it is the same as the HTTP call generated by "start.swe2". This may be due to the SWECount or SWEC.

web_submit_data("start.swe2",
    "Action=http://64.242.155.45/callcenter/start.swe",
    "Method=POST",
    "RecContentType=text/html",
    "Referer=http://64.242.155.45/callcenter/start.swe?SWECmd=Start",
    "Mode=HTML",
    ITEMDATA,
    "Name=SWEUserName", "Value=sadmin", ENDITEM,
    "Name=SWEPassword", "Value=sadmin", ENDITEM,
    "Name=SWENeedContext", "Value=false", ENDITEM,
    "Name=SWEFo", "Value=SWEEntryForm", ENDITEM,
    "Name=SWETS", "Value=1024549479671", ENDITEM,
    "Name=SWECmd", "Value=ExecuteLogin", ENDITEM,
    "Name=SWEBID", "Value=-1", ENDITEM,
    "Name=SWEC", "Value=0", ENDITEM,
    LAST);

web_url("wait.html",
    "URL=http://64.242.155.45/callcenter/wait.html",
    "TargetFrame=", "Resource=0", "RecContentType=text/html", "Referer=",
    "Snapshot=t6.inf", "Mode=HTML",
    LAST);

web_url("start.swe3",
    "URL=http://64.242.155.45/callcenter/start.swe?SWEFrame=top._swe&_sn={Siebel_sn_body3}&SWECmd=GetCachedFrame&SWEC=1",
    "TargetFrame=", "Resource=0",
    "RecContentType=text/html",
    "Referer=http://64.242.155.45/callcenter/start.swe",
    "Mode=HTML",
    LAST);

Solutions:

1. Change the Mode in "start.swe2" to "Mode=HTTP"

The idea behind changing the mode from HTML to HTTP is to avoid parsing the HTML page that is returned by the server, so that resources are not downloaded. This helps avoid multiple downloads of the same request.

If the script still fails on the first iteration, go to step 2. If the script fails from the second iteration onward, go to step 3.

2. Disable the Run-Time Viewer

If the script still fails on the first iteration after the change from step 1, try closing the Run-Time Viewer. This option is in VuGen's Tools > General Options > Display tab; clear the "Show Browser during Replay" option. For more information about this, refer to Problem ID 17234 - Errors in Web replay because of conflict with the runtime browser.

If the problem persists, refer to step 4.

3. Correlate SWECount or SWEC

If you are able to run the first iteration, but the script fails on the second iteration or later, you will need to correlate SWECount (7.0.3) or SWEC (7.0.4) from the previous step "start.sweXXX". For information about correlation, refer to Problem ID 11806 - What is correlation and how is it done.

If the problem persists, refer to step 4.

4. Run the script with the extended log

If none of the above helps, replay the script with the extended log and identify the HTTP request that is being downloaded multiple times. Search for a similar HTTP request being sent earlier in the execution log. Once you locate it, set "Mode=HTTP" so that the resources for that request are not downloaded, and try replaying the script again.


Siebel Tricks and Tips for LoadRunner

Extract from LoadRunner Group :


I work with the Siebel Web protocol all the time and I prefer to use
the native correlation rules over the ones provided by Siebel. Sure,
the added rules help you grab more stuff, but the correlations are not
always on target either. Also, you end up with so many possible
parameters to choose from that it becomes confusing. But that's just me and
this opinion is highly subjective. As a matter of fact, my partner has
the Siebel rules installed and swears by them, so yeah... you can try
them and see for yourself :)

To answer a little on your questions:
rowids: The single most important part of a record. You can use it to
retrieve a given record, but you will also need it to perform actions
like "delete", "drilldown", "select", etc. You will also need it as a
"reference" when you want to navigate through a record's sub-tabs.
This is oversimplified, but maybe it'll help shed some light:
SWERowId = rowid to perform the action/method on
SWERowIds = record's parent rowid

SWEACn & SWEC:
These gave me a lot of trouble at first, but for two different reasons.
SWEACn doesn't always appear in the server response where you
originally recorded it. That said, it will always be in the
response of the web_url call right before the one where you need it.
Basically, what I'm saying is: move your web_reg_save_param from where
you originally had it recorded to one request prior to where you need
it. This has never failed me so far.

SWEC is a strange one... I don't quite understand how the system
updates this value. It adds/subtracts integers as if it were
calculated, rather than coming back as part of a response from the
server (maybe this is solved by the Siebel correlation rules, but I
don't use them). Anyway, when you get a "back or refresh error",
the first thing you should suspect is the Siebel_SWECount (SWEC). Usually,
this will happen after the first iteration. My advice for this one:
stick a web_reg_save_param before the call prior to the problematic
URL. Look in the appropriate server response for the real SWEC value
and put the proper boundaries in the web_reg_save_param. Of course, you'll need to
put this code before the problematic URL:

int Siebel_SWECount_var;   /* declare at the top of the Action in a real script */

Siebel_SWECount_var = atoi(lr_eval_string("{YOUR_SWEC_PARAM_NAME}"));
lr_save_int(Siebel_SWECount_var, "Siebel_SWECount");

Also, make sure the problematic URL calls a dynamic SWEC, not a
hardcoded one:
SWEC={Siebel_SWECount} GOOD!
SWEC=13 BAD!

Saturday

All about LoadRunner 11.0

for the Windows operating system

Software version: 11.00

Publication date: October 2010

This file provides information about LoadRunner version 11.00.

What's New

Protocols

  • Ajax TruClient - An advanced protocol for modern JavaScript-based applications (including Ajax) that emulates user activity within a web browser. Scripts are developed interactively in Mozilla Firefox.
  • Silverlight - A new protocol for Silverlight-based applications that emulates user activity at the transport level. Allows generating high-level scripts by automatically importing and configuring the WSDL files used by the application.
  • Java over HTTP - A new protocol designed to record Java-based applications and applets. It produces a Java language script using web functions. This protocol is distinguished from other Java protocols in that it can record and replay Java remote calls over HTTP.
  • Citrix
    • The Citrix Protocol now supports Citrix Online Plugin versions 11.2 and 12.0.
    • Added support for Citrix XenApp Server 5.0
  • Oracle NCA - NCA Java object property support now provides automated creation and registration within a script of a query-answer table of communication between client-side Java objects and the Oracle NCA server.
  • SAPGUI - Added support for SAPGUI for Windows Client version 7.20.
  • Service Test - The LoadRunner Controller can run scripts created in HP Service Test 11.00, HP's solution for creating and running automated tests for SOA and headless technologies. Refer to the Service Test documentation for details of creating Service Test scripts for a load testing scenario.

Features

  • Data Format Extension (DFE) - Enhanced data format capabilities for the Web (HTTP/HTML) protocol family. Allows converting raw HTTP traffic into a maintainable and structured XML format and enables correlations by XPATH.
  • Correlation Studio - Web (HTTP/HTML) automatic correlation mechanism has been enhanced to search for possible correlations in the larger scope of snapshot data created during code generation including data formatted by DFE.
  • Snapshot View - New snapshot view for Web (HTTP/HTML) protocol steps allows viewing complete HTTP traffic in both raw and DFE generated formats.
  • VuGen - HP ALM Integration - Enhanced integration with HP Application Lifecycle Management platform that serves also Quality Center and Performance Center editions.
  • Windows Support - Added support for Windows 7 and Windows Server 2008. See below for limitations.
  • Analysis Reports - Enhanced Analysis reports are more customizable. Analysis data can be exported to a variety of formats, including Word, Excel, PDF, and HTML. New report templates allow saving report definitions and generating reports based on a template.

Installation and Configuration Information

Prerequisite Software

Specific software needs to be installed before you can install LoadRunner. When you run the LoadRunner installation wizard, if the prerequisite software is not already installed on your computer, the wizard detects which software is missing and provides the option to install it.

The following prerequisite software needs to be installed:

    • .NET Framework 3.5 SP1
    • Microsoft Data Access Components (MDAC) 2.8 SP1 (or later)
    • Microsoft Windows Installer 3.1
    • Microsoft Core XML Services (MSXML) 6.0
    • Microsoft Visual C++ 2005 SP1 Redistributable Package (x86)
    • Microsoft Visual C++ 2008 Redistributable Package (x86)
    • Web Services Enhancements (WSE) 2.0 SP3 for Microsoft .NET Redistributable Runtime MSI
    • Web Services Enhancements (WSE) 3.0 for Microsoft .NET Redistributable Runtime MSI
    • Strawberry Perl 5.10.1

System Requirements for VuGen, Controller, and Analysis

The following table describes the system requirements for installing VuGen, the Controller, or Analysis:

Processor
CPU Type: Intel Core, Pentium, Xeon, AMD or compatible
Speed: 1 GHz minimum. 2 GHz or higher recommended
Operating System
  • Windows Vista SP2 32-bit
  • Windows XP Professional SP3 32-bit
  • Windows Server 2003 Standard Edition/Enterprise Edition SP2 32-bit
  • Windows Server 2008 Standard Edition/Enterprise Edition SP2 32-bit and 64-bit
  • Windows 7
Note: VuGen recording is not supported on 64-bit operating systems.
Memory (RAM)
Minimum: 2 GB
Recommended: 4 GB or higher
Screen Resolution
Minimum: 1024 x 768
Browser
  • Microsoft Internet Explorer 6.0 SP1 or SP2
  • Microsoft Internet Explorer 7.0
  • Microsoft Internet Explorer 8.0
Available Hard Disk Space
Minimum: 2 GB


Load Generator for Windows System Requirements

The following table describes the system requirements for installing the Load Generator on a Windows machine.

Processor
CPU Type: Intel Core, Pentium, Xeon, AMD or compatible
Speed: 1 GHz minimum. 2 GHz or higher recommended
Note for Pentium Processors: Intel Hyper-Threading technology is not supported. Hyper-Threading can be disabled in the BIOS.
Operating System
The following Windows operating systems are supported:
  • Windows Vista SP2 32-Bit
  • Windows XP Professional SP3 32-Bit
  • Windows Server 2003 Standard Edition/Enterprise Edition SP2 32-Bit
  • Windows Server 2008 Standard Edition/Enterprise Edition SP2 32-Bit and 64-bit
  • Windows 7
Memory (RAM)
Minimum: 1 GB
Note: Memory depends on protocol type and system under test and can vary greatly.
Browser
  • Microsoft Internet Explorer 6.0 SP1 or SP2
  • Microsoft Internet Explorer 7.0
  • Microsoft Internet Explorer 8.0
Available Hard Disk Space
Minimum: 2 GB


Load Generator for UNIX System Requirements

This section describes the system requirements necessary for installing the HP Load Generator on a UNIX machine.

Memory (RAM)
256 MB minimum
Note: Memory depends on protocol type and system under test and can vary greatly.
Available Hard Disk Space
150 MB minimum


The following table describes the supported operating systems on which you can install a UNIX HP Load Generator.

OS Type: Sun Solaris
OS Version: Solaris 9 (2.9), Solaris 10 (2.10)
Platform: Sun UltraSPARC-based systems

OS Type: HP-UX
OS Version: HP-UX 11iv2 (11.23)
Platform: HP PA-RISC

OS Type: Red Hat Linux
OS Version: Enterprise Linux 4.0, Enterprise Linux 5.0
Platform: Intel Core, Pentium, AMD or compatible; 1 GHz minimum, 2 GHz or higher recommended


Product Compatibility

LoadRunner 11.00 is compatible with the following HP product versions:

  • HP Quality Center version 10.00
  • HP Application Lifecycle Management version 11.00
  • HP QuickTest Professional versions 10.00 and 11.00
  • HP Diagnostics versions 8.04 and 9.00 (Note: To use Diagnostics 8.x with LoadRunner 11.00, the Diagnostics 9.00 LoadRunner Add-in must be installed. For more details, see the HP Diagnostics documentation)
  • HP SiteScope versions 10.12 and 11.00

Pre-Installation Notes and Limitations

This section includes:

Windows
  • On Vista machines, if you want to add a new license from the LoadRunner Launcher (Configuration > LoadRunner License > New License), you need to have Administrator privileges on the Vista machine.
  • If you are running McAfee or Aladdin's eSafe anti-virus applications, close them before installing LoadRunner.
  • To use Windows 2003 with a HASP plug, download Aladdin's latest HASP driver.
UNIX
  • The LoadRunner UNIX installation is based on native packages per operating system. This requires you to be logged in as root user to run the installation.
  • If you are installing a UNIX load generator on an HP-UX operating system, you cannot install it from a network location. You can install it directly from the installation disk or you can copy the installer onto the local directory of the target machine.
Virtual Environment Installation
  • LoadRunner supports VMware ESX 3.0, ESX 3.5, and VMware Workstation 5.5, and is certified for the following Windows platforms: Windows XP SP2/SP3, Windows Server 2003 SP2, and Windows Vista SP1.
  • Running Vusers on virtual machines may adversely affect performance due to the sharing of physical resources.
Diagnostics for J2EE/.NET Requirements

A unique transaction name must be used for each scenario.

ContentCheck in Multilingual Environments

This version supports ContentCheck rules in French, German, Spanish, and Italian. The correct language file should be installed according to the system locale.

The suitable language file can also be copied from the installation disk:

..\lrunner\MSI\setup\international\\dat\LrwiAedInstallation.xml

to the \dat directory.

Windows Firewall Considerations

In most Windows environments, Windows Firewall is turned on by default. The firewall does not allow certain LoadRunner components to communicate with each other. The Windows firewall therefore needs to be turned off.

Note: Turning off Windows Firewall increases the risk to your computer's security.

For each process that needs access through the firewall, you can unblock the process by clicking the Unblock button in the popup window that indicates the program is blocked, or by manually tuning the Windows Firewall from the Exceptions tab.

WAN Emulation
  • Make sure that the relevant 3rd party components are installed on the load generator machines. Note that in addition to the load generators, you may be required to install the relevant 3rd party component on additional LoadRunner components. For more information, see the relevant 3rd party software installation documentation.
  • The relevant 3rd party component licenses must be purchased from the 3rd party vendor and not from HP.
HP Performance Validation SDK

HP Performance Validation SDK version 11.00 can be used only with LoadRunner version 11.00 and above.

Notes and Limitations

This section includes:

General

  • To run LoadRunner on Windows 7 or Windows Server 2008, you must have Administrator privileges and User Account Control (UAC) must be disabled.
  • Internet Explorer 8
    • For Click and Script based protocols, address bar operations and pop-up windows are not supported.
    • The Internet Explorer SmartScreen Filter must be disabled when recording with Citrix Web Access (formerly known as Citrix NFuse).
  • Internet Explorer Enhanced Security Configuration should be disabled when recording with Citrix Web Access (formerly known as Citrix NFuse) recording on Windows 2003/2008 Server.
  • WinInet recording is not supported.
  • Recording on 64 bit machines is not supported, however replaying scripts on 64 bit machines is supported.
  • FTP active mode with SSL is not supported, in either explicit or implicit mode.
  • The network speed simulation settings in the Network: Speed Simulation node in the Run Time Settings do not work with Windows 7. Virtual users will use the maximum bandwidth regardless of which option was selected.
  • It is not recommended to install and uninstall a Load Generator standalone installation on the same machine with a VuGen standalone installation.
  • The Load Generator cannot run Citrix scripts in service mode when the script was recorded using Citrix Client version 11.2 or higher.
  • The Agent icon does not appear in Windows 2008 and Vista when the LoadRunner Agent service is launched.
  • When the LoadRunner Agent runs as a service (magentservice.exe), files that are stored on remote network drives or referred to by a UNC path (script, parameter file, etc.) cannot be accessed. If you want to access files this way, run the LoadRunner Agent as a process (magentproc.exe). If this is not possible, please contact Customer Support.

VuGen

  • SAP (Click and Script) recording. During recording, if you use a keyboard option instead of a UI element (for example, pressing Enter instead of clicking the log on button), the step may not be recorded. In general, when recording your script, it is recommended to use UI elements rather than keyboard options.
  • Citrix snapshots. Black snapshots may appear during record or replay when using Citrix Presentation Server 4.0 and 4.5 (before Rollup Pack 3).
  • Possible workaround: On the Citrix server, select Start Menu > Settings > Control Panel > Administrative Tools > Terminal Services Configuration > Server Settings > Licensing, and change the setting Per User or Per Device to the alternative setting (i.e., if it is set to Per User, change it to Per Device, and vice versa).

  • Recording Window Size and XenApp Plugin for Hosted Applications 11. The recording window size option does not work properly with the XenApp Plugin for Hosted Applications 11. The size of the client window is set, but the server screen resolution is not. This is a Citrix Client bug and will be fixed in future Citrix Client versions.
  • Workaround: When recording, set the window size equal to the local screen resolution. When replaying/load testing, set the VuGen or Load Generator's screen resolution to equal the resolution used when the script was recorded. To verify the recorded resolution, view the Window property in the
