Saturday

Question : I did not understand what the transaction percentile value means. And what is the best way to set the percentage value so that I can exclude the think time from the total scenario time?


Answer : I assume you are referring to the percentile value that appears in the Summary Report. (Right?)

This value tells you that n% of all transaction samples had a response time lower than or equal to that value. The higher the percentile, the closer you get to the maximum response time. The average transaction response time can be misleading, since the standard deviation can be very high.

Let's look at an example. Assume the following response time samples for a specific transaction: 2, 2, 2, 20, 20, 20, 20, 24, 28, 30.
The average of this series is ~17.
The 90th percentile is ~28.

From this series it is clear that a user is more likely to get a response time higher than the average. The 50th percentile, also known as the median, yields a more realistic number.
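To make the arithmetic concrete, here is a small Python sketch of the example above. The `percentile` helper uses the nearest-rank convention, which is one of several common percentile definitions (Analysis may use a slightly different interpolation):

```python
import math
import statistics

samples = [2, 2, 2, 20, 20, 20, 20, 24, 28, 30]

def percentile(data, p):
    """Nearest-rank percentile: the smallest sample value such that
    at least p% of the samples are lower than or equal to it."""
    ordered = sorted(data)
    rank = math.ceil(p * len(ordered) / 100)  # 1-based rank
    return ordered[rank - 1]

print(statistics.mean(samples))    # 16.8  (~17)
print(percentile(samples, 90))     # 28
print(statistics.median(samples))  # 20.0  (the 50th percentile)
```

Seven of the ten samples are above the average of 16.8, which is why the median (20) and the 90th percentile (28) describe the user experience better than the mean.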

The 90th percentile is often used as a target when setting performance objectives, as it is a more rigorous target than the average.

Regarding the second part of your question:
> The percentile value that appears in the Summary Report can be configured in the General tab of the Analysis Tools > Options dialog
> Think time can be excluded globally from the entire Analysis session in the Global Filter dialog, available from the Analysis File menu

Please note that LoadRunner 11.00 comes with a Percentile SLA feature that can help measure performance objectives. It also offers more flexible percentile settings in the new reports introduced in Analysis 11.00.

Why elapsed time keeps increasing in the Controller

Why does the elapsed time continue to increase even though I have set the schedule to 1 user with a duration of 1 second? Also, when I stop the run and check the results, the graph does not appear.


From the question, it is clear you need some basic LoadRunner training. I suggest you run the LoadRunner tutorial, available from the LoadRunner program group on the Windows programs menu, and read the product documentation. The scenario elapsed time continues to increase as long as the Scheduler is running or there are running Vusers. It seems that in your case a Vuser is still running, hence the time keeps increasing.

Sunday

How to help JAPAN?

Japan was hit by one of the largest earthquakes ever recorded on Friday. The magnitude-8.9 quake spawned a deadly tsunami that slammed into the nation's east coast, leaving a huge swath of devastation in its wake. Hundreds of people are dead and many more are still missing or injured.

Japan has often donated when other countries have experienced disasters, such as when Hurricane Katrina impacted the United States. Below are organizations that are working on relief and recovery in the region.

AMERICAN RED CROSS: Emergency Operation Centers have been opened in the affected areas and are staffed by the chapters. This disaster is on a scale larger than the Japanese Red Cross can typically manage. Donations to the American Red Cross can be allocated to the International Disaster Relief Fund, which then deploys to the region to help. Donate here.

GLOBALGIVING: Established a fund to disburse donations to organizations providing relief and emergency services to victims of the earthquake and tsunami. Donate here.

SAVE THE CHILDREN: Mobilizing to provide immediate humanitarian relief in the shape of emergency health care and provision of non-food items and shelter. Donate here.

SALVATION ARMY: The Salvation Army has been in Japan since 1895 and is currently providing emergency assistance to those in need. Donate here.

AMERICARES: Emergency team is on full alert, mobilizing resources and dispatching an emergency response manager to the region. Donate here.

CONVOY OF HOPE: Disaster Response team established connection with in-country partners who have been impacted by the damage and are identifying the needs and areas where Convoy of Hope may be of the greatest assistance. Donate here.

INTERNATIONAL MEDICAL CORPS: Putting together relief teams and supplies, and in contact with partners in Japan and other affected countries to assess needs and coordinate activities. Donate here.

SHELTER BOX: The first team is mobilizing to head to Japan and begin the response effort. Donate here.

How to Build Test Environments

  1. Test Standards
    1. External Standard - Familiarity with and adoption of industry test standards from organizations such as IEEE, NIST, DoD, and ISO.
    2. Internal Standards - Development and enforcement of the test standards that testers must meet.
  2. Test Environment Components
    1. Test Process Engineering - Developing test processes that lead to efficient and effective production of testing activities and products.
    2. Tool Development and/or Acquisition - Acquiring and using the test tools, methods, and skills needed for test development, execution, tracking, and analysis (both manual and automated tools including test management tools).
    3. Acquisition or Development of a Test Bed/Test Lab/Test Environment - Designing, developing, and acquiring a test environment that simulates the real world, including capability to create and maintain test data.
  3. Test Tools
    1. Tool Competency - Ability to use 1) automated regression testing tools; 2) defect tracking tools; 3) performance/load testing tools; 4) manual tools such as checklists, test scripts, and decision tables; 5) traceability tools; and 6) code coverage tools.
    2. Tool Selection (from acquired tools) - Select and use tools effectively to support the test plan and test processes.

While there are no generally accepted categories of test tools, experience has shown that the most commonly used tools can be grouped into these eight areas:

  • Automated Regression Testing Tools - Tools that can capture test conditions and results for testing new versions of the software.
  • Defect Management Tools - Tools that record defects uncovered by testers and then maintain information on those defects until they have been successfully addressed.
  • Performance/Load Testing Tools - Tools that can “stress” the software. These tools check the software's ability to process large volumes of data without losing data, returning data to users unprocessed, or suffering significant reductions in performance.
  • Manual Tools - One of the most effective test tools is a simple checklist, indicating either items that testers should investigate or steps that ensure testers have performed test activities correctly. There are many manual tools, such as decision tables, test scripts used to enter test transactions, and checklists for testers to use when performing techniques such as reviews and inspections.
  • Traceability Tools - Tools most frequently used to trace requirements from the inception of the project through operations.
  • Code Coverage - Tools that can indicate the amount of code that has been executed during testing. Some of these tools can also identify code that is never entered (dead code).
  • Test Case Management Tools - This category includes test generators and tools that can manage data being processed for online assistance.

Common tools applicable to testing - Testers have access to a variety of work tools, many included with operating software such as “Windows.” These include word processing, spreadsheets, computer graphics used for reporting and status checking, and tools that can measure the reading difficulty of documentation.

  1. Quality Assurance / Quality Control
    1. Quality Assurance versus Quality Control - Being able to distinguish between those activities that modify the development processes to prevent the introduction of flaws (QA) and those activities that find and correct flaws (QC). Sometimes this is referred to as preventive versus detective quality methods.
    2. Process Analysis and Understanding - Ability to analyze gathered data in order to understand a process and its strengths and weaknesses. Ability to watch a process in motion, so that recommendations can be made to remove flaw-introducing actions and build upon successful flaw-avoidance and flaw-detection resources.
  2. Building the Test Environment Work Processes
    1. Concepts of work processes - understanding the concepts of policies, standards and procedures and their integration into work processes.
    2. Building a Test Work Process - an understanding of the tester’s role in building a test work process.
    3. Test Quality Control - Test quality control is verification that the test process has been performed correctly.
    4. Analysis of the Test Process - The test process should be analyzed to ensure:
      1. The test objectives are applicable, reasonable, adequate, feasible, and affordable.
      2. The test program meets the test objectives.
      3. The correct test program is being applied to the project.
      4. The test methodology, including the processes, infrastructure, tools, methods, and planned work products and reviews, is adequate to ensure that the test program is conducted correctly.
      5. The test work products are adequate to meet the test objectives.
      6. Test progress, performance, processes, and process adherence are assessed to determine the adequacy of the test program.
      7. Adequate, not excessive, testing is performed.
    5. Continuous Improvement - Continuous improvement is identifying and making continuous improvement to the test process using formal process improvement processes.
  3. Adapting the Test Environment to Different Technologies - The test environment must be established to properly test the technologies used in the software system under test. These technologies might include:

    a. Security/Privacy
    b. Client Server
    c. Web Based Systems
    d. E-Commerce
    e. E-Business
    f. Enterprise Resource Planning (ERP)
    g. Business Reengineering
    h. Customer Relationship Management (CRM)
    i. Supply Chain Management (SCM)
    j. Knowledge Management
    k. Application Service Providers
    l. Data Warehousing

Thursday

information on Test Confidence ratio of Project

If you're writing a (final) test summary report about the progress and quality of a project, it's good to mention two things: the usual statistics about progress and defects, and a (qualitative) judgement about the quality of the project. The latter cannot be described in statistics or figures. I think a general feeling about the quality of the project is as important as statistics, because with statistics one can prove everything and nothing.

If you really want to use some kind of test confidence ratio, I think the best option is the defect detection percentage: take the number of defects found in the current test level and divide it by the number of defects found in the current and all subsequent test levels.

For example: there are four test levels with a various number of defects
UT: 33 defects
ST: 75 defects
SIT: 98 defects
UAT: 18 defects

If one calculates it for UT, the formula is 33 / (33 + 75 + 98 + 18) ≈ 14.7%; for ST it's 75 / (75 + 98 + 18) ≈ 39.3%, etc.
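The calculation above can be sketched in a few lines of Python (the level names and defect counts are the example figures from this post):

```python
# Defect Detection Percentage (DDP) for each test level:
# defects found at that level, divided by defects found
# at that level plus all subsequent levels.
levels = [("UT", 33), ("ST", 75), ("SIT", 98), ("UAT", 18)]

counts = [found for _, found in levels]
for i, (name, found) in enumerate(levels):
    ddp = found / sum(counts[i:]) * 100  # this level + all later levels
    print(f"{name}: {ddp:.1f}%")         # UT: 14.7%, ST: 39.3%, ...
```

Note that the last level always comes out at 100%, since by definition no defects are found after it; in practice one would add post-release defects as a final "level" to make the UAT figure meaningful.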

Take into account that if this is calculated for a single project, it says nothing on its own. You need the history of several projects to take full advantage of this formula. That way the percentages for a test level can be compared with the average of other projects, giving you a fairly objective way of measuring the quality of a project.

The obvious disadvantage of this ratio is that it can only be calculated at the end of a project and not in the middle of a project.

Tuesday

Scripting efforts in performance testing

Source : SQA Forums : http://www.sqaforums.com/showflat.php?Number=583987

I am looking at understanding the effort in hours that goes into creating scripts.

I usually classify scripts as simple, medium, or complex and allot around 6-8 / 10-14 / 18-24 hours of effort based on the application type, protocol used, and number of screens (this also depends on the scripter).

I would like to receive inputs on how experts estimate the effort for creating scripts, or experiences with the effort required to create performance testing scripts.

And I am not limiting myself to LoadRunner; I am looking at most of the industry-standard tools, like SilkPerformer, Rational Performance Tester, WebLoad, JMeter, and Grinder.



==============================================================
(Number of Steps * Skew for Protocol * Skew for level of expertise for the scripter with protocol * skew for tool used * skew value for scripter expertise with tool) + (anticipated number of custom functions based upon examination of the app and business processes * 4 * Skew for level of expertise for the scripter) = estimated number of hours per business process scripted

So, if you have a new, untrained scripter on a new protocol on a new tool, with a couple of custom functions that need to be created, and this protocol is winsock, then perhaps you wind up with an estimate range of

    ( 8(steps)
    * [2.5 - 5.0](protocol Skew)
    * [7 - 10](expertise skew-protocol)
    * 1(tool skew)
    * 2(Expertise with tool skew )
    + ( 2(custom functions)
    * 4(hours)
    * [7-10](expertise skew-protocol) )

    Range of [336 to 880]


Assuming tool and protocol expertise

(8(steps) * [2.5 - 5.0] * 1 * 1 * 1) + (2 * 4 * 1) = Range of [28 to 48]

That gives you a generic formula. You will need to come up with your own values for protocol skew, tool skew, person expertise skew and the like.
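As a sketch, the generic formula can be wrapped in a small Python function. The parameter names here are my own; the skew values plugged in below are the ones from the worked examples above, and you would substitute your own calibrated values:

```python
def estimate_hours(steps, protocol_skew, protocol_expertise_skew,
                   tool_skew, tool_expertise_skew,
                   custom_functions, hours_per_function=4):
    """Estimated scripting hours per business process:
    (steps * protocol skew * protocol-expertise skew * tool skew
     * tool-expertise skew)
    + (custom functions * hours per function * protocol-expertise skew)."""
    scripting = (steps * protocol_skew * protocol_expertise_skew
                 * tool_skew * tool_expertise_skew)
    custom = custom_functions * hours_per_function * protocol_expertise_skew
    return scripting + custom

# New scripter, new winsock protocol (low and high ends of the skew ranges):
low = estimate_hours(8, 2.5, 7, 1, 2, 2)    # 336.0
high = estimate_hours(8, 5.0, 10, 1, 2, 2)  # 880.0

# Assuming tool and protocol expertise (all expertise skews = 1):
print(estimate_hours(8, 2.5, 1, 1, 1, 2))   # 28.0
print(estimate_hours(8, 5.0, 1, 1, 1, 2))   # 48.0
```

The spread between 336 and 880 hours shows how dominant the expertise skews are: training the scripter collapses the estimate to 28-48 hours for the same business process.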


For more information on this, click here



Tip of the day : Finding a software Testing Job

There are many people who would like to get software testing jobs, but they are unsure about how to approach it. This may seem like a dream j...