
Implementing Automated Software Testing - Continuously Track Progress and Adjust Accordingly

Source: Week End Testers

Thom Garrett, Innovative Defense Technologies, www.IDTus.com

This is an excerpt from the book "Implementing Automated Software Testing" by Elfriede Dustin, Thom Garrett, and Bernie Gauf, Copyright Addison-Wesley, 2009.

"When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." - Lord Kelvin

Most of us have worked on at least one project where the best-laid plans went awry, and at times there wasn’t just one reason we could point to for the failure. People, schedules, processes, and budgets can all contribute. [1] Based on such past experiences, we have learned that as part of a successful automated software testing (AST) program it is important that the right people with the applicable skills be hired, that goals and strategies be defined and then implemented, and that steps be put in place to continuously track, measure, and adjust against these goals and strategies as needed. Here we’ll discuss the importance of tracking the AST program, including various defect prevention techniques such as peer reviews and other interchanges. We’ll then focus on the types of AST metrics to gather so that we can measure progress, gauge the effectiveness of our AST efforts, and keep them on track or make adjustments if necessary. Finally, we will discuss the importance of a root cause analysis when a defect or issue is encountered.

Based on the outcome of these various efforts, adjustments can be made where necessary; e.g., the defects remaining to be fixed in a testing cycle can be assessed, schedules can be adjusted, and/or goals can be reduced. For example, if a feature is left with too many high-priority defects, a decision can be made to move the ship date, to ship the system as is (which generally isn’t wise, unless a quick patch process is in place), or to go live without that specific feature, if that is feasible.

Success is measured based on achieving the goals we set out to accomplish relative to the expectations of our stakeholders and customers.

AST Program Tracking and Defect Prevention

In our book "Implementing Automated Software Testing (IAST)" we cover in one section the importance of valid requirements and their assessment; in another we discuss the precautions to take when deciding what to automate; and in yet another we discuss in detail the importance of peer reviews. Here we’ll provide additional ideas that aid in defect prevention efforts, including technical interchanges and walk-throughs; internal inspections; examination of constraints and associated risks; risk mitigation strategies; safeguarding AST processes and environments via configuration management; and defining and tracking schedules, costs, action items, and issues/defects.

Conduct Technical Interchanges and Walk-throughs

Peer reviews, technical interchanges, and walk-throughs with the customer and the internal AST team represent evaluation techniques that should take place throughout the AST effort. These techniques can be applied to all AST deliverables—test requirements, test cases, AST design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed artifact examination by a person or a group other than the author. These interchanges and walk-throughs are intended to detect defects, non-adherence to AST guidelines, test procedure issues, and other problems.

An example of a technical interchange meeting is an overview of test requirement documentation. When AST test requirements are defined in terms that are testable and correct, errors are prevented from entering the AST development pipeline that could eventually be reflected as defects in the deliverable. AST design component walk-throughs can be performed to ensure that the design is consistent with defined requirements - e.g., that it conforms to OO standards and the applicable design methodology - and that errors are minimized.

Technical reviews and inspections have proven to be among the most effective ways of preventing miscommunication and of detecting and removing defects early.

Conduct Internal Inspections

In addition to customer technical interchanges and walk-throughs, internal inspections of deliverable work products should take place, before anything is even presented to the customer, to support the detection and removal of defects and process/practice omissions or deficiencies early in the AST development and test cycle; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

Examine Constraints and Associated Risks

A careful examination of goals and constraints and associated risks should take place, leading to a systematic AST strategy and producing a predictable, higher-quality outcome and a high degree of success. Combining a careful examination of constraints together with defect detection technologies will yield the best results.

Any constraint and associated risk should be communicated to the customer and risk mitigation strategies developed as necessary.

Implement Risk Mitigation Strategies

Defined "light weight" processes allow for iterative, constant risk assessment and review without the dreaded overhead. If a risk is identified, appropriate mitigation strategies can be deployed. Require ongoing review of cost, schedules, processes, and implementation to ensure that potential problems do not go unnoticed until too late; instead, processes need to ensure that problems are addressed and corrected immediately. For example, how will you mitigate the risk if your "star" developer quits? There are numerous possible answers: Software development is a team effort and it is never a good practice to rely on one "star" developer. Hire qualified developers, so they can integrate as a team and each can be relied on in various ways based on their respective qualifications. One team member might have more experience than another, but neither should be irreplaceable, and the departure of one of them should not be detrimental to the project. Follow good hiring and software development practices (such as documenting and maintaining all AST-related artifacts) and put the right people on the project; we discuss the "how to" in our book "IAST." Additional risks could be missed deadlines or being over budget. Evaluate and determine risk mitigation techniques in case an identified risk comes to fruition.

Safeguard the Integrity of the AST Process and Environments

Experience shows that it is important to safeguard the integrity of the AST processes and environment. In IAST we discuss the importance of an isolated test environment and having it under configuration management. For example, you might want to test any new technology to be used as part of the AST effort in an isolated environment, validating that a tool performs to product specifications and marketing claims before it is used on any AUT or customer test environment. At one point we installed a tool on the Micron PC we used for daily activities, only to have it blue-screen. It turned out that the tool we wanted to test wasn’t compatible with the Micron PC; to solve the problem, we actually had to upgrade the PC’s BIOS. An isolated test environment for these types of evaluation activities is vital.

The automator should also verify that any upgrade to a technology still runs in the current environment. The previous version of the tool may have performed correctly, and a new upgrade may perform fine in other environments, but it may adversely affect the team’s particular environment. We once found that a tool upgrade was no longer compatible with our e-mail software package. It was a good thing we caught the issue, because installing the upgrade would have rendered the tool useless to us; we relied heavily on e-mail notification, for example, whenever a defect was generated.

Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process. For example, all AST automation framework components, script files, test case and test procedure documentation, schedules, cost tracking, and other related AST artifacts need to be under configuration management. Using a configuration management tool ensures that the latest and most accurate version control and records of AST artifacts and products are maintained. For example, we are using the open-source tool Subversion in order to maintain AST product integrity; we evaluate the best products available to allow for the most efficient controls on an ongoing basis.

Define, Communicate, and Track Schedules and Costs

It is not good enough to base a schedule on a marketing-department-defined deadline. Instead, schedule and task durations need to be determined based on past historical performance and associated best estimates gathered from all stakeholders. Additionally, any schedule dependencies and critical path elements need to be considered up front and incorporated into the schedule. Project schedules need to be defined, continuously tracked, and communicated.

To meet any schedule - for example, when the program is under a tight deadline - only the AST tasks that can be successfully delivered in time are included in the schedule iteration. As described in IAST, during AST Phase 1 test requirements are prioritized, which allows the most critical AST tasks to be completed first while less critical, lower-priority tasks are moved to later in the schedule. Once the requirements are prioritized, an initial schedule is presented to the customer for approval - and not before the system under test (SUT), the AST requirements, and the associated level of effort are understood.

During the technical interchanges and walk-throughs, schedules are evaluated and presented on an ongoing basis to allow for continuous communication and monitoring. Potential schedule risks should be communicated well in advance and risk mitigation strategies explored and implemented, as needed; any schedule slips should be communicated to the customer immediately and adjustments made accordingly.

By closely tracking schedules and other required AST resources, we can also ensure that a cost tracking and control process is followed. Inspections, walk-throughs, and other status reporting allow for closely monitored cost control and tracking. Tracking costs, schedules, and related resources in this way provides ongoing visibility into the project’s performance.

Track Actions, Issues, and Defects

A detailed procedure needs to be defined for tracking action items to completion. Templates should be used that describe all elements to be filled out for action item reports.

Additionally, a procedure needs to be in place that allows for tracking issues/defects to closure, known as a defect tracking lifecycle. See IAST for a sample defect tracking lifecycle used in the open-source defect tracking tool Bugzilla. Various defect tracking lifecycles exist; adapt one to your environment, tool, and project needs. Once defined, put measures in place to verify that the defect or action item lifecycle is adhered to.
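
To illustrate what "putting measures in place" might look like, here is a minimal Python sketch that checks defect state changes against a simplified lifecycle; the states and allowed transitions are illustrative assumptions, not Bugzilla's actual workflow.

    # Minimal defect-lifecycle guard; states and transitions are illustrative,
    # not Bugzilla's actual workflow.
    ALLOWED_TRANSITIONS = {
        "NEW":      {"ASSIGNED", "CLOSED"},
        "ASSIGNED": {"FIXED", "CLOSED"},
        "FIXED":    {"RETESTED"},
        "RETESTED": {"CLOSED", "REOPENED"},
        "REOPENED": {"ASSIGNED"},
        "CLOSED":   set(),
    }

    def validate_transition(current_state: str, new_state: str) -> bool:
        """Return True if moving a defect from current_state to new_state
        follows the defined lifecycle."""
        return new_state in ALLOWED_TRANSITIONS.get(current_state, set())

    # Example: flag a defect that someone tries to close without retesting.
    if not validate_transition("FIXED", "CLOSED"):
        print("Lifecycle violation: a fixed defect must be retested before closure.")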

If an issue or defect is uncovered, a root cause analysis should be conducted; see the Root Cause Analysis section later in this excerpt.

AST Metrics

Metrics can aid in improving your organization’s automated testing process and tracking its status. Much has been said and written about the need to use metrics carefully and not to let metrics drive an effort - that is, don’t measure for the sake of measuring. As with our recommended lightweight and adjustable process described in IAST, we recommend using these metrics to enhance the AST effort, not to drive it. Our software test teams have successfully used the metrics and techniques discussed here. As the opening quote implies, if you can measure something, then you have something you can quantify.

As time proceeds, software projects become more complex because of increased lines of code as a result of added features, bug fixes, etc. Also, tasks must be done in less time and with fewer people. Complexity over time has a tendency to decrease the test coverage and ultimately affect the quality of the product. Other factors involved over time are the overall cost of the product and the time in which to deliver the software. Carefully defined metrics can provide insight into the status of automated testing efforts.

When implemented properly, AST can help reverse the negative trend. As represented in Figure 1.1, automation efforts can provide a larger test coverage area and increase the overall quality of a product. The figure illustrates that the goal of automation is ultimately to reduce the time of testing and the cost of delivery, while increasing test coverage and quality. These benefits are typically realized over multiple test and project release cycles.

Figure 1.1 AST goal examples comparing current trend with automation implementation

Automated testing metrics can aid in making assessments as to whether coverage, progress and quality goals are being met. Before we discuss how these goals can be accomplished, we want to define metrics, automated testing metrics, and what makes a good automated test metric.

What is a metric? The basic definition of a metric is a standard of measurement. It can also be described as a system of related measures that facilitate the quantification of some particular characteristic. [2] For our purposes, a metric can be seen as a measure that can be used to display past and present performance and/or predict future performance.

Metrics categories: Most software testing metrics (including the ones presented here) fall into one of three categories:

  • Coverage: meaningful parameters for measuring test scope and success.
  • Progress: parameters that help identify test progress to be matched against success criteria. Progress metrics are collected iteratively over time. They can be used to graph the process itself (e.g., time to fix defects, time to test, etc.).
  • Quality: meaningful measures of testing product quality. Usability, performance, scalability, overall customer satisfaction, and defects reported are a few examples.

What are automated testing metrics? Automated testing metrics are metrics used to measure the performance (past, present, and future) of the implemented automated testing process and its related efforts and artifacts. Here we can also differentiate metrics related to unit test automation from those related to integration or system test automation. Automated testing metrics serve to enhance and complement general testing metrics - not replace them - by providing a measure of AST coverage, progress, and quality.

What makes a good automated testing metric? As with any metric, automated testing metrics should be tied to clearly defined goals for the automation effort. It serves no purpose to measure something for the sake of measuring. To be meaningful, a metric should relate to the performance of the effort.

Prior to defining the automated testing metrics, there are metrics-setting fundamentals you may want to review. Before measuring anything, set goals: what is it you are trying to accomplish? Goals are important; if you do not have them, what is it that you are measuring? It is also important to track and measure on an ongoing basis. Based on the metrics outcome, you can decide whether deadlines, feature lists, process strategies, etc., need to be adjusted. As a step toward goal setting, questions may need to be asked about the current state of affairs. Decide what questions to ask to determine whether or not you are tracking toward the defined goals. For example:

  • How many permutations of the test(s) selected do we run?
  • How much time does it take to run all the tests?
  • How is test coverage defined? Are we measuring test cases against requirements (generally during system testing), or are we measuring test cases against all possible paths taken through the units and components (generally used for unit testing)? In other words, are we looking at unit testing coverage, code coverage, or requirements coverage?
  • How much time does it take to do data analysis? Are we better off automating that analysis? What would be involved in generating the automated analysis?
  • How long does it take to build a scenario and required driver?
  • How often do we run the test(s) selected?
  • How many people do we require to run the test(s) selected?
  • How much system and lab time is required to run the test(s) selected?

In essence, a good automated testing metric has the following characteristics:

  • It is objective.
  • It is measurable.
  • It is meaningful.
  • Data for it is easily gathered.
  • It can help identify areas of test automation improvement.
  • It is simple.

A few more words about metrics being simple: Albert Einstein once said, "Make everything as simple as possible, but not simpler." When applying this wisdom to AST and related metrics collection, you will see that

  • Simplicity reduces errors.
  • Simplicity is more effective.
  • Simplicity is elegant.
  • Simplicity brings focus.

It is important to generate a metric that calculates the value of automation, especially if this is the first time an automated testing approach has been used for a project. IAST discusses ROI measurement in detail and provides various worksheets that can serve as a baseline for calculating AST ROI. For example, there we mention that the test team will need to measure the time spent on developing and executing test scripts against the results that the scripts produce. If needed, the test team could justify the number of hours required to develop and execute AST by citing the number of defects found through automation that would likely not have been revealed during a manual test effort. Specific details as to why the manual effort would not have found the defect can be provided; for example, the automated test may have used additional test data not previously included in the manual effort, or exercised additional scenarios and path coverage not previously touched manually. Another way of putting this: with manual testing you might have been able to test x test data combinations; with automated testing you are now able to test x + y combinations. Defects uncovered in the set of y combinations are defects that manual testing may never have uncovered. Here you can also show the increase in testing coverage for future software releases.

Another way to quantify or measure automation benefits is to show that a specific automated test could hardly have been accomplished manually. For example, say that during stress testing 1,000 virtual users execute a specific functionality and the system crashes. It would be very difficult to discover this problem manually - whether by using 1,000 test engineers or by extrapolation, which is still very commonly used today.

AST can also minimize the test effort, for example, by the use of an automated test tool for data entry or record setup. Consider the test effort associated with the system requirement that reads, "The system shall allow the addition of 10,000 new accounts." Imagine having to manually enter 10,000 accounts into a system in order to test this requirement! An automated test script can easily support this requirement by reading account information from a file through the use of a looping construct. The data file can easily be generated using a data generator. The effort to verify this system requirement using test automation requires far fewer man-hours than performing such a test using manual test methods. [3] The ROI metric that applies in this case measures the time required to manually set up the needed records versus the time required to set up the records using an automated tool.
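
As a hedged sketch of this data-driven approach (the add_account function and accounts.csv file are hypothetical stand-ins for whatever interface and data file the system under test actually uses):

    import csv

    def add_account(account_id: str, name: str) -> None:
        """Hypothetical stand-in for the AUT's account-creation step
        (an API call, database insert, or GUI automation action)."""
        pass  # replace with the real call for your system under test

    # Data-driven loop: read generated account records from a file and add
    # each one, instead of keying in 10,000 accounts by hand.
    with open("accounts.csv", newline="") as f:
        for row in csv.DictReader(f):
            add_account(row["account_id"], row["name"])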

What follows are additional metrics that can be used to help track progress of the AST program. Here we can differentiate between test case and progress metrics and defect and defect removal metrics.

Percent Automatable or Automation Index

As part of an AST effort, the project may base its automation on existing manual test procedures, start a new automation effort from scratch, do some combination of the two, or simply maintain an existing AST effort. Whatever the case, a percent automatable metric, or automation index, can be determined.

Percent automatable can be defined as the percentage of a given set of test cases that is automatable. It can be represented by the following equation:

PA (%) = ATC / TC = (No. of test cases automatable) / (Total no. of test cases)

PA = Percent automatable
ATC = Number of test cases automatable
TC = Total number of test cases
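
As a minimal sketch, the calculation might look like this in Python (the counts are made-up example values):

    def percent_automatable(automatable_test_cases: int, total_test_cases: int) -> float:
        """PA (%) = ATC / TC, expressed as a percentage."""
        return 100.0 * automatable_test_cases / total_test_cases

    # Example: 360 of 450 test cases judged automatable -> automation index of 80%.
    print(f"{percent_automatable(360, 450):.1f}%")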

When evaluating test cases to be developed, what should be considered automatable and what should be considered not automatable? Given enough ingenuity and resources, one can argue that almost anything can be automated. So where do you draw the line? Something that could be considered "not automatable," for example, is an application area that is still under design, not very stable, and mostly in flux. In cases such as this, you should evaluate whether it makes sense to automate. See IAST for a detailed discussion of how to determine what to automate. There we discuss evaluating, for the given set of test cases, which ones would provide the biggest return on investment if automated. Just because a test is automatable doesn’t necessarily mean it should be automated.

Prioritize your automation effort based on the outcome of this "what to automate" evaluation. Figure 1.2 shows how this metric can be used to summarize, for example, the percent automatable of various projects or components within a project and to set the automation goal. Once we know the percent automatable, we can use it as a baseline for measuring AST implementation progress.

Figure 1.2 Example of percent automatable (automation index) per project (or component)

Automation Progress

Automation progress refers to the number of tests that have been automated, expressed as a percentage of all automatable test cases. Basically, how well are you doing against the goal of automated testing? The ultimate goal is to automate 100% of the "automatable" test cases. This can be accomplished in phases, so it is important to set goals that state the deadlines by which specific percentages of the tests should be automated. It is useful to track this metric during the various stages of automated testing development.

AP (%) = AA / ATC = (No. of test cases automated) / (No. of test cases automatable)

AP = Automation progress
AA = Number of test cases automated
ATC = Number of test cases automatable
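
A small sketch of tracking automation progress week by week; the weekly counts below are made-up illustration data, not values from the book:

    # Automation progress per week: AP (%) = AA / ATC.
    automatable = 360                                            # ATC
    automated_by_week = {1: 40, 2: 95, 3: 170, 4: 260, 5: 330}   # AA at end of each week

    for week, automated in automated_by_week.items():
        ap = 100.0 * automated / automatable
        print(f"Week {week}: {ap:.0f}% of automatable test cases automated")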

The automation progress metric is typically tracked over time; in Figure 1.3, the unit of time is weeks.

Figure 1.3 Test cases automated over time (weeks)

Test Progress

A common metric closely associated with the progress of automation, yet not exclusive to automation, is test progress. Test progress can simply be defined as the number of test cases (manual and automated) executed over time.

TP = TC / T = (No. of test cases executed) / (Total no. of test cases)

TP = Test progress
TC = Number of test cases executed
T = Total number of test cases

This metric tracks test progress and can be used to show how testing is tracking against the overall project plan.

More detailed analysis is needed to determine test pass/fail, which can be captured in a more refined metric; i.e., we need to determine not only how many tests have been run over time and how many more there are to be run, but also how many of those test executions actually pass consistently without failure so that the test can actually be considered complete. In the test progress metric we can replace No. of test cases executed with No. of test cases completed, counting only those test cases that actually consistently pass.
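
A sketch of this refined version of the metric, counting a test case as completed only if every recorded execution of it passed; the result history is illustrative:

    # Refined test progress: count only test cases whose executions
    # consistently pass. The result history below is illustrative.
    results = {
        "TC-001": ["pass", "pass", "pass"],
        "TC-002": ["fail", "pass", "pass"],  # passed eventually, but not consistently
        "TC-003": ["pass"],
    }
    total_test_cases = 10

    completed = sum(1 for runs in results.values()
                    if runs and all(r == "pass" for r in runs))
    print(f"Test progress: {completed}/{total_test_cases} "
          f"({100.0 * completed / total_test_cases:.0f}%) completed")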

Percent of Automated Test Coverage

Another AST metric we want to consider is percent of automated test coverage. This metric determines what percentage of test coverage the automated testing is actually achieving. Various degrees of test coverage can be achieved, depending on the project and its defined goals. Also, depending on the types of testing performed, unit test automation coverage could be measured against all identified units, functional system test coverage could be measured against all requirements, and so forth. Together with manual test coverage, this metric measures the completeness of the test coverage and can measure how much automation is being executed relative to the total number of tests. However, it does not say anything about the quality of the automation. For example, 2,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but they do not equate to a larger percentage of test coverage. The test data techniques discussed in IAST need to be used to effectively derive the number of test data elements required to test the same or similar data path. Percent of automated test coverage does not indicate anything about the effectiveness of the testing taking place; it measures only its extent.

PTC (%) = AC / C = (Automation coverage) / (Total coverage)

PTC = Percent of automated test coverage
AC = Automation coverage
C = Total coverage (i.e., requirements, units/components, or code coverage)
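
A minimal sketch measuring automated coverage against requirements, one of the coverage bases named above; the requirement identifiers are invented for illustration:

    # PTC (%) = AC / C, here using requirements as the coverage basis.
    requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}
    covered_by_automated_tests = {"REQ-1", "REQ-2", "REQ-4"}

    ptc = 100.0 * len(covered_by_automated_tests & requirements) / len(requirements)
    print(f"Automated test coverage: {ptc:.0f}% of requirements")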

There is a wealth of material available regarding the sizing or coverage of systems. A useful resource is Stephen H. Kan’s book Metrics and Models in Software Quality Engineering. [4]

Figure 1.4 provides an example of test coverage for Project A versus Project B over various iterations. The dip in coverage for Project A might reveal that new functionality was delivered that hadn’t yet been tested, so that no coverage was provided for that area.

Figure 1.4 Test coverage per project over various iterations

Defect Density

Measuring defects is a discipline to be implemented regardless of whether the testing effort is automated or not. Defect density is another well-known metric that can be used for determining an area to automate. If a component requires a lot of retesting because the defect density is very high, it might lend itself perfectly to automated testing. Defect density is a measure of the total known defects divided by the size of the software entity being measured. For example, if there is a high defect density in a specific functionality, it is important to conduct a causal analysis. Is this functionality very complex, and therefore is it to be expected that the defect density would be high? Is there a problem with the design or implementation of the functionality? Were the wrong (or not enough) resources assigned to the functionality, because an inaccurate risk had been assigned to it and the complexity was not understood? It also could be inferred that the developer responsible for this specific functionality needs more training.

DD = D / SS = (No. of known defects) / (Size of software entity)

DD = Defect density
D = Number of known defects
SS = Size of software entity

We can’t necessarily blame a high defect density on large software component size; while the general assumption is that a high defect density is more justifiable in a large component than in a small one, the small component could be much more complex than the large one. AST complexity is an important consideration when evaluating defect density.

Additionally, when evaluating defect density, the priority of the defect should be considered. For example, one application requirement may have as many as 50 low-priority defects and still pass because the acceptance criteria have been satisfied. Still another requirement may have only one open defect, but that defect prevents the acceptance criteria from being satisfied because it is a high priority. Higher-priority defects are generally weighted more heavily as part of this metric.
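
One way to reflect this priority weighting is to weight each defect before dividing by the size of the entity; the weights, counts, and size unit below are illustrative assumptions, not values prescribed by the book:

    # Priority-weighted defect density; weights and counts are illustrative.
    PRIORITY_WEIGHTS = {"high": 5, "medium": 3, "low": 1}

    def weighted_defect_density(defects_by_priority: dict, size_kloc: float) -> float:
        """DD = weighted defect count / size of the software entity (here in KLOC)."""
        weighted = sum(PRIORITY_WEIGHTS[p] * n for p, n in defects_by_priority.items())
        return weighted / size_kloc

    # (2*5 + 4*3 + 10*1) / 12.5 = 2.56 weighted defects per KLOC
    print(weighted_defect_density({"high": 2, "medium": 4, "low": 10}, size_kloc=12.5))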

Defect Trend Analysis

Another useful testing metric in general is defect trend analysis. Defect trend analysis is calculated as:

DTA = D / TPE = (No. of known defects) / (No. of test procedures executed)

DTA = Defect trend analysis
D = Number of known defects
TPE = Number of test procedures executed over time

Defect trend analysis can help determine the trend of defects found over time. Is the trend improving as the testing phase winds down, or is it static or even worsening? During the AST process, we have found defect trend analysis to be one of the more useful metrics for showing the health of a project. One approach to showing the trend is to plot the total number of defects over time, as shown in Figure 1.5. [5]
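
A brief sketch of plotting the total number of known defects over weeks in testing, in the spirit of Figure 1.5; the weekly numbers are made up, and matplotlib is assumed to be available:

    import matplotlib.pyplot as plt

    # Illustrative data: total known defects at the end of each week in testing.
    weeks = [1, 2, 3, 4, 5, 6, 7, 8]
    total_defects = [12, 30, 55, 72, 80, 84, 86, 87]

    plt.plot(weeks, total_defects, marker="o")
    plt.xlabel("Weeks in testing")
    plt.ylabel("Total number of defects")
    plt.title("Defect trend analysis")
    plt.show()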

Figure 1.5 Defect trend analysis: total number of defects over time (here after weeks in testing)

Effective defect tracking analysis can present a clear view of the status of testing throughout the project.

Defect Removal Efficiency

One of the more popular metrics is defect removal efficiency (DRE); this metric is not specific to automation, but it is very useful when used in conjunction with automation efforts. DRE is used to determine the effectiveness of defect removal efforts. It is also an indirect measurement of product quality. The value of the DRE is calculated as a percentage. The higher the percentage, the greater the potential positive impact on the quality of the product. This is because it represents the timely identification and removal of defects at any particular phase.

DRE (%) = DT / (DT + DA) = (No. of defects found during testing) / (No. of defects found during testing + No. of defects found after delivery)

DRE = Defect removal efficiency
DT = Number of defects found during testing
DA = Number of defects found after delivery

Figure 1.6 Defect removal efficiency over the testing lifecycle effort

The highest attainable value of DRE is 1, which equates to 100%. In practice, we have found that an efficiency rating of 100% is not likely. According to Capers Jones, world-class organizations have a DRE greater than 95%. [6] DRE should be measured during the different development phases. For example, low DRE during analysis and design may indicate that more time should be spent improving the way formal technical reviews are conducted.
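
A short sketch of computing DRE per development phase, as suggested above; the phase names and defect counts are illustrative only:

    def dre(found_in_phase: int, found_later: int) -> float:
        """DRE (%) = DT / (DT + DA)."""
        return 100.0 * found_in_phase / (found_in_phase + found_later)

    # Illustrative counts: defects removed in each phase vs. those that escaped it.
    phases = {
        "requirements review": (18, 4),
        "design review": (25, 6),
        "testing": (120, 5),
    }
    for phase, (found, escaped) in phases.items():
        print(f"{phase}: DRE = {dre(found, escaped):.1f}%")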

This calculation can be extended for released products as a measure of the number of defects in the product that were not caught during the product development or testing phases.

Automated Software Testing ROI

As we have discussed, metrics help define the progress, health, and quality of an automated testing effort. Without such metrics it would be practically impossible to quantify, explain with certainty, or demonstrate quality. Along with quality, metrics also help demonstrate ROI, which is covered in detail in IAST. ROI measurement, like most metrics, is an ongoing exercise and needs to be closely maintained. Consider ROI and the various testing metrics when investigating the quality and value of AST. As shown in Figure 1.7, metrics can assist in presenting the ROI for your effort. Be sure to include all facets in your ROI metric as described in IAST.

Figure 1.7 AST ROI example (cumulated costs over time)

Other Software Testing Metrics

Along with the metrics mentioned in the previous sections, there are a few more common test metrics useful for the overall testing program. Table 1.1 provides a summary and high-level description of some of these additional useful metrics.

Table 1.1 Additional Common and Useful Software Test Metrics [7]

  • Error discovery rate (Progress): Number of total defects found / Number of test procedures executed. Used to analyze and support a rational product release decision.
  • Defect aging (Progress): The date a defect was opened versus the date the defect was fixed. Indicates the turnaround time of a defect.
  • Defect fix retest (Progress): The date a defect was fixed and released in a new build versus the date the defect was retested. Indicates whether the testing team is retesting fixes fast enough to provide an accurate progress metric.
  • Current quality ratio (Quality): Number of test procedures successfully executed (without defects) versus the total number of test procedures. Indicates the amount of functionality that has been successfully demonstrated.
  • Problem reports by priority (Quality): The number of software problem reports, broken down by priority. Counts the number of software problems reported, listed by priority.

Root Cause Analysis

It is not good enough to conduct lessons learned after the AST program has been implemented. Instead, as soon as a problem is uncovered - regardless of the phase or the type of issue, whether it’s a schedule, budget, or software defect problem - a root cause analysis should be conducted so that corrective actions and adjustments can be made. Root cause analysis should not focus on determining blame; it is a neutral investigation that determines the cause of the problem. For example, a root cause template could be developed in which stakeholders fill in their respective parts if a defect is uncovered in production. The template could list neutral questions, such as "What is the exact problem and its effect?" "How was it uncovered and who reported it?" "When was it reported?" "Who is/was affected by this problem?" and "What is the priority of the problem?" Once all this information has been gathered, stakeholders need to be involved in determining the root cause, deciding how to resolve the issue (the corrective action to be taken) and the priority of fixing and retesting it, and agreeing on how to prevent this sort of issue from happening in the future.
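
As a rough sketch, such a template could be captured as a simple structure that each stakeholder fills in; the field names mirror the neutral questions above and are assumptions, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class RootCauseRecord:
        """Illustrative root cause analysis template; extend to fit the project."""
        problem_and_effect: str          # What is the exact problem and its effect?
        how_uncovered_and_reporter: str  # How was it uncovered and who reported it?
        date_reported: str               # When was it reported?
        who_is_affected: str             # Who is/was affected by this problem?
        priority: str                    # What is the priority of the problem?
        root_cause: str = ""             # Filled in after stakeholder discussion
        corrective_action: str = ""      # How the issue will be resolved and retested
        prevention: str = ""             # How to prevent this sort of issue in the future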

Defects will be uncovered despite the best-laid plans and implementations, and corrective actions and adjustments are always needed - in other words, expect the unexpected, but have a plan for addressing it. Effective AST processes should allow for and support the implementation of necessary corrective actions. They should allow for strategic course correction, schedule adjustments, and deviation from AST phases to adjust to specific project needs, supporting continuous process improvement and an ultimately successful delivery.

Root cause analysis is a popular area that has been researched and written about a great deal. What we present here is our approach to implementing it. For more information on root cause analysis and a sample template, review the SixSigma discussion on "Final Solution via Root Cause Analysis (with a Template)." [8]

Summary

To ensure AST program success, the AST goals need to be not only defined but also constantly tracked. Defect prevention, AST and other software testing metrics, and root cause analysis are important steps that help prevent, detect, and resolve process issues and SUT defects. With the help of these steps, the health, quality, and progress of an AST effort can be tracked. These activities can also be used to evaluate past performance, current status, and future trends. Good metrics are objective, measurable, meaningful, and simple, and they have easily obtainable data. Traditional software testing metrics used in software quality engineering can be applied and adapted to AST programs. Some metrics specific to automated testing are

  • Percent automatable
  • Automation progress
  • Percent of automated testing coverage
  • Software automation ROI (see IAST for more details)
  • Automated test effectiveness (related to ROI)

Evaluate the metrics outcome and adjust accordingly.

Track budgets, schedules, and all AST program-related activities to ensure that your plans will be implemented successfully. Take advantage of peer reviews and inspections, activities that have been proven useful in defect prevention.

As covered in IAST, in the test case requirements-gathering phase of your automation effort, evaluate whether or not it makes sense to automate. Given the set of automatable test cases, determine which ones would provide the biggest ROI. Consider that just because a test is automatable doesn’t necessarily mean it should be automated. Using this strategy for determining what to automate, you are well on your way to AST success.

References:

  1. E. Dustin, "The People Problem," www.stpmag.com/issues/stp-2006-04.pdf; "The Schedule Problem," www.stpmag.com/issues/stp-2006-05.pdf; and "The Processes and Budget Problem," www.stpmag.com/issues/stp-2006-06.pdf.
  2. www.thefreedictionary.com/metric.
  3. Adapted from Dustin et al., Automated Software Testing.
  4. Stephen H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed. (Addison-Wesley, 2003).
  5. Adapted from www.teknologika.com/blog/SoftwareDevelopmentMetricsDefectTracking.aspx.
  6. C. Jones, keynote address, Fifth International Conference on Software Quality, Austin, TX, 1995.
  7. Adapted from Dustin et al., Automated Software Testing.
  8. www.isixsigma.com/library/content/c050516a.asp.

Implementing Automated Software Testing - Continuously Track Progress and Adjust Accordingly

Source : Week End Testers

Implementing Automated Software Testing - Continuously Track Progress and Adjust Accordingly

Thom Garrett, Innovative Defense Technologies, www.IDTus.com

This is an excerpt from the book "Implementing Automated Software Testing," by Elfriede Dustin, Thom Garrett, Bernie Gauf, Copyright Addison Wesley, 2009

"When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." - Lord Kelvin

Most of us have worked on at least one project where the best-laid plans went awry and at times there wasn’t just one reason we could point to that caused the failure. People, schedules, processes, and budgets can all contribute. [1] Based on such experiences from the past, we have learned that as part of a successful automated testing (AST) program it is important that the right people with the applicable skills are hired, goals and strategies be defined and then implemented, and that steps be put in place to continuously track, measure, and adjust, as needed, against these goals and strategies. Here we’ll discuss the importance of tracking the AST program, to include various defect prevention techniques, such as peer reviews and other interchanges. We’ll then focus on the types of AST metrics to gather so that we can measure progress, gauge the effectiveness of our AST efforts, and help keep them keep on track and/or make adjustments, if necessary. Finally, we will discuss the importance of a root/cause analysis if a defect or issue is encountered.

Based on the outcome of these various efforts, adjustments can be made where necessary; e.g., the defects remaining to be fixed in a testing cycle can be assessed, schedules can be adjusted, and/or goals can be reduced. For example, if a feature is left with too many high-priority defects, a decision can be made to move the ship date, to ship the system as is (which generally isn’t wise, unless a quick patch process is in place), or to go live without that specific feature, if that is feasible.

Success is measured based on achieving the goals we set out to accomplish relative to the expectations of our stakeholders and customers.

AST Program Tracking and Defect Prevention

In our book "Implementing Automated Software Testing (IAST)" in we cover the importance of valid requirements and their assessment; in another we discuss the precautions to take when deciding what to automate; and yet another section we discussed in detail the importance of peer reviews. Here we’ll provide additional ideas that aid in defect prevention efforts, including technical interchanges and walk-throughs; internal inspections; examination of constraints and associated risks; risk mitigation strategies; safeguarding AST processes and environments via configuration management; and defining and tracking schedules, costs, action items, and issues/defects.

Conduct Technical Interchanges and Walk-throughs

Peer reviews, technical interchanges, and walk-throughs with the customer and the internal AST team represent evaluation techniques that should take place throughout the AST effort. These techniques can be applied to all AST deliverables—test requirements, test cases, AST design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed artifact examination by a person or a group other than the author. These interchanges and walk-throughs are intended to detect defects, non-adherence to AST guidelines, test procedure issues, and other problems.

An example of a technical interchange meeting is an overview of test requirement documentation. When AST test requirements are defined in terms that are testable and correct, errors are prevented from entering the AST development pipeline that could eventually be reflected as defects in the deliverable. AST design component walk-throughs can be performed to ensure that the design is consistent with defined requirements - e.g., that it conforms to OA standards and applicable design methodology - and that errors are minimized.

Technical reviews and inspections have proven to be the most effective forms of preventing miscommunication, allowing for defect detection and removal.

Conduct Internal Inspections

In addition to customer technical interchanges and walk-throughs, internal inspections of deliverable work products should take place, before anything is even presented to the customer, to support the detection and removal of defects and process/practice omissions or deficiencies early in the AST development and test cycle; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

Examine Constraints and Associated Risks

A careful examination of goals and constraints and associated risks should take place, leading to a systematic AST strategy and producing a predictable, higher-quality outcome and a high degree of success. Combining a careful examination of constraints together with defect detection technologies will yield the best results.

Any constraint and associated risk should be communicated to the customer and risk mitigation strategies developed as necessary.

Implement Risk Mitigation Strategies

Defined "light weight" processes allow for iterative, constant risk assessment and review without the dreaded overhead. If a risk is identified, appropriate mitigation strategies can be deployed. Require ongoing review of cost, schedules, processes, and implementation to ensure that potential problems do not go unnoticed until too late; instead, processes need to ensure that problems are addressed and corrected immediately. For example, how will you mitigate the risk if your "star" developer quits? There are numerous possible answers: Software development is a team effort and it is never a good practice to rely on one "star" developer. Hire qualified developers, so they can integrate as a team and each can be relied on in various ways based on their respective qualifications. One team member might have more experience than another, but neither should be irreplaceable, and the departure of one of them should not be detrimental to the project. Follow good hiring and software development practices (such as documenting and maintaining all AST-related artifacts) and put the right people on the project; we discuss the "how to" in our book "IAST." Additional risks could be missed deadlines or being over budget. Evaluate and determine risk mitigation techniques in case an identified risk comes to fruition.

Safeguard the Integrity of the AST Process and Environments

Experience shows that it is important to safeguard the integrity of the AST processes and environment. In IAST we discuss the importance of an isolated test environment and having it under configuration management. For example, you might want to test any new technology to be used as part of the AST effort in an isolated environment and validate that a tool, for example, performs to product specifications and marketing claims before it is used on any AUT or customer test environment. At one point we installed a tool on our Micron PC used for daily activities, only to have it blue-screen. It turned out that the tool we wanted to test wasn’t compatible with the Micron PC. To solve the problem, we actually had to upgrade the PC’s BIOS. An isolated test environment for these types of evaluation activities is vital.

The automator should also verify that any upgrades to a technology still run in the current environment. The previous version of the tool may have performed correctly and a new upgrade may perform fine in other environments, but the upgrade may adversely affect the team’s particular environment. We had an experience when a new tool upgrade wasn’t compatible with our e-mail software package any longer. It was a good thing we caught this issue, because otherwise an upgrade install would have rendered the tool useless, as we heavily relied on e-mail notification, for example, if a defect was generated.

Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process. For example, all AST automation framework components, script files, test case and test procedure documentation, schedules, cost tracking, and other related AST artifacts need to be under configuration management. Using a configuration management tool ensures that the latest and most accurate version control and records of AST artifacts and products are maintained. For example, we are using the open-source tool Subversion in order to maintain AST product integrity; we evaluate the best products available to allow for the most efficient controls on an ongoing basis.

Define, Communicate, and Track Schedules and Costs

It is not good enough to base a schedule on a marketing-department-defined deadline. Instead, schedule and task durations need to be determined based on past historical performance and associated best estimates gathered from all stakeholders. Additionally, any schedule dependencies and critical path elements need to be considered up front and incorporated into the schedule. Project schedules need to be defined, continuously tracked, and communicated.

In order to meet any schedule—for example, if the program is under a tight deadline - only the AST tasks that can be successfully delivered in time are included in the schedule iteration. As described in IAST, during the AST Phase 1, test requirements are prioritized, which allows for prioritizing the most critical AST tasks to be completed as opposed to the less critical and lower-priority tasks, which can then be moved to later in the schedule, accordingly. Once the requirements are prioritized, an initial schedule is presented to the customer for approval and not before the System Under Test (SUT), AST requirements and associated level of effort are understood.

During the technical interchanges and walk-throughs, schedules are evaluated and presented on an ongoing basis to allow for continuous communication and monitoring. Potential schedule risks should be communicated well in advance and risk mitigation strategies explored and implemented, as needed; any schedule slips should be communicated to the customer immediately and adjustments made accordingly.

By closely tracking schedules and other required AST resources, we can also ensure that a cost tracking and control process is followed. Inspections, walk-throughs, and other status reporting allow for closely monitored cost control and tracking. Tracking cost and schedules and so forth allows for tracking of the project’s performance.

Track Actions, Issues, and Defects

A detailed procedure needs to be defined for tracking action items to completion. Templates should be used that describe all elements to be filled out for action item reports.

Additionally, a procedure needs to be in place that allows for tracking issues/defects to closure, known as a defect tracking lifecycle. See IAST for a sample defect tracking lifecycle used in the open-source defect tracking tool Bugzilla. Various defect tracking lifecycles exist; adapt one to your environment, tool, and project needs. Once defined, put measures in place to verify that the defect or action item lifecycle is adhered to.

If an issue or defect is uncovered, a root cause analysis should be conducted. See that section later on for more on root cause analysis.

AST Metrics

Metrics can aid in improving your organization’s automated testing process and tracking its status. Much has been said and written about the need for using metrics carefully and to not let metrics drive an effort, i.e. don’t measure for the sake of measuring. As with our recommended lightweight and adjustable process described in IAST, we recommend to use these metrics as an enhancement to the AST effort not to drive the AST effort. Our software test teams have successfully used the metrics and techniques discussed here. As the beginning quote implies, if you can measure something, then you have something you can quantify.

As time proceeds, software projects become more complex because of increased lines of code as a result of added features, bug fixes, etc. Also, tasks must be done in less time and with fewer people. Complexity over time has a tendency to decrease the test coverage and ultimately affect the quality of the product. Other factors involved over time are the overall cost of the product and the time in which to deliver the software. Carefully defined metrics can provide insight into the status of automated testing efforts.

When implemented properly, AST can help reverse the negative trend. As represented in Figure 1.1, automation efforts can provide a larger test coverage area and increase the overall quality of a product. The figure illustrates that the goal of automation is ultimately to reduce the time of testing and the cost of delivery, while increasing test coverage and quality. These benefits are typically realized over multiple test and project release cycles.

Figure 1.1 AST goal examples comparing current trend with automation implementation

Automated testing metrics can aid in making assessments as to whether coverage, progress and quality goals are being met. Before we discuss how these goals can be accomplished, we want to define metrics, automated testing metrics, and what makes a good automated test metric.

What is a metric? The basic definition of a metric is a standard of measurement. It can also be described as a system of related measures that facilitate the quantification of some particular characteristic. [2] For our purposes, a metric can be seen as a measure that can be used to display past and present performance and/or predict future performance.

Metrics categories: Most software testing metrics (including the ones presented here) fall into one of three categories:

  • Coverage: meaningful parameters for measuring test scope and success.
  • Progress: parameters that help identify test progress to be matched against success criteria. Progress metrics are collected iteratively over time. They can be used to graph the process itself (e.g., time to fix defects, time to test, etc.).
  • Quality: meaningful measures of testing product quality. Usability, performance, scalability, overall customer satisfaction, and defects reported are a few examples.

What are automated testing metrics? Automated testing metrics are metrics used to measure the performance (past, present, and future) of the implemented automated testing process and related efforts and artifacts. Here we can also differentiate metrics related to unit test automation versus integration or system test automation. Automated testing metrics serve to enhance and complement general testing metrics, providing a measure of the AST coverage, progress, and quality, not replace them.

What makes a good automated testing metric? As with any metrics, automated testing metrics should have clearly defined goals for the automation effort. It serves no purpose to measure something for the sake of measuring. To be meaningful, a metric should relate to the performance of the effort.

Prior to defining the automated testing metrics, there are metrics-setting fundamentals you may want to review. Before measuring anything, set goals. What is it you are trying to accomplish? Goals are important; if you do not have them, what is it that you are measuring? It is also important to track and measure on an ongoing basis. Based on the metrics outcome, you can decide whether changes to deadlines, feature lists, process strategies, etc., need to be adjusted accordingly. As a step toward goal setting, questions may need to be asked about the current state of affairs. Decide what questions to ask to determine whether or not you are tracking toward the defined goals. For example:

  • How many permutations of the test(s) selected do we run?
  • How much time does it take to run all the tests?
  • How is test coverage defined? Are we measuring test cases against requirements (generally during system testing), or are we measuring test cases against all possible paths taken through the units and components (generally used for unit testing)? In other words, are we looking at unit testing coverage, code coverage, or requirements coverage?
  • How much time does it take to do data analysis? Are we better off automating that analysis? What would be involved in generating the automated analysis?
  • How long does it take to build a scenario and required driver?
  • How often do we run the test(s) selected?
  • How many people do we require to run the test(s) selected?
  • How much system and lab time is required to run the test(s) selected?

In essence, a good automated testing metric has the following characteristics:

  • It is objective.
  • It is measurable.
  • It is meaningful.
  • Data for it is easily gathered.
  • It can help identify areas of test automation improvement.
  • It is simple.

A few more words about metrics being simple: Albert Einstein once said, "Make everything as simple as possible, but not simpler." When applying this wisdom to AST and related metrics collection, you will see that

  • Simplicity reduces errors.
  • Simplicity is more effective.
  • Simplicity is elegant.
  • Simplicity brings focus.

It is important to generate a metric that calculates the value of automation, especially if this is the first time an automated testing approach has been used for a project. IAST discusses ROI measurement in detail and provides various worksheets that can serve as a baseline for calculating AST ROI. For example, there we mention that the test team will need to measure the time spent on developing and executing test scripts against the results that the scripts produce. If needed, the test team could justify the number of hours required to develop and execute AST by providing the number of defects found using this automation that would likely not have been revealed during a manual test effort. Specific details as to why the manual effort would not have found the defect can be provided; some possible reasons are that the automated test used additional test data not previously included in the manual effort, or the automated test used additional scenarios and path coverage previously not touched manually. Another way of putting this is that, for example, with manual testing you might have been able to test x number of test data combinations; with automated testing you are now able to test x + y test data combinations. Defects that were uncovered in the set of y combinations are the defects that manual testing may have never uncovered. Here you can also show the increase in testing coverage for future software releases.

Another way to quantify or measure automation benefits is to show that a specific automated test could hardly be accomplished manually. For example, say that during stress testing 1,000 virtual users execute a specific piece of functionality and the system crashes. It would be very difficult to discover this problem manually: using 1,000 test engineers is impractical, and extrapolating from a smaller manual test, although still commonly done today, might not reveal the failure.

AST can also minimize the test effort, for example, by the use of an automated test tool for data entry or record setup. Consider the test effort associated with the system requirement that reads, "The system shall allow the addition of 10,000 new accounts." Imagine having to manually enter 10,000 accounts into a system in order to test this requirement! An automated test script can easily support this requirement by reading account information from a file through the use of a looping construct. The data file can easily be generated using a data generator. The effort to verify this system requirement using test automation requires far fewer man-hours than performing such a test using manual test methods. [3] The ROI metric that applies in this case measures the time required to manually set up the needed records versus the time required to set up the records using an automated tool.
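To make this concrete, here is a minimal VuGen-style sketch (in C, the language of LoadRunner scripts) of such a looping construct. The parameter name "NewAccount", the transaction name, and the placeholder output step are assumptions for illustration; the actual "add account" request would come from your recorded script.

Action()
{
    int i;

    /* "NewAccount" is assumed to be a file-type parameter pointing at the
       generated data file of 10,000 account records.                       */
    for (i = 0; i < 10000; i++) {
        lr_start_transaction("add_account");

        /* ... the recorded "add account" request for the SUT goes here,
           using {NewAccount} wherever account data is entered ...          */
        lr_output_message("Adding account: %s", lr_eval_string("{NewAccount}"));

        lr_end_transaction("add_account", LR_AUTO);

        lr_advance_param("NewAccount");  /* move to the next record in the file */
    }

    return 0;
}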

What follows are additional metrics that can be used to help track progress of the AST program. Here we can differentiate between test case and progress metrics and defect and defect removal metrics.

Percent Automatable or Automation Index

As part of an AST effort, the project may be basing its automation on existing manual test procedures, starting a new automation effort from scratch, doing some combination of the two, or simply maintaining an existing AST effort. Whatever the case, a percent automatable metric, or automation index, can be determined.

Percent automatable can be defined as the percentage of a set of given test cases that is automatable. This could be represented by the following equation:

PA (%) = ATC / TC = (No. of test cases automatable / Total no. of test cases)

PA = Percent automatable
ATC = Number of test cases automatable
TC = Total number of test cases
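As an illustration (the numbers are assumed): if 360 of a project's 450 test cases are judged automatable, PA = 360 / 450 = 80%, which becomes that project's automation index.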

When evaluating test cases to be developed, what should be considered automatable and what should be considered not automatable? Given enough ingenuity and resources, one can argue that almost anything can be automated. So where do you draw the line? Something that can be considered "not automatable," for example, could be an application area that is still under design, not very stable, and mostly in flux. In cases such as this, you should evaluate whether it makes sense to automate. See IAST for a detailed discussion of how to determine what to automate; there we discuss evaluating, given the set of test cases, which ones would provide the biggest return on investment if automated. Just because a test is automatable doesn't necessarily mean it should be automated.

Prioritize your automation effort based on the outcome of this "what to automate" evaluation. Figure 1.2 shows how this metric can be used to summarize, for example, the percent automatable of various projects or components within a project and to set the automation goal. Once we know the percent automatable, we can use it as a baseline for measuring AST implementation progress.

Figure 1.2 Example of percent automatable (automation index) per project (or component)

Automation Progress

Automation progress refers to the number of tests that have been automated as a percentage of all automatable test cases. Basically, how well are you doing against the goal of automated testing? The ultimate goal is to automate 100% of the "automatable" test cases. This can be accomplished in phases, so it is important to set a goal that states the deadlines for when a specific percentage of the ASTs should be automated. It is useful to track this metric during the various stages of automated testing development.

AP (%) = AA / ATC = (No. of test cases automated / No. of test cases automatable)

AP = Automation progress
AA = Number of test cases automated
ATC = Number of test cases automatable
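Continuing the assumed example above: if 120 of the 360 automatable test cases have been automated so far, AP = 120 / 360 = 33%, measured against the goal of eventually automating all 360.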

The automation progress metric is a metric typically tracked over time. In the case of Figure 1.3, the time is weeks.

Figure 1.3 Test cases automated over time (weeks)

Test Progress

A common metric closely associated with the progress of automation, yet not exclusive to automation, is test progress. Test progress can simply be defined as the number of test cases (manual and automated) executed over time.

TP = TC / T = (No. of test cases executed / Total no. of test cases)

TP = Test progress
TC = Number of test cases executed
T = Total number of test cases

This metric tracks test progress and can be used to show how testing is tracking against the overall project plan.

More detailed analysis is needed to determine test pass/fail, which can be captured in a more refined metric; i.e., we need to determine not only how many tests have been run over time and how many more there are to be run, but also how many of those test executions actually pass consistently without failure so that the test can actually be considered complete. In the test progress metric we can replace No. of test cases executed with No. of test cases completed, counting only those test cases that actually consistently pass.
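For example (assumed numbers): if 250 of 400 total test cases have been executed but only 180 of them pass consistently, test progress is 250 / 400 = 63% by execution but only 180 / 400 = 45% by completion; the second figure is the better indicator of how close testing really is to being done.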

Percent of Automated Test Coverage

Another AST metric to consider is percent of automated test coverage: what percentage of test coverage the automated testing actually achieves. Various degrees of test coverage can be achieved, depending on the project and the defined goals. Depending on the types of testing performed, unit test automation coverage could be measured against all identified units, or functional system test coverage could be measured against all requirements, and so forth. Together with manual test coverage, this metric measures the completeness of the test coverage and can measure how much automation is being executed relative to the total number of tests. However, it does not say anything about the quality of the automation. For example, 2,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but they do not equate to a larger percentage of test coverage. Test data techniques discussed in IAST need to be used to effectively derive the number of test data elements required to test the same or similar data path. Percent of automated test coverage does not indicate anything about the effectiveness of the testing taking place; it measures the extent of the testing, not its quality.

PTC (%) = AC / C = (Automation coverage / Total coverage)

PTC = Percent of automated test coverage
AC = Automation coverage
C = Total coverage (i.e., requirements, units/components, or code coverage)
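For example (assumed numbers): if the automated tests exercise 150 of the 200 documented requirements, PTC = 150 / 200 = 75% requirements coverage; the remaining 25% is covered manually or not at all.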

There is a wealth of material available regarding the sizing or coverage of systems. A useful resource is Stephen H. Kan’s book Metrics and Models in Software Quality Engineering. [4]

Figure 1.4 provides an example of test coverage for Project A versus Project B over various iterations. The dip in coverage for Project A might reveal that new functionality was delivered that hadn’t yet been tested, so that no coverage was provided for that area.

Figure 1.4 Test coverage per project over various iterations

Defect Density

Measuring defects is a discipline to be implemented regardless of whether the testing effort is automated or not. Defect density is another well-known metric that can be used for determining an area to automate. If a component requires a lot of retesting because the defect density is very high, it might lend itself perfectly to automated testing. Defect density is a measure of the total known defects divided by the size of the software entity being measured. For example, if there is a high defect density in a specific functionality, it is important to conduct a causal analysis. Is this functionality very complex, and therefore is it to be expected that the defect density would be high? Is there a problem with the design or implementation of the functionality? Were the wrong (or not enough) resources assigned to the functionality, because an inaccurate risk had been assigned to it and the complexity was not understood? It also could be inferred that the developer responsible for this specific functionality needs more training.

DD = D / SS = (No. of known defects / Size of software entity)

DD = Defect density
D = Number of known defects
SS = Size of software entity

We can't necessarily blame a high defect density on a large software component size. While the general assumption is that a high defect density is more justifiable in a large component than in a small one, the small component could be much more complex than the large one. Software complexity is therefore an important consideration when evaluating defect density.

Additionally, when evaluating defect density, the priority of the defect should be considered. For example, one application requirement may have as many as 50 low-priority defects and still pass because the acceptance criteria have been satisfied. Still another requirement may have only one open defect, but that defect prevents the acceptance criteria from being satisfied because it is a high priority. Higher-priority defects are generally weighted more heavily as part of this metric.
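As an illustration with assumed numbers and weights: weighting high-, medium-, and low-priority defects 5, 3, and 1, a component of 8 KLOC with 2 high, 4 medium, and 10 low defects has a weighted defect density of (2x5 + 4x3 + 10x1) / 8 = 32 / 8 = 4 weighted defects per KLOC, whereas the unweighted value would be 16 / 8 = 2 defects per KLOC.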

Defect Trend Analysis

Another useful testing metric in general is defect trend analysis. Defect trend analysis is calculated as:

DTA = D / TPE = (No. of known defects / No. of test procedures executed)

DTA = Defect trend analysis
D = Number of known defects
TPE = Number of test procedures executed over time
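For example (assumed numbers): if 30 defects are known after 150 test procedures have been executed by week 4, DTA for that week is 30 / 150 = 0.2 defects per executed procedure.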

Defect trend analysis can help determine the trend of defects found over time. Is the trend improving as the testing phase winds down, or is it remaining static or even worsening? During the AST testing process, we have found defect trend analysis to be one of the more useful metrics for showing the health of a project. One approach to showing the trend is to plot the total number of defects over time, as shown in Figure 1.5. [5]

Figure 1.5 Defect trend analysis: total number of defects over time (here after weeks in testing)

Effective defect tracking analysis can present a clear view of the status of testing throughout the project.

Defect Removal Efficiency

One of the more popular metrics is defect removal efficiency (DRE); this metric is not specific to automation, but it is very useful when used in conjunction with automation efforts. DRE is used to determine the effectiveness of defect removal efforts. It is also an indirect measurement of product quality. The value of the DRE is calculated as a percentage. The higher the percentage, the greater the potential positive impact on the quality of the product. This is because it represents the timely identification and removal of defects at any particular phase.

DRE (%) = DT / (DT + DA) = (No. of defects found during testing / (No. of defects found during testing + No. of defects found after delivery))

DRE = Defect removal efficiency
DT = Number of defects found during testing
DA = Number of defects found after delivery
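For example (assumed numbers): if 190 defects are found during testing and 10 more are found after delivery, DRE = 190 / (190 + 10) = 95%.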

Figure 1.6 Defect removal efficiency over the testing lifecycle effort

The highest attainable value of DRE is 1, which equates to 100%. In practice, we have found that an efficiency rating of 100% is not likely. According to Capers Jones, world-class organizations have a DRE greater than 95%. [6] DRE should be measured during the different development phases. For example, low DRE during analysis and design may indicate that more time should be spent improving the way formal technical reviews are conducted.

This calculation can be extended for released products as a measure of the number of defects in the product that were not caught during the product development or testing phases.

Automated Software Testing ROI

As we have discussed, metrics help define the progress, health, and quality of an automated testing effort. Without such metrics it would be practically impossible to quantify, explain with certainty, or demonstrate quality. Along with quality, metrics also help demonstrate ROI, which is covered in detail in IAST. ROI measurement, like most metrics, is an ongoing exercise and needs to be closely maintained. Consider the ROI and the various testing metrics when investigating the quality and value of AST. As shown in Figure 1.7, metrics can assist in presenting the ROI for your effort. Be sure to include all facets in your ROI metric as described in IAST.

Figure 1.7 AST ROI example (cumulated costs over time)

Other Software Testing Metrics

Along with the metrics mentioned in the previous sections, there are a few more common test metrics useful for the overall testing program. Table 1.1 provides a summary and high-level description of some of these additional useful metrics.

Table 1.1 Additional Common and Useful Software Test Metrics [7]

  • Error discovery rate (Progress) - Number of total defects found / Number of test procedures executed. Used to analyze and support a rational product release decision.

  • Defect aging (Progress) - The date a defect was opened versus the date the defect was fixed. Indicates the turnaround time of a defect.

  • Defect fix retest (Progress) - The date a defect fix was released in a new build versus the date the fix was retested. Indicates whether the testing team is retesting fixes quickly enough to yield an accurate progress metric.

  • Current quality ratio (Quality) - Number of test procedures successfully executed (without defects) versus the total number of test procedures. Indicates the amount of functionality that has been successfully demonstrated.

  • Problem reports by priority (Quality) - The number of software problem reports, broken down by priority. Counts the software problems reported, listed by priority.

Root Cause Analysis

It is not good enough to conduct a lessons-learned review only after the AST program has been implemented. Instead, as soon as a problem is uncovered, regardless of the phase or the type of issue - whether it is a schedule, budget, or software defect problem - a root cause analysis should be conducted so that corrective actions and adjustments can be made. Root cause analysis should not focus on determining blame; it is a neutral investigation that determines the cause of the problem. For example, a root cause template could be developed that stakeholders fill in, each for their respective parts, if a defect is uncovered in production. The template could list neutral questions such as "What is the exact problem and its effect?" "How was it uncovered and who reported it?" "When was it reported?" "Who is/was affected by this problem?" "What is the priority of the problem?" Once all this information has been gathered, stakeholders need to be involved in determining the root cause, deciding how to resolve the issue (the corrective action to be taken) and the priority of fixing and retesting it, and agreeing on how to prevent this sort of issue from happening in the future.

Defects will be uncovered despite the best-laid plans and implementations; corrective actions and adjustments are always needed. In other words, expect the unexpected, but have a plan for addressing it. Effective AST processes should allow for and support the implementation of necessary corrective actions. They should allow for strategic course correction, schedule adjustments, and deviation from AST phases to adjust to specific project needs, supporting continuous process improvement and, ultimately, a successful delivery.

Root cause analysis is a popular area that has been researched and written about a great deal. What we present here is our approach to implementing it. For more information on root cause analysis and a sample template, review the SixSigma discussion on "Final Solution via Root Cause Analysis (with a Template)." [8]

Summary

To assure AST program success, the AST goals need to be not only defined but also constantly tracked. Defect prevention, AST and other software testing metrics, and root cause analysis are important steps that help prevent, detect, and solve process issues and SUT defects. With the help of these steps the health, quality, and progress of an AST effort can be tracked. These activities can also be used to evaluate past performance, current status, and future trends. Good metrics are objective, measurable, meaningful, and simple, and their data is easily gathered. Traditional software testing metrics used in software quality engineering can be applied and adapted to AST programs. Some metrics specific to automated testing are

  • Percent automatable
  • Automation progress
  • Percent of automated testing coverage
  • Software automation ROI (see IAST for more details)
  • Automated test effectiveness (related to ROI)

Evaluate the metrics outcome and adjust accordingly.

Track budgets, schedules, and all AST program-related activities to ensure that your plans will be implemented successfully. Take advantage of peer reviews and inspections, activities that have been proven useful in defect prevention.

As covered in IAST, in the test case requirements-gathering phase of your automation effort, evaluate whether it makes sense to automate. Given the set of automatable test cases, determine which ones would provide the biggest ROI. Remember that just because a test is automatable doesn't necessarily mean it should be automated. Using this strategy to determine what to automate, you are well on your way to AST success.

References:

  1. E. Dustin, "The People Problem," www.stpmag.com/issues/stp-2006-04.pdf; "The Schedule Problem," www.stpmag.com/issues/stp-2006-05.pdf; and "The Processes and Budget Problem," www.stpmag.com/issues/stp-2006-06.pdf.
  2. www.thefreedictionary.com/metric.
  3. Adapted from Dustin et al., Automated Software Testing.
  4. Stephen H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed. (Addison-Wesley, 2003).
  5. Adapted from www.teknologika.com/blog/SoftwareDevelopmentMetricsDefectTracking.aspx.
  6. C. Jones, keynote address, Fifth International Conference on Software Quality, Austin, TX, 1995.
  7. Adapted from Dustin et al., Automated Software Testing.
  8. www.isixsigma.com/library/content/c050516a.asp.

Monday

All about Diagnostics module in LoadRunner

HP Diagnostics is standalone software that is fully functional on its own. It is designed to profile J2EE, .NET, or ERP/CRM applications by capturing durations at the module level, displaying chains of calls, and monitoring heap size and garbage collection. These are some of the things you may be interested in at a J2EE overview level; Diagnostics has more to offer than those mentioned here.

For the sales pitch, which is outside the scope of this article, refer to the official vendor website. Before we go further, note that the discussion here is limited to performance testing (LoadRunner/Performance Center) and J2EE.

The Diagnostics setup comprises the Probe/Profiler and the Server (Commander); Probe and Profiler are used interchangeably in this article. Previous versions of Diagnostics (4.2 and earlier) had a Probe/Mediator/Commander arrangement, but this has since been consolidated into just the Probe and the Commander, which reduces the setup effort. As of this writing, the latest Diagnostics release is 6.6. The Profiler/Probe is installed on the application server (e.g., BEA WebLogic) and instrumented in order to collect statistics. The instrumentation takes the form of an additional parameter added to the application server's startup script, and it varies with the type of application server and, specifically, the JVM type (e.g., HotSpot JVM, JRockit JVM). By default, you can view the data locally on port 35000 or via the Java Profiler application.
A Commander is installed to collect all the monitoring data from the probes. The data from the individual probes is processed on the Commander, where you can view it in a browser via port 2006. The Commander thus serves as the central point of the Diagnostics deployment, collecting the data from all the probes connected to it.

So how does Diagnostics complement LoadRunner? LoadRunner does not drill down to the module level or provide information on problematic modules (those that may be causing the bottleneck). It only reports system bottlenecks in the bigger picture, such as CPU, memory, network, or disk usage. With Diagnostics, you can go a step further by monitoring the memory usage of the application server and breaking it down to the module level.

For example, for a J2EE application launched from BEA WebLogic, the process that runs the application is java.exe. From an OS-level monitoring perspective, we can only see CPU, memory, or disk utilization at the process level, or at best at the thread level. We do not know which internals may be causing the bottlenecks, such as the garbage collection type and frequency, the number of objects created in the JVM, or the method in the JVM that is using the most processing time. Diagnostics comes into play to fill this gap: it lets you work down systematically from the server (OS) level into the application level.

That said, Diagnostics does have caveats that are important to note: (1) Diagnostics can only report on modules/calls made on the application server, so stored procedure calls on the database servers will not be reported during profiling. (2) The probe is an intrusive agent because it must be installed on the application server (which may be scrutinized by the security or server team, causing inconvenience). (3) The probe adds monitoring overhead on the application server because it examines the byte code in the JVM. For this reason, the recommendation is not to run a large load test in conjunction with Diagnostics but rather a mini load test of about 10% of the actual load. Another good tip before kicking off with Diagnostics: log a service request with HP to verify the compatibility of the probe with your application server. They should be able to tell you what they have already tested (QAed), and this will save you a lot of unnecessary time when the Diagnostics setup actually takes place. Instrumentation and compatibility depend heavily on the type of JVM the application server runs on, so know the JVM version and type well before proceeding.
Troubleshooting: when the probes are not reporting data back to the Controller (in a LoadRunner setup), there can be various reasons, but the main ones are (1) unsuccessful or missing instrumentation (remember, you need to instrument the application server after the installation), (2) ports between the Controller, Diagnostics server, and probe not being open, or (3) the application server or Diagnostics server not running. When working with LoadRunner/Performance Center, always ensure that the Diagnostics setup is fully functional on its own before adding the Diagnostics module to the LoadRunner monitoring. This is an effective way to isolate Diagnostics/LoadRunner integration problems in a load test setup.

Licensing: Diagnostics is bounded by (1) the number of probes that can be installed and (2) the number of Diagnostics servers that are implemented. This license is used solely for Diagnostics and should not be confused with the LoadRunner license. For Diagnostics to work with LoadRunner/Performance Center, an additional LoadRunner license, the Diagnostics Module license, is required. It allows the monitoring data from the Diagnostics server to be incorporated into the load test results.

To start familiarizing yourself with the Diagnostics software (Probe and Commander), it is advisable to spend at least two days exploring the features and getting used to the setup/instrumentation. BEA offers free downloads (after registration) of the WebLogic J2EE application server with a working example, the Avitek Medical Records application, that can be used for this familiarization.

How to set automatic alerts in QC10.0 from GUI

When QC users want to be alerted about their defects, they can control this using two features:

1. Automatic follow-ups

2. Automatic alerts

The first one, automatic follow-ups, is simple to set up.

Prior settings needed for automatic follow-ups:

1. Set the auto-email configuration from Tools -> Project Customization.

2. Select the "Send an Email now" check box.

3. Test it by clicking the Test Now button just beside it. (To receive an e-mail, the user must be associated with a valid e-mail ID.)

4. Now, to set the automatic follow-up, go to the Defects module, select the particular defect, and click the follow-up flag.

5. A pop-up window is displayed asking you to select a date and write a description of the follow-up.

6. Select the date, enter a meaningful description, and click OK.

7. Go to the defect on which you set the automatic follow-up; you will find a gray flag in that defect's follow-up column. This indicates that you are following up on that defect.

For automatic alerts you will also find an icon, "!" (exclamation mark), and you might expect to follow the same procedure. But it does not work that way. Why? (If you are able to set it up from the GUI as an administrator, please let me know.)

Here is how to set it up from the QC database as an administrator:

Log in to your QC project as an administrator (most of the time it is the demo project) and select the ALERT table.

Customize your project to send automatic e-mail when the Assigned To, Modified, or Status fields change for all defects.

Execute an SQL query to retrieve the defects on which you want to set alerts:

1 = follow-ups

2 = automatic alerts

If you want to set alerts, execute an UPDATE SQL query to set the value 2 in that column for all of those defects.

Prior to that, make sure the auto-email flag is set to "Y"; if not, run an SQL query to set it to Y.

Then go to the Defects module and change defect details such as status, description, or severity.

You will then receive an e-mail alerting you to the changes you made.



If I am missing anything here, feel free to add your comments to this post to refine it.

I will be adding screenshots for this soon, and then I can add your comments along with your names as well.

Sunday

LoadRunner interview questions and Answers -1

Q1: How do you get the current system time?


A. Method #1: The following function was developed for use in the Mercury LoadRunner performance tool. Its main use is to return the current system time at any given point while a LoadRunner script is running. It can be used to report transaction times and script start and end times.



long get_secs_since_midnight(void)
{
char * curr_hr; /* pointer to a parameter with current clock hr */
char * curr_min; /* pointer to a parameter with current clock min*/
char * curr_sec; /* pointer to a parameter with current clock sec */
long current_time, /* current number of seconds since midnight */
hr_secs, /* current hour converted to secs */
min_secs, /* current minutes converted to secs */
secs_secs; /* current number of seconds */


curr_hr = lr_eval_string("{current_hr}");
curr_min = lr_eval_string("{current_min}");
curr_sec = lr_eval_string("{current_sec}");

hr_secs = (atoi(curr_hr)) * 60 * 60;
min_secs = (atoi(curr_min)) * 60;
secs_secs = atoi(curr_sec);

current_time = hr_secs + min_secs + secs_secs;

return(current_time);
}


Method #2:
lr_save_datetime("Today is %H %M %S", DATE_NOW, "curr_time");
lr_output_message("%s", lr_eval_string("{curr_time}"));

Method #3:
You can use a "Date/Time" parameter, where VuGen extracts the date for you in the format you want. This is much simpler than writing code.

Method#4:

typedef long time_t;

Action()

{

time_t t;

lr_message("Time in seconds since 1/1/70: ldn" time(&t));

lr_message("Formatted time and date: s" ctime(&t));

return 0;

}

Q2: Why do we insert rendezvous points while running a scenario?

Ans #1: It is a synchronization point that pauses the current Vuser's script execution until the remaining Vusers' scripts have also reached that point. It is used to concentrate load at a specific step while running Vuser scripts.

Ans #2: The word "rendezvous" comes from French and means "meeting point."

Consider a scenario that contains four actions:

1. Download the site

2. Login

3. Cash withdraw

4. Logout.

100 Vusers perform the actions one by one, but we want to emulate a heavy load on the Cash withdraw action. So we insert a rendezvous point at the Cash withdraw action. Vusers wait at the cash withdraw step; once all the users have arrived there, the Controller releases them simultaneously.
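As a minimal VuGen sketch of this cash-withdraw example (the action, transaction, and rendezvous names are assumed), the rendezvous call is placed immediately before the step whose load you want to concentrate:

CashWithdraw()
{
    /* All Vusers pause here until the Controller's rendezvous policy
       releases them, so the withdraw step below fires concurrently.  */
    lr_rendezvous("cash_withdraw");

    lr_start_transaction("cash_withdraw");

    /* ... the recorded cash withdraw request goes here ... */

    lr_end_transaction("cash_withdraw", LR_AUTO);

    return 0;
}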

Q3: How do you identify the scenarios to be tested in performance testing for a web application?

Ans: An answer will be posted soon (this is a frequent question from many companies).

Q4:What is load testing?

- Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.

Q5:What is Performance testing?

- Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.

Q6:Did u use LoadRunner? What version?

- Yes. Version 8.x or 9.x (nowadays, 9.x is typically used).
Explain the load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives. Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions. Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us. Step 4: Running the scenario.
We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers. Step 5: Monitoring the scenario.
We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors. Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.

Q7:When do you do load and performance Testing?

- We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1,000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

Q8:What are the components of LoadRunner?

- The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.

Q9:What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

Q10:What Component of LoadRunner would you use to play Back the script in multi user mode?
- The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

Q11:What is a rendezvous point?

- You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

Q12:What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

Q13:Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

Q14:Why do you create parameters?

- Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. Parameters better simulate the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.

Q15:What is correlation? Explain the difference between automatic correlation and manual correlation?

- Correlation is used to obtain data that is unique to each run of the script and that is generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server-specific, and values are replaced by data created according to these rules. In manual correlation, we scan for the value we want to correlate and use Create Correlation to correlate it.

Q16:How do you find out where correlation is required? Give few examples from your projects?

- Two ways: First, we can scan for correlations and see the list of values that can be correlated; from this we can pick a value to be correlated. Second, we can record two scripts and compare them, looking at the difference file for the values that need to be correlated. In my project, there was a unique ID developed for each customer; it was an insurance number, generated automatically, sequential, and unique. I had to correlate this value in order to avoid errors while running my script, and I did so using scan for correlation.

Q17:Where do you set automatic correlation options?

- Automatic correlation, from the web point of view, can be set in the recording options, on the correlation tab. Here we can enable correlation for the entire script and choose either to issue online messages or to perform offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using the show output window, scanning for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we simply use create correlation for the value and specify how the value is to be created.

Q18:What is a function to capture dynamic values in the web Vuser script?

- Web_reg_save_param function saves dynamic data information to a parameter.
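A small, hedged example (the boundaries, parameter name, and URL are assumptions for illustration): web_reg_save_param is registered before the step whose server response contains the dynamic value, and the captured value is then used like any other parameter:

/* Capture the text between the assumed boundaries into {SessionId}. */
web_reg_save_param("SessionId",
                   "LB=sessionid=",      /* assumed left boundary  */
                   "RB=\"",              /* assumed right boundary */
                   "NotFound=warning",
                   LAST);

web_url("login",
        "URL=http://example.com/login",  /* placeholder URL */
        LAST);

lr_output_message("Captured session id: %s", lr_eval_string("{SessionId}"));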

Q19:When do you disable log in Virtual User Generator, When do you choose standard and extended logs?

- Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: When you select Standard log, it creates a standard log of the functions and messages sent during script execution to use for debugging. Disable this option for large load testing scenarios; when you copy a script to a scenario, logging is automatically disabled. Extended Log option: Select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios; when you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the extended log options.

Q20: How do you debug a LoadRunner script?

- VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

Q21 : How do you write user defined functions in LR? Give me few functions you wrote in your previous project?

- Before we create user-defined functions, we need to create an external library (DLL) containing the function and add this library to the VuGen bin directory. Once the library is added, we can assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions from my earlier project are GetVersion, GetCurrentTime, and GetPlatform.
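A hedged sketch of what such a library might look like (the file and function names are assumed); the exported function follows the signature above, and the closing comment shows how it could be called from a script after loading the DLL with lr_load_dll:

/* myfuncs.c - built into myfuncs.dll (names assumed) and placed in the
   VuGen bin directory as described above.                              */
#include <time.h>
#include <string.h>

__declspec(dllexport) char *GetCurrentTimeStr(char *in1, char *in2)
{
    static char buffer[64];
    time_t t = time(NULL);

    strcpy(buffer, ctime(&t));   /* e.g. "Tue Aug 11 14:05:01 2009\n" */
    return buffer;
}

/* In the Vuser script, load the library once and call the function like
   any other C function:
       lr_load_dll("myfuncs.dll");
       lr_output_message("%s", GetCurrentTimeStr(NULL, NULL));           */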

Q22:What are the changes you can make in run-time settings? - The Run Time Settings that we make are: a) Pacing - It has iteration count. b) Log - Under this we have Disable Logging Standard Log and c) Extended Think Time - In think time we have two options like Ignore think time and Replay think time. d) General - Under general tab we can set the vusers as process or as multithreading and whether each step as a transaction.

Q23:Where do you set Iteration for Vuser testing?

- We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.

Q24:What is Ramp up? How do you set this?

- This option is used to gradually increase the number of Vusers, and therefore the load, on the server. An initial value is set, and a value to wait between intervals can be specified. To set ramp-up, go to 'Scenario Scheduling Options'.

Q25:What is the advantage of running the Vuser as thread?

- VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

Q26 : If you want to stop the execution of your script on error, how do you do that?

- The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we first have to uncheck the "Continue on error" option in the Run-Time Settings.
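A minimal sketch of such an error condition (the HTTP status check is an assumption for illustration); this fragment would sit inside an Action after a web step, with "Continue on error" unchecked in the Run-Time Settings:

int status;

/* Check the HTTP status of the last web step and abort the Vuser on failure. */
status = web_get_int_property(HTTP_INFO_RETURN_CODE);

if (status >= 400) {
    lr_error_message("Request failed with HTTP status %d - aborting Vuser", status);
    lr_abort();   /* runs vuser_end and stops this Vuser with status "Stopped" */
}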

Q27: What is the relation between Response Time and Throughput?
- The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.

Q28:Explain the Configuration of your systems?

- The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

Q29 : How do you identify the performance bottlenecks?

- Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
If the web server, database, and network are all fine, where could the problem be? - The problem could be in the system itself, in the application server, or in the code written for the application.

Q30:How did you find web server related issues?

- Using web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of pages downloaded per second.

Q31 : How did you find database related issues?

- By running the "Database" monitor and with the help of the "Data Resource Graph" we can find database-related issues. For example, you can specify the resources you want to measure before running the Controller, and then you can see database-related issues in the results.

Q32:Explain all the web recording options?

Q33:What is the difference between Overlay graph and Correlate graph?

- Overlay graph: It overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged. Correlate graph: It plots the y-axes of two graphs against each other. The active graph's y-axis becomes the merged graph's x-axis, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

Q34:How did you plan the Load? What are the Criteria?

- Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.

Q35:What does vuser_init action contain?

- Vuser_init action contains procedures to login to a server.

Q36:What does vuser_end action contain?
- Vuser_end section contains log off procedures.

Q37:What is think time? How do you change the threshold?

- Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.

Q38:What is the difference between standard log and extended log?

- The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, or advanced trace.

Q39:Explain the following functions:

  • lr_debug_message - sends a debug message to the output log when the specified message class is set.
  • lr_output_message - sends notifications to the Controller Output window and the Vuser log file.
  • lr_error_message - sends an error message to the LoadRunner Output window.
  • lrd_stmt - associates a character string (usually an SQL statement) with a cursor; it sets an SQL statement to be processed.
  • lrd_fetch - fetches the next row from the result set.
Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

Types of goals in a goal-oriented scenario - LoadRunner provides five different types of goals in a goal-oriented scenario:

  • The number of concurrent Vusers
  • The number of hits per second
  • The number of transactions per second
  • The number of pages per minute
  • The transaction response time that you want your scenario to achieve

Analysis scenario (bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.


Tip of the day : Finding a software Testing Job

There are many people who would like to get software testing jobs, but they are unsure about how to approach it. This may seem like a dream j...