Sunday

Upgrade from Quality Center 9.2 or Quality Center 10.00 to Application Lifecycle Management (ALM) 11

Steps to Upgrade from QC 9.2 or QC 10.00 to ALM 11:

******** Recommendation: try this on a test server first, then on production ********


Pre-requisites:
  1. Check the system requirements for ALM 11.
  2. Move the project repository from the database (DB) to the file system (QC 9.2 only).
Note: Repository over DB is not supported in ALM 11.
Workaround: Install the latest QC 9.2 patch, which includes a tool to download the repository to the file system.
  3. Before migration, deactivate all projects from the Site Administrator.
Migration of QC 9.2 or QC 10.00 to ALM 11
  1. Back up the projects' databases/schemas and the Site Administration database/schema (default name "qcsiteadmin_db") from the QC 9.2 or QC 10.00 database server. Hedged command sketches for both follow below.
    a. For SQL Server:
    b. For Oracle Server:
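A minimal sketch of what those backups might look like; the database/schema names, credentials and paths are placeholders for your environment:

For SQL Server (run in a query window):
BACKUP DATABASE [qcsiteadmin_db] TO DISK = 'D:\backups\qcsiteadmin_db.bak';

For Oracle (classic export from the command line):
exp system/<password> OWNER=<project_schema> FILE=project_schema.dmp LOG=project_schema.log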
  2. Back up the projects' repositories from the QC 9.2 or QC 10.00 server. Each project's repository location is shown in the project properties in the Site Administration web site.
    See document ID KM189097 - What needs to be backed up when backing up Quality Center.
  3. Restore the project databases/schemas and the Site Administration database/schema on the ALM 11 database server (if you are using a new database server).
Note: If you are upgrading QC in place, there is no need to move the project databases/schemas; however, it is important to take backups.
  4. Install ALM 11. If you are using the same application server, you must first uninstall QC 9.2 or QC 10.00.
QC has two kinds of databases: the project databases and the site admin database. When migrating, to keep users, user configurations, and site administration parameters, you must install using the same database name that you used when restoring on your DB server, and select the "Upgrade a copy" or "Upgrade the existing schema" option during installation.
  5. Paste/restore the repositories in the expected repository location in the new installation.
  6. Restore the projects:
a) Log in to Site Administration and create a new, empty project.
b) Go to the new empty project's repository folder and make a copy of the dbid.xml file.
c) Remove your projects from Site Administration (reason: they still point to the old database server). Use Remove, not Delete: deleting would also drop the project schema from the database server.
d) Go to the project's folder and rename the existing "dbid.xml" file to "dbidold.xml".
e) Paste in the copy of the "dbid.xml" file from the new empty project.
f) Edit the "dbid.xml" to match your project environment, as follows:
Only the elements below need to change (the values shown are placeholders); leave the other elements copied from the empty project unchanged.

<ProjectDescription>
  ...
  <PROJECT_NAME>NEW_PROJECT</PROJECT_NAME>  ** put the project name **
  ...
  <DB_NAME>NEW_PROJECT_DB</DB_NAME>  ** put the project schema name **
  ...
  <PHYSICAL_DIRECTORY>...\repository\qc\NEW_PROJECT\</PHYSICAL_DIRECTORY>  ** put the correct path of the repository folder of this project **
  ...
  <PROJECT_UID>82a311c5-a440-4ecd-97a2-e97331a447XX</PROJECT_UID>
</ProjectDescription>
PROJECT_NAME: the name of the project, for example: NEW_PROJECT.
DB_NAME: the schema name shown in the database list, for example: NEW_PROJECT_DB.
PHYSICAL_DIRECTORY: the full path of the project's repository folder; by default it is under :\Program Files\Mercury\Quality Center\repository\qc\.
PROJECT_UID: keep the same number of characters and modify the last two values to get a unique ID.
  7. From Site Administration, restore each project using its edited "dbid.xml" file.
  8. Verify, repair and upgrade the projects:
    a. Right-click the project and select "Maintain Project".
    b. Select "Verify Project".
    c. After verification finishes, select "Repair Project".
    d. After the repair completes, select "Upgrade Project".
Notes:
  • After upgrading a project you cannot change its repository folder location. The reason is a new ALM feature that optimizes the project's repository: a migration job runs after the project upgrade, taking at least 1 day for a small project and around 5 days for a large one.
Here is the information from the administrator guide (page 97):
After upgrading a project to ALM 11.00, ALM migrates the project repository directories to a new file structure in the default project repository location. If the migration process fails, you must fix the problems in the project repository manually. You can also configure the speed at which the migration is performed. For more information, see "Repository Migration" in the Administrator Guide on page 120.

Documents useful in this process can be downloaded from here:

HP ALM User Guide in PDF


HP ALM Installation Guide in PDF

HP ALM What's New in PDF

HP ALM Upgrade Best Practice in PDF

HP Administrator Guide in PDF

Migration of QC 9.x to QC 10.0

Detailed steps for migrating QC 9.x to QC 10.0:

1) Add 'Oracle 10g' as a new DB server in the QC 9.x instance.
2) Access QC 9.x Site Admin. Create a copy of the project to be migrated from the QC 9.x instance, pointing it to the newly created Oracle server. After this step we will have a copy of the project connected to an Oracle server.
Note: It is recommended to have the same domain names in the QC 9.x and QC 10 instances for the migration. Since the database name is created based on the selected domain, this helps the projects migrate smoothly.
3) Copy the project's repository from the QC 9.x location to the repository location of QC 10.
Note:
a) Creating a temp project in the QC 10 domain will reveal the repository location.
b) If the repository is copied from the QC 9 'Migration' domain, it is recommended to copy it to the QC 10 'Migration' domain.
4) Edit the dbid.xml file available in the QC 9.x repository location, then copy it over to the QC 10 repository location (click 'Yes' on the overwrite prompt).

Note: This step is required when restoring the dbid.xml file from the QC 10 client. Since QC 10 opens a browse window, it can only browse Windows locations, not Linux locations.
5) Access QC 10 Site Admin. Select the Migration domain (create the domain if it is not available already) and click Restore.
6) Select the updated dbid.xml file from its location (refer to step 4).
7) Click Restore. A "Restored successfully" message will appear.
8) Right-click the migrated project and select 'Maintain Project > Verify'. Click the 'Verify Project' button in the window.
Note: Since the project is migrated from SQL Server to Oracle, this task will show schema differences. Additionally, it will show any differences in the data that need repair.
9) Right-click the migrated project and select 'Maintain Project > Repair'.
10) Right-click and select 'Maintain Project > Verify' again. The verify should be successful now.
11) Right-click and select 'Maintain Project > Upgrade'. Click the 'Upgrade Project' button in the resulting window.
12) Once the upgrade is successful, right-click to activate the project.
13) The project is now ready for use in QC 10.

P.S. You can use the task sheet to make sure you follow all the steps. Download the task sheet here.

Thursday

How to configure Oracle Database Monitoring - LoadRunner

Configure Oracle Database Monitoring in LoadRunner


Most applications come with a database behind them, and most of the time it's Oracle (the biggest player in the DB industry). Fundamentally:

(1) It requires an Oracle client to be installed on the machine as a native client.

(2) You need a valid account with privileges on the Oracle V$ tables that hold the statistics.

(3) Ensure that you can properly query from the Controller using SQL tools and extract data from the V$ tables, and you should be fine with the setup.

Let's go through an overview of implementing Oracle DB monitoring. Basically, we proceed in this sequence:

1. Get an account for the V$ tables that contain the monitoring data.
2. Install the 32-bit Oracle client on the Controller machine.
3. Ensure that proper configuration is done to connect to the DB using TNSNAMES.ora and by defining the Oracle path.
4. Log in to the DB using SQL*Plus and run a query to see if you can collect the statistics from the V$ tables.
5. Launch the LoadRunner Controller, configure the monitor and start your load test!

1. Request an account and password to be created in the database instance (unless you are the DBA), for example "loadtester", and grant it read access to the following tables, which contain the statistics of the database instance (a hedged grant sketch follows the list).

V$SESSTAT
V$SYSSTAT
V$STATNAME
V$INSTANCE
V$SESSION
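A minimal sketch of the grants, assuming a DBA runs it; the user name and password are illustrative. Note that in Oracle the V$ names are public synonyms, so the grants go on the underlying SYS.V_$ views:

CREATE USER loadtester IDENTIFIED BY password;
GRANT CREATE SESSION TO loadtester;
GRANT SELECT ON sys.v_$sesstat TO loadtester;
GRANT SELECT ON sys.v_$sysstat TO loadtester;
GRANT SELECT ON sys.v_$statname TO loadtester;
GRANT SELECT ON sys.v_$instance TO loadtester;
GRANT SELECT ON sys.v_$session TO loadtester;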

2. Ensure that the Oracle client libraries are installed on the Controller. Remember that in order to monitor, you need a local installation of the client so that LoadRunner can query the monitoring data. If you do not have the client libraries, download them from Oracle Downloads. Download only the 32-bit Oracle client.
3. If you already have a client installed, ensure that it is the 32-bit Oracle client; the Controller is a 32-bit application and cannot use a 64-bit (or 16-bit) client.
4. Verify that %ORACLE_HOME%\bin is included in the PATH environment variable. This can be done by going to Start > My Computer > Properties > Advanced tab > Environment Variables. A quick command-line check is shown below.
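From a command prompt you can also run the following and look for the Oracle client bin directory (the path shown is just an example of what to look for, e.g. C:\oracle\product\10.2.0\client_1\bin):

echo %PATH%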
5. Configure the TNSNAMES.ora file on the Controller. TNSNAMES.ora is a SQL*Net configuration file that defines database addresses for establishing connections to them. This file normally resides in the ORACLE_HOME\NETWORK\ADMIN directory. (Source: OraFAQ) An example entry is shown below.
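A typical entry looks like this; the alias, host and service name are placeholders for your environment:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )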
6. Ensure you can log in successfully with the created account (e.g., loadtester) using SQL*Plus from the Controller machine (an example follows).
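For example, from a command prompt on the Controller, using the alias defined in TNSNAMES.ora (the password is a placeholder):

sqlplus loadtester/password@MYDB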
7. Ensure the privileges were granted properly by running the following queries. If they don't return results, or return access-rights errors, it's best to sort it out with the DBA.

SELECT * FROM V$SESSTAT;
SELECT * FROM V$SYSSTAT;
SELECT * FROM V$STATNAME;
SELECT * FROM V$INSTANCE;
SELECT * FROM V$SESSION;

8. Launch the LoadRunner Controller as usual and add the Oracle Database Monitor. When prompted, enter the account name (e.g., loadtester), its corresponding password and the destination server name. Statistics should be drawn from the V$ tables and displayed in the Controller. With the above, you should be able to configure the monitoring environment and the Controller with minimal difficulty.

Note:
You can change the sampling interval for the monitor using the vmon.cfg file located in {loadrunner-installed-dir}\monitors\vmon.cfg.

nca_connect_server in the Oracle NCA protocol with LoadRunner

Function: nca_connect_server

Purpose:

The nca_connect_server function connects to an Oracle NCA database server using the specified host, port number, and command line.


How to use :

int nca_connect_server( LPCSTR host, LPCSTR port, LPCSTR command_line );

Explanation of the nca_connect_server parameters:

host: The host name or IP address of the database server.

port: The port number of the database server.

command_line: The command line specified when starting the application.

Example: The following nca_connect_server call connects to the sol01 server at port 9000. The command line specifies the database module and the user ID for access to the database.

nca_connect_server("sol01", "9000"/*version=107*/,
"module=e:\\appsnca\\fnd\\7.5\\forms\\us\\fndscsgn
userid=applsyspub/pub@vision fndnam=apps");
nca_edit_set("FNDSCSGN.SIGNON.USERNAME.0", "VISION");
nca_obj_type("FNDSCSGN.SIGNON.USERNAME.0",'\t',0);
nca_edit_set("FNDSCSGN.SIGNON.PASSWORD.0", "WELCOME");
nca_button_press("FNDSCSGN.SIGNON.CONNECT_BUTTON.0");


Possible errors we may encounter:

Error 1:
Action.c(85): Error: C interpreter run time error: Action.c (85): Error -- memory violation : Exception ACCESS_VIOLATION received.
Action.c(85): Notify: CCI trace: Action.c(85): nca_connect_server(0x010418dc "xww.np-orcl-01.world.xerox.com", 0x010418d6 "11122", 0x01041878 "module=/home/oraapps/r11i_dmo_d25dv1_fs/...")

Error 2:
Error: nca_connect_server: cannot communicate with host (host name and port number of the application under test)

Error 3:

Monday

How to increase the ranking of a blog in Google

Knowing Search Engine Ranking Factors


1. Keywords in the title of the page
2. Keywords in the headings of the page (h1, h2, h3, etc.)
3. Keywords in domain name
4. Keywords in the file names of the pages, images or any other content
5. Keywords in bold, italics and underlined text
6. Keywords in alt text for the images
7. Keywords in the text of different color/size
8. Variations of keywords on the page (e.g., eat, ate, eaten, eating, eater, etc.)
9. Keyword density on the page
10. Appearance of the keyword in the top half of the page
11. Keywords in the description meta tag
12. Keywords in the keywords meta tag
13. Uniqueness of title, description and keywords with other pages of the site
14. Age of the domain
15. Age of the page
16. TLD of the domain (e.g., .in domains will rank better on google.co.in)
17. Hosting location
18. Pagerank of the site
19. Quality of the content (well written, informative, not duplicate)
20. Number of backlinks
21. Quality (PR) of the backlinks
22. Anchor text used for the backlinks
23. Title text (if any) used for the backlinks
24. Age of the backlink
25. Where the backlink appears on the page (top of the page, bottom of the page)
26. Whether the link is in-content or sits among other links (pointing to other sites and separated by specific text such as "," or "|")
27. Total number of links on the page that links to you
28. Whether the theme of the linking page is the same as the linked page
29. Relevance of the keyword with the primary subject of the overall website.
30. Website load time (server response time and the size of the page)
31. Availability of the server (those 99.99% uptime things)
32. Outgoing links (to whom are you linking? Are they related?)
33. Number of pages in the site
34. Website navigation and accessibility (reach-ability of all the pages easily)
35. Links to the internal pages (yes, this will improve the ranking of the main page as well if the navigation is correct)
36. How many people come back from the site to the search engine after searching for the keyword (maybe Google tracks this with the toolbar)
37. Number of internal page links on your pages (not more than 100)
38. Number of external links on your pages (try to minimize)
39. Backlinks from .gov, .edu sites (some think it's not a factor)
40. Using "-" hyphen (and not "_" underscore) in your file names wherever a space is needed
41. How often the site is updated
42. Duplicate content from other sources (negative effect)
43. Linking to bad sites (called bad neighborhood) (negative effect)
44. Keyword stuff on the pages or over optimization (negative effect)
45. The speed at which new links build up for a site. Is it consistent over a longer period?
46. Backlinks from DMOZ and Yahoo directory are considered quite valuable.
47. Avoid session identifiers (SID)
48. Deletion of the pages (dead links on the site) without proper 301/302 redirect (negative effect)
49. HTML validation of the document (I am not sure how much effect it has)
50. History of the domain (did it do anything bad in the past?)
I hope it's useful for you, and if you find the articles in this blog interesting you can put my link in your blog.

Thursday

Test Estimation - People's view on it

I do not want to say much here, since in another minute you are going to read the opinions of different testing experts on this.

The question about estimation goes like this:

Hi! Until now in my company (I work in the IT division of an insurance company) we used to estimate test time based on experience and faith ;-)! After an assessment it was decided higher up to change this. Now I'm in charge of building some algorithm or matrix that allows us to estimate test time as accurately as possible for us. We have several systems that manage policies and claims (life, non-life), etc., and several front ends. Do you have any suggestion of how I may begin and what would be the best solution?

Replies followed as below for above question on Software Test Estimation :

Reply#1 :

It pains me to think of someone spending time to make their estimates more accurate, instead of thinking how to deliver better quality software. I think your managers are misguided.

That said, the best way to have a hope that your estimates will, on average, be a reasonable idea of the size of the effort facing you is to divide the features you are about to develop into very small chunks. That helps you to identify the risks more specifically, and address high-risk areas first, so that if something takes longer than you expected, you didn't leave it to the very end.

It sounds like you might be estimating the time it takes you to do manual regression testing, though, rather than the time it takes to test a new feature? If this is the case, ask the programmers at your company for help in thinking how you can automate these regression checks, so that you have time for more important activities such as exploratory testing of new functionality.


Reply#2:


We are already working on automation of the regression tests. We bought HP QuickTest and we have a team working on it. I live in Portugal and although we have had a test team for about 10 years we are still not well organized. My company has a low rate of accomplishing project dates :-( Development is almost always late, and then we get short of testing time. Another thing is that it is normal procedure here to have less time for system testing than for UAT, so you can imagine... users always find dozens of incidents. Sometimes we haven't even finished our testing when they begin. We are now trying to optimize and change this process.

Reply #3: If it is totally new functionality developed from scratch and you have to provide some info ASAP, then you can say that testing will take dev_time * 2. Usually this formula works just fine for high-level estimates. For example, if development is estimated at 10 days, the first-cut test estimate would be 20 days.

For the most accurate estimates you have to divide the product into small features, prioritize them and estimate each one. Then, when you plan the next test round, you can just sum all the feature estimates that are in scope for that run.


Reply# 4 :

Start recording estimates and actual durations for every project, including past projects if the data is available.

Determine which attributes distinguish projects from each other in your shop (product area, size, complexity, risk, etc, etc) and record these as well.

Over time, you'll end up with a base of data to use for estimating upcoming projects.

And if you must avoid the human element and build an algorithm, your attributes become your input, and the baseline data becomes your output.

(This is not the way I would do it myself, since I believe estimates based on experience are best, but if you must build something to automate estimation this might be the way to do it.)


Reply# 5 :


Thank you very much for your suggestion! It is not easy here... We had TATA in for an assessment of our test process (they used the TPI methodology) and, based on their report, my company decided to improve according to that document. I have worked in software testing since 2002, and previously I was an analyst in the financial system development team. I have the Foundation Level certification and I attended the ISTQB Test Management course (I haven't had time yet to take the exam). I read a lot about software testing, but leading this work team (on improving the test process) is a guy who arrived 33 months ago to work in the team (with no previous experience in software testing), and it isn't easy to convince him about how to do things. That's why I opened this discussion, to exchange ideas with people with experience in this area. Thanks for all the suggestions. After the estimating subject, I have to improve the way we organize our manual system tests, including implementing product risk analysis, prioritizing requirements, and how to build test cases. And the working base here is a very poor requirements list (never more than half a dozen requirements, even in very extensive or complex projects) and lots of late requests from clients (many of which occur during UAT :-().


Reply # 6 :


These may help (at least for amusement):

http://strazzere.blogspot.com/2011/01/estimation-guesstimation-and....

http://strazzere.blogspot.com/2010/04/there-are-always-requirements...


Reply # 7 :


Well, you made me laugh when I read the story about guesstimation! It looked like most of my projects. The second link looks useful for me, thanks! I'm not as proficient and senior as you in software testing, and everything I can learn from others' experience is very welcome. I'm a curious person and I always like to learn about everything, and as my job is software testing I want to learn as much as I can; I read a lot about this subject but I can learn much more from others' experience. As I said before, I work for an insurance company, the biggest in my country (we have 30% of the domestic market), but we still have much to improve.

Reply # 8 :

Hi, my first post so go easy on me.

I also looked into estimation as part of our company's TPI effort; here is a summary of my findings:

Test work breakdown: break the expected tasks down into a series of small ones, the theory being that these estimates will be more accurate. This is the method that most people currently use.

Pros:

- This is quick and easy to implement.

- Only requires test leads input

- Could work well with ‘Test Specification’ papers.

Cons:

- This assumes that the breakdown is accurate and predictable.

- A whole series of small margins of error will add up to an overall large margin of error.

- Relies on the test lead's experience and understanding of the tests required for a requirement.

Metrics based: this requires us to track past estimates and actual test effort; once there is a set of data from previous projects, this "past information" can be used. This is a more quantitative approach based on prior experience.

Pros:

- Produces accurate estimates.

- Judgment is based upon documented experience.

- Quick to produce estimates once the relevant information has been collated.

Cons:

- What makes a reasonable history of projects?

- Need to collect relevant information to base estimates on e.g. function points, lines of code, number of requirements/use cases.

Worst case / Best case: this method can be used in combination with other estimation techniques. The user produces worst-case, most-likely and best-case estimates for each of the areas they are estimating. The difference between best and worst reflects the degree of confidence in the estimates, and the formula weights the estimate accordingly:

Expected time = (Best + (4 x Most likely) + Worst) / 6
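For example (the numbers are purely illustrative): with Best = 2 hours, Most likely = 4 hours and Worst = 12 hours, Expected time = (2 + 16 + 12) / 6 = 5 hours.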

Pros:

- Same as test work breakdown.

- Added weighting depending upon confidence.

Cons:

- Relies on the test lead's experience.

- Have to produce 3 estimates.

Wideband Delphi: this works on the principle that more heads are better than one. Estimates are produced by several members of a team, and then a meeting is set up to discuss them. The estimators discuss why there are differences in the estimates (if any), and the estimates are refined until everyone is in agreement.

Pros:

- Multiple people are more likely to spot any missed/unnecessary tasks.

- Draws on experience from multiple people.

- Could work with the ‘Test Specification’ papers.

Cons:

- Could be quite time intensive; need estimates from multiple people, and a meeting where everyone needs to agree.

- Could be difficult to put together a committee with the relevant experience.

Reply # 9


Hi. In our organization we consider that 7% of the development effort constitutes Integration Testing and 20% of the development effort constitutes System Testing. We also consider 5 influential factors (Business Risk, Technology, Complexity, Development Team Efficiency and Test Team Experience).

1. We classify each of the 5 influential factors' priority as either Major or Minor.

2. Then we classify the impact of each factor as High, Medium or Low.

3. We have a scale where, if the priority is Major, then High is 8, Medium is 4 and Low is 2. If the priority is Minor, then High is 4, Medium is 2 and Low is 1.

4. Then Risk Factor = (sum of all the factor values) / (sum of the Medium values based on priority).

5. Percentage Effort = 27% * Risk Factor.

6. Testing Hours = Percentage Effort * Development Hours.
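A purely illustrative walk-through of the formula (the ratings are invented, and the 27% is presumably the 7% integration plus 20% system testing figures combined): suppose Business Risk is Major/High (8), Technology is Major/Medium (4), Complexity is Major/Low (2), Development Team Efficiency is Minor/Medium (2) and Test Team Experience is Minor/Low (1). The sum of the factor values is 17; the sum of the Medium values for those priorities is 4 + 4 + 4 + 2 + 2 = 16. Risk Factor = 17/16 ≈ 1.06, so Percentage Effort = 27% * 1.06 ≈ 28.7%, and with 1,000 development hours the testing estimate would be about 287 hours.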


How to Practice Free Online Tests for Test Automation Tools

Dear readers,

Today I am happy to let you know that there is a website where you can practice all certifications before you take the real certification exam.

Especially as a software tester, you might be interested in different certifications like:

1. QuickTest Professional (QTP) - Functional automation testing tool from HP (formerly Mercury)

2. Quality Center (QC) - Centralized test management and tracking tool from HP (formerly Mercury)

3. LoadRunner - Industry-leading performance test tool from HP (formerly Mercury)

For all the above tools, and many more software technologies like Java, PHP, UNIX etc., a website called Skill Sign offers free practice tests (mock tests to check your knowledge level before the real certification exam).

Thanks to Skill Sign for providing a platform for practicing the tests. You can visit their website and take the tests; a two-minute registration is required. So go ahead and register today to get certified.

As usual, please leave feedback on this article in the comments section.

QTP Certification: HP0-M16 Details

Today I came across a post about QTP certification details on the testing forum QualityTesting.info.

One of the members was asking for the details of the QTP certification. I am thinking of writing up my own experiences on this, based on what I have observed from my mentors and team mates across the globe discussing "Do we really need certification in software testing?"

Before writing my own words, let me first provide the details of the QTP certification.

QTP Certification Details:

Exam Code : HP0-M16
Exam Name : HP QuickTest Professional Software 9.2
Exam Fees : US $60

Total Questions : 58
Passing Criteria : 70 %

Exam Place : Any Prometric center

Exam Preparation :
- Just go through the user guide PDF that ships with the trial license of the QTP software.

- Read Google Groups, Yahoo! Groups and LinkedIn discussions on QTP problems; that is how you will enhance your QTP skill set.

- Practice on an approved web application with the trial version. (P.S. Do not run automation tools against google.com, as it is a license violation and you may be held responsible.)

- You should know all the basic functionality of QTP and how it works.

Note: You have to create an HP Student ID before registration, because the registration form asks for it. Here is the link to the site to create an HP student number.

To create a student number: http://192.170.77.229/hpcp/English/ProfileCreate.aspx

For other details of the exam, you can refer to the following link:
http://h10017.www1.hp.com/certification/exam_registration.html

One has to clear both exams to get the certificate:
- HP0-M15: HP Quality Center 9.2 Software, and
- HP0-M16: HP QuickTest Professional 9.2 Software.

For more details on the same, refer to http://h10017.www1.hp.com/certification/credential/index.html?credc...

More details are available on HP Website : ftp://ftp.hp.com/pub/hpcp/epgs/HP0-M16_EPG.pdf

QC 10.0 Enterprise edition with Windows 7

This morning, the first issue I worked on with QC was to provide a solution for a compatibility issue between QC 10 Enterprise Edition and Windows 7.

I had not used QC on Windows 7 until my test coordinator contacted me with this problem.

I am very thankful to the test coordinator who made me look into this issue, as I got my hands dirty with both a Windows 7 installation and a QC trial version on a personal laptop before I provided the solution to the customer.

The problem seems to be general and frequent for all QC administrators coming from XP.

Well, I do not want to talk much on this. Let's jump into the problem directly.

Here is the problem, listed for you:

Following client components were not downloaded successfully (each one failed with "(Error 5) Failed to open file for writing"):

ExtensibilityAPI.dll, OTAClient.dll, WebClient.dll, QCClientUI.ocx, wexectrl.exe, XGO.ocx, OTAXml.dll, OtaReport.dll, SRunner.ocx, TdComandProtocol.exe, sr_exec_agnt.exe, MercResourceLogger.dll, bp_exec_agent.exe, Free_MSR_Player.exe, dsoframer.ocx, VugenTestType.dll, QCClient.UI.Customization.dll, QCClient.UI.Components.BPT.TPC.dll, BuiltInPlugin.dll, ReportViewerForTD.dll, QTGrid.dll, ArgsEditor.dll, Arguments.dll, ConfigViewerForTD.dll, ExGrid.dll, CompStrgHelper.dll, BuiltInScriptView.dll, QAIAd.dll, WITestType.dll

In addition, ExtensibilityAPI.dll, OTAClient.dll and WebClient.dll each reported:

Cannot load type library

(Error 1008) An attempt was made to reference a token that does not exist.

Cannot register type library

Solution:

In fact, I got a partial solution to this from the HP forums on QC, and the rest I dug out on my own.

I am listing the step-by-step procedure here to get it resolved.

Using QC with Windows 7:


Windows 7 includes many new security features to prevent users from accidentally damaging the Windows installation. These in turn can prevent QC from installing properly. It is important to understand that in Windows 7, a standard user with administrator privileges is not equal to the built-in Administrator account.


Follow the instructions below to get the QC client to work with Windows 7.

1. Ensure that the User Account Control (UAC) setting is disabled. (A command-line alternative is sketched right after these steps.)

 a. Click the Windows globe > "Search programs and files" > type UAC.
 b. Click the link to bring up the UAC settings.
 c. Move the slider down to "Never notify".
 d. Restart the computer.
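As an alternative sketch (this sets the EnableLUA registry value to fully disable UAC, which is more aggressive than the slider, so use it with care), you can run the following from an elevated command prompt and then restart:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f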

2. Disable Data Execution Prevention (DEP).

 a. Click the Windows globe > "Search programs and files" > type CMD.
 b. Right-click the link and run it as administrator.
 c. In the command prompt, type:
Code:
________________________________________
bcdedit /set {current} nx AlwaysOff
________________________________________

 d. Restart the computer.

3. Launch IE and connect to Quality Center; you should now be able to work within QC.

4. Change the UAC setting back to its original value and restart. (If you also want to restore DEP, see the note below.)
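Once everything works, DEP can be restored to the standard Windows default policy (OptIn) from an elevated command prompt, again followed by a restart:

bcdedit /set {current} nx OptIn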


P.S. If you run into other problems with QC on Windows 7, I strongly recommend you look into this discussion thread.

Test Automation Frameworks - Functional Testing


"We must always remember: our ultimate goal is to simplify and perpetuate a successful automation framework."

Do you know how to satisfy the statement above? If yes, I still recommend you read this article on software test automation frameworks, which applies to all functional test tools like QTP, Selenium, SilkTest, Rational, etc.

To give you a glimpse of the article, I am re-writing the summary that I extracted from this excellent piece.

In order to keep up with the pace of product development and delivery it is essential to implement an effective, reusable test automation framework.

We cannot expect the traditional capture/replay framework to fill this role for us. Past experience has shown that capture/replay tools alone will never provide the long-term automation successes that other, more robust test automation strategies can.

A test strategy relying on data-driven automation tool scripts is definitely the easiest and quickest to implement, if you have and keep the technical staff to handle it. But it is the hardest of the data-driven approaches to maintain and perpetuate, and it often leads to long-term failure.

The most effective test strategies allow us to develop our structured test designs in a format and vocabulary suitable for both manual and automated testing. This will enable testers to focus on effective test designs unencumbered by the technical details of the framework used to execute them, while technical automation experts implement and maintain a reusable automation framework independent of any application that will be tested by it.

A keyword driven automation framework is probably the hardest and potentially most time-consuming data driven approach to implement initially. However, this investment is mostly a one-shot deal. Once in place, keyword driven automation is arguably the easiest of the data driven frameworks to maintain and perpetuate providing the greatest potential for long-term success. There are also a few commercially available products maturing that may be suitable for your needs.

What we really want is a framework that can be keyword-driven while also providing enhanced functionality for data-driven scripts. When we integrate these two approaches, the results can be very impressive! A sketch of what a keyword-driven record might look like follows.
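For illustration only (the column names and vocabulary here are invented, not taken from the article), a keyword-driven test step is typically just a data record, such as:

Window      Component   Action      Input
LoginPage   UserName    EnterText   VISION
LoginPage   Password    EnterText   WELCOME
LoginPage   Connect     Click

A generic driver script reads each row and calls the matching automation function, so the same table can serve as both the automated test and a readable manual procedure.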

Here are the essential guiding principles we should follow when developing our overall test strategy (or when evaluating the test strategy of a tool we wish to consider):

  • The test design and the test framework are totally separate entities.
  • The test framework should be application-independent.
  • The test framework must be easy to expand, maintain, and perpetuate.
  • The test strategy/design vocabulary should be framework independent.
  • The test strategy/design should isolate testers from the complexities of the test framework.
I also have some customized material on frameworks, plus tutorials on Selenium and QTP e-books.

Please respond via the comments if you need these books or if you have any questions on this article.

Things I Like to Have in my Test Automation Suites

P.S.: I dedicate this to the original author, who lists the valuable content below, useful for any automation test engineer.

I've used lots of different tools - some commercial, some open-source, some home-grown - for test automation. I usually use a mix of such tools in my overall automation efforts.

Over the years, I have found some nice-to-have features and attributes that I end up looking for, or building, as I assemble a new Test Automation Suite. Some of these attributes are part of the tools themselves. Others come about because of the way I assemble my Test Suites and tools into a complete package.

(For the purposes of this article, assume I am talking only about Functional Test Automation, involving scripts.)

Some things are must-haves, and most are obvious:
  • Run in my environment
If I'm running in a Windows shop, I may not be allowed to introduce a bunch of Linux machines (and vice-versa).
  • Automate my System-Under-Test
My Test Suite must be able to automate the system I'm testing. If the system is web-based, my scripts must be able to automate my browser (or sometimes, multiple types of browsers). If the system is Java, the scripts must be able to automate a Java system.
  • Be able to "see" most of the objects in my System-Under-Test
Since I usually want my scripts to validate the contents of the system at various points during the automation, I need them to be able to "see" the contents. Sometimes the scripts have a direct way to do this (usually with standard controls, it's built-in). Sometimes, my scripts have to be creative (for example, with some non-standard controls, I might have to compare images, or copy data to the clipboard, in order to "see" the contents).
  • Usable by my test team
In general I don't favor having just "specialists" able to use the Test Automation Suite. I strongly prefer that most of the team be able to use the test system and contribute to building it up over time.
  • Be affordable
Obviously, the Test Suite has to be affordable. The commercial software and maintenance fees have to be within my budget. But also the hardware needed to run it, the training required, etc, etc - all need to be affordable.
  • Be generally more efficient than strictly manual testing
Seems pretty obvious. Once everything is considered, if it's more efficient to perform the testing manually, then perhaps I don't need a Test Automation Suite after all.
Other things are nice-to-have:
  • Detect changes in the System-Under-Test
Bug reports, check-in comments, and build summaries provide clues as to what changed in the current build of my system-under-test. But often they don't tell the whole story.

I like to be able to depend on my Test Suite to detect unexpected changes, so I can then dig in and find out if this was intentional or if it was a bug.

For example, when I build a Test Suite for a web-based system, I like to capture the non-dynamic text of each page, and compare it to a baseline. If there's a difference, it might mean that I have an intentional change, or it might mean a bug. If it's intentional, then I want to be able to easily update the baseline, so it's ready for the next test run.
  • Create Smoke Tests which run after every build
I see this as one of the basic uses for my Test Automation Suite. I want to be able to quickly run a test after the build, so I can assess whether or not my team should bother to dig in and spend their time testing it. If the system-under-test passes the smoke test, we can proceed. If not, we reject the build until it is fixed.

If the builds occur overnight, I like to be able to schedule this smoke test so that it runs after the build and so that the results are ready for me when I get in the next morning. Sometimes, this allows me to run a larger overnight test and still have the results ready for the morning.
  • Run unattended
It's important that I don't have to sit and watch my Test Suite run, or to help it along. Otherwise, I may not be saving much time. If the Suite can run by itself, overnight, then I can take advantage of hours and machines that might otherwise be unused.
  • Run overnight, and have a report ready the next morning
There are really two parts to this - overnight, and results the next morning. Running overnight allows me to take advantage of "free time". But for this to be effective, I need a good post-run log of what happened during those overnight hours.
  • Automate the boring, repetitive stuff
This is where Test Automation Suites should shine. They should be able to simply run the same things over and over. People get bored doing this, or they get less attentive after they have seen the same thing several times. Automated Scripts don't have this problem.
  • Run predictably and repeatedly
I need confidence that I can run my Test Suite and expect it to run correctly each time. It seems obvious, but this also means that the System-Under-Test needs to be running, along with any parts of the system it depends on. If they are flaky, then I can't depend on being able to run my tests when I need them.

Additionally, I can't have a database that may or may not be available, or may have unpredictable data in it. Ideally, I like to have my Test Suite start with an empty database, and use my scripts to populate it to a known state. If I have to share my database with other testers, I want to be able to focus only on my part of the database, and not have other testers' actions cause my scripts to go awry.
  • Randomize
Almost all scripting languages have a randomize function. This often turns out to be very useful in varying wait times, varying the order that tests are run, and varying the data sent to the System-Under-Test.
  • Perform timings
When I run my Test Suite, I want to time certain actions. I use those timings to detect when things are starting to go awry in my System-Under-Test.

Unexpected changes in timings can point to new bugs, or sometimes just unexpected changes under the covers.
  • Run some load, stress, and volume tests
As part of my suite of Test Automation tools, I need load testing capabilities. Sometimes this can be fulfilled (perhaps only to a small extent) by my Functional Test Automation Suite.
  • Isolate failures easily
My Test Suite needs to provide some way for me to easily isolate where bugs are occurring. I don't want to run a long test that only has a Pass/Fail result. Instead, I want my Suite to tell me where the real failure occurred, as much as possible.
  • Run many tests, in spite of unexpected failures along the way
Some Automated Test Suites are overly sensitive to failures. That is, once a single problem occurs, the rest of the tests fail. What was intended to be a 500-test Suite effectively can only test until the first failure occurs; the rest becomes useless.

But, the reason for running these tests is to find bugs! Hopefully, I can find many of them - not just one!

I want my Test Suite to be able to recover and continue when it encounters a bug or an unexpected situation. This is not always possible, but the chances of continued testing can be greatly enhanced by having each test (or at least each group of tests) reset the System-Under-Test back to a known state and continue. The better Test Suites can do this quite well, while others cannot.
  • Start wide, build depth later
I like to build an evolving set of tests in my Test Suite. When I first start out, I want to cover features lightly, so that I can get at least some coverage in a lot of areas. Later on, I'll go back and add depth in the important areas.

I want a Test Suite that lets me do this simply - create a "small" script which can run and be useful, then later enhance the same script to make it more useful - without having to throw things away and start over.
  • Automate what users do first (Getting Started Manual?)
I like to try to automate important, useful things first. When customers first use the System-Under-Test, I want them to have a good experience. If we have a Getting Started Manual or equivalent Help page, that's often a good place to start.
  • Isolate the maintenance effort
Test Suites are constantly evolving - due to added tests, changing requirements, and changes in the System-Under-Test. I want to be able to maintain these tests without having to constantly throw large chunks away and rewrite them.
  • Produce "readable" scripts
I want lots of people on my QA Team to be able to go in and at least understand what the Test Suite is doing. That's often made simpler by having a scripting language that is readable. It's often aided by having well-commented scripts, too.
  • Ability to reset the environment as needed
I like to have a Test Suite that's able to reboot a machine and continue. I find that's often needed for a really full-featured Suite.

I also like to be able to re-initialize a database, or kill a stuck program or two.
These things allow me to create tests that can survive the unexpected, and run longer without manual intervention.
  • Avoid false failures
If my Test Suite logs a lot of "false failures" then I will be forced to spend a lot of time investigating them, before I can determine if they represent real bugs or not. So, I want a Test Suite that can accurately log an error when there is a real error, and not when there isn't.

Also, when a single failure occurs, I don't want every test after that to fail unnecessarily. To that end, I need my individual Test Cases to be able to set my System-Under-Test to a known state - even if a failure occurred in the previous test.
  • Extensible - since we cannot predict all uses
I never used to think extensibility would be very important. But over time, I find more unanticipated needs for my test tools. So I want my tools to be as flexible as possible, and able to be extended to handle objects that I hadn't anticipated - perhaps non-standard objects that we haven't yet developed.
  • Survive trivial changes to the System Under Test
When minor changes occur in my System-Under-Test, I don't want my Test Suite to decide that every change is a bug. That's why, for example, I avoid full screenshots for verification points. Too many things can change on the screen - many of which are just incidental and don't represent bugs.

I want to be able to create verification points for specific needs of my Test Case, and ignore everything else.
  • Validate during tests, and at the end as appropriate
I want the option to validate aspects of my System-Under-Test as my Test Case runs, and optionally validate at the end of the run as well.

So I may need to "look" at parts of the system at any time, but I don't want my Test Suite to always look at everything.

I may want to drive my System through various actions, and check things along the way.

But sometimes, I just want to drive my System to a particular state, then look at a database export, for example. While it's driving the System, I may not want to validate anything automatically along the way at all.
  • Ability to select and run subsets of the entire test suite
I often build a large, relatively complete regression.

But sometimes, I don't want to run that entire regression - I just want to run a subset. Perhaps I want to quickly verify a fix, or a portion of my System on a new platform, etc.

If I've constructed my Test Suite correctly, it should be simple to select and run just a portion.
  • Ability to select and skip particular tests
It's often necessary to skip particular tests in a Test Automation Suite.

Sometimes it's necessary to skip a test until a bug fix is available. Sometimes the Test itself needs work. Sometimes the code that the Test exercises is being re-built, and running the Test wouldn't be productive.

Skipping tests can sometimes be achieved by commenting out the statement that invokes the test; sometimes there are other methods. Either way, this will happen, so it should be simple.
  • Variable log levels (Verbose, Normal, Minimal)
The ability to log minimally sometimes, and verbosely other times is very useful.
When I run a subset of my full Regression Suite in order to narrow in on the root cause of a bug, I want lots of details in my logs. I want to know pretty much everything that was done, and what was seen along the way.

But when I just run my full nightly Regressions, I usually want much less information - usually just what Test Cases were run, and what errors were found.
  • Minimize dependencies between scripts
Ideally, each script in a Test Suite is independent of all the others. It can run by itself, or in any position within the entire Suite. That's the ideal.

In reality, it can be more efficient to have some dependencies. So, for example, one script initializes the database, then another starts the System-Under-Test, then a third populates the database to a base state.

In general, I don't want strong dependencies if it's not necessary.
  • Minimize learning curve
A QA team changes over time. People leave, or move on to other roles. New people arrive.

I want the QA team to be able to learn how to use the Test Automation Suite in fairly short order. Part of that is hiring the right people. Part of that is using a tool that can be learned relatively quickly, and that is well-documented.
  • Minimize maintenance time
As my System-Under-Test changes, I don't want to spend too much time updating my Test Automation Suite in order to let it run against the new build.

I need to be sure that my Test Suite isn't too brittle - it must not be overly-sensitive to minor changes in the System. But changes inevitably happen, so the Test Suite still must be easy to maintain.
  • Minimize post-run analysis time
If I run my Test Suite overnight, unattended, then I must be able to come in the next morning and quickly understand what ran, and what problems were found. I want that to be simple and quick. I want to be able to quickly see where the errors were, and be able to dig in, write the relevant bug reports, and get on to the rest of the day's work.
  • Minimize dependence on golden machines
While it's not always possible to avoid completely, I don't want my Test Suite to depend on being run only on a particular machine. I want to be able to pick the Test Suite up, and move it to another machine (perhaps several other machines) as needed.

To that end, I want to avoid hard-coded wait times (which may be inappropriate on a faster or slower machine). I also want to place any necessary machine-specific details into variables that can be easily adapted to a new home.
  • Record and Playback capability
I never depend on Record-and-Playback as the sole method of developing a Script or a Test Suite. It's just not possible to develop robust, full-featured Test Suites that way.

On the other hand, a quick recording is very often a nice way to create Script code rapidly; code which can then be edited into the final form.

I've used Test Tools which didn't provide a recording capability. It's still usable, but not nearly as efficient.
