(co-authored with Huan Nguyen)

If you are looking to run your regression suite more quickly, this article shows you some tips and tools you can use to gain better control of, and visibility into, your HP QuickTest Pro (QTP) test execution. Unfortunately, HP Quality Center (QC) has several limitations that leave some of the challenges we faced as automation testers unaddressed. This Tech Tip will show you how to work around these limitations.

QC has a feature that allows automated HP QuickTest Pro (QTP) test scripts to be run on multiple hosts concurrently, similar to the illustration below. The QC and QTP discussed here are both version 10.

QC does a good job of instructing QTP to execute specific scripts on specific workstations. However, by default it does not address the issues below:

  1. How can I run a test using only a subset of the data?
  2. How do I know which data set is being used by each test on each workstation at the moment?
  3. How can I quickly distribute a single data set to run on various hosts?

The above-mentioned issues are most apparent when you are doing data creation with QC and QTP, but they are also applicable to regression testing.

1. Selecting unique test data by specifying test row

To address the data selection issue, we need a framework that lets users choose the test data they want to use. In our implementation, we first added a new user field for test instances in QC, called TestRow. We then customised the workflow to pass the TestRow value to QTP’s test input parameter TestArgs. Refer to the QC Admin Guide for how to pass data from QC to QTP by customising the workflow.

Once this has been set up, it is as simple as using the code below to extract the TestRow value in QTP.

' Retrieve the TestRow value passed from QC via the TestArgs input parameter
Dim strTestArgs
strTestArgs = TestArgs("TestRow")

From our set-up, TestRow accepts the following value formats:

  • 120 – run row 120 only.
  • 1;3;5 – run rows 1, 3 and 5.
  • 1-5; – run rows 1 to 5.
  • 1;3;6-10 – run rows 1, 3 and 6 to 10.

By specifying which row(s) to run, users can flexibly select the unique data they have prepared. Each row represents a record.
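In our framework this parsing is done in VBScript inside QTP; as an illustration only, the logic for expanding a TestRow value into concrete row numbers can be sketched in Python (the function name `parse_test_rows` is ours, not part of QTP):

```python
def parse_test_rows(spec):
    """Expand a TestRow spec such as "1;3;6-10" into a sorted list of row numbers."""
    rows = set()
    for part in spec.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate a trailing ";" as in "1-5;"
        if "-" in part:
            start, end = part.split("-")
            rows.update(range(int(start), int(end) + 1))  # inclusive range
        else:
            rows.add(int(part))
    return sorted(rows)

print(parse_test_rows("1;3;6-10"))  # → [1, 3, 6, 7, 8, 9, 10]
```

Using a set means overlapping entries such as "1-5;3" are de-duplicated automatically.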

2. Monitoring the test runs on individual hosts

Now that we have automated scripts running with different test rows on various hosts, logging onto every host to verify that the scripts are running as they should can be time-consuming. Application and system resource constraints may also prevent us from logging in to each system or application to view the test run status. For example, it is usually not possible to open concurrent RDP sessions to the same machine, and some enterprise software restricts users to a single session. With limited resources, you might also be borrowing machines from colleagues who would prefer to keep their credentials confidential.

We can save time and reduce our reliance on others by implementing a test run monitoring tool. Useful information to display includes:

  • Host name
  • User name
  • Current data row
  • Last updated time
  • Percentage to completion

To track the test runs, this tool periodically reads the test run status from a file that the automated scripts update via the test framework. Below is a screenshot of what we have implemented.
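A minimal sketch of this status-file idea, in Python for illustration (the file name, CSV format, and field names are our assumptions; the actual framework is implemented in VBScript):

```python
import csv
import datetime
import getpass
import socket

STATUS_FILE = "run_status.csv"  # hypothetical shared status file

def write_status(current_row, total_rows):
    """Called by the test framework after each iteration to publish progress."""
    with open(STATUS_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["host", "user", "current_row", "last_updated", "pct_complete"])
        writer.writerow([
            socket.gethostname(),
            getpass.getuser(),
            current_row,
            datetime.datetime.now().isoformat(timespec="seconds"),
            round(100.0 * current_row / total_rows, 1),
        ])

def read_status():
    """Called periodically by the monitoring tool to display progress."""
    with open(STATUS_FILE, newline="") as f:
        return list(csv.DictReader(f))
```

In practice each host would write to its own file on a network share, and the monitor would poll all of them to build the consolidated view.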

With better visibility into how tests are performing, we can quickly step in to fix any issues when a test fails, or quickly kick off new tests when resources become available.

3. Test data distribution

What if you want to distribute the test data over several hosts to expedite test execution? In line with what we have discussed on TestRow, we can divide the data sets onto different hosts. For example:

It is easy to configure each host to run an evenly distributed load when the row count is divisible by the number of hosts. However, if you have, say, 678 rows of test data starting from row 17, dividing the load evenly becomes cumbersome. Hence, we wrote an Excel macro to aid us in distributing the test data. There are two parts to this macro: the first generates data ranges for each host; the second re-executes failed iterations (very useful for data generation tasks).

Configure new test ranges

Observe the numbers we entered in the green cells below. We divided the load evenly for test rows 17 to 678 across 8 hosts. Clicking the “New Range” button generates the configuration for us.
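The macro itself is Excel VBA; the range-splitting logic it implements can be sketched in Python (our illustration, not the macro’s actual code). Any remainder rows are spread one each over the first few hosts so the loads differ by at most one row:

```python
def split_ranges(first_row, last_row, hosts):
    """Divide rows first_row..last_row (inclusive) as evenly as possible
    across the given number of hosts, returning one "start-end" TestRow
    spec per host."""
    total = last_row - first_row + 1
    base, extra = divmod(total, hosts)  # first `extra` hosts take one extra row
    specs, start = [], first_row
    for i in range(hosts):
        size = base + (1 if i < extra else 0)
        specs.append(f"{start}-{start + size - 1}")
        start += size
    return specs

print(split_ranges(17, 678, 8))
```

For the example above (rows 17 to 678 on 8 hosts), 662 rows split into six hosts of 83 rows and two of 82, starting with the spec 17-99 and ending with 597-678.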

Configure to re-run failed iterations

On top of generating the test run configurations, the macro can also go through test run results to pick up all the failed runs. This allows us to re-run all the failed iterations. First, a snippet of the results:

In this example, we had 3630 Purchase Orders (PO) that we wanted to create, testing the functionality of PO creation. Of those we ran, 220 failed. Picking up every single failed test scenario and constructing the test run configuration manually would be painful. Based on the configuration in yellow, the macro picks up the test row and status when you click the “Update From Result” button and generates the configuration automatically. This is illustrated below.
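The core of this step is collapsing the failed rows back into a compact TestRow spec. Again as an illustration only (the macro is VBA; the function name, the `(row, status)` input shape, and the "Failed" status label are our assumptions), the logic can be sketched in Python:

```python
def failed_rows_to_spec(results):
    """Given (row, status) pairs from a results sheet, collapse the failed
    rows into a compact TestRow spec such as "1;3;6-10"."""
    failed = sorted(row for row, status in results if status == "Failed")
    parts, i = [], 0
    while i < len(failed):
        j = i
        while j + 1 < len(failed) and failed[j + 1] == failed[j] + 1:
            j += 1  # extend the run of consecutive failed rows
        parts.append(str(failed[i]) if i == j else f"{failed[i]}-{failed[j]}")
        i = j + 1
    return ";".join(parts)

results = [(1, "Failed"), (2, "Passed"), (3, "Failed"),
           (6, "Failed"), (7, "Failed"), (8, "Failed")]
print(failed_rows_to_spec(results))  # → 1;3;6-8
```

The resulting spec can be pasted straight back into the TestRow field to re-run only the failed iterations.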

The file can be downloaded via the link Test_Configuration_Generator.

Depending on the QC-QTP versions and framework implementation you have, the suggestions above may or may not apply to your specific environment. However, if you are facing the same challenges we did, we have found that this design and these tools increased our effectiveness in managing test execution. Hopefully you will find them useful in your set-up as well.

JDS are experts in managing and driving efficiency in large Quality Center deployments. Contact us to discuss how we can help you!
