Generating custom reports with Quality Center OTA using Python


Visualising manual test execution distribution

The Quality Center (QC) Open Test Architecture (OTA) API is a COM library that enables you to integrate external applications with Quality Center. This article focuses on how you can generate customised reports with the use of OTA and Python.

Even though QC provides built-in reports that can be customised with database queries, the default reports are rigid and the way the data is presented may not suit your needs. In some cases, you may also want to manipulate the data further, for example, to build a search library from the data retrieved from QC that lets you quickly find the automated test associated with a manual test, or vice versa.
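As a toy illustration of such a search library, assuming hypothetical test names and that the manual-to-automated links have already been retrieved from QC:

```python
# Hypothetical data: each manual test paired with the automated test that
# covers it. In practice these pairs would be retrieved from QC via OTA.
links = [
    ("Login - valid credentials", "auto_login_valid"),
    ("Login - locked account", "auto_login_locked"),
    ("Checkout - empty cart", "auto_checkout_empty"),
]

# Build lookups in both directions so either name can be searched.
manual_to_auto = {manual: auto for manual, auto in links}
auto_to_manual = {auto: manual for manual, auto in links}

print(manual_to_auto["Login - locked account"])  # -> auto_login_locked
print(auto_to_manual["auto_checkout_empty"])     # -> Checkout - empty cart
```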

In our implementation, we decided to use OTA with Python as the language of choice because it is faster for manipulating complex data structures than VBScript.


1. Install Python for Windows. The installer is available from the Python website.

2. Install the OTA COM library (TDConnect.exe).

Script breakdown

Below are snippets of Python code used to grab data from QC. The individual steps are explained.

1.  Import the necessary libraries.

  • win32com - Provides the TDConnect COM object that we will use to communicate with QC.
  • codecs - Allows us to write text in various encodings.
  • re - Allows us to use regular expressions.
  • json - Allows us to dump data in JSON format.


import win32com
from win32com.client import Dispatch
import codecs
import re
import json

2.  Initialising variables. These include the URL to QC, credentials, QC project names, and QC domain.

# Login Credentials
qcServer = "http://qcserver/qcbin/"
qcUser = "user"
qcPassword = "password"
qcDomain = "MyDomain"
projects = ["Project1","Project2"]
DataFile = "data.js"

3.  Connecting to QC. The key calls are InitConnectionEx (point the API at the server), Login, and Connect (open the project).

testdict = {}
for project in projects:
  # Do the actual login
  td = win32com.client.Dispatch("TDApiOle80.TDConnection.1")
  td.InitConnectionEx(qcServer)
  td.Login(qcUser, qcPassword)
  td.Connect(qcDomain, project)
  if td.Connected == True:
    print "System: Logged in to " +project
  else:
    print "Connect failed to " +project

4.  Once you have the td connection, you can pretty much grab any stats you want from QC.

In our example below, we access the RunFactory object to grab all the test runs. Once we have the runs, we gather statistics on run duration, total runs, and average duration, separated by individual project. In addition, we also calculate the time taken to complete a test set.
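Stripped of the OTA plumbing, the per-test statistics reduce to a running total, a run count, and their ratio. A minimal sketch with made-up run records (in the real script the duration comes from the RN_DURATION run field):

```python
# Hypothetical run records: (test name, duration in seconds); None means
# the run had no recorded duration, as can happen with RN_DURATION in QC.
runs = [("Test A", 120), ("Test A", 180), ("Test B", None), ("Test B", 60)]

stats = {}
for test, duration in runs:
    if duration is None:
        continue  # skip runs without a recorded duration
    entry = stats.setdefault(test, {"Tot_Duration": 0, "Tot_Runs": 0})
    entry["Tot_Duration"] += duration
    entry["Tot_Runs"] += 1
    entry["Avg_Duration"] = entry["Tot_Duration"] / entry["Tot_Runs"]

print(stats["Test A"])
# -> {'Tot_Duration': 300, 'Tot_Runs': 2, 'Avg_Duration': 150.0}
```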


  runFactory = td.RunFactory
  runFilter = runFactory.Filter
  testFactory = td.TestFactory
  testFilter = testFactory.Filter
  # Query the Test Factory
  testdict[project] = {}

  for run in runFactory.NewList(""):
    testset = run.TestSetId
    try:
      # Look up the name of the test behind this run
      testFilter.SetFilter("TS_TEST_ID", run.TestId)
      test = testFactory.NewList(testFilter.Text).Item(1).Name
    except:
      print "Something wrong with Test Id: " + str(run.TestId)
      continue
    # Initialise counters the first time a test set or test is seen
    if testset not in testdict[project]:
      testdict[project][testset] = {"Tests": {}, "TotalDuration": 0, "TotalRun": 0}
    tests = testdict[project][testset]["Tests"]
    if test not in tests:
      tests[test] = {"Tot_Duration": 0, "Tot_Runs": 0, "Avg_Duration": 0}
    if run.Field("RN_DURATION") is not None:
      tests[test]["Tot_Duration"] += run.Field("RN_DURATION")
      tests[test]["Tot_Runs"] += 1
      tests[test]["Avg_Duration"] = tests[test]["Tot_Duration"]/tests[test]["Tot_Runs"]
      testdict[project][testset]["TotalDuration"] += run.Field("RN_DURATION")
      testdict[project][testset]["TotalRun"] += 1

5. Finally, clean up! We disconnect from td and close the file handler after dumping data out in JSON format.

  if td.Connected == True:
    td.Disconnect()
    td.Logout()
    print "System: Logged out from " +project
  td = None
fh = codecs.open(DataFile, 'w', encoding="utf-8")
# Prefix the dump with a variable assignment so the file loads as JavaScript
fh.write("var executionData = " + json.dumps(testdict))
fh.close()
fh = None

For more information about the OTA API, refer to the OTA API Reference document.

One application of the Python OTA script is visualising manual test execution distribution data. In a large project with many testers, it is hard to identify which manual tests are executed most often, which in turn makes it harder to identify good candidates for automation. Using the above code, we produced two sets of data: the execution data, with run duration, total runs, and average duration; and the path data, with test set IDs and their paths in Test Lab.

Execution Data (data.js)

Path Data (path.js)


JavaScript can pick up the exported data and do data massaging. Path data is mainly used for building the navigation tree, while the execution data is used to draw the chart. Clicking on a node in the navigation pane would update the distribution chart on the right of the screen.
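The tree-building step can be sketched in Python too; a minimal example with made-up Test Lab paths (the real path.js maps QC test set IDs to their folder paths):

```python
# Hypothetical path data: test set ID -> its folder path in Test Lab.
paths = {
    "8192": "Root/Release1/Smoke",
    "8193": "Root/Release1/Regression",
    "8194": "Root/Release2/Smoke",
}

# Fold the flat paths into a nested dict; leaves collect test set IDs.
tree = {}
for testset_id, path in paths.items():
    node = tree
    for part in path.split("/"):
        node = node.setdefault(part, {})
    node.setdefault("_testsets", []).append(testset_id)

print(sorted(tree["Root"]["Release1"].keys()))  # -> ['Regression', 'Smoke']
```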

The exported data is JavaScript code which assigns a variable to the JSON object.

// Data.js
var executionData = {"Project_Name": {"8192": {"TotalRun": 0, "Tests": {}, "TotalDuration": 0}, "8193": {"TotalRun": 0, "Tests": {}, …}

The executionData variable can then be accessed directly from JavaScript.
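On the Python side, producing such a file amounts to prefixing the JSON dump with a variable assignment; a minimal sketch (the file and variable names follow the example above):

```python
import json

# A small stand-in for the testdict built by the OTA script.
data = {"Project_Name": {"8192": {"TotalRun": 0, "Tests": {}, "TotalDuration": 0}}}

with open("data.js", "w", encoding="utf-8") as fh:
    # Write valid JavaScript: a single var assignment holding the JSON object.
    fh.write("var executionData = " + json.dumps(data) + ";")

# The resulting file can be loaded with a plain <script> tag or $.getScript().
```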

jQuery can then be used to load the JS files directly without evaluating the code by hand.

$.getScript("data.js", function(data, textStatus, jqxhr) {
    $.getScript("path.js", function(data, textStatus, jqxhr) {
        // … Process and display chart …
    });
});

Looking at the chart, tests with a higher “Total Run Count” and a higher “Total Run Duration” are good candidates for automation. After identifying these tests, verify the findings with the test owners, then discuss among testers and team leaders to prioritise the automation effort accordingly.
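The same ranking can be done programmatically; a minimal sketch over data shaped like the export above (the figures are made up):

```python
# Hypothetical per-test statistics in the same shape as the exported data.
tests = {
    "Login smoke":    {"Tot_Runs": 40, "Avg_Duration": 300},
    "Rare edge case": {"Tot_Runs": 2,  "Avg_Duration": 900},
    "Checkout flow":  {"Tot_Runs": 25, "Avg_Duration": 600},
}

# Total manual effort per test = runs x average duration; the tests
# consuming the most manual time are the best automation candidates.
ranked = sorted(tests,
                key=lambda t: tests[t]["Tot_Runs"] * tests[t]["Avg_Duration"],
                reverse=True)

print(ranked)  # -> ['Checkout flow', 'Login smoke', 'Rare edge case']
```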

Tech tips from JDS