Generating Custom Reports with Quality Center OTA using Python

(co-authored with Joseph Hung)

Visualising manual test execution distribution

The Quality Center (QC) Open Test Architecture (OTA) API is a COM library that enables you to integrate external applications with Quality Center. This article focuses on how you can generate customised reports with the use of OTA and Python.

Although QC provides built-in reports that are customisable with database queries, the default reports are rigid and the way they present the data may not suit your needs. In some cases, you may also want to manipulate the data yourself, for example to build a search library from the data retrieved from QC, allowing you to quickly search a project for the automated test associated with a manual test, or vice versa.

In our implementation, we chose Python as the language for driving OTA because manipulating complex data structures is faster in Python than in VBScript.

Preparation

1. Install Python for Windows. You can find the installer on the Python website at http://www.python.org/getit/windows/ . You will also need the pywin32 package, which provides the win32com module used below.

2. Install the OTA COM library (TDConnect.exe).

Script breakdown

Below are snippets of Python code used to grab data from QC. The individual steps are explained.

1.  Import the necessary libraries.

  • win32com – gives us access to the TDConnection COM object that we use to communicate with QC.
  • codecs – allows us to write the output file in a specific text encoding.
  • re – allows us to use regular expressions.
  • json – allows us to dump the data in JSON format.

import win32com
from win32com.client import Dispatch
import codecs
import re
import json

2.  Initialise the variables. These include the QC server URL, login credentials, QC domain, QC project names, and the output file name.

# Login Credentials
qcServer = "http://qcserver/qcbin/"
qcUser = "user"
qcPassword = "password"
qcDomain = "MyDomain"
projects = ["Project1","Project2"]
DataFile = "data.js"

3.  Connecting to QC. The most important calls are InitConnectionEx, Login, and Connect.

testdict = {}
for project in projects:
  # Do the actual login
  td = Dispatch("TDApiOle80.TDConnection.1")
  td.InitConnectionEx(qcServer)
  td.Login(qcUser, qcPassword)
  td.Connect(qcDomain, project)
  if td.Connected:
    print("System: Logged in to " + project)
  else:
    print("Connect failed to " + project)

4.  Once you have the td connection, you can grab pretty much any statistics you want from QC.

In the example below, we access the RunFactory object to retrieve all the test runs. For each run, we collect statistics on run duration, total runs, and average duration, separated by project. In addition, we calculate the total time taken to complete each test set.

  runFactory = td.RunFactory
  testFactory = td.TestFactory
  testFilter = testFactory.Filter
  # Collect per-test-set statistics for this project
  testdict[project] = {}

  for run in runFactory.NewList(""):
    testset = run.TestSetId
    # Look up the name of the test this run belongs to
    try:
      testFilter.SetFilter("TS_TEST_ID", run.TestId)
      test = testFactory.NewList(testFilter.Text).Item(1).Name
    except Exception:
      print("Something wrong with Test Id: " + str(run.TestId))
      continue
    # Initialise the test set and test entries the first time we see them
    if testset not in testdict[project]:
      testdict[project][testset] = {"Tests": {}, "TotalDuration": 0, "TotalRun": 0}
    tests = testdict[project][testset]["Tests"]
    if test not in tests:
      tests[test] = {"Tot_Duration": 0, "Tot_Runs": 0, "Avg_Duration": 0}
    # Accumulate the duration statistics for the test and its test set
    duration = run.Field("RN_DURATION")
    if duration is not None:
      tests[test]["Tot_Duration"] += duration
      tests[test]["Tot_Runs"] += 1
      tests[test]["Avg_Duration"] = tests[test]["Tot_Duration"] / tests[test]["Tot_Runs"]
      testdict[project][testset]["TotalDuration"] += duration
      testdict[project][testset]["TotalRun"] += 1

5. Finally, clean up! We disconnect from td, dump the data out as JSON assigned to a JavaScript variable, and close the file handle.

  if td.Connected:
    td.Disconnect()
    td.Logout()
    print("System: Logged out from " + project)
  td = None
# Export the data as JavaScript that assigns the JSON object to a variable
fh = codecs.open(DataFile, 'w', encoding="utf-8")
fh.write("var executionData = " + json.dumps(testdict) + ";")
fh.close()

For more information about the OTA API, refer to the OTA API Reference document.

One application of the Python OTA script is visualising the distribution of manual test executions. In a large project with many testers, it is hard to see which manual tests are executed most often, and therefore hard to identify good candidates for automation. Using the code above, we produced two sets of data: the execution data, with run duration, total runs, and average duration; and the path data, with each test set ID and its path in the Test Lab.
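
The snippets above produce only the execution data; the path data needs a separate pass. Below is a minimal sketch of how path.js could be generated while td is still connected to a project, assuming each TestSet exposes its Test Lab location through the OTA TestSetFolder object and its Path property (verify these against your QC version).

# Sketch only: walk the TestSetFactory and record each test set's
# location in the Test Lab tree, then export it like the execution data.
# Run this while td is still connected to a project.
pathdict = {}
for ts in td.TestSetFactory.NewList(""):
  try:
    pathdict[ts.ID] = ts.TestSetFolder.Path + "\\" + ts.Name
  except Exception:
    print("Could not resolve the path for test set " + str(ts.ID))
ph = codecs.open("path.js", 'w', encoding="utf-8")
ph.write("var pathData = " + json.dumps(pathdict) + ";")
ph.close()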

Execution Data (data.js)

Path Data (path.js)

JavaScript can then pick up the exported data and massage it for display. The path data is mainly used to build the navigation tree, while the execution data is used to draw the chart. Clicking a node in the navigation pane updates the distribution chart on the right of the screen.

The exported data is JavaScript code which assigns the JSON object to a variable.

// data.js
var executionData = {"Project_Name": {"8192": {"TotalRun": 0, "Tests": {}, "TotalDuration": 0}, "8193": {"TotalRun": 0, "Tests": {}, … }}};

The executionData variable can then be accessed directly from JavaScript.

jQuery's $.getScript can then be used to load the JS files directly, without manually evaluating the code:

$.getScript("data.js", function(data, textStatus, jqxhr) {
    $.getScript("path.js", function(data, textStatus, jqxhr) {
        // Process and display the chart …
    });
});

Looking at the chart, the tests with the highest total run count and total run duration are good candidates for automation. After identifying these tests, verify the findings with the test owners, then discuss among testers and team leaders to prioritise the automation effort accordingly.
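
If you want a quick shortlist without drawing a chart, the same data can be ranked directly in Python. The helper below is a hypothetical addition, not part of the original script; it assumes the testdict layout built in step 4.

# Hypothetical helper: rank every test by total runs, then total duration,
# and print the top ten automation candidates.
candidates = []
for project, testsets in testdict.items():
  for testset, info in testsets.items():
    for name, stats in info["Tests"].items():
      candidates.append((stats["Tot_Runs"], stats["Tot_Duration"], project, name))
candidates.sort(reverse=True)
for runs, duration, project, name in candidates[:10]:
  print("%s / %s: %s runs, %s total duration" % (project, name, runs, duration))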

4 comments

Francis Miles

Hi. Such a great article, but there seems to be some missing code: the path.js part is missing completely. Could you possibly show us? Also, how did you generate the graphs? That would be a great addition for me. Thanks.

The code is really nice; it's working perfectly well for my requirement. One thing I wanted to ask: after test case execution is finished, I want to attach a result file to the test case. Currently I am doing it with the code below:
attachmentPath = test.Attachments
nowAttachment = attachmentPath.AddItem(None)
nowAttachment.FileName = file_name
nowAttachment.Type = 1
nowAttachment.Post()

But the problem is that even if the file name is the same, it does not overwrite the existing file; instead it creates a new file. How can I fix this? Thank you.

Can we programmatically fetch the already existing reports in Quality Center (10.0)? We have all the reports defined and have to publish them daily onto Confluence pages. Any help regarding this would be deeply appreciated.

Could you please advise me on how to implement a similar script to automatically generate a test status report? As a result I need a report that includes the following fields: "test_name", the status of the test ("pass", "fail", "not complete", "not run"), and "tester_name".

I look forward to your advice.

Best regards,
Lena
