Build Your qTest Manager Repository: Test Design and Test Execution

qTest Manager provides considerable flexibility in how you organize your Test Cases. As a user, you can change their structure from project to project and from Release to Release.

As a rule of thumb, organize Test Design so that it is easy to link Test Cases to Requirements for traceability and to reuse Test Cases in the future. Build your Test Execution structure so that it is easy to coordinate activities between different teams and to look back at test log history. Ideally, structure Test Runs using the available hierarchy of Releases, Test Cycles, and Test Suites so that testing metrics are easy to analyze.
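If you automate repository setup, the same Release, Test Cycle, and Test Suite hierarchy can be created through the qTest Manager REST API. The sketch below is illustrative only: the base URL, token, and project ID are placeholders, and the endpoint paths, parameters, and payload fields are assumptions modeled on qTest's public API, so verify them against the official API documentation before use.

    import requests

    # Placeholders -- substitute your own qTest instance, access token, and project ID.
    BASE = "https://yourcompany.qtestnet.com/api/v3/projects/12345"
    HEADERS = {"Authorization": "Bearer <access-token>"}

    # Create a Release at the top of the Test Execution hierarchy.
    release = requests.post(f"{BASE}/releases", headers=HEADERS,
                            json={"name": "Release 1.0"}).json()

    # Nest a Test Cycle under the Release (parentId/parentType are assumed
    # to control where the object lands in the hierarchy).
    cycle = requests.post(f"{BASE}/test-cycles", headers=HEADERS,
                          params={"parentId": release["id"], "parentType": "release"},
                          json={"name": "System Testing"}).json()

    # Nest a Test Suite under the Test Cycle.
    suite = requests.post(f"{BASE}/test-suites", headers=HEADERS,
                          params={"parentId": cycle["id"], "parentType": "test-cycle"},
                          json={"name": "Feature A"}).json()

    print("Created hierarchy:", release["id"], cycle["id"], suite["id"])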

Waterfall Methodology

Projects that follow a waterfall methodology have longer, less frequent Releases with multiple testing phases. For example, you might have a round of unit testing, integration testing, system testing, and user acceptance testing. The examples below show two ways to organize a repository for this type of development methodology.

By Testing Stage

In Test Design, organize Test Cases into folders by testing stage and then by product feature to make it easy to maintain tests specific to each stage that may need to be repeated in future Releases. In Test Execution, use Test Cycles to differentiate between the testing stages that need to occur for a certain Release. This approach can be useful when different teams are responsible for executing each testing phase.

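For example, a structure following this approach might look like the outline below (stage, feature, and Release names are illustrative):

    Test Design
        System Testing
            Feature A
            Feature B
        User Acceptance Testing
            Feature A
            Feature B

    Test Execution
        Release 1.0
            System Testing (Test Cycle)
                Feature A (Test Suite)
                Feature B (Test Suite)
            User Acceptance Testing (Test Cycle)
                Feature A (Test Suite)
                Feature B (Test Suite)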

By Feature Set

Another approach is to organize Test Design folders by product feature. Within each feature folder, use subfolders to organize Test Cases for the different testing phases. In Test Execution, you can separate by feature set and then by testing phase. This approach is particularly useful if different teams are responsible for creating and maintaining the Test Cases for each product feature, or if it is helpful to drill down into your Test Execution metrics by testing phase.

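For example (again with illustrative names):

    Test Design
        Feature A
            System Testing
            User Acceptance Testing
        Feature B
            System Testing
            User Acceptance Testing

    Test Execution
        Release 1.0
            Feature A (Test Cycle)
                System Testing (Test Suite)
                User Acceptance Testing (Test Suite)
            Feature B (Test Cycle)
                System Testing (Test Suite)
                User Acceptance Testing (Test Suite)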

Agile Methodology

qTest Manager is ideal for Agile methodologies because of its flexibility in organizing Releases and Sprints. In an Agile or Scrum environment, you have more frequent Releases, each consisting of multiple Sprints, and each Sprint may contain several user stories targeted for development. Tests may need to be tracked by Release or by Sprint, as shown in the examples below.

By Release

In Test Design, organize your Test Cases by product feature. In Test Execution, use Releases to divide the testing activities.

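An illustrative structure for this approach:

    Test Design
        Feature A
        Feature B

    Test Execution
        Release 1.0
            Feature A (Test Suite)
            Feature B (Test Suite)
        Release 2.0
            Feature A (Test Suite)
            Feature B (Test Suite)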

By Sprint

Another approach is to organize Test Execution by Sprint: create a Release object (the highest-level object in the Test Plan) for each Sprint. This approach can be helpful for customers who have very long Releases with many Sprints, and when many user stories need to be fully tested within each Sprint.

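An illustrative structure for this approach (Sprint and story names are placeholders):

    Test Execution
        Sprint 1 (created as a Release)
            Story 101 (Test Suite)
            Story 102 (Test Suite)
        Sprint 2 (created as a Release)
            Story 103 (Test Suite)
            Story 104 (Test Suite)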

Complex System with Multiple Products

To track testing for a complex system made up of multiple products, each with its own development and testing team, we suggest considering the following options.

Define a separate project for each product

This approach is useful if different teams track progress for their own products and each team requires its own metrics. This may also make sense if your external ALM, like Jira, has separate projects for each product.

Define one project for multiple products

We recommend this option when the products being tested share the same Release schedule. For example, the Mobile and Web products may be managed by different teams, but it is easy to track the activities of both groups in one project if the teams are on the same Release schedule. In Test Execution, you can use the Test Cycle and Test Suite objects to separate one product from another. This approach can also be used when a single project management group oversees testing across these products.

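For example, both products can live under one Release (names are illustrative):

    Test Execution
        Release 1.0
            Mobile (Test Cycle)
                Feature A (Test Suite)
                Feature B (Test Suite)
            Web (Test Cycle)
                Feature A (Test Suite)
                Feature B (Test Suite)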

Testing Across Multiple Browsers

When testing across multiple browsers, such as Internet Explorer and Chrome, you may want to track differences between the browsers while still being able to trace and compare how each browser's results vary from Release to Release. Two options are described below.

Separate Test Cases for each browser

One option is to organize your Test Cases by browser type, which means you have multiple Test Cases for a single test. This approach is useful when the steps to execute the test vary significantly between browsers (for example, the steps to execute a Test Case on Internet Explorer are very different from the steps to execute the same test on Chrome).

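For example (folder and Test Case names are illustrative):

    Test Design
        Login
            Verify login - Internet Explorer
            Verify login - Chrome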

Test Run Configurations for Execution

For the greatest efficiency, we highly encourage using the Test Run Configurations feature. It allows you to keep a single, browser-agnostic Test Case and create multiple Test Runs from it, one Test Run for each browser. This approach is useful when the differences between browsers are minimal: in Test Design, you only need to create and maintain a single test, while in Test Execution, multiple Test Runs confirm that each browser has been properly tested.
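As a rough sketch of how this could be scripted, the loop below creates one Test Run per browser from a single Test Case via the qTest REST API. As in the earlier sketch, the endpoint path, parameters, and payload fields are assumptions modeled on qTest's public API, and all IDs are placeholders; verify against the official API documentation.

    import requests

    BASE = "https://yourcompany.qtestnet.com/api/v3/projects/12345"  # placeholder
    HEADERS = {"Authorization": "Bearer <access-token>"}             # placeholder
    TEST_CASE_ID = 2001  # the single, browser-agnostic Test Case (placeholder)
    SUITE_ID = 3001      # target Test Suite in Test Execution (placeholder)

    # One Test Run per browser, all pointing at the same Test Case.
    for browser in ("Internet Explorer", "Chrome"):
        run = requests.post(
            f"{BASE}/test-runs",
            headers=HEADERS,
            params={"parentId": SUITE_ID, "parentType": "test-suite"},
            json={"name": f"Verify login - {browser}",
                  "test_case": {"id": TEST_CASE_ID}},
        ).json()
        print(run["id"], run["name"])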

For more information about adding Test Runs, refer to Add Test Runs Based on Test Cases.

Customer Specific/Customized Releases

Another factor to consider when organizing your Test Case repository is whether you deliver customer-specific or customized Releases. Here it is important to balance reusing common Test Cases for efficiency against tracking functional differences between customers. Two approaches can be used.

One Project per Product for Multiple Customers

The first approach is to define one project per product and use separate Releases for each customer. This approach is useful if there are very few Releases per customer and the Test Cases vary only minimally across customers.

One Project per Customer

Another approach is to use a separate project per customer. This option is especially useful if you have different Release schedules for each customer or if the application under test is fundamentally different for each customer (for example, each customer receives different features).