Test Quality Audit app

This app finds the objects that have changed in a Pipeline’s Analysis system in the last 3 months, and identifies a minimal set of used, most-at-risk executables that would need to be tested in order to cover the changing objects. Used objects are retrieved from the Pipeline’s Usage system.

The most-at-risk objects are matched against the Pipeline’s Most-at-risk Search Test Repositories to identify hits and gaps.

  • Hits are most-at-risk object names for which test assets have been found.

  • Gaps are most-at-risk object names for which there are no available test assets.
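
Conceptually, hits and gaps partition the most-at-risk set by whether a matching test asset exists. The following Python sketch is illustrative only; the object names and variables are assumptions, not part of the app:

    # Illustrative only: hits and gaps as set operations over object names.
    most_at_risk = {"VA01", "ME21N", "ZSALES_REPORT"}  # most-at-risk executables (hypothetical)
    tested = {"VA01", "FB01"}                          # objects matched in the Test Repositories

    hits = most_at_risk & tested   # have at least one test asset
    gaps = most_at_risk - tested   # have no test assets

    print(sorted(hits))  # ['VA01']
    print(sorted(gaps))  # ['ME21N', 'ZSALES_REPORT']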

The depth used by the app to search for referenced objects may be set by an Administrator in the Configuration - Impact Analysis screen’s ImpactAnalysisDepth field. If this field is not set, a default value of 10 is used.

By default, table content changes are included in the changing objects analyzed by the Test Quality Audit app. However, an Administrator may exclude table content changes from the analysis by setting the DisableTableContentAnalysis field to X in the Configuration - Impact Analysis screen.
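
Taken together, the two settings bound the impact analysis: ImpactAnalysisDepth limits how many levels of object references the search follows, and DisableTableContentAnalysis removes table content changes from the starting set. The following Python sketch of a depth-limited traversal is an illustration under assumed data structures, not LiveCompare code:

    from collections import deque

    # Hypothetical stand-ins for the Configuration - Impact Analysis settings.
    IMPACT_ANALYSIS_DEPTH = 10               # default when ImpactAnalysisDepth is not set
    DISABLE_TABLE_CONTENT_ANALYSIS = False   # True when the field is set to X

    def impacted_objects(changed, links):
        """Breadth-first search over an object links graph, limited to
        IMPACT_ANALYSIS_DEPTH levels. 'changed' is a set of (type, name)
        pairs; 'links' maps an object to the objects that reference it.
        Returns the changed objects plus everything that references them,
        directly or indirectly, up to the depth limit."""
        if DISABLE_TABLE_CONTENT_ANALYSIS:
            # "TABLE_CONTENT" is a hypothetical type label, not an SAP code.
            changed = {obj for obj in changed if obj[0] != "TABLE_CONTENT"}
        seen = set(changed)
        queue = deque((obj, 0) for obj in changed)
        while queue:
            obj, depth = queue.popleft()
            if depth >= IMPACT_ANALYSIS_DEPTH:
                continue
            for ref in links.get(obj, ()):
                if ref not in seen:
                    seen.add(ref)
                    queue.append((ref, depth + 1))
        return seen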

DevOps categories

Testing

Prerequisites

The Test Quality Audit app uses a Pipeline to identify:

  • The Analysis system from which to retrieve changed objects.

  • An External Data Source with TYPE and NAME columns containing a list of business critical objects. These objects will be identified as most-at-risk if they are impacted by one or more changing objects. If the Business Critical Objects field in the Pipeline is not set, the Test Quality Audit app will use the Business Critical Objects External Data Source.

  • One or more Most-at-risk Search Test Repositories that will be searched to find test assets that match the most-at-risk executables.

Tester Business Critical

If the Pipeline’s Tester Business Critical checkbox is checked, the Pipeline’s Search Test Repositories are searched to find used objects that have associated tests. These objects are treated as business critical, so that impacted objects that have associated tests will be identified as most-at-risk.
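
In effect, the checkbox widens the business critical set with used objects that already have associated tests. A minimal sketch of the rule, assuming hypothetical inputs (only the business critical contribution to the most-at-risk set is shown):

    # Hypothetical inputs; not LiveCompare APIs.
    used_with_tests = {"VA01", "FB01"}    # used objects with tests in the Search Test Repositories
    business_critical = {"ME21N"}         # from the Business Critical Objects data source
    impacted = {"VA01", "ME21N", "XK01"}  # objects impacted by recent changes

    tester_business_critical = True       # the Pipeline checkbox
    if tester_business_critical:
        business_critical |= used_with_tests

    # Impacted business critical objects are flagged as most-at-risk.
    print(sorted(impacted & business_critical))  # ['ME21N', 'VA01']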

Before the Test Quality Audit app is run, you must create a Pipeline that includes the RFC Destinations and Test Repositories to be used in the analysis.

If a support pack or transport has not been applied to the Analysis system, it must be disassembled before the Test Quality Audit app can analyze it. This can be done in SAP by running the SAINT transaction and selecting ‘Disassemble OCS Package’ from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the Package Disassembler app.

The app requires that SAP’s Where-Used indexes be up to date on the Analysis system.

A LiveCompare Editor will need to run the Create Object Links Cache workflow from the Prerequisites templates folder to create an object links cache database for the Analysis System in the Pipeline. A system’s object links cache database should be no older than 7 days; its run date may be checked in the RFC Destination’s OLC tab. The Create Object Links Cache workflow may be run incrementally to update the object links cache database with any recent object dependency changes, and to refresh its run date.

A LiveCompare Editor will need to make sure that performance history data is available on the Usage System specified in the Pipeline. Select the RFC Destination in the LiveCompare hierarchy and click the PHD tab. Select the source for performance history data, and if necessary the number of months of data to retrieve, then click ‘Update Data’. The performance history data may also be retrieved using a schedule.

A LiveCompare Editor must run the Create Test Repository Cache workflow from the Prerequisites templates folder in order to populate the Pipeline’s Most-at-risk Search Test Repositories’ caches.

If required, the Business Critical Objects External Data Source should be populated with a set of business critical objects that are included in the set of most-at-risk executables if they are impacted by one or more changing objects. The External Data Source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option in the LiveCompare Studio to upload your own .CSV file. Note that the Business Critical Objects External Data Source is not used if a different External Data Source is specified in the Pipeline’s Business Critical Objects field.
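
For example, an uploaded .CSV file might contain the following rows (the TYPE codes and object names here are placeholders, not required values):

    TYPE,NAME
    TRAN,VA01
    PROG,ZSALES_REPORT
    FUNC,Z_CALCULATE_REBATE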

Run the app

To run the Test Quality Audit app, select the app from the Apps screen and create an app variant. Complete the variant screen as follows:

  • Set the ‘Pipeline’ field to the Pipeline that contains the Analysis System, Most-at-risk Search Test Repositories and Business Critical Objects External Data Source to be used by the app.

Click ‘Run’. When the variant has completed, its results may be accessed from the App Cockpit screen.

App results

The Test Quality Audit app generates the following reports:

Dashboard report

The Test Quality Audit app generates a Dashboard which includes the following charts:

  • The Not Impacted, Impacted and Most-at-Risk column chart provides a summary of the number of not impacted, impacted and most-at-risk objects retrieved for the last 3 months.

  • The Test Hits and Gaps column chart summarizes the most-at-risk objects that were matched in each of the specified Most-at-risk Search Test Repositories (test hit objects), and the most-at-risk objects that were not matched (test gap objects).

  • The Test Coverage and Gaps bar chart lists the test coverage and gaps for each of the specified Most-at-risk Search Test Repositories. Coverage is the number of used executables that are covered by at least one test. Gaps is the number of used executables that are not covered by any test.

  • The Top N Application Areas bar chart groups the most-at-risk, test hit and test gap objects by their Application Area, listing the top 5 Application Areas in terms of their number of most-at-risk objects.

  • The Test Hits Frequency column chart displays the frequency of test hits, showing the number of test hit objects that were found in one, two, or all three of the last 3 months.

  • The Test Gaps Frequency column chart displays the frequency of test gaps, showing the number of test gap objects that were found in one, two, or all three of the last 3 months.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Pipeline that was used in the analysis.

Note that a ‘test hit object’ is a most-at-risk object for which at least one test was found. The same object may occur in multiple tests, but it is counted only once in the charts.
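
As an illustration of the bucketing behind the two frequency charts, the sketch below counts each object once per month and groups objects by how many of the last 3 months they appear in; the inputs are hypothetical:

    from collections import Counter

    # Hypothetical input: the months in which each test hit object was found.
    months_by_object = {
        "VA01": ["2024-01", "2024-02", "2024-03"],
        "FB01": ["2024-02"],
        "ME21N": ["2024-01", "2024-03"],
    }

    # Number of objects found in exactly 1, 2 or 3 of the last 3 months.
    frequency = Counter(len(set(months)) for months in months_by_object.values())
    print(dict(sorted(frequency.items())))  # {1: 1, 2: 1, 3: 1}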

Details report

The details Excel report includes the following spreadsheets:

Dashboard

This spreadsheet includes each of the Dashboard charts described above.

Home

This spreadsheet provides a summary view of all the Application Areas found during the analysis. For each Application Area, it displays the number of not impacted objects, the number of impacted objects, the number of most-at-risk objects, the number of test hits, and the number of test gaps.

  • Click a link in the Test Hits column to display the test hits in the Test Hit Details spreadsheet.

  • Click a link in the Test Gaps column to display the test gaps in the Test Gap Details spreadsheet.

Test Hit Details

This spreadsheet lists the details for objects that matched one or more tests in the Pipeline’s Most-at-risk Search Test Repositories. It has the following columns:

APP_AREA

The application area in which the test hit object was found.

MONTHS

The number of months in which the test hit object was found.

TEST_REPOSITORY_TYPE

The type of the Test Repository on which a matching test was found.

TEST_REPOSITORY_NAME

The name of the Test Repository on which a matching test was found.

TEST_NAME

The name of the test in which a match for the test hit object was found.

STATUS

The status of the test. Covering means that the test covers the test hit object. Optimal means that it is an optimal test for the test hit object.

RANK

A ranking of the risk of the test hit object, based on the depth of the impact and its frequency of use: H indicates high risk, M medium risk, and L low risk.

TESTED_OBJECT

The name of the object that was matched in the test.

TEST_PATH

The test’s path.

TEST_ID

The test’s identifier.

Test Gap Details

This spreadsheet lists the details for objects that did not match any tests in the Pipeline’s Most-at-risk Search Test Repositories. It has the following columns:

APP_AREA

The application area in which the test gap object was found.

MONTHS

The number of months in which the test gap object was found.

TO_BE_TESTED_OBJECT

The name of the object to be tested.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the app’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.

Standard apps

LiveCompare Dashboard