Test Quality Audit Learning app

The Test Quality Audit Learning app simulates a run of the Test Quality Audit app by generating reports from sample data stored in External Data Sources. It runs in only a few minutes, and may be used in the following ways:

  • For demonstration purposes: to demonstrate to customers and prospects the reports they would expect to receive from the Test Quality Audit app.

  • For training purposes: to train consultants on how to run the Test Quality Audit app and navigate through each of its reports.

The Test Quality Audit Learning app does not require connections to any SAP systems or test repositories, and it may be run on servers where the SAP RFC libraries normally required have not been installed. This option is set during the installation of LiveCompare by selecting ‘Skip. Not using LiveCompare with SAP’ in the SAP RFC Library screen.

DevOps categories

Testing

Prerequisites

Install the Test Quality Audit Learning app package on your LiveCompare server by running LiveCompareLearningApps.exe from the LiveCompare distribution directory. Click through each of the Wizard screens to complete the installation.

Learning resources

LiveCompare includes the following resources that are used by the Test Quality Audit Learning app:

  • Dummy RFC Destinations named Learning-DEV, Learning-PHD and Learning-QAS. These are stored in the Learning RFC Destinations folder.

  • A dummy Test Repository named Learning.

  • An External Data Source named Test Quality Audit Learning.

  • A dummy Pipeline named Learning. This is stored in the Learning Pipelines folder, and is configured as follows:

Field | Value
Name | Learning
Description | LiveCompare Learning Pipeline
Tester Business Critical | Not selected
Analysis System | Learning-DEV
Comparison System | Learning-QAS
Usage System | Learning-PHD
Most-at-Risk Search Test Repositories | Learning

The resources in the Learning Pipeline may not be added to any additional Pipelines as they were designed exclusively to be used by the Smart Impact Learning and Test Quality Audit Learning apps. Additionally, the Learning Pipeline is not processed when the Commit Configuration process is run for the Pipelines in the Guided Configuration’s Pipelines tab.
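
For reference, the Learning Pipeline settings listed in the table above amount to the following values. The Python mapping below is only an illustrative way of writing them down; it is not a LiveCompare configuration format or API.

```python
# Illustrative representation of the Learning Pipeline settings (assumption:
# this mirrors the documentation table, not any LiveCompare file format).
learning_pipeline = {
    "Name": "Learning",
    "Description": "LiveCompare Learning Pipeline",
    "Tester Business Critical": False,   # 'Not selected'
    "Analysis System": "Learning-DEV",
    "Comparison System": "Learning-QAS",
    "Usage System": "Learning-PHD",
    "Most-at-Risk Search Test Repositories": ["Learning"],
}
```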

Run the app

To run the Test Quality Audit Learning app, carry out the following steps:

  1. Select the Test Quality Audit Learning app from the Apps screen’s Testing tab.

  2. In the App Cockpit screen, click ‘New Variant’ and create a new variant based on the Test Quality Audit Learning app.

  3. In the My Variants table, click the Run Variant button to run the variant.

The Test Quality Audit app generates the following reports:

Dashboard report

The Test Quality Audit app generates a Dashboard which includes the following charts:

  • The Not Impacted, Impacted and Most-at-Risk column chart provides a summary of the number of not impacted, impacted and most-at-risk objects retrieved for the last 3 months.

  • The Test Hits and Gaps column chart summarizes the most-at-risk objects that were matched in each of the specified Most-at-risk Search Test Repositories (test hit objects), and the most-at-risk objects that were not matched (test gap objects).

  • The Test Coverage and Gaps bar chart lists the test coverage and gaps for each of the specified Most-at-risk Search Test Repositories. Coverage is the number of used executables that are covered by at least one test; Gaps is the number of used executables that are not covered by any test.

  • The Top N Application Areas bar chart groups the most-at-risk, test hit and test gap objects by their Application Area, listing the top 5 Application Areas in terms of their number of most-at-risk objects.

  • The Test Hits Frequency column chart displays the frequency of test hits, showing how many test hit objects were found in one, in two, or in all three of the last 3 months.

  • The Test Gaps Frequency column chart displays the frequency of test gaps, showing how many test gap objects were found in one, in two, or in all three of the last 3 months.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Pipeline that was used in the analysis.

Note that a ‘test hit object’ refers to a most-at-risk object for which at least one test was found. The same object may occur in multiple tests, but it is counted only once in the charts.
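
To make the counting rules behind these charts concrete, the sketch below shows in plain Python how test hits, test gaps, coverage and the monthly frequency could be derived from sample data. The data shapes, object names and variables are illustrative assumptions only; they do not correspond to any LiveCompare API or output format.

```python
from collections import Counter

# Illustrative sample data (assumed shapes, not LiveCompare output).
# most_at_risk maps each object to the set of months (1-3) in which it
# was identified as most-at-risk.
most_at_risk = {
    "Z_SALES_ORDER": {1, 2, 3},
    "Z_INVOICE_POST": {2},
    "Z_STOCK_CHECK": {1, 3},
}

# used_executables is the set of executables seen in usage data;
# tested_objects maps each Test Repository to the objects matched by
# at least one of its tests.
used_executables = {"Z_SALES_ORDER", "Z_INVOICE_POST", "Z_STOCK_CHECK", "Z_REPORT_RUN"}
tested_objects = {"Learning": {"Z_SALES_ORDER", "Z_STOCK_CHECK"}}

# A 'test hit object' is a most-at-risk object matched in at least one
# repository; each object is counted once, however many tests match it.
all_tested = set().union(*tested_objects.values())
test_hits = {obj for obj in most_at_risk if obj in all_tested}
test_gaps = set(most_at_risk) - test_hits

# Test Coverage and Gaps per repository: used executables covered by at
# least one test versus those covered by none.
for repo, objs in tested_objects.items():
    print(repo, "coverage:", len(used_executables & objs),
          "gaps:", len(used_executables - objs))

# Test Hits/Gaps Frequency: how many objects were found in one, two or
# all three of the last 3 months.
hit_frequency = Counter(len(most_at_risk[obj]) for obj in test_hits)
gap_frequency = Counter(len(most_at_risk[obj]) for obj in test_gaps)
print("hits by months seen:", dict(hit_frequency))
print("gaps by months seen:", dict(gap_frequency))
```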

Details report

The details Excel report includes the following spreadsheets:

Dashboard

This spreadsheet includes each of the Dashboard charts described above.

Home

This spreadsheet provides a summary view of all the Application Areas found during the analysis. For each Application Area, it displays the number of not impacted objects, the number of impacted objects, the number of most-at-risk objects, the number of test hits, and the number of test gaps.

  • Click a link in the Test Hits column to display the test hits in the Test Hit Details spreadsheet.

  • Click a link in the Test Gaps column to display the test gaps in the Test Gap Details spreadsheet.
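
As a purely illustrative sketch of how such a per-Application-Area summary might be derived, the Python snippet below groups assumed object records by application area and counts each category. The record layout is an assumption for illustration, not LiveCompare's internal format.

```python
from collections import defaultdict

# Assumed flat records: one per analysed object, with its application area,
# impact classification and whether a matching test was found.
objects = [
    {"app_area": "SD", "impact": "most-at-risk", "has_test": True},
    {"app_area": "SD", "impact": "impacted", "has_test": False},
    {"app_area": "MM", "impact": "most-at-risk", "has_test": False},
    {"app_area": "MM", "impact": "not impacted", "has_test": False},
]

summary = defaultdict(lambda: {"not_impacted": 0, "impacted": 0,
                               "most_at_risk": 0, "test_hits": 0, "test_gaps": 0})

for obj in objects:
    row = summary[obj["app_area"]]
    if obj["impact"] == "not impacted":
        row["not_impacted"] += 1
    elif obj["impact"] == "impacted":
        row["impacted"] += 1
    else:
        # Most-at-risk objects also count as either a test hit or a test gap.
        row["most_at_risk"] += 1
        row["test_hits" if obj["has_test"] else "test_gaps"] += 1

for area, row in summary.items():
    print(area, row)
```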

Test Hit Details

This spreadsheet lists the details for objects that matched one or more tests in the Pipeline’s Most-at-risk Search Test Repositories. It has the following columns:

APP_AREA

The application area in which the test hit object was found.

MONTHS

The number of months in which the test hit object was found.

TEST_REPOSITORY_TYPE

The type of the Test Repository on which a matching test was found.

TEST_REPOSITORY_NAME

The name of the Test Repository on which a matching test was found.

TEST_NAME

The name of the test in which a match for the test hit object was found.

STATUS

The status of the test. Covering means that the test covers the test hit object. Optimal means that it is an optimal test for the test hit object.

RANK

These values are a ranking of risk due to the depth of the impact and frequency of use of the test hit object. H is for high risk, M is for medium risk, and L is for low risk.

TESTED_OBJECT

The name of the object that was matched in the test.

TEST_PATH

The test’s path.

TEST_ID

The test’s identifier.
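
The columns above can be thought of as the fields of one record per matching test. The following dataclass is a hypothetical sketch of such a row; the field names follow the column names, but the type itself is not part of LiveCompare.

```python
from dataclasses import dataclass

@dataclass
class TestHitDetailRow:
    """One row of the Test Hit Details spreadsheet (illustrative only)."""
    app_area: str               # APP_AREA
    months: int                 # MONTHS: months in which the object was found
    test_repository_type: str   # TEST_REPOSITORY_TYPE
    test_repository_name: str   # TEST_REPOSITORY_NAME
    test_name: str              # TEST_NAME
    status: str                 # STATUS: 'Covering' or 'Optimal'
    rank: str                   # RANK: 'H', 'M' or 'L'
    tested_object: str          # TESTED_OBJECT
    test_path: str              # TEST_PATH
    test_id: str                # TEST_ID
```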

Test Gap Details

This spreadsheet lists the details for objects that did not match any tests in the Pipeline’s Most-at-risk Search Test Repositories. It has the following columns:

APP_AREA

The application area in which the test gap object was found.

MONTHS

The number of months in which the test gap object was found.

TO_BE_TESTED_OBJECT

The name of the object to be tested.
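
Analogously, a Test Gap Details row could be modelled as a small record; again, this is purely illustrative and not a LiveCompare type.

```python
from dataclasses import dataclass

@dataclass
class TestGapDetailRow:
    """One row of the Test Gap Details spreadsheet (illustrative only)."""
    app_area: str             # APP_AREA
    months: int               # MONTHS: months in which the object was found
    to_be_tested_object: str  # TO_BE_TESTED_OBJECT
```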

Help

This spreadsheet provides help for each of the spreadsheet reports.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the app’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.
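
Because each worksheet is named after the parameter it contains, the workbook can also be read back programmatically. The sketch below uses pandas to load every parameter sheet into a dictionary; the file name is an assumption, as the actual report name depends on your installation.

```python
import pandas as pd

# Hypothetical file name for the Analysis Input Data report.
workbook = "Analysis_Input_Data.xlsx"

# sheet_name=None loads every worksheet; each sheet name is a parameter name.
sheets = pd.read_excel(workbook, sheet_name=None)

for parameter_name, values in sheets.items():
    print(parameter_name, "->", len(values), "rows")
```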
