Smart Impact Learning app

The Smart Impact Learning app simulates a run of the Smart Impact app by generating reports from sample data stored in External Data Sources. It runs in only a few minutes, and may be used in the following ways:

  • For demonstration purposes: to demonstrate to customers and prospects the reports they would expect to receive from the Smart Impact app.

  • For training purposes: to train consultants on how to run the Smart Impact app and navigate through each of its reports.

The Smart Impact Learning app does not require connections to any SAP systems or test repositories, and it may be run on servers where the SAP RFC libraries normally required have not been installed. This option is set during the installation of LiveCompare by selecting ‘Skip. Not using LiveCompare with SAP’ in the SAP RFC Library screen.

DevOps categories

Testing

Prerequisites

Install the Smart Impact Learning app package on your LiveCompare server by running LiveCompareLearningApps.exe from the LiveCompare distribution directory. Click through each of the Wizard screens to complete the installation.

Learning resources

LiveCompare includes the following resources that are used by the Smart Impact Learning app:

  • Dummy RFC Destinations named Learning-DEV, Learning-PHD and Learning-QAS. These are stored in the Learning RFC Destinations folder.

  • A dummy Test Repository named Learning.

  • An External Data Source named Smart Impact Learning.

  • A dummy Pipeline named Learning. This is stored in the Learning Pipelines folder, and is configured as follows:

Field                                   Value
Name                                    Learning
Description                             LiveCompare Learning Pipeline
Tester Business Critical                Not selected
Analysis System                         Learning-DEV
Comparison System                       Learning-QAS
Usage System                            Learning-PHD
Most-at-Risk Search Test Repositories   Learning
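For orientation, the Learning Pipeline's settings could be held in a plain mapping, as sketched below. The field names and values come from the table above; the dictionary layout itself is only an illustration, not a LiveCompare API or file format.

```python
# Illustrative only: the Learning Pipeline's fields as a plain mapping.
# Field names and values are taken from the configuration table above.
learning_pipeline = {
    "Name": "Learning",
    "Description": "LiveCompare Learning Pipeline",
    "Tester Business Critical": False,  # 'Not selected'
    "Analysis System": "Learning-DEV",
    "Comparison System": "Learning-QAS",
    "Usage System": "Learning-PHD",
    "Most-at-Risk Search Test Repositories": ["Learning"],
}

# The three dummy RFC Destinations referenced by the Pipeline:
systems = {learning_pipeline[key] for key in
           ("Analysis System", "Comparison System", "Usage System")}
print(sorted(systems))
```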

The resources in the Learning Pipeline cannot be added to any other Pipelines, as they are designed exclusively for use by the Smart Impact Learning and Test Quality Audit apps. Additionally, the Learning Pipeline is not processed when the Commit Configuration process is run for the Pipelines in the Guided Configuration’s Pipelines tab.

Run the app

To run the Smart Impact Learning app, carry out the following steps:

  1. Select the Smart Impact Learning app from the Apps screen’s Testing tab.

  2. In the App Cockpit screen, click ‘New Variant’, and create a new variant based on the Smart Impact Learning app.

  3. In the My Variants table, click the Run Variant button to run the variant.

App results

The Smart Impact Learning app generates the following reports:

Smart Impact Analysis Dashboard

The Smart Impact Learning app generates a Dashboard which includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the sample Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including a sample date range for which performance history data was obtained, and the name of the Pipeline used in the analysis.

The Dashboard’s Additional Resources section includes links to the following Excel reports:

Function Details Report

The Function Details Excel report includes the following spreadsheets:

Dashboard

This spreadsheet includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the sample Test Repository.

  • The Changed Objects Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the sample Test Repository.

  • The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.

  • The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including a sample date range for which performance history data was obtained, and the name of the Pipeline used in the analysis. The number of sample change IDs and changed objects are also shown.

Home

The Home spreadsheet provides a summary view of all the Application Areas found during the analysis. It has the following columns:

APP_AREA

The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.

NOT_IMPACTED

The number of used objects in the Application Area that are not impacted by a changing object.

IMPACTED

The number of used objects in the Application Area that are impacted by a changing object, but not most-at-risk.

MOST_AT_RISK

The number of used objects in the Application Area that are impacted and most-at-risk; these are recommended for testing.

TEST_HITS

The number of most-at-risk objects in the Application Area that are covered by at least one test in the Pipeline’s Most-at-risk Test Repository.

TEST_GAPS

The number of most-at-risk objects in the Application Area that are not covered by any tests in the Pipeline’s Most-at-risk Test Repository.

IMPACTFUL_CHANGES

A count of the distinct impacting objects for each Application Area’s most-at-risk objects.
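Taken together, the first three counts partition the used objects in an Application Area, and TEST_HITS plus TEST_GAPS account for exactly its most-at-risk objects. The sketch below illustrates that arithmetic; the object records and names are invented for illustration and are not a LiveCompare data format.

```python
from collections import Counter

# Hypothetical per-object records for one Application Area. The STATUS
# values mirror those used in the App Area Details spreadsheet.
objects = [
    {"name": "Z_ORDER",  "status": "Most-at-risk", "test_hit": True},
    {"name": "Z_PRICE",  "status": "Most-at-risk", "test_hit": False},
    {"name": "Z_REPORT", "status": "Impacted",     "test_hit": False},
    {"name": "Z_LOG",    "status": "Not Impacted", "test_hit": False},
]

counts = Counter(o["status"] for o in objects)
row = {
    "NOT_IMPACTED": counts["Not Impacted"],
    "IMPACTED": counts["Impacted"],
    "MOST_AT_RISK": counts["Most-at-risk"],
    "TEST_HITS": sum(1 for o in objects
                     if o["status"] == "Most-at-risk" and o["test_hit"]),
    "TEST_GAPS": sum(1 for o in objects
                     if o["status"] == "Most-at-risk" and not o["test_hit"]),
}

# Every used object falls into exactly one of the first three buckets,
# and hits plus gaps always equals MOST_AT_RISK.
assert row["NOT_IMPACTED"] + row["IMPACTED"] + row["MOST_AT_RISK"] == len(objects)
assert row["TEST_HITS"] + row["TEST_GAPS"] == row["MOST_AT_RISK"]
print(row)
```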

App Area Details

This spreadsheet lists the most-at-risk, impacted and not impacted objects, and their associated Application Areas and descriptions. It includes:

  • Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).

  • Impacted objects with no impactful changes.

  • Used objects with no impactful changes.

The APP_AREA column displays the name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.

The STATUS column displays the status of each object, either Most-at-risk, Impacted or Not Impacted.

The usage count and number of users for each object are listed, obtained from the sample performance history data. Click a hyperlink in the USERS column to display the users in the Impacted Users spreadsheet. Note that the USERS column is blank for objects whose status is ‘Not Impacted’.

The RISK column displays H, M, or L for most-at-risk objects. These values are a ranking of risk due to the depth of the impact and frequency of use of the object. H is for high risk, M is for medium risk, and L is for low risk.

The IMPACTFUL_OBJECTS column displays the number of changing objects that impact each object. Click a hyperlink in this column to display the impacting objects in the Impactful Objects spreadsheet. New most-at-risk objects (that have no impactful objects and a usage count of 0) are set to have themselves as a single impactful object.

The CUSTOM column is set to Y for custom used, impacted and most-at-risk objects.

The BUSINESS_CRITICAL column is set to Y for objects included in the sample set of business critical objects.
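The H/M/L risk ranking described above combines the depth of the impact with how frequently the object is used. LiveCompare’s actual weighting is internal to the product; the function below is only one hypothetical bucketing of that shape, with invented thresholds.

```python
def risk_rank(impact_depth: int, usage_count: int) -> str:
    """Hypothetical H/M/L risk bucketing, for illustration only.

    The idea sketched here: a shallow impact (the change sits close to
    the object) on a heavily used object ranks as higher risk than a
    deep impact on a rarely used one. Thresholds are invented.
    """
    score = usage_count / max(impact_depth, 1)
    if score >= 100:
        return "H"
    if score >= 10:
        return "M"
    return "L"

# A directly impacted, heavily used object ranks highest.
print(risk_rank(impact_depth=1, usage_count=500))
```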

Impactful Objects

This spreadsheet lists the changing objects introduced by the transports, ChaRM change requests or objects analyzed by the Smart Impact Learning app. The spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHILD_NAME and CHILD_TYPE columns. The DEPTH column indicates the search depth at which the impacted object was found.

The CHANGE_ID column identifies the transport or ChaRM request that includes the impacting object. This column is set to ‘Objects’ if the changing object was specified in a list of objects.

For each used impacted object, the DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s impacted screens in the Impacted DYNPs spreadsheet.

The CHILD_STATUS column lists the comparison status for the object on the Analysis and Comparison systems specified in the Learning Pipeline. Click a hyperlink in the CHILD_NAME column to display comparison details for the selected object.

Impacted DYNPs

This spreadsheet lists changing objects in the CHILD_NAME and CHILD_TYPE columns, their used impacted objects in the NAME column, and their associated screens, screen numbers and descriptive text in the DYNP_PROG, DYNP_NUM and DTXT columns.

Click a hyperlink in the NAME column to display tests that include the object in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the impacted object’s associated screens.

Impacted Users

This spreadsheet lists the type and name of each impacted object, and its usage count for each user according to the sample performance history data. Each user’s type is also shown. Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the impacted object.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk object is a hit or a gap in the sample Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.
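The hit/gap split above amounts to partitioning the most-at-risk object names by whether the Test Repository contains at least one matching test asset. A minimal sketch, with object and test names invented for illustration:

```python
# Hypothetical most-at-risk objects, and a test-repository index mapping
# tested object names to the IDs of tests that exercise them.
most_at_risk = ["Z_ORDER", "Z_PRICE", "Z_INVOICE"]
repository_index = {"Z_ORDER": ["T-001", "T-007"], "Z_INVOICE": ["T-003"]}

hits = [name for name in most_at_risk if repository_index.get(name)]
gaps = [name for name in most_at_risk if not repository_index.get(name)]

print(hits)  # objects with at least one matching test asset
print(gaps)  # objects with no test coverage
```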

Test Hit Details

This spreadsheet includes the details for each test asset in the sample Most-at-risk Search Test Repository that matched a most-at-risk object, including the test name, test path, and tested object. If the spreadsheet is accessed via a hyperlink, it displays the details for the linked test asset.

The RISK column displays H, M, or L for most-at-risk objects. These values are a ranking of risk due to the depth of the impact and frequency of use of the object. H is for high risk, M is for medium risk, and L is for low risk.

Each test is given a rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. More highly ranked tests should be prioritized over tests with a lower rank.

Test Data

This spreadsheet is not used.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Testing Details Report

The Testing Details Excel report includes the following spreadsheets:

Dashboard

This spreadsheet includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the sample Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the sample Test Repository.

  • The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.

  • The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including a sample date range for which performance history data was obtained, and the name of the Pipeline used in the analysis. The number of sample change IDs and changed objects are also shown.

Home

The Home spreadsheet summarizes the tests found in the sample Most-at-risk Search Test Repository by Application Area. It has the following columns:

APP_AREA

The Application Area associated with the tests. Click a link in the APP_AREA column to display the App Area Details spreadsheet filtered to display the optimal tests in the selected Application Area.

ALL

The number of distinct test IDs associated with any tested object belonging to the Application Area. Test IDs are found in the sample Most-at-risk Search Test Repository.

COVERING

The number of distinct test IDs in the set of test hits that are associated with any tested object belonging to the Application Area. A test hit is a test asset in the sample Most-at-risk Search Test Repository that matches a most-at-risk object.

OPTIMAL

The number of distinct test IDs in the set of optimal test hits that are associated with any tested object belonging to the Application Area.

TEST_GAPS

The number of objects in the test gaps set that are associated with the Application Area. A test gap is a most-at-risk object that does not have a matching test in the sample Most-at-risk Search Test Repository.
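OPTIMAL refers to a reduced set of tests that together still cover every coverable most-at-risk object, so for each Application Area OPTIMAL is at most COVERING, which is at most ALL. How LiveCompare selects that set is not documented here; a greedy set-cover pass, sketched below with invented test data, produces a set of the same character.

```python
def optimal_tests(coverage: dict[str, set[str]]) -> set[str]:
    """Greedy sketch: repeatedly pick the test that covers the most
    still-uncovered most-at-risk objects. Illustrative only; this is
    not LiveCompare's actual selection algorithm."""
    uncovered = set().union(*coverage.values()) if coverage else set()
    chosen: set[str] = set()
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # nothing left that any test can cover
        chosen.add(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical mapping of test ID -> most-at-risk objects it covers.
coverage = {
    "T-001": {"Z_ORDER", "Z_PRICE"},
    "T-002": {"Z_PRICE"},
    "T-003": {"Z_INVOICE"},
}
print(sorted(optimal_tests(coverage)))  # a covering subset smaller than ALL
```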

App Area Details

This spreadsheet lists tests that match objects that have been identified as most-at-risk. It includes the Application Area in which the object was found, the type and name of each Most-at-risk Search Test Repository where a matching test was found, and the path, name and ID of each matching test.

The STATUS column is set to ‘Optimal’ if the test is in the set of optimal tests to run, or to ‘Covering’ if the test covers at least one most-at-risk object.

The RISK column displays H, M, or L for most-at-risk objects. These values are a ranking of risk due to the depth of the impact and frequency of use of the object. H is for high risk, M is for medium risk, and L is for low risk.

The TEST_DATA column lists the number of SAP tables referenced in tests that match the most-at-risk object recommended for testing. Click a hyperlink in this column to display the table names in the Test Data spreadsheet.

The TESTED_OBJECTS column lists the number of objects that are covered by the test. Click a hyperlink in this column to display the tested objects in the Test Hit Details spreadsheet.

Test Data

This spreadsheet is not used.

Test Hit Details

This spreadsheet includes the details for each test asset in the sample Most-at-risk Search Test Repository that matched a most-at-risk object, including the Application Area associated with each test, the test name, test path and tested object. If the spreadsheet is accessed via a hyperlink, it displays the details for the linked test asset.

The RISK column displays H, M, or L for most-at-risk objects. These values are a ranking of risk due to the depth of the impact and frequency of use of the object. H is for high risk, M is for medium risk, and L is for low risk.

Each test is given a rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day and the number of test steps. More highly ranked tests should be prioritized over tests with a lower rank.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk object is a hit or a gap in the sample Most-at-risk Search Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

Changes

This spreadsheet lists the changing objects introduced by the sample transports, ChaRM change requests or objects analyzed by the Smart Impact Learning app. The spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHANGE_NAME and CHANGE_TYPE columns. The DEPTH column indicates the search depth at which the impacted object was found.

Click an object link in the CHANGE_NAME column to display a comparison report for the object on the Pipeline’s Analysis and Comparison systems.

The CHANGE_ID column identifies the transport or ChaRM request that includes the impacting object. This column is set to ‘Objects’ if the changing object was specified in a list of objects.

For each used impacted object, the DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s impacted screens in the Impacted DYNPs spreadsheet.

The CHANGE_STATE column lists the comparison status for the object on the sample Analysis and Comparison systems used by the Smart Impact Learning app. Click a hyperlink in the CHANGE_NAME column to display comparison details for the selected object.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the app’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.

Standard apps

LiveCompare Dashboard