SmartImpact_RevTrac

This workflow is designed to allow the Smart Impact Analysis workflow to be run as a web service. It is provided with a set of transports on the Analysis system and calls the SmartImpact_Wrapper workflow, which in turn calls the Smart Impact Analysis workflow.

The SmartImpact_RevTrac workflow identifies an optimal (most-at-risk) set of used executables which, when tested, will exercise each of the changing objects. The analysis is driven by the changing objects and by a set of used objects obtained from a Performance History system. Typically, this will be your production system.

By default, the SmartImpact_RevTrac workflow reports the following object types as used, impacted and most-at-risk.

Object Type    Description
CLAS           Class
FUNC           Function Module
IDOC           IDoc
PROG           Program
SSFO           Smart Form
TCOD           Transaction
WAPA           BSP Application

However, a user with Administrator privileges may customize these defaults as follows.

  1. Select the Administration > Configuration > Impact Analysis folder in the LiveCompare hierarchy.

  2. In the TypesToFind section, deselect the object types to be excluded from the used, impacted and most-at-risk results. Note that if no items are selected, all the above object types will be used as the default.

  3. Click ‘Save’ to save your changes.

These changes affect all subsequent runs of the SmartImpact_RevTrac workflow, but they do not affect any existing workflow results.

The changing objects are compared between the Analysis system and a Comparison system.

Prerequisites

If a support pack or transport has not been applied to the Analysis system, it must be disassembled before it can be analyzed by the SmartImpact_RevTrac workflow. This can be done in SAP by running the SAINT transaction and selecting ‘Disassemble OCS Package’ from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the Package Disassembler App.

The SmartImpact_RevTrac workflow requires that SAP’s Where Used indexes are up to date on the Analysis system. For further details, see the Step 1 (As-Is) - Checking the Integrity of the Where Used Indexes help topic.

Before running the SmartImpact_RevTrac workflow, you will need to run the Create Object Links Cache workflow from the Prerequisites templates folder to create an Object Links Cache database for the system that contains your changing objects. A system’s Object Links Cache database should be no older than 7 days; its run date may be checked in the RFC Destination’s OLC tab. The Create Object Links Cache workflow may be run incrementally to update the Object Links Cache database with any recent object dependency changes, and to refresh its run date.

If required, the Business Critical Objects External Data Source should be populated with a set of business critical objects that are always included in the set of optimal executables. The External Data Source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option to upload your own .CSV file.
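
For example, the uploaded file might look like the following. The column headings come from the description above; the object types are taken from the default object type table, and the object names are purely illustrative:

  TYPE,NAME
  TCOD,VA01
  PROG,ZSD_SALES_REPORT
  CLAS,ZCL_PRICING_HELPER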

Select the Performance History system in the LiveCompare hierarchy and click the PHD tab. Enter a value (n) in the ‘Keep this many months of data:’ field, and click ‘Update Data’ to download the most recent n months of performance history data. You may also specify a schedule so that the available performance history data is downloaded at regular intervals. See the Retrieving Performance History Data help topic for details.

The TransportsToIgnore External Data Source contains Regular Expressions which are used to filter out transports containing custom objects. If required, this External Data Source may be edited in the LiveCompare Studio using the ‘Replace Data File’ option.
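
For example, assuming the data file holds one regular expression per row, it might contain patterns such as the following. The patterns and transport prefixes are illustrative only; keep the column headings of the shipped file when replacing it:

  ^ED1K9.*
  ^SBXK9.*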

Creating RFC Destinations

Before you begin, you will need to create RFC Destinations for the Analysis, Performance History and Comparison systems.

Preparing the Workflow

To prepare the SmartImpact_RevTrac workflow, drag its workflow template from the Templates folder into your own workspace, and modify the workflow as follows:

To specify the Analysis system:

  1. Select the System1 parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the system to analyze, then click ‘Save’.

To specify the Comparison system, on which to compare the most-at-risk executables:

  1. Select the System2 parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the Comparison system, then click ‘Save’.

To specify the Performance History system, from which to extract usage data for executables:

  1. Select the System3 parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the Performance History system, then click ‘Save’.

Save the workflow using the ‘Save’ toolbar button.

Running the Workflow

To run the SmartImpact_RevTrac workflow, click the ‘Run’ toolbar button, choose ‘Run Now’ from the diagram’s context menu, or press F5. The currently running workflow action is marked with an animated display. When the workflow execution has completed, select the ReportUrl dataset and choose ‘View Details’ from the context menu to access the generated report.

Registering the Workflow as a Web Service

To register the SmartImpact_RevTrac workflow as a web service and access its WSDL URL, carry out the following steps while logged in as a user with LiveCompare Editor privileges. An example of calling the registered service from a SOAP client is shown after these steps.

  1. Copy the SmartImpact_RevTrac workflow to a new workspace. The workspace name should not include any spaces.

  2. Select the SmartImpact_RevTrac workflow in the workspace hierarchy and choose ‘Register as Web Service’ from the context menu.

  3. In the Choose Input Parameters screen, select System1, System2, Transports and System3. Click ‘Next’.

  4. Click ‘Next’ in the Choose Output Parameter screen.

  5. Click ‘Next’ in the Finish - Confirm Register Web Service screen.

  6. Select the Data > Web Services folder in the LiveCompare hierarchy.

  7. In the details pane, locate the WSDL link for the SmartImpact_RevTrac web service, and choose ‘Copy link address’ from the context menu.
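
The registered service can then be invoked from any SOAP-capable client. The following is a minimal sketch using Python and the zeep library; the server URL, workspace path, credentials, operation name and parameter values are assumptions for illustration only, so take the actual operation name and argument types from the WSDL.

  # A minimal sketch, not the official Rev-Trac integration: inspect and call
  # the registered SmartImpact_RevTrac web service from Python using the zeep
  # SOAP library. The URL, credentials and parameter values are hypothetical.
  from requests import Session
  from requests.auth import HTTPBasicAuth
  from zeep import Client
  from zeep.transports import Transport

  wsdl_url = "http://lc-server/LiveCompare/RevTrac/SmartImpact_RevTrac?WSDL"  # hypothetical URL

  session = Session()
  session.auth = HTTPBasicAuth("lc_user", "lc_password")  # hypothetical credentials
  client = Client(wsdl_url, transport=Transport(session=session))

  client.wsdl.dump()  # print the operations and message types the service actually exposes

  # Hypothetical call: the real operation name and argument structure come from the WSDL.
  result = client.service.SmartImpact_RevTrac(
      System1="RFC_ANALYSIS",      # RFC Destination for the Analysis system
      System2="RFC_COMPARISON",    # RFC Destination for the Comparison system
      System3="RFC_PERF_HISTORY",  # RFC Destination for the Performance History system
      Transports="ED1K900123",     # transport(s) to analyze
  )
  print(result)  # expected to include the ReportUrl output parameter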

Workflow Results

The Smart Impact Analysis workflow generates a Dashboard report which includes a treemap summarizing the not-impacted, impacted and most-at-risk objects, grouped by functional area. See the Dashboard Report Treemaps help topic for details. It also includes links to the following reports.

Smart Impact Analysis Dashboard

This report includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets.

The Smart Impact Analysis Dashboard includes the following Excel report:

Smart Impact Analysis Details

This report includes the following spreadsheets:

Dashboard

This spreadsheet includes the charts shown on the Smart Impact Analysis Dashboard.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Home

This spreadsheet lists the used, impacted and most-at-risk objects, and their associated Application Areas. It includes:

  • Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).

  • Impacted objects with no impactful changes.

  • Used objects with no impactful changes.

The RISK column displays H, M, or L for most-at-risk objects. These values rank each object’s risk according to the depth of the impact and the object’s frequency of use: H for high risk, M for medium risk, and L for low risk.

The IMPACTFUL_OBJECTS column displays the number of changing objects that impact each object. Click a link in this column to display the impacting objects in the Impactful Objects spreadsheet.

New most-at-risk objects (those with no impactful objects and a usage count of 0) are listed with themselves as their single impactful object.

Impactful Objects

This spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHILD_NAME and CHILD_TYPE columns. The CHANGE_ID column identifies the transport that includes the impacting object. Details for the used impacted objects are listed in the DESCRIPTION, APP_AREA and DEVCLASS columns. Click a hyperlink in the CHILD_NAME column to display comparison details for the selected object. Click a hyperlink in the TEST_HITS column to display the associated test in the Test Hit Details spreadsheet.

For each used impacted object:

  • The DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s screens in the Impacted DYNPs spreadsheet.

  • The USAGE column lists the object’s usage count according to the available performance history data.

  • The USERS column lists the number of users of the object according to the available performance history data. Click a hyperlink to display the object’s users in the Impacted Users spreadsheet.

  • The TEST_HITS column lists the number of matches found in the specified Test Repository. Click a hyperlink to display details for the matched token in the Test Hit Details spreadsheet.

Impacted DYNPs

This spreadsheet lists changing objects in the CHILD_NAME and CHILD_TYPE columns, used impacted objects in the TYPE and NAME columns, and their associated screens (including each screen’s descriptive text). Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the impacted object’s associated screens.

Impacted Users

This spreadsheet lists each impacted object, and its usage count for each user according to the available performance history data. Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the impacted object.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk executable is a hit or a gap in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

Test Hit Details

This spreadsheet includes the details for each test asset that matched a most-at-risk executable in the specified Test Repository. It also displays the total usage, top user and top usage details for each of the matched most-at-risk executables. If the spreadsheet is accessed via a hyperlink, it displays the details for the matching test asset.

The HAS_DATA column is set to ‘Y’ for Tosca test cases that match affected data, and left blank for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different between the two systems, that exist in the Analysis system only (added data), or that exist in the Comparison system only (deleted data).

Test Data

If a Comparison System has been set and there are fewer than 1000 distinct changing objects, this spreadsheet lists the changing SAP tables and includes links to Object Differences reports showing data changes.

Input table key (TABK) objects appear in the Test Data worksheet as table key table contents (TABB) objects if their contents are different in each of the compared systems. Each table name is a hyperlink to the compared table key table contents for each of the changed tables.

Note that ‘refresh’ table keys are excluded from the comparison results. These are defined as having a length of 31 and * as their last character.
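
A minimal illustration of that rule (not LiveCompare’s internal implementation), assuming the table key is available as a plain string:

  # Returns True for 'refresh' table keys as defined above:
  # a key length of exactly 31 characters with '*' as the last character.
  def is_refresh_table_key(key: str) -> bool:
      return len(key) == 31 and key.endswith("*")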

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the Dashboard report. The value of each input parameter is stored in a separate worksheet, named after the parameter whose value it contains.

The copied WSDL URL may be pasted into Rev-Trac’s Impact analysis URL field (minus the ?WSDL suffix).
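
For example, assuming a hypothetical server and workspace name, removing the suffix works as follows:

  # Illustration only; the server name and path are hypothetical.
  wsdl_url = "http://lc-server/LiveCompare/RevTrac/SmartImpact_RevTrac?WSDL"
  revtrac_url = wsdl_url[:-len("?WSDL")]  # value to paste into Rev-Trac's Impact analysis URL field
  print(revtrac_url)  # http://lc-server/LiveCompare/RevTrac/SmartImpact_RevTrac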