Cross System Impact Analysis App

The Cross System Impact Analysis App is designed for gateway-deployed Fiori implementations where custom UI5 apps on a front-end system are supported by S/4HANA services on a back-end system. In this environment, changes to the back-end system may impact the apps on the front-end system.

The Cross System Impact Analysis App performs an impact analysis across the two systems, identifying which front-end UI5 apps are impacted by changes to the back-end system. It also identifies an optimal, or most-at-risk, set of apps which, when tested, will exercise each of the changing objects.

The Cross System Impact Analysis App may be integrated with a Test Repository to identify Hits and Gaps.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

If required, the Gap objects may be used to create test requirements and test execution lists in the specified Test Repository.
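In set terms, the Hits/Gaps split can be pictured as follows. This is a minimal illustrative sketch in Python; the object names are placeholders, not real analysis output.

    # Hits and Gaps as a simple set split (hypothetical object names).
    most_at_risk = {"ZPROG_A", "ZAPP_1", "ZTABLE_B"}
    with_tests = {"ZPROG_A"}  # names for which test assets were found

    hits = most_at_risk & with_tests   # {'ZPROG_A'}
    gaps = most_at_risk - with_tests   # {'ZAPP_1', 'ZTABLE_B'}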

The Cross System Impact Analysis App is provided with one or more of the following inputs from the back-end SAP system to be analyzed:

  • A set of transports

  • A set of objects

  • A set of ChaRM change requests

These provide the App with its set of changing objects. The App identifies an optimal, or most-at-risk, set of used UI5 apps which, when tested, will exercise each of the changing objects. The analysis is driven by the changing objects and by a set of used objects obtained from a Performance History system; typically, this will be your production system.
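The selection logic can be sketched as follows. This is an illustration of the set logic only, with hypothetical object names and a stand-in for the dependency lookup; it is not the App’s actual implementation.

    # Hypothetical example data; not the App's real API or object names.
    changing = {"ZPROG_A", "ZTABLE_B", "ZCLASS_C"}   # from transports/objects/ChaRM
    impactful = {"ZPROG_A", "ZTABLE_B"}              # changes judged impactful
    used_apps = {                                    # app -> objects it uses, from
        "ZAPP_1": {"ZPROG_A"},                       # Performance History data and
        "ZAPP_2": {"ZTABLE_B", "ZCLASS_C"},          # the Object Links Cache
        "ZAPP_3": {"ZOTHER"},
    }

    impacted = {app for app, deps in used_apps.items() if deps & changing}
    most_at_risk = {app for app, deps in used_apps.items() if deps & impactful}

    print(sorted(impacted))      # ['ZAPP_1', 'ZAPP_2']
    print(sorted(most_at_risk))  # ['ZAPP_1', 'ZAPP_2']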

The App produces a Dashboard report and an associated Excel report.

DevOps Categories

Testing

Prerequisites

If a support pack or transport has not been applied to the back-end system, it must be disassembled before it can be analyzed by the App. This can be done in SAP by running the SAINT transaction and selecting ‘Disassemble OCS Package’ from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the Package Disassembler App.

The App requires that SAP’s Where Used indexes are up to date on the back-end system. For further details, see the Step 1 (As-Is) - Checking the Integrity of the Where Used Indexes help topic.

Before running the Cross System Impact Analysis App, a LiveCompare Editor will need to run the Create Object Links Cache workflow from the Prerequisites templates folder to create an Object Links Cache database for the front-end and back-end systems. A system’s Object Links Cache database should be no older than 7 days; its run date may be checked in the RFC Destination’s OLC tab. The Create Object Links Cache workflow may be run incrementally to update the Object Links Cache databases with any recent object dependency changes, and to refresh their run dates.

If you plan to use the App to identify test hits and gaps in a Test Repository, first run the Create Test Repository Cache workflow from the Prerequisites templates folder to populate the Test Repository’s cache. Refer to the template’s Help file for details.

You should make sure that performance history data is available on the RFC Destination selected for the ‘Performance History System’. Select the RFC Destination in the LiveCompare hierarchy and click the PHD tab. Select the source for performance history data, and if necessary the number of months of data to retrieve, then click ‘Update Data’. The performance history data may also be retrieved using a schedule. See the Retrieving Performance History Data help topic for details.

If required, the Business Critical Objects External Data Source should be populated with a set of business-critical objects; these are included in the set of most-at-risk executables if they are impacted by one or more changing objects. The External Data Source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option in the LiveCompare Studio to upload your own .CSV file.
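For example, a minimal data file might look like this (the rows are illustrative placeholders, not shipped content):

    TYPE,NAME
    PROG,ZSALES_ORDER_REPORT
    TABL,ZPRICING_COND
    FUNC,Z_POST_INVOICE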

The App also uses the following External Data Sources to filter the analysis:

  • The ChangingObjectsToIgnore External Data Source removes tables whose names begin with ENH or AGR from the set of changing objects.

  • The TransportsToIgnore External Data Source contains regular expressions that are used to filter out transports containing custom objects (see the sample entries after this list).

  • The UsedObjectsToIgnore External Data Source contains used objects that are to be ignored during the analysis.

If required, these External Data Sources may be edited in the LiveCompare Studio using the ‘Replace Data File’ option.
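As an illustration, a TransportsToIgnore data file might contain regular expression entries such as the following; the patterns and the transport naming convention are assumptions for this example, not shipped defaults.

    ^DEVK9.*
    ^.*_SANDBOX$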

Running the App

To run the Cross System Impact Analysis App, select the App from the Apps screen and create an App variant. Complete the variant screen as follows:

  • If required, edit the ‘Transports’ table to provide a list of transports to be analyzed on the back-end system.

  • If required, edit the ‘ChaRM Change Requests’ string list to provide a list of ChaRM change requests to be analyzed on the back-end system.

  • If required, edit the ‘Objects’ table to provide a list of objects to be analyzed on the back-end system.

  • Set the ‘Backend Analysis System’ field to the RFC Destination for the back-end system that contains the transports or objects to be analyzed.

  • Set the ‘Backend Comparison System’ field to the RFC Destination for the back-end system on which to compare the most-at-risk executables.

  • Set the ‘Frontend System’ field to the RFC Destination for the front-end system that contains the UI5 apps that may be impacted.

  • Set the ‘Performance History System’ field to the RFC Destination for the system from which performance history data has been obtained.

  • If required, set the ‘Solution Manager System’ field to the RFC Destination of an SAP Solution Manager system. The system is used to find transport objects associated with each of the specified ChaRM change requests.

  • If required, select a Test Repository in the ‘Test Repository’ field to identify test asset hits and gaps using the names of the most-at-risk objects. To clear your selection, choose --Not Selected--.

  • Set the ‘Search Paths’ string list to limit the search for test assets to one or more project folders in the specified Test Repository. Each path should begin with the Subject folder, and a backslash (\) should be used to separate path components, for example Subject\Release1.

  • Set the ‘Create test execution?’ switch to specify whether the matching test assets will be used to create test execution lists in the specified Test Repository. The test execution lists will be stored in a time-stamped folder named LiveCompare_<YYYY-MM-DD HH:MM:SS>.

  • If required, edit the ‘Execution List Configuration’ table to provide one or more configuration entries to be associated with the test execution, for example /Configurations/Environment1/TDS.

  • Set the ‘Execution Path’ field to the path of an existing folder in which to store the test execution, for example Execution/TestExecutions. If this field is not set, the test execution will be created in the default Executions folder.

  • Set the ‘Execution Name’ field to the name of the test execution to create. If this parameter is not set, a time-stamped name of the form LiveCompare_<YYYY-MM-DD HH:MM:SS> is used as the default (see the naming sketch after this list).

  • Set the ‘Execute tests?’ switch to specify whether test execution lists will be run in the specified Tosca Test Repository. If this switch is set to ‘Yes’, the ‘Create test execution?’ switch should also be set to ‘Yes’. Executing tests requires Tosca to be configured using DEX.

  • Set the ‘Create test requirements?’ field to true to create test requirements for most-at-risk objects that do not already have requirements in the specified Tosca, qTest or ALM Test Repository. Requirement names have the format LiveCompare_Most-at-risk_Gaps <YYYY-MM-DD HH:MM:SS> <Object Name - Object Description>.

  • Set the ‘Requirements Path’ field to the name of the root requirements folder to be used in the specified Tosca Test Repository. If this field is not set, ‘Requirements’ is used as the default.
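As a reference for the default time-stamped names used above, the documented LiveCompare_<YYYY-MM-DD HH:MM:SS> format corresponds to the following sketch (illustrative Python, not the App’s own code):

    from datetime import datetime

    # Default name format: LiveCompare_<YYYY-MM-DD HH:MM:SS>
    default_name = datetime.now().strftime("LiveCompare_%Y-%m-%d %H:%M:%S")
    print(default_name)  # e.g. LiveCompare_2025-01-31 14:30:00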

Click ‘Run’. When the variant has completed, its results may be accessed from the App Cockpit screen.

App Results

The Cross System Impact Analysis App generates a Dashboard report, which includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • Dashboard tiles display the name of the back-end analysis system, the name of the front-end system, the name of the Performance History system (including the date range for which performance history data was obtained), and the name of the Test Repository that was searched for matching test assets.

The Dashboard report also includes links to the following reports:

Smart Impact Analysis Details

This report includes the following spreadsheets:

Help

This spreadsheet provides help for each of the spreadsheet reports.

Home

This spreadsheet lists the used, impacted and most-at-risk objects, and their associated Application Areas and descriptions. It includes:

  • Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).

  • Impacted objects with no impactful changes.

  • Used objects with no impactful changes.

The STATUS column displays the status of each object, either Used, Impacted or Most-at-risk.

The usage count and number of users for each object are listed, obtained from the available performance history data. Click a hyperlink in the USERS column to display the users in the Impacted Users spreadsheet. Note that the USERS column is blank for objects whose status is ‘Used’.

The RISK column displays H (high risk), M (medium risk) or L (low risk) for most-at-risk objects. These values rank each object’s risk based on the depth of the impact and the object’s frequency of use.
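The App’s exact scoring is not documented here; the following sketch only illustrates how impact depth and usage frequency could combine into an H/M/L ranking. The weights and thresholds are assumptions for the example.

    # Illustrative only: hypothetical weights and thresholds,
    # not the App's actual risk scoring.
    def risk_rank(impact_depth: int, usage_count: int) -> str:
        score = impact_depth * 10 + usage_count
        if score >= 100:
            return "H"  # high risk
        if score >= 20:
            return "M"  # medium risk
        return "L"      # low risk

    print(risk_rank(impact_depth=5, usage_count=80))  # H
    print(risk_rank(impact_depth=1, usage_count=15))  # M
    print(risk_rank(impact_depth=0, usage_count=3))   # L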

The IMPACTFUL_OBJECTS column displays the number of changing objects that impact each object. Click a hyperlink in this column to display the impacting objects in the Impactful Objects spreadsheet. New most-at-risk objects (that have no impactful objects and a usage count of 0) are set to have themselves as a single impactful object.

The CUSTOM column is set to Y for custom used, impacted and most-at-risk objects.

The TESTS column lists the number of matching tests in the specified Test Repository. Click a hyperlink in this column to display the matching tests in the Test Hit Details spreadsheet.

The BUSINESS_CRITICAL column is set to Y for objects included in the Business Critical Objects External Data Source.

Impactful Objects

This spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHILD_NAME and CHILD_TYPE columns. The CHANGE_ID column identifies the transport that includes the impacting object.

For each used impacted object, the DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s screens in the Impacted DYNPs spreadsheet.

Impacted DYNPs

This spreadsheet lists changing objects in the CHILD_NAME and CHILD_TYPE columns, used impacted objects in the TYPE and NAME columns, and their associated screens (including each screen’s descriptive text). Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the impacted object’s associated screens.

Impacted Users

This spreadsheet lists each impacted object, and its usage count for each user according to the available performance history data. Each user’s type is also shown. Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the impacted object.

Test Hit Details

This spreadsheet includes the details for each test asset in the specified Test Repository that matched a most-at-risk executable, including the matched name. Each test is given a rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. Higher-ranked tests should be prioritized over lower-ranked tests. If the spreadsheet is accessed via a hyperlink, it displays the details for the matching test asset.
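As with the RISK column, the exact ranking formula is not documented here; the sketch below only illustrates how the listed factors could be weighted into an H/M/L rank, with hypothetical weights and thresholds.

    # Hypothetical weighting of the documented factors; illustrative only.
    def test_rank(days_since_run: int, pass_rate: float,
                  runs_per_day: float, steps: int) -> str:
        score = pass_rate * 50 + runs_per_day * 10 - days_since_run - steps / 2
        if score >= 40:
            return "H"
        if score >= 10:
            return "M"
        return "L"

    print(test_rank(days_since_run=2, pass_rate=0.9, runs_per_day=1, steps=10))  # H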

The HAS_DATA column is not used.

Test Data

This spreadsheet is not used.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the App’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.
