Smart Impact App

The Smart Impact App is provided with one or more of the following for the SAP system to be analyzed:

  • A set of transports
  • A set of objects
  • A set of ChaRM change requests

These provide the Smart Impact App with a set of changing objects. The Smart Impact App identifies an optimal (most-at-risk) set of used executables which, when tested, will exercise each of the changing objects. The analysis is driven by the changing objects and by a set of used objects obtained from a Performance History system. Typically, this is your production system.

The Smart Impact App also identifies the screens, users and roles impacted by the changing objects.

The Smart Impact App may be used to assess the impact of support packs, or the impact of custom development changes. The type of analysis performed depends on the number of changing objects.

  • If there are fewer than 1000 changing objects, a top-down analysis is performed beginning with the changing objects.
  • If there are 1000 or more changing objects, a bottom-up analysis is performed beginning with the used objects obtained from the Performance History system.

The changing objects are compared on the Analysis System and a Comparison System.

The Smart Impact App may be integrated with a Test Repository to identify Hits and Gaps.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.
  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

If required, the Gap objects may be used to create test execution lists in the specified Test Repository.

The App produces a Dashboard report and an associated Excel report.

A user with LiveCompare Editor privileges must prepare this App by running the Create Object Links Cache workflow for the ‘Analysis System’ RFC Destination, and making sure that performance history data is available for the ‘Performance History System’ RFC Destination. If you plan to compare the changing objects, the IMG structure for the Analysis system must be downloaded before running the App. See the ‘Prerequisites’ section below for details.

DevOps Categories

Testing

Prerequisites

If a support pack or transport has not been applied to the Analysis system, it must be disassembled before it can be analyzed by the App. This can be done in SAP by running the SAINT transaction and selecting ‘Disassemble OCS Package’ from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the PacDsm App.

The App requires that SAP’s Where Used indexes are up to date on the Analysis system. For further details, see the Step 1 (As-Is) - Checking the Integrity of the Where Used Indexes help topic.

Before running the Smart Impact App for the first time, you will need to run the Create Object Links Cache workflow from the Impact Analysis templates folder to create an Object Links Cache database for the system that contains your changing objects. Check the system’s RFC Destination in the LiveCompare hierarchy first, and select the OLC tab to verify whether an Object Links Cache database has already been created. The Create Object Links Cache workflow may be run incrementally to update the Object Links Cache database with any recent object dependency changes.

If you plan to use the App to identify impacted IDocs, the Cache IDoc Impact Data workflow from the Tools templates folder must be run first in order to populate the IDoc Impact Cache External Data Source. Refer to the template’s Help file for details.

If you plan to use the App to identify test hits and gaps in a Test Repository, the Create Test Repository Cache workflow from the Tools templates folder must be run first in order to populate the Test Repository’s cache. Refer to the template’s Help file for details.

You should make sure that performance history data is available on the RFC Destination selected for the ‘Performance History System’. Select the RFC Destination in the LiveCompare hierarchy and click the PHD tab. Select the source for performance history data, and if necessary the number of months of data to retrieve, then click ‘Update Data’. The performance history data may also be retrieved using a schedule. See the Retrieving Performance History Data help topic for details.

If you plan to compare the changing objects, the IMG structure for the Analysis system must be downloaded before running the App. The ‘Test Data’ worksheet generated by the App includes reports that identify the locations of data changes in the downloaded IMG structure.

If required, the Business Critical Objects External Data Source should be populated with a set of business critical objects that are included in the set of most-at-risk executables if they are impacted by one or more changing objects. The External Data Source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option in the LiveCompare Studio to upload your own .CSV file.
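
As an illustration, a .CSV file with the TYPE and NAME columns expected by the Business Critical Objects External Data Source might be generated as follows. The object names used here are hypothetical examples, not values from the App:

```python
import csv

# Hypothetical business critical objects. TYPE and NAME are the two
# columns the Business Critical Objects External Data Source expects.
business_critical = [
    ("TCOD", "VA01"),   # example transaction code
    ("TCOD", "ME21N"),  # example transaction code
    ("WAPA", "ZSHOP"),  # hypothetical custom BSP application
]

# Write the header row followed by one row per object.
with open("business_critical_objects.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["TYPE", "NAME"])
    writer.writerows(business_critical)
```

A file of this shape can then be uploaded with the External Data Source’s ‘Replace Data File’ option.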

The TransportsToIgnore External Data Source contains Regular Expressions which are used to filter out transports containing custom objects. If required, this External Data Source may be edited in the LiveCompare Studio using the ‘Replace Data File’ option.
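
To illustrate how regular expressions of the kind stored in TransportsToIgnore filter out transports, here is a minimal sketch. The patterns and transport names are hypothetical examples, not the shipped defaults:

```python
import re

# Hypothetical ignore patterns; each entry is a regular expression
# matched against a transport name.
ignore_patterns = [r"^DEVK9\d+$", r"^SANDBOX.*"]

transports = ["DEVK900123", "PRDK000042", "SANDBOX_TEST", "DEVK800007"]

# Keep only transports that match none of the ignore patterns.
kept = [t for t in transports
        if not any(re.match(p, t) for p in ignore_patterns)]
print(kept)  # ['PRDK000042', 'DEVK800007']
```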

Running the App

To run the Smart Impact App, select the App from the Home screen and create an App variant. Complete the variant screen as follows:

  • If required, edit the ‘Transports’ table to provide a list of transports to be analyzed.
  • If required, edit the ‘Objects’ table to provide a list of objects to be analyzed.
  • If required, edit the ‘ChaRM Change Requests’ string list to provide a list of ChaRM change requests to be analyzed.
  • Set the ‘Analysis System’ field to the RFC Destination for the system that contains the transports or objects to be analyzed.
  • Set the ‘SAP Solution Manager System’ field to the RFC Destination of an SAP Solution Manager system. The system is used to find transport objects associated with each of the specified ChaRM change requests.
  • Set the ‘Performance History System’ field to the RFC Destination for the system from which performance history data has been obtained.
  • Set the ‘Compare Changing Objects?’ switch to specify whether the most-at-risk executables will be compared on the Analysis and Comparison systems.
  • Set the ‘Comparison System’ field to the RFC Destination for the system on which to compare the most-at-risk executables.
  • If required, select a Test Repository in the ‘Test Repository’ field to identify test asset hits and gaps using the names of the most-at-risk objects. Choose --Not Selected-- to clear your selection.
  • Set the ‘Create test execution?’ switch to specify whether the matching test assets will be used to create test execution lists in the specified Test Repository. The test execution lists will be stored in a time-stamped folder named LiveCompare_<YYYY-MM-DD HH:MM:SS>.
  • Set the ‘Test Search Paths’ string list to limit the search for test assets to one or more project folders in the specified Test Repository. Each path should begin with the Subject folder, and a backslash (\) should be used to separate path components, for example Subject\Release1.
  • Click ‘Run’. When the variant has completed, its results may be accessed from the App Cockpit screen.
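
As a small sketch of the ‘Test Search Paths’ format described above, each entry is a backslash-separated path beginning with the Subject folder (the path value below is the example from the text):

```python
# A 'Test Search Paths' entry: backslash-separated components,
# beginning with the Subject folder.
path = r"Subject\Release1"

components = path.split("\\")
assert components[0] == "Subject"  # paths must start at the Subject folder
print(components)  # ['Subject', 'Release1']
```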

App Results

The Smart Impact App generates a Dashboard report which includes a treemap summarizing the not-impacted, impacted and most-at-risk objects, grouped by functional area. It also includes links to the following reports.

Smart Impact Analysis Dashboard

If there are fewer than 1000 changing objects, the report includes the following charts:

  • The Changing Object Summary pie chart provides a summary of the changing objects by their change type.
  • The Changing Objects by Type column chart provides a summary of the changing objects by their object type.
  • The Impacted & Most-at-risk column chart summarizes the impacted and most-at-risk objects by object type. The results for impacted Transactions (TCOD), BSP Applications (WAPA), Smart Forms (SSFO) and remote function calls are capped so that a maximum of 10 impacted objects of each type are returned for each changing object, in addition to any impacted business critical objects of that type. Impacted roles and IDocs are not capped.
  • The Most-at-risk & Test Coverage column chart shows the number of most-at-risk objects that have test coverage in the specified Test Repository.
  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets.

If there are 1000 or more changing objects, the report includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.
  • The Most-at-risk Test Hits & Gaps pie chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.
  • The Impacted Objects by Type bar chart summarizes the impacted objects by their type.
  • The Most-at-risk by Application Area column chart displays the top 20 most-at-risk objects by Application Area. The remaining Application Areas are grouped together in the ‘Other’ category.
The Smart Impact Analysis Dashboard includes the following Excel reports:

Smart Impact Analysis Details

If there are fewer than 1000 changing objects, this Excel report includes the following spreadsheets:

Dashboard

This spreadsheet includes the charts shown on the Smart Impact Analysis Dashboard (fewer than 1000 changing objects).

Help

This spreadsheet provides help for each of the spreadsheet reports.

Home

This spreadsheet lists changing objects (in the CHILD_TYPE and CHILD_NAME columns) and their associated change IDs and change types. The CHANGE_ID column may store a transport name, a ChaRM change request, or ‘Objects’ for changing objects that were specified individually. Changing objects that do not impact other objects are also included.

The IMPACTED_OBJECTS column displays the number of impacted objects for each changing object. Click a link in this column to display the impacted objects in the Impacted Objects spreadsheet.

Impacted Objects

This spreadsheet lists used impacted objects in the NAME and TYPE columns, and the objects that impact them in the CHILD_TYPE and CHILD_NAME columns. Details for the used impacted objects are listed in the DESCRIPTION, APP_AREA and DEVCLASS columns.

For each used impacted object:

  • The DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s screens in the Impacted DYNPs spreadsheet.
  • The USAGE column lists the object’s usage count according to the available performance history data.
  • The USERS column lists the number of users of the object according to the available performance history data. Click a hyperlink to display the object’s users in the Impacted Users spreadsheet.
  • The TEST_HITS column lists the number of matches found in the specified Test Repository. Click a hyperlink to display details for the matched test asset in the Test Hit Details spreadsheet.
  • The CUSTOM column is set to ‘Yes’ for custom used objects.
  • The MAR column is set to ‘Y’ for impacted objects that are most-at-risk and recommended for testing. Each changing object will have no more than one most-at-risk executable. The BUSINESS_CRITICAL column is set to ‘Y’ for Business Critical executables, and to <blank> for objects that are not identified as Business Critical.

Impacted DYNPs

This spreadsheet lists each used object and its associated screens (including each screen’s descriptive text). If a hyperlink is selected in the Impacted Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the used object’s associated screens.

Impacted Users

This spreadsheet lists each impacted object, and its usage count for each user according to the available performance history data. If a hyperlink is selected in the Impacted Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the associated object.

Role Details

This spreadsheet lists details for the impacted roles, including each role’s name, and the details for its authorization objects. The STATUS column indicates the comparison status for each role on the Analysis and Comparison systems.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk executable is a hit or a gap in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.
  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

Test Hit Details

This spreadsheet includes the details for each test asset that matched a most-at-risk executable in the specified Test Repository. It also displays the total usage, top user and top usage details for each of the matched most-at-risk executables.

If there are 1000 or more changing objects, the report includes the following spreadsheets:

Dashboard

This spreadsheet includes the charts shown on the Smart Impact Analysis Dashboard (1000 or more changing objects). It also provides a summary of the Used, Impacted and Most-at-risk objects. Click a link in the Used or Standard column to display the objects in the Used, Impacted & Most-at-risk spreadsheet. The % Saving values for impacted and most-at-risk objects are calculated as follows:

% Saving (Impacted) = (Total Used - Total Impacted) / Total Used * 100

% Saving (Most-at-risk) = (Total Used - Total Most-at-risk) / Total Used * 100
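
As a worked example of the % Saving formulas, using hypothetical totals (not values produced by the App):

```python
# Hypothetical totals illustrating the % Saving formulas above.
total_used = 5000
total_impacted = 1200
total_most_at_risk = 300

# % Saving = (Total Used - Total X) / Total Used * 100
saving_impacted = (total_used - total_impacted) / total_used * 100
saving_mar = (total_used - total_most_at_risk) / total_used * 100

print(f"% Saving (Impacted) = {saving_impacted:.1f}")      # 76.0
print(f"% Saving (Most-at-risk) = {saving_mar:.1f}")       # 94.0
```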

Help

This spreadsheet provides help for each of the spreadsheet reports.

Home

This spreadsheet lists the used, impacted and most-at-risk objects, and their associated Application Areas. The IMPACTFUL_OBJECTS column displays the number of changing objects that impact each object. Click a link in this column to display the impacting objects in the Impactful Objects spreadsheet.

Impactful Objects

This spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHILD_NAME and CHILD_TYPE columns. Details for the used impacted objects are listed in the DESCRIPTION, APP_AREA and DEVCLASS columns. Click a hyperlink in the TEST_HITS column to display the associated test in the Test Hit Details spreadsheet.

For each used impacted object:

  • The DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s screens in the Impacted DYNPs spreadsheet.
  • The USAGE column lists the object’s usage count according to the available performance history data.
  • The USERS column lists the number of users of the object according to the available performance history data. Click a hyperlink to display the object’s users in the Impacted Users spreadsheet.
  • The TEST_HITS column lists the number of matches found in the specified Test Repository. Click a hyperlink to display details for the matched test asset in the Test Hit Details spreadsheet.

Impacted DYNPs

This spreadsheet lists changing objects in the CHILD_NAME and CHILD_TYPE columns, used impacted objects in the TYPE and NAME columns, and their associated screens (including each screen’s descriptive text). Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the impacted object’s associated screens.

Impacted Users

This spreadsheet lists each impacted object, and its usage count for each user according to the available performance history data. Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the impacted object.

Role Details

This spreadsheet lists details for the impacted roles, including each role’s name, and the details for its authorization objects. The STATUS column indicates the comparison status for each role on the Analysis and Comparison systems.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk executable is a hit or a gap in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.
  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

Test Hit Details

This spreadsheet includes the details for each test asset that matched a most-at-risk executable in the specified Test Repository. It also displays the total usage, top user and top usage details for each of the matched most-at-risk executables. If the spreadsheet is accessed via a hyperlink, it displays the details for the matching test asset.

Test Data

If the ‘Compare Changing Objects?’ switch is set to ‘Yes’, this spreadsheet lists the changing SAP tables and includes links to Object Differences reports showing data changes. Input table key (TABK) objects appear in the Test Data worksheet as tables (TABL). Each table name is a hyperlink to the compared Table Key Table Contents for all the table keys of the associated SAP table.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the App’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.

