Released Requests Impact Analysis

This workflow runs the Smart Impact Analysis workflow to perform an impact analysis for any newly released requests on the Analysis system, as identified by the Find Released Requests workflow. These requests provide the changing objects used in the analysis.

The Excel report returned by the Smart Impact Analysis workflow is emailed to the specified recipients.

The workflow should be scheduled to run periodically. During each run, an impact analysis is performed for the released requests from the Analysis system that have not yet been processed.
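
For illustration only, the incremental behaviour can be pictured with the following Python sketch. The request store file, its layout and the request IDs are assumptions made for the example; they are not how LiveCompare records processed requests.

    # Illustrative only: process each released request exactly once by
    # remembering which request IDs have already been analyzed.
    import json
    from pathlib import Path

    STORE = Path("request_store.json")  # hypothetical store of processed request IDs

    def unprocessed(released_requests):
        processed = set(json.loads(STORE.read_text())) if STORE.exists() else set()
        new = [r for r in released_requests if r not in processed]
        STORE.write_text(json.dumps(sorted(processed | set(new))))
        return new

    # First run analyzes both requests; the next run sees only the new one.
    print(unprocessed(["DEVK900101", "DEVK900102"]))                # ['DEVK900101', 'DEVK900102']
    print(unprocessed(["DEVK900101", "DEVK900102", "DEVK900103"]))  # ['DEVK900103']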

Prerequisites

The workflow requires access to RFC Destinations for an Analysis system to which developer changes will be submitted, and a Performance History System from which performance history data will be retrieved.

The Create Object Links Cache workflow from the Prerequisites templates folder should be run to create an Object Links Cache database for the Analysis system.

You should make sure that performance history data is available on the RFC Destination selected for the Performance History System. Select the RFC Destination in the LiveCompare hierarchy and click the PHD tab. Select the source for performance history data, and if necessary, the number of months of data to retrieve, then click ‘Update Data’. The performance history data may also be retrieved using a schedule. See the Retrieving Performance History Data help topic for details.

The ChangingObjectsToIgnore External Data Source removes tables whose names begin with ENH or AGR from the set of changing objects. If required, this External Data Source may be edited in the LiveCompare Studio using the ‘Replace Data File’ option.
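
The effect of this filter amounts to a simple name-prefix check, sketched below in Python for illustration only. The object names are invented, and the function is not part of LiveCompare.

    # Illustrative only: drop changing objects whose names begin with ENH or AGR.
    IGNORED_PREFIXES = ("ENH", "AGR")

    def filter_changing_objects(names):
        return [n for n in names if not n.upper().startswith(IGNORED_PREFIXES)]

    print(filter_changing_objects(["ENHSPOT1", "AGR_USERS", "ZSALES_ORDER"]))
    # -> ['ZSALES_ORDER']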

LiveCompare should be configured to send emails. See the Integrating LiveCompare with your Email System section of the LiveCompare Installation and Configuration Guide for details.

Creating RFC Destinations

Before you begin, you will need to create RFC Destinations for the Analysis, Performance History and Comparison systems.

Preparing the Workflow

The Released Requests Impact Analysis workflow should be run in a separate workspace for each system to be analyzed. To prepare the Released Requests Impact Analysis workflow, carry out the following steps.

  1. Create a new workspace whose name reflects the system to be analyzed, for example RRIA - <Analysis System>.

  2. Select the Templates > Impact Analysis > Released Requests Impact Analysis template in the LiveCompare hierarchy and choose ‘Copy to Workspace’ from the context menu.

  3. Select RRIA - <Analysis System> as the target workspace, and click Copy. A number of dependent templates will also be copied.

  4. Copy the Templates > Impact Analysis > Initialize Request Store workflow to the RRIA - <Analysis System> workspace.

  5. If required, configure the Initialize Request Store workflow to set the date from which to retrieve released requests. See the template’s help file for details.

  6. Run the Initialize Request Store workflow in the RRIA - <Analysis System> workspace.

Select the Released Requests Impact Analysis workflow in the RRIA - <Analysis System> workspace, and configure the workflow as follows.

  1. Set the Analysis System RFC Destination parameter to the RFC Destination for the system that contains the requests to be analyzed.

  2. Set the Performance History System RFC Destination parameter to the RFC Destination for the system from which performance history data has been obtained.

  3. Set the Comparison System RFC Destination parameter to the RFC Destination for the system on which to compare the most-at-risk executables.

  4. If required, set the Test Repository parameter to a Test Repository in which to search for test asset hits and gaps using the names of the most-at-risk objects.

  5. Set the Test Search Paths String List parameter to limit the search for test assets to one or more project folders in the specified Test Repository. Each path should begin with the Subject folder, and a backslash (\) should be used to separate path components, for example Subject\Release1.

  6. Set the Create test execution? Boolean parameter to specify whether the matching test assets will be used to create test execution lists in the specified Test Repository.

  7. Set the Execution Name String parameter to the name of the test execution to create. If this parameter is not set, a time-stamped name of the form LiveCompare_<YYYY-MM-DD HH:MM:SS> is used as the default.

  8. Set the Execution Path String parameter to the path of an existing folder in which to store the test execution, for example Execution/TestExecutions. If this field is not set, the test execution will be created in the default Executions folder.

  9. Set the Execute tests? Boolean parameter to specify whether test execution lists will be run in the specified Tosca Test Repository. If this parameter is set to ‘true’, the Create test execution? parameter should also be set to ‘true’. Executing tests requires Tosca to be configured using DEX.

  10. Set the Create test requirements? Boolean parameter to true to create test requirements for most-at-risk objects that do not already have requirements in the specified Tosca, qTest or ALM Test Repository. Requirement names have the format LiveCompare_Most-at-risk_Gaps <YYYY-MM-DD HH:MM:SS> <Object Name - Object Description>.

  11. Set the Requirements Path String parameter to the name of the root requirements folder to be used in the specified Tosca Test Repository. If this field is not set, ‘Requirements’ is used as the default.

  12. Set the Email To String List parameter to a list of email recipients for the impact analysis report. Each email address should be stored as a separate string entry (an example parameter set is sketched after this list).

  13. Set the From String parameter to the EmailFromAddress value stored in LiveCompare’s Configuration – Email screen. You may need to check this setting with a LiveCompare Administrator.

  14. Customize the Subject String parameter and Message String Multiline parameter as appropriate.
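
As a rough illustration of the formats these parameters expect, the following Python sketch builds an example parameter set and generates the default time-stamped execution name. The system names, search path, addresses and the dictionary itself are hypothetical; LiveCompare does not store its parameters this way.

    # Illustrative only: example values in the formats described above.
    from datetime import datetime

    def default_execution_name(now=None):
        # Default used when Execution Name is left blank:
        # LiveCompare_<YYYY-MM-DD HH:MM:SS>
        now = now or datetime.now()
        return "LiveCompare_" + now.strftime("%Y-%m-%d %H:%M:%S")

    parameters = {
        "Analysis System RFC Destination":            "DEV",   # hypothetical destination name
        "Performance History System RFC Destination": "PRD",   # hypothetical destination name
        "Comparison System RFC Destination":          "QAS",   # hypothetical destination name
        "Test Search Paths": [r"Subject\Release1"],             # starts at Subject, '\' separated
        "Create test execution?": True,
        "Execution Name": default_execution_name(),
        "Execute tests?": False,
        "Email To": ["qa.lead@example.com",                     # one address per string entry
                     "dev.team@example.com"],
        "From": "livecompare@example.com",                      # should match EmailFromAddress
    }
    print(parameters["Execution Name"])   # e.g. LiveCompare_2024-05-01 09:30:00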

Scheduling Workflows

The Released Requests Impact Analysis workflows should be run using a schedule. The following workflows will need to be scheduled. To schedule a workflow, select it in the LiveCompare hierarchy and choose ‘Schedule Run’ from the context menu.

Create Object Links Cache

This workflow should be copied from the Prerequisites templates folder to each of your RRIA - <Analysis System> workspaces. It should be configured to create an Object Links Cache database for the Analysis system and then scheduled to run once each week. If possible, the workflow should be scheduled to run when no developer tasks are being performed on the Analysis system (for example, outside office hours).

Released Requests Impact Analysis

This workflow runs the Smart Impact Analysis workflow to perform an impact analysis for any newly released requests on the Analysis system, and sends report links to the specified email recipients. It should be scheduled to run as often as required, for example one or more times each day, depending on how frequently you wish to review the impact analysis reports.

Workflow Results

The Released Requests Impact Analysis workflow generates a Dashboard report which includes a treemap summarizing the not-impacted, impacted and most-at-risk objects, grouped by functional area. See the Dashboard Report Treemaps help topic for details. It also includes links to the following reports.

Smart Impact Analysis Dashboard

This report includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes, by type, the most-at-risk objects and the objects with hits in the specified Test Repository.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets.

The Smart Impact Analysis Dashboard includes the following Excel report:

Smart Impact Analysis Details

This report includes the following spreadsheets:

Dashboard

This spreadsheet includes the charts shown on the Smart Impact Analysis Dashboard.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Home

This spreadsheet lists the used, impacted and most-at-risk objects, and their associated Application Areas. It includes:

  • Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).

  • Impacted objects with no impactful changes.

  • Used objects with no impactful changes.

The RISK column displays H, M, or L for most-at-risk objects. These values rank each object’s risk according to the depth of the impact and the object’s frequency of use: H indicates high risk, M medium risk, and L low risk.
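
Purely as a mental model, the ranking idea can be sketched in Python as follows. The scoring formula and thresholds below are invented for illustration and are not LiveCompare’s actual ranking rules.

    # Illustrative only: rank risk from impact depth and usage frequency.
    # The score and cut-off values are invented; LiveCompare's real rules differ.
    def risk_rank(impact_depth, usage_count):
        score = impact_depth * max(usage_count, 1)
        if score >= 1000:
            return "H"   # high risk
        if score >= 100:
            return "M"   # medium risk
        return "L"       # low risk

    print(risk_rank(impact_depth=3, usage_count=500))  # -> 'H'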

The IMPACTFUL_OBJECTS column displays the number of changing objects that impact each object. Click a link in this column to display the impacting objects in the Impactful Objects spreadsheet.

New most-at-risk objects (that have no impactful objects and a usage count of 0) are set to have themselves as a single impactful object.

Impactful Objects

This spreadsheet lists used impacted objects in the NAME and TYPE columns, and the changing objects that impact them in the CHILD_NAME and CHILD_TYPE columns. The CHANGE_ID column identifies the transport that includes the impacting object. Details for the used impacted objects are listed in the DESCRIPTION, APP_AREA and DEVCLASS columns. Click a hyperlink in the CHILD_NAME column to display comparison details for the selected object. Click a hyperlink in the TEST_HITS column to display the associated test in the Test Hit Details spreadsheet.

For each used impacted object:

  • The DYNP column lists the number of impacted screens for the object in the most-at-risk results. Click a hyperlink to display the object’s screens in the Impacted DYNPs spreadsheet.

  • The USAGE column lists the object’s usage count according to the available performance history data.

  • The USERS column lists the number of users of the object according to the available performance history data. Click a hyperlink to display the object’s users in the Impacted Users spreadsheet.

  • The TEST_HITS column lists the number of matches found in the specified Test Repository. Click a hyperlink to display details for the matched token in the Test Hit Details spreadsheet.

Impacted DYNPs

This spreadsheet lists changing objects in the CHILD_NAME and CHILD_TYPE columns, used impacted objects in the TYPE and NAME columns, and their associated screens (including each screen’s descriptive text). Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s DYNP column, the Impacted DYNPs spreadsheet lists the impacted object’s associated screens.

Impacted Users

This spreadsheet lists each impacted object, and its usage count for each user according to the available performance history data. Click a hyperlink in the NAME column to display the associated test in the Test Hit Details spreadsheet. If a hyperlink is selected in the Impactful Objects spreadsheet’s USERS column, the Impacted Users spreadsheet lists the users of the impacted object.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk executable is a hit or a gap in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.
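
Conceptually, the split between hits and gaps can be sketched as follows (Python, illustrative only; the object and test asset names are invented, and the matching is simplified to a substring check).

    # Illustrative only: split most-at-risk object names into hits and gaps
    # depending on whether any test asset name mentions them.
    def hits_and_gaps(most_at_risk, test_asset_names):
        hits = {obj for obj in most_at_risk
                if any(obj in name for name in test_asset_names)}
        gaps = set(most_at_risk) - hits
        return hits, gaps

    hits, gaps = hits_and_gaps(
        ["VA01", "ZREPORT_X"],
        ["Create sales order VA01", "Regression pack - pricing"])
    print(hits)  # {'VA01'}
    print(gaps)  # {'ZREPORT_X'}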

Test Hit Details

This spreadsheet includes the details for each test asset that matched a most-at-risk executable in the specified Test Repository. It also displays the total usage, top user and top usage details for each of the matched most-at-risk executables. If the spreadsheet is accessed via a hyperlink, it displays the details for the matching test asset.

The HAS_DATA column is set to ‘Y’ for Tosca test cases that match affected data, and to <blank> for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different, in the Analysis system only (added data), or in the Comparison system only (deleted data).
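
The definition of affected data can be pictured as a key-level comparison of table rows from the two systems. The Python sketch below is illustrative only; the rows are invented and this is not LiveCompare’s comparison engine.

    # Illustrative only: affected data = key fields of rows that differ,
    # exist only in the Analysis system (added), or only in the Comparison
    # system (deleted). Rows are held in dicts keyed by their table key fields.
    def affected_keys(analysis_rows, comparison_rows):
        keys = set(analysis_rows) | set(comparison_rows)
        return {k for k in keys
                if analysis_rows.get(k) != comparison_rows.get(k)}

    analysis   = {"0001": {"PRICE": 10}, "0002": {"PRICE": 25}}
    comparison = {"0001": {"PRICE": 10}, "0003": {"PRICE": 7}}
    print(sorted(affected_keys(analysis, comparison)))  # ['0002', '0003']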

Test Data

If a Comparison System has been set and there are fewer than 1000 distinct changing objects, this spreadsheet lists the changing SAP tables and includes links to Object Differences reports showing data changes.

Input table key (TABK) objects appear in the Test Data worksheet as table key table contents (TABB) objects if their contents are different in each of the compared systems. Each table name is a hyperlink to the compared table key table contents for that table.

Note that ‘refresh’ table keys are excluded from the comparison results. These are defined as having a length of 31 and * as their last character.
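
The exclusion rule itself is straightforward to express; the Python sketch below is for illustration only, and the key values are invented.

    # Illustrative only: a 'refresh' table key is 31 characters long and
    # ends with '*'; such keys are excluded from the comparison results.
    def is_refresh_key(table_key):
        return len(table_key) == 31 and table_key.endswith("*")

    refresh = "100" + "X" * 27 + "*"        # 31 characters, ends with '*'
    normal  = "100MATNR0000000001"
    print(is_refresh_key(refresh), is_refresh_key(normal))  # True False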

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the App’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.
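
If you need to consume this report programmatically, a minimal Python sketch using openpyxl could look like the following. The file name is hypothetical, and the script is not part of LiveCompare; it simply assumes each worksheet is named after a parameter and holds that parameter’s value(s) in its first column.

    # Illustrative only: read each worksheet of the Analysis Input Data report.
    from openpyxl import load_workbook   # pip install openpyxl

    wb = load_workbook("Analysis Input Data.xlsx", read_only=True)  # hypothetical file name
    for ws in wb.worksheets:
        values = [row[0] for row in ws.iter_rows(values_only=True)
                  if row and row[0] is not None]
        print(ws.title, "->", values)    # parameter name -> parameter value(s)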

