Web Services Impact Analysis

This workflow is provided with one or more of the following inputs for the system to be analyzed:

  • A set of transports.

  • A set of objects.

  • A set of ChaRM change requests.

It identifies an optimal (most-at-risk) set of executables which, when tested, will exercise each of the changing objects obtained from the specified transports, object list, and the transport objects associated with the specified ChaRM change requests. The analysis is driven by the active Web Services obtained from the VEPFUNCSOAPEXT table on the Performance History system. If required, the most-at-risk executables may be compared on the Analysis and Comparison systems.

A Test Repository may also be specified. If this is done, the Test Repository is searched for test assets that refer to the names of the most-at-risk executables, and the workflow identifies ‘hits’ and ‘gaps’ in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which matching test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.
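The hit/gap classification above can be illustrated with a minimal sketch. This is not the product's search algorithm; the object and test asset names below are hypothetical, and matching is reduced to a simple substring check on asset names.

```python
# Illustrative sketch (not the product's algorithm): classify most-at-risk
# object names as 'hits' or 'gaps' by checking whether any test asset name
# in the repository refers to them. All names are hypothetical.
most_at_risk = ["ZSD_ORDER_ENTRY", "ZFI_POSTING", "ZMM_GOODS_ISSUE"]
test_assets = ["TC_ZSD_ORDER_ENTRY_smoke", "TC_ZFI_POSTING_regression"]

# A 'hit' is a most-at-risk name referred to by at least one test asset.
hits = [name for name in most_at_risk
        if any(name in asset for asset in test_assets)]

# A 'gap' is a most-at-risk name with no matching test asset.
gaps = [name for name in most_at_risk if name not in hits]

print(hits)
print(gaps)
```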

Optionally, the matching test assets may be used to create a test execution.

The workflow produces a Dashboard report and an associated Excel report.


This workflow should be used to perform an impact analysis for a set of transports, a set of standard and custom objects obtained from transports, or a set of standard and custom objects obtained from one or more ChaRM change requests. To analyze the impact of a set of custom objects extracted from transports, use the Object Change Impact Analysis workflow in the Impact Analysis hierarchy folder.

Before You Begin

Before running the Web Services Impact Analysis workflow, please note the following:

If a support pack or transport has not been applied to the Analysis system, it must be disassembled before it can be analyzed by the Web Services Impact Analysis workflow. This can be done in SAP by running the SAINT transaction and selecting ‘Disassemble OCS Package’ from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the Package Disassembler App.

The Web Services Impact Analysis workflow requires that SAP’s Where Used indexes are up to date on the Analysis system. For further details, see the Step 1 (As-Is) - Checking the Integrity of the Where Used Indexes help topic.

The Create Object Links Cache workflow from the Prerequisites templates folder should be run for the Analysis system to create a cache of object links. See the workflow’s associated help file for details.

If required, the Business Critical Objects External Data Source should be populated with a set of business critical objects that are always included in the set of most-at-risk executables. The External Data source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option to upload your own .CSV file.
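As a sketch of the expected file layout, the snippet below builds and validates a .CSV with the TYPE and NAME columns described above. The object names are hypothetical examples, and the validation logic is illustrative rather than LiveCompare's own.

```python
import csv
import io

# Hypothetical sample of a business-critical objects file:
# a .CSV with TYPE and NAME columns (object type and object name).
sample = """TYPE,NAME
PROG,ZBILLING_RUN
TCOD,VA01
"""

def load_business_critical(text):
    """Parse the CSV and check that every row has TYPE and NAME values."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        if not row.get("TYPE") or not row.get("NAME"):
            raise ValueError("Each row needs TYPE and NAME values")
    return [(r["TYPE"], r["NAME"]) for r in rows]

objects = load_business_critical(sample)
print(objects)
```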

Creating RFC Destinations

Before you begin, you will need to create RFC Destinations for the Analysis, Performance History and Comparison systems, and for the SAP Solution Manager system if ChaRM change requests are to be analyzed.

Preparing the Workflow

To prepare the Web Services Impact Analysis workflow, drag its workflow template from the Templates folder into your own workspace, and modify the workflow as follows:

To specify the system to analyze:

  1. Select the Analysis System parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the system to analyze, then click ‘Save’.

To specify the Performance History system, from which to extract details for active Web Services:

  1. Select the Performance History System parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the system from which to extract details for active Web Services, then click ‘Save’.

To specify the Comparison system, on which to compare the most-at-risk executables (the name of this system is also used in the workflow’s summary results):

  1. Select the Comparison System parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the Comparison system, then click ‘Save’.

To specify the SAP Solution Manager system, from which to obtain transport objects associated with each ChaRM change request:

  1. Select the SAP Solution Manager System parameter and choose ‘Edit RFC Destination’ from its context menu to display the RFC Destination dialog.

  2. Select the RFC Destination for the SAP Solution Manager system, then click ‘Save’.

To specify which transports to analyze:

  1. Select the Transport List parameter and choose ‘Edit Table’ from its context menu to display the Table Editor dialog.

  2. Enter one or more transport names from the Source system, or paste in a selection of transport names from an Excel spreadsheet, then click ‘Save’.

To specify a set of objects to analyze:

  1. Select the Objects parameter and choose ‘Edit Table’ from its context menu to display the Table Editor dialog.

  2. Enter one or more object names from the Analysis system, completing the NAME and TYPE columns, or paste in a selection of objects from an Excel spreadsheet, then click ‘Save’.

To specify one or more ChaRM change requests to analyze:

  1. Select the ChaRM Change Requests parameter and choose ‘Edit String List’ from the context menu to display the String List Editor dialog.

  2. Click 'Insert Row' to enter the names of each ChaRM change request, then click ‘Save’. Each ChaRM change request should be added as a separate string entry.

To specify whether the most-at-risk executables will be compared on the Analysis and Comparison systems, set the Compare Changing Objects? parameter to either true or false.

To specify a Test Repository in which to identify test asset hits and gaps using the names of the most-at-risk objects:

  1. Select the Test Repository parameter and choose ‘Edit Test Repository’ from its context menu to display the Test Repository dialog.

  2. Select the Test Repository to search for matching test assets, then click ‘Save’.

To limit the search for test assets to one or more project folders in the specified Test Repository:

  1. Select the Test Search Paths parameter and choose ‘Edit String List’ from the context menu to display the String List Editor dialog.

  2. Click 'Insert Row' to enter each search path, then click ‘Save’. Each search path should be added as a separate string entry. Search paths should begin with the Subject folder, and a backslash (\) should be used to separate path components, for example Subject\Release1.

To specify whether the matching test assets will be used to create a test execution in the specified Test Repository Project, set the Create test execution? parameter to either true or false.

If the Create test execution? parameter is set to true, set the Test Set Name parameter to the name to be used for the root test execution folder in which to store the matching test assets.

Save the workflow using the ‘Save’ toolbar button.

Running the Workflow

To run the Web Services Impact Analysis workflow, click the ‘Run’ toolbar button, choose ‘Run Now’ from the diagram’s context menu, or press F5. The currently running workflow action is marked with an animated display. When the workflow execution has completed, select the Report URL dataset and choose ‘View Details’ from the context menu to access the generated report.

Dashboard Report

The Web Services Impact Analysis workflow generates a Dashboard report, which includes the following charts:

  • The Used, Impacted & Most-at-Risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk Test Hits & Gaps pie chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.

  • The Impactful Objects by Type bar chart summarizes the changing objects that impact a used object.

  • The Most-at-risk by Application Area column chart summarizes the most-at-risk objects by Application Area.

Additional Resources

The Additional Resources section of the Dashboard report includes links to the following reports:

Smart Impact Analysis Details

This Excel report includes the following spreadsheets:

Dashboard

The Dashboard spreadsheet includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects. Click a link in the chart’s table to display the associated data. The % Saving values for Impacted and Most-at-risk objects are calculated as follows:

% Saving (Impacted) = (Total Used - Total Impacted) / Total Used * 100

% Saving (Most-at-risk) = (Total Used - Total Most-at-risk) / Total Used * 100
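The two % Saving formulas can be checked with a short worked example. The counts below are hypothetical and chosen only to make the arithmetic easy to follow.

```python
# Worked example of the % Saving formulas above, with hypothetical counts.
total_used = 400
total_impacted = 120
total_most_at_risk = 30

# % Saving (Impacted) = (Total Used - Total Impacted) / Total Used * 100
saving_impacted = (total_used - total_impacted) / total_used * 100

# % Saving (Most-at-risk) = (Total Used - Total Most-at-risk) / Total Used * 100
saving_most_at_risk = (total_used - total_most_at_risk) / total_used * 100

print(saving_impacted)      # 70.0
print(saving_most_at_risk)  # 92.5
```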


  • The Most-at-risk Test Hits & Gaps pie chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changes by Object Type bar chart summarizes the changing objects by object type.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Used, Impacted & Most-at-risk

This spreadsheet lists each of the used, impacted and most-at-risk objects. It also displays the total usage, top user and top usage details for each of the most-at-risk executables. The most-at-risk objects include any new custom programs, transactions and BSP applications from the transports, and any new standard objects of these types that are contained in custom transports.

Most-at-risk Details

This spreadsheet lists the most-at-risk executables and screens, which are recommended for testing. If the Compare Changing Objects? parameter in the workflow is set to true, click a hyperlink in the CHILD_NAME column to display comparison details for the selected executable on the Analysis and Comparison systems. When the CHILD_TYPE field stores the type TABL, the CHILD_NAME field is a hyperlink to the Table Key Table Contents for all the table keys of the associated SAP table.

The spreadsheet displays the total usage, top user and top usage details for each of the most-at-risk executables. It also includes the following details for the source objects that would be exercised by testing each executable.

  • DATE: The source object’s last modification date.

  • CHANGES: The number of transports on the Analysis system that contain the changing object. This column is set to 0 if a child object is a table used in a view, and only the view is included in the analyzed transports.

  • FANIN: The source object’s fan-in value. The fan-in value of an object is the number of other objects that reference it. Multiple references from the same object are counted as one access.

  • FANOUT: The source object’s fan-out value. The fan-out value of an object is the number of other objects that it references. Multiple references to the same object are counted as one access.
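The fan-in and fan-out definitions above can be sketched against a small reference graph. The graph and object names below are hypothetical; duplicate references are assumed to be already collapsed, so multiple references between the same pair count once.

```python
# Hypothetical reference graph: refs[X] is the set of distinct objects
# that X references (so multiple references count as one access).
refs = {
    "ZPROG_A": {"ZFUNC_1", "ZTABL_1"},
    "ZPROG_B": {"ZFUNC_1"},
    "ZFUNC_1": {"ZTABL_1"},
}

def fan_out(obj):
    """Fan-out: the number of distinct objects referenced by obj."""
    return len(refs.get(obj, set()))

def fan_in(obj):
    """Fan-in: the number of distinct objects that reference obj."""
    return sum(1 for targets in refs.values() if obj in targets)

print(fan_out("ZPROG_A"))  # references ZFUNC_1 and ZTABL_1 -> 2
print(fan_in("ZFUNC_1"))   # referenced by ZPROG_A and ZPROG_B -> 2
```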

Extra

This spreadsheet lists any testable changing objects, and testable objects found in the input transports or ChaRM change requests, excluding any objects that are in the most-at-risk set. These objects are candidates for testing regardless of whether they are used or recommended.

Testable objects include Programs (PROG), Transactions (TCOD) and BSP Applications (WAPA). The Extra spreadsheet includes objects from custom transports, and custom objects from SAP transports. Standard objects found in SAP transports are not included.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk executable is a hit or a gap in the specified Test Repository.

  • ‘Hits’ are most-at-risk object names for which test assets have been found.

  • ‘Gaps’ are most-at-risk object names for which there are no available test assets.

The spreadsheet also displays the total usage, top user and top usage details for each of the most-at-risk executables.

Test Hit Details

This spreadsheet includes the details for each test asset that matched a most-at-risk executable in the specified Test Repository. It also displays the total usage, top user and top usage details for each of the matched most-at-risk executables.

Test Data

If the Compare Changing Objects? parameter is set to true, this spreadsheet lists the changing SAP tables and includes links to Object Differences reports showing data changes. Input table key (TABK) objects appear in the Test Data worksheet as tables (TABL). Each table name is a hyperlink to the compared table key table contents for each of the changed tables.

System Info

This spreadsheet lists the SAP system details for the Analysis, Performance History, Comparison and SAP Solution Manager systems.

STATUS Column

The Used, Impacted & Most-at-risk spreadsheet includes a column named STATUS. This column is set to:

  • ‘Most-at-risk’ if the executable is most-at-risk. Most-at-risk executables are recommended for testing.

  • ‘Impacted’ if the executable is impacted by the changing objects but covered by a most-at-risk executable.

  • ‘Used’ if the executable is used, but not impacted by the changing objects.

BUSINESS_CRITICAL Column

The Used, Impacted & Most-at-risk and Most-at-risk Details spreadsheets include a column named BUSINESS_CRITICAL. This column is set to ‘Y’ for Business Critical executables, and left blank for objects that are not identified as Business Critical.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the App’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.