EPM Automate for Oracle EPM Cloud Applications: How to Leverage the Replay Command

Are you a large company with many users on your Oracle EPM Cloud application? For companies that expect hundreds of users to be opening reports and forms and running business rules in their application at the same time, it is important to understand how this volume of activity will affect the performance of the cloud application before going live. For this reason, Oracle developed the ‘replay’ command in EPM Automate, which “replays the Oracle Smart View for Office load on a service instance to enable performance testing under heavy load to verify that user experience is acceptable when the service is under specified load.”

In this blog, we will go over how to use the ‘replay’ command and discuss a use case in which reporting was replayed for a company with over 200 users.

Example Use Cases

Although we will just be discussing one possible way to use Replay, there are a variety of other ways the command can be used – including but not limited to:

  • Opening forms, entering information, submitting, and saving; creating ad-hoc forms and submitting data
  • Running business rules after saving a form or on their own in another process
  • Opening reports (including changing POVs)

Pre-requisites 

To create and run replays, these items must be downloaded and installed ahead of time:

  • The EPM Automate client
  • Oracle Smart View for Office
  • Fiddler (free; used to record the Smart View actions)

How to Use Replay

To use replay, you will first need to record your Smart View actions. This is done with a free tool called Fiddler, which logs all HTTP(S) traffic between the client and the internet (https://www.telerik.com/fiddler). After Fiddler has been installed, some configuration is needed.

From Tools -> Options -> HTTPS tab: Enable ‘Capture HTTPS CONNECTs’ and ‘Decrypt HTTPS traffic’

At this point, if you open Smart View you will receive several warning messages. To avoid this, generate a Fiddler Root Certificate and add it to the trusted list; the certificate can be removed after your actions have been captured.

From the HTTPS tab, click ‘Actions’ then ‘Trust Root Certificate’

Click ‘Yes’ to trust the Fiddler Root certificate.

Click ‘Yes’ to install the certificate, then click ‘Yes’ to add the certificate to the Machine Root List.

Once your load scripts have been generated, click ‘Actions’ from the HTTPS tab and select ‘Reset All Certificates’ to remove the Fiddler certificate.

Now that Fiddler is enabled and configured, click ‘Capture Traffic’ to capture the actions you take.

Log on to your EPM environment in Smart View and perform all actions you wish to replay. In this case, we will be opening several different reports. Avoid any unrelated activity at this time, because it will also be captured by Fiddler; that includes actions taking place in the background on your computer.

You should see your actions in Fiddler as they are taken.

After the actions are completed, export your session and save it as an HTTPArchive v1.1 (.har) file so that it can be understood by the replay command.

The process of recording sets of activities can be repeated in the same way until you have a series of HAR files to be replayed. 

Next, we will create a replay file. This is the file that will be referenced in the replay command. Create a CSV file with three columns: user name, password, and location of the HAR file(s). Each row represents a user; for example, 200 rows will replay the actions in the HAR files as 200 users.
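For illustration, a replay file might look like the following sketch (the user names, passwords, and HAR file paths are hypothetical; use your own test users and recordings):

jdoe@example.com,Password1,C:\Replay\HAR\open_market_summary.har
asmith@example.com,Password2,C:\Replay\HAR\open_balance_sheet.har
bjones@example.com,Password3,C:\Replay\HAR\open_income_statement.har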

Now that our replay file is created, we are ready to use the replay command.

epmautomate replay REPLAY_FILE_NAME.csv [duration=N] [trace=true]

  • duration is an optional parameter that indicates the number of minutes for which the activities are executed (N = number of minutes)
  • trace=true is an optional setting that creates trace files in XML format
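For example (the replay file name below is hypothetical), running the recorded actions for 10 minutes with tracing enabled would look like this:

epmautomate replay replay_users.csv duration=10 trace=true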

Instead of relying on the trace files for results, we chose to capture the replay results in a more readable CSV file. That file is then moved out of its default EPM Automate folder and into a folder of our choosing, as sketched below.
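As a rough sketch, a Windows batch file wrapping this step might look like the following. The URL, credentials, file names, and folder paths are all hypothetical, and the name and location of the results file can vary by EPM Automate version, so adjust the move step to match your installation:

@echo off
rem Log in to the target EPM Cloud instance (hypothetical credentials and URL).
call epmautomate login serviceadmin MyPassword https://myinstance-pbcs.us2.oraclecloud.com

rem Replay the recorded Smart View actions for every user listed in the replay file,
rem running the activities for 10 minutes.
call epmautomate replay C:\Replay\replay_users.csv duration=10

rem Move the results CSV out of the default EPM Automate folder into a results folder.
rem (Adjust the source path and file name to wherever your installation writes them.)
move "C:\Oracle\EPM Automate\bin\ReplayResults.csv" "C:\Replay\Results\ReplayResults.csv"

call epmautomate logout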

Use Case – Performance Testing & Analysis

A large organization that operates across North America, with roughly 200 Planning & Budgeting users, is in the testing period of an On-Premise to PBCS migration. They are looking to keep the same reporting and analytics they had On-Premise, delivered with the faster, updated PBCS technology. After the migration and several best-practice application management updates made in the interest of speed and performance, we were ready to test the new application's performance.

For testing the speed in the case of a migration, there are 3 important performance indicators to consider:

  • Compare single report performance in PBCS vs. On-Premise.
  • Analyze how long it takes to open each report when all reports are being opened at once in PBCS.
  • Identify single reports (or combinations of reports) that are causing the whole system to slow down.

We used the ReplayResults.csv file that is produced by the replay process to analyze the speed and performance of opening the reports quantitatively.

The results file lists each action that was replayed and how long it took to perform; we will go over a few different ways to interpret these results.

The rows in the results file are iterations of the report being opened. The number of iterations depends on the duration set in the replay command. For example, if duration=10 in the batch file, the report will go through the opening process as many times as possible, back to back, during the 10 minutes the command is running. To determine how long it took to open the report a single time (on average), divide the duration by the number of iterations. If the same sequence appears 20 times in the results file, the Market Summary report takes 0.5 minutes (or 30 seconds) to open, on average (10 minutes / 20 iterations).

Let’s apply this to the 3 performance indicators:

1. Compare single report performance in PBCS vs. On-Premise: Set the amount of time it takes for the On-Premise reports to open as the benchmark. Analyze the opening speed of the PBCS reports against the On-Premise speed. The goal is to have post-migration PBCS reports open quicker.

a. Add a single .har file that opens the Balance Sheet On-Premise into the CSV file and run the replay command.

b. Do the same for the Balance Sheet in PBCS.

c. Run the analysis on each replay file from above and compare speed of On-Prem vs. PBCS.

You want to ensure each report opens faster in PBCS individually before moving on to the next step.

2. Analyze how long it takes to open each report when all reports are being opened at once in PBCS by different users: Set a benchmark, or expectation, for the maximum time any report should take to open, no matter how many other users are opening any number of reports. Make that number the duration in your replay command.

a. Add .har files of all PBCS reports that are used by analysts to the CSV file. These files are now queued up to be opened all at once when the batch file is deployed. Deploy the batch file.

b. Do the same analysis for each report. The speed of each report may be significantly slower. If one or more of the reports does not complete a single full iteration during the allotted time (duration), it was not able to open within the “maximum” time allowed. This may mean some of the reports need maintenance to speed up opening, which brings in the third performance indicator: evaluating which report(s) are causing the most stress on the system.

c. Alternatively, you can extend the duration in the command and run it again. Extend it until each report goes through a couple of iterations, then analyze the results to find the average speed.

3. Exclude one file at a time from the process to find pain points.

a. Exclude an individual .har file from the CSV and run the command.

b. Create one CSV file per report, each excluding that report while keeping all of the other reports. Run the command for each file and analyze the results (see the sketch after this list).

c. Observe which reports have the greatest impact on the performance of the remaining reports (.har files).
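As a rough sketch, this exclusion loop could be automated in a Windows batch file along the following lines. It assumes you are already logged in via ‘epmautomate login’, that one replay CSV per excluded report has been prepared in a hypothetical C:\Replay\Exclusions folder, and that the results file is written to the path shown (adjust for your installation):

@echo off
rem Each CSV in the Exclusions folder omits exactly one report's .har entry.
rem Run the replay once per CSV and keep a separate copy of each results file.
for %%F in (C:\Replay\Exclusions\*.csv) do (
    echo Replaying without %%~nF ...
    call epmautomate replay %%F duration=5
    move "C:\Oracle\EPM Automate\bin\ReplayResults.csv" "C:\Replay\Results\%%~nF_results.csv"
)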

For example, we ran the command with a sample of seven reports included in the CSV file, with duration=5. It was important that each and every report opened in under 5 minutes, no matter the stress on the system. When we ran the batch file and analyzed the results, 2 of the reports were unable to go through one full iteration and open. To find the cause of the slowdown, we ran the batch seven times, each time excluding one report and keeping the other six. We found that those same 2 reports did not open in the allotted 5 minutes unless the “Market Summary” report was excluded. We concluded that the Market Summary report was the major pain point and that structural and metadata changes were needed to keep report opening under the 5-minute maximum.

Below is an analysis of the sample of reports we tested; the data gathered is based on averages. Some of the reports are inherently larger, which makes comparing speeds across different reports unreasonable, so in the table each report's speed is compared to itself under different conditions. On the heat map, green represents the conditions under which a report runs fastest. Typically, a report runs its fastest when it runs by itself in PBCS. The next greenest area is when report 4 (Market_Summary) is excluded.

The replay tool allowed us to evaluate the performance of reporting in PBCS and identify which report was causing the application to slow down. Because the tool allowed us to isolate issues and fix them, we were able to show the client tangible analysis that every report opens, and opens in under 5 minutes, no matter the number of users and the related stress on the system. Additionally, the analysis shows the new PBCS system processing faster, on average, than the old On-Premise system.

With just a few prerequisites, the replay command allows companies, like the one we discussed, to easily test the performance and speed of their EPM Cloud application. Before going live, when hundreds of users will be actively using the system, any potential issues can be troubleshot and fixed to ensure smooth, high-speed functionality no matter the load stressing the application.

Blog post by Henry Rosenberg and Bathool Syed of Key Performance Ideas.


 
