Blog series by Deanna Sunde, Oracle ACE, of Key Performance Ideas.
Subscribe to our blog to receive the next blog automatically (see left-hand column to enter your email)!
Want to hear Deanna live? Join her for her webinar series!
November 14th, 2017 | Part 1: The first part of this two-part series will cover importing and exporting metadata and related use cases (including migrating from on-premises), and importing and exporting data with jobs (in both Essbase and Planning formats), including exporting text accounts and Smart List values. The discussion will also include exporting data with the DATAEXPORT command in a business rule.
December 14th, 2017 | Part 2: The second part of this two-part series will cover loading data with Data Management, and moving data between databases using Smart Push, Data Synchronization, and xref/xwrite, including related use cases. Also discussed will be migrations between Test and Prod, backups, and disaster recovery.
Today starts a series of blogs I am writing on metadata and data integration options for Oracle EPM Cloud implementations. The blogs will cover everything from Data Management to Smart Push to data exports using business rules. Also included will be descriptions of use cases and advanced topics such as drill-through to details loaded through Data Management. I will also point out which methods can be automated with EPM Automate and which methods apply to which applications: PBCS (Planning and Budgeting Cloud Service), EPBCS (Enterprise Planning and Budgeting Cloud Service), and FCCS (Financial Consolidation and Close Cloud Service).
Exporting and Importing Metadata
The first blog in the series is on metadata exports and imports. Loading metadata is a prerequisite for loading data and for any automation, so this seems like a good place to start. In the examples below, I am using the sample PBCS application called Vision. While my examples are for PBCS, they also apply to EPBCS and FCCS.
Editing Existing Metadata
This section is not intended to describe every property and its meaning, nor all the options for adding members, moving members, and so on, but it will point you in the right direction. You can manually edit existing metadata by navigating to “Create and Manage” > “Dimensions”. This looks very much like on-premises Planning applications. I first selected the Product dimension, then drilled down on the hierarchy. To edit a member, click the member you wish to edit and then click the pencil icon. The member’s properties are displayed; make any changes you need and then click the “Save” button.
Running a Refresh
Just as with on-premises Planning, after updating metadata it is necessary to run a refresh to push the metadata from PBCS/EPBCS to the Essbase database. There are a few ways to do this.
Alternatively, you can select the “Refresh Database” icon.
Note that the above methods do not provide any options; the next method does. Navigate to “Application” > “Overview” and, under “Actions”, select “Refresh Database”.
Click “Create”, choose your options, and then click the “Refresh Database” button. You can also save this as a job, which can then be used when automating processes with EPM Automate.
Editing Metadata with Smart View
You can also use the Excel Smart View add-in to edit metadata. This add-in allows editing of the metadata for both on premise and cloud implementations. (This will be a subject for another blog.)
I’m discussing exporting metadata before importing, as exporting is a great way to see the format the file needs to be in when you import metadata. With PBCS and EPBCS you can export the metadata in two ways:
- Export Metadata Job
- Snapshot (Application Manager, discussed later in this blog)
Export Metadata Job
To export the metadata, navigate to “Application” > “Overview”. Click on “Dimensions”, then click the “Export” button, then click the “Create” button. For the location, select “Outbox”. This sends the file to the Planning outbox, from which it can be downloaded to your computer. Select the “Product” dimension and a delimiter of “Other”, and enter a pipe (|) in the box to specify the delimiter. I prefer a pipe delimiter because formulas, for example, may contain commas and semicolons, so those are typically poor choices. Since we selected the Outbox, we need to click the “Save as Job” button.
Give this job a name.
If you were running this as “Local” (i.e., sent directly to your computer), you would have clicked the “Export” button instead.
Click the “Save” button, then click the “Close” button. To run this job immediately, click the wheel and then select “Submit”.
Click the “Close” button, then navigate to “Application” > “Jobs” to view the completion status.
When the job is complete, navigate to “Application” > “Overview” and select “Inbox/Outbox Explorer” from the “Actions” menu.
Select the file and then click the wheel and select “Download File”. You can choose to open the file or to save the file. If you choose to save the file, it will be downloaded into the Downloads folder on your computer. The .txt file will be within a .zip file.
To schedule this export metadata job, navigate to Application > Jobs. Click on the “Schedule Jobs” button. Under “What type of job is this?” select “Export Metadata”. Then make your other selections.
You can then copy this file elsewhere to edit it. Be careful which program you use for editing: Excel, for example, can corrupt the file by removing leading zeros. The file may also be too large to open in Notepad; other text editors, such as TextPad or Notepad++, handle larger files more efficiently.
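If you work with the download programmatically, you avoid those corruption risks. The sketch below simulates the downloaded export (a .txt file inside a .zip; the file name and contents are illustrative) and shows that reading the member as plain text keeps leading zeros intact, where Excel would strip them.

```python
import io
import zipfile

# Simulate the downloaded export: a .txt file inside a .zip (the file
# name and member values here are illustrative only).
download = io.BytesIO()
with zipfile.ZipFile(download, "w") as zf:
    zf.writestr("Product.txt", "Product|Parent\n00150|Product\n")

# Extract the member and read it as plain text; unlike opening the
# file in Excel, this preserves leading zeros on member names.
with zipfile.ZipFile(download) as zf:
    text = zf.read("Product.txt").decode("utf-8")

first_member = text.splitlines()[1].split("|")[0]
print(first_member)  # "00150" - leading zeros preserved
```

In a real script you would point `zipfile.ZipFile` at the downloaded file path instead of the in-memory buffer.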
Note that one difference between on-premises and the Cloud is that the header for the first column (the Child column) is the dimension name, “Product”, instead of “Child”.
When you are working with the metadata .txt files, note that the columns can be in any order and not all columns are required. For example, if you are building a Customer dimension and all the members add up the hierarchy so that all the aggregation operators are +, then it is not necessary to include the aggregation columns, as that is the default.
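To make the format concrete, here is a minimal sketch of a pipe-delimited metadata file for the Product dimension, parsed with Python’s csv module. The member names and property columns are illustrative only; note the first column header is the dimension name (“Product”) rather than “Child”, and only the columns we need are included.

```python
import csv
import io

# A hypothetical pipe-delimited metadata export for "Product".
# Columns can appear in any order, and omitted columns take defaults.
metadata_txt = """Product|Parent|Alias: Default|Data Storage
P_100|Product|Computer Equipment|Store
P_TP1|P_100|Desktops|Store
P_TP2|P_100|Notebooks|Store
"""

rows = list(csv.DictReader(io.StringIO(metadata_txt), delimiter="|"))
for row in rows:
    print(row["Product"], "->", row["Parent"])
```

The same approach (a `DictReader` with `delimiter="|"`) works for validating a file before you upload it for import.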
Import Metadata Job
The Import Metadata Job is very similar to running an export. First we will upload our file into the Inbox. Navigate to “Application” > “Overview” and under “Actions”, select “Inbox/Outbox Explorer”.
Click the “Upload” button and browse to your .txt file. You may wish to click on the “Overwrite” checkbox if you have uploaded the file before. Then click the “Upload File” button.
You will then see a successful completion message. Click the “OK” button.
You will then see your file listed in the Inbox. Click the “Close” button.
In order to import the file, we will first create the job. Navigate to “Application” > “Overview”. Click on the “Import” button and then click on the “Create” button.
For the location, select “Inbox”. Type the name of the text file next to the Product dimension. For the File Type, select “Other” and type in the pipe, which is our delimiter. If you wish to clear all the members in the dimension before loading the file, check that checkbox.
Then click the “Save as Job” button. Give the job a name. If you wish to refresh the database upon successful completion, check the checkbox. I am choosing that option here as it saves a step. Click the “Save” button.
To run the job, navigate to “Application” > “Jobs”. Click the “Schedule Jobs” button and select “Import Metadata” and “Run Now”. Click the “Next” button.
Select your job, click the “Next” button, and then click the “Finish” button.
The job will then run and it will be in “Processing” state. Notice that there are two jobs in processing state – the import job and the refresh database job.
Click the “Refresh” button to update the status. When complete, the jobs will look like:
Now that we have created an import job, EPM Automate can be used to automate the loading of files sourced from elsewhere, such as ERP systems like Oracle Financials Cloud, Oracle E-Business Suite, SAP, etc. Key Performance Ideas has blogs on using EPM Automate to load metadata.
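As a sketch of what that automation might look like, the script below prints the EPM Automate command sequence for uploading a file and running the saved import job. The job name, file name, URL, and password file are hypothetical; the verbs (login, uploadfile, importmetadata, logout) are EPM Automate commands, but verify the exact syntax against your service version. If the saved job was configured to refresh the database on completion, no separate refresh step is needed.

```python
import subprocess

def run(cmd, dry_run=True):
    """Print the EPM Automate command; execute it when dry_run is False."""
    print(" ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)

# Hypothetical job name, file name, credentials, and URL.
steps = [
    ["epmautomate", "login", "serviceadmin", "password.epw",
     "https://example-pbcs.oraclecloud.com"],
    ["epmautomate", "uploadfile", "Product.txt"],
    ["epmautomate", "importmetadata", "ImportProductMetadata", "Product.txt"],
    ["epmautomate", "logout"],
]
for step in steps:
    run(step)  # dry run: only prints the commands
```

In practice this would be wrapped in a scheduled batch or shell script, with the password kept in an encrypted .epw file generated by EPM Automate.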
Snapshots
The Snapshot process in EPM Cloud uses what is called “Application Manager” in the Cloud. This is essentially the same as LCM (Lifecycle Management) for on-premises Planning implementations. I recommend the previous method over Snapshots, because its file does not include the header information, so it is easier to work with, especially for new implementations.
For migrations from on-premises, you would export using LCM, edit the file as needed, and then either upload it as a Snapshot to load it, or convert it to the Import Metadata Job format and upload it that way.
Navigate to “Application” > “Migration”.
Click on “Snapshots” and you will see last night’s backup that Oracle automatically takes during the application’s maintenance window, as well as any other Snapshots that were created.
Expand the Artifact Snapshot and drill down on “HP-Vision” under “Global Artifacts” to see a list of the Standard Dimensions.
The Snapshot can be downloaded by clicking the wheel and selecting “Download”. This downloads the entire Snapshot, including all artifacts (metadata, forms, Essbase data, etc.).
To create a download that is more manageable, I recommend that you create a new Snapshot of just the dimension(s) you wish to work with. You can do that by clicking on “Categories”, then drill down and select what you wish to export. Create the Snapshot (I chose just the Product dimension) and then download the file using “Save File” to save into the Downloads folder on your computer. The Snapshot is downloaded into a Zip file. If you go into the folder structure using Windows Explorer, you will see the .csv file for the dimension exported.
The top of the file has heading information and then rows like below.
You can then copy this file elsewhere and open it to make any necessary edits. The edited file can then be put back into the same folder structure, zipped, and uploaded to Snapshots using the “Upload” button. After uploading, it can be loaded into PBCS by clicking the wheel and selecting “Import”.
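The edit-and-rezip step can also be scripted. The sketch below builds a stand-in Snapshot zip (the internal path is only illustrative of the folder structure you see in Windows Explorer), edits the dimension .csv, and writes a new zip that preserves the same internal structure for upload back to Snapshots.

```python
import io
import zipfile

# Stand-in for a downloaded Snapshot zip; the internal path and csv
# contents are illustrative only.
member = "HP-Vision/resource/Global Artifacts/Common Dimensions/Product.csv"
original = io.BytesIO()
with zipfile.ZipFile(original, "w") as zf:
    zf.writestr(member, "Product,Parent\nP_100,Product\n")

# Read the dimension .csv out of the zip and make an edit.
with zipfile.ZipFile(original) as zf:
    csv_text = zf.read(member).decode("utf-8")
csv_text = csv_text.replace("P_100", "P_150")  # an illustrative edit

# Re-zip with the same internal folder structure, ready for upload.
edited = io.BytesIO()
with zipfile.ZipFile(edited, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr(member, csv_text)
```

With real files you would replace the in-memory buffers with the downloaded zip path and an output path; the key point is keeping the member path identical so the Snapshot import still finds the artifact.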
As with the .txt files, when you are working with the metadata .csv files, the columns can be in any order and not all columns are required; columns that are omitted take their default values.
Next Blog of the Series
The next blog of this series will be on some use cases for using the metadata import/export for purposes other than straight metadata loads. To receive the next blog, simply subscribe to our blog (see left-hand column to enter your email). You will receive an email with the blog the day I release it! Stay tuned! Also, if there are any specific questions you have or items you would like to see covered in this blog series, just shoot me an email at firstname.lastname@example.org.