
PBCS Backups with PowerShell: Part 1


I’ve spent a lot of time lately on homelab topics, so I thought I would take a break and put together a post on an EPM topic!  Today we’ll be talking about PBCS backups.

Why Do We Need PBCS Backups?

You might be thinking, “Why do I need PBCS backups when Oracle does that for me?”  That’s an excellent question.  The problem is that while Oracle does perform nightly backups of PBCS, they overwrite that backup each night, so at any given time you only have a single backup.  To make things even worse, Oracle imposes a 150GB size limit on your PBCS instance.  This means that even if we kept multiple backups on our pod, we would eventually start losing them to the data retention policies.

So what do we do?  We generate a new backup every night and download it to a local server.  The good news is that you almost certainly have a local server already running EPM Automate.  EPM Automate is the automation tool for Oracle’s EPM Cloud suite.  You can use EPM Automate to load data, execute calculations, update meta-data, and…perform backups.  So, we’ve established that we likely need more than a single night of backups, but how many do we need?  This will depend on a few things like the size of your applications and the frequency of change.  For our example, we will keep 30 days of daily backups.

Batch vs. PowerShell

Now that we have determined what we are backing up and how many backups we need to keep, we need to move on to actually performing the backups.  With EPM Automate, we have two commonly used options.  First, we have the old-school method of a batch file.  Batch files are great because they just work, and you can find a ton of information on the web about how to do things.  Batch files are, however, very limited in their ability to do things like e-mail notifications and remote calls without external tools.  That brings us to PowerShell.  PowerShell is essentially batch scripting with the full set of .NET programming capabilities, along with other goodies that go beyond .NET.  What does that mean exactly?  It means there is very little I can't do in PowerShell.
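As a small taste of why that matters, here is a minimal notification sketch of the kind we will flesh out in Part 2.  The SMTP server and addresses are placeholders, so treat this as an illustration rather than a finished solution:

#Hypothetical notification example - replace the SMTP server and addresses with your own
Send-MailMessage -SmtpServer "smtp.yourcompany.com" `
                 -From "epmautomation@yourcompany.com" `
                 -To "admin@yourcompany.com" `
                 -Subject "PBCS Backup Completed" `
                 -Body "The nightly PBCS backup finished successfully."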

Directory Configuration

Before we configure anything, we need to get a folder structure put together to support scripting, logging, and the actual backup files.  You may already have a structure for your automation processes, but for our example, it will look something like this:

  • C:\Oracle
    • C:\Oracle\Automation
      • C:\Oracle\Automation\Backup
      • C:\Oracle\Automation\Log
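
If you need to create this structure from scratch, a short PowerShell snippet along these lines will do it; the paths simply match the example above, so adjust them to your own standards:

#Create the example folder structure if it does not already exist
$Folders = "C:\Oracle\Automation", "C:\Oracle\Automation\Backup", "C:\Oracle\Automation\Log"
foreach ($Folder in $Folders) {
    if (-not (Test-Path $Folder)) {
        New-Item -ItemType Directory -Path $Folder | Out-Null
    }
}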

EPM Automate Configuration

EPM Automate is a great tool, but we do need to perform a little bit of setup to get going.  For instance, while EPM Automate supports plain text passwords, that wouldn’t pass muster with most IT security groups.  So before we get into PowerShell, let’s encrypt our password.  This is a fairly easy process.  We’ll start up a command prompt and change directory to our EPM Automate bin directory:

cd\
cd Oracle\EPM_Automate\bin

Once we are in the right directory, we can encrypt our password:

epmautomate.bat encrypt YourPasswordGoesHere PickYourKey c:\Oracle\Automation\password.epw

Here are the parameters:

  • Command – the command EPM Automate will execute
    • encrypt
  • Password – the password of the account you plan to use
    • YourPasswordGoesHere
  • Key – you specify anything you want to use to encrypt the password
    • PickYourKey
  • Password File – The full path and file name of the password file that will be generated
    • c:\Oracle\Automation\password.epw
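
If you would rather run this one-time step from PowerShell instead of a command prompt, the same call can be made there.  This is just a sketch using the placeholder values from above:

#One-time setup: encrypt the EPM Automate password (placeholder values from the example above)
$EpmAutomatePath = "C:\Oracle\EPM_Automate\bin\epmautomate.bat"
&$EpmAutomatePath encrypt "YourPasswordGoesHere" "PickYourKey" "c:\Oracle\Automation\password.epw"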

Once we execute the command, we should have our password file so that we can continue.  It should look something like this:


Backing Up PBCS with PowerShell

For the first part of this mini-series, we'll stick with a basic backup that deletes older backups.  In the next part of the series, we'll go deeper into error handling and notifications.  Here's the code.

Path Variables

#Path Variables
$EpmAutomatePath = "C:\Oracle\EPM_Automate\bin\epmautomate.bat"
$AutomationPath = "C:\Oracle\Automation"
$LogPath = "C:\Oracle\Automation\Log"
$BackupPath = "C:\Oracle\Automation\Backup"

We’ll start by defining our path variables.  This will include paths to EPM Automate, our main automation directory, our log path, and our backup path.

Date Variables

#Date Variables
$DaysToKeep = "-30"
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysToKeep)
$TimeStamp = Get-Date -format "yyyyMMddHHmm"
$LogFileName = "Backup" + $TimeStamp + ".log"

Next we’ll define all of our data related variables.  This includes our days to keep (which is negative on purpose as we are going back in time), our current date, the math that gets us back to our deletion period, a timestamp that will be used for various things, and finally our log file name based on that timestamp.

PBCS Variables

#PBCS Variables
$PBCSdomain = "yourdomain"
$PBCSurl = "https://usaadmin-test-yourdomain.pbcs.us2.oraclecloud.com"
$PBCSuser = "yourusername"
$PBCSpass = "c:\Oracle\Automation\password.epw"

Now we need to set our PBCS variables.  This will include our domain, the URL to our instance of PBCS, the username we’ll use to log in, and the path to the password file that we just finished generating.

Snapshot Variables

#Snapshot Variables
$PBCSExportName = "Artifact Snapshot"
$PBCSExportDownloadName = $PBCSExportName + ".zip"
$PBCSExportRename = $PBCSExportName + $TimeStamp + ".zip"

We’re nearing the end of variables as we define our snapshot specific variables.  These variables will tell us the name of our export, the name of the file that we are downloading based on that name, and the new name of our snapshot that will include our timestamp.

Start Logging

#Start Logging
Start-Transcript -Path "$LogPath\$LogFileName"

I like to log everything so that if something does go wrong, we have a chance to figure it out after the fact.  This uses the combination of our log path and log file name variables.

Log Into PBCS

#Log into PBCS
Write-Host ([System.String]::Format("Login to source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "login" $PBCSuser $PBCSpass $PBCSurl $PBCSdomain

We can finally log into PBCS!  We’ll start by displaying our action and the current system time.  This way we can see how long things take when we look at the log file.  We’ll then issue the login command using all of our variables.
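
We will get into proper error handling in Part 2, but it is worth knowing now that EPM Automate sets a non-zero exit code when a command fails.  A minimal, hedged check right after the login might look like this:

#Minimal sketch: stop the script if the login failed
if ($LASTEXITCODE -ne 0) {
    Write-Host "EPM Automate login failed with exit code $LASTEXITCODE"
    Stop-Transcript
    exit $LASTEXITCODE
}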

Create the Snapshot

#Create PBCS snapshot
Write-Host ([System.String]::Format("Export snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath exportsnapshot $PBCSExportName

Again we’ll display our action and current system time.  We then kick off the snapshot process.  We do this because we want to ensure that we have the most recent snapshot for our archiving purposes.

Download the Snapshot

#Download PBCS snapshot
Write-Host ([System.String]::Format("Download snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath downloadfile $PBCSExportName

Once the snapshot has been created, we’ll move on to downloading the snapshot after we display our action and current system time.

Archive the Snapshot

#Rename the file using the timestamp and move the file to the backup path
Write-Host ([System.String]::Format("Rename downloaded file: {0}", [System.DateTime]::Now))
Move-Item "$AutomationPath\$PBCSExportDownloadName" "$BackupPath\$PBCSExportRename"

Once the file has been downloaded, we can then archive the snapshot to our backup folder as we rename the file.
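
If you want a little extra safety here, you can confirm that the download actually produced a file before attempting the move.  A quick sketch:

#Optional safety check before archiving the snapshot
if (Test-Path "$AutomationPath\$PBCSExportDownloadName") {
    Move-Item "$AutomationPath\$PBCSExportDownloadName" "$BackupPath\$PBCSExportRename"
} else {
    Write-Host "Downloaded snapshot not found in $AutomationPath"
}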

Delete Old Snapshots

#Delete snapshots older than $DaysToKeep
Write-Host ([System.String]::Format("Delete old snapshots: {0}", [System.DateTime]::Now))
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item

Now that we have everything archived, we just need to delete anything older than our DateToDelete variable.
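
If an automated delete makes you nervous, the same pipeline can be dry-run first with -WhatIf, which reports what would be removed without actually deleting anything:

#Preview which backups would be deleted without actually removing them
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item -WhatIf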

Log Out of PBCS

#Log out of PBCS
Write-Host ([System.String]::Format("Logout of source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "logout"

We’re almost done and we can now log out of PBCS.

Stop Logging

#Stop Logging
Stop-Transcript

Now that we have completed our process, we'll stop logging.

The Whole Shebang

#Path Variables
$EpmAutomatePath = "C:\Oracle\EPM_Automate\bin\epmautomate.bat"
$AutomationPath = "C:\Oracle\Automation"
$LogPath = "C:\Oracle\Automation\Log"
$BackupPath = "C:\Oracle\Automation\Backup"

#Date Variables
$DaysToKeep = -30
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysToKeep)
$TimeStamp = Get-Date -format "yyyyMMddHHmm"
$LogFileName = "Backup" + $TimeStamp + ".log"

#PBCS Variables
$PBCSdomain = "yourdomain"
$PBCSurl = "https://usaadmin-test-yourdomain.pbcs.us2.oraclecloud.com"
$PBCSuser = "yourusername"
$PBCSpass = "c:\Oracle\Automation\password.epw"

#Snapshot Variables
$PBCSExportName = "Artifact Snapshot"
$PBCSExportDownloadName = $PBCSExportName + ".zip"
$PBCSExportRename = $PBCSExportName + $TimeStamp + ".zip"

#Start Logging
Start-Transcript -Path "$LogPath\$LogFileName"

#Log into PBCS
Write-Host ([System.String]::Format("Login to source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "login" $PBCSuser $PBCSpass $PBCSurl $PBCSdomain

#Create PBCS snapshot
Write-Host ([System.String]::Format("Export snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath exportsnapshot $PBCSExportName

#Download PBCS snapshot
Write-Host ([System.String]::Format("Download snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath downloadfile $PBCSExportName

#Rename the file using the timestamp and move the file to the backup path
Write-Host ([System.String]::Format("Rename downloaded file: {0}", [System.DateTime]::Now))
Move-Item "$AutomationPath\$PBCSExportDownloadName" "$BackupPath\$PBCSExportRename"

#Delete snapshots older than $DaysToKeep
Write-Host ([System.String]::Format("Delete old snapshots: {0}", [System.DateTime]::Now))
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item

#Log out of PBCS
Write-Host ([System.String]::Format("Logout of source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "logout"

#Stop Logging
Stop-Transcript

The Results

Once you execute the PowerShell script, you should see something like this:

Conclusion

There we have it…a full process for backing up your PBCS instance.  The last step would be to set up a scheduled task that executes once a day, avoiding your maintenance window.
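
For reference, here is one way to register that daily task from PowerShell.  The script path and the 1:00 AM start time are assumptions for illustration, so adjust them to land outside your own maintenance window:

#Hypothetical example: run the backup script every day at 1:00 AM (adjust path and time to suit)
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Oracle\Automation\PBCSBackup.ps1"
$Trigger = New-ScheduledTaskTrigger -Daily -At "1:00AM"
Register-ScheduledTask -TaskName "PBCS Nightly Backup" -Action $Action -Trigger $Trigger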

The post PBCS Backups with PowerShell: Part 1 appeared first on EPM Marshall.


EPM Cloud Updates – September 2018 – EPBCS, FCCS, PCMCS, ARCS, EPRCS & EDMCS @_Jun_Zhang_

Oracle releases the application updates documentation on new features and fixed issues from the Applications Release Readiness site. From there, you will be able to check the updates for Customer Experience, Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Enterprise Performance Management. A quick recap of the EPM applications: Planning and Budgeting Cloud, Enterprise Planning Cloud...

More Thoughts on Essbase Hybrid Aggregation

Join the 2019 ODTUG Board of Directors - Call for Nominations Now Open

The 2019 Board of Directors nominations are now open! Read this blog to learn more about the roles and responsibilities and to submit a nomination.

HFM - Planning for the upgrade @ericerikson

Everyone should have heard by now, but just in case you haven't: Oracle is planning the release of HFM (actually, all of the on-premises EPM products) version 11.2 in the spring.

As Oracle has stated, this will be the last release of HFM that will require a full software installation (to their knowledge; safe harbor statement applies). Updates after this will be delivered via patch set updates. To do this installation, you cannot (or should not) just update the software on the existing servers. The software will need to be installed on a clean server, the database copied over and upgraded, interfaces tested, and then consolidations and data verified. A parallel close would be a prudent step.

To plan for this, think about budgeting for the new servers now. Most companies are in the middle of putting together their annual budget for next year, so now is the perfect time to plan this upgrade. Unless some design changes are incorporated at the same time, no consulting help should be needed unless it is to help with the infrastructure setup. If you are on an older version or you're having to do work-arounds to make things work with your current design/setup, then you should think about taking this opportunity to do an "application remodel."

The other option would be to plan a migration to FCCS, Oracle's cloud-based consolidation and close solution. In this case you would be budgeting for an operating expense rather than a capital expense, and you would not have to budget for the annual maintenance on the on-premises software. Migrating to FCCS will go much more easily with some consulting help, but with the recently released HFM to FCCS migration tool, it goes more smoothly than before.



EPM Cloud Quick Tip: Self-Service Domain Relocation Tool @opal_epm

About 2 months ago, I used the (somewhat) new self-service domain relocation tool to migrate one EPM Cloud instance to another domain. However, this was not successful, and this is the quick tip that I wanted to share with this audience. I used the self-service domain relocation tool, which is available through the My Account … Continue reading EPM Cloud Quick Tip: Self-Service Domain Relocation Tool

ODTUG Kscope Session Highlights - Part 1

Curious about the content you'll see at ODTUG Kscope19? As we look ahead to opening abstract submissions for ODTUG Kscope19, we would like to share some of the content highlights from ODTUG Kscope18. The following presentation recordings demonstrate the high-caliber content you'll see at Kscope19.

Upcoming ODTUG Webinars - September

Our webinar calendar is constantly evolving! Check back frequently as more webinars will be added to the schedule.

Sad News for Essbase



While I try to be upbeat in my posts, today I'm afraid I can't. Oracle and the Essbase world lost a great friend. I just learned that Carol Crider from Oracle support has passed away.

I first met Carol when Hyperion called me to try to save a Planning implementation. I was given Carol as my technical resource to help solve issues. She was kind and patient, gave me great recommendations, and followed up quickly on issues I was having.  She knew the internal workings of Essbase better than any support person I knew and was always willing to help.  She was the Essbase Goddess and would correct me when I misspoke.

At the KScope in Long Beach, I was doing a session on Essbase optimization and she walked in and sat in the front row. I got so flustered, afraid of getting corrected, that I fumbled through content I thought I knew by heart.  She and I would later laugh about it. I think it amused her that she was so highly regarded.

Fast forward to this year's KScope in Florida. I saw her and she was her chipper self, concerned with the reputation support was getting and talking about how to improve it.  She said she was doing well and we had a few nice talks.

To hear the news was really a shock.  I have information on the service this Saturday (Sep 8th). I don't want to post it publicly, but if you want it, email me at gschwartzberg@interrel.com.  Correction: per Carol's wishes, there will be no service, and the family requests that no flowers be sent.  If I hear about a charity, I will update this post later.


I'm not sure what you will be doing Carol as there are no bugs in software in Heaven. Take a long needed rest perhaps?
Carol Crider
Please know you will be missed

How to Connect Power BI to OBIEE @usanalytics


Microsoft’s Power BI is a powerful and popular self-service BI application. It allows reports to be developed at an incredible pace without compromising visualization flexibility or design, in addition to being many times more affordable than traditional Enterprise BI applications.

However, as great as Power BI is, it suffers from some of the same problems that all self-service BI applications do. Chief among them is the accuracy and security of the reporting data. Lucky for us, we still use our Enterprise BI application, which guarantees accuracy and security through a governed data warehouse. Leveraging our Enterprise BI application as a data source, while developing reports with Power BI, we can create a secure, accurate, and reliable workflow.

Discoverer to OBIEE Migration: 3 Methods You Can Use @usanalytics


Is your organization still using Oracle Discoverer? By now, you’re probably aware of the end of extended support for Discoverer, and that we are now well outside the extended support window. You could still be using Discoverer for several reasons, but the fact remains, having your business rely on a tool which is no longer supported is expensive, risky, and dangerous.

The migration paths aren’t exactly clear, though. Luckily, there’s an upgrade path to Oracle Business Intelligence (OBIEE) or Oracle Analytics Cloud (OAC) that is simple, fast, and affordable. If this sounds like exactly what you need, then feel free to skip down to the bottom. For those of you not convinced of the importance of migration, read on...

EPM Cloud Data Management – Incremental File Adapter

New functionality has been added to Data Management in the EPM Cloud 18.09 release, and in this post I am going to cover the incremental file adapter.

The monthly cloud release documentation contains the following summary:

“Data Management now includes an incremental file adapter to optimize the data load process. The new Incremental adapter compares the source data file with a prior version of the source data file and identifies new or changed records and loads only this data set. The user has the option to sort the input source data file before making the comparison or they can provide a pre-sorted file for better performance.”

So basically, if you are loading files which contain a full data set, then before this release you would need to replace the existing data in Data Management and reload, which, depending on the size of the source file, can cause performance issues. With the new incremental file adapter, the previous file is compared with the latest file and only the differences are loaded, so in theory performance should be much improved.

To get a good understanding of the adapter it is worth going through a simple example.

There are some important points that should be considered when working with the adapter:
  • The source data file must be a delimited data file.
  • Data files used must contain a one-line header, which describes the delimited columns.
  • Both numeric and non-numeric data can be loaded.
  • Any deleted records between the two files are ignored. In this case, you have to handle the deleted records manually.
  • If the file is missing (or you change the last ID to a non-existent run), the load completes with an error.
  • Sort options determine the level of performance using this feature. Sorting increases the processing time. Pre-sorting the file makes the process faster.
  • Only single period data loads are supported for an incremental load. Multi-period loads are not supported.
  • Drill down is not supported for incremental loads since incremental files are loaded in Replace mode and only the last version of the file comparison is present in the staging table.
  • Copies of the source data file are archived for future comparison. Only the last 5 versions are retained. Files are retained for a maximum of 60 days. If no incremental load is performed for more than 60 days, then set the Last Process ID to 0 and perform the load.
Some of the above points will become clearer with the following example, so let’s get started.

The file I am going to begin with only contains a few records as this is just meant to be a simple example.


The file satisfies the first two points, it contains a header record which describes the columns, and it is delimited.

I first thought that the incremental file adapter would be configured from the source system area in Data Management, but this is not the case; it is created as a target application.

You need to go to target application and add a new data source.


Once “Data Source” has been selected, a new window will open where you can select “Incremental File” as the source system.


This will open the file select window where you can select an existing file or upload one.


The filename will be the basis for the name of the target application


If you intend to use multiple files with the same naming convention, then you should use the prefix option to make the target applications unique.

Selecting the file at this point is only to define the dimension details in the application details.


The dimension names are automatically created from the header record in the source file.

The steps to create an integration are the same as you would normally go through, so I am only going to highlight some of them.

In the import format creation, the newly created application should be available as a source.


The source columns of the file are then mapped to the target application.


The location set up is no different.


On to the data load rule which is slightly different than a standard file-based load, the file is selected in the source filters section.


There are two other properties in the source filters, the incremental processing options has the following possible values:

Do not sort source file - the source file is compared as provided. This option assumes that the source file is generated in the same order each time. In this case, the system performs a file comparison and then extracts the new and changed records. This option makes the incremental file load perform faster.

Sort source file - the source file is sorted before performing the file comparison for changes. The sorted file is then compared to the prior sorted version of the file. Sorting a large file consumes a lot of system resources and performs more slowly.

For this example, I am going to go with not sorting the source file, not that it would make much difference in my case because the file is so small. Once I get the opportunity I would like to test the performance of having to sort a large file.

Next is the last process ID. The first time you load a file this should be set to 0; when the load is run again, the ID will be updated to match the process ID of that load.

If there are no differences between the latest file and the previous file or the source file doesn’t exist, the ID will be kept as the last successful load ID.

To reload all the data the process ID should be set back to 0 and the file defined in the source filter will be considered the baseline file.

If you are going to change the sorting option, then to avoid data issues you will need to reset the process ID back to 0 and reload.


After selecting the source file, some simple pass through mappings are added, which means the file can be loaded.


The data load acts like any other; it is only when you take a look at the log that you get an idea of what is going on.

INFO  [AIF]: Executing the following script: /u02/Oracle/Middleware/EPMSystem11R1/products/FinancialDataQuality/bin/plugin/IncrementalFile.py
INFO  [AIF]: ************************************************************************************
INFO  [AIF]: *      IncrementalFile.py Started for LoadID: 95
INFO  [AIF]: ************************************************************************************
DEBUG [AIF]: loadID: 95
DEBUG [AIF]: ruleID: 14
DEBUG [AIF]: LAST_PROCESS_ID: 0
DEBUG [AIF]: INCREMENTAL_PROCESSING_OPTION: NO_SORT_NO_MISSING
DEBUG [AIF]: Source file: /u03/inbox/inbox/incremental.csv
DEBUG [AIF]: Data file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_95.dat
DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: impGroupKey: MT_Incremental_IF
DEBUG [AIF]: Import format delimiter = ,
DEBUG [AIF]: Instantiated FileDiffUtils.
DEBUG [AIF]: Full load.  No incremental file.
DEBUG [AIF]: dataFileName: Vision_SOURCE_95.dat

It looks like a Python/Jython script handles the incremental files and differences. The file is moved into a source file directory which is numbered by the load rule ID, in this case 14, and the file name format is <TARGET_APP>_SOURCE_<LOADID>.dat

As the last process ID is 0, it is considered a full load, so no incremental differences are required and the file is loaded as-is.

If you go back into the load rule you can see the last process ID has been updated to reflect the load ID of the last import.


Now, time to add additional records to the same file. I selected not to sort the source file in the load rule, so I made sure the records were sorted.


Run the process again and this time in the logs you can see that a difference file has been created and loaded; the format of the difference file is <TARGET_APP>_DIFF_<LOADID>.dat

DEBUG [AIF]: Source file: /u03/inbox/inbox/incremental.csv
DEBUG [AIF]: Data file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_96.dat
DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: impGroupKey: MT_Incremental_IF
DEBUG [AIF]: Import format delimiter = ,
DEBUG [AIF]: Instantiated FileDiffUtils.
DEBUG [AIF]: Diff file: /u03/inbox/data/Vision_DIFF_96.dat
DEBUG [AIF]: Header line: Account,Entity,Product,Amount
DEBUG [AIF]: dataFileName: Vision_DIFF_96.dat

Back in the load rule the last process ID has been updated.


In the workbench I spotted an issue.


There was one record from the previous load which, in theory, should not have been loaded again.

I was thinking maybe it was the way I sorted the file, so I decided to run the process again but this time set the processing option to sort the file.


I reset the last process ID back to zero and uploaded my original file and ran the process again.


In the logs you can see entries to show that the source file is being sorted.

DEBUG [AIF]: Sorting data file started.
DEBUG [AIF]: Sorting data file completed.

Uploaded the incremental file and ran the process.


In the logs you can see the source file is being sorted and a difference file created.

DEBUG [AIF]: Sorting data file started.
DEBUG [AIF]: Sorting data file completed.
DEBUG [AIF]: Header line: Account,Entity,Product,Amount
DEBUG [AIF]: dataFileName: Vision_DIFF_98.dat

I was hoping that the issue would be fixed and the record from the full data load would not be included in the incremental load.


Unfortunately, it is still loading the record, so unless I am missing something there is a problem with the script that is working out the differences.

I thought I would have a look at the files that have been generated by Data Management, the files can be downloaded using EPM Automate or REST.


The files were sorted in exactly the same way as in my original testing so at least I know I got that correct, though the difference file is not correct as it contains a record from the first full load.


I then found a working combination: I created the incremental file by appending the new records to the end of the original file.


I then set the processing option not to sort the source file.


After resetting the last process ID back to zero and reloading both files, this time the incremental data was correct and did not contain the record from the original load.


Anyway, moving on, if I try to load the file again with no changes then the difference file will contain no records.


The process will be displayed as a warning.


No records will exist in the workbench.


Going back to one of the earlier points:

“Only the last 5 versions are retained. Files are retained for a maximum of 60 days.”

You will see an entry in the log when you go over 5 versions.

DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: Deleted file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_95.dat

For each load rule there will be a maximum of 5 archived source files on the file system.


It is still possible to run the data load rule with EPM Automate or REST and pass in the filename.

First the file can be uploaded.


Next the data load rule can be executed by passing in the filename.


In the process logs you will see an entry for EPM Automate and the filename.

DEBUG [AIF]: epmAutomateFileName: sep-18-actuals.csv
DEBUG [AIF]: Source file: /u03/inbox/inbox/sep-18-actuals.csv
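
For anyone who has not used them before, the calls behind those log entries follow the standard EPM Automate pattern. This is only a sketch; the local path, rule name, period, and load modes below are made up for illustration:

epmautomate uploadfile c:\files\sep-18-actuals.csv
epmautomate rundatarule IncrementalLoad Sep-18 Sep-18 REPLACE STORE_DATA sep-18-actuals.csv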

I did begin to have a look at loading larger files to check out the performance when sorting is enabled, but I seemed to hit more problems.

I started with a file like in my previous example but this time containing 50,000 records. I then loaded an incremental file containing 100,000 records which included the 50,000 from the original file.

For some reason 199,999 records were loaded to Data Management.

In the log I could see:

DEBUG [AIF]: dataFileName: Vision_DIFF_116.dat

HEADER_ROW] Account,Entity,Product,Amount
INFO  [AIF]: EPMFDM-140274:Message - [TC] - [Amount=NN] 99990,4000,1200,9299Account,Entity,Product,Amount
INFO  [AIF]: EPMFDM-140274:Message - Rows Loaded: 199999
Rows Rejected: 1

I downloaded the file containing the differences and the file contained 200,001 records.


It seems to have created the difference file by joining the incremental file to itself.


I am concerned about some of the issues I have seen; it may just be me, but if you are going to use the adapter functionality then please do carry out testing first.

I am going to leave it there for this introduction to the incremental file adapter; if I find out any more about the problems I have experienced, I will update this post.


EPM Cloud – Managing users with EPM Automate and REST API

New functionality has been added in the EPM Cloud 18.09 release to provide the ability to manage users and roles at an identity domain level with either EPM Automate or the REST API. In this post I am going to cover this functionality starting off with EPM Automate.

Four new commands have been added to EPM Automate and these are:
  • addusers – Creates new users in the identity domain based on the contents of a comma separated file.
  • removeusers – Deletes identity domain accounts based on the contents of a comma separated file.
  • assignrole – Assigns an identity domain role to all users that are contained in a comma separated file.
  • unassignrole – Unassigns an identity domain role from all users that are contained in a comma separated file.
Please note, to be able to use these commands you will need to be logged in with an account that has the “Identity Domain Administrator” role.

The comma separated files have to be uploaded to the cloud instance first using the “uploadfile” command before you can use the new commands. I would have preferred it if this could have been done in a single command without having to upload files, but unfortunately that is not the way it has been developed.

I am quickly going to go through each command and provide an example.

Let’s start off with adding new users to the identity domain.

Before you can use the “addusers” command, you will need a file containing the new user information in the correct format.

The file can contain as many users as you would like to add; for demo purposes I am just going to add one user.

The file is required to be in the following format:


The format is the same as if you were importing a batch of users through Oracle Cloud My Services.
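
Purely as an illustration (check the exact column headings against an export from your own My Services), a single-user file might look something like this:

First Name,Last Name,Email,User Login
Jane,Doe,jane.doe@yourcompany.com,jane.doe@yourcompany.com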

Now that I have the file ready, I can upload it using EPM Automate. I have assumed the user is already logged in and that a file with the same name does not already exist in the cloud instance; you can easily use the “deletefile” command first to remove it if needed.


The file will then be available in the “Inbox/Outbox Explorer”


The format to add users with EPM Automate is:

epmautomate addusers <FILE_NAME> <userPassword=PASSWORD> <resetPassword=true|false>

FILE_NAME is the name of the comma separated file containing the new user information which I just uploaded.

userPassword is a default password that is assigned to the new users. It will need to meet the minimal password requirements for identity domain passwords.

resetPassword defines whether the new users must change the password the first time they log in. I recommend this is always set to true.

An example to create the users contained in the file that was just uploaded is:
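
Assuming the uploaded file was named newusers.csv and picking an arbitrary default password, the command would be along the lines of:

epmautomate addusers newusers.csv userPassword=Temp#Pass123 resetPassword=true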


The response from issuing the command will include how many new users were processed, including the number of successful and failed user creations.

If successful, the user should then be available in the identity domain. Oracle cloud should also send out an email to the new user with their account information.


As I set the reset password parameter to true, the first time the user logs in they will be redirected to “Identity Self Service” and required to change their password.


Now that the user has been created we can assign a new identity domain role.

The format for the EPM Automate command is:

epmautomate assignrole <FILE_NAME> <ROLE>

FILE_NAME is the name of a comma separated file containing the user login for the users that you want to assign an identity domain role to.

ROLE is one of the predefined identity domain roles which are:
  • Service Administrator
  • Power User
  • User
  • Viewer
The file is required to be in the following format:


I hoped I could use the same file used for creating the users as it also contains the “User Login” information, but when I tried that I received the following EPM Automate error:

EPMAT-1:Failed to assign role for users. File does not have valid header. Please provide a valid header in file.

Before being able to use the “assignrole” command the above file was uploaded using EPM Automate.


After the file has been uploaded, the “assignrole” command can be executed; in my example I am assigning the “Power User” role to the user in the “assignUsers.csv” file.
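
Following the format above, that should look something like:

epmautomate assignrole assignUsers.csv "Power User"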


The response is the same as when adding users. As the command was successful, the user has been assigned the role.


Unassigning a role is no different from assigning one; the format for the command is:

epmautomate unassignrole <FILE_NAME> <ROLE>

I don’t feel I need to explain what the parameters are this time.

In the following example I am using the same file I uploaded for assigning roles to users; this time I am going to unassign the “Power User” role.


The EPM Automate command completed successfully and a quick check in “My Services” confirms the role has been removed.


On to the final command to delete users from the identity domain.

The format for the EPM Automate command is:

epmautomate removeusers FILE_NAME

The file should contain all the users you want to remove, and the format is the same as when assigning/unassigning roles.


In the following example using EPM Automate, I upload the file containing the users to remove and then remove them with the “removeusers” command.


Back in “My Services” the user has been deleted.


As the EPM Automate utility is built on top of the REST API, all of the above commands are also available using REST.

So let us repeat the available functionality using a REST client. There are lots of free clients out there so pick the one you prefer. As usual I will be using the Boomerang REST client for Chrome.

First, I am going to delete the CSV file in the cloud instance containing the users and then upload a new one.

The REST URL format to delete files is:

https://<cloud_instance>/interop/rest/11.1.2.3.600/applicationsnapshots/<filename>

A DELETE method is used, so to delete the “newusers.csv” file a request would be made to the following:


If the deletion was successful a status of 0 will be returned.


If the file does not exist, a status of 8 will be returned and an error message.
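
If you would rather script these calls than use a REST client, here is a minimal PowerShell sketch of the same delete request. The instance URL and credentials are placeholders, and I am assuming basic authentication with the user in the identitydomain.username format:

#Hypothetical sketch of the delete file REST call from PowerShell (placeholder URL and credentials)
$Cred = "yourdomain.yourusername:YourPassword"
$Headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($Cred)) }
$Url = "https://yourinstance.oraclecloud.com/interop/rest/11.1.2.3.600/applicationsnapshots/newusers.csv"
$Response = Invoke-RestMethod -Method Delete -Uri $Url -Headers $Headers
$Response.status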


Now to upload a new file containing the users to add in the identity domain. I have covered uploading files using the REST API in the past which you can read about here, so there is no need for me to go into much detail again.

The REST URL to upload files is:

https://<cloud_instance>/interop/rest/11.1.2.3.600/applicationsnapshots/<filename>/contents?q={"isLast":<true/false>,"chunkSize":<size in bytes>,"isFirst":<true/false>}

A POST method is required, for example:



In the body of the post I added the user information for the file.


You can include as many users as you want to add.

The request header content type equals “application/octet-stream”

A response status of 0 will be returned if the file was uploaded.


If the file already exists, you would receive something like:


The file is now available in the “Inbox/Outbox Explorer”.


On to adding the user contained in the file using the REST API.

The URL format for managing users is:

https://<cloud_instance>/interop/rest/security/v1/users

A POST method is required, and the body of the request should contain the filename and the default and reset password values. These are the same parameters which are used with EPM Automate commands.


It is a shame that the user information could not have been included in the body of the request instead of having to upload a file.

The response will contain job information for adding new users. It includes a URL which can be accessed to check the job status.


A GET request can be made to keep checking the job status until it completes.


A status of 0 means the operation was successful; just like with EPM Automate, details are included showing how many new user creations were processed and how many succeeded or failed.

As the job was successful, the new user has been added and is available in “My Services”.


We can now move on to assigning a role for the new user.

I uploaded the following file:


In my example the file only contains a single user but it can contain as many as you want to assign a role to.

To assign a role to the users contained in a file the same URL format is required as when adding users.

A PUT method is required, and the body of the request should include the filename, the role name and a job type of “ASSIGN_ROLE”


This time I am going to assign the “User” identity domain role.

The current job status for assigning the role is returned, once again it includes a URL to check the job status.


The job status can then be checked until it completes.


As the operation was successful, the “User” role has been assigned to the user contained in the file.


Unassigning roles is very similar to assigning them. A PUT method is required, and the body of the request should contain the name of the file listing the users that the role should be unassigned from, the role name, and a job type of “UNASSIGN_ROLE”


The current job status is returned in the response.


The job status can be checked until it completes.


As the job was successful the “User” role has been removed for the user contained in the specified file.


To remove users, a file should be uploaded containing the user login details of the users to remove.

I uploaded the following file.


A DELETE method is required, and the URL should include the filename containing the users to remove.


No surprise that job status information is returned in the response.


The job status can be checked until it completes.


The response confirms the user was removed. A look in “My Services” confirms the user has been removed.


I was going to include an example using scripting, but I think you should get the idea. I have covered scripting with the REST API many times in the past so just have a look at my previous posts if you are unclear.

Welcome to ODTUG Kscope19!

Welcome to a new ODTUG Kscope conference planning year! After a much needed hiatus, the team has kicked off the new conference season, and we are super excited about going back to Seattle!

FCCS Updates (September 2018): Toolkit for HFM Migration, Drilling Down from Summary Members, Considerations & More


The September updates for Oracle's Financial Consolidation and Close Cloud Service (FCCS) are here! This blog post outlines new features, including a toolkit for Hyperion Financial Management (HFM) migration to FCCS, drilling down from summary members, considerations, and more.

The monthly update for FCCS will occur on Friday, September 21 during your normal daily maintenance window.


PBCS and EPBCS Updates (September 2018): REST APIs for Managing Users, Unified User Preferences, Upcoming Changes & More

ODTUG Board of Directors Nomination Deadline is September 20!

Are you a paid ODTUG member (or will you be as of September 30, 2018)? Do you have a lot to offer the ODTUG community? Do you have a passion for ODTUG and time to commit to serving on the board? If so, then I encourage you to submit your name to be considered for the 2019-2020 BOD.

ODTUG Kscope18 Session Recordings & Presentations Now Available to ODTUG Members

If you are a paid ODTUG member who was unable to attend ODTUG Kscope18, we have great news for you! The ODTUG Kscope18 session presentations and recordings are NOW AVAILABLE to you!

What to Know Before Moving Hyperion to the Cloud


There’s a lot of information out there about moving from Oracle Hyperion to the Oracle EPM Cloud, which makes sense — there’s a lot you need to know. However, that saturation of content can be difficult to sift through and keep organized. This blog post is your solution.

In it you’ll find a high-level overview of what you need to know before moving your on-prem tools to the Oracle EPM Cloud, along with links to more in-depth content. You’ll have a single point of reference to answer your questions about moving to the cloud.

ODTUG Kscope Session Highlights - Part 2

Part 2 - Curious about the content you'll see at ODTUG Kscope19? As we look ahead to opening abstract submissions in the coming weeks, we would like to share some of the content highlights from ODTUG Kscope18.