The impact of this Essbase data discrepancy is profound. Database reports and Essbase reports can differ drastically, producing inconsistent results and added confusion. Worse, data for the same historic month can start to diverge across environments (say Dev, Test, and Prod), because outline differences cause the same data load to behave differently in each - succeeding in some environments and failing in others.
This scenario can be avoided by adding a dedicated step to the transactional data-loading process. This step is a quality-control check that verifies every segment code in the data you intend to load already exists in the Essbase outline. Only if all members are present should the load proceed (so that no records fall off during loading); if even a single member is missing from the outline, trigger an alert so the necessary remedial action can be taken. If you are wondering how to run a comparison against the outline, it is easy: either use a customized report script with IDESCENDANTS to export a dump of all the members of that specific dimension, or use the table (or file) that served as the source for building the outline. Either of these can serve as the outline reference, which you then compare against the transactional data.
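The check above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the file names, the CSV column name, and the assumption that the outline dump contains one member per line are all hypothetical placeholders standing in for whatever your report script export or source table actually produces.

```python
# Pre-load quality check: verify that every segment code in the
# transactional feed already exists in the Essbase outline.
# File names and column layout below are hypothetical examples.

import csv
import sys


def load_outline_members(path):
    """Read the outline reference - e.g. a report script export of
    IDESCENDANTS for the segment dimension, one member per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def load_segment_codes(path, column="SEGMENT_CODE"):
    """Collect the distinct segment codes from the transactional
    extract (assumed here to be a CSV with a header row)."""
    with open(path, newline="") as f:
        return {row[column].strip() for row in csv.DictReader(f)}


def missing_members(outline_members, segment_codes):
    """Return the segment codes that are absent from the outline."""
    return sorted(segment_codes - outline_members)


if __name__ == "__main__":
    outline = load_outline_members("outline_members.txt")
    codes = load_segment_codes("transactions.csv")
    missing = missing_members(outline, codes)
    if missing:
        # Even one missing member should block the load and raise an alert.
        print("ALERT: members missing from outline:", ", ".join(missing))
        sys.exit(1)
    print("All segment codes exist in the outline - safe to load.")
```

In practice this script would run between the extract and the load steps of your batch, with the non-zero exit code wired into whatever alerting your scheduler supports.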
How do you make sure your Essbase data quality is perfect?