1) Difference Between ASO & BSO?
ASO is Essbase's alternative to the sometimes cumbersome BSO method of storing data in an Essbase
database. The differences between the two are as follows:
BSO:
- Essbase creates a data block for each unique combination of sparse standard dimension members (provided that at least one data value exists for that sparse member combination). The data block represents all the dense dimension members for its combination of sparse dimension members.
- Essbase creates an index entry for each data block. The index represents the combinations of sparse standard dimension members and contains an entry for each unique combination of sparse standard dimension members for which at least one data value exists.
- All data is stored, except for dynamically calculated members. All data consolidations and parent-child relationships in the database outline are stored as well. While the block storage method is quite efficient from a data-to-size ratio perspective, a BSO database can require large amounts of overhead to deliver the retrieval performance demanded by the business customer.
- The database outline must be loaded into memory as a single element.
- Several databases can be stored in one application.
- There are no reserved names for application and database names.
- The Accounts dimension supports all types of calculations and attribute members.
- Calculation scripts are supported.
- Uncomplicated write-back ability (before version 9).
- Formulas are allowed in all dimensions with no restrictions.
- Outline changes do not automatically clear data values, even if a data source is used to both modify members and load values. Therefore, incremental data loads are supported for all outlines (before version 9).
- Currency conversion is supported.
- For better performance, the outline dimensions must be defined as dense or sparse based on data density, which can sometimes be difficult to get exactly right.
- Database calculation is done through calculation scripts or outline consolidation.
- Calculation order needs to be defined in calc scripts and is predetermined in a default outline calculation.
- Unrestricted write-back ability, which can be dangerous if care is not exercised.
- No automatic update of values after a data load; the necessary calculation scripts, including any default calculations, must be explicitly executed.
- Sometimes requires large amounts of resources.
ASO:
- You manage storage with tablespaces: you accept or alter the allocation of the default and temp storage areas and disk space as the requirements dictate.
- The ASO database efficiently stores not only zero-level data but can also store aggregated hierarchical data, with the understanding that stored hierarchies can only have the no-consolidation (~) or the addition (+) operator assigned to them, and the no-consolidation (~) operator can only be used underneath Label Only members. Outline member consolidations are performed on the fly using dynamic calculations and only at the time data is requested. This is the main reason why ASO is a valuable option worth considering when building an Essbase system for your customer.
- The database outlines are created and stored in a page-able format. This means that instead of Essbase loading the entire database outline into memory, the page-able outline can be loaded into memory one page or section at a time. This can free up resources and help make data retrieval and data aggregations faster by reducing the amount of memory consumed by large database outlines.
- Aggregate storage applications have some limitations that do not apply to block storage applications with regard to consolidations, calculations, and overall robust functionality.
- Can store only one database per application.
- Names reserved for tablespaces cannot be used as application or database names.
- The Accounts dimension does not support time balance members or the association of attribute dimensions.
- On non-account dimensions, there are restrictions on label only members and Dynamic Time Series members. Members tagged as dynamic hierarchies have no restrictions on the consolidation settings. Stored hierarchy members can only be tagged as label only or use the (+) addition operator.
- Non-account dimensions support only the (+) addition consolidation operator.
- Calculation scripts are not supported.
- Formulas are allowed only on Accounts dimension members, and only with certain restrictions.
- Only level 0 cells whose values do not depend on formulas in the outline are loaded.
- Data values are cleared each time the outline is structurally changed. Therefore, incremental data loads are only supported for outlines that do not change.
- Currency conversion is not supported without the use of special MDX queries, which can have a negative effect on performance.
- Easy optimization, massive data scalability, reduced disk space, and retrievals up to 100 times faster.
- Database creation is accomplished either by migrating a BSO outline or by defining a new outline after application creation.
- Outline dimensions do not need to be designated as dense or sparse.
- The outline is validated every time a database is started.
- Database calculation, or aggregation of the database, can be predefined by defining aggregate views.
- Calculation order is not relevant for database calculation, but is relevant for dynamic calculation formulas.
- Limited write-back ability.
- At the end of a data load, if aggregations exist, the aggregated values are recalculated and updated automatically.
- Aggregate storage database outlines are page-able. This feature significantly reduces memory usage for very large database outlines.
As you can see, there are some substantial differences and some very good reasons to use one type of
database over another. To give you our idea of the ideal application of ASO and BSO, read below:
ASO Database: The ASO database is ideal for dynamically built Essbase cubes that are usually
Read Only and used for reporting, presentation, and analysis. This type of database would also tend to
have a rather large outline where at least one dimension has a significant amount of members. A parts
dimension or product dimension comes to mind.
Behind this ASO database would be a large BSO parent Essbase database, from which the dynamic ASO
databases are built on the fly.
BSO Database: The BSO database is ideal for virtually any size cube, but where performance is
not necessarily the number one priority. Accuracy and completeness of data would be the main
consideration. The BSO database is ideal as the large parent database where users from many different
departments can trigger jobs which will dynamically build ASO reporting cubes on an as needed basis.
The typical BSO database is ideally suited for financial analysis applications.
Of course, this is just one possible scenario. The beauty of Essbase is that you can do almost anything
with it. Heck, you could easily have a large Oracle relational database as the backend data
source for your ASO cubes. The possibilities are endless!
4) What is the Extension of Calc Scripts and Rule Files? .csc and .rul
To refer to a substitution variable in your report script, place an ampersand (&) in front of the variable
name. For example, use &CurrentMonth in your report script to reference the substitution variable
CurrentMonth. When the query is executed, &CurrentMonth is substituted with the value defined in
the IBM DB2 OLAP Server™ or Hyperion Essbase server.
While substitution variables help reduce maintenance in report scripts, someone still has to manually
change the values in the IBM DB2 OLAP Server or Hyperion Essbase server. As an alternative in DB2
Alphablox applications, you could use Java™ methods in your JSP pages to automatically calculate a
value for the current month or other reporting period, then substitute that value in your report scripts.
For example, say Market and Scenario are standard sparse dimensions, with EAST and WEST as children of Market and
ACTUAL and BUDGET as children of Scenario, while Measures and Time are standard dense dimensions, with SALES and COGS
as children of Measures and JAN and FEB as children of Time. Suppose data exists only for the combinations:
EAST - ACTUAL
WEST - ACTUAL
Now, as per the above information, two data blocks will be created, one for each sparse combination that holds data.
The dense dimensions give 2 x 2 = 4 member combinations (SALES and COGS crossed with JAN and FEB), so 4 cells will be created in each block.
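To make the arithmetic explicit, here is a minimal Python sketch (illustrative only, not Essbase API code) that counts the blocks and cells for the example above; the dimension layout and the two loaded combinations come straight from the example, everything else is invented for the illustration:

    from itertools import product

    # Sparse dimensions: one potential block per unique sparse member combination.
    sparse_dims = {"Market": ["EAST", "WEST"], "Scenario": ["ACTUAL", "BUDGET"]}
    # Dense dimensions: their stored members define the cells inside every block.
    dense_dims = {"Measures": ["SALES", "COGS"], "Time": ["JAN", "FEB"]}

    # Combinations for which at least one data value exists (from the example).
    loaded = {("EAST", "ACTUAL"), ("WEST", "ACTUAL")}

    potential_blocks = list(product(*sparse_dims.values()))   # 2 x 2 = 4 combinations
    created_blocks = [c for c in potential_blocks if c in loaded]

    cells_per_block = 1
    for members in dense_dims.values():
        cells_per_block *= len(members)                       # 2 x 2 = 4 cells

    print("potential sparse combinations:", len(potential_blocks))  # 4
    print("data blocks actually created:", len(created_blocks))     # 2
    print("cells in each block:", cells_per_block)                   # 4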
29) Can you give the same name to different members in UDA?
30) What is Data Cache?
Data blocks can reside on physical disk and in RAM. The amount of memory allocated for blocks is called
the data cache. When a block is requested, the data cache is searched. If the block is found in the data
cache, it is accessed immediately. If the block is not found in the data cache, the index is searched for
the appropriate block number. The block's index entry is then used to retrieve the block from the proper
data file on disk.
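The lookup flow described above can be sketched as a conceptual model in Python; data_cache, index, and read_block_from_disk are hypothetical names invented for the illustration, not Essbase internals:

    def get_block(sparse_combination, data_cache, index, read_block_from_disk):
        """Return the data block for one sparse member combination."""
        # 1. Search the data cache (blocks already held in RAM) first.
        if sparse_combination in data_cache:
            return data_cache[sparse_combination]

        # 2. Cache miss: search the index for the block's entry, which records
        #    which data file holds the block and where it starts.
        entry = index[sparse_combination]

        # 3. Use the index entry to read the block from the data file on disk,
        #    then keep it in the data cache for subsequent requests.
        block = read_block_from_disk(entry["data_file"], entry["offset"])
        data_cache[sparse_combination] = block
        return block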
31) What is the Difference between the Data Cache and Data File Cache?
32) What is the Size of your cube?
35) What is the difference between Dynamic calc and Dynamic Calc and Store?
36) How does Essbase consolidate Data?
37) Different types of Dimension Building?
40) Explain me about your project? And tell me any difficulties that you have faced and how did you
resolve it?
42) What are the different steps you can take to optimize the performance of your cube?
Step 1: The Starting Line: Model Analysis
- Minimize the number of dimensions. Do not ask for everything in one model
- Minimize complexity of individual dimensions. Consider UDAs and Attribute Dimensions in order to
reduce the size of some of the dimensions
- Examine the level of granularity in the dimensions.
Step 2: Order The Outline: Hour-glass model
- Dense dimensions from largest to smallest. Small and large are measured simply by counting the number
of stored members in a dimension. The effect of sparse dimension ordering is much greater than that of
dense dimension ordering.
- Sparse dimensions from smallest to largest. This relates directly to how the calculator cache functions (see the sketch after this list).
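A minimal Python sketch of the hour-glass rule, assuming you know each dimension's stored member count and dense/sparse setting; the dimension names and counts below are made up for illustration:

    # Hour-glass ordering: dense dimensions first, largest to smallest by stored
    # member count, then sparse dimensions from smallest to largest.
    dimensions = [
        # (name, stored_member_count, is_dense) -- example values only
        ("Measures", 40,   True),
        ("Time",     17,   True),
        ("Scenario", 4,    False),
        ("Market",   250,  False),
        ("Product",  1200, False),
    ]

    dense  = sorted((d for d in dimensions if d[2]),     key=lambda d: -d[1])
    sparse = sorted((d for d in dimensions if not d[2]), key=lambda d: d[1])

    for name, count, _ in dense + sparse:
        print(name, count)
    # Resulting order: Measures, Time, Scenario, Market, Product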
Step 3: Evaluate Dense/Sparse Settings
- Finding the optimal configuration for the Dense/sparse settings is the most important step in tuning a
database.
- Optimize the block size. This varies per operating system, but in choosing the best dense/sparse
configuration keep in mind that blocks over 100 KB tend to yield poorer performance. In general, Analytic
Services runs optimally with smaller block sizes (a sizing sketch follows this list).
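Because each stored cell in a block occupies 8 bytes, the potential block size can be estimated as the product of the stored dense dimension member counts times 8. A small Python sketch with made-up member counts:

    # Estimated block size = product of stored dense member counts x 8 bytes per cell.
    stored_dense_members = {"Measures": 40, "Time": 17}   # illustrative counts

    cells = 1
    for count in stored_dense_members.values():
        cells *= count

    block_size_kb = cells * 8 / 1024
    print(f"{cells} cells per block, ~{block_size_kb:.1f} KB per block")
    # 680 cells per block, ~5.3 KB per block -- well under the ~100 KB guideline

    if block_size_kb > 100:
        print("Consider moving a dimension to sparse to shrink the block.")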
Step 4: System Tuning: System tuning is dependent on the type of hardware and operating system.
- Provide as much memory as possible.
- Ensure there is no conflict for resources with other applications.
Step 5: Cache Settings
- The recommended cache settings are strongly dependent on your specific situation.
- To measure the effectiveness of the cache settings, keep track of the time taken to do a calculation and
examine the hit ratio statistics in your database information (a simple measurement sketch follows this list).
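One rough way to compare settings is to time the calculation run and look at the hit ratio (hits divided by total block requests). The sketch below is plain Python timing code, not tied to any Essbase API; run_calculation and the example counts are placeholders:

    import time

    def timed_calc(run_calculation):
        """Time one calculation run; compare the result across cache settings."""
        start = time.perf_counter()
        run_calculation()          # placeholder: launch the calc however you normally do
        return time.perf_counter() - start

    def hit_ratio(hits, misses):
        """Hit ratio as shown in the database statistics: hits / total requests."""
        total = hits + misses
        return hits / total if total else 0.0

    print(hit_ratio(9500, 500))    # e.g. 9,500 hits out of 10,000 requests -> 0.95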
Step 6: Optimize Data Loads
- Know your database configuration settings (which dimensions are dense and sparse).
- Organize the data file so that it is sorted on the sparse dimensions. The most effective data load is one
that makes the fewest passes on the database; by sorting on the sparse dimensions, you load a block
fully before moving on to the next one (see the sketch after this list).
- Load data locally on the server. If you are loading from a raw data file dump, make sure the data file is
on the server. If it is on the client, you may bottleneck on the network.
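A minimal Python sketch of the sorting idea, assuming a tab-delimited export file whose first two columns hold the sparse dimension members; the file names, delimiter, and column positions are assumptions for the example:

    import csv

    # Sort a raw data file on its sparse dimension columns so that all records
    # for one sparse combination (i.e. one block) arrive together during the load.
    SPARSE_COLUMNS = [0, 1]        # e.g. Market and Scenario come first in the file

    with open("export.txt", newline="") as f:
        rows = list(csv.reader(f, delimiter="\t"))

    rows.sort(key=lambda row: tuple(row[i] for i in SPARSE_COLUMNS))

    with open("export_sorted.txt", "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)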
Step 7: Optimize Retrievals
- Increase the Retrieval Buffer size. This helps if retrievals are slowed by dynamic calculations and
attribute dimensions.
- Increase the Retrieval Sort Buffer size if you are performing queries involving sorting or ranking.
- Smaller block sizes tend to give better retrieval performance. Logically, this makes sense because it
usually implies less I/O.
- Smaller reports retrieve faster.
- Attribute dimensions may impact calculation performance, which usually has higher importance from a
performance standpoint.
- If you have a lot of dynamic calculations or attribute dimensions, higher index cache settings may help
performance, since blocks are found more quickly.
Step 8: Optimize Calculations
- Unary calculations are the fastest. Try to put everything in the outline and perform a Calc All when
possible.
- FIX on sparse dimensions and use IF on dense dimensions. FIX statements on sparse dimensions bring into
memory only the blocks for the sparse combinations that the calc focuses on. IF statements on dense
dimensions operate on blocks as they are brought into memory (a simulation of the difference follows this list).
- Use the Two-Pass Calculation tag and try to avoid multiple passes on the database.
- Use Intelligent Calc in the case of simple calc scripts.
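To illustrate the FIX-versus-IF difference conceptually: FIX narrows which blocks are brought into memory at all, while an IF test is evaluated against the dense cells inside each block that is already in memory. The Python below is only a simulation of that behavior, not calc script syntax, and the member names and values are made up:

    # blocks: sparse combination -> dense cells, mirroring the BSO layout.
    blocks = {
        ("EAST", "ACTUAL"): {("SALES", "JAN"): 100.0, ("COGS", "JAN"): 60.0},
        ("WEST", "ACTUAL"): {("SALES", "JAN"): 80.0,  ("COGS", "JAN"): 50.0},
        ("EAST", "BUDGET"): {("SALES", "JAN"): 110.0, ("COGS", "JAN"): 65.0},
    }

    # FIX on sparse members: only the matching blocks are touched at all.
    fixed = {combo: cells for combo, cells in blocks.items() if "ACTUAL" in combo}

    # IF on dense members: every cell of each selected block is examined,
    # and the condition decides which cells are actually updated.
    for combo, cells in fixed.items():
        for (measure, period), value in cells.items():
            if measure == "SALES":             # IF-style test on a dense member
                cells[(measure, period)] = value * 1.1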
Step 9: Defragmentation
Fragmentation occurs over time as data blocks are updated. As the data blocks are updated, they grow
(assuming you are using compression) and the updated blocks are appended to the page file. This tends
to leave small free-space gaps in the page file (a toy illustration of this mechanism follows the list of causes below).
Time - The longer you run your database without clearing and reloading, the more likely it is that it has
become fragmented.
Incremental Loads - These usually lead to lots of updates for blocks.
Many Calculations/Many Passes On The Database - Incremental calculations, or calculations that pass
through the data blocks multiple times, lead to fragmentation.
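A toy Python model of the mechanism described above; the slot sizes and block names are invented purely to show how a grown block leaves its old slot behind as a free gap:

    # Toy page-file model: each slot is [block_id, used_bytes, slot_bytes].
    page_file = [["A", 100, 100], ["B", 120, 120], ["C", 90, 90]]
    free_gaps = []

    def update_block(block_id, new_size):
        """Grow a block; if it no longer fits its slot, append it and leave a gap."""
        for slot in page_file:
            if slot[0] == block_id:
                if new_size <= slot[2]:
                    slot[1] = new_size                    # still fits in place
                else:
                    free_gaps.append(slot[2])             # old slot becomes wasted space
                    slot[0] = None
                    page_file.append([block_id, new_size, new_size])
                return

    update_block("B", 150)   # block grew after an incremental load
    print(free_gaps)         # [120] -> space reclaimed only by a restructure or export/reload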
Step 10: Partition
By breaking up one large database into smaller pieces, calculation performance may be optimized.
Because this adds a significant layer of complexity to administration, this is the last of the optimization
steps we list. However, this does not mean that it has the least impact.