
1) Difference between ASO & BSO?

ASO is Essbase's alternative to the sometimes cumbersome BSO method of storing data in an Essbase
database. The differences between the two are as follows:
BSO:
Essbase creates a data block for each unique combination of sparse standard dimension members (provided that at least one data value exists for the sparse dimension member combination). The data block represents all the dense dimension members for its combination of sparse dimension members. Essbase also creates an index entry for each data block; the index represents the combinations of sparse standard dimension members and contains an entry for each unique combination of sparse standard dimension members for which at least one data value exists.
All data is stored, except for dynamically calculated members. All data consolidations and parent-child relationships in the database outline are stored as well. While the block storage method is quite efficient from a data-to-size ratio perspective, a BSO database can require large amounts of overhead to deliver the retrieval performance demanded by the business customer.
The database outline must be loaded into memory as a single element.
- Several databases can be stored in one application.
- No reserved names for application and database names.
- The Accounts dimension supports all types of calculations and attribute members.
- Calculation scripts are supported.
- Uncomplicated write-back ability (before version 9).
- Formulas are allowed in all dimensions with no restrictions.
- Outline changes do not automatically clear data values, even if a data source is used to both modify members and load values; therefore, incremental data loads are supported for all outlines (before version 9).
- Currency conversion is supported.
- For better performance, the outline dimensions must be defined as dense or sparse based on data density, which can sometimes be difficult to get exactly right.
- Database calculation is done by calculation script or outline consolidation.
- Calculation order needs to be defined in calc scripts and is predetermined in a default outline calculation.
- Unrestricted write-back ability, which can be dangerous if care is not exercised.
- No automatic update of values after a data load; any necessary calculation scripts must be explicitly executed, including default calculations.
- Sometimes requires large amounts of resources.

ASO:
You manage storage with tablespaces: you can accept or alter the allocation of the default and temp storage areas and disk space as the requirements dictate.
The ASO database efficiently stores not only level 0 data but can also store aggregated hierarchical data, with the understanding that stored hierarchies can only have the no-consolidation (~) or addition (+) operator assigned to them, and that the no-consolidation (~) operator can only be used underneath Label Only members. Outline member consolidations are performed on the fly using dynamic calculations, and only at the time data is requested. This is the main reason ASO is a valuable option worth considering when building an Essbase system for your customer.
Database outlines are created and stored in a pageable format. Instead of Essbase loading the entire database outline into memory, the pageable outline can be loaded into memory one page or section at a time. This can free up resources and can make data retrieval and data aggregations faster by reducing the amount of memory consumed by large database outlines.
- Aggregate storage applications have some limitations that do not apply to block storage applications with regard to consolidations, calculations, and overall functionality.
- Can store only one database per application.
- Names reserved for tablespaces cannot be used as application or database names.
- The Accounts dimension does not support time balance members or the association of attribute dimensions.
- On non-Accounts dimensions, there are restrictions on label only members and Dynamic Time Series members. Members tagged as dynamic hierarchies have no restrictions on consolidation settings; stored hierarchy members can only be tagged as label only or (+) addition.
- Non-Accounts stored dimensions support only the addition (+) consolidation operator.
- Calculation scripts are not supported.
- Formulas are allowed only on Accounts dimension members, and with certain restrictions.
- Only level 0 cells whose values do not depend on formulas in the outline are loaded.
- Data values are cleared each time the outline is structurally changed; therefore, incremental data loads are only supported for outlines that do not change.
- Currency conversion is not supported without the use of special MDX queries, which can have a negative effect on performance.
- Easy optimization, massive data scalability, reduced disk space, and retrieval up to 100 times faster.
- Database creation is accomplished either by migrating a BSO outline or by defining a new outline after application creation.
- Outline dimensions do not need to be designated as dense or sparse.
- The outline is validated every time a database is started.
- Database calculation (aggregation) can be predefined by defining aggregate views.
- Calculation order is not relevant for database calculation, but it is relevant for dynamic calculation formulas.
- Limited write-back ability.
- At the end of a data load, if aggregations exist, the aggregated values are recalculated and updated automatically.
- Aggregate storage database outlines are pageable, which significantly reduces memory usage for very large database outlines.

As you can see, there are some substantial differences and some very good reasons to use one type of
database over another. To give you our idea of the ideal application of ASO and BSO, read below:
ASO Database: The ASO database is ideal for dynamically built Essbase cubes that are usually
Read Only and used for reporting, presentation, and analysis. This type of database would also tend to
have a rather large outline where at least one dimension has a significant amount of members. A parts
dimension or product dimension comes to mind.
Behind this ASO database would be a large BSO parent Essbase database, from which the dynamic ASO
databases are built on the fly.
BSO Database: The BSO database is ideal for virtually any size cube, but where performance is
not necessarily the number one priority. Accuracy and completeness of data would be the main
consideration. The BSO database is ideal as the large parent database where users from many different
departments can trigger jobs which will dynamically build ASO reporting cubes on an as needed basis.
The typical BSO database is ideally suited for financial analysis applications.

Of course, this is just one possible scenario. The beauty of Essbase is that you can do almost
anything with it. Heck, you could easily have a large Oracle relational database as the backend data
source for your ASO cubes. The possibilities are endless!

2) What is Incremental Dimensional Build and one time Dimension Build?


Incremental dimension building is a deferred-restructure dimension build process. It is used particularly
in situations where you want to build an outline from multiple data sources and save time by deferring
the restructure until all files have been processed. When we build one or more dimensions from several
data files without restructuring the database after each one, we are using an incremental dimension
build. A one-time dimension build, by contrast, builds from a single data source and restructures the
database immediately.
Incremental Dimension Build:
Builds from and reads multiple data sources for dimension builds, delaying restructuring until all data
sources have been processed. If you make frequent changes to a database outline, consider enabling
incremental restructuring. When incremental restructuring is enabled, Essbase defers restructuring so
that a change to the database outline or to a dimension does not cause an immediate structural change.
Essbase restructures the index and, if necessary, the affected blocks the next time they are accessed.
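
In MaxL, this deferred restructure can be requested by listing multiple sources in a single dimension-build statement; a sketch, where the data-file and rules-file names below are illustrative and not from the source:

```
/* Build two dimensions from two sources in one statement; Essbase
   restructures the database only once, after both rules files run */
import database Sample.Basic dimensions
    from server text data_file 'prodmbrs' using server rules_file 'prodrule',
    from server text data_file 'mktmbrs' using server rules_file 'mktrule'
    on error append to 'dimbuild.err';
```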

3) What are the different types of log files?


1. A separate log file for each application (the application log)
2. Essbase.log – for the Essbase server
3. Essbase_service.log
4. Shared_Services_Client.log
5. EssbasePlugin.log – in the “lcm” folder
6. Configtool.log – all configuration logs
7. EssbaseExternalizationTask.log
8. easserver.log
9. eas_install.log
10. essbaseserver-install.log
11. essbaseclient_install.log
12. Error logs

4) What is the Extension of Cal Scripts and Rule File? .csc and .rul

5) Why are Filters used?


Filters are used for data level security. If we want to grant access to all dimensions then we wouldn't use
a filter. Just grant read access for the database to the user and/or group. The only reason you would use
a filter is if you wanted to restrict access of the data for example, to restrict access to a particular
dimension member.
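
As a hedged MaxL sketch, where the filter, application, and user names are invented for illustration:

```
/* Create a filter that grants read access to East and blocks West */
create filter Sample.Basic.eastOnly
    read on '@IDESCENDANTS("East")',
    no_access on '@IDESCENDANTS("West")';

/* Assign the filter to a user */
grant filter Sample.Basic.eastOnly to user1;
```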

6) What are dense and sparse dimensions?


Dense: A dimension which has the high probability that data exists for every combination of dimension
members
Sparse: A dimension which has low probability that data exists for every combination of dimension
members

7) What are Filters? Data level security


Method of controlling access to database cells in Essbase. A filter is the most detailed level of security,
allowing you to define varying access levels different users can have to individual database values.

8) What are Attributes?


Classification of a member in a dimension. You can select and group members based on their associated
attributes. You can also specify an attribute when you perform calculations and use calculation
functions. E.g., the Product dimension in the Sample.Basic database has attributes such as size, package
type, and flavor. We can associate these attributes with the dimension and then use them to retrieve
data; for example, to retrieve cola sold in 8-ounce bottles. This is useful for generating reports.

9) What are different types of attributes?


Essbase supports two different types of attributes:
1. User-defined attributes (UDAs): attributes that are defined by the user.
2. Simple attributes: Essbase supports the attribute types Boolean, date, number, and string (as used in
attribute dimensions).

10) What is Substitution Variable?


Substitution variables act as global placeholders for information that changes regularly. Each variable
has a value assigned to it and can be changed at any time by the database administrator. The use of
substitution variables helps reduce maintenance of report scripts, eliminating the need for manual
changes to individual report scripts. For example, many report scripts refer to reporting periods, such as
current month or current quarter. By using substitution variables set such as CurrentMonth or
CurrentQuarter, you can change the assigned value in one place, and the appropriate report scripts are
dynamically updated when the report script is executed.

To refer to a substitution variable in your report script, place an ampersand (&) in front of the variable
name. For example, use &CurrentMonth in your report script to reference the substitution variable
CurrentMonth. When the query is executed, &CurrentMonth is substituted with the value defined in
the IBM DB2 OLAP Server™ or Hyperion Essbase server.

While substitution variables help reduce maintenance in report scripts, someone still has to manually
change the values in the IBM DB2 OLAP Server or Hyperion Essbase server. As an alternative in DB2
Alphablox applications, you could use Java™ methods in your JSP pages to automatically calculate a
value for the current month or other reporting period, then substitute that value in your report scripts.
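
As a sketch, the administrator maintains the variable with MaxL, and every report script that references &CurrentMonth picks up the new value automatically (the variable name is the one used above):

```
/* Create the server-level substitution variable, then update it
   each reporting period */
alter system add variable CurrentMonth 'Jan';
alter system set variable CurrentMonth 'Feb';
```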

11) How is Data Stored in Essbase?


BSO: Essbase creates a data block for each unique combination of sparse standard dimension members
(provided that at least one data value exists for the sparse dimension member combination). The data
block represents all the dense dimension members for its combination of sparse dimension members.
Essbase also creates an index entry for each data block; the index represents the combinations of sparse
standard dimension members and contains an entry for each unique combination for which at least one
data value exists.

ASO: Storage is managed with tablespaces. You can accept or alter the allocation of the default and
temp storage areas and disk space as the requirements dictate.

12) What is an hourglass model?

13) Types of Build Methods?


Generation Reference
Level Reference
Parent-Child Reference

14) What is Two Pass Calculation?


A property set on members to correct aggregation results that would otherwise be incorrect.
Members tagged as two-pass calc are recalculated when the second pass is executed, overwriting
the incorrect summation with the correct result, as defined in the member formulas or calc scripts.
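
A sketch of the documented Sample.Basic example, where the member "Profit %" carries the two-pass tag:

```
/* Outline member formula (conceptual) for "Profit %",
   tagged Dynamic Calc and Two-Pass: */
"Profit %" = "Profit" % "Sales";

/* Without the two-pass tag, Qtr1 -> "Profit %" would be the sum of
   the Jan, Feb, and Mar ratios; with two-pass it is recalculated
   after consolidation as Qtr1 Profit % Qtr1 Sales, the correct ratio. */
```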

15) What is TB First and TB Last?


TB First: in the Sample.Basic database, the accounts member Opening Inventory is tagged as TB First.
Opening Inventory consolidates the value of the first month in each quarter and uses that value for that
month’s parent. For example, the value for Qtr1 is the same as the value for Jan.
TB Last: in the Sample.Basic database, the accounts member Ending Inventory is tagged as TB Last.
Ending Inventory consolidates the value for the last month in each quarter and uses that value for that
month’s parent. For example, the value for Qtr1 is the same as the value for Mar.

16) How do you calculate the Size of the data block?


Essbase creates a data block for each unique combination of sparse standard dimension members
(providing that at least one data value exists for the sparse dimension member combination).
The data block represents all the dense dimension members for its combination of sparse dimension
members.

For example, say Market and Scenario are standard sparse dimensions, with EAST and WEST as children of
Market and ACTUAL and BUDGET as children of Scenario. Measures and Time are standard dense dimensions,
with SALES and COGS as children of Measures and JAN and FEB as children of Time.

The combinations of the standard sparse dimensions are:

EAST - ACTUAL    EAST - BUDGET
WEST - ACTUAL    WEST - BUDGET

Say the following unique combinations have data:

EAST - ACTUAL
WEST - ACTUAL

Then, per the above, two data blocks are created.

The combinations of the standard dense dimensions are:

SALES - JAN    COGS - JAN
SALES - FEB    COGS - FEB

That is 4 combinations in total, so 4 cells are created in each block (each cell is stored as an
8-byte value).

Size of data block = 8 * (number of cells in the block) bytes
                   = 8 * 4 = 32 bytes

Size of cube = (number of blocks) * (block size) bytes
             = 2 * 32 = 64 bytes
17) How many data blocks are there in your cube?
18) What is meant by Descendents and can you give me the best example to describe it?
19) How do you do the Data Load?
20) Where is ISMember Command used?
22) What is intelligent Calculation?
Intelligent Calculation allows Essbase to remember which data blocks in the database need to be
calculated based on new data coming in, and which have not been impacted (and do not need calculation).
Intelligent Calculation is wonderful when you are running a default calc.
The SET UPDATECALC OFF command tells Essbase to ignore Intelligent Calculation during a calc script.
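A minimal calc script sketch of the default behavior:

```
/* With Intelligent Calculation on (the default), CALC ALL
   recalculates only the blocks flagged dirty */
SET UPDATECALC ON;
CALC ALL;
```

To force a full recalculation regardless of clean/dirty flags, the script would instead begin with SET UPDATECALC OFF;.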
23) What is meant by Clean block and Dirty block?
With Intelligent Calculation on, when a calculation is run Essbase flags blocks as 'clean', and when data
is loaded or changed, blocks are flagged as 'dirty'. (It is important to note that this is done at the
block level, not at the cell level.) This helps reduce calculation time by allowing the Essbase
calculator to skip the blocks flagged as 'clean'.
24) Commands of Intelligent calculation?
Set UpdateCalc On;
Set UpdateCalc Off;
Set ClearUpdateStatus After;
Set ClearUpdateStatus Only;
Set ClearUpdateStatus Off;
25) How do you calculate the subset of a cube?
CALC DIM;
CALC ALL EXCEPT DIM(Product);
CALC ALL EXCEPT MBR (mbrlist);
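
Alongside these commands, the usual way to calculate a subset is a FIX...ENDFIX block; a sketch using Sample.Basic-style member names:

```
/* Calculate only the East / Actual slice of the database */
FIX ("East", "Actual")
    CALC DIM ("Measures", "Product");
ENDFIX
```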

26) Difference between standard dimension and attribute dimension?


Standard Dimensions represent the core components of a business plan and often relate to
departmental functions.
Attribute dimensions are a special type of dimension and are associated with standard dimensions.
Through attribute dimensions, you group and analyze members of your standard dimensions. It does not
associate any data.

27) Difference between UDA and attribute dimension?


A UDA is a user defined attribute; it’s basically a tag you can assign to a member that you can then
reference in calcs, security, and reporting to recall those members that have been tagged. UDA's are
fairly flexible as you can assign them across different hierarchy levels.
Attribute dimensions are more rigid, but provide more reporting functionality. Attributes have to be
consistent with the level they are applied to in the dim, in other words you can't have the same attribute
for a member at level 0 and another member at level 1. Attributes are part of a physical dimension that
users can render in a report as a replacement or augmentation to the base dimension they are
associated with. One powerful use is for cross tab reporting. Attribute dims are dynamically calculated
and provide summary level and/or subtotals for the attribute members.
So Attributes are more powerful than UDA's, but they take more work to set up and have more rigid
rules around using them. UDA's are easier to add on and more flexible, but not as robust for reporting.
The way I choose which one to use is based on the need. If the purpose is end-user reporting, I
usually try to use an attribute dimension in most cases. If the need is around identifying members for
security or calculations, I'll often lean towards UDAs. In many cases the two overlap, and it just comes
down to preference or what is best suited for the design and maintenance of the database.
One other point to remember is that you cannot assign an attribute to a dense dimension.

28) What is meant by XREF Function?


XREF function enables a database calculation to incorporate values from a different database.
Syntax: @XREF (locationAlias [, mbrList])
LocationAlias: A location alias for the data source. A location alias is a descriptor that identifies the data
source. The location alias must be set on the database on which the calculation script will be run. The
location alias is set by the database administrator and specifies a server, application, database,
username, and password for the data source.
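
A sketch of both pieces (the server, application, and credential names below are illustrative):

```
/* MaxL: the administrator defines the location alias on the
   database that will run the calculation */
create location alias SourceCube from Target.Basic
    to Source.Main at "essserver1" as admin identified by 'password';
```

In a calc script on Target.Basic, a value can then be pulled across with, for example, "Budget" = @XREF(SourceCube, "Actual");.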

29) Can you give the same name to different members in UDA?
30) What is Data Cache?
Data blocks can reside on physical disk and in RAM. The amount of memory allocated for blocks is called
the data cache. When a block is requested, the data cache is searched. If the block is found in the data
cache, it is accessed immediately. If the block is not found in the data cache, the index is searched for
the appropriate block number. The block's index entry is then used to retrieve the block from the proper
data file on disk.

31) What is the Difference between the Data Cache and Data File Cache?
32) What is the Size of your cube?

33) What are shared members? Can shared members have children?


The data associated with the member comes from another member with the same name.
No. Shared members should always be level 0 members.

34) What are the different storage properties in Essbase?


Store Data, Dynamic Calc, Dynamic Calc & Store, Never Share, Label Only, Shared Member

35) What is the difference between Dynamic calc and Dynamic Calc and Store?
36) How does Essbase consolidate Data?
37) Different types of Dimension Building?

38) What is Label Only? Give an example of it?


Although a label only member has no data storage associated with it, it can still display a value,
typically that of its first child. The label only tag groups members and eases navigation and reporting.
Typically, label only members are not calculated. For example, in Sample.Basic the top member of the
Scenario dimension is tagged label only: retrieving Scenario displays the Actual value (its first child)
rather than a consolidated total of Actual, Budget, and the variance members.

39) Difference between calc all and Calc Dim?

40) Explain me about your project? And tell me any difficulties that you have faced and how did you
resolve it?

41) What is Commit Block?


The commit blocks setting controls how often blocks in memory are written to disk while loading or
calculating a cube. You want to minimize disk writes, as they take up a lot of processing time, so set
this value quite high. The default setting is 3000 blocks; if your block size is relatively small
(< 10 KB), make it much higher, 20000 to 50000. This setting alone can cause dramatic performance
improvements, specifically on CALC ALL operations and cube loads.
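
The setting can also be adjusted with MaxL; a sketch (the database name is illustrative):

```
/* Write updated blocks to disk only every 20000 blocks */
alter database Sample.Basic set implicit_commit after 20000 blocks;
```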

42) What are the different steps so that you can optimize the performance your cube?
Step 1: The Starting Line: Model Analysis
- Minimize the number of dimensions. Do not ask for everything in one model

- Minimize complexity of individual dimensions. Consider UDAs and Attribute Dimensions in order to
reduce the size of some of the dimensions
- Examine the level of granularity in the dimensions.
Step 2: Order The Outline: Hour-glass model
- Dense dimensions from largest to smallest. Size is measured simply by counting the number of stored
members in a dimension. The effect of sparse dimension ordering is much greater than that of dense
dimension ordering.
- Sparse dimensions from smallest to largest. This relates directly to how the calculator cache functions.
Step 3: Evaluate Dense/Sparse Settings
- Finding the optimal configuration for the Dense/sparse settings is the most important step in tuning a
database.
- Optimize the block size. This varies per operating system, but in choosing the best Dense/sparse
configuration keep in mind that blocks over 100k tend to yield poorer performance. In general, Analytic
Services runs optimally with smaller block sizes
Step 4: System Tuning: System tuning depends on the type of hardware and operating system.
- Keep memory size high.
- Ensure there is no conflict for resources with other applications.
Step 5: Cache Settings
- The actual cache settings recommended is strongly dependent on your specific situation.
- To measure the effectiveness of the cache settings, keep track of the time taken to do a calculation and
examine the hit ratio statistics in your database information.
Step 6: Optimize Data Loads
- Know your database configuration settings (which dimensions are dense and sparse).
- Organize the data file so that it is sorted on sparse dimensions. The most effective data load is one
which makes the fewest passes on the database. Hence, by sorting on sparse dimensions, you are
loading a block fully before moving to the next one.
- Load data locally on the server. If you are loading from a raw data file dump, make sure the data file is
on the server. If it is on the client, you may bottleneck on the network
Step 7: Optimize Retrievals
- Increase the retrieval buffer size. This helps if retrievals are slowed by dynamic calculations and
attribute dimensions.
- Increase the retrieval sort buffer size if you are performing queries involving sorting or ranking.
- Smaller block sizes tend to give better retrieval performance. Logically, this makes sense because it
usually implies less I/O.
- Smaller reports retrieve faster.
- Attributes may impact calculation performance, which usually has a higher importance from a
performance standpoint.
- If you have a lot of dynamic calculations or attribute dimensions, higher index cache settings may
help performance, since blocks are found more quickly.
Step 8: Optimize Calculations
- Unary calculations are the fastest. Try to put everything in the outline and perform a CALC ALL when
possible.
- FIX on sparse dimensions and use IF on dense dimensions. FIX statements on sparse dimensions bring
into memory only the blocks with the sparse combinations the calc is focused on; IF statements on dense
dimensions operate on blocks as they are brought into memory.
- Use the two-pass calculation tag, and try to avoid multiple passes on the database.
- Use Intelligent Calculation in the case of simple calc scripts.
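
The FIX-on-sparse, IF-on-dense guideline can be sketched as follows, assuming Sample.Basic-style dimensions (Market and Scenario sparse; Measures and Year dense):

```
/* FIX limits which blocks are brought into memory */
FIX ("Actual", @DESCENDANTS("East"))
    /* IF is evaluated inside each block as it is processed */
    "Sales" (
        IF (@ISMBR("Jan"))
            "Sales" = "Sales" * 1.1;
        ENDIF
    );
ENDFIX
```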
Step 9: Defragmentation
Fragmentation occurs over time as data blocks are updated. As the data blocks are updated, they grow
(assuming you are using compression) and the updated blocks are appended to the page file. This tends
to leave small free space gaps in the page file.
Time - The longer you run your database without clearing and reloading the more likely it is that it has
become fragmented.
Incremental Loads - This usually leads to lots of updates for blocks.
Many Calculations/Many Passes On The Database - Incremental calculations or calculations that pass
through the data blocks multiple times leads to fragmentation.
Step 10: Partition
By breaking one large database into smaller pieces, calculation performance may be optimized.
Because this adds a significant layer of complexity to administration, this is the last of the
optimization steps we list. However, this does not mean that it has the least impact.

43) What is Partition?


44) What are different types of Partition? Have you ever worked on partitioning?
