What is Metadata?
Metadata is a repository, stored in a relational database, that contains the information needed to move data between the data warehouse and MicroStrategy.
Metadata stores object definitions and connection information. MicroStrategy uses the metadata to translate user requests, and the SQL Engine uses these object definitions to generate SQL queries.
Logical Size in Microstrategy:
The logical size of a table depends on the number of attributes in the table and the position of each attribute in its hierarchy.
Hierarchy 1: Year–>Qtr–>Month–>Day
Hierarchy 2: Country–>State–>City
- Calculate the number of levels in Hierarchy
Hierarchy 1: 4(Attributes)
Hierarchy 2: 3(Attributes)
Calculate the logical size using the below formula
(Attribute's level in the hierarchy / number of levels in the hierarchy) * 10
Example for Year: (1/4)*10 = 2.5 ; Qtr: (2/4)*10 = 5 ; …
Note: The lowest level attribute for every hierarchy is always 10.
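The formula above can be sketched in Python; this is a minimal illustration using the two sample hierarchies from the text:

```python
# Sketch of the logical-size formula:
# (attribute's level in hierarchy / number of levels in hierarchy) * 10
hierarchies = {
    "Hierarchy 1": ["Year", "Qtr", "Month", "Day"],
    "Hierarchy 2": ["Country", "State", "City"],
}

def logical_sizes(hierarchy):
    """Return {attribute: logical size} for one hierarchy."""
    depth = len(hierarchy)
    return {attr: (level / depth) * 10
            for level, attr in enumerate(hierarchy, start=1)}

for name, attrs in hierarchies.items():
    print(name, logical_sizes(attrs))
# The lowest-level attribute of each hierarchy (Day, City) always works out to 10.
```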
Dynamic Aggregation: Displaying data on the fly
Dynamic aggregation allows metric values to be aggregated at different levels,
depending on the attributes included on the report, without having to re-execute it against the data warehouse.
When an attribute is moved from the report template to Report Objects, metric values are dynamically re-aggregated at the new level of the report.
Derived Metrics: Creating Metrics on the fly
These metrics are created based on existing metrics in the report. Since derived metrics are evaluated in-memory, their computation does not require any SQL to execute in the database.
Since derived metrics are created within a report, they can only be used for the report in which they are created. They cannot be saved as individual objects in the project, and therefore cannot be applied to other reports in the project.
View Filter: Filtering data on the fly
A view filter dynamically restricts the data displayed without re-executing the report against the warehouse. It has no effect on the report SQL, and it improves report response time. View filters can be applied only to objects in the Report Objects pane.
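The "on the fly" features above all operate on the in-memory report dataset rather than re-querying the warehouse. A rough Python analogy (the sample rows and column names are invented for illustration):

```python
# Invented sample report dataset: (region, month, revenue, cost) rows,
# standing in for the result set already fetched from the warehouse.
rows = [
    ("East", "Jan", 100, 60),
    ("East", "Feb", 120, 70),
    ("West", "Jan",  80, 50),
    ("West", "Feb",  90, 55),
]

# Dynamic aggregation: roll revenue up from (region, month) to region
# without going back to the warehouse.
by_region = {}
for region, month, revenue, cost in rows:
    by_region[region] = by_region.get(region, 0) + revenue
print(by_region)  # {'East': 220, 'West': 170}

# Derived metric: a new metric computed from existing metrics, no SQL issued.
profit = [(region, month, revenue - cost) for region, month, revenue, cost in rows]

# View filter: restrict the displayed rows in memory; the report SQL is unchanged.
east_only = [r for r in rows if r[0] == "East"]
```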
Derived Elements: Defining Attribute elements on the fly
A derived element is a grouping of attribute elements on a report. Derived elements are evaluated on the report dataset without regenerating or re-executing SQL.
Derived elements are defined by using a list, filter, or calculation to combine attribute element data.
Intelligent Cubes: Providing MOLAP analysis with I-Cubes
Mstr I-Cube technology allows you to create multidimensional cubes that are stored on the MicroStrategy Intelligence Server. Cubes are static snapshots, so they must be republished or refreshed to pick up new warehouse data. Consolidations and custom groups cannot be included in I-Cubes or cube reports; derived elements, however, can be used. Prompts cannot be included in I-Cubes, but cube reports can use prompts that access data contained in the I-Cube.
The OLAP Services features described above (dynamic aggregation, derived metrics, view filters, derived elements, and Intelligent Cubes) can all be used in reports.
VLDB Properties Settings: VLDB (Very Large Database) properties let you customize the SQL that Mstr generates. They are available at multiple levels:
- Report (Highest Priority)
- Database Instance
- DBMS (Default)
Mstr Documents and Dashboards concepts:
- Datasets – Reports, Icubes, Cube Reports
- Selectors – Checkboxes, Radio button, Dropdown, Button Bar, Link Bar, Fish Eye,…
- Panel Stack/Panel Stack Selector
- Graphs/Graph Types – Bar, Line, Area, Stacked Bar, Bubble, Bar and Line, etc.
- Widgets – Heat Map, Bubble Grid, Cylinder, Gauge, Funnel, Data Cloud, Micro Charts, etc.
- Auto Text
A compound attribute is an attribute whose primary key is made up of the combination of two or more columns.
To create a compound key (composite key), users must create multiple attribute forms, group them together, and set the form group as the key for the attribute.
- Open the Attribute Editor (right-click the attribute and select 'Edit')
- Select the forms required to make up the key
- From the 'Edit' menu, choose Group
- Once the key is defined, save the attribute
- Choose 'Update Schema' from the Schema menu; the schema must be updated whenever any schema objects change.
All members of a compound key must be part of the same form category.
Users cannot delete a form group while it is set as the key; another attribute form must first be set as the key.
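As an analogy for a compound key, consider this Python sketch (the table and column names are invented): rows are identified only by the combination of two columns, just as a compound attribute's key combines two or more forms.

```python
# Invented example: neither warehouse_id nor bin_id alone identifies a
# storage location; the compound key is the (warehouse_id, bin_id) pair.
locations = {
    (1, "A1"): "Bolts",
    (1, "A2"): "Nuts",
    (2, "A1"): "Washers",   # bin "A1" repeats, but the pair is unique
}
print(locations[(2, "A1")])  # Washers
```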
Consolidations vs Custom Groups
A consolidation is simply a grouping of attribute elements on a report. Consolidations are static, and their calculations take place in the Analytical Engine, which generally gives better performance than a custom group.
A custom group is a special filter that can be placed on a template. It is an ordered collection of elements, called custom group elements, each of which can be labeled with a header and contains a logical expression of attribute, metric, object, advanced, or custom group banding qualifications. These calculations take place in the SQL Engine, and a custom group typically generates more SQL passes.
Command Manager enables administrators to complete many of their routine tasks more quickly and easily.
1) Creating, modifying, and deleting configuration objects (schedules, users, groups, and so forth)
2) Assigning privileges to users and groups
3) Creating, modifying, and deleting attributes, facts, filters, and metrics
4) Publishing, listing, and deleting Intelligent Cubes etc
Sample Scripts :
1) To Create User Group in Command Manager :
CREATE USER GROUP "Developers" MEMBERS "User1", "User2";
In this example, the script performs the following actions:
• Creates a user group named Developers
• Adds User1 and User2 as members of the Developers group
2) To list privileges:
LIST [ALL | GRANTED] PRIVILEGES FOR USER "<login_name>";
eg: LIST ALL PRIVILEGES FOR USER "User1";
3) To import an update package using Command Manager:
IMPORT PACKAGE "<package_file>" [FOR PROJECT "<project_name>"];
where <package_file> is the name and location of the update package, and <project_name> is the name of the project to which the update is to be applied.
If the package made any changes to the project schema, you need to update the schema for the changes to take effect. The syntax for updating the schema in a Command Manager script is:
UPDATE SCHEMA [REFRESHSCHEMA] [RECALTABLEKEYS] [RECALTABLELOGICAL] [RECALOBJECTCACHE] FOR PROJECT "<project_name>";
4) To create a database login:
CREATE DBLOGIN "<dblogin_name>" [LOGIN "<login>"] [PASSWORD "<password>"];
eg: CREATE DBLOGIN "Training" LOGIN "Training" PASSWORD "Password";
5) To apply a security filter:
APPLY SECURITY FILTER "<filter_name>" [FOLDER "<folder_path>"] TO ([USER] "<login_name>" | [USER] GROUP "<group_name>") ON [PROJECT] "<project_name>";
6) To invoke procedures:
EXECUTE PROCEDURE Delete_Users_In_Group("<group_name>");
Performance Tuning Techniques
- Use Where clause to filter rows
- Use Table joins rather than multiple Queries
- Use case expressions rather than multiple Queries
- Add Indexes to Tables
- Use Where clause rather than Having
- Use Union All Rather than Union
- Use EXISTS rather than IN and DISTINCT
- Take the MicroStrategy-generated SQL, run it in the database, and see how long it takes.
- Check for cross joins in the report; any cross join will hurt performance.
- Check the number of passes the report generates. Take each SQL pass and identify the one that runs for a long time.
- Verify that all required filters are applied in every SQL pass.
- Check the report result volume and see whether it is very large.
- Check the join order; sometimes the join order causes performance issues.
- Get assistance from the DBA to see whether any database indexes would help query performance.
- Creating aggregate tables can resolve performance issues.
- Make sure table statistics are current; poor performance investigated in MSTR sometimes turns out to be caused by stale table statistics.
- Run EXPLAIN on long-running queries so the database can choose the best access path.
- Look into using caches and Intelligent Cubes. If you have reports that are non-prompted or commonly subscribed, set up a job to cache them first.
- Avoid using custom groups.
- Study your query and consult DBAs about long-running queries to see if there are better ways they could be written. A common example is a Date attribute that is defined in the database as datetime but really has no time component, and is defined in MicroStrategy as Date. Because of this, the query will apply Convert() to the date when joining; in a SQL Server environment, that causes some slowdown. In this case, you can edit the Column Alias of the attribute to be Date instead of DateTime to get the same results, faster.
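A few of the query rewrites above can be demonstrated against an in-memory SQLite database. The tables and data are invented for illustration, and the relative performance gain depends on your warehouse and its optimizer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, cust INTEGER)")
cur.execute("CREATE TABLE customers (cust INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10), (2, 10), (3, 20)])
cur.executemany("INSERT INTO customers VALUES (?)", [(10,), (30,)])

# EXISTS instead of IN: the database can stop at the first matching
# row per candidate instead of materializing the whole IN-list.
cur.execute("""
    SELECT id FROM orders o
    WHERE EXISTS (SELECT 1 FROM customers c WHERE c.cust = o.cust)
""")
print(cur.fetchall())  # [(1,), (2,)]

# UNION ALL instead of UNION: skips the duplicate-elimination step.
# Use it when the inputs cannot overlap or duplicates are acceptable.
cur.execute("SELECT cust FROM orders UNION ALL SELECT cust FROM customers")
print(len(cur.fetchall()))  # 5 rows; plain UNION would return 3
```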
By tuning different settings in VLDB:
- Subquery type:
If the report SQL contains subqueries, they can be tuned with the Subquery Type option under Query Optimization in the VLDB properties, which can eliminate the subqueries. In particular, the "Use Temporary Table" option is available; for reports with several relationship filters, temporary-table syntax may execute significantly faster on the database.
- SQL global optimization:
If the SQL contains unnecessary CASE statements, the SQL Global Optimization VLDB property can eliminate them, along with unused and duplicate SQL passes. The optimization levels are:
• Level 0: No optimization
• Level 1: Remove Unused and Duplicate Passes
• Level 2: Level 1 + Merge Passes with Different SELECT
In MicroStrategy SQL Generation Engine 9.0, two more levels of optimization exist:
• Level 3: Level 2 + Merge Passes with Different WHERE, for passes that hit only DB tables
• Level 4: Level 2 + Merge Passes with Different WHERE
- WHERE clause driving table
WHERE clause driving table is used to control which table the engine should use to apply the filter (WHERE clause). This setting is used as an optimization for some databases which perform better when columns coming from smaller tables are used in the where clause.
By default, for all databases, the MicroStrategy Engine applies the filter to the fact table. Applying the filter to the lookup table instead can make data retrieval faster.
- JOIN type:
We can switch the join type between Join89 and Join92.
Join 89 was the earlier standard wherein the joins are performed in the WHERE clause.
Join 92 is the later standard wherein the joins are performed in the FROM clause.
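The two standards produce equivalent results; only where the join condition lives differs. A sketch against SQLite, which accepts both syntaxes (the fact and lookup tables are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fact (day_id INTEGER, sales INTEGER)")
cur.execute("CREATE TABLE lu_day (day_id INTEGER, day_date TEXT)")
cur.executemany("INSERT INTO fact VALUES (?, ?)", [(1, 100), (2, 200)])
cur.executemany("INSERT INTO lu_day VALUES (?, ?)",
                [(1, "2024-01-01"), (2, "2024-01-02")])

# Join89 style: tables listed in FROM, join condition in WHERE.
join89 = cur.execute("""
    SELECT d.day_date, f.sales
    FROM fact f, lu_day d
    WHERE f.day_id = d.day_id
    ORDER BY f.day_id
""").fetchall()

# Join92 style: explicit JOIN ... ON in the FROM clause.
join92 = cur.execute("""
    SELECT d.day_date, f.sales
    FROM fact f JOIN lu_day d ON f.day_id = d.day_id
    ORDER BY f.day_id
""").fetchall()

print(join89 == join92)  # True
```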
- Intermediate Tables:
Controls the type of intermediate tables that are created during report execution.
This VLDB property specifies the type of intermediate storage structures the SQL Engine creates. If the database supports true temporary tables, the True Temporary Table option is the default setting; otherwise, the Permanent Table option is the default.
The possible values for this VLDB property are:
• Permanent table
• Derived table
• Common table expression
• True temporary table
• Temporary view
Dataset Optimizations/Performance Tuning
- Reduce the Number of Datasets
- Remove Unused Datasets, Metrics, and Attributes
- Use In Memory Cubes as Datasets
- Turn on Caching
- Define Dataset Joins
- Use Filter Selectors and Prompts
- Use Appropriate Number of Thresholds
- Use Textboxes Over Small Grids
- Reduce the Number of Graphs, Grids, or Advanced Visualizations
- Convert Multiple Grids and Graphs into a Single Advanced Visualization
- Reduce Number of Nested Panels
- Enable Drilling or Links
- A selector with many items (for example, buttons or check boxes) increases the time it takes for the dashboard to execute. For example, if you increase the number of items by a factor of ten, server execution times can increase by up to 50%. In essence, a larger number of items translates into a larger dataset.
- Flash Mode provides better performance when selectors have many targets,
that is, the Grid/Graphs and/or panel stacks affected by the selectors.
- A selector that controls attributes displayed on a Grid/Graph performs faster than a selector that controls attributes displayed on a panel stack.
- Nesting panel stacks (that is, placing a panel stack on a panel) increases
client rendering time. To reduce that time, include data in both panel stacks,
not just the nested panel stack.
- In Flash Mode, after the dashboard is initially loaded,
manipulations such as choosing a selector item are executed on the client machine.
In contrast, such manipulations in Interactive Mode send additional requests from
Web Server to Intelligence Server. Since Flash Mode uses minimal server resources
after the initial load is complete, system overhead is reduced for multiple users
concurrently manipulating their dashboards.
Therefore, Flash Mode has faster response times for manipulations,
regardless of the number of users accessing the dashboard.
However, these same users must accept longer document execution times due
to the initial loading of Flash.
- Graphs perform better in Flash Mode than in Editable Mode and Interactive Mode.
- Use a selector as a slice instead of a filter wherever appropriate.