Query Performance Tuning

by Alberto Ferrari, Chris Webb, and Marco Russo | February 2014 | Enterprise Articles

In this article by Chris Webb, Alberto Ferrari, and Marco Russo, authors of Expert Cube Development with SSAS Multidimensional Models, we will learn about the steps that you'll need to go through in order to ensure your cube is as responsive as possible.

Understanding how Analysis Services processes queries

We need to understand what happens inside Analysis Services when a query is run. The two major parts of the Analysis Services engine are:

  • The Formula Engine: This part processes MDX queries, works out what data is needed to answer them, requests that data from the Storage Engine, and then performs all calculations needed for the query.
  • The Storage Engine: This part handles all the reading and writing of data, for example, during cube processing and fetching all the data that the Formula Engine requests when a query is run.

When you run an MDX query, then, that query goes first to the Formula Engine, then to the Storage Engine, and then back to the Formula Engine before the results are returned to you.

Performance tuning methodology

When tuning performance there are certain steps you should follow to allow you to measure the effect of any changes you make to your cube, its calculations or the query you're running:

  • Always test your queries in an environment that is identical to your production environment, wherever possible. Otherwise, ensure that the size of the cube and the server hardware you're running on are at least comparable, and that you're running the same build of Analysis Services.
  • Make sure that no-one else has access to the server you're running your tests on. You won't get reliable results if someone else starts running queries at the same time as you.
  • Make sure that the queries you're testing with are equivalent to the ones that your users want to have tuned. As we'll see, you can use Profiler to capture the exact queries your users are running against the cube.
  • Whenever you test a query, run it twice; first on a cold cache, and then on a warm cache. Make sure you keep a note of the time each query takes to run and what you changed on the cube or in the query for that run.

Clearing the cache is a very important step—queries that run for a long time on a cold cache may be instant on a warm cache. When you run a query against Analysis Services, some or all of the results of that query (and possibly other data in the cube, not required for the query) will be held in cache so that the next time a query is run that requests the same data it can be answered from cache much more quickly. To clear the cache of an Analysis Services database, you need to execute a ClearCache XMLA command.

To do this in SQL Management Studio, open up a new XMLA query window and enter the following:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Adventure Works DW 2012</DatabaseID>
    </Object>
  </ClearCache>
</Batch>

Remember that the ID of a database may not be the same as its name—you can check this by right-clicking on a database in the SQL Management Studio Object Explorer and selecting Properties. Alternatives to this method also exist: the MDX Studio tool allows you to clear the cache with a menu option, and the Analysis Services Stored Procedure Project (http://tinyurl.com/asstoredproc) contains code that allows you to clear the Analysis Services cache and the Windows File System cache directly from MDX. Clearing the Windows File System cache is interesting because it allows you to compare the performance of the cube on a warm and cold file system cache as well as a warm and cold Analysis Services cache. When the Analysis Services cache is cold or can't be used for some reason, a warm filesystem cache can still have a positive impact on query performance.
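
If you want to clear the cache for just one cube rather than the whole database, the ClearCache command can also target a cube object. The following is a minimal sketch assuming the Adventure Works sample database and cube IDs; substitute the IDs from your own project:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Adventure Works DW 2012</DatabaseID>
      <CubeID>Adventure Works</CubeID>
    </Object>
  </ClearCache>
</Batch>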

After the cache has been cleared, before Analysis Services can answer a query it needs to recreate the calculated members, named sets, and other objects defined in a cube's MDX Script. If you have any reasonably complex named set expressions that need to be evaluated, you'll see some activity in Profiler relating to these sets being built, and it's important to be able to distinguish between this and activity that's related to the queries you're actually running. All MDX Script-related activity occurs between the Execute MDX Script Begin and Execute MDX Script End events; these are fired after the Query Begin event but before the Query Cube Begin event for the first query run after the cache has been cleared, and there is one pair of Begin/End events for each command on the MDX Script. When looking at a Profiler trace you should either ignore everything between the first Execute MDX Script Begin event and the last Execute MDX Script End event, or run a query that returns no data at all to trigger the evaluation of the MDX Script before you run the query you want to test, for example:

SELECT {} ON 0 FROM [Adventure Works]

Designing for performance

Many of the recommendations for designing cubes will improve query performance, and in fact the performance of a query is intimately linked to the design of the cube it's running against. For example, dimension design, especially optimizing attribute relationships, can have a significant effect on the performance of all queries—at least as much as any of the optimizations described in this article. As a result, we recommend that if you've got a poorly performing query the first thing you should do is review the design of your cube to see if there is anything you could do differently. There may well be some kind of trade-off needed between usability, manageability, time-to-develop, overall 'elegance' of the design and query performance, but since query performance is usually the most important consideration for your users, it will usually take precedence. To put it bluntly, if the queries your users want to run don't run fast, your users will not want to use the cube at all!

Performance-specific design features

Once you're sure that your cube design is as good as you can make it, it's time to look at two features of Analysis Services that are transparent to the end user, but have an important impact on performance and scalability: measure group partitioning and aggregations. Both of these features relate to the Storage Engine and allow it to answer requests for data from the Formula Engine more efficiently.

Partitions

A partition is a data structure that holds some or all of the data held in a measure group. When you create a measure group, by default that measure group contains a single partition that contains all of the data. Enterprise Edition and BI Edition of Analysis Services allow you to divide a measure group into multiple partitions; Standard Edition is limited to one partition per measure group, and the ability to partition is one of the main reasons why you would want to use Enterprise Edition or BI Edition over Standard Edition.

Why partition?

Partitioning brings two important benefits: better manageability and better performance. Partitions within the same measure group can have different storage modes and different aggregation designs, although in practice they usually don't differ in these respects; more importantly they can be processed independently, so for example when new data is loaded into a fact table, you can process only the partitions that should contain the new data. Similarly, if you need to remove old or incorrect data from your cube, you can delete or reprocess a partition without affecting the rest of the measure group.

Partitioning can also improve both processing performance and query performance significantly. Analysis Services can process multiple partitions in parallel and this can lead to much more efficient use of CPU and memory resources on your server while processing is taking place. Analysis Services can also fetch and aggregate data from multiple partitions in parallel when a query is run, and again this can lead to more efficient use of CPU and memory and result in faster query performance. Lastly, Analysis Services will only scan the partitions that contain data necessary for a query, and since this reduces the overall amount of IO needed, it can also make queries much faster.

Building partitions

You can view, create, and delete partitions on the Partitions tab of the Cube Editor in SSDT. When you run the 'New Partition' wizard or edit the Source property of an existing partition, you'll see you have two options for controlling what data is used in the partition:

  • Table Binding: This means that the partition contains all of the data in a table or view in your relational data source, or a named query defined in your DSV. You can choose the table you wish to bind to on the Specify Source Information step of the New Partition wizard, or in the Partition Source dialog if you choose 'Table Binding' from the Binding Type dropdown box.
  • Query Binding: This allows you to specify a SQL SELECT statement to filter the rows you want from a table; SSDT will automatically generate part of the SELECT statement for you, and all you'll need to do is supply the WHERE clause. If you're using the New Partition wizard, this is the option that will be chosen if you check the 'Specify a query to restrict rows' checkbox on the second step of the wizard; in the Partition Source dialog you can choose this option from the Binding Type dropdown box.

It might seem like query binding is the easiest way to filter your data, and while it's the most widely-used approach it does have one serious shortcoming. Since it involves hard-coding a SQL SELECT statement into the definition of the partition, any change to your fact table, such as the deletion or renaming of a column, can mean the SELECT statement produces an error when it is run, and this means the partition processing will fail. If you have a lot of partitions in your measure group – and it's not unusual to have over one hundred partitions on a large cube – altering the query used for each one is somewhat time-consuming. Instead, binding a partition to a view in your relational database will make this kind of maintenance much easier, although you do of course now need to create one view for each partition.
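
To make the difference concrete, here is a sketch of what the two approaches might look like for a partition holding calendar year 2003 data. The table and column names (dbo.FactInternetSales, OrderDateKey) follow the Adventure Works DW naming conventions and are only assumptions; adapt them to your own schema:

-- Query binding: SSDT generates the SELECT list, you supply the WHERE clause
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey BETWEEN 20030101 AND 20031231

-- Table binding to a view: the same filter lives in the relational database,
-- so renaming a column only means fixing the view, not every partition definition
CREATE VIEW dbo.vwFactInternetSales2003 AS
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey BETWEEN 20030101 AND 20031231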

It's very important that you check the queries you're using to filter your fact table for each partition. If the same fact table row appears in more than one partition, or if fact table rows don't appear in any partition, this will result in your cube displaying incorrect measure values.

On the Processing and Storage Locations step of the wizard you have the chance to create the partition on a remote server instance, functionality that is called Remote Partitions. This is one way of scaling out Analysis Services: you can have a cube and measure group on one server but store some of the partitions for the measure group on a different server, something like a linked measure group but at a lower level. It can be useful for improving processing performance in situations when you have a very small time window available for processing but in general we recommend that you do not use remote partitions. They have an adverse effect on query performance and they make management of the cube (especially backup) very difficult.

Also, on the same step you have the chance to store the partition at a location other than the default of the Analysis Services data directory. Spreading your partitions over more than one volume may make it easier to improve the IO performance of your solution, although again it can complicate database backup and restore.

After assigning an aggregation design to the partition (we'll talk about aggregations in detail next), the last important property to set on a partition is Slice. The Slice property takes the form of an MDX member, set or tuple – MDX expressions returning members, sets or tuples are not allowed, however – and indicates what data is present in a partition. While you don't have to set the Slice property, we strongly recommend that you do so for ROLAP partitions, and even for MOLAP partitions, for the following reasons:

  • While Analysis Services does automatically detect what data is present in a partition during processing, it doesn't always work as well as you'd expect and can result in unwanted partition scanning taking place at query time in a number of scenarios. The following blog entry on the SQLCat team site explains why in detail (not all of the future improvements mentioned in that post have actually been implemented at the time of writing): http://tinyurl.com/partitionslicing.
  • It acts as a useful safety mechanism to ensure that you only load the data you're expecting into a partition. If, while processing, Analysis Services finds that data is being loaded into the partition that conflicts with what's specified in the Slice property, then processing will fail.

More details on how to set the Slice property can be found in Mosha Pasumansky's blog entry on the subject here: http://tinyurl.com/moshapartition.
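
As an illustration, for a partition like the Internet_Sales_2003 partition used in the examples below, the Slice could be a single member, or a tuple if you also partition by another dimension such as Country as suggested later. These member names are assumptions based on the Adventure Works Date and Geography dimensions and will differ in your own cube:

[Date].[Calendar Year].&[2003]

([Date].[Calendar Year].&[2003], [Geography].[Country].&[United States])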

Planning a partitioning strategy

We now know why we should be partitioning our measure groups and what to do to create a partition. The next question is: how should we split the data in our partitions? We need to find some kind of happy medium between the manageability and performance aspects of partitioning – we need to split our data so that we do as little processing as possible, but also so the division of data means that as few partitions are scanned as possible by our users' queries. Luckily, if we partition by our Time dimension we can usually meet both needs very well. It's usually the case that when new data arrives in a fact table it's for a single day, week, or month, and it's also the case that the most popular way of slicing a query is by a time period. Therefore, it's almost always the case that when measure groups are partitioned they are partitioned by Time. It's also worth considering, though, if it's a good idea to partition by Time and another dimension; for example, in an international company you might have a Geography dimension and a Country attribute, and users may always be slicing their queries by Country too – in which case it might make sense to partition by Country.

Measure groups that contain measures with the Distinct Count aggregation type require their own specific partitioning strategy. While you should still partition by Time, you should also partition by non-overlapping ranges of values within the column you're doing the distinct count on. A lot more detail on this is available in the following white paper: http://tinyurl.com/distinctcountoptimize.
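
As a rough sketch of what that white paper describes, the partition queries below split a single month both by Time and by non-overlapping ranges of CustomerKey (assuming CustomerKey is the column the distinct count is calculated over); the table, column names, and range boundaries are all assumptions for illustration only:

-- Partition 1: January 2003, CustomerKey 1-14999
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey BETWEEN 20030101 AND 20030131
  AND CustomerKey BETWEEN 1 AND 14999

-- Partition 2: January 2003, CustomerKey 15000-29999
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey BETWEEN 20030101 AND 20030131
  AND CustomerKey BETWEEN 15000 AND 29999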

It's worth looking at the distribution of data over partitions for dimensions we're not explicitly slicing by to see how well partition elimination will work for them. You can see the distribution of member data IDs (the internal key values that Analysis Services creates for all members on a hierarchy) for a partition by querying the Discover_Partition_Dimension_Stat DMV, for example:

SELECT *
FROM SystemRestrictSchema($system.Discover_Partition_Dimension_Stat
  ,DATABASE_NAME = 'Adventure Works DW 2008'
  ,CUBE_NAME = 'Adventure Works'
  ,MEASURE_GROUP_NAME = 'Internet Sales'
  ,PARTITION_NAME = 'Internet_Sales_2003')

The following screenshot shows what the results of this query look like:

There's also a useful Analysis Services stored procedure that shows the same data and any partition overlaps included in the Analysis Services Stored Procedure Project (a free, community-developed set of sample Analysis Services stored procedures): http://tinyurl.com/partitionhealth. This blog entry describes how you can take this data and visualize it in a Reporting Services report: http://tinyurl.com/partitionslice.

We also need to consider what size our partitions should be. In general between 5 and 60 million rows per partition, or up to around 3 GB, is a good size. If you have a measure group with a single partition of below 5 million rows then don't worry, it will perform very well and it's not worth dividing it into smaller partitions. It's equally possible to get good performance with larger partitions. It's best to avoid having too many partitions as well – if you have more than a thousand it may make SQL Management Studio and SSDT slow to respond, and it may be worth creating fewer, larger partitions, assuming these partitions stay within the size limits we've just given for a single partition.

A good reference for performance tuning your partitioning strategy is the SQL Server 2008 R2 Analysis Services Performance Guide, which can be downloaded from http://tinyurl.com/ssasR2perf.

Unexpected partition scans

Even when you have configured your partitions properly it's sometimes the case that Analysis Services will scan partitions that you don't expect it to be scanning for a particular query. If you see this happening, the first thing to determine is whether these extra scans are making a significant contribution to your query times. If they aren't, it's probably not worth worrying about; if they are, there are a number of things you can try to stop it from happening.

Aggregations

An aggregation is simply a pre-summarized data set, similar to the result of a SQL SELECT statement with a GROUP BY clause, that Analysis Services can use when answering queries. The advantage of having aggregations built in your cube is that it reduces the amount of aggregation that the Analysis Services Storage Engine has to do at query time, and building the right aggregations is one of the most important things you can do to improve query performance. Aggregation design is an ongoing process that should start once your cube and dimension designs have stabilized and which will continue throughout the lifetime of the cube as its structure and the queries you run against it change.

Creating an initial aggregation design

The first stage in creating an aggregation design should be to create a core set of aggregations that will be generally useful for most queries run against your cube. This should take place towards the end of the development cycle when you're sure that your cube and dimension designs are unlikely to change much, because any changes are likely to invalidate your aggregations and mean this step will have to be repeated. It can't be stressed enough that good dimension design is the key to getting the most out of aggregations. Removing unnecessary attributes, setting AttributeHierarchyEnabled to False where possible, building optimal attribute relationships and building user hierarchies will all make the aggregation design process faster, easier and more effective. You should also take care to update the EstimatedRows property of each measure group and partition, and the EstimatedCount of each attribute before you start, as these values are also used by the aggregation design process. BIDS Helper adds a very useful button to the toolbar in the Partitions tab of the Cube Editor which will update all of these count properties with one click. Remember that if you are developing against a small subset of your data you will have to enter these count values manually, so that the values are similar to what they would be if you were using a full-sized production database.
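
If you do have to enter the counts manually, straightforward SQL queries against the full-sized relational tables will give you the numbers to type into the EstimatedRows and EstimatedCount properties. This is just a sketch; the Adventure Works table and column names are assumptions:

-- A value for a measure group's EstimatedRows property
SELECT COUNT(*) AS FactRowCount
FROM dbo.FactInternetSales

-- A value for the EstimatedCount property of a Product Category attribute
SELECT COUNT(DISTINCT EnglishProductCategoryName) AS CategoryCount
FROM dbo.DimProductCategory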

To build this initial set of aggregations we'll be running the Aggregation Design Wizard. This can be run by clicking on the Design Aggregations button on the toolbar of the Aggregations tab of the Cube Editor. This wizard will analyze the structure of your cube and dimensions, look at various property values you've set, and try to come up with a set of aggregations that it thinks should be useful. The one key piece of information it doesn't have at this point is what queries you're running against the cube, so some of the aggregations it designs may not prove to be useful in the long run, but running the wizard is extremely useful for creating a first draft of your aggregation designs.

You can only design aggregations for one measure group at a time; if you have more than one partition in the measure group you've selected, the first step of the wizard asks you to choose which partitions you want to design aggregations for. An aggregation design can be associated with many partitions in a measure group, and a partition can be associated with just one aggregation design or none at all.

We recommend that, in most cases, you have just one aggregation design for each measure group for the sake of simplicity. However, if processing time is limited and you need to reduce the overall time spent building aggregations, or if query patterns are different for different partitions within the same measure group, it may make sense to apply different aggregation designs to different partitions.

The next step of the wizard asks you to review the AggregationUsage property of all the attributes on all of the cube dimensions in your cube; this property can also be set on the Cube Structure tab of the Cube Editor.

The following screenshot shows the Review Aggregation Usage step of the Aggregation Design Wizard:

The AggregationUsage property controls how dimension attributes are treated in the aggregation design process. The property can be set to the following values:

  • Full: This means that the attribute, or an attribute at a lower granularity directly related to it by an attribute relationship, will be included in every single aggregation the wizard builds. We recommend that you use this value sparingly, for at most one or two attributes in your cube, because it can significantly reduce the number of aggregations that get built. You should set it for attributes that will almost always get used in queries. For example, if the vast majority of your queries are at the month granularity it makes sense that all of your aggregations include the Month attribute from your Time dimension.
  • None: This means that the attribute will not be included in any aggregation that the wizard designs. Don't be afraid of using this value for any attributes that you don't think will be used often in your queries; it can be a useful way of ensuring that the attributes that are used often get good aggregation coverage.

Note that attributes with AttributeHierarchyEnabled set to False will have no aggregations designed for them anyway.

  • Unrestricted: This means that the attribute may be included in the aggregations designed, depending on whether the algorithm used by the wizard considers it to be useful or not.
  • Default: The default option applies a complex set of rules, which are:
    • The granularity attribute (usually the key attribute, unless you specified otherwise in the Dimension Usage tab) is treated as Unrestricted.
    • All attributes on dimensions involved in many-to-many relationships, unmaterialized referenced relationships, and data mining dimensions are treated as None. Aggregations may still be built at the root granularity, that is, the intersection of the All Members on every attribute.
    • All attributes that are used as levels in natural user hierarchies are treated as Unrestricted.
    • Attributes with IsAggregatable set to False are treated as Full.
    • All other attributes are treated as None.

The next step in the wizard asks you to verify the values of the EstimatedRows and EstimatedCount properties we've already talked about, and gives the option of setting a similar property that shows the estimated number of members from an attribute that appear in any one partition. This can be an important property to set; if you are partitioning by Month, although you may have 36 members on your Month attribute, a partition will only contain data for one of them.

On the Set Aggregation Options step you finally reach the point where some aggregations can be built. Here you can apply one last set of restrictions on the set of aggregations that will be built, by choosing one of the following options:

  • Estimated Storage Reaches: This means you build aggregations to fill a given amount of disk space.
  • Performance Gain Reaches: This is the most useful option. It does not mean that all queries will run n percent faster, nor does it mean that a query that hits an aggregation directly will run n percent faster. Think of it like this: if the wizard built all the aggregations it thought were useful to build (note: this is not the same thing as all of the possible aggregations that could be built on the cube) then, in general, performance would be better.

    Some queries would not benefit from aggregations, some would be slightly faster, and some would be a lot faster; some aggregations would be more often used than others. So if you set this property to 100% the wizard would build all the aggregations that it could, and you'd get 100% of the performance gain possible from building aggregations. Setting this property to 30%, the default and recommended value, will build the aggregations that give you 30% of this possible performance gain – not 30% of the possible aggregations, usually a much smaller number. As you can see from the following screenshot, the graph drawn on this step plots the size of the aggregations built versus overall performance gain, and the shape of the curve shows that a few, smaller aggregations usually provide the majority of the performance gain.

  • I Click Stop: This means carry on building aggregations until you click on the Stop button. Designing aggregations can take a very long time, especially on more complex cubes because there may literally be millions or billions of possible aggregations that could be built. In fact, it's not unheard of for the aggregation design wizard to run for several days before it's stopped!
  • Do Not Design Aggregations: This allows you to skip designing aggregations.

The approach we suggest taking here is to first select I Click Stop and then click on the Start button. On some measure groups this will complete very quickly, with only a few small aggregations built. If that's the case click on Next; otherwise, if it's taking too long or too many aggregations are being built, click on Stop and then Reset, and then select Performance Gain Reaches and enter 30% and Start again. This should result in a reasonable selection of aggregations being built; in general around 50-100 aggregations is the maximum number you should be building for a measure group, and if 30% leaves you short of this try increasing the number by 10% until you feel comfortable with what you get.

In the final step of the wizard, enter a name for your aggregation design and save it. It's a good idea to give the aggregation design a name including the name of the measure group to make it easier to find if you ever need to script it to XMLA.

It's quite common that Analysis Services cube developers stop thinking about aggregation design at this point. This is a serious mistake; just because you have run the Aggregation Design Wizard does not mean you have built all the aggregations you need, or indeed any useful ones at all! Doing Usage-Based Optimization and/or building aggregations manually is absolutely essential.

Usage-Based Optimization

We now have some aggregations designed, but the chances are that despite our best efforts many of them will not prove to be useful. To a certain extent we might be able to pick out these aggregations by browsing through them; really, though, we need to know what queries our users are going to run before we can build aggregations to make them run faster. This is where Usage-Based Optimization comes in; it allows us to log the requests for data that Analysis Services makes when a query is run and then feed this information into the aggregation design process.

To be able to do Usage-Based Optimization, you must first set up Analysis Services to log these requests for data. This involves specifying a connection string to a relational database in the server properties of your Analysis Services instance and allowing Analysis Services to create a log table in that database. The white paper "Configuring the Analysis Services Query Log" contains more details on how to do this (it's written for Analysis Services 2005 but is still relevant for Analysis Services 2012), and can be downloaded from http://tinyurl.com/ssasquerylog.

The query log is a misleading name because, as you'll see if you look inside it, it doesn't actually contain the text of the MDX queries run against the cube. When a user runs an MDX query, Analysis Services decomposes it into a set of requests for data at a particular granularity and it's these requests that are logged. A single query can result in no requests for data, or it can result in as many as hundreds or thousands of requests, especially if it returns a lot of data and a lot of MDX calculations are involved. When setting up the log you also have to specify the percentage of all data requests that Analysis Services actually logs with the QueryLogSampling property – in some cases if it logged every single request you would end up with a very large amount of data very quickly, but on the other hand, if you set this value too low you may end up not seeing certain important long-running requests. We recommend that you start by setting this property to 100, but that you monitor the size of the log closely and reduce the value if you find that the number of requests logged is too high.
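
To keep an eye on how much is being logged, you can query the log table directly from the relational database. By default the table Analysis Services creates is called OlapQueryLog and it includes Dataset, StartTime and Duration columns, but check the table in your own logging database; this sketch assumes the default names:

-- How many requests are being logged per day, and the slowest request each day
SELECT CAST(StartTime AS date) AS LogDate,
       COUNT(*) AS RequestsLogged,
       MAX(Duration) AS MaxDurationMs
FROM dbo.OlapQueryLog
GROUP BY CAST(StartTime AS date)
ORDER BY LogDate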

Once the log has been set up, let your users start querying the cube. Explain to them what you're doing and that some queries may not perform well at this stage. Given access to a new cube it will take them a little while to understand what data is present and what data they're interested in; if they're new to Analysis Services it's also likely they'll need some time to get used to whatever client tool they're using.

Therefore, you'll need to have logging enabled for at least a month or two before you can be sure that your query log contains enough useful information. Remember that if you change the structure of the cube while you're logging, the existing contents of the log will no longer be usable.

Last of all, you'll need to run the Usage-Based Optimization Wizard to build new aggregations using this information. The Usage-Based Optimization Wizard is very similar to the Design Aggregations Wizard, with the added option to filter the information in the query log by date, user, and query frequency before it's used to build aggregations. It's a good idea to do this filtering carefully; you should probably exclude any queries you've run yourself, for example, since they're unlikely to be representative of what the users are doing, and make sure that the most important users' queries are over-represented.

Once you've done this you'll have a chance to review what data is actually going to be used before you actually build the aggregations.

On the last step of the wizard you have the choice of either creating a new aggregation design or merging the aggregations that have just been created with an existing aggregation design. We recommend the latter. What you've just done is optimize queries that ran slowly on an existing aggregation design, and if you abandon the aggregations you've already got, it's possible that queries which previously had been quick would be slow afterwards.

This exercise should be repeated at regular intervals throughout the cube's lifetime to ensure that you build any new aggregations that are necessary as the queries that your users run change. Query logging can, however, have an impact on query performance so it's not a good idea to leave logging running all the time.

Processing aggregations

When you've created or edited the aggregations on one or more partitions, you don't need to do a full process on the partitions. All you need to do is deploy your changes and then run a ProcessIndex, which is usually fairly quick, and once you've done that queries will be able to use the new aggregations. When you run a ProcessIndex, Analysis Services does not need to run any SQL queries against the relational data source if you're using MOLAP storage.
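
You can run this from the processing dialog in SQL Management Studio by choosing the Process Index option, or script it as an XMLA Process command with the ProcessIndexes type. The following is a minimal sketch assuming the same Adventure Works object names used earlier; the IDs must match those in your own database:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>Adventure Works DW 2012</DatabaseID>
      <CubeID>Adventure Works</CubeID>
      <MeasureGroupID>Internet Sales</MeasureGroupID>
      <PartitionID>Internet_Sales_2003</PartitionID>
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>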

Monitoring partition and aggregation usage

Having created and configured your partitions and aggregations, you'll naturally want to be sure that when you run a query, Analysis Services is using them as you expect. You can do this very easily by running a trace with SQL Server Profiler.

To use Profiler, start it and then connect to your Analysis Services instance to create a new trace. On the Trace Properties dialog choose the Blank template and go to the Events Selection tab and check the following:

  • Progress Reports\Progress Report Begin
  • Progress Reports\Progress Report End
  • Queries Events\Query Begin
  • Queries Events\Query End
  • Query Processing\Execute MDX Script Begin
  • Query Processing\Execute MDX Script End
  • Query Processing\Query Cube Begin
  • Query Processing\Query Cube End
  • Query Processing\Get Data From Aggregation
  • Query Processing\Query Subcube Verbose
  • Query Processing\Resource Usage

Then clear the cache and click on Run to start the trace.

Analysis Services 2012 also includes the ability to view information on MDX query plans through Profiler. The following blog post gives more details on how to do this: http://tinyurl.com/evalnodes, but the truth is that the information available here is so difficult to interpret that it is almost useless for any practical purpose.

Once you've done this you can either open up your Analysis Services client tool or you can start running MDX queries in SQL Management Studio. When you do this you'll notice that Profiler starts to show information about what Analysis Services is doing internally to answer these queries.

Interpreting the results of a Profiler trace is a complex task, but it's relatively easy to pick out some useful information relating to aggregation and partition usage. Put simply:

  • The Query Subcube Verbose events represent individual requests for data from the Formula Engine to the Storage Engine, which can be answered either from cache, an aggregation or base-level partition data. Each of these requests is at a single granularity, meaning that all of the data in the request comes from a single distinct combination of attributes; we refer to these granularities as 'subcubes'. The TextData column for this event shows the granularity of data that is being requested in human readable form; the Query Subcube event will display exactly the same data but in the less friendly format that the Usage-Based Optimization Query Log uses.
  • Pairs of Progress Report Begin and Progress Report End events show that data is being read from disk, either from an aggregation or a partition. The TextData column gives more information, including the name of the object being read; however, if you have more than one object (for example, an aggregation) with the same name, you need to look at the contents of the ObjectPath column to see what object exactly is being queried.
  • The Get Data From Aggregation event is fired when data is read from an aggregation, in addition to any Progress Report events.
  • The Duration column shows how long each of these operations takes in milliseconds.
  • The Resource Usage event is fired at the end of query execution and gives a summary of the number of disk read operations for the query, the number of rows scanned, and the number of rows returned by the Storage Engine.

At this point in the cube optimization process you should be seeing in Profiler that when your users run queries they hit as few partitions as possible and hit aggregations as often as possible. If you regularly see queries that scan all the partitions in your cube or which do not use any aggregations at all, you should consider going back to the beginning of the process and rethinking your partitioning strategy and rerunning the aggregation design wizards. In a production system many queries will be answered from cache and therefore be very quick, but you should always try to optimize for the worst-case scenario of a query running on a cold cache.

Building aggregations manually

However good the aggregation designs produced by the wizards are, it's very likely that at some point you'll have to design aggregations manually for particular queries. Even after running the Usage-Based Optimization Wizard you may find that it still does not build some potentially useful aggregations. The algorithm the wizards use is very complex and something of a black box, so for whatever reason (perhaps because it thinks it would be too large) it may decide not to build an aggregation that, when built manually, turns out to have a significant positive impact on the performance of a particular query.

Before we can build aggregations manually we need to work out which aggregations we need to build. To do this, we once again need to use Profiler and look at either the Query Subcube or the Query Subcube Verbose events. These events, remember, display the same thing in two different formats - requests for data made to the Analysis Services storage engine during query processing - and the contents of the Duration column in Profiler will show how long in milliseconds each of these requests took. A good rule of thumb is that any Query Subcube event that takes longer than half a second (500 ms) would benefit from having an aggregation built for it; you can expect that a Query Subcube event that requests data at the same granularity as an aggregation will execute almost instantaneously.

The following screenshot shows an example of a trace on an MDX query that takes 700 ms:

The single Query Subcube Verbose event is highlighted, and we can see that the duration of this event is the same as that of the query itself, so if we want to improve the performance of the query we need to build an aggregation for this particular request. Also, in the lower half of the screen we can see the contents of the TextData column displayed. This shows a list of all the dimensions and attributes from which data is being requested – the granularity of the request – and the simple rule to follow here is that whenever you see anything other than a zero by an attribute, the granularity of the request includes this attribute. We need to make a note of all of the attributes that have anything other than a zero next to them and then build an aggregation using them; in this case it's just the Category attribute of the Product dimension.

Now that we know what aggregation we need to build, we can go ahead and build it. We have a choice of tools to do this; we can either use the functionality built into SSDT, or we can use some of the excellent functionality that BIDS Helper provides. In SSDT, to view and edit aggregations, you need to go to the Aggregations tab in the Cube Editor. On the Standard view you only see a list of partitions and which aggregation designs they have associated with them; if you switch to the Advanced view by pressing the appropriate button on the toolbar, you can view the aggregations in each aggregation design for each measure group. If you right-click in the area where the aggregations are displayed you can also create a new aggregation, and once you've done that you can specify the granularity of the aggregation by checking and unchecking the attributes on each dimension. For our particular query we only need to check the box next to the Category attribute, as follows:

The small tick at the top of the list of dimensions in the Status row shows that this aggregation has passed the built-in validation rules that SSDT applies to make sure this is a useful aggregation. If you see an amber warning triangle here, hover over it with your mouse and, in the tooltip, you'll see a list of reasons why the aggregation has failed its status check.

If we then deploy and run a ProcessIndex, we can rerun our original query and watch it use the new aggregation, running much faster as a result:

The problem with the native SSDT aggregation design functionality is that it becomes difficult to use when you have complex aggregations to build and edit. The functionality present in BIDS Helper, while it looks less polished, is far more useable and offers many benefits over the SSDT native functionality, for example:

  • The BIDS Helper Aggregation Design interface displays the aggregation granularity in the same way (that is using 1s and 0s, as seen in the following screenshot) as the Query Subcube event in Profiler does, making it easier to cross reference between the two.
  • It also shows attribute relationships when it displays the attributes on each dimension when you're designing an aggregation, as seen on the right-hand side in the following screenshot. This is essential to being able to build optimal aggregations.
  • It also shows whether an aggregation is rigid or flexible.
  • It has functionality to remove duplicate aggregations and ones containing redundant attributes, and search for similar aggregations.
  • It allows you to create new aggregations based on the information stored in the Query Log.
  • It also allows you to delete unused aggregations based on information from a Profiler trace.
  • Finally, it has some very comprehensive functionality to allow you to test the performance of the aggregations you build (see http://tinyurl.com/testaggs).

Unsurprisingly, if you need to do any serious work designing aggregations manually we recommend using BIDS Helper over the built-in functionality.

Summary

In this article, we've learned how to optimize the performance of our cube so our users' queries execute as quickly as possible. We've looked at how Analysis Services processes queries and what roles the Storage Engine and the Formula Engine play in this; we've seen how building partitions and aggregations can improve the performance of the Storage Engine.

About the Authors:


Alberto Ferrari

Alberto Ferrari (alberto.ferrari@sqlbi.com) is a consultant and trainer who specializes in Business Intelligence with the Microsoft BI stack. He spends half of his time consulting for companies who need to develop complex data warehouses, and the other half in training, book writing, conferences, and meetings. He is a SQL Server MVP and an SSAS Maestro.

He is a founder, with Marco Russo, of www.sqlbi.com, where they publish whitepapers and articles about SQL Server Analysis Services technology. He co-authored several books on SSAS and PowerPivot.

Chris Webb

Chris Webb (chris@crossjoin.co.uk) has been working with Microsoft Business Intelligence tools for 15 years in a variety of roles and industries. He is an independent consultant (www.crossjoin.co.uk) and trainer (www.technitrain.com) based in the UK, specializing in SQL Server Analysis Services, MDX, DAX, Power Pivot, and the whole Power BI stack. He is the co-author of MDX Solutions with Microsoft SQL Server Analysis Services 2005 and SQL Server Analysis Services 2012: The BISM Tabular Model. He is a regular speaker at user groups and conferences, and blogs about Microsoft BI at http://cwebbbi.wordpress.com/.

Marco Russo

Marco Russo is a Business Intelligence consultant and mentor. His main activities are related to data warehouse relational and multidimensional design, but he is also involved in the complete development lifecycle of a BI solution. He has particular competence and experience in sectors such as financial services (including complex OLAP designs in the banking area), manufacturing, gambling, and commercial distribution. Marco is also a book author, and in addition to his BI-related publications, he has authored books about .NET programming. He is also a speaker at international conferences such as TechEd, PASS Summit, SQLRally, and SQLBits. He achieved the unique SSAS Maestro certification and is also a Microsoft Certified Trainer with several Microsoft Certified Professional certifications.
