
Darren Gosbell [MVP] - Random Procrastination



Business Intelligence, SQL Server, .Net, Powershell and MDX Madness



Copyright: Darren Gosbell
 



ProcessAdd bug in AMO 2016

Sun, 13 Nov 2016 22:26:15 GMT

Originally posted on: http://darren.gosbell.com/archive/2016/11/14/201133.aspx

I saw the question below on the MSDN forum about ProcessAdd not working in AMO 2016. It sounded strange, so I did some investigation:

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/91785c40-a03f-4f46-abc1-9a51ed5e4283/processadd-using-amo-sql-2016-systemnotimplementedexception?forum=sqlanalysisservices

When I ran Redgate Reflector over the Microsoft.AnalysisServices.Core.dll I came across this little gem:

(image)


It checks whether the object is of type IQueryBinding from the Microsoft.AnalysisServices.Core namespace, and as far as I can see nothing currently implements this interface. This means that if you pass any of the built-in binding classes to the Process method, it will always throw a NotImplementedException.
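Based on that description, the check presumably boils down to something like the following (my paraphrase of the decompiled logic, not the actual Reflector output):

if (!(binding is Microsoft.AnalysisServices.Core.IQueryBinding))
{
    throw new NotImplementedException();
}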

I've posted a bug here: https://connect.microsoft.com/SQLServer/feedback/details/3110077 and apparently it's a known issue with a fix already in the pipeline. However, there is a relatively simple workaround, which involves creating a class that implements IQueryBinding in your own project.

The IQueryBinding interface is thankfully not that complicated and a full implementation is outlined below:

using System;
using System.ComponentModel; // for ISite

// A minimal implementation of IQueryBinding that can be passed to Partition.Process
public class MyBinding : Microsoft.AnalysisServices.Core.IQueryBinding
{
    public MyBinding(string dataSourceID, string queryDefinition)
    {
        DataSourceID = dataSourceID;
        QueryDefinition = queryDefinition;
    }

    public string DataSourceID { get; set; }
    public string QueryDefinition { get; set; }

    // ISite, Disposed and Dispose() are only needed to satisfy the interface;
    // no-op implementations are sufficient here
    public ISite Site { get; set; }
    public event EventHandler Disposed;
    public void Dispose() { }
}

You can then simply create a MyBinding instance and pass it in to the Process method for your partition:

var qb = new MyBinding("Adventure Works DW", "SELECT * FROM table");
partition.Process(ProcessType.ProcessAdd, qb);
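For context, here is a hedged sketch of the full flow in AMO (the server, database, cube, measure group and partition names below are placeholders of my own, as is the source query):

using Microsoft.AnalysisServices;

var server = new Server();
server.Connect("localhost");
try
{
    Database db = server.Databases.FindByName("Adventure Works DW");
    Partition partition = db.Cubes.FindByName("Adventure Works")
                            .MeasureGroups.FindByName("Internet Sales")
                            .Partitions.FindByName("Internet_Sales");

    // passing our own IQueryBinding implementation avoids the
    // NotImplementedException thrown for the built-in binding classes
    var qb = new MyBinding("Adventure Works DW", "SELECT * FROM NewFactRows");
    partition.Process(ProcessType.ProcessAdd, qb);
}
finally
{
    server.Disconnect();
}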




DAX Studio 2.5.0 Release

Wed, 19 Oct 2016 19:46:16 GMT

Originally posted on: http://darren.gosbell.com/archive/2016/10/20/dax-studio-2.5.0-release.aspx

The next version of DAX Studio has just been released. You can download this release and read the release notes here.

Note: In this release we have updated the versions of the Microsoft ADOMD.Net and AMO libraries we reference to use the SQL 2016 versions. This gives us scope to access some of the new functionality in Power BI and SQL Server 2016, but it may mean that you are prompted to download these libraries when you upgrade from a previous version.

Some of the highlights of this release are:

New Features

Added an option to trace Direct Query events
There is now an option under File > Options where you can enable extra events in Server Timings for Direct Query based models. These events add extra overhead, so you should only enable this option before you start tracing a Direct Query model, and you should disable it once you are finished.

Added dynamic syntax highlighting
Earlier versions of DAX Studio contained a hard-coded list of syntax highlighting keywords and functions. The lists of keywords and functions used for syntax highlighting are now dynamically discovered from the data source. This has advantages when dealing with Power BI in particular, which can get new functionality added from one month to the next.

Added rows and KB to the Server Timings tab
Analysis Services 2016 and Power BI have added information to the server timing events that includes the number of rows and the size of data returned by each of the timing events. If this information is found, it is now surfaced in the Server Timings tab.

Optimized DaxFormatter calls
The old version of the DaxFormatter.com API required a second call if there was an error, in order to find out the details of the error. The nice people at DaxFormatter.com have updated their API so that this is no longer necessary.

Added an option to specify the default separator style
In the 2.4 release we introduced an option to convert on demand between the 2 different separator styles, but all queries had to be executed using the UK/US style. The UK/US style uses a comma (,) as the list and thousands separator and a period (.) as the decimal separator, e.g.

    EVALUATE FILTER( 'Product' , 'Product'[List Price] > 1.25 )

The European/Other style uses a semi-colon (;) as the list separator, a period (.) as the thousands separator and a comma (,) as the decimal separator, e.g.

    EVALUATE FILTER( 'Product' ; 'Product'[List Price] > 1,25 )

Now you can choose which style you want to use as your default in the File > Options menu.

Added an error message when you attempt to open a .dax file that no longer exists
Prior to this version, if you clicked on an entry in your recent file list which pointed to a file that had been renamed or deleted, you would just get a blank window with no idea what went wrong. Now an error is posted to the output window telling you what went wrong.

Bug Fixes

  • Fixed a bug where server timing traces were also listening for query plan events
  • Fixed incorrect removal of square brackets from MDX results
  • Fixed a race condition that happened sometimes when trying to capture both Query Plans and Server Timings

[...]



DAX Studio 2.4.1 Released

Mon, 25 Jul 2016 20:37:23 GMT

Originally posted on: http://darren.gosbell.com/archive/2016/07/26/dax-studio-2.4.1-released.aspx

The next version of DAX Studio has just been released. You can download this release and read the release notes here.

Some of the highlights of this release are:

Toggle Delimiters
A toggle delimiters feature has been added to switch between regional settings styles; it swaps between using comma delimiters and semi-colon style delimiters. Currently queries still need to be executed using the "US" style comma delimiters, but if you have expressions using semi-colon delimiters you can easily switch back and forth. This can also be handy if you find an example expression online that uses different delimiters than what you have in your model.

Intellisense Improvements
A Function Insight tooltip has been added to intellisense; it appears after the opening bracket for a function and shows the parameters for the function as well as its description.

The loading and caching of intellisense data has also been vastly improved. The slight delay when starting to type a query against a large model should be gone, and you should no longer see issues when trying to edit a query in a window that is currently executing a query.

[...]



DAX - Creating a measure for Top 5 and Others

Sat, 04 Jun 2016 05:38:07 GMT

Originally posted on: http://darren.gosbell.com/archive/2016/06/04/179446.aspx

Often when you have a requirement to show a measure for the TopN of a given attribute, the users will want to see the subtotal of all the other items. With multi-dimensional cubes and MDX you can create calculated members in a dimension, but you can't do this in a tabular model using DAX. You could create a calculated column, but then your TopN would be fixed and could not respond dynamically to changes in filters, as calculated columns are calculated and stored at processing time. So one way to make this work is to have a placeholder row (or rows) with "Others" in the table which you are doing the TopN over.

The first step in this technique is to create a Rank measure (this saves having to re-type the rank logic multiple times). In this example I'm doing a top 5 of Brands.

Brand Rank := RANKX ( ALL ( Brand[BrandName] ), [Sales Amount] )

Then we create a Top5 measure that returns the sales amount if the rank is less than or equal to 5, returns the sum of all the brands ranked below 5 if the current brand is "Others", and otherwise returns BLANK().

Top5 :=
IF (
    [Brand Rank] <= 5,
    [Sales Amount],
    IF (
        HASONEVALUE ( Brand[BrandName] ),
        IF (
            VALUES ( Brand[BrandName] ) = "Others",
            SUMX ( FILTER ( ALL ( Brand[BrandName] ), [Brand Rank] > 5 ), [Sales Amount] )
        )
    )
)

Then if we create a pivot table with this measure we get the following result. In the picture below I'm showing the raw Sales Amount so you can see how the amounts get re-assigned in the Top5 calculation.

(image)

This also works as you change the pivot table. When we introduce the Segment on the rows we can show a figure for "Others" within each segment. (Note that for this to work I had to create 2 placeholder "Others" rows, one in each segment.) There is a link at the bottom of this post to the file that these screenshots were taken from.

At this point I'm pretty happy. It seems to be working well, although calculating ranks is rather expensive, so I'm wondering how it will perform against larger data sets. I know that there are definitely ways to make this faster if you are using the current generation of DAX engines (Excel 2016, Power BI or SQL 2016), as we have access to things like variables and the INTERSECT function which we could use in conjunction with TOPN (see the sketch at the end of this post). But I think there are also opportunities to make this faster with "v1" DAX.

Running this algorithm against the Adventure Works Product Name column results in a performance profile like the following (using the Server Timings feature of DAX Studio):

(image)

This shows 86% of the time being spent in the formula engine. I'm pretty sure that a fair bit of this time is spent calculating ranks greater than 5 for the "Others" calculation. So I thought it might be faster to do a TOPN in descending order and get everything except for the first 5.
The formula for that is the following:

Top5v2 :=
IF (
    [Brand Rank] <= 5,
    [Sales Amount],
    IF (
        HASONEVALUE ( Brand[BrandName] ),
        IF (
            VALUES ( Brand[BrandName] ) = "Others",
            SUMX (
                TOPN (
                    COUNTROWS ( ALL ( Brand[BrandName] ) ) - 5,
                    ALL ( Brand[BrandName] ),
                    [Sales Amount], 1
                ),
                [Sales Amount]
            )
        )
    )
)

The modified version of the calculation has a much better performance profile, taking about 1/3 of the time and with an almost 50/50 split between the Formula Engine and the Storage Engine.

You can download a sample workbook with these formulas from the link below:

https://onedrive.live.com/redir?resid=98546E1B65A78A74!14166&aut[...]
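For anyone on the current generation of engines, here is an untested sketch of the variables-based approach mentioned above, using VAR and EXCEPT (a close relative of the INTERSECT function) so that the set of top 5 brands is only computed once. This variant is my own illustration rather than a formula from the original post, so test it against your own model:

Top5v3 :=
VAR TopBrands =
    TOPN ( 5, ALL ( Brand[BrandName] ), [Sales Amount] )
RETURN
    IF (
        [Brand Rank] <= 5,
        [Sales Amount],
        IF (
            HASONEVALUE ( Brand[BrandName] ),
            IF (
                VALUES ( Brand[BrandName] ) = "Others",
                SUMX ( EXCEPT ( ALL ( Brand[BrandName] ), TopBrands ), [Sales Amount] )
            )
        )
    )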



DAX Studio 2.3.2 released

Tue, 03 Nov 2015 00:10:11 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/11/03/dax-studio-2.3.2-released.aspx

This latest release includes a lot of UI polish features, and we also now have support for SQL 2016 and Excel 2016, which is mainly an update to the installer plus some changes to the Query Plan and Server Timings features to deal with changes to the xmlSql that comes back in the trace events. Following the theory that a picture is worth a thousand words, below are screenshots of the changes in this release.

The File > Open menu now includes a list of recently opened files.

For performance testing, you can now set the Run button to always do a clear cache before executing a query. This mode is easily selectable using the new arrow menu on the Run button.

The modal dialogs all have updated styling and now include a shaded overlay so that the active portions of the screen are clearly visible.

An Options pane has been added to the File menu for setting global program options.

A Query History pane has been added which records all of the queries run by DAX Studio. If you have the Server Timings feature enabled, the server duration and the FE / SE timings are also tracked. You can double-click on an item in the query history to insert it back into the editor. This is great for performance tuning, as you can easily see which variation of the query was fastest and returned the expected number of rows, and then bring it back into the editor.

The metadata pane now loads asynchronously. In earlier versions the loading of the metadata pane was a blocking operation, and the user interface could go unresponsive for a short time while loading large models. Now the metadata is loaded on a background thread, so the interface remains responsive, and the pane that is updating is greyed out to indicate that the load is still in progress.

The new "Define Measure" feature, which is a right-click option on a calculated measure, is a great way to see how a measure was defined without opening up the whole model, or you can use it as a starting point to test some variations on the logic.

There are also a number of small bug fixes and tweaks, and a number of issues that were raised on CodePlex have been fixed (we always tag closed issues with the release they were fixed in).

[...]



DAX Studio 2.2.6 released

Thu, 23 Jul 2015 21:27:49 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/07/24/dax-studio-2.2.6-released.aspx

Just released an update to DAX Studio, bundling up the following fixes and enhancements:

  • Support for the upcoming release of Power BI Desktop
  • Dynamic ribbon resizing
  • Remembering of window size and location
  • Fixed display of hierarchies in Excel 2013
  • Fixed quoting of keywords in table names
  • Fixed an error when duplicate column names are returned in a query
  • Fixed an issue when changing databases when connected to a multi-dimensional server
  • Fixed an issue when connecting as a non-admin

I’m particularly happy with the dynamic resizing that is part of the Fluent.Ribbon control. It took a bit of effort to figure out, but it provides a much better experience than having to scroll the ribbon when the window size is reduced.

So on a wide screen display you still get the full ribbon in all its glory

(image)

Then on an older 1024px wide screen a few of the labels in the Edit and Find groups drop off

(image)

Then things dynamically shrink until the groups finally start to collapse into dropdown menus. So if you want to go split screen on a small display, everything still works even at 500px wide without needing to scroll the ribbon.

(image)




Announcing DAX Studio 2.2.0

Tue, 23 Jun 2015 20:58:25 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/06/24/announcing-dax-studio-2.2.0.aspx

Today we released DAX Studio 2.2, which includes a number of big enhancements. You can download the new release from http://daxstudio.codeplex.com/releases

Below is an outline of the new functionality you will find in this release.

Intellisense

DAX Studio now has intellisense support. It's a fairly basic implementation, but I think I've got it working fairly smoothly. Note that it's based on the best practice of always prefixing columns with the table name and never prefixing a measure with a table name. So if you type 'table name'[ you will get intellisense for all the columns in the 'table name' table, but if you just type [ you will be presented with a list of all the measures in the model.

The intellisense brings up table, column and measure metadata information from the current model, as well as functions and keywords, including the descriptions for functions. The function metadata comes from the DISCOVER_FUNCTIONS DMV, so it should accurately reflect the capabilities of the current data source (you will see some additional functions when connected to newer data sources such as Power BI Designer).

Tracing (Query Plans & Server Timings)

One of the most significant new features is support for the tracing features (Server Timings & Query Plans) against PowerPivot. This has involved a significant amount of work re-building the internal code which handles this, but the end effect is that the Query Plan and Server Timings buttons are now enabled when connected to a PowerPivot model.

We've also altered the layout of the server timings information. When reading the results I found I was always comparing Total Duration vs Total CPU and looking at the ratio of these two figures, so I've added a little "parallelism factor" under the Total CPU. I also found that I was looking to compare FE vs SE duration and the number of SE queries to the number of SE cache hits, so I've moved them onto the same lines.

Support for Multi-Dimensional SSAS Servers

Although it's a little bit of a novelty, DAX Studio now properly supports connecting to multi-dimensional servers. This kind of worked previously if the default database for your multi-dimensional instance only had a single cube, but threw an error if it had multiple cubes. I'm now injecting a CUBE= parameter into the connection string when connected to a multi-dimensional server, so this should no longer be an issue.

Support for Multiple Power BI Designer Instances

We now have support for multiple instances of Power BI Designer, so if you have more than one copy of the designer open you can see the names of the .pbix files that you have open and choose which one to connect to.

Highlighting of Unnatural Hierarchies

Unnatural hierarchies can have a negative impact on performance, so we highlight those in the metadata pane.

Bug Fixes

There have been a number of minor bugs fixed along the way, including a number that were not officially reported but were discovered during testing.

  • FIXED - Doubling up of backslash characters when using DAX Formatter
  • FIXED - Unable to connect to SSAS servers when not an administrator
  • FIXED - Changing the connection for a query window would sometimes throw an error
  • FIXED - Source database changing on metadata refresh when connected to a SSAS server
  • FIXED - Cannot connect to a PowerPivot model where the file path contains single quotes

Known Issues

The following issue, which was first identified in 2.1.1, still remains: "Linked Table" output for the Excel add-in only works for PowerPivot models if there is at least one table in the model that was created by linking data from an Excel table. We have not been able to find a workaround for this yet; it looks like it might be due to a bug in t[...]



SSAS Tabular – per table LastProcessed dates

Wed, 13 May 2015 11:52:43 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/05/13/ssas-tabularndashper-table-lastprocessed-dates.aspx

I saw a question yesterday on the MSDN forums asking how to go about exposing the LastProcessed datetime per table. Marco has a great post here (https://www.sqlbi.com/articles/last-process-date-in-ssas-tabular/) about a number of ways to get the last processed date at the database level, but none of these techniques work at the table level.

Having a look through the various DMVs that are available, none of them seemed to have this information apart from DISCOVER_XML_METADATA, which returns a large XML document that can be tricky to query. However, the information in this DMV is also used to populate the AMO object model, so it should be possible to use AMO to pull out the last processed datetime per table (see the sketch at the end of this post) and then push it into the model using a variation of the ProcessAdd push-mode processing.

I started by creating a new table in my database called LastProcessed, using the following query as a source:

SELECT CAST('' as varchar(256)) as TableName, Getdate() as LastProcessed WHERE 1=2

This generates an empty table with a string column and a datetime column, which will be the target for our ProcessAdd push processing.

I was not exactly sure how to construct the XMLA for the ProcessAdd command, so I created a simple SSIS package using a Dimension Processing Destination in a Data Flow task and captured the XMLA it produced using SQL Profiler. The command that Profiler captured was missing the section that you would use for pushing a batch of data, so I adapted one of Greg Galloway's excellent XMLA ProcessAdd examples and then wrapped that in a PowerShell script.

The PowerShell script I created can be found here: https://gist.github.com/dgosbell/8f4e2a6aa022a6fdd288

It can be run with a simple command like the following:

.\update-ssaslastupdated -serverName "localhost" -databaseName "advwrksLastProcessed" -tableName "LastProcessed"

Once that succeeds you can either expose the whole table, or you could hide it and build measures to just expose the dates for specific tables. Below is an example of querying this table in my adjusted Adventure Works database. You can see that 'Internet Sales' and 'Reseller Sales' have been processed at different times to the other tables in the database.

[...]
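If you just want to check the per-table dates without pushing them into the model, the AMO lookup half of this technique can be sketched in a few lines of PowerShell. This is an untested illustration of my own (the server and database names are placeholders), and it assumes a tabular 110x-compatibility model, where each table is surfaced through AMO as a Dimension object:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")
$db = $server.Databases.FindByName("advwrksLastProcessed")
foreach ($dim in $db.Dimensions)
{
    # each tabular table maps to one AMO dimension
    "{0} : {1}" -f $dim.Name, $dim.LastProcessed
}
$server.Disconnect()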



BIDS Helper 1.7.0 released

Thu, 23 Apr 2015 21:02:49 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/04/24/bids-helper-1.7.0-released.aspx

A few days ago the BIDS Helper team released v1.7.0, which you can download from here: https://bidshelper.codeplex.com/releases

This version comes with a few bug fixes and some new functionality, but the most significant enhancements are to the Biml engine, thanks to our friends at Varigence. I've copied and pasted the release notes below. Note that there are some possible breaking changes in the new Biml engine, so if you have existing Biml script please read this section carefully.

====== Release Notes ======

Fixed Issues

  • Fixes to all features so they work when the project is inside a solution folder.
  • Fixed Biml project connections in Visual Studio 2013.
  • Reports launch in the proper version of the Report Viewer control in Visual Studio 2012 and 2013.

Improvements

  • Biml updates to bring it up-to-date with Mist 4.0.
  • Biml updates to support emitting SSIS 2014 packages from Visual Studio 2013.
  • Biml brings the Output window to the front, displays a progress bar, and displays a more useful errors dialog.
  • Enabled the ability to add a translation for the default language in the Tabular Translations Editor, to make frictionless renaming easier.
  • Improvements to the Tabular Actions Editor:
      • Added a button to test all MDX expressions
      • Ensured the drillthrough column dropdowns only show columns from related tables
      • Added a button to let you copy an existing action
      • Added a button to populate the drillthrough action columns with all columns in related tables (since it's quicker to manually delete than manually add columns from this list)
      • Allowed MeasureGroupMeasures to apply to any action type
  • Printer-friendly Dimension Usage for Tabular

Biml Details

There are a huge number of changes to Biml in this release. Below are highlights of some of the more notable changes and those that were most requested by users.

BREAKING CHANGES

  • RootNode.PackageProjects has now moved entirely into RootNode.Projects, including all namespaces. This is done to better support other project types such as Analysis Services projects. The mitigation is to change all instances of the <PackageProjects> tag to the <Projects> tag. MITIGATION: If backwards compatibility with older versions of Mist is required, the PackageProjects/Projects element name can be conditionalized based on the tools version.
  • Removed QuerySource and Destination. These were abstractions that wrapped OleDbSource/AdoNetSource and OleDbDestination/AdoNetDestination in a single interface that offered the intersection of their features and automatically chose the correct component type to emit based on the referenced connection type. These language elements were not particularly useful, were very narrowly used, and their removal will prevent unintentional use by new customers. MITIGATION: Change any instances of QuerySource or Destination to the appropriate OleDbSource/AdoNetSource or OleDbDestination/AdoNetDestination.
  • All instances of the word Hadron have been removed from the project. Most notably, namespaces that included "Hadron" have 1:1 replacements with "Biml". MITIGATION: Empty classes were added in the removed namespaces to prevent code with legacy namespace imports from breaking scripts.
  • AstDimensionNode and AstFactNode have been removed. SSAS metadata is now added underneath the AnalysisMetadata element of table objects. This is unlikely to affect most BIDSHelper users, since SSAS emission is only supported in Mist.
  • AnalysisServices connection property OverriddenConnectionString is renamed to ConnectionString. The Provider property is removed as it is no longer nee[...]



SSRS – adding a pipe delimited text export option in SharePoint Integrated mode

Fri, 10 Apr 2015 10:37:34 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/04/10/ssrsndashadding-a-pipe-delimited-text-export-option-in-sharepoint-integrated.aspx

We are currently looking to decommission an old reporting portal based on another technology stack and replace it with SSRS running in SharePoint integrated mode. However, one of the requirements was to support a "pipe delimited" CSV export format. There are a number of posts you can find about changing reportserver.config when running SSRS in standalone mode, but I could not find much information about how to do this when running in SharePoint integrated mode.

The MSDN documentation lists the various cmdlets (https://msdn.microsoft.com/en-us/library/gg492249.aspx) but does not show examples of the parameter values, which is pretty useless as some of the parameters require XML fragments. This blog got me close: http://blog.kuppens-switsers.net/it/customize-ssrs-extension-settings-sharepoint-integrated-mode/ and, combined with the information from this one: http://www.mssqltips.com/sqlservertip/3379/modify-sql-server-reporting-services-rendering-format-in-sharepointintegrated-mode/ I was able to come up with the following PowerShell script which does what I needed.

You'll need to run this from the SharePoint Management Shell (I logged on to the console of one of our app servers to do this) using an account that has farm admin rights.

# NOTE: the XML inside the two here-strings was stripped when this post was
# syndicated; the element contents below are reconstructed from the values
# that survived and may differ slightly from the original post.
$svrDirectives = @"
<OverrideNames>
    <Name Language="en-US">CSV (Pipe Delimited)</Name>
</OverrideNames>
"@

$extConfig = @"
<DeviceInfo>
    <FieldDelimiter>|</FieldDelimiter>
    <NoHeader>false</NoHeader>
    <FileExtension>csv</FileExtension>
    <ExcelMode>False</ExcelMode>
    <Encoding>ASCII</Encoding>
</DeviceInfo>
"@

$typeName = "Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering"

$apps = Get-SPRSServiceApplication
foreach ($app in $apps)
{
    New-SPRSExtension -identity $app -ExtensionType "Render" -name "CSVPipeDelimited" -TypeName $typeName -ServerDirectives $svrDirectives -ExtensionConfiguration $extConfig
}

This code loops over all the service applications in the current farm, so even if you have a multi-tenancy type situation it *should* work, although if you are in that situation you should do your own testing. Our farm only has the one SSRS service application.

[...]
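To confirm that the new extension was registered, you should be able to query it back out. The Get-SPRSExtension cmdlet is listed on the MSDN page linked above; I'm assuming here that its parameters mirror New-SPRSExtension, so treat this as an unverified sketch:

$app = Get-SPRSServiceApplication | Select-Object -First 1
Get-SPRSExtension -identity $app -ExtensionType "Render" | Select-Object Name, TypeName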



DAX Studio 2.1 Released

Tue, 17 Mar 2015 20:17:13 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/03/18/dax-studio-2.1-released.aspx

Today I am pleased to announce the release of the latest update to DAX Studio – v2.1.0. You can get it from the releases page on CodePlex: http://daxstudio.codeplex.com/releases

Below is an outline of what's new in 2.1. A big thanks to Daniele Perilli for his assistance with the graphics, and to Marco Russo for his work on the Query Plans and Server Timings tabs plus his help with testing this release.

UI Refresh

Thanks to assistance from Daniele Perilli on the graphics side we now have a lot more consistency in our ribbon, we've moved all the buttons onto the Home tab, and we've moved the database dropdown to the metadata pane. This has enabled us to fit all the buttons you'd use regularly onto the home ribbon.

Metadata Search

We now support searching in the metadata pane. If you hover over the little magnifying glass in the top right of the metadata pane, the search control will fly out, and as you type, the metadata will filter down to only show objects containing the characters you've typed. The icon changes to a green colour when a search is active so that you know you are not looking at the full set of metadata.

Search and Replace

We now support searching for text within a query document, including highlighting all matching text, and we support replacing of text. Both search and replace include the ability to do case-sensitive searches, regular expressions, wildcards and full-word matches. These dialogs use the same hotkeys as Visual Studio: Ctrl-F for Find and Ctrl-H for Replace.

Improved Server Timings Tab

We now show a much nicer view of the aggregate timing details, as well as showing the detailed scan events with their timings. You also have the option of showing cache and internal events, although these are hidden by default.

Improved Query Plan Tab

The query plans are now pre-processed to make them easier to read, and the total number of records for each line is split out so that it can be clearly seen and so that you can sort by this column to find the operations which are traversing large numbers of rows.

Save Query Results to a File

Thanks to the CodePlex user mogular, who submitted an initial code sample for this feature. You can now export query results to either a comma-separated (csv) file or a tab-delimited (txt) file by choosing the File output option.

Using Semantic Versioning for Releases

I've never really had a strict way of assigning version numbers, but I was not really happy with using the full 4-part version that is generated by .Net. So I've decided to start using a variation of Semantic Versioning for the releases. This uses a 3-part version number (major.minor.patch) where:

  • major gets updated for significant or breaking changes
  • minor gets updated for new features
  • patch gets updated for fixes to existing features

and an update to a higher part resets the lower ones.

Fixes

There have been numerous small fixes, but some of the notable ones are:

  • Excel add-in regional support – there was a bug in v2.0 which caused the Excel add-in to fail to load correctly on PCs with regional settings that did not use a period (.) as the decimal separator. This has now been fixed, and the Excel add-in should work regardless of the regional settings on the PC.
  • New version notification – this was actually silently failing in v2.0.
  • Updates to syntax highlighting definitions.
  • Improvements to the Excel "Linked Table" output.

Known Issues

"Linked Table" output for the Excel add-in only works for PowerPivot models if there is at least one table in the model that was created by linking data from an Excel table. We have not been [...]



The Care and Feeding of SSAS in Production - SQL Saturday 365 Melbourne

Sun, 01 Feb 2015 19:31:55 GMT

Originally posted on: http://darren.gosbell.com/archive/2015/02/02/the-care-and-feeding-of-ssas-in-production---sql.aspx

(image) We are not far away from SQL Saturday #365, which is occurring on Feb 7th, 2015. If you live in or around Melbourne, it's worth making the effort to come along. We have a great line-up of 30 different sessions, with speakers from 7 different countries, including 16 MVPs.

This year I’m doing a talk called “The Care and Feeding of Analysis Services in Production”. I was partly inspired by the “Accidental DBA” topics that I’ve seen people doing for the relational engine and figured it was time someone did something similar for SSAS.

I’m going to talk about both Tabular and Multi-Dimensional, so there should be something in there for everyone.

Below is the session abstract:

A lot of the information you'll find on Analysis Services is focused around the initial creation of databases and models, but once you have a solution deployed to production, then what? In this session we will look at what it takes to run an Analysis Services server in production. What are the basics that you need to know about how the server works, including things like threading, memory usage and locking? How can you monitor the health of your server? What tools can you use to find out what's happening on your server? We'll have a look at what you should be monitoring to make sure your system is running properly, and run through what to investigate when things don't run as smoothly as you'd like.




DAX Studio v2–Christmas comes early

Mon, 08 Dec 2014 12:18:47 GMT

Originally posted on: http://darren.gosbell.com/archive/2014/12/08/dax-studio-v2ndashchristmas-comes-early.aspx

Read the following in your best movie trailer voice….

It’s been a long time coming….

19 months in the making….

5,000 lines of XAML code….

7,000 lines of C# code…

With the ability to run both inside of Excel and as a Standalone program

Version 2 of DAX Studio is finally here.

Maybe that’s a bit melodramatic, but maybe you get a hint of how exciting it is to finally be able to share this with you all. Version 2 is pretty close to a full re-write of the user interface, and in the process there have been a number of difficult hurdles to overcome. So it’s taken a bit longer than anticipated, but it’s finally ready.

Below is a screenshot of the new user interface, which I think you’ll agree looks pretty slick.

(image)  

You can download the latest version from http://daxstudio.codeplex.com under the downloads tab. The documentation tab has also been updated to cover all the new features in v2.

Some of the exciting new features are:

  • An “Office 2013” ribbon window
  • True tabular metadata (modelled on the Power View metadata pane)
  • Integrated tracing support
  • Bracket matching
  • Version update notification
  • Comment / uncomment support
  • A single universal installer
  • Plus many more

But probably more importantly the code has been re-architected in a more modular structure so it should be easier to extend and improve going forward. There are still plenty of features that I’d like to add and it should be possible to do some smaller releases now that the major re-structuring is complete.

enjoy :)




SQL Saturday #365–Melbourne Feb 7 2015

Mon, 17 Nov 2014 10:37:55 GMT

Originally posted on: http://darren.gosbell.com/archive/2014/11/17/sql-saturday-365ndashmelbourne-feb-7-2015.aspx

Things have been a little quiet around here recently; one of the reasons for that is that I’ve been hard at work as a committee member for SQL Saturday 365, which will be held in Melbourne, Australia on Feb 7th 2015.

Following on from Melbourne’s first highly successful event earlier this year, next year's event promises to be bigger and better.

Located again at the Caulfield Campus of Monash University, the event will have a mixture of local, interstate and international speakers. And for the very first time there will be full day pre-con sessions available for a very reasonable price on the Friday before the event.

You can find out more information about the event here:
https://www.sqlsaturday.com/365/eventhome.aspx

You can register to attend here:
https://www.sqlsaturday.com/365/register.aspx

If you would like to present a session at SQL Saturday you can submit a proposal here:
https://www.sqlsaturday.com/365/callforspeakers.aspx

Pre Cons
This year there will also be 3 Pre-Con Full Day Sessions on the Friday. These present amazing value for money, with a full day of training by top experts for only $315 ($265 for early-bird registrations). The session details and registration links are as follows:

BIML Bootcamp
with Reeves Smith, Peter Avenant, Warwick Rudd and Paul Schmidt

Mastering Execution Plan Analysis
with Paul White

Practical Power BI
with Peter ter Braake

Full details of the PreCon sessions and how to register are in the links.




The perils of calculating an Average of Averages

Sun, 27 Jul 2014 21:18:50 GMT

Originally posted on: http://darren.gosbell.com/archive/2014/07/28/the-perils-of-calculating-an-average-of-averages.aspxI've seen questions around issues calculating averages come up a few times in various forums and it came up again last week and I feel that there is some benefit in walking through the details of this issue. For many of you the following will be nothing new, but I'm hoping that this may serve as a reference that you can point to when you get requests for this sort of calculation. The core issue here is really a fundamental mathematical one. Personally I see it surfacing most often in DAX and MDX as those are languages that I spend a lot of time with, but also because of their multi-dimensional natures you need to be able to write generic calculations that will work regardless of how the end users slice and dice the data. The discussions invariably start with a statement like the following: "I have a calculated measure that an average, but my totals are calculating incorrectly" There are 2 different issues I see relating to this. The first one is trying to use the AVG() function in MDX. Basically if you want an average calculation that works with all your different dimensions then avoid this function. The AVG function in MDX calculates the average over a fixed set. You may be able to use it in a static query, but to calculate an average in your MDX script simply create the two base measures - a sum and a count, then divide the sum by the count. This is not as much of an issue in DAX as the built-in AVERAGE, AVERAGEA and AVERAGEX generally work as expected. The other sort of question that I see is related to how the totals are calculated and the question is usually something like the following: "I have an average measure calculated by doing sum / count - which produces the correct average for each row, but the total is calculated as "sum of sums" / "sum of counts" and my user wants to see it as the average of all the averages." And to put it bluntly this requirement is invalid. You should never "total" a series of averages by averaging them. The easiest way to explain why this is the case is to illustrate with some data. So let's have a look at a few scenarios. The first problem you will see with the "average of averages" approach is that it gives too much weight to outlying amounts. Category Amount Count Average Bikes 1,000 1 1,000 Helmets 10,000 1,000 10 TOTAL 11,000 1,001 ??? Given the data above how should we calculate the total average? if we do the "average of averages" approach we have: (1000 + 10) / 2 = 505 If we take the SUM(Amount) / SUM(Count) approach we get the following: 11000 / 1001 = 10.99 This is an extreme example to prove a point, but which do you think is correct? Should the 1 bike we sold for $1000 skew the average to $505 or should the fact that it was just one product out of 1001 mean that the average should only be $10.99? Your business user might be happy seeing a higher average amount, but what if the situation was reversed and we had sold 1000 bikes and just one helmet? This would make the "average of averages" still equal 505 while recalculating the average at the total level would give us $999.01 - I know which calculation I think is giving a better indication of the total average sales. It's possible that you may be thinking at this p[...]