Subscribe: Mike Diehl's WebLog
http://weblogs.asp.net/miked/rss.aspx

Mike Diehl's WebLog



Much aBlog about nothing...



 



Power Query to Promote Headers and Remove Special Characters

Thu, 31 Mar 2016 22:50:21 GMT

I had an Excel spreadsheet with "nicely" formatted column headers that made the text appear on separate lines (i.e., the text included Alt+Enter linefeeds). I wanted to use it as a source for Power Query (and Power BI), but remove the linefeeds in the names before promoting them to column headers.

I found a post on MSDN forums that did something similar:

https://social.technet.microsoft.com/Forums/en-US/1e40ed70-3c61-4829-ba38-85bb2e9438b0/promote-multiple-rows-to-header?forum=powerquery

In the thread, Ehren posted a snippet that combines the first four lines of the source to become the column header names:

= let
    firstN = Table.FirstN(Source, 4),
    renames = List.Transform(
        Table.ColumnNames(Source),
        each {_, Text.Combine(List.Transform(Table.Column(firstN, _), each Text.From(_)), ",")}),
    renamedTable = Table.RenameColumns(Source, renames)
in
    Table.Skip(renamedTable, 4)

I modified this slightly to use only the first row, and to remove the linefeeds using the special token "#(lf)":

= let
     // take just the first row, which contains the header text
     firstN = Table.FirstN(Source, 1),
     // build {oldName, newName} pairs, replacing linefeeds with spaces and trimming
     renames = List.Transform(
         Table.ColumnNames(Source),
         each {_, Text.Combine(
                     List.Transform(Table.Column(firstN, _), each Text.Trim(Text.Replace(Text.From(_),"#(lf)"," "))
                   ),"")}),
     // rename the columns, then drop the header row from the data
     renamedTable = Table.RenameColumns(Source, renames)
 in
     Table.Skip(renamedTable, 1)

Ehren's original instructions still apply to using this formula:

Click the little fx button to add a custom step to your query, and paste this in. It handles doing the promotion for all columns in the table without the need for hard-coding. (It assumes the previous step in the query is called Source. Please update it accordingly if that's not the case.)

Mike




SQL Server Maintenance Plans - how I use them

Mon, 04 May 2015 22:18:08 GMT

Jonathan Cox (@hackdba) has a good post for starting out on SQL Maintenance Plans.

I like the maintenance plan wizard too, but it doesn't really give good advice about some of the operations.

The critical maintenance pieces are backup and integrity checking. You gotta do both of these. You also need to understand full backups, log backups, and database recovery models (primarily Full and Simple); otherwise you'll eventually end up with a disk-full error. The Cleanup task matters too: it ensures you delete your old backup files so you don't fill up the backup drive either. I could (and maybe should) write another post about the right kind of SQL backups and recovery models, but lots of people already have. If you're reading this and don't know what I mean, then please google SQL backups and recovery models and educate yourself (before it's too late!).
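To make the full/log distinction concrete, here's a minimal sketch of the two backup types. The database name and paths are placeholders, not from any real setup, and log backups only apply when the database uses the Full recovery model:

    -- Nightly full backup (hypothetical database and path)
    BACKUP DATABASE [MyAppDb]
    TO DISK = N'D:\Backups\MyAppDb_full.bak'
    WITH INIT, CHECKSUM;

    -- Frequent log backups keep the log from growing unchecked (Full recovery model only)
    BACKUP LOG [MyAppDb]
    TO DISK = N'D:\Backups\MyAppDb_log.trn'
    WITH INIT, CHECKSUM;

If the database is in the Simple recovery model, the log truncates on its own and log backups aren't possible (or needed).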

I don't like to use the maintenance plan for index maintenance, because it's a very large hammer applied to a mostly non-existent problem. Ola Hallengren has a much better approach for index optimization. The default settings for his script make much better choices about when to REORG versus REBUILD (and there's no point in doing both).
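For reference, a minimal sketch of calling Ola Hallengren's IndexOptimize procedure - the parameter values below are roughly its documented defaults as I recall them, so check his site for the current ones before copying anything:

    -- Run against all user databases, letting fragmentation thresholds decide
    -- between REORGANIZE and REBUILD (roughly the script's default behaviour)
    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30;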

As far as I'm concerned, updating statistics is optional in most scenarios.

PLEASE PLEASE PLEASE DO NOT shrink databases and/or log files in a maintenance plan.

Mike




A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

Thu, 01 Aug 2013 03:47:00 GMT

I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back-end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough.

(It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

To accomplish this, I created identical SQL Agent jobs on Server1 and Server2, with two steps.

Step 1: Determine if this server is hosting the primary replica. This is a T-SQL step using this script:

    declare @agName sysname = 'AGTest'
    set nocount on
    declare @primaryReplica sysname

    select @primaryReplica = agState.primary_replica
    from sys.dm_hadr_availability_group_states agState
        join sys.availability_groups ag on agState.group_id = ag.group_id
    where ag.name = @agName

    if not exists(
        select *
        from sys.dm_hadr_availability_group_states agState
            join sys.availability_groups ag on agState.group_id = ag.group_id
        where @@servername = agState.primary_replica
            and ag.name = @agName)
    begin
        raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@servername, @primaryReplica)
    end

This script determines whether the primary replica of the AG is the same as this server's name, which means that our server is hosting the current primary replica (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that will generate an error too. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

Step 2: Update the CNAME entry in DNS with this server's name. I used a PowerShell script to accomplish this:

    $cname = "SPSQL.contoso.com"
    $query = "Select * from MicrosoftDNS_CNAMEType"
    $dns1 = "dc01.contoso.com"
    $dns2 = "dc02.contoso.com"
    if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
    {
        $dnsServer = $dns1
    }
    elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
    {
        $dnsServer = $dns2
    }
    else
    {
        $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
        Throw $msg
    }
    $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
    $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
    $currentServer = $record.RecordData
    if ($currentServer -eq $thisServer )
    {
        $cname + " CNAME is up to date: " + $currentServer
    }
    else
    {
        $cname + " CNAME is being updated to " + $thisServer + ". It was " + $cur[...]



Query Logging in Analysis Services

Thu, 01 Aug 2013 02:32:00 GMT

On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2). We use that table to help us build aggregations, and we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

First off, the query log table automatically gets cleaned out by SSAS under a few conditions: schema changes to the Analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

    CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
        INSERT INTO dbo.[OlapQueryLog_History]
            (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
        SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
        FROM inserted

Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - not an hour, a day, a week, or even a month later - as long as the service doesn't restart.

That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted in SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, it was the middle of business hours, and there were active users connected to that SSAS instance, so I thought better of it.

As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.

If I need to get this running again in the future, I can just make a small change to the Application Name property again, save it, and even change it back again if I want to. The other nice side effect of setting the Application Name property is that now I can see (and filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

To sum up: the SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a tr[...]
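As an illustration of the kind of daily usage rollup we feed into the warehouse from the history table, here's a minimal sketch. The column names come from the trigger above; the grouping and measure choices are just an example, not our actual load:

    -- Hypothetical daily usage summary per user and Analysis database
    SELECT MSOLAP_User,
           MSOLAP_Database,
           CONVERT(date, StartTime)      AS QueryDate,
           COUNT(*)                      AS QueryCount,
           SUM(CAST(Duration AS bigint)) AS TotalDurationMs
    FROM dbo.OlapQueryLog_History
    GROUP BY MSOLAP_User, MSOLAP_Database, CONVERT(date, StartTime);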



Joel's Predictions for 2011 (and Mike's comments)

Fri, 31 Dec 2010 21:32:00 GMT

Joel has posted his predictions for 2011. I find his predictions very interesting, mostly because I am crappy at doing predictions myself. However, I am seldom at a loss for commenting on someone else's work:

1. The Kanban Influence: I have seen a little bit of this, and I like what I see. I would like to try to implement this on the project I am currently on, but I think it will take a lot of education of many people involved in the project, as most of them don't even know the term.

2. Digital Entertainment Crossing the Chasm: In December 2009, our family purchased a high-def digital cable video recorder (DVR) and a new LCD flatscreen TV. The DVR has completely changed my viewing habits; I seldom watch programs live anymore. I watched the Vancouver Olympics on two channels, in near-real-time, by using the two tuners on the DVR and using pause/skip to split our viewing across the channels and avoid commercials. The new TV has USB connections, so we occasionally watch videos and view photos on the TV as well. We recently received an Xbox 360 as a Christmas gift, and I will probably explore its integration with Windows Media Player on our home PCs.

3. Many App Stores: I recently noticed an "App Store" tab in the latest version of uTorrent. It lists the uTorrent add-ons available for download.

4. Kinecting with your PC: We also received the Kinect with our Xbox 360, and it significantly changes the gaming experience, I think. It's still a little laggy in responsiveness in some games and situations, and really hard to be precise with, but it's a huge leap over the WiiMote (which we also have). In terms of using Kinect on the PC, there is already a burgeoning community for this: http://kinecthacks.net/

6. Mobile Race Really Heats Up: I am following this somewhat, but our house is certainly not "bleeding edge" with our mobile phones. My daughter is on a pay-as-you-go plan, the cheapest service for what she needs - texting and very few voice calls for $15/month. My wife has a relatively old phone, and she is on a pick-5 plan with unlimited texts (no data). What she likes most about the phone is the large number buttons, which makes me wonder about the aging demographic and when the mobility companies will start catering directly to people who shouldn't need to put on their reading glasses to send/read a text or make a phone call. My (non-smart) phone has a full QWERTY keyboard (about the size of a BlackBerry) and I have still refused to read/send email from my phone. I do find myself using the web occasionally from my phone, but primarily I use my phone for texts and voice (in that order). With those limited phones, I still pay between $120 and $150 per month. That seems crazy, and I can't imagine a decent plan for a smartphone would reduce my costs. Add $30-40 per month for my landline that hardly ever gets used anymore, and I have long since concluded that I pay MTS too much (but changing providers probably wouldn't reduce anything either). What I would love as a feature on my phone: voice recognition for texts - I speak and the phone types. Or the phone reads aloud the texts that I receive.

7. Cloud Apps Will Gain Momentum: It is only recently that I have looked at Windows Azure, and it has changed the way I think about software architectures. The development experience "just worked" - things that I thought would be pretty complex to do (configure Visual Studio 2010 to deploy an app to the cloud) worked on the first try, and pretty darn simply. The large multi-national company that I am currently working on a project for may never use a public cloud for their applications, but they are starting to use Verizon cloud services, and I have already talked to some people there about using the Azure AppFabric in a corporate cloud and how it would change their IT and development processes significantly.

8. Storage Class Memory: I would so love to buy an SSD drive for my laptop. That would be sweet. 'Nuff sa[...]



Imaginet is hiring

Tue, 01 Jun 2010 20:08:00 GMT

We have an immediate need for new staff members!   
  • Project Manager
  • Systems Analysts
  • Test Manager
  • Web Developer
  • Database Admin

Please contact me if you are interested.




SQL Table stored as a Heap - the dangers within

Fri, 30 Apr 2010 16:55:00 GMT

Nearly every time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do.

On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into the warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse whose MERGE statement took the rows from the working table and updated the real fact table.

    USE Warehouse

    CREATE TABLE Integration.MergePolicy
        (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

    CREATE TABLE fact.Policy
        (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

    CREATE PROC Integration.MergePolicy as
    begin
        begin tran

        Merge fact.Policy as tgt
        Using Integration.MergePolicy as Src
        On (tgt.PolicyId = Src.PolicyId)
        When not matched by Target then
            Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
            values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
        When matched and src.Operation = 'U' then
            Update set PolicyTypeKey = src.PolicyTypeKey,
                Premium = src.Premium,
                Deductible = src.Deductible,
                EffectiveDate = src.EffectiveDate
        When matched and src.Operation = 'D' then
            Delete
        ;

        delete from Integration.WorkPolicy

        commit
    end

Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc.

For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted over and over. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

(I was beginning to suspect at this point that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.

    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead read[...]
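One way to avoid the heap behaviour (a sketch of the general fix, not necessarily the exact change made on this project) is simply to give the work table a clustered index, so the pages emptied by all those deletes get reused and scans don't have to wade through them:

    -- Hypothetical: cluster the work table on PolicyId; it doesn't need to be unique,
    -- so a plain (non-unique) clustered index is enough
    CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
        ON Integration.MergePolicy (PolicyId);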



SSIS Bulk Insert task that bit me in the butt...

Fri, 02 Oct 2009 17:22:00 GMT

I've been working on SSIS packages that extract data from production databases and put it into data warehouses, and recently I hit an issue using the Bulk Insert task that bit me real good.

When you create a Bulk Insert task in the control flow of your package, the properties you generally edit are:

1. The target connection (which references a connection manager)
2. The target table
3. The source file (which references a file-type connection manager)

I did that, ran the package in Visual Studio with my local file against a dev SQL database on a test server, and it all worked just fine. I ran it again, and it failed due to a primary key violation - so I needed to make the execution of the task conditional: if the table was empty, I would run the task; if it contained anything, I would skip it.

This was harder to do than I thought it would be. I started by creating a variable to hold the row count of the table, then an Execute SQL Task to run a statement on the target table (select count(*) as RowCount from targetTable) and set the variable value from the column in the result set of the statement. Then I went to look for an IF construct, and there isn't any such thing. The closest was a For Next loop; I went down a rabbit trail trying to use it and have it execute only zero or one times, and I couldn't get that to work. Is there magic between the @variable syntax in the initialize, condition, and iteration expressions and the package User:: variable declarations that makes those work together? I still don't know the answer to that.

Then I thought of using the Expression on the dependency arrow from the task that got the row count from the target table. So I joined the Row Count task to the Bulk Insert task using the green arrow, then edited the dependency to be dependent both on success of the row count task and on the value of the User::rowCount variable I had created. That worked.

Believe it or not, that isn't really what bit me in the butt. Now I had a package that I could execute multiple times and it would work properly. My buddy Jeremy would say that it is "idempotent". What bit me was when I went to execute the package in another environment.

I moved the .dtsx file to a test server and used the Execute Package Utility. I set the values for the connection managers in the package to the new server connections (and the new location of the bulk copy file), and ran the package, and it worked. Just to make sure it was "idempotent", I ran it again. It failed this time. Another PK violation. Why?

It took me a while to find the problem. Eventually it came down to the target table property of the Bulk Insert task - the value of this property was not just a two-part table name; it also included the database name. It just so happened that the database I was testing with from Visual Studio was on the same server as the one I was testing with in the Execute Package Utility. So, the first time I ran it with the Execute Package Utility with the modified connection manager settings, it queried the *real* target database for the number of rows and got back 0. Then it executed the Bulk Insert task into the *original* database I had been testing with on that server (whose table I happened to have cleared out), and the bulk insert worked. The second time, the number of rows was still 0, and it tried to do the bulk insert into the same database, despite the fact that the connection manager was pointing to a different database.

I can understand why it was done this way: when you use the bcp command-line utility, you need to database-qualify the table you are moving data into, because bcp doesn't specify the database otherwise. But the Bulk Insert task uses the T-SQL statement BULK INSERT, which is already database-specific, so you don't generally qua[...]
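A minimal sketch of the pieces involved, for reference - the table name, file path, and variable name here are hypothetical stand-ins, not the actual package:

    -- Execute SQL Task: single-row result set mapped to the SSIS variable User::rowCount
    SELECT COUNT(*) AS RowCount FROM dbo.TargetTable;

    -- Precedence constraint on the green arrow: Evaluation operation = "Expression and Constraint",
    -- Constraint = Success, Expression = @[User::rowCount] == 0
    -- (that expression is SSIS expression syntax, shown here as a comment)

    -- The T-SQL equivalent of what the Bulk Insert task runs; note that a three-part
    -- table name targets that database regardless of the connection manager's default database
    BULK INSERT [SomeOtherDb].[dbo].[TargetTable]
    FROM 'C:\data\extract.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');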



SQL Database diagramming and VSTS Data Dude

Wed, 29 Jul 2009 00:14:00 GMT

At Imaginet, we use Visual Studio Team Edition for Database Professionals (Data Dude) on our projects to manage database schemas, keep them in source control, run unit tests, and get lots of other nice features.

But it doesn't do database models well. Or at all, for that matter. I really would like the Database Diagramming tool in SQL Management Studio and Visual Studio to be able to go against a database project. But no, it can only go against an actual database.

Here is what we do to be able to model our tables and relationships with the diagramming tool and still use Data Dude.

For every project, we have a number of database "instances" - usually named after the project (I'll use the name Northwind from here on) with a suffix for the "environment", such as Northwind_Dev and Northwind_Test.

We also have another called Northwind_Schema, which is considered the "gold" standard for the schema of the project database. I'll start by creating that schema database and creating tables in it using the database diagramming tool in SSMS. I can fairly quickly create a number of tables and have a diagram for each subject area of the data. It also means my documentation is getting built at the same time as my database (in my world, the diagrams form a large part of the required database documentation). And these diagrams, like XML comments in C# or VB, are also very close to "the code", and will stay current with the state of the schema database. Models created in other tools and then exported to a database are very hard to keep accurate in the long run. When it comes time to snapshot the documentation for the database, we can fairly quickly embed pictures of the database models in Word or OneNote or some other documentation tool.

At the same time as I am modelling the database in Northwind_Schema, I create a database project in Visual Studio called Northwind. If I have the Northwind_Schema database in a state that I like (for first draft), I will use the Import Schema from Database wizard when creating the new database project. Otherwise, I'll just create an empty database project.

When I am happy with Northwind_Schema, I use a Schema Comparison to compare the Northwind_Schema database to the Northwind database project. I will update the database project with the changes that are in Northwind_Schema, then run any local tests against the database project before checking in.

Upon checkin, we have Team System automatically build the database and deploy it to Northwind_Dev, which is available for any developers on the project to use as they code other areas of the project. In the project I am working on now, we use LINQ and CSLA-based entities for our data access layer, so I will keep our LINQ model synchronized with the database project as well (usually by dragging tables onto the LINQ designer surface from the Northwind_Schema database).

If we ever lose Northwind_Schema, it is easy to rebuild it from the database project, because the database project in source control is "more true" than the Northwind_Schema instance. (However, we can lose the diagrams by rebuilding Northwind_Schema).

As I said above, I would actually prefer to do my diagramming in Visual Studio, against a database project rather than a database, and in that way I could also keep the diagrams in source control. But with the Northwind_Schema database, I can model new subject areas or do fairly major refactoring prior to checking out the database project files.

In my next post, I'll talk about how we build and manage stored procedures in project databases.




MS BI Conference: Monday Keynote

Tue, 07 Oct 2008 01:31:00 GMT

Here are my notes on the Monday morning keynote:

About 3000 attendees at the conference, over 60 countries represented. There is BI in Halo 3: whenever you look at competitor stats or weapon effectiveness, that is implemented using BI tech.

Madison - MS has acquired DATAllegro, a company that was accomplishing low-TCO MPP (massively parallel processing) scale-out of BI. Using standard enterprise servers, you can process queries on very large data warehouse databases very quickly. They demonstrated a hardware setup of an MPP cluster: one control node, 24 compute nodes, and at least as many storage nodes (i.e. shared disks). They loaded 1 trillion (yes, trillion) rows into the fact table, plus a bunch of dimension tables, such that the data warehouse contained over 150 terabytes of data. Then they sliced the fact table up onto the 24 SQL instances on the compute nodes (each compute node then had 1/24 of the trillion rows) and replicated the dimension tables to all compute nodes. Using SQL 2008 (and its new star join optimization) they then issued a query on the fact table and the related dimension tables to the cluster, where the control node passed the query along to the compute nodes; they each processed it and returned the results back to the client. On one screen they had Reporting Services (the client app) and on another, a graphic display of the CPU and disk stats for the control node and all 24 compute nodes, each node having 8 CPUs. When the report was being displayed, the query got processed, and you could see the CPU usage go up on many of the nodes, then disk usage on each of the nodes, then the activity would subside and the reporting view would display the results. It was all done in under 10 seconds. It was truly impressive. Now, that was with essentially read-only data, so you could probably "roll your own" MPP system, given the time and hardware. It's not a huge technical problem to scale out read-only data. If they could show the same demonstration except with an SSIS package *loading* a trillion rows into the cluster, that would have been astounding - it's a much different and more difficult problem. Still, I was impressed.

Gemini - this is "BI self-service" - the first evidence of this is an Excel add-in that the always-entertaining Donald Farmer demonstrated. He used the add-in to connect to a data warehouse and showed 20 million rows in a spreadsheet. We didn't *see* all 20 million rows, but he did sort them in under a second, and then filtered them (to UK sales only, about 1.5 million rows) in under a second. That performance and capacity was on what he said was a <$1000 computer with 8 GB RAM, similar to what he purchased for home a few weeks ago. Aside from the jaw-dropping performance, he used the add-in to dynamically link the data from Analysis Services with another spreadsheet of user-supplied data (I think it was "industry standard salary" or something). The add-in was able to build a star schema in the background automatically and then make it available in the views they wanted in Excel (a graph or something? I can't remember). So it was showing the fact that sometimes the data warehouse doesn't have all the data needed for users to make decisions, so they got the data themselves, rather than wait for IT to get it into the DW. OK, cool. So then he published that view into SharePoint using Excel Services, and the user-supplied data went along with it. Centrally publishing that view means it can be utilized by others in the enterprise, rather than shared via email or a file share or something. From the IT perspective, he showed a management view (dashboard) in SharePoint showing usage stats of "Sandboxes" (the thing they are currently calling these publications), and they could see how popular this particular sandbox was, and then take steps[...]



MS BI Conference 2008 - First Impressions

Mon, 06 Oct 2008 21:24:00 GMT

It has been over a year since I last blogged, but I want to restart with some posts about the BI Conference I am attending this week.

Chris and I flew to Vancouver yesterday and drove down to Seattle in a Camry Hybrid. Sitting in the lineup at the US border for an hour drained the batteries on the Camry so it had to restart the engine to recharge a couple of times, for about 10 minutes each time. Seemed odd to discharge so much battery just sitting in a lineup and moving 10 feet every five minutes. Anyway...the trip display shows that our fuel efficiency was under 8 liters/100km on the trip down. That also seems a little poor compared to my Golf TDI that gets 4.5 liters/100 km regularly.

We registered last night and wandered the Company Store for a bit - saw uber-geek stuff there and we thought of getting something for Cam, our uber-geek on the team at Imaginet. The conference package was predictable: a nice back-pack, a water bottle, a pen, a 2 GB USB stick, a SQL Server magazine, not as many sales brochures as last year, and a conference guidebook.

Last year's guide book was a small coil bound notebook with a section of blank pages at the end for taking notes. This year's edition has the same content - a description of all the sessions and keynotes and speakers, as well as sponsor ads, but it is missing the note-taking section. I really liked that section last year, so today I found myself scribbling notes on loose paper, and running out. I specifically left my (paper) notebook at home because I liked the smaller conference book instead, but now I am going back to the Company Store to buy a small notebook for the rest of the sessions.

The conference is trying to be more environmentally friendly - in the backpack was a water bottle and they encouraged you to refill that at the water stations rather than having bottled water. That's cool. For me, I would have preferred a coffee mug, since I had three cups of coffee over the day (in paper cups, and no plastic lids). In a strange twist, the breakfast and lunch dishes were on paper plates and not the real dishes like last year - one step forward, two steps back I guess. I can't figure it out.

In the main hall before the keynote address, there was a live band playing '80s hits. They were pretty good, but it seemed odd to have a bouncy, energetic group on stage at 8:30 on a Monday morning while everyone was filing in and sitting down, morning coffee still just starting to kick in. The bass player was one or two steps beyond bouncy-happy. It reminded me of someone on a Japanese game show.

 




How to rename a Build Type in Team System (and a suggested naming convention)

Thu, 24 May 2007 14:33:00 GMT

I suppose this might be in the manual, but...

 If you want to rename a Build Type that you have created in a Team System Project, you need to open the Source Control Explorer window, dig down into the TeamBuildTypes folder under the project, and rename the folder that corresponds to the build type you want to change. After you check in that change, refresh the Team Builds folder in Team Explorer and you'll see your newly named Build Type.

 Remember to change any scheduled tasks you may have created to run your builds automatically.

One more thing about naming Build Types - because we like to have an email sent out to the team members after a build, we have found that a naming convention for the build types makes it easier to recognize and organize the build notifications. We use a standard that includes the environment, the Team Project name, and the sub-solution as the name of the build. So we have build names like:

 DEV Slam Customer Website - This builds the CustomerWebsite.Sln in the $\Slam\DEV branch.

QA Slam Customer Website - This builds the CustomerWebsite.Sln in the $\Slam\QA branch.

DEV Slam Monitor Service - This builds the MonitorService.Sln in the $\Slam\DEV branch.

QA Slam Monitor Service  - This builds the MonitorService.Sln in the $\Slam\QA branch.

Having the project name in the build type helps because, if you subscribe to lots of different builds for different projects, you cannot otherwise tell by looking at the email which project the build is from.

 




MS BI Conference 14: Evening Reception

Fri, 11 May 2007 04:26:00 GMT

The evening reception was at the Experience Music Project/Science Fiction Museum and Hall of Fame. Gary and I walked through the SciFi Museum. It was really great - lots of memorabilia from all the TV series and movies, as well as books, comics, magazines, scripts, photos, and videos. Really cool. The only underrepresented sci-fi series was Dr. Who - I saw one thing from that, the "Fun Gun". No Daleks. Gary has read a lot of sci-fi, I found out.

I thought it was going to be a banquet and awards ceremony - they had awards, but it was more like a stand-up reception. No tables except in a tent outside. It was pretty stuffy inside, so I hung out with Gary in the fresh air (well, he was smoking, and so were a lot of others around me, but for the most part it was fresh air).

It would have been a great night to have my wife along - she would have loved the sci-fi museum (and the Experience Music Project, very little of which I saw), and she would have been a classy-looking woman to have with me too. We'll come see this place another time.

Tomorrow morning's keynote is Steve Ballmer. Somehow the chant "Business Intelligence, Business Intelligence, Business Intelligence" or "Information worker, information worker, information worker" doesn't really roll off the tongue. What will be his hook tomorrow?

 

 




MS BI Conference 13: BI Power Hour

Fri, 11 May 2007 00:33:00 GMT

Apparently the Power Hour has been something that has been happening at TechEd in past years. I vaguely recall seeing something about it once.

This was a great session - two slides in total I think. All demos, and the demos were different - kinda crazy, but still educational. Lots of free stuff thrown into the crowd. I got something for my daughter.

1st demo - Magic 8-ball vs Data Mining Neural Net algorithm.

In an Integration package, the guy took a table of customer demographics, and ran it in parallel through two different algorithms to predict whether the customer was a homeowner or not. One algorithm was a DM NeuralNet algorithm, the second was a Script that launched the Magic 8-ball window. Looking at the results, the 8-ball didn't do too badly. The demo was interesting in that it showed you could solicit feedback from the user who was executing it (the 8-ball was in a Windows Form, created on the fly in the package).

2nd Demo - by Hitachi Consulting, he demoed an implementation of Analytics for mobile devices. The framework they built helped push out reports, alerts, forms, to a mobile device. They used MS Communication Server to send an SMS text message to the phone, and when the phone received the text message, it used web services to pull back the content (alert, report, form, etc). So he sent out a "Price Change" alert. An RMA authorization form. A Sales report. He said they also had a method to ping the phone and tell it to erase all its content, in case it got lost or stolen. Very cool.

3rd demo - the guy said he wanted to find the geekiest thing to do with Integration Services. He took two sets of a million random numbers between 0 and 1, and through selection of them and applying an algorithm, he basically calculated the value of PI. He didn't tell us what it was until at the end it became obvious. Terribly geeky.
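Presumably it was the classic Monte Carlo trick: treat each pair of random numbers as a point in the unit square and see what fraction lands inside the quarter circle. A sketch of the idea in plain T-SQL (rather than the SSIS package he actually built):

    -- Monte Carlo estimate of pi: the fraction of random (x, y) points with
    -- x^2 + y^2 <= 1 approaches pi/4 as the sample grows
    ;WITH pairs AS (
        SELECT TOP (1000000)
            RAND(CHECKSUM(NEWID())) AS x,
            RAND(CHECKSUM(NEWID())) AS y
        FROM sys.all_objects a CROSS JOIN sys.all_objects b
    )
    SELECT 4.0 * SUM(CASE WHEN x*x + y*y <= 1.0 THEN 1 ELSE 0 END) / COUNT(*) AS pi_estimate
    FROM pairs;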

4th demo - The guy had built a custom Reporting Services item, which took in a dataset (a summary of sales amounts for three sales reps in three categories) and acted as an interactive KPI mechanism for displaying the data. It presented the categories and reps in a 3x3 matrix, with a green and a red button in the corner of each cell. If you decided the amount was not good, you clicked the red button and the cell got a red X. If you thought it was a good amount, you clicked the green button and the cell got a green O. (Get it? X's and O's in a 3x3 matrix?)

Last demo - Using Performance Point Server, they showed a web page with ten suitcases on it, and they invited someone to come up and play Deal or No Deal. She won $10 (in play money I think). Her highest offer was nearly $485,000.

Between each demo they threw out schwag, like t-shirts and hats and stuff.

We should definitely try this the next demo we do at Imaginet.




MS BI Conference 12: MOSS 2007

Fri, 11 May 2007 00:21:00 GMT

A rolling stone grows no MOSS.

This guy was very MOSSy, because he didn't roll very much. I was nodding off in this session because the presenter wasn't very passionate, or funny, or showing me anything that hadn't already been shown in the keynotes, or at the MSDN tour in December.

(Microsoft Office SharePoint Server, btw).

One cool thing about SQL Server 2005 SP2 is that it adds much better integration of SQL Server Reporting Services into MOSS: MOSS contains the report repository, published reports become a document library, and there are much better web parts for integrating reports into SharePoint.

I snoozed a little during this session. Good thing I did, because I was really glad I was awake for the next session.




MS BI Conference 11: chalk talk on MDX

Thu, 10 May 2007 21:11:00 GMT

Two thumbs down.

Well, it probably was a great session, but I got there five minutes early and there were already 30 people waiting outside the "room" it was in. So I went for lunch instead.

Chalk talks are the 2nd- or 3rd-class citizens among the sessions here at the conference, but they have the potential to be the most valuable, at least from my perspective. These sessions are real-world, not lovey-dovey like the main sessions.

Microsoft, the chalk talks deserve a 1st-class upgrade. Please, I'm begging you.




MS BI conference 10: ProClarity

Thu, 10 May 2007 21:03:00 GMT

A woman who was formerly from ProClarity, now a product manager in the Performance Point Server group, presented this session on ProClarity.

"Interface to insight" - answering the WHY? in BI.

Tools for decision makers to explore large amounts of data and get rapid insight.

Simple data navigation, powerful calculations, and advanced visualizations of data.

Reports tell you what happened. Dashboards tell you what is happening now, and ProClarity Analytics helps understand Why it is happening.

ProClarity Analytics Server (PAS) is an IIS App. There is also a SQL database of business metadata. The clients are thin-client web-based, thick client web-based (ActiveX control), and Windows-based thick client.

This product, to me, addresses a lot of the "last mile" gap between SQL Server Analysis Services cubes and stuff, and the user.

The KPI builder helps you create calculated measures easily, with no MDX at all, and publish them to the PAS.

Advanced visualizations: a heat map, like SpaceMonger, shows boxes stacked together, with the size indicating one measure (sales amount) and the colour indicating another KPI (profit margin: good/warning/bad).

Decomposition tree - view a measure, break it down by category, then by another dimension, and so on. Hard to describe, cool to see.

She was great - perfect balance of architecture slides to show how the thing fits together, with lots of demo time with the product itself.

 




MS BI Conference 9: Thursday keynote

Thu, 10 May 2007 20:34:00 GMT

This morning's first keynote had content about Katmai, the next release of SQL Server. (There was other stuff before that, about the BI platform and pervasiveness and yadda yadda.)

SQL 2005 SP2 includes stuff for Excel for data mining. This tool takes an ordinary spreadsheet and applies a data mining algorithm to it, such as categorization. It submits the data to SSAS, builds a mining model, trains it with the data, and adds the results of the mining as a new column in the spreadsheet. All without the user needing to know anything about mining, other than what kind of scenario they want. I remember seeing this in yesterday's keynote, where a tall, blonde, smart woman (she may have been from Canada; I saw her at the MS Canada thing at the Fox Sports Bar last night) demoed a scenario. She took a table in Excel which was a list of prospects and their demographics. Someone who generated the list had started ranking the prospects, but we didn't know how he decided their ranking; he had maybe 10% of the rows ranked. She took those rows as an example, submitted it for data mining, and it then determined the rankings for the rest of the rows, based on the example rankings. Very cool.

This morning's demo showed another bunch of marketing prospects and their demographics. He submitted them for mining, asking it to categorize them into three groups. Group 1's demographics suggested that they were good prospects for SUVs (number of children, age, income). Group 2's demographics were more "poor student" starter-vehicle types, and the third group was good for selling bicycles to. The coolest thing, I thought, was that this used the data mining engine in SSAS without needing a cube or anything, it left the spreadsheet as a plain spreadsheet, and the user didn't need to understand all the data mining stuff to do it.

Katmai will be shipped in 2008. He didn't say WHEN. Some new datatypes natively supported - filestream, spatial coordinates, new date/times. It will include an Entity Data platform in .NET managed types, and LINQ of course. It'll support the occasionally connected database (like mobile databases) and handle the synch stuff better.

Microsoft bought OfficeWriter from Artisan, which is a set of tools that lets users author reports in Excel or Word, using the features of Word/Excel, and then publish the report to Reporting Services. Very nice. It was a "why didn't I think of that!" moment. Seemed simple enough to do. The spatial datatypes will be supported in the query optimizer and the indexes, so you can do geographical queries very quickly.

Dr. Robert Kaplan, Harvard Business School, creator of the Balanced Scorecard methodologies. Balanced scorecards help businesses determine and measure their performance on more than just financial metrics. They help measure the more intangible assets, like quality, customer relationships, and employee skills. There is a BSC for non-profit organizations too, which adds the mission perspective (how do we have an impact?) and the support perspective (how do we attract resources and support for our mission?). Most organizations do not know how to execute a strategy.

Principles:
1. Mobilize change through executive leadership.
2. Translate strategy to operational terms.
3. Align the organization to the strategy.
4. Motivate to make the strategy everyone's job.
5. Govern to make strategy a central process.

Mission - why we exist
Values - what is important to us
Vision - what we want to be
Strategy - our game plan

Usually there is a gap from the strategy to operations. We need to link the strategy to the operation[...]



MS BI Conference 7: Wednesday evening

Thu, 10 May 2007 06:26:00 GMT

The evening was the Partner Pavilion Expo reception: open bar and a light supper while you wander around the booths. Microsoft has areas where you can talk with the product managers. They have tables marked "Reporting and Analysis", "Integration and Data Warehousing", "Database Engine", "ProClarity", "Performance Point Server", yadda yadda yadda. I wanted to talk to some people from the Analysis Services team about the multi-developer scenarios I had been going through with my customer, and some of the problems I have had with team development in Analysis Services. But I couldn't tell which guys were the SSAS ones, which MS people were just wandering around themselves, and I'm not great at starting up conversations with people I don't know anyway. And I wasn't with anyone who would help bolster my courage. So I wandered around the tables, looking like I wanted to talk to someone if only they would come up to me and introduce themselves. It sounds stupid, I know.

I decided to go back to the hotel at that point, chat with my DW for a while, and my boss was asking me about the day too. MS Canada was hosting a party at the Fox Sports Bar a block away this evening, and Gary, the sales guy from Imaginet who is also here, said he was going to go, so I headed down there about 9. I guess MS Canada just had an area of the bar, because there were still other "non-geek" types there - as evidenced by their lack of conference lanyards. It wasn't immediately obvious which area was the MS Canada reception, but there was a busy corner, so I went over there looking for Gary. It soon became apparent that Gary wasn't there, and there was no one there I knew. Ramon, a manager at Imaginet and former MS Canada guy, had sent me some contact info for the MS Canada BI tech lead, whose name I have since forgotten, and he was hoping I would connect with him and introduce myself. The bar was fairly loud, I didn't really know anyone, and pretty much everyone was already chatting amongst themselves, and I'm not one to stand at the edge of a group and horn in, or go up to a complete stranger and introduce myself. If Ramon had been there, he would have known probably 75% of the people there, and he probably would have introduced me (actually, he would have "talked me up") to anyone that mattered.

His reasons to go to a conference like this (and Gary's reasons too, I bet) would be quite different than mine. He would have gone to network, to make contacts, to find business, and to come home with $150k worth of leads. Me, I come to these things to soak up the knowledge. To find the best sessions and learn stuff I don't know. I don't really like the vendor booths because I don't much like the sales pitch. I do try to think about how I would apply the things I am learning within Imaginet, or with my customers and on future projects. But I don't really think "hey, if we took that idea to Customer X, we might get $50k of work out of it". I suppose I should. My value equation, I think, is using the knowledge I gain from events like this to do my work better, to recognize ways I can add value to customers or leads when they come to me with a problem. Joel, on the other hand, is much better at this than I am; he comes out of these things with ten new product ideas, and a strategy to talk with a dozen of our customers about what they are missing because they haven't done X or Y yet, and look at what you could do! ("and the vill[...]



MS BI Conference 6: Chalk Talk on SQL Server Integration Services - moving from Dev to Test to Prod

Thu, 10 May 2007 06:09:00 GMT

This was the best session of the day for me. The presenter from SQM (Solid Quality Mentors) was Spanish with a thick accent, but I found him easy enough to understand. Again, the chalk talk venue was bad - you couldn't hear or see well - so I sat in the front row.

 He talked about what he had learned and put into practice for SSIS packages.

At the Dev level, you treat packages like source code. They should be in source control. You edit them with BIDS (Business Intelligence Development Studio, i.e. Visual Studio). You test them with local or dummy databases.

At Test and Prod, you treat packages like executables - compiled code. You never edit them directly (with Visual Studio, for example).

He likes to store packages in the SQL Server store (essentially, in MSDB), when you deploy them from source control to Test or Prod. That way they get backed up with MSDB.

Packages should have "Package configuration" enabled, and you should use consistent naming conventions throughout - starting with the package name itself, which should include the Solution and Project name, so that when deployed, you can trace the package back to the source control system.

Usually the connection managers in packages should be consistently named - they usually represent logical names for data sources and destinations, which are configured at execution time with physical names (according to the environment).

He uses the SQL Server configuration option to store much of the package configuration in a table in the database - you can have a configuration database in each instance of SQL Server for Dev, Int, Test, Prod, and they each have their own values for configuration. So the Test Config database has config values that point logical sources and destinations to the Test source and Test destination, and so on.
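For context, this is the table I recall the package configuration wizard generating when you choose the "SQL Server" configuration type (the name and column widths are the wizard defaults, so verify against your own version before relying on this):

    -- Default table created by the SSIS package configuration wizard (from memory)
    CREATE TABLE [dbo].[SSIS Configurations]
    (
        ConfigurationFilter NVARCHAR(255) NOT NULL,  -- groups related settings (e.g. one filter per package or concern)
        ConfiguredValue     NVARCHAR(255) NULL,      -- the value applied at runtime (e.g. a connection string)
        PackagePath         NVARCHAR(255) NOT NULL,  -- the property it applies to, e.g. \Package.Connections[Source].Properties[ConnectionString]
        ConfiguredValueType NVARCHAR(20)  NOT NULL   -- the data type of the configured property
    )

Each environment's configuration database holds the same table, just with ConfiguredValue rows pointing at that environment's sources and destinations.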

Then he adds a second configuration, which is an XML configuration file that configures the connection manager for the SQL configuration database. The XML file on the Test instance then points the package configuration to the Test configuration database, and so on. You must have the XML configuration for the Config connection manager listed above the entry for the SQL Server configuration for this to work properly.

At the customer I have been working with, we used XML configuration files and environment variables, and stored our packages in the file system instead of MSDB. We ran into a roadblock because they have both Test and Prod SQL instances on the same server, and SSIS does not have the concept of named instances. I was trying to use XML config files like source-controlled files, and that doesn't quite work. I like the ideas he came up with, and I'll start implementing them when I return to that customer next week.