Random Thunks

Spurious non-linear technical thunks from the world’s foremost authority in useless random thunking.

Updated: 2014-10-04T23:37:30.598-04:00


Deploying signed assemblies in a single script


I’ve been writing a set of routines to control my database backups recently, and one part of this was a CLR User Defined Data Type I’d created that allows some rudimentary file handling (with the appropriate security). By its very nature this assembly required the EXTERNAL ACCESS privilege, and this is where the head-scratching came about. Liking clean install scripts with minimal payloads (single scripts are easier to check into source control and deploy across multiple environments), I needed some method of encapsulating the deploy and ensuring the appropriate asymmetric key, users etc. were created. However, normally this would require either the key file or the assembly DLL to be copied along with the .sql script, and this would break my one-script rule! So the next idea was to create the asymmetric key from the assembly directly within Sql Server; but I couldn’t do that until I’d installed the assembly, which would fail because it required the EXTERNAL ACCESS privilege. Thus another game of chicken and egg.

The solution was remarkably obvious: use a temporary SAFE assembly to create the asymmetric key, then dispose of it directly afterwards. After all, the assembly is only required at this juncture to create the asymmetric key.

The first section of the script deals with previous execution cleanups etc:

    Use RandomThunks;

    ------------/ Start stage 1 : Drop previous objects /------------

    If Exists ( Select 1 From sys.assembly_types As T
                Inner Join sys.schemas As S On T.schema_id = S.schema_id
                Where T.Name = N'SqlFile' And S.Name = N'Maintenance.Database.Archives')
        Drop Type [Maintenance.Database.Archives].SqlFile;

    If Exists ( Select 1 From sys.assemblies
                Where name = N'RandomThunks.CLR.SqlServerFileUDT' And is_user_defined = 1)
        Drop Assembly [RandomThunks.CLR.SqlServerFileUDT];

    -- This next assembly should never exist when we run this, as it's only used to create
    -- the asymmetric key before we Grant the External Access right to it.
    If Exists ( Select 1 From sys.assemblies
                Where name = N'TEMPORARY. DO NOT USE!' And is_user_defined = 1)
        Drop Assembly [TEMPORARY. DO NOT USE!];

    If Exists ( Select 1 From sys.server_principals Where name = 'SqlServerFileUDTUser' )
        Drop Login SqlServerFileUDTUser;

    If Exists ( Select 1 From sys.asymmetric_keys Where name = 'SqlServerFileUDTKey' )
        Drop Asymmetric Key SqlServerFileUDTKey;

    ------------/ End stage 1 : Drop previous objects /------------

And the second part deploys the assembly with the requisite privilege:

    ------------/ Start stage 2 : Object Creation /------------

    Declare @Assembly VarBinary(MAX);
    Set @Assembly = 0x4D5A9000030000000...

    If Not Exists ( Select 1 From sys.schemas Where Name = 'Maintenance.Database.Archives')
        Exec sp_ExecuteSql N'Create Schema [Maintenance.Database.Archives]';

    -- Create a temporary assembly that we'll use to build the Asymmetric Key from
    Create Assembly [TEMPORARY. DO NOT USE!] Authorization dbo
        From @Assembly
        With Permission_Set = Safe;

    Create Asymmetric Key SqlServerFileUDTKey
        From Assembly [TEMPORARY. DO NOT USE!];

    -- Drop the temporary assembly - its use is over.
    Drop Assembly [TEMPORARY. DO NOT USE!];

    Create Login SqlServerFileUDTUser
        From Asymmetric Key SqlServerFileUDTKey;

    -- This needs to be run at the Server level - encapsulating it in sp_ExecuteSql
    -- allows the script to be DB name agnostic.
    Exec sp_ExecuteSql N'
        Use master;

        Grant External Access Assembly
    [...]

Sql Agent Session Storage


I’ve been fiddling around with some jobs recently that required values to be kept from one job step to another. Not finding anything much exciting, I went for plan Z: roll my own. They’re simple procedures but they do work nicely.

First I create a schema that everything will be bundled under:

    If Not Exists ( Select 1 From sys.schemas Where Name = 'Aracs.Utilities.Jobs')
        Exec sp_ExecuteSql N'Create Schema [Aracs.Utilities.Jobs]';

Now we’ll need a backing store:

    Create Table [Aracs.Utilities.Jobs].SessionSaverStore
        (
             JobId          UniqueIdentifier   Not Null
            ,ParameterName  NVarChar(128)      Not Null
            ,Value          Sql_Variant            Null
        );

Next I create the SessionDestroyer procedure (since it can be called from the next one, this keeps the dependencies happy and calm):

    If Object_Id('[Aracs.Utilities.Jobs].SessionDestroyer') Is Null
        Exec sp_ExecuteSql N'Create Procedure [Aracs.Utilities.Jobs].SessionDestroyer As';
    Go

    Alter Procedure [Aracs.Utilities.Jobs].SessionDestroyer
    As
        Declare  @JobHex  NVarChar(32)
                ,@JobId   UniqueIdentifier;

        Select  @JobHex = Substring(Program_Name, 32, 32)
        From    sys.dm_Exec_Sessions
        Where   Session_Id = @@Spid;

        Delete  [Aracs.Utilities.Jobs].SessionSaverStore
        Where   JobId = Cast('' As xml).value('xs:hexBinary(sql:variable("@JobHex"))', 'VarBinary(MAX)');

Along comes the SessionSaver next:

    If Object_Id('[Aracs.Utilities.Jobs].SessionSaver') Is Null
        Exec sp_ExecuteSql N'Create Procedure [Aracs.Utilities.Jobs].SessionSaver As';
    Go

    Alter Pr[...]
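The procedures above hinge on pulling the job’s id out of Program_Name: `Substring(Program_Name, 32, 32)` grabs the 32 hex characters that follow the `Job 0x` prefix. A quick sketch of that slice in Python; note the Program_Name layout is an observed SQL Agent convention rather than a documented contract, and the sample value below is made up for illustration:

```python
# SQL Agent job steps connect with a Program_Name of the form
# "SQLAgent - TSQL JobStep (Job 0x<32 hex chars> : Step N)".
# The prefix is 31 characters long, so character position 32 (1-based,
# as T-SQL's Substring counts) is where the job id's hex bytes begin.
PREFIX = "SQLAgent - TSQL JobStep (Job 0x"

def job_id_hex(program_name: str) -> str:
    """Return the 32 hex chars of the job id, mirroring
    Substring(Program_Name, 32, 32) in SessionDestroyer."""
    if not program_name.startswith(PREFIX):
        raise ValueError("not a SQL Agent job step session")
    return program_name[len(PREFIX):len(PREFIX) + 32]

# Made-up sample value for illustration only.
sample = "SQLAgent - TSQL JobStep (Job 0x5C8C4AF56E2DD34CB64DD0EA8D57F193 : Step 1)"
print(job_id_hex(sample))  # 5C8C4AF56E2DD34CB64DD0EA8D57F193
```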

Bulk Copy within a Sql Server CLR Stored Procedure


Whilst it’s not immediately obvious, one can perform a Bulk Copy operation as part of a Sql Server CLR Stored Procedure. There are, however, some gotchas. First off is that you need to stay away from Context Connections for the actual operation. Whilst you can technically use a Context Connection for the first part, I’d personally not recommend you do, for reasons that will become apparent shortly. The code below is taken from one of my procedures using a homebrew class that provides me with information regarding a table, including the various elements of a fully qualified 4-part name (Server.Database.Schema.Table). The first part of the code handles the bulk import:

    using (SqlConnection sourceCx = new SqlConnection(string.Format(
        "Data Source={0};Initial Catalog={1};Integrated Security=SSPI;",
        sourceTable.ServerName, sourceTable.DatabaseName)))
    {
        sourceCx.Open();

        using (SqlCommand command = new SqlCommand(
            string.Format("Select * From {0};", sourceTable.FqTableName2Part), sourceCx))
        {
            using (SqlDataReader reader = command.ExecuteReader())
            {
                using (SqlBulkCopy bulkCopy = new SqlBulkCopy(string.Format(
                    "Data Source={0};Initial Catalog={1};Integrated Security=SSPI;",
                    destinationTable.ServerName, destinationTable.DatabaseName)))
                {
                    bulkCopy.DestinationTableName = destinationTable.FqTableName2Part;
                    bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(bulkCopy_SqlRowsCopied);
                    bulkCopy.BatchSize = 10000;
                    bulkCopy.NotifyAfter = bulkCopy.BatchSize;
                    bulkCopy.WriteToServer(reader);
                    bulkCopy.Close();
                }
            }
        }[...]
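The reader-to-writer shape of the C# above (separate source and destination connections, streamed reads, batched writes, a notification per batch) is language-agnostic. Here is a rough Python/sqlite3 analogue, with sqlite3 standing in for SqlDataReader/SqlBulkCopy purely to illustrate the flow:

```python
import sqlite3

# Two independent connections, as in the C# (no shared context connection).
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("Create Table T (Id Int)")
src.executemany("Insert Into T Values (?)", [(i,) for i in range(25)])
dst.execute("Create Table T (Id Int)")

BATCH = 10          # plays the role of bulkCopy.BatchSize
copied = 0
reader = src.execute("Select * From T")
while True:
    rows = reader.fetchmany(BATCH)   # stream, don't materialize everything
    if not rows:
        break
    dst.executemany("Insert Into T Values (?)", rows)
    copied += len(rows)
    print(f"{copied} rows copied")   # ~ the SqlRowsCopied notification
dst.commit()
```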

Wither ‘Go’


Wow, it’s been another time and a bit since my last post. Still here, still banging my head against the proverbial wall… Came across this oddness today. I had the following script segment:

    If Exists ( Select 1 From sys.tables Where Object_Id = Object_Id('Security.Dictionary') )
        Drop Table [Security].Dictionary;

    Create Table [Security].Dictionary
        (
             Id         Int Identity(1, 1)  Not Null
            ,HashValue  Int                 Not Null
            ,ItemType   TinyInt                 Null
            ,TextData   NVarChar(MAX)
        );

    Alter Table [Security].Dictionary With NoCheck
        Add Constraint PK_Dictionary Primary Key NonClustered ( HashValue )
        With ( FillFactor = 50, Online = On, Pad_Index = On, Allow_Row_Locks = On, Allow_Page_Locks = Off );

I needed to add ItemType to the primary key, so I dutifully made it Not Null and tried again:

    If Exists ( Select 1 From sys.tables Where Object_Id = Object_Id('Security.Dictionary') )
        Drop Table [Security].Dictionary;

    Create Table [Security].Dictionary
        (
             Id         Int Identity(1, 1)  Not Null
            ,HashValue  Int                 Not Null
            ,ItemType   TinyInt             Not Null
            ,TextData   NVarChar(MAX)
        );

    Alter Table [Security].Dictionary With NoCheck
        Add Constraint PK_Dictionary Primary Key NonClustered ( HashValue, ItemType )
        With ( FillFactor = 50, Online = On, Pad_Index = On, Allow_Row_Locks = On, Allow_Page_Locks = Off );

Imagine my puzzlement when I received the following error:

    Msg 8111, Level 16, State 1, Line 12
    Cannot define PRIMARY KEY constraint on nullable column in table 'Dictionary'.
    Msg 1750, Level 16, State 0, Line 12
    Could not create constraint. See previous errors.

This threw me for a loop, since ItemType was quite clearly Not Null (line 9 of the script). So I ran each segment in turn using Select and F5, and was even more nonplussed when each segment ran fine. I then ran the whole script again and, once more, it ran without failure.
With my trusty imaginary ‘WTF?’ red fez hat firmly planted on my noggin, I attempted to recreate the situation and, lo and behold, I found that adding a Go batch separator between the Create Table and the Alter Table made all the difference for the first run through:

    If Exists ( Select 1 From sys.tables Where Object_Id = Object_Id('Security.Dictionary') )
        Drop Table [Security].Dictionary;

    Create Table [Security].Dictionary
        (
             Id         Int Identity(1, 1)  Not Null
            ,HashValue  Int                 Not Null
            ,ItemType   TinyInt             Not Null
            ,TextData   NVarChar(MAX)
        );
    Go

    Alter Table [Security].Dictionary With NoCheck
        Add Constraint PK_Dictionary Primary Key NonClustered ( HashValue, ItemType )
        With ( FillFactor = 50, Online = On, Pad_Index = On, Allow_Row_Locks = On, Allow_Page_Locks = Off );

If I were to guess what just happened here, I’d say that the Alter Table sans the Go separator was parsed based off the previous table specification, whereas embedding the Go separator forced the parser to re-evaluate the new layout. Or something…

Sql Server 2008, SP1 [...]
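Worth remembering in this context: Go isn’t T-SQL at all, but a client-side batch separator; everything between separators is shipped to the server and compiled as one unit. A rough sketch of the client-side split (an illustration of the idea, not sqlcmd’s or Management Studio’s actual implementation):

```python
import re

def split_batches(script: str) -> list:
    """Split a T-SQL script into batches on lines containing only GO.
    Each batch is parsed/compiled by the server independently, which is
    why a GO between a Create Table and a dependent Alter Table forces
    the parser to see the new table definition."""
    batches, current = [], []
    for line in script.splitlines():
        if re.fullmatch(r"\s*GO\s*;?\s*", line, re.IGNORECASE):
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return batches

script = """Create Table T (Id Int Not Null);
GO
Alter Table T Add Constraint PK_T Primary Key (Id);"""
print(len(split_batches(script)))  # 2 batches sent to the server
```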

We interrupt your schedule to bring you this late breaking news


Off topic, however worthy of keeping for posterity. From the Department of Homeland Security comes this press release from the Office of the Press Secretary, and a statement on just how serious the US Government is about eradicating the 2009 H1N1 ‘Swine Flu’ virus.

And just in case they redact their statement, here’s the most chilling section:


One hopes this was a simple typo!

There can be only 1


Our resident architect came over today and posed me this little puzzler: he wanted a query that would return either one row or nothing, based on a certain criteria. In other words, if two rows met the criteria he wanted nothing pulled back. A strange request I’ll grant ye, and one that took me some time to bang my head against; however the answer was pretty neat:

    Create Schema Test;
    Go

    Create Table Test.Only1RowTest
    (
         Key1 Int
        ,Key2 Int
    );
    Go

    Insert Into Test.Only1RowTest Values (1,1),(2,2),(3,3),(1,2);

    -- Demonstrate pulling back no rows because 2 rows meet the criteria
    With Cte As ( Select Top(2) * From Test.Only1RowTest Where Key1 = 1 )
    Select Cte.*
    From Cte
    Inner Join ( Select Count(*) As c From Cte ) As CteC On CteC.c = 1;

    -- Expand the criteria to further limit the row count and we get 1
    With Cte As ( Select Top(2) * From Test.Only1RowTest Where Key1 = 1 And Key2 = 1 )
    Select Cte.*
    From Cte
    Inner Join ( Select Count(*) As c From Cte ) As CteC On CteC.c = 1;

To break it down further: we create a common table expression and start off on the basis that returning 2 rows is just as bad as returning 2 million, so we limit the rows pulled back with a Top(2); we then bring back the results of the CTE as long as the number of rows returned is 1 (no-one ever said an Inner Join had to join two columns together).

Sql Server 2008 Build 10.0.1798.0 (RTM-CU4) [...]
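For the curious, the trick ports straight to other engines. Here is the same pattern against an in-memory SQLite database from Python, with Limit 2 standing in for Top(2):

```python
import sqlite3

cx = sqlite3.connect(":memory:")
cx.execute("Create Table Only1RowTest (Key1 Int, Key2 Int)")
cx.executemany("Insert Into Only1RowTest Values (?, ?)",
               [(1, 1), (2, 2), (3, 3), (1, 2)])

def at_most_one(where: str):
    # Limit 2 plays the role of Top(2): we never scan more rows than
    # needed to prove the result is ambiguous.
    return cx.execute(f"""
        With Cte As (Select * From Only1RowTest Where {where} Limit 2)
        Select Cte.* From Cte
        Inner Join (Select Count(*) As c From Cte) As CteC On CteC.c = 1
    """).fetchall()

print(at_most_one("Key1 = 1"))               # two matches -> []
print(at_most_one("Key1 = 1 And Key2 = 1"))  # one match  -> [(1, 1)]
```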

Fritzin’ with the Guv’nor


It’s a feature I for one have been desperately screaming for: the Sql Server 2008 Resource Governor, the means to finally provide a T-SQL based process to throttle how much CPU and memory certain activities can utilize. For me, I need it because I hate bringing the server down every time I need to run a fragmentation scan; with a 1.5TB DB on an undersized server, I need most of the resources dedicated to keeping the application happy. However I don’t want anything I do to kill the performance, and likewise I don’t want some schmuck running a badly written ad-hoc query in Management Studio preventing me from doing my job. And this is where the Resource Governor comes into play by offering Resource Pools and Workload Groups.

Each pool can be thought of almost as an individual Sql Server instance operating on shared data. And to a certain extent this is true; however the pools do share some metadata, which I’ll explain later. Out of the box there are 2 pools and 2 groups pre-defined: one called Internal, which is static and immutable and where Sql Server performs its internal processing, the other called Default, which contains an undeletable but otherwise configurable group, also called Default. In this configuration all user processing is performed in the Default pool and all internal processing is carried on in the Internal pool, even to the detriment of any other pool out there. This allows Sql Server to continue processing without other pools pressurizing it, and as a result no user processes are allowed in the Internal pool.

Pools are the first level of control and are used to define the minimum and maximum CPU and memory that workloads running inside them are ever allowed to use. Workload Groups take it to the next level and allow not only a finer level of control but also the ability to move them between pools. Groups, for example, can be defined to specify how much of a percentage they take out of the pool, the priority level and also the MAXDOP setting.
Finally, users are associated with pools via use of a Classification Function. However, it is important to understand that the Resource Governor really only limits resources when there is a pinch point in total resource availability. Running identical queries on an otherwise silent server, one in a high priority pool and the other in a low one, will have no obvious effect. However, once the server is being pushed, the Resource Governor provides a nice guarantee of resource availability to those sessions that require it and a good throttle to those that don’t.

In the examples that follow I use two new pools and one associated workload group in each pool. One pool will be used to process high priority tasks and the other low priority. Two users will be created; one will be assigned to the High workload group, the other the Low workload group. (NB: for brevity’s sake the users I’m creating here do not have any passwords; it goes without saying that this is exceptionally bad form and should not be practiced on any server that you value data, schema or any other objects on!)

    Create Resource Pool Low With ( Min_CPU_Percent = 0, Max_CPU_Percent = 30 );
    Create Resource Pool High With ( Min_CPU_Percent = 60, Max_CPU_Percent = 100 );

    Create Workload Group Low With ( Importance = Low ) Using Low;
    Create Workload Group High With ( Importance = High, Request_Max_Memory_Grant_Percent = 100 ) Using High;

    Alter Resource Governor Reconfigure;

    Use AdventureWorks2008;

    Create Login LowUser With Password=N'', Default_Database=AdventureWorks2008, Check_Expiration=Off, Check_Policy=Off;
    Create User LowUser For Login LowUser;
    Exec sp_AddRoleMember N'db_owner', N'LowUser';

    Create Login HighUser With Password=N'', Default_Database=AdventureWorks2008, Check_Expiration=Off, Check_Policy=Off;
    Create User HighUser For Login HighUser;
    Exec sp_AddRoleMember N'db_owner', N'HighUser';

As can be seen, the Low pool is only given 30% of CPU to p[...]

Sql Server 2008 Management Studio 'pings’


Not had a chance to do a full investigation on this, however I did have a situation the other day where, despite me killing off all query connections and all connected Object Explorer windows, I still found that the 2008 Management Studio would ping one of my servers every three minutes or so. This only transpired because it was using an old cached Sql Server DB Principal password (I’d just changed it) and ChangeDirector was constantly wibbling about it.

Killing off Management Studio altogether stopped the issue dead in its tracks. If I get a chance I might go back to finding out what the real story was, but a little puzzler it was at the time.

Sql Server 2008 Build 10.0.1600.22 (RTM)

Clustering Quest ChangeDirector


Whilst Quest declare that ChangeDirector 2.1 supports clustering, I’m not convinced it goes the whole hog. However, your intrepid Random-Thunker has gone through the small number of extra steps required to make it so.

First off if you’ve not already done so, install ChangeDirector onto both nodes using the normal methods and validate that it works just fine by moving the Sql Cluster service through each node. Validate with ChangeDirector that the agent is running correctly.

Now shut the service down and modify the configuration file ON EACH NODE. For me the configuration file was at C:\Program Files\Quest Software\Quest Change Director for SQL Server\[SqlClusterName]\QuestChangeTrackerAgent.exe.config. Update this to point the Trace and Log folders at one of the shared drives used by the cluster. I used the drive normally dedicated to storing dumps and backups, which kept it from affecting the other drives in the SAN whilst still making it always available to the cluster.

Fire up the service again on the active node to verify all is still well, then shut it down again, and this time change its startup type from Automatic to Manual.

Create a new Generic Service Cluster Resource in the Cluster Administrator. The service name to use is Quest Change Tracker Agent. I added a couple of dependencies: the drive I was using to store the logs and traces on, and also Sql Server itself. In addition I elected to change the settings to not affect the group; this way, if it should do something daft it wasn’t going to take the whole cluster with it. For me the continuous availability of the cluster trumps ChangeDirector; your individual mileage may, as they oft say, vary. And oh, there are no registry keys to replicate, so you don’t need to worry on that front.

Now you should be able to fire up the resource in the Cluster Administrator and off you go. For a final test move the Sql Cluster through each node verifying that it’s seen by ChangeDirector at all times.

Windows 2003, ChangeDirector

Unverifiable Passwords with Sql Configuration Manager


This was an odd one and much akin to a previous recent incident I’d had with the configuration manager (albeit this time on a totally different server); this last weekend we had one of our regular ‘let’s all work through the night’ maintenance windows. Such events are never fun to do but made even worse when the strangest of things happen.

This weekend it was the turn of the Sql Configuration Manager and its stubborn refusal to validate a new account password. No matter how many times I tried, it would not take the new password, regardless of whether I pasted it in or typed it. Each time I’d see an entry in the Security log telling me the attempt to log on with the credentials had failed.

Given that it was now getting close to 3AM I decided another approach based off the prior experience; I fired up the Services applet and pasted the password directly in there.

Well, would you Adam and Eve it – it only bloody worked.

Don’t have a good reason or explanation for this one other than – WTF?

Still, whatever it takes.

Sql Server 2005 SP2, no CRs.

Maintaining hierarchical names in TFS 2008 from Project 2007


Waiting for the nested hierarchical tasks functionality in TFS 2010 is not currently an option for me in my quest to process tasks from MS Project 2007. Nor is manually updating each task title within Project to keep some sort of structure, since that would be a real pain and be largely unmaintainable once you have more than a few tasks. Now, it’s been a long while since I last messed around with VBA, so this is probably pretty ugly; however it does do the trick. In addition to keeping the hierarchical naming structure, it also marks summary tasks as not publishable to avoid polluting TFS with them. The entry point is HierarchicalTaskNames().

    Dim TitleStack As Collection
    Dim taskId As Integer

    Sub RecursiveScanAndFix(ByRef t As Task)
        Dim child As Task
        Dim i As Integer
        Dim text As String

        text = t.Name

        ' If this task has children then add the name to the stack and continue on down
        If t.OutlineChildren.Count > 0 Then
            Push text
            t.Text25 = "No"
            ' Now look for the children
            For Each child In t.OutlineChildren
                RecursiveScanAndFix child
            Next child
            Pop
        Else
            ' No children found, so these will be actual tasks. Therefore we need to
            ' prepend the hierarchy name onto the task.
            ' First off, look to see if we've already named this. If so we'll strip
            ' the previous hierarchical name off before we start.
            If Left(text, 1) = "[" Then
                ' Yes we have. Hunt down the last ] and remove it from the text
                For i = Len(text) To 1 Step -1
                    If Mid(text, i, 1) = "]" Then
                        text = Mid(text, i + 1)
                        Exit For
                    End If
                Next
                text = Trim(text)
            End If
            ' Now add the hierarchical name to the task title
            t.Name = GetTitleFromStack & " " & text
            taskId = t.ID
        End If
    End Sub

    Sub HierarchicalTaskNames()
        Dim t As Task
        taskId = 1
        Set TitleStack = New Collection
        While taskId <= ActiveProject.NumberOfTasks
            Set t = ActiveProject.Tasks(taskId)
            If t.OutlineChildren.Count > 0 Then
                RecursiveScanAndFix ActiveProject.Tasks(taskId)
            End If
            taskId = taskId + 1
        Wend
    End Sub

    Function Pop() As String
        If TitleStack.Count > 0 Then
            Pop = TitleStack.Item(TitleStack.Count)
            TitleStack.Remove TitleStack.Count
        End If
    End Function

    Sub Push(ByVal Title As String)
        TitleStack.Add Title
    End Sub

    Function GetTitleFromStack() As String
        GetTitleFromStack = "[ "
        Dim i As Integer
        For i = 1 To TitleStack.Count
            GetTitleFromStack = GetTitleFromStack & TitleStack.Item(i)
            If i <> TitleStack.Count Then
                GetTitleFromStack = GetTitleFromStack & " | "
            End If
        Next
        GetTitleFromStack = GetTitleFromStack & " ]"
    End Function

MS Project 2007 SP1 [...]
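If VBA isn’t your thing, the underlying walk is just a depth-first traversal with a stack of summary-task names. A minimal Python sketch of the same idea (the Task class here is hypothetical; the VBA above works against the real Project object model):

```python
# Depth-first walk: summary tasks push their name; leaf tasks get the
# accumulated "[ A | B ]" prefix. Re-runnable because an existing prefix
# is stripped first (the sketch splits on the first "]"; the VBA hunts
# the last one, which matters only if a task name itself contains "]").
class Task:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def rename(task, stack):
    if task.children:                 # summary task: push and recurse
        stack.append(task.name)
        for child in task.children:
            rename(child, stack)
        stack.pop()
    else:                             # leaf task: (re)apply the prefix
        name = task.name
        if name.startswith("["):
            name = name.split("]", 1)[1].strip()
        task.name = "[ " + " | ".join(stack) + " ] " + name

plan = Task("Project", [Task("Phase 1", [Task("Design"), Task("Build")])])
rename(plan, [])
print(plan.children[0].children[0].name)  # [ Project | Phase 1 ] Design
```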

Wix Rollin’


What a difference a few hours makes. Yesterday afternoon I was feeling mightily peeved that Microsoft had still not put any support in MSBuild for building Visual Studio Deployment Projects. I was with the folks in the various forums, blogs and other dark places we all visit who cried ‘foul!’ and ‘unfair!’ at MS’s decision. That was, however, until I rediscovered WiX: Windows Installer XML. I had seen WiX once before (quite some time before) and at the time dismissed it as ‘something I might care about later’. My TFS builds at the time were creating libraries or other code that didn’t require an installer, so I really paid it no heed. Well, yesterday was the day that I cared about it.

MS have obviously put a lot of faith in WiX; I read somewhere that they even use it internally to build the Office 2007 installer, and if this is indeed the case then it’s no small feat. The installers I needed to create were by no means on the same level, so I figured: when in Rome. Now, granted, WiX is a bit of a learning curve, but there are tools out there that make your life easy, especially when you’ve already got a .vdproj that you need to WiXify. When converting existing deployment projects there’s a slight about-face in that it actually makes more sense to start with the end project and work your way back to a new installer than to attempt to build a new WiX project from scratch.

So, in an attempt to help clarify a few steps, here’s my trip through converting an existing deployment project into a new-fangled WiX project, complete with the need to reference binaries as part of the MS Enterprise Application Block 4.1. Caveat emptor: I’m not saying that these steps are the most efficient; however they did work for me! NB: I was using the WiX Beta 3 build 4805.0 to create my packages, and Visual Studio 2008 Team Suite along with TFS 2008 to create my projects and builds.

Step 1: Reverse engineer the deployment project.
Sure, you have the source, so why reverse engineer anything? Well, in our case the project builds an MSI file, which is pretty easy to decompile given the tools available in WiX. So, first off, copy the MSI to a temporary work area and execute the Dark WiX executable to decompile the MSI into its constituent parts, creating a WiX project file as a result:

    C:\tmp\1 Day\WiX>"C:\Program Files\Windows Installer XML v3\bin\dark.exe" -x Binary LogProcessorServiceSetup.msi LogProcessorServiceSetup.Wxs

    Microsoft (R) Windows Installer Xml Decompiler version 3.0.4805.0
    Copyright (C) Microsoft Corporation. All rights reserved.

    LogProcessorServiceSetup.msi
    C:\tmp\1 Day\WiX\LogProcessorServiceSetup.msi : warning DARK1060 : The _VsdLaunchCondition table is being decompiled as a custom table.
    dark.exe : warning DARK1065 : The AdvtUISequence table is not supported by the WiX toolset because it has been deprecated by the Windows Installer team. Any information in this table will be left out of the decompiled output.
    C:\tmp\1 Day\WiX\LogProcessorServiceSetup.msi : warning DARK1062 : The ModuleSignature table can only be represented in WiX for merge modules. The information in this table will be left out of the decompiled output.
    C:\tmp\1 Day\WiX\LogProcessorServiceSetup.msi : warning DARK1062 : The ModuleComponents table can only be represented in WiX for merge modules. The information in this table will be left out of the decompiled output.
    C:\tmp\1 Day\WiX\LogProcessorServiceSetup.msi : warning DARK1066 : The MsiPatchHeaders table is added to the install package by a transform from a patch package (.msp) and not authored directly into an install package (.msi). The information in this table will be left out of the decompiled output[...]

Locating Signed Procedures in Sql Server


I recently had to redeploy a DB on another server with a whole new master key. This meant that I had to destroy and recreate all the existing certificates and keys that I’d created based off the previous master key. The biggest problem for me was locating all the pesky procedures that I’d signed previously, which would need to be re-signed with a new certificate; locating these took a bit of time. Eventually I was able to come up with the following query:

    Select   Schema_Name(O.schema_id) + '.' + Object_Name(Cp.major_id) As 'Module'
            ,Cp.crypt_type_desc As 'Method'
            ,Coalesce(Ak.name, C.name) As 'Signing Object'
    From sys.crypt_properties As Cp
        -- Get the schema
        Inner Join sys.objects As O On O.object_id = Cp.major_id
        -- Asymmetric Key (if any)
        Left Join sys.asymmetric_keys As Ak On Ak.thumbprint = Cp.thumbprint
        -- Certificate (if any)
        Left Join sys.certificates As C On C.thumbprint = Cp.thumbprint
    [...]

2005 Configuration manager oddness


I was doing a round of mass service account password changes today on one of our development servers and came across this bit of strangeness: despite making several attempts to update the account and password in the Configuration Manager, the SSIS service not only refused to start up but also failed to give any indication as to why (the application logs were horribly silent on anything from either SSIS or even the service manager).

So I went in through the back door: I fired up the Services applet and refreshed the password in there. One message informing me the account had been granted the ‘Log on as a service’ privilege later, and the service was up.

What beats me is why the configuration manager refused on multiple times to accept the change. Not saying we should all ignore the config manager from now on but I myself will bear this in mind for future weirdness.

A tough lesson to learn


Apparently, someone has learned the hard way what happens when you don’t have an effective disaster recovery strategy.


Using OS X as an operating system - $1998. Employing a technician who knows a thing or two about backups - $65,000. Not using mirroring as a backup process - priceless. Life needs tapes. For everything else there is unemployment.

And hot on the heels of 2005 SP3 comes…


Cumulative Rollup 1, which essentially comprises CUs 10 and 11 for SP2.

In the good old days, wouldn’t MS have just released something akin to an SP3a? Ah well…

Service pack 3 is here, CU1 is here.

Sql Server 2005 SP3 released!


Microsoft have finally released Sql Server 2005 SP3 – how long was that, chaps? 1.2 eternities or 1.3? (I lost track after a few lifetimes.)

Well, better late than never; at least now I don’t need to grab the less-supported CRs to get the latest round of fixes.

Management Studio folders


What MS giveth with one hand they unceremoniously forget to supply with the other; here I’m referring to being able to update the path of the folders used by Management Studio for storing projects etc.

Fortunately it’s an easy fix that, as usual, relies on the registry.

Fire up regedit (or your favorite registry hacker) and move yourself to

HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell

and update the folder paths there.

Still here!


Good lord, you know when you’re getting old when over a year zips past you and it feels like yesterday…

Things have gotten a wee bit broken here and there – lost my images for one, and my Worldview has up and left me. Time to do some maintenance work, clean things up and get everything ship-shape & Bristol fashion again.

In the meantime…

...or just wait for TFS 2008


Looks like TFS 2008 comes with the ability to permanently destroy SCC files via the command line. Also included with 2008 is the ability to create Continuous Integration Builds on Team Build right out of the box - no more do you need to customize TFS to allow that. All in all, 2008 sounds like it's a big step in the right direction.

Permanently deleting files from TFS Source Code Control


Not for the fainthearted, this, so continue with absolute caution. The usual disclaimers apply; at the very least, don’t blame me if you lose your entire TFS implementation, TFS server and most of the rest of your development universe…

So, you’ve deleted a project from TFS using the TFSDeleteProject CLI command but still have crud left hanging around, taking up space or, worse, still showing up if you recreate a project with the same name. The TFSDeleteProject command by nature will not delete much of anything: it simply marks a project as unused and moves it from TfsIntegration.dbo.tbl_projects to TfsIntegration.dbo.projects_tombstone (among other actions). As a result you’re still left with a potentially large DB filled with useless crud.

Files in TFS SCC are located in three tables in the tfsVersionControl database: dbo.tbl_NameSpace, dbo.tbl_File and dbo.tbl_Content.

The root table is tbl_NameSpace - this table provides the full path to the object along with an ItemId. This ItemId is used to key into the tbl_File table, which yields a FileId that joins up with tbl_Content.

You'll want to carefully identify which items from tbl_NameSpace you want to wipe out and remove the corresponding entries from the other two tables.
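A sketch of the chain just described - the table and key names are as given above, but any extra filter columns are assumptions, so verify the rows (and take that backup) before turning the SELECT into DELETEs:

```sql
-- Walk the chain: tbl_NameSpace (path + ItemId) -> tbl_File (FileId) -> tbl_Content
SELECT ns.ItemId, f.FileId
FROM dbo.tbl_NameSpace AS ns
INNER JOIN dbo.tbl_File AS f
    ON f.ItemId = ns.ItemId
INNER JOIN dbo.tbl_Content AS c
    ON c.FileId = f.FileId
WHERE ns.ItemId IN (/* ItemIds you've verified are safe to remove */);

-- Once happy, delete from tbl_Content first, then tbl_File, then tbl_NameSpace
```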

Backup your database first and make sure you've a sacrificial chicken on hand just in case...And did I happen to mention you should back up your database first...?

I'm sure there are some other tables out there affected by this but these appear to be the main three.

When True != True


Content Types were giving me a hard time recently, specifically when trying to hide/show columns in a content type in a custom list/document library.

Handy hint to know here folks: the only valid values for a boolean attribute here appear to be the uppercase literals TRUE and FALSE. Anything else seems to be interpreted based on any number of seemingly random factors...

For example, this is now valid:
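As a hedged sketch of the kind of content type FieldRef in question (the field name and ID are placeholders - the part that matters is the uppercase TRUE):

```xml
<!-- Hides the column on the content type's forms; anything but uppercase TRUE/FALSE is unreliable -->
<FieldRef ID="{11111111-1111-1111-1111-111111111111}" Name="MyColumn" Hidden="TRUE" />
```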


Katmai July CTP is out!


And with it comes a few new features, including the final arrival of separate DATE and TIME datatypes and the ability for mirrored systems to perform page repair by pulling clean copies of corrupted pages from the mirror.

I'm really beginning to like the look and feel of 2008 now.

A full list of new features can be found here.

Vista ++ == ?


Hot on the heels of such illustrious internal code names such as Yukon, Avalon, Chicago, Cairo, Katmai and Whidbey (along with such lesser known luminaries as Mönch and Yamazaki), comes the latest code name out of Redmond for Vista (Longhorn)’s replacement, due in three years’ time. Gone are the names from the Whistler-Blackcomb resort (e.g. Freestyle and Harmony) or place names around the hallowed Seattle (e.g. Rainier, Everett and Orcas) or rather abstract names such as Godot (from the play “Waiting for Godot”) and Darwin. Nope, the internal marketing team have gone all out for the next version of the world’s most used O/S.

And just what have this mighty team of highly experienced folk decided to call this next incarnation of the Linux and OS/X beater – surely a name that symbolizes strength or purpose, or perhaps something more natural to represent simplistic strength, or perhaps the name of a city that symbolizes mankind's innate ability to overcome obstacles?

For all those who answered “Vienna” then sorry – you’re now wrong. Nope, apparently its new code name is “7”. Impressive huh?

Interestingly enough I think this answers one or two questions I had about workflow….

What you mean 'not found'?


Stupid, stupid, stupid.

You can either apply that to me thinking that attaching a file to my InfoPath task form would be easy or MS for allowing such a nasty insidious bug to crawl out into production.

Here's the issue. I have an InfoPath task form running under MOSS Enterprise forms services. Upon this form I have a couple of file attachment controls. One would thunk that given that file attachment controls seem to have been a part of HTML since the mid 1990's (RFC 1867) that this should be a totally elementary operation.

Heh, think again.

I found this out the hard way when, after browsing to my file and pressing the Upload button, I was met with a rather curt 'The Selected File was not found' error message. A hunt around the blogosphere bore only one low-hanging fruit: A WorkFlow that Uploads a Document via a Task using an InfoPath Form, which I gave a try but to no avail.

Turns out the author was 99.9998% there - except for the direction of one slash! So, rather than regurgitate the entire post, here's the correct JavaScript segment you need:

However, when all's said and done I have to ask the question, why? Why do we have to slice up the WkTaskIP.aspx page to add this functionality in? I guess tomorrow I'll be on the ole Dog n' Bone to MS PSS to pursue this one further.

So, for anyone else out there with this issue, here's a string for the search engines to chow down on to help any wayward travelers like myself get to the answer they need (at least for now!): InfoPath Forms Services File Attachment The selected file was not found