
Matevz Gacnik's Weblog



Technology Philanthropy



 



Bing Daily Image - A Bing Image of the Day Download Tool

Wed, 16 Dec 2015 18:05:46 GMT

Ever wanted to have your Windows 10 desktop background look sharp? Like, Bing Gallery sharp? Let me help you. Here's a little tool I'm releasing today that allows you to:

  • Get the daily Bing Image of the Day to your desktop
  • Perform bulk downloads, multiple images at a time
  • Add the image title and description to your wallpaper [1]
  • Run it as a service

It's all in one tool. Small, simple, efficient. Here's the parameter model it supports:

BingDailyImage v1.0 - Download desktop wallpaper images from Bing
Copyright © 2015, Matevž Gačnik
www.request-response.com

Gets Bing Picture of the Day images for today and a specified number of days back.

Usage: BingDailyImage [-c] [-d [days]] [-f folder] [-t [top|bottom]] [-b]

   -c             Get current Bing image
   -d [days]      Specifies number of days to fetch.
                  If you omit this parameter the tool will download the
                  last two weeks (14 days) of Bing wallpapers.
   -f             Set download folder.
                  If you omit this parameter the folder will be set to
                  '%USERPROFILE%\Pictures\Bing Wallpapers'.
   -t             Add text (image title and description) to images.
                  You can specify text position [top, bottom]. Default is bottom.
   -b             Set last downloaded image as desktop background
   -s install     Installs BingDailyImage as a system service.
                  Use -f to specify service download folder path.
                  Use -t to let the service add text to images.
   -s uninstall   Uninstalls BingDailyImage as a system service
   -s start       Starts BingDailyImage service
   -s stop        Stops BingDailyImage service
   -s query       Queries BingDailyImage service state
   -h             Displays help

You can just do a BingDailyImage.exe -c to get the current daily image. By default, it will not tamper with background images, so you'll get the highest resolution available (1920x1200 or 1920x1080), like this:

BingDailyImage v1.0 - Download desktop wallpaper images from Bing
Copyright © 2015, Matevž Gačnik
www.request-response.com

Downloading Bing Image of the Day for 2015-12-16.
Image date: 2015-12-16
Image title: Old Town in Salzburg, Austria
Image description: When it's lit up like this with a cozy glow, we can admire… When there's a mountain in your city… We're looking at the Old Town portion of this Baroque city…
Downloading background...
Background for 1920x1200 found.
Saving background...
Done for 2015-12-16.

Or do a BingDailyImage.exe -d 10 -t to get the last 10 and add a nice, transparent background text to them. Hell, do a BingDailyImage.exe -s install and forget about it. It's going to download new images once they are published to Bing's servers. All you need to do now is set your Windows 10 desktop background to be fetched from the download folder. Done.

Here's the download. He[...]
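For the curious, fetching the image of the day is simple enough to sketch by hand. This is not the tool's source - just a minimal C# sketch, assuming Bing's publicly known HPImageArchive endpoint and its urlBase element:

using System;
using System.Net;
using System.Xml.Linq;

class GetBingImage
{
   static void Main()
   {
      // Assumption: the publicly known HPImageArchive endpoint; the tool's
      // actual implementation is not shown in this post.
      string feed =
         "http://www.bing.com/HPImageArchive.aspx?format=xml&idx=0&n=1&mkt=en-US";

      XDocument doc = XDocument.Load(feed);
      string urlBase = (string)doc.Root.Element("image").Element("urlBase");

      // 1920x1200 is the highest resolution the post mentions
      string imageUrl = "http://www.bing.com" + urlBase + "_1920x1200.jpg";
      new WebClient().DownloadFile(imageUrl, "bing-today.jpg");

      Console.WriteLine("Saved {0}", imageUrl);
   }
}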



Debugging Windows Azure DevFabric HTTP 500 Errors

Wed, 10 Apr 2013 14:04:49 GMT

While developing with Windows Azure SDK and local Azure development fabric, when things go wrong, they go really wrong.

It could be something as obscure as an empty element left somewhere in Web.config, or a certificate issue.

The problem is that before Visual Studio attaches a debugger to the IIS worker process, a lot of things can go wrong. What you get is this:

(image)

There's no Event Log entry, nothing in the typical dev fabric temp folder (always check 'C:\Users\<username>\AppData\Local\Temp\Visual Studio Web Debugger.log' first), nada.

Poking deeper, what you need to do is allow IIS to respond with full error details. By default, IIS only displays complete error details for requests coming from local addresses. So, to get a detailed report you need to hit the site through a local address.

You can get a local address by fiddling with the site binding in IIS Manager, changing what the Azure SDK set up:

  • First, start IIS Management Console.
  • Then right click on your deployment(*).* site and select Edit Bindings.
  • Change the IP address binding to All Unassigned.

(image)

If you hit your web / service role using the new local address (could be something like http://127.0.0.1:82), you will most likely get the full disclosure, like this:

(image)

In this case, a service was dispatched into dev fabric with an empty element somewhere in Web.config. The web role was failing before Visual Studio could attach a debugger, and only HTTP 500 was returned through the normal means of communication.

(image)



The Case of Guest OS Versioning in Windows Azure

Sun, 31 Mar 2013 20:40:01 GMT

There's a notion of Windows Guest OS versions in Windows Azure. Guest OS versions can actually be (in Q1 2012) either a stripped-down version of Windows Server 2008 or a similar version of Windows Server 2008 R2.

You can upgrade your guest OS in Windows Azure Management Portal:

(image)

Not that it makes much difference, especially while developing .NET solutions, but I like to be on the newest OS version all the time.

The problem is that the defaults are stale. In version 1.6 of the Windows Azure SDK, the default templates all specify the following:

<ServiceConfiguration ...
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1"
    osVersion="*">

The osFamily attribute defines OS version, with 1 being Windows Server 2008 and 2 being Windows Server 2008 R2. If you omit the osFamily attribute, the default is 1 too! Actually this attribute should probably move to the Role element, since it defines the version of the role's guest OS.

   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
  
osFamily="1"
   osVersion="*">
 
   
    

>




>

It doesn't make sense to have it normalized over all roles. Also, this schema makes it impossible to leave it out in VM role instances, where it gets ignored.

The osVersion attribute defines the guest OS version that should be deployed. The format is either * or WA-GUEST-OS-M.m_YYYYMM-nn. You should never use the latter. An asterisk normally means 'please upgrade all my instances automatically'. The asterisk is your friend.

If you want/need Windows Server 2008 R2, change it in your service configuration XML.

What this means is that even if you publish and then upgrade your guest OS version in the Azure Management Portal, it will get reverted the next time you update your app from within Visual Studio.

(image)



The Case of Lightweight Azure MMC Snap-In Not Installing on Azure SDK 1.6

Tue, 26 Mar 2013 10:36:53 GMT

There are a couple of Windows Azure management tools, scripts and PowerShell commandlets available, but I find Windows Azure Platform Management Tool (MMC snap-in) one of the easiest to install and use for different Windows Azure subscriptions.

The problem is the tool has not been updated for almost a year and is thus failing when you try to install it on top of the latest Windows Azure SDK (currently v1.6).

Here's the solution.

(image)



Bleeding Edge 2011: Pub/Sub Broker Design

Wed, 05 Oct 2011 09:17:31 GMT

This is the content from my Bleeding Edge 2011 talk on pub/sub broker design and implementation.

(image)

Due to constraints of the project (European Commission, funded by the EU) I cannot publicly distribute the implementation code at this time. I plan to do it after the review process is done, though I have been advised that this probably won't be possible.

Specifically, this is:

  • A message based pub/sub broker
  • Can use typed messages
  • Can be extended
  • Can communicate with anyone
  • Supports push and pull models
  • Can call you back
  • Service based
  • Fast (in memory)
  • Is currently fed by the Twitter gardenhose stream, for your pleasure

Anyway, I can discuss the implementation and design decisions, so here's the PPT (in Slovene only).

Downloads: Bleeding Edge 2011, PPTs
Demo front-end: Here

(image)



The Case of Empty OptionalFeatures.exe Dialog

Tue, 07 Jun 2011 06:57:17 GMT

The following is a three-day saga of an empty 'Turn Windows Features on or off' dialog.

This dialog, as unimportant as it may seem, is the only orifice into Windows subsystem installations that doesn't require cramping up command-line msiexec.exe wizardry against obscure system installation folders that nobody wants to understand.

Empty, it looks like this:

(image)

First thing anyone should do when it comes to something obscure like this is:

  1. Reinstall the OS (kidding, but would help)
  2. In-place upgrade of the OS (kidding, but would help faster)
  3. Clean reboot (really, but most probably won't help)
  4. Run chkdsk /f and sfc /scannow (really)
  5. If that does not help, proceed below

If you still can't control your MSMQ or IIS installation, then you need to find out which of the servicing packages got corrupted somehow.

Servicing packages are Windows Update MSIs, located in hell under HKLM\Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages. I've got a couple thousand under there, so the only question is how to root the rogue one out.

There's a tool called the System Update Readiness Tool [here] that nobody uses. Its side effect is that it checks for peculiarities like this. Run it, then unleash notepad.exe on C:\Windows\Logs\CBS\CheckSUR.log and find something like this:

Checking Windows Servicing Packages

Checking Package Manifests and Catalogs
(f) CBS MUM Corrupt 0x800F0900 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.mum  Line 1:

(f) CBS Catalog Corrupt 0x800B0100 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.cat  

Then find the package in the registry, take ownership of the node, set permissions so you can delete it, and delete it. Your OptionalFeatures.exe works again, and it took only 10 minutes.
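If you'd rather not eyeball a couple thousand registry keys by hand, a minimal C# sketch like this one can shortlist the candidates. The KB number is the one from the CheckSUR.log excerpt above; taking ownership and deleting remains a manual, at-your-own-risk step:

using System;
using Microsoft.Win32;

class FindCorruptPackage
{
   static void Main()
   {
      const string packagesPath =
         @"SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages";

      using (RegistryKey packages = Registry.LocalMachine.OpenSubKey(packagesPath))
      {
         foreach (string name in packages.GetSubKeyNames())
         {
            // list only the packages CheckSUR flagged, e.g. KB2446710
            if (name.Contains("KB2446710"))
               Console.WriteLine(name);
         }
      }
   }
}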

(image)



Clouds Will Fail

Wed, 04 May 2011 20:02:27 GMT

This is a slightly less technical post, covering my experiences and thoughts on cloud computing as a viable business processing platform.

The recent Amazon EC2 failure gathered a considerable amount of press and discussion coverage. Most discussions revolve around the failure of cloud computing's promise to never go down, never lose a bit of information.

This is wrong and has been wrong for a couple of years. Marketing people should not be making promises their technical engineers can't deliver. Actually, marketing should really step down from highly technical features and services in general. I find it funny that there is no serious marketing involved in selling BWR reactors (which fail too), but they probably serve the same number of people as cloud services do nowadays.

Getting back to the topic, as you may know EC2 failed miserably a couple of weeks ago. It was something that should not happen - at least in many techie minds. The fault at AWS EC2 cloud was with their EBS storage system, which failed across multiple AWS availability zones within the same AWS region in North Virginia. Think of availability zones as server racks within the same data center and regions as different datacenters.

Companies like Foursquare, Tekpub, Quora and others all deployed their solutions to the same Amazon region - North Virginia - and were thus susceptible to problems within that specific datacenter. They could have replicated across different AWS regions, but did not.

Thus, clouds will fail. It's only a matter of time. They will go down. The main thing clouds deliver is a lower probability of failure, not its elimination. Thinking that cloud computing will solve the industry's fears on losing data or deliver 100% uptime is downright imaginary.

Take a look at EC2's SLA. It says 99.95% availability. Microsoft's Azure SLA? 99.9%. That's almost 4.5 hours and almost 9 hours of downtime per year built in (0.05% and 0.1% of 8,760 hours, respectively)! And we didn't even start to discuss how much junk marketing people will sell.

We are still in an IaaS world, although Microsoft is really pushing PaaS and SaaS hard. Having said that, Windows Azure's goal of 'forget about it, we will save you anyway' currently has a lot more merit than other offerings. It is indeed trying to go the PaaS and SaaS route while abstracting the physical machines, racks and datacenters.

(image)



Twitter Handle

Sun, 27 Mar 2011 16:54:45 GMT

I've been more active on Twitter lately, finding it personally amusing.

Handle: http://twitter.com/matevzg

(image)



Load Test Tool for Windows Server AppFabric Distributed Cache

Thu, 09 Dec 2010 13:07:25 GMT

During exploration of high availability (HA) features of Windows Server AppFabric Distributed Cache I needed to generate enough load in a short timeframe. You know, to kill a couple of servers.

This is what came out of it.

(image)

It's a simple command line tool, allowing you to:

  • Add millions of objects of arbitrary size to the cache cluster (using cache.Add())
  • Put objects of arbitrary size to the cache cluster
  • Get objects back
  • Remove objects from the cache
  • Has cluster support
  • Has local cache support
  • Will list configuration
  • Will max out your local processors (using .NET 4 Parallel.For())
  • Will perform gracefully, even in times of trouble

I talked about this at a recent Users Group meeting, doing a live demo of cache clusters under load.

Typical usage scenario is:

  1. Configure a HA cluster
    Remember, 3 nodes minimum, Windows Server 2008 (R2) Enterprise or DataCenter
  2. Configure a HA cache
  3. Edit App.config, list all available servers
  4. Connect to cluster
  5. Put a bunch of large objects (generate load)
    Since AppFabric currently supports only the partitioned cache type, this will distribute the load among all cluster hosts. Thus, each host will store 1/N of the objects. (A rough sketch of this step follows the list.)
  6. Stop one node
  7. Get all objects back
    Since cache is in HA mode, you will get all your objects back, even though a host is down - cluster will redistribute all the missing cache regions to running nodes.
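A rough sketch of what step 5 boils down to with the AppFabric client API; the cache and host names are made up here, the real tool reads them from App.config:

using System;
using System.Threading.Tasks;
using Microsoft.ApplicationServer.Caching;

class LoadGenerator
{
   static void Main()
   {
      // hypothetical cluster endpoint; the tool lists all hosts in App.config
      var servers = new[] { new DataCacheServerEndpoint("cachehost1", 22233) };
      var config = new DataCacheFactoryConfiguration { Servers = servers };
      DataCache cache = new DataCacheFactory(config).GetCache("MyHACache");

      byte[] payload = new byte[64 * 1024]; // arbitrary object size

      // max out the local processors while pushing objects to the cluster
      Parallel.For(0, 1000000, i => cache.Put("key-" + i, payload));
   }
}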

You can download the tool here.

(image)



P != NP Proof Failing

Thu, 19 Aug 2010 20:31:28 GMT

One of the most important steps needed for computer science to get itself to the next level seems to be fading away.

(image)

P vs NP

Actually, its proof is playing hard to catch again. This question (whether P=NP or P!=NP) does not want to be answered. It could be that the problem of proving it is also NP-complete.

The (scientific) community wants and needs closure. If P != NP were proven, a lot of orthodox legislature in PKI, cryptography and signature/timestamp validity would probably become looser. If P=NP is true, well, s*!t hits the fan.

(image)



Visual Studio 2010 Beta 2: Why this Lack of Pureness Again?

Sat, 24 Oct 2009 11:50:47 GMT

Developer pureness is what we should all strive for.

This is not it:

(image)

Conflicting with this:

[c:\NetFx4\versiontester\bin\debug]corflags VersionTester.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  3.5.21022.8
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.21006
CLR Header: 2.5
PE        : PE32
CorFlags  : 1
ILONLY    : 1
32BIT     : 0
Signed    : 0

Visual Studio must display the version number as 4.0 instead of 4 in RTM. It needs to display the major and minor numbers of the new framework coherently with 2.0, 3.0, 3.5.

Otherwise my compulsive disorder kicks in.

(image)



ServiceModel: Assembly Size in .NET 4.0

Sat, 24 Oct 2009 11:39:05 GMT

I'm just poking around the new framework, dealing with corflags and CLR header versions (more on that later), and here is what I found.

Do a dir *.dll /o:-s in the framework directory. You get this on a .NET FX 4.0 beta 2 box:

[c:\windows\microsoft.net\framework64\v4.0.21006]dir *.dll /o:-s

 Volume in drive C is 7  Serial number is 1ED5:8CA5
 Directory of  C:\Windows\Microsoft.NET\Framework64\v4.0.21006\*.dll

 7.10.2009 3:44   9.833.784  clr.dll
 7.10.2009 2:44   6.072.664  System.ServiceModel.dll
 7.10.2009 2:44   5.251.928  System.Windows.Forms.dll
 7.10.2009 6:04   5.088.584  System.Web.dll
 7.10.2009 5:31   5.086.024  System.Design.dll
 7.10.2009 3:44   4.927.808  mscorlib.dll
 7.10.2009 3:44   3.529.608  Microsoft.VB.Activities.Compiler.dll
 7.10.2009 2:44   3.501.376  System.dll
 7.10.2009 2:44   3.335.000  System.Data.Entity.dll
 7.10.2009 3:44   3.244.360  System.Data.dll
 7.10.2009 2:44   2.145.096  System.XML.dll
 7.10.2009 5:31   1.784.664  System.Web.Extensions.dll
 7.10.2009 2:44   1.711.488  System.Windows.Forms.DataVis.dll
 7.10.2009 5:31   1.697.128  System.Web.DataVis.dll
 7.10.2009 5:31   1.578.352  System.Workflow.ComponentModel.dll
 7.10.2009 3:44   1.540.928  clrjit.dll
 7.10.2009 3:44   1.511.752  mscordacwks.dll
 7.10.2009 3:44   1.454.400  mscordbi.dll
 7.10.2009 2:44   1.339.248  System.Activities.Presentation.dll
 7.10.2009 5:31   1.277.776  Microsoft.Build.dll
 7.10.2009 2:44   1.257.800  System.Core.dll
 7.10.2009 2:44   1.178.448  System.Activities.dll
 7.10.2009 5:31   1.071.464  System.Workflow.Activities.dll
 7.10.2009 5:31   1.041.256  Microsoft.Build.Tasks.v4.0.dll
 7.10.2009 2:44   1.026.920  System.Runtime.Serialization.dll

Funny how many engineer dollars are/were spent on middle tier technologies I love.

Proof: System.ServiceModel.dll is bigger than System.dll, System.Windows.Forms.dll, mscorlib.dll, and System.Data.dll.

We only bow to clr.dll, the new .NET 4.0 Common Language Runtime implementation.

(image)



Talk at Interoperability Day 2009

Wed, 03 Jun 2009 13:23:45 GMT

Interoperability day is focused on, well, interoperability. Especially between major vendors in government space.

We talked about major issues in long term document preservation formats and the penetration of Office serialization in real world...

.. and the lack of support from legislature covering long term electronic document formats and their use.

Here's the PPT [Slovenian].

(image)



Speaking at Document Interop Initiative

Sat, 16 May 2009 18:21:51 GMT

On Monday, 05/18/2009 I'm speaking at Document Interop Initiative in London, England.

The title of the talk is High Fidelity Programmatic Access to Document Content, which will cover the following topics:

  • Importance of OOXML as a standards based format, especially in technical issues of long term storage for document content preservation
  • Importance of legal long term storage for document formats
  • Signature and timestamp benefits of long term XML formats
  • Performance and actual cost analysis of using publicly-parsable formats
  • Benefits of having a high fidelity programmatic access to document content, backed with standardization

The event is held at Microsoft Limited, Cardinal Place, 100 Victoria Street SW1E 5JL, London.

Update: Here's the presentation.

(image)



Designing Clean Object Models with SvcUtil.exe

Fri, 10 Apr 2009 21:06:43 GMT

There are a couple of situations where one might use svcutil.exe from the command prompt instead of Add Service Reference in Visual Studio. At least a couple.

I don't know who (or why) made the decision not to support namespace-less proxy generation in Visual Studio. It's a fact that if you make a service reference, you have to specify a nested namespace. On the other hand, there is an option to make the proxy class either public or internal, which enables one to hide it from the outside world. That's a design flaw. There are numerous cases where you would not want to create another [1] namespace, because you are designing an object model that needs to be clean.

[1] This is only limited to local type system namespace declarations - it does not matter what you are sending over the wire.

Consider the following namespace and class hierarchy that uses a service proxy (the MyNamespace.Service class uses it):

(image)

Suppose that the Service class uses a WCF service, which synchronizes articles and comments with another system. If you use Add Web Reference, there is no way to hide the generated service namespace. One would probably define MyNamespace.MyServiceProxy as a namespace and declare a using statement in MyNamespace.MyService scope with:

using MyNamespace.MyServiceProxy;

This results in a dirty object model - your clients will see your internals, and that shouldn't be an option. Your class hierarchy now has another namespace, called MyNamespace.MyServiceProxy, which is actually only used inside the MyNamespace.Service class.

What one can do is insist that the generated classes are marked internal, hiding the classes from the outside world, but still leaving an empty namespace visible for instantiating assemblies. Leaving only the namespace visible, with no public classes in it, is pure cowardice. Not good. Since there is no internal namespace modifier in .NET, you have no other way of hiding your namespace, even if you demand internal classes to be generated by the Add Service Reference dialog.

So, svcutil.exe to the rescue:

svcutil.exe
   /namespace:*,MyNamespace
   /internal
   /out:ServiceProxyClass.cs

The /namespace option allows you to map between XML and CLR namespace declarations. The preceding example maps all (therefore *) existing XML namespaces in service metadata to the MyNamespace CLR namespace. The example will generate a service proxy class, mark it internal and put it into the desired namespace. Your object model will look like this:

(image)

Currently, this is the only way to hide the internals of service communication with a proxy while still being able to keep your object model clean. Note that this is useful when you want to wrap an existing service proxy or hide a certain already supported service implementation detail in your object model. Since your client code does not need access to the complete service proxy (or you want to enhance it), it shouldn't be a problem to hide it completely.

Considering options during proxy generation, per-method access modifiers would be beneficial too. [...]



Bug in XmlSerializer, XmlSerializerNamespaces

Mon, 06 Apr 2009 12:22:33 GMT

XmlSerializer is a great piece of technology. Combined with xsd.exe and friends (XmlSerializerNamespaces, et al.) it's a powerful tool for getting around XML instance serialization/deserialization. But there is a potentially serious bug present, even in the 3.5 SP1 version of the .NET Framework.

Suppose we have an XML structure in which the Envelope and Body elements are in the same namespace (namely 'NamespaceA'), while Header is qualified with 'NamespaceB'. Now suppose we need to programmatically insert the Header element into an empty, core, document using XmlNode.InsertNode(), so that we end up with the complete Envelope/Header/Body document.

To get the to-be-inserted part, one would serialize (using XmlSerializer) the Header document on its own. A simple XmlSerializer magic will do the trick:

XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);

ms.Seek(0, SeekOrigin.Begin);

XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();

This would generate exactly what we wanted: a prefixed, namespace-based XML document, with the B prefix bound to 'NamespaceB'. Now, if we import this document fragment into our core document using XmlNode.ImportNode(), we get the same namespace declared twice. Which is valid and actually, from an XML Infoset view, an isomorphic document to the original. So what if it's got the same namespace declared twice, right?

Right - until you involve digital signatures. I have described a specific problem with ambient namespaces at length in this blog entry: XmlSerializer, Ambient XML Namespaces and Digital Signatures.

When importing a node from another context, XmlNode and friends do a resolve against all namespace declarations in scope. So, when importing such a header, we shouldn't get a duplicate namespace declaration. The problem is, we don't get a duplicate namespace declaration, since XmlSerializer actually inserts a normal XML attribute into the Header element. That's why we seem to get another namespace declaration. It's actually not a declaration but a plain old attribute. It's even visible (in this case in XmlElement.Attributes), and it definitely shouldn't be there.

So if you hit this special case, remove all attributes before importing the node into your core document. Like this:

XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);

ms.Seek(0, SeekOrigin.Begin);

XmlDocument[...]
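In code, that cleanup step looks roughly like this - a minimal sketch, assuming doc is the serializer output from above and coreDocument is the target document (both variable names are mine, not from the original post):

// drop the bogus namespace-lookalike attribute before importing
XmlElement header = doc.DocumentElement;
header.Attributes.RemoveAll();

// import into the target document's context and attach it
XmlNode imported = coreDocument.ImportNode(header, true);
coreDocument.DocumentElement.AppendChild(imported);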



DOW(n)

Thu, 09 Oct 2008 20:29:37 GMT

DOW is down almost 700, again. And...

... I just have to post this. Especially when almost everything (besides devaluation of the safe currency) has already been done.

If you had bought $1,000.00 of Nortel stock one year ago, it would now be worth $49.00.

With Enron, you would have $16.50 of the original $1,000.00.

With MCI/WorldCom, you would have less than $5.00 left.

If you had bought $1,000.00 worth of Miller Genuine Draft (the beer, not the stock) one year ago, drunk all the beer, then turned in the cans for the 10-cent deposit, you would have $214.00.

Based on the above, 401KegPlan.com's current investment advice is to take that $5.00 you have left over and drink lots and lots of beer and recycle.

Via: http://401kegplan.com/keg/

We are going over the edge. We as a world economy.

And yea, I opened a keg of Guinness.

(image)



Bleeding Edge 2008: Postback

Thu, 09 Oct 2008 19:42:02 GMT

I'm sorry it took a week, but here we go.

This is my exit content from Bleeding Edge 2008. I'm also posting complete conference contents, just in case.

(image)

Thanks go out to Dušan, Dejan, Miha, Miha and Miha.

Downloads:

Remark: PPT in Slovene only. Code international.

Thank you for attending. Hope to see you next year!

(image)



XmlSerializer: Serialized Syntax and How to Override It

Fri, 29 Aug 2008 18:38:07 GMT

Recently I needed to specify exactly how I would like a specific class serialized. Suppose we have a simple schema, let's call it President.xsd, describing a person with a first name, a last name and an age. A valid XML instance against this schema would carry, for example, the values Barack, Obama, 47.

Since we are serializing against a specific XML schema (XSD), we have the option of schema compilation:

xsd /c President.xsd

This, obviously, yields a programmatic type system result in the form of a C# class. All well and done. Now. If we serialize the filled-up class instance back to XML, we get a valid XML instance. It's valid against President.xsd.

There is a case where your schema changes ever so slightly - read: the namespaces change - and you don't want to recompile the entire solution to support this, but you still want to use XML serialization. Who doesn't - what do you do? Suppose we want to get the same data back when serializing, but with a different root element name and namespace.

There is an option to override the default serialization technique of XmlSerializer. Enter the world of XmlAttributes and XmlAttributeOverrides:

private XmlSerializer GetOverridedSerializer()
{
   // set overrides for person element
   XmlAttributes attrsPerson = new XmlAttributes();
   XmlRootAttribute rootPerson =
      new XmlRootAttribute("PresidentPerson");
   rootPerson.Namespace =
      "http://schemas.gama-system.com/president.xsd";
   attrsPerson.XmlRoot = rootPerson;

   // create overrider
   XmlAttributeOverrides xOver = new XmlAttributeOverrides();
   xOver.Add(typeof(Person), attrsPerson);

   XmlSerializer xSer = new XmlSerializer(typeof(Person), xOver);

   return xSer;
}

Now serialize normally:

Stream ms = new MemoryStream();
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, person);

This will work even if you only have a compiled version of your object graph and you don't have any sources. The System.Xml.Serialization.XmlAttributeOverrides class allows you to adorn any XML-serializable class with your own XML syntax - element names, attribute names, namespaces and types. Remember - you can override them all and still serialize your angle brackets. [...]
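Worth noting: the same overrides have to be used when reading such an instance back, since a plain XmlSerializer(typeof(Person)) would reject the renamed root element. A minimal sketch, reusing GetOverridedSerializer() from above:

XmlSerializer xSer = GetOverridedSerializer();
ms.Seek(0, SeekOrigin.Begin);
Person president = (Person)xSer.Deserialize(ms);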



Bleeding Edge 2008

Thu, 17 Jul 2008 13:50:06 GMT

I'm happy to announce our organization work is coming to fruition.

Bleeding Edge 2008 is taking the stage for the autumn season.

(image)
Portorož, Slovenia, October 1st, 9:00

Official site, Registration, Sponsors

Go for the early bird registration (till September 12th). The time is now.

Potential sponsor? Here's the offering.

(image)



Accreditus: Gama System eArchive

Sat, 05 Jul 2008 12:18:06 GMT

One of our core products, Gama System eArchive, was accredited last week. This is the first accreditation of a domestic product and the first one covering long term electronic document storage in a SOA based system.

Every document stored inside the Gama System eArchive product is now legally legal. No questions asked. Accreditation is done by a national body and represents the last step in a formal acknowledgement to holiness.

That means a lot to me, even more to our company.

The following blog entries were (in)directly inspired by the development of this product:

  • Laws and Digital Signatures
  • Reliable Messaging and Retry Timeouts
  • Approaches to Document Style Parameter Models
  • XmlSerializer, Ambient XML Namespaces and Digital Signatures
  • Security Sessions and Service Throttling
  • Reliable Message Delivery
  • Reliable Message Delivery Continued
  • Durable Reliable Messaging

We've made a lot of effort to get this thing developed and accredited. The certificate is here.

This, this, this, this, this, this, this, this, this and those are direct approvals of our correct decisions. [...]



Sysinternals Live

Thu, 19 Jun 2008 17:52:52 GMT

This is brilliant. Sysinternals tools are now (actually, were already when I left for vacation) available live via a web (HTTP and WebDAV) based resource at http://live.sysinternals.com and \\live.sysinternals.com.

This means I can do the following:

[c:\]dir \\live.sysinternals.com\tools
 Directory of  \\live.sysinternals.com\tools\*

 2.06.2008   1:16             .
 2.06.2008   1:16             ..
 2.06.2008   1:16             WindowsInternals
30.05.2008  17:55             668  About_This_Site.txt
13.05.2008  19:00         225.320  accesschk.exe
 1.11.2006  15:06         174.968  AccessEnum.exe
 1.11.2006  23:05         121.712  accvio.EXE
12.07.2007   7:26          50.379  AdExplorer.chm
26.11.2007  14:21         422.952  ADExplorer.exe
 7.11.2007  11:13         401.616  ADInsight.chm
20.11.2007  14:25       1.049.640  ADInsight.exe
 1.11.2006  15:05         150.328  adrestore.exe
 1.11.2006  15:06         154.424  Autologon.exe
 8.05.2008  10:20          48.476  autoruns.chm
12.05.2008  17:31         622.632  autoruns.exe
...
 1.11.2006  15:06         207.672  Winobj.exe
30.12.1999  12:26           7.653  WINOBJ.HLP
27.05.2008  16:21         142.376  ZoomIt.exe

     24.185.901 bytes in 103 files and 3 dirs
109.442.727.936 bytes free

Or, I can fire up a Windows Explorer window (or press the Start key, then type) and just type: \\live.sysinternals.com\tools. Or:

[c:\]copy \\live.sysinternals.com\tools\Procmon.exe C:\Windows\System32
\\live.sysinternals.com\tools\Procmon.exe => C:\Windows\System32\Procmon.exe
     1 file copied

Brilliant and useful. [...]



Demos from the NT Conference 2008

Thu, 15 May 2008 15:24:19 GMT

As promised, here are the sources from my NTK 2008 sessions [1].

Talk: Document Style Service Interfaces

Read the following blog entry, where I tried to describe the concept in detail. Also, this blog post discusses issues when using large document parameters with reliable transport (WS-RM) channels.

Demo: Document Style Service Interfaces [Download]

This demo defines a service interface with the document parameter model, i.e. Guid CreatePerson(XmlDocument person). It shows three different approaches to the creation of the passed document:

  1. Raw XML creation
  2. XML Serialization of the (attribute annotated) object graph
  3. XML Serialization using the client object model

Also, versioned schemas for the Person document are shown, including the support for document validation and version independence.

Talk: Windows Server 2008 and Transactional NTFS

This blog entry describes the concept.

Demo 1: Logging using CLFS (Common Log File System) [Download]
Demo 2: NTFS Transactions using the File System, SQL, WCF [Download]
Demo 3: NTFS Transactions using the WCF, MTOM Transport [Download] [2]

[1] All sources are in VS 2008 solution file format.
[2] This transaction spans from the client, through the service boundary, to the server.

(image)



Laws and Digital Signatures

Wed, 16 Apr 2008 17:32:29 GMT

Suppose we have a document like this:

<Document>
   <Element1>value1</Element1>
   <Element2>value2</Element2>
   <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
      <SignedInfo>
         ...
         <Reference URI="">
            ...
            <DigestValue>1Xp...EOko=</DigestValue>
         </Reference>
      </SignedInfo>
      <SignatureValue>nls...cH0k=</SignatureValue>
      <KeyInfo>
         <KeyValue>
            <RSAKeyValue>
               <Modulus>9f3W...fxG0E=</Modulus>
               <Exponent>AQAB</Exponent>
            </RSAKeyValue>
         </KeyValue>
         <X509Data>
            <X509Certificate>MIIEi...ktYgN</X509Certificate>
         </X509Data>
      </KeyInfo>
   </Signature>
</Document>

This document represents data and an enveloped digital signature over the complete XML document. The digital signature's completeness is defined in the Reference element, which has the URI attribute set to an empty string (Reference URI="").

Checking the Signature

The following should always be applied during signature validation:

  1. Validating the digital signature
  2. Validating the certificate(s) used to create the signature
  3. Validating the certificate(s) chain(s)

Note: In most situations this is the optimal validation sequence. Why? Signatures are broken far more frequently than certificates are revoked/expired. And certificates are revoked/expired far more frequently than their chains.

1. Validating the digital signature

First, get it out of there:

XmlNamespaceManager xmlns = new XmlNamespaceManager(xdkDocument.NameTable); [1]
xmlns.AddNamespace("ds", "http://www.w3.org/2000/09/xmldsig#");
XmlNodeList nodeList = xdkDocument.SelectNodes("//ds:Signature", xmlns);

[1] xdkDocument should be an XmlDocument instance representing your document.

Second, construct a SignedXml instance:

foreach (XmlNode xmlNod[...]
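The excerpt cuts off above; purely as a sketch of where that loop is heading, validation typically continues with SignedXml from System.Security.Cryptography.Xml. This is my reconstruction under those assumptions, not the post's original code:

foreach (XmlNode xmlNode in nodeList)
{
   // bind SignedXml to the signed document and load this Signature element
   SignedXml signedXml = new SignedXml(xdkDocument);
   signedXml.LoadXml((XmlElement)xmlNode);

   // verifies the signature value cryptographically against the referenced content
   bool valid = signedXml.CheckSignature();
   Console.WriteLine("Signature valid: {0}", valid);
}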



WCF: Reliable Messaging and Retry Timeouts

Tue, 08 Apr 2008 22:33:13 GMT

There is a serious limitation present in the RTM version of WCF 3.0/3.5 regarding control of WS-RM retry messages during a reliable session saga. Let me try to explain the concept.

We have a sender (communication initiator) and a receiver (service). When a reliable session is constructed between the two, every message needs to come to the other side. In a request-reply world, the sender would be a client during the request phase. Then the roles switch during the response phase.

The problem arises when one of the sides does not get the message acknowledgement in time. The WCF reliable messaging implementation retries the sending process and hopes for the acknowledgement. All is well.

The problem is that there is no way for the sending application to specify how long the retry timeout should be. There is a way to specify channel opening and closing timeouts, the acknowledgement interval and more, but nothing will define how long the initiator should wait for message acks.

Let's talk about how WCF acknowledges messages. During a request-reply exchange every request message is acknowledged in a response message. WS-RM SOAP headers regarding sequence definition (request) and acknowledgements (response) look roughly like this:

a1 <r:Sequence>
a2    <r:Identifier>urn:uuid:6c9d...ca90</r:Identifier>
a3    <r:MessageNumber>1</r:MessageNumber>
a4 </r:Sequence>

b1 <r:SequenceAcknowledgement>
b2    <r:Identifier>urn:uuid:6c99...ca290</r:Identifier>
b3    <r:AcknowledgementRange Lower="1" Upper="1"/>
b4    <netrm:BufferRemaining>
b5       ...
b6    </netrm:BufferRemaining>
b7 </r:SequenceAcknowledgement>

The request phase defines a sequence and sends the first message (a3). In the response, the appropriate acknowledgement is present, which acks the first message (b3) with the Lower and Upper attributes. Lines b4-b6 define a benign and super useful WCF implementation of flow control, which allows the sender to limit the rate of sent messages if the service side becomes congested.

When the session is set up, WCF will have a really small waiting window for acks. Therefore, if an ack is not received during this period, the infrastructure will retry the message.

Duplex contracts work slightly differently. There, the acknowledgement interval can be set. This configuration option (the config attribute is called acknowledgementInterval) is named inappropriately, since it controls the service and not the client side. It does not define the time limit on received acknowledgements, but the necessary time to send the acknowledgements back. It allows grouping of sent acks, so that multiple incoming messages can be acked together. Also, the infrastructure will not necessarily honor the specified value.

Now consider the following scenario:

  • The client is on a reliable network
  • Network bandwidth is so thin that sending a message takes 20s to come through [1]
  • Service instancing is set to Multiple
  • The solution uses request-reply semantics

[1] It does not matter whether the initiator is on a dial up, or the message is huge.

What happens? The service initiator sets up a reliable session, then: First mess[...]
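For context, here is what the standard binding does let you tune in code; a minimal sketch using the stock WCF API - the point being what is missing, namely any retry-timeout knob:

using System;
using System.ServiceModel;

class ReliableSessionConfig
{
   static void Main()
   {
      // WSHttpBinding(SecurityMode, reliableSessionEnabled)
      WSHttpBinding binding = new WSHttpBinding(SecurityMode.None, true);

      // what WCF exposes:
      binding.ReliableSession.Ordered = true;
      binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
      binding.OpenTimeout = TimeSpan.FromMinutes(1);
      binding.CloseTimeout = TimeSpan.FromMinutes(1);

      // what it doesn't: the WS-RM retry timeout has no public setting
   }
}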



Calculating Outsourcing Project Cost

Mon, 18 Feb 2008 22:01:39 GMT

Wow, Stephen.

This is one of the best ideas I've heard of for hedging against the dollar in terms of IT outsourcing cost. And I mean it.

I'm not in a position to value the description made, but I am willing to take the pill, no matter what.

What everybody needs is only to get a one-million-sterling project, taking half a year.

That's it, hedging done or not.

(image)



Happy Birthday XML

Wed, 13 Feb 2008 19:36:09 GMT

This week XML is ten years old. The core XML 1.0 specification was released in February 1998.

It's a nice anniversary to have.

The XML + Namespaces specification has a built-in namespace declaration of http://www.w3.org/XML/1998/namespace. That's an implicit namespace declaration, a special one, governing all others. One namespace declaration to rule them all. Bound to the xml: prefix.

(image)

XML was born and published as a W3C Recommendation on the 10th of February 1998.

So, well done XML. You did a lot for the IT industry in the past decade.

(image)



European Silverlight Challenge

Fri, 04 Jan 2008 09:33:22 GMT

If you're into Silverlight, and you should be, check out http://www.silverlightchallenge.eu, and especially sign up on http://slovenia.silverlightchallenge.eu and join one of our many developers who will participate in this competition.

(image)

More here.

(image)



Mr. Larry Lessig

Fri, 09 Nov 2007 22:59:24 GMT

Mr. Lawrence Lessig is the founder of the Stanford Center for Internet and Society. He's also the chairman of the Creative Commons organization.

Lessig is one of the most effective speakers in the world, a professor at Stanford, who tries to make this world a better place by fighting the stupidity of current copyright law.

The following is published on the CC's site:

We use private rights to create public goods: creative works set free for certain uses. ... We work to offer creators a best-of-both-worlds way to protect their works while encouraging certain uses of them — to declare “some rights reserved.”

Therefore Creative Commons stands for the mantra of some rights reserved and not all rights reserved in terms of meaningful usage of digital technology.

Being a libertarian myself, I cannot oppose these stands. Balance and compromise are good things for content and intellectual products, such as software.

Watch his masterpiece, delivered at TED.

(image)



On Instance and Static Method Signatures

Sat, 03 Nov 2007 21:04:38 GMT

We talked today, with Miha, a C# Yoda. Via IM, everything seemed pointless. We couldn't find a good case for the cause of the following mind experiment:

using System;

class Tubo
{
  public static void Test() {}
  private void Test() {}
}

Note: I'm using Miha's syntax in this post.

We have a static method called Test and an instance method, also called Test. The parameter models of both methods are the same: empty. Would this compile? It does not. The question is why, and who/what causes this.

There is actually no rationale behind not allowing this thing to compile, since both the compiler and the runtime know the method info upfront. Since the runtime has all the information it needs, it is strange that this would not be allowed at compile time. However, a (C#) compiler has to determine whether you, the programmer, meant to access a static or an instance method. Here lie the dragons.

It is illegal in most virtual machine based languages to have the same method name and signature for a static/instance method pair. The following is an excerpt from the Java specification:

8.4.2 Method Signature

Two methods have the same signature if they have the same name and argument types. Two method or constructor declarations M and N have the same argument types if all of the following conditions hold:

  • They have the same number of formal parameters (possibly zero)
  • They have the same number of type parameters (possibly zero)
  • Let <A1, ..., An> be the formal type parameters of M and let <B1, ..., Bn> be the formal type parameters of N. After renaming each occurrence of a Bi in N's type to Ai, the bounds of corresponding type variables and the argument types of M and N are the same.

Java (and also C#) does not allow two methods with the same name and parameter model, no matter what the access modifier is (public, private, internal, protected, ...) and whether the method is static or instance based. Why? Simple: programmer ambiguity. There is no technical reason not to allow it. Consider the following:

using System;

class Tubo
{
  public static void Test();
  private void Test();

  public void AmbiguousCaller() { Test(); }
}

What would the method AmbiguousCaller call? The static Test or the instance Test method? Can't decide? That's why this is not allowed. And yes, it would call the instance method if this were allowed, since statics in C# should be called using the class name, as in Tubo.Test(). Note that the preceding example does not compile. Also note that it would be legal to have the AmbiguousCaller body written as Test() or Tubo.Test().

There is another ambiguity-related rule. Members in C# cannot be named the same as their enclosing class. Therefore, this is illegal:

using System;

class Tubo
{
  private int Tubo;
}

It is. Do a csc /t:library on it. Since you confirmed this fact, consider the following:

using System;

public class Tubo
{
  public static void StaticMethod() {}
  public void InstanceMethod() {}
}

public class RunMe
{
  public static void Main()
  {
    Tubo Tubo = new Tubo();
    Tubo.InstanceMethod();
    Tubo.StaticMethod();
  }
}

[...]



WCF: Passing Collections Through Service Boundaries, Why and How

Thu, 27 Sep 2007 21:04:47 GMT

In WCF, collection data that is passed through the service boundary goes through a type filter - meaning you will not necessarily get the intrinsic service side type on the client, even if you're expecting it. No matter if you throw back an int[] or a List<int>, you will get the int[] by default on the client.

The main reason is that there is no representation for System.Collections.Generic.List or System.Collections.Generic.LinkedList in service metadata. The concept of System.Collections.Generic.List, for example, actually does not have a different semantic meaning from an integer array - it's still a list of ints - but will allow you to program against it with ease. Though, if one asks nicely, it is possible to guarantee the preferred collection type on the client proxy in certain scenarios.

Unidimensional collections, like List<T>, LinkedList<T> or SortedList<T>, are always exposed as T arrays in the client proxy. Dictionary<K, V>, though, is regened on the client via an annotation hint in WSDL (XSD if we are precise). More on that later.

Let's look into it. The WCF infrastructure bends over backwards to simplify client development. If the service side contains a really serializable collection (marked with [Serializable], not [DataContract]) that is also concrete (not an interface), and has an Add method with one of the following signatures...

public void Add(object obj);
public void Add(T item);

... then WCF will serialize the data to an array of the collection's type. Too complicated? Consider the following:

[ServiceContract]
interface ICollect
{
   [OperationContract]
   void AddCoin(Coin coin);

   [OperationContract]
   List<Coin> GetCoins();
}

Since List<Coin> supports a void Add method and is marked with [Serializable], the following wire representation will be passed to the client:

[ServiceContract]
interface ICollect
{
   [OperationContract]
   void AddCoin(Coin coin);

   [OperationContract]
   Coin[] GetCoins();
}

Note: the Coin class should be marked either with [DataContract] or [Serializable] in this case.

So what happens if one wants the same contract on the client proxy and the service? There is an option in the WCF proxy generator, svcutil.exe, to force generation of class definitions with a specific collection type. Use the following for List<T>:

svcutil.exe http://service/metadata/address
  /collectionType:System.Collections.Generic.List`1

Note: List`1 uses a back quote, not the normal single quote character.

What /collectionType (short form /ct) does is force generation of strongly typed collection types. It will generate the holy grail on the client:

[ServiceContract]
interface ICollect
{
   [OperationContract]
   void AddCoin(Coin coin);

   [OperationContract]
   List<Coin> GetCoins();
}

In Visual Studio 2008, you will even have an option to specify which types you want to use as collection types and dictionary collection types, as in the following picture:

(image)

On the other hand, dictionary collections, as in System.Collections.Generic.Dictionary<K, V> collections, will go through to the client no matte[...]



Approaches to Document Style Parameter Models

Mon, 24 Sep 2007 10:19:10 GMT

I'm a huge fan of document style parameter models when implementing a public, programmatic façade to business functionality that often changes.

public interface IDocumentParameterModel
{
   [OperationContract]
   [FaultContract(typeof(XmlInvalidException))]
   XmlDocument Process(XmlDocument doc);
}

This contract defines a simple method, called Process, which processes the input document. The idea is to define the document schema and validate inbound XML documents, while throwing exceptions on validation errors. The processing semantics is arbitrary and can support any kind of action, depending on the defined invoke document schema.

A simple instance document which validates against a version 1.0 processing schema could, for example, carry an Add operation with the operands 10 and 21. Another processing instruction, supported in version 1.1 of the processing schema, with different semantics, could carry a Store operation with a base64 payload ('77u/PEFwcGxpY2F0aW9uIHhtbG5zPSJod...mdVcCI'). Note that the default XML namespace changed between the versions, but that is not a norm. It only allows you to automate schema retrieval using a schema repository (think System.Xml.Schema.XmlSchemaSet), load all supported schemas and validate automatically.

public class ProcessService : IDocumentParameterModel
{
   public XmlDocument Process(XmlDocument doc)
   {
      XmlReaderSettings sett = new XmlReaderSettings();
      sett.Schemas.Add(/* target namespace */, /* schema URI */);
      ...
      sett.Schemas.Add(/* target namespace */, /* schema URI */);
      sett.ValidationType = ValidationType.Schema;
      sett.ValidationEventHandler += new
         ValidationEventHandler(XmlInvalidHandler);

      XmlReader books = XmlReader.Create(new StringReader(doc.OuterXml), sett);
      while (books.Read()) { }

      // processing goes here
      ...
   }

   static void XmlInvalidHandler(object sender, ValidationEventArgs e)
   {
      if (e.Severity == XmlSeverityType.Error)
         throw new XmlInvalidException(e.Message);
   }
}

The main benefit of this approach is decoupling the parameter model and method processing version from the communication contract. A service maintainer has the option to change the terms of processing over time, while supporting older version-aware document instances. This notion is of course most beneficial in situations w[...]



XmlSerializer, Ambient XML Namespaces and Digital Signatures

Wed, 19 Sep 2007 20:57:57 GMT

If you use the XmlSerializer type to perform serialization of documents which are digitally signed later on, you should be careful. XML namespaces which are included in the serialized form could cause trouble for anyone signing the document after serialization, especially in the case of normalized signature checks.

Let's go step by step. Suppose we have a simple schema, let's call it problem.xsd. This schema describes a problem, which is defined by a name (typed as string), severity (typed as integer), definition (typed as byte array) and description (typed as string). The schema also says that the definition of a problem has an Id attribute, which we will use when digitally signing a specific problem definition. This Id attribute is defined as a GUID, as the simple type GUIDType defines.

An instance document validating against this schema would carry, for example, the name 'Specific problem', a severity of 4, a definition of 'MD1sDQ8=' and the description 'This is a specific problem.' [...]



Oh my God: 1.1

Wed, 29 Aug 2007 16:40:16 GMT

Shame? Nokia?

Same sentence, as in Shame and Nokia?

There is just no pride in IT anymore. Backbones are long gone too.

(image)



Oh my God: 1.0

Wed, 29 Aug 2007 16:35:38 GMT

This post puts shame to a new level.

There is no excuse for having Microsoft Access database serving any kind of content in an online banking solution.

The funny thing is that even the excuses in the comments seem fragile. They obviously just don't get it. The bank should not defend its position, but focus on changing it immediately.

So, they should fix this ASAP, then fire PR, then apologize.

Well done, David, for exposing what should never reach a production environment.

Never. Ever.

(image)



Apple vs. Dell

Wed, 08 Aug 2007 18:37:18 GMT

This is why one should buy the best of both worlds. Mac rules on the client. Dell is quite competitive on the (home) server market.

We don't care about cables around servers. Yet.

(image)

So? 'nuff said.

(image)



Windows Vista Performance and Reliability Updates

Wed, 08 Aug 2007 06:13:20 GMT

These have been brewing for a couple of months. They're out today.

(image)

They contain a number of patches that improve and fix Vista. You can get them here:

Windows Vista Performance Update:

Windows Vista Reliability Update:

Go get them. Now.

(image)



Midyear Reader Profile

Thu, 26 Jul 2007 20:58:25 GMT

Since it's summer here in Europe, and thus roughly the middle of the year, here comes your, Dear Reader, profile.

Mind you, only last year's data (June 2006 - June 2007) is included, according to Google Analytics.

Browser Versions:

(image)

Operating Systems:

(image)

Browser Versions and Operating Systems:

(image)

Screen Resolutions:

(image)

Adobe Flash Support:

(image)

And finally, most strangely, Java Support:

(image)

The last one surprised me.

(image)



Managed TxF: Distributed Transactions and Transactional NTFS

Mon, 23 Jul 2007 20:54:13 GMT

Based on my previous post, I managed to get a distributed transaction scenario working using WCF, MTOM and WS-AtomicTransaction. This means that you have the option to transport arbitrary files, using transactional ACID semantics, from the client, over HTTP and MTOM.

The idea is to integrate a distributed transaction with TxF, or NTFS file system transactions. This only works on Windows Server 2008 (Longhorn Server) and Windows Vista.

Download: Sample code

If the client starts a transaction, then all files within it should be stored on the server. If something fails or the client does not commit, no harm is done. The beauty of this is that it's all seamlessly integrated into the current communication/OS stack. This is shipping technology; you just have to dive a little deeper to use it.

Here's the scenario:

(image)

There are a couple of issues that need to be addressed before we move to the implementation:

  • You should use the managed wrapper included here. There is support for TransactedFile and TransactedDirectory built in. The next version of the VistaBridge samples will include an updated version of this wrapper.

  • There is limited distributed transaction support on the system drive. There is no way to give DTC a superior access coordinator role for TxF on the system drive (think c:\ system drive). This is a major downside in the current implementation of TxF, since I would prefer that system/boot files be transaction-locked anyway.

You have two options if you want to run the following sample:

  1. Define a secondary resource manager for your directory. This allows the system drive resource manager to still protect system files, but creates a secondary resource manager for the specified directory. Do this:

     fsutil resource create c:\txf
     fsutil resource start c:\txf

     You can query your new secondary resource manager with fsutil resource info c:\txf.

  2. Use another partition. Any partition outside the system partition is OK. You cannot use network shares, but USB keys will work. Plug one in and change the paths as defined at the end of this post.

OK, here we go. Here's the service contract:

[ServiceContract(SessionMode = SessionMode.Allowed)]
interface ITransportFiles
{
   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   byte[] GetFile(string name);

   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   void PutFile(byte[] data, string name);
}

We allow the sessionful binding (it's not required, though) and allow transactions to flow from the client side. Again, transactions are not mandatory, since the client may opt out of using them and just transport files without a transaction. The provided transport mechanism uses MTOM, since the contract's parameter model is appropriate for it and because it's much more effective at transferring binary data.

So here's the service config: [...]
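To make the flow concrete, here's a minimal client-side sketch of how such a call looks when the ambient transaction flows to the service. The TransportFilesClient proxy name is hypothetical (whatever svcutil generates for ITransportFiles); the rest is the stock System.Transactions pattern:

using System;
using System.IO;
using System.Transactions;

class Client
{
   static void Main()
   {
      using (TransactionScope ts = new TransactionScope())
      {
         // assumed, svcutil-generated proxy for ITransportFiles
         TransportFilesClient proxy = new TransportFilesClient();
         proxy.PutFile(File.ReadAllBytes(@"d:\txf\sample.bin"), "sample.bin");

         // if Complete() is never called, the file never shows up on the server
         ts.Complete();
      }
   }
}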



WS-Management: Windows Vista and Windows Server 2008

Sun, 22 Jul 2007 18:48:22 GMT

I dived into WS-Management support in Vista / Windows Server 2008 this weekend. There are a couple of caveats if you want to enable remote WS-Management based access to these machines. Support for remote management is also built into Windows Server 2003 R2.

The WS-Management specification allows remote access to any resource that implements the specification. Everything accessed in a WS-Management world is a resource, which is identifiable by a URI. The spec uses WS-Eventing, WS-Enumeration, WS-Transfer and SOAP 1.2 via HTTP.

Since the remote management implementation in Windows acknowledges all the work done in the WMI space, you can simply issue commands in terms of URIs that incorporate WMI namespaces. A WMI class or action (method) is identified by a URI, just as any other WS-Management based resource. You can construct access to any WMI class / action using the following semantics:

  • http://schemas.microsoft.com/wbem/wsman/1/wmi denotes the default WMI namespace accessible via WS-Management
  • http://schemas.microsoft.com/wbem/wsman/1/wmi/root/default denotes access to the root/default namespace
  • Since the majority of WMI classes are in the root/cimv2 namespace, you should use http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2 to access those

OK, back to WS-Management and its implementation in Vista / Windows Server 2008. First, Windows Server 2008 has the Windows Remote Management service started up by default. Vista doesn't. So start it up if you're on a Vista box. Second, depending on your network configuration, if you're in a workgroup environment (not joined to a domain), you should tell your client to trust the server side. Trusting the server side involves executing a command on the client.

Remote management tools included in Windows Server 2008 / Windows Vista are capable of configuring the local machine and issuing commands to a remote machine. There are basically two tools which allow you to set up the infrastructure and issue remote commands to the destination:

  • winrm.cmd (uses winrm.vbs), defines the configuration of the local machine
  • winrs.exe (winrscmd.dll and friends), the Windows Remote Shell client, issues commands to a remote machine

As said, WS-Management support is enabled by default in Windows Server 2008. This means that the appropriate service is running, but one should still define basic configuration on it. Nothing is enabled by default; you have to opt in. Since Microsoft is progressing to a more admin friendly environment, this is done by issuing the following command (server command):

winrm quickconfig (or winrm qc)

This enables the obvious:

  • Starts the Windows Remote Management service (if not started; in Windows Vista's case)
  • Enables autostart on the Windows Remote Management service
  • Starts up a listener for all of the machine's IP addresses
  • Configures appropriate firewall exceptions

You should get the following output:

[c:\windows\system32]winrm quickconfig
WinRM is no[...]



Managed TxF: Support in Windows Vista and Windows Server 2008

Fri, 20 Jul 2007 14:59:16 GMT

If you happen to be on a Windows Vista or Windows Server 2008 box, there is some goodness going your way. There is a basic managed TxF (Transactional NTFS) wrapper available (unveiled by Jason Olson).

What this thing gives you is this:

try
{
   using (TransactionScope tsFiles = new TransactionScope
      (TransactionScopeOption.RequiresNew))
   {
      WriteFile("TxFile1.txt");
      throw new FileNotFoundException();
      WriteFile("TxFile2.txt");   // never reached -- the exception above aborts the transaction
      tsFiles.Complete();
   }
}
catch (Exception ex)
{
   Console.WriteLine(ex.Message);
}

The WriteFile method, which does, well, the file writing, is here:

static void WriteFile(string filename)
{
   using (TransactionScope tsFile = new TransactionScope
      (TransactionScopeOption.Required))
   {
      Console.WriteLine("Creating transacted file '{0}'.", filename);

      using (StreamWriter tFile = new StreamWriter(TransactedFile.Open(filename,
         FileMode.Create,
         FileAccess.Write,
         FileShare.None)))
      {
         tFile.Write(String.Format("Random data. My filename is '{0}'.",
            filename));
      }

      tsFile.Complete();
      Console.WriteLine("File '{0}' written.", filename);
   }
}

So we have a nested TransactionScope with a curious type - TransactedFile. Mind you, there is support for TransactedDirectory built in as well. What's happening underneath is awesome. The wrapper talks to the unmanaged implementation of TxF, which is built into every Vista / Windows Server 2008 box. What you get is transactional file system support with System.Transactions. And it's going to go far beyond that.

I wrote some sample code, go get it. Oh, BTW, remove the exception line to see the real benefit.

Download: Sample code

This sample is provided without any warranty. It's a sample, so don't use it in production environments. [...]
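For the curious, the "talks to the unmanaged implementation" part boils down to interop along these lines. This is a hedged sketch of the underlying mechanics (constants inlined, kernel handle cleanup omitted), not the wrapper's actual source:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Transactions;
using Microsoft.Win32.SafeHandles;

static class TxFSketch
{
   // COM interface exposing the kernel transaction (KTM) handle
   // behind a DTC transaction.
   [ComImport]
   [Guid("79427A2B-F895-40e0-BE79-B57DC82ED231")]
   [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
   interface IKernelTransaction
   {
      void GetHandle(out IntPtr handle);
   }

   [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
   static extern SafeFileHandle CreateFileTransacted(
      string fileName, uint desiredAccess, uint shareMode,
      IntPtr securityAttributes, uint creationDisposition,
      uint flagsAndAttributes, IntPtr templateFile,
      IntPtr transaction, IntPtr miniVersion, IntPtr extendedParameter);

   public static FileStream OpenForTransactedWrite(string path)
   {
      // Promote the ambient System.Transactions transaction to DTC
      // and grab the kernel transaction handle behind it.
      IKernelTransaction ktx = (IKernelTransaction)
         TransactionInterop.GetDtcTransaction(Transaction.Current);
      IntPtr hTx;
      ktx.GetHandle(out hTx);

      // GENERIC_WRITE = 0x40000000, CREATE_ALWAYS = 2, FILE_ATTRIBUTE_NORMAL = 0x80
      SafeFileHandle handle = CreateFileTransacted(path, 0x40000000, 0,
         IntPtr.Zero, 2, 0x80, IntPtr.Zero, hTx, IntPtr.Zero, IntPtr.Zero);

      if (handle.IsInvalid)
         throw new IOException("CreateFileTransacted failed for '" + path + "'.");

      return new FileStream(handle, FileAccess.Write);
   }
}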



Problem: Adding custom properties to document text in Word 2007

Mon, 09 Jul 2007 20:44:50 GMT

There is some serious pain going on when you need to add a simple custom document property into multiple Word 2007 text areas.

Say you have a Version property that you need to update using the document property mechanics. And say you use it in four different locations inside your document.

  • There is no ribbon command for it. There was a menu option in Word 2003 days.
  • There is no simple way of adding it to The Ribbon. You have to customize the Quick Access Toolbar and stick with its ugly, limited-use icons more or less forever.
    • You need to choose All Commands in Customize Quick Access Toolbar to find the Insert Field option.
  • This is not the only limitation for a power user. Every simplification for the casual user is a limitation for the power user. And yes, I know, casual users win the numbers battle.

(image)

So:

  1. Right-click The Ribbon and select Customize Quick Access Toolbar
  2. Select All Commands and Insert Field
  3. Add it to Custom Quick Access Toolbar
  4. Click the new icon
  5. In Field names select DocProperty
  6. Select your value, in this example Version

Yes. Ease of use.
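One small consolation for keyboard people, if you'd rather skip the toolbar dance entirely: Ctrl+F9 inserts an empty field directly in the text, where you can type the field code yourself, then press F9 to update it. Assuming your property is named Version, the field code would be:

   { DOCPROPERTY Version }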

Please, give me an option to get my menus and keyboard shortcuts back.

Pretty please.

 

(image)



Apple iPhone Hacking

Sun, 08 Jul 2007 13:20:10 GMT

There's been some serious iPhone hacking going on over the last week or so.

Here's the wiki page, which has lots of information on the project. The main devs working on the unlock problem are from the US and Hungary, as it seems.

The forum is here. Deliverables here.

The progress is steady. There is now a way to issue commands to the system, including moving files, activation (which was achieved in three days) and directory listing inside the sandboxed filesystem.

I'm following the progress (with Hana - she's cheering along), because it's fun to know the internals of a locked down Apple device.

(image)



Out the Door: WS-ReliableMessaging 1.1

Tue, 03 Jul 2007 14:58:29 GMT

WS-RM 1.1 is finished. GoodTimes™.

OASIS published two specs:

  • WS-ReliableMessaging 1.1
  • WS-ReliableMessaging Policy 1.1

WCF, as it turns out, will have support for WS-RM 1.1 in Orcas. On that note, there is a new CTP out this week.

(image)



All Things Digital: Jobs + Gates

Thu, 31 May 2007 12:14:25 GMT

Microsoft Windows Vista Ultimate, $499.

Apple iPhone, $599.

Jobs and Gates sitting together. Priceless.

(image)



Durable Messaging Continues

Wed, 30 May 2007 20:46:14 GMT

Nick is continuing my discussion on durable messaging support in modern WS-* based stacks, especially WCF.

I agree that having simple channel configuration support for directing messages into a permanent information store (like SQL) would be beneficial.

It's a simple idea that begs for an alternative implementation of a WCF extensibility point, but it raises some questions:

  • What happens when messages are (or should be) exchanged in terms of a transaction context?
  • How can we demand transaction support from the underlying datastore if we don't want to put limitations on anything residing behind the service boundary?
  • What about security contexts? How can we acknowledge a secure, durable sessionful channel after two weeks of service hibernation downtime? Should security keys still be valid just because the service was down and not responding all that time?
  • Is durability limited to client/service recycling or non-memory message storage? What about both?

Is [DurableService] enough?
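For context, the Orcas answer looks roughly like this — a minimal sketch using the [DurableService] / [DurableOperation] attributes from System.ServiceModel.Description, assuming a persistence provider (the SQL one, for example) is wired up in config; the contract and type names are illustrative:

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IShoppingCart
{
   [OperationContract]
   void AddItem(string item);

   [OperationContract]
   void Checkout();
}

// Instance state is serialized to the configured persistence store between
// calls, so it survives recycling -- which addresses only the last of the
// questions above; transaction and security context lifetimes remain open.
[Serializable]
[DurableService]
public class ShoppingCartService : IShoppingCart
{
   private List<string> items = new List<string>();

   [DurableOperation(CanCreateInstance = true)]
   public void AddItem(string item)
   {
      items.Add(item);
   }

   [DurableOperation(CompletesInstance = true)]
   public void Checkout()
   {
      // Process the order; completing the instance removes its state
      // from the persistence store.
   }
}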

(image)



NTK 2007: About Being Better

Mon, 21 May 2007 22:13:24 GMT

Having posted a disturbing post last year, I have a moral obligation to repost my current state of mind. This year's NT conference was way better, especially regarding fun-activating activities. It's not my cup of tea when things are being cut, but I was impressed by the plethora of activities that were not present this year. Yeah, I know. When you've got activities that are not present, you're in it deep. Clarifying it, here it goes. I like it when things are not there (regarding fun during the NT conference):

  • I like the fact that sponsors and partners were not given carte blanche.
  • I like the fact that there were no naked ladies running around. Doh!
  • I like the fact that I like the fact that there were no naked ladies running around.
  • I like the fact that there was not enough free beer.
  • I like the fact that parties were not too 'lomaniac.
  • I like the fact that we, speakers, were pressured again.
  • I like the fact that, above all, this was a step forward.

What I don't like (in the 2007 incarnation):

  • I don't like keynotes that have (almost, sorry) no content.
  • I don't like keynotes that have no cues, allowing people to leave with no impression.
  • I especially don't like keynotes that, having a plethora of technology to show, don't make attendees drool their asses off.
  • I don't like 30-minute breaks. Sorry, too long.
  • I don't like that attendees have no free evenings. They need them to reflect on what they've heard.
  • And, as stated previously (and again), I don't like parties happening every evening at a technical conference. But that might just be the rule I have.

If NTK continues in this year's fashion, we all did a good job. If only next year they would add some pre/post-conference options (hint) for the technically savvy. Luke, Kamenko, this is a major contribution to being on the right track again. Kudos.

[This post can, and probably will, position me in the nerd crowd. It is not my intention. For you who know me personally, you know what I'm talking about. At a certain point of fun, everybody has enough.] [...]



CoreCLR Exposed

Sat, 12 May 2007 20:42:30 GMT

Jamie Cansdale of TestDriven.NET fame just posted an intro post describing the hosting of Silverlight's CLR - CoreCLR.

Having the ability to host the runtime inside a Win32/.NET process opens up new possibilities, especially running Silverlight apps outside the browser or developing test harnesses/unit tests for Silverlight-targeted object models.

Silverlight 1.1 alpha and CoreCLR currently expose a very trimmed down version of the Framework's BCL. This will change by RTM, but be aware that its sweet spot is still the browser.

I can imagine a world where Silverlight apps will not only be media/presentation related. Currently, the competing platform, though having a strong install base, has not reached into that space.

You could arm Flash developers with rocket launchers, and they would still lose the war against .NET developers armed with chopsticks. Every day of the week and twice on Sunday. I believe the number of souls will significantly influence the outcome in this case.

(image)



Silverlight Already on Microsoft.com

Sat, 12 May 2007 20:05:11 GMT

Today, Microsoft.com put a Silverlight-enabled video of the Xbox 360 Elite on the site's front page.

This must be one of the earliest technology adoptions for Microsoft.com. They are hosting the app on a 1.0 beta version of Silverlight, and it got me thinking: why would a high-traffic site want to do this?

My conclusion?

Answer this: What's the quickest way to install the Silverlight runtime to a broad user base without using a ship vehicle?

Admittedly, installing the runtime is quick, painless and benign for your machine. As it should be.

Try it out. Then visit some sample galleries.

(image)



Microsoft NT Conference 2007: Talks

Sat, 12 May 2007 19:20:38 GMT

It's the time of the year again when the biggest local show gets going.

This year, I'm going to deliver three talks:

Tuesday, 15.5.2007, 18:00, Room D, WCF+Workflow: Et tu, Orcas?

Drilldown of: WCF support for syndication, POX, JSON, WCF + WF integration and WF instance correlation support.

Wednesday, 16.5.2007, 09:00, Emerald 1, MythBusters: Fat Clients and Thin Servers (with Dušan)

Drilldown of: Will not disclose. Stop by (if you dare). :)

Wednesday, 16.5.2007, 15:00, Emerald 2, WCF: About Messaging Sessions

Drilldown of: WCF messaging sessions, what they are, how to handle them, what's the underlying story.

Also, if you are a journalist, stop by for a press conference on Tuesday, 15.5.2007 at 12:00 in the Press Center. Together with a client, we have a discussion on digital document archives, legislation and best practices.

(image)