ChristophDotNet


Fault Domains, private registries and node setup scripts. My DC/OS acs-engine forks

Mon, 14 Aug 2017 15:16:00 GMT

Customers of mine have asked for some additional capabilities for DC/OS clusters created by acs-engine:

1. Make Azure Fault Domain and Upgrade Domains available to allow deployments to be FD/UD aware and lower risk of outages and data loss. The fork attaches attributes to each node that reflect the fault domain and the update domain the agent node is in: xtoph/fault-domain fork

2. Enable DC/OS to pull containers from private container registries or the Azure Container Registry: xtoph-registry fork

3. Run configuration scripts on agent nodes as part of the provisioning process: agentscript fork

To specify registry credentials, see the docs.

To add a custom script:

  • Clone the repo: git clone
  • cd acs-engine
  • Switch to the agentscript fork: git checkout agentscript
  • Edit parts/ to perform the configuration tasks you require on each agent node
  • Rebuild acs-engine. For example, launch the dev environment container using scripts/ and then run make build
  • Edit examples/dcos.json to customize your DNS names and your SSH public key. Perform other edits to customize your cluster architecture.
  • Run ./bin/acs-engine generate examples/dcos.json to create the ARM templates. The generated templates are in _output/
  • Provision the DC/OS cluster: az group deployment create -g --template-file _output//azuredeploy.json --parameters @_output//azuredeploy.parameters.json


Legacy ASP.NET 2.0 on IIS and SQL Express in a Windows Container

Wed, 26 Jul 2017 20:16:00 GMT

Looking for a demo of containerizing a legacy application (ASP.NET 2.0 WebForms can safely be called legacy, right?) into a Windows Server container with IIS and SQL Express?

Take a look at my repo to build a Windows Server Core container running the ASP.NET BeerHouse Starter Kit to get you started.

UPDATE: The repo now also contains files to deploy the container into a Service Fabric or a kubernetes cluster to demonstrate running a legacy app on a modern application platform.

Building kubernetes on Windows 10

Wed, 31 May 2017 17:50:10 GMT

Cool find of the day - thanks to @brendandburns.

You can build kubernetes from scratch in bash on Windows 10, provided you pick the right working directory.

If you try from your User directory, e.g. C:\Users\joe\code\kubernetes, the build will fail because WSL cannot create the folders with a ':' in their names that the build process requires.

/mnt/c/Users/cschittk/repos/kubernetes$ ./hack/
+++ [0530 15:13:25] Verifying Prerequisites....
mkdir: cannot create directory ‘/mnt/c/Users/cschittk/repos/kubernetes/_output/images/kube-build:build-731ef7ee24-5-v1.8.3-1’: Invalid argument
!!! [0530 15:13:39] Call tree:
!!! [0530 15:13:39]  1: ./hack/ kube::build::build_image(...)
!!! Error in ./hack/../build/
  Error in ./hack/../build/ '((i<3-1))' exited with status 1
Call stack:
  1: ./hack/../build/ kube::build::build_image(...)
  2: ./hack/ main(...)
Exiting with status 1

That's because /mnt/c is a FAT or NTFS backed file system mounted as drvfs, which has a few limitations inherent to the underlying file system.

However, bash's root drive is not entirely drvfs, even though it lives on the same underlying file system. When you take a closer look, you'll see that / and other folders are mounted using WSL's VolFs (shown as lxfs), which is much closer to a native Linux file system:

$ mount
rootfs on / type lxfs (rw,noatime)
data on /data type lxfs (rw,noatime)
C: on /mnt/c type drvfs (rw,noatime)
root on /root type lxfs (rw,noatime)
home on /home type lxfs (rw,noatime)


Even though rootfs lives on your NTFS drive at %LocalAppData%\lxss\rootfs, WSL can create those paths:
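As a sanity check, the same colon-containing directory name the build needs can be created on any Linux-native file system; a minimal sketch, using a temp directory as a stand-in for a VolFs-backed path:

```python
import os
import tempfile

# The directory name mirrors the one from the failing mkdir above.
# On a Linux-native file system (like WSL's VolFs) the ':' characters
# are legal, so makedirs succeeds; on a drvfs mount like /mnt/c the
# same call fails with "Invalid argument".
base = tempfile.mkdtemp()
path = os.path.join(base, "_output/images/kube-build:build-731ef7ee24-5-v1.8.3-1")
os.makedirs(path)
print(os.path.isdir(path))
```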


and the kubernetes build will succeed in bash on Windows 10.

How many assigned users do I have? The Graph API has the answer

Wed, 02 Sep 2015 18:02:17 GMT

My last post about Azure AD was about enabling single sign-on by letting your customers provision your application into their Azure AD tenant. SSO is important to make the end user login experience seamless - who likes to put in credentials all the time? As a service provider, you have other interests as well. Quite importantly, you may want to charge for the service you're offering. How do you do that when you allow user provisioning in your customers' tenants? You can't simply query your own identity store for how many users are registered, because you allowed your customers to administer access in their directory.

Fortunately, we have the Consent Framework when we allow a customer to provision an application into their tenant. The customer's administrator has to agree to the level of access that the application will have to their directory. In the SSO scenario, the application only needs access to user profiles in the directory to authenticate and read user profile information. We can, however, request additional permissions when the app is provisioned to gain access to data that will allow us to report on users with access to the application in each tenant. First, we configure the application to require Directory Read Access. Now when an administrator provisions the application, it requires consent for read access to the directory in order to get provisioned.

Now that we have consent to access the directory, we can query it with the Graph API. Let's see how that works. First, we find the application's ServicePrincipal in the directory. The ServicePrincipal represents the application in the directory and is the entity to which user access grants are assigned. Details about the ServicePrincipal are on Vittorio's blog. Note that the ServicePrincipal in the customer's directory kept the same displayName and that the appId is the Client ID from the directory where the application was originally published.
With that knowledge, we can query the graph for the ServicePrincipal, for example with code like this:

ActiveDirectoryClient activeDirectoryClient = new ActiveDirectoryClient(
    serviceRoot,
    () => AuthenticationHelper.GetTokenFromRefreshTokenAsync(cred.RefreshToken));

var principals = activeDirectoryClient.ServicePrincipals
    .Where(principal => principal.AppId.Equals("59f88d84-651c-4444-b33c-c587f6812b8f"))
    .ExecuteAsync().Result.CurrentPage.ToList();

The ActiveDirectoryClient class comes from the Active Directory Graph Client Library. From the ServicePrincipal we can query the appRoleAssignedTo navigation property to get the users that are assigned access to that app.

var sp = principals.FirstOrDefault();
var userAssignments = (sp as IServicePrincipalFetcher).AppRoleAssignedTo.ExecuteAsync().Result;

It's worth pointing out that in order to get the AppRoleAssignments, we have to query via an IServicePrincipalFetcher. The IPagedCollection of AppRoleAssignment property sp.AppRoleAssignedTo is empty when you access it from the ServicePrincipal directly. Putting all this together, we can issue Graph API queries to all tenants that provisioned the application and build a report on authorized users in each of those tenants. I've built a little page to show me the users from various tenants that have access to my application.

By the way, the same pattern with the Fetcher also applies to other navigation properties. For example, to get the oauth2PermissionGrants, you'd again query using an IServicePrincipalFetcher:

var fetcher = sp as IServicePrincipalFetcher;
IPagedCollection grants = fetcher.Oauth2PermissionGrants.ExecuteAsync().Result;

Now you can query and report on the users in tenants that use your applications without having to build your own tracking system. That's quite essential when you're looking to bill for your service. [...]
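For readers outside .NET, the same ServicePrincipal lookup can be expressed as a plain REST query against the Graph API. A sketch of building the query URL - the tenant name is a placeholder, the appId is the one from the example above:

```python
from urllib.parse import quote

def service_principal_query(tenant, app_id, api_version="1.6"):
    # Builds the Azure AD Graph REST query equivalent to the
    # ServicePrincipals.Where(...) LINQ expression above.
    # The $filter value must be URL-encoded.
    filt = quote("appId eq '{0}'".format(app_id))
    return ("https://graph.windows.net/{0}/servicePrincipals"
            "?$filter={1}&api-version={2}").format(tenant, filt, api_version)

# contoso.onmicrosoft.com is a placeholder tenant name.
print(service_principal_query("contoso.onmicrosoft.com",
                              "59f88d84-651c-4444-b33c-c587f6812b8f"))
```

Issue the resulting GET with a bearer token for the customer's tenant and you get the same ServicePrincipal object the client library returns.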

Managing Azure Machine Learning Service with API Management

Tue, 11 Aug 2015 15:00:54 GMT

Azure's Machine Learning service is one of my favorite examples of how the cloud makes things easy that would be really hard to do on premises. You get an idea of the mind-boggling things you can build with pattern recognition, sentiment analysis, image processing, etc. when you browse the ML Gallery. However, once you've built and published your ML Web Service in Azure, then what? Of course you can integrate your new algorithm in your own applications. But what if you also want to publish an API to monetize your unique algorithm through an additional channel?

Publishing an API takes more than making your service available through a URL. There are a few additional considerations:

  • Restricting access to paying customers
  • Offering a trial version with a limited number of API calls to win new customers
  • Reporting how your API is used and by whom to improve your API
  • Reporting for billing
  • Limiting the call volume on the service to ensure all customers have a consistent experience
  • Tracking usage for capacity planning

That's a pretty long TODO list. The good news is that Azure API Management already provides all that functionality. You just need to front end your Azure ML Web Service with an API Management endpoint, configure the service and voila, you're up and running. Let's take a closer look at how this works.

We start with a published Machine Learning Web Service. This one is based on the Credit Risk Prediction sample from the gallery. The service already comes with some API documentation, similar to what's available in the API Management Developer Portal. It provides the minimum necessary documentation to call the service, but it doesn't come with the rich analytics, the access control capabilities or the policy engine offered by API Management. The documentation, which we get when we click on the Request/Response link, provides most of the information we need to configure API Management to front end the ML web service.
You need the Endpoint Address for the Service to set up the API: the OData Endpoint goes into the Web service URL field. The Web API URL Suffix identifies this service within your API namespace. In this case we'll call it CreditRisk.

You'll also notice an additional benefit of API Management: your public API is really yours. You're not bound by the URL format defined by Azure ML. Instead of a URL cluttered with confusing identifiers, your URL is simple and its name reflects the capability you're providing.

Next we define the specifics of the API operation. When you look at the request documentation, you'll notice that the Web service URL didn't include the resource and the query string. First, make sure you select the POST verb. The URL template allows us to define an operation name that expresses the function performed by the operation. Instead of the generic name "execute" we can name the operation "score" to make the intent clear and avoid naming confusion with other ML-based services. The URL Rewrite template allows us to simplify the operation signature further. We can move the ML-specific parameters api-version and details into the rewrite template since they are meaningless to your customer.

Now we've got the API and the operation configured. Make sure you also add the API to a Product to enable consumption of the API. Now, let's test the setup from the Developer Portal by clicking on the big blue "Try It" button. Once you've navigated to the Try API page, double check that the Ocp-Apim-Subscription-Key is populated from your profile. If not, you must still associate the API with a Product. Note that Content-Type is set to application/json, the Request Body was prepopulated from the representation that we provided earlier and the super cool Trace is turned on so we can troubleshoot any problems. Now click Send and voila … 401 Unauthorized?? Well … I guess we're not quite there yet. Let's take another lo[...]
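The call the Developer Portal sends can be sketched like this - the gateway host name and subscription key are placeholders, while CreditRisk/score is the public operation configured above; the api-version and details parameters live in the rewrite template, so the caller never sees them:

```python
from urllib.request import Request

# Hypothetical gateway host; only the friendly CreditRisk/score path is
# public. API Management rewrites it to the ML service's execute endpoint.
req = Request(
    "https://myapis.azure-api.net/CreditRisk/score",
    data=b'{"Inputs": {}}',  # request body shape comes from the ML docs
    headers={"Ocp-Apim-Subscription-Key": "<your key>",
             "Content-Type": "application/json"},
    method="POST")
print(req.get_method(), req.full_url)
```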

Setting up Web App Multitenant Azure AD Sample

Tue, 04 Aug 2015 12:50:00 GMT

Multitenancy in Azure AD is a very cool concept with the Consent Framework if you're an ISV or a Service Provider building a cloud service that you're looking to monetize. When you're building a service, you inevitably have to control access to your services with user identities. But that brings on the headaches and extra cost of identity management: How do I turn off access when a registered user leaves my customer's company? How much does your helpdesk provider charge for a simple password reset call? Multitenancy in Azure AD is a way around that. It also allows your customer to take full advantage of their investments into managing corporate identities for accessing Office 365 or advanced security features like Advanced Threat Analytics or Privileged Identity Management. It also doesn't cost you, the service provider, anything; it's up to your customer to decide which Azure AD capabilities they require, what they already have and how much they want to pay.

You can start exploring multi-tenancy in Azure with the WebApp-MultiTenant-OpenIdConnect-DotNet sample on github. Simply:

  • Clone the repo
  • Follow the configuration steps for the App
  • Configure a connection string for the Entity Framework data store (see below)
  • Publish to an Azure Website (or publish using Visual Studio 2015)

Configuring the Entity Framework data store is super simple, too. Since you're hosting the app in Azure, it makes sense to use an Azure SQL Database as the data store for your app. A Basic Edition will do just fine since this is just a sample app you're using to explore the functionality. Once you've created the database, you add a connection string setting to the web.config.

That's all you need to get going with the sample. There is one more thing to note when you Sign Up, i.e. when you go through the scenario where your customer provides consent to provision the app into their own Azure AD tenant to allow their users to access the app.
You trigger the consent code path by checking the little check box on the Sign Up screen. This indicates that you're signing in with the admin account of another Azure AD directory. This can be another directory within the same subscription to simplify things for testing purposes, or a directory in an entirely different tenant to simulate the real scenario of your customer signing up. The app will redirect you to the Azure AD sign in page. When you sign in with an account that has the Global Administrator or Service Administrator role assigned, you'll also get the consent page as part of the sign in flow. Now when you take a look at the Applications my company uses in the directory, you see that the app has been provisioned.

Be sure you're signing in with a user account that's homed in that directory, i.e. NOT the Microsoft ID you may be using to sign into the Azure Management portal. When you do, you get AADSTS50020: User account '' from external identity provider '' is not supported for application '. The account needs to be added as an external user in the tenant. Please sign out and sign in again with an Azure Active Directory user[...]

Connecting EventHubs to API Management

Fri, 19 Jun 2015 16:18:41 GMT

In the previous posts, we looked at:

  • Authenticating callers using Groups in API Management Policies
  • Exposing Strongly Typed APIs for Azure Queues with API Management Policies

In the third article of the Azure API Management series, my esteemed (highly esteemed) colleague Ali Baloch documented how to front-end Event Hubs with API Management with a next-to-no-code solution. Thank you, Ali.

We're assuming you already have:

  • An Event Hub already created
  • An API Management instance already created

Azure API Management allows you to take any backend, invoke any HTTP verb (POST/GET/PUT/DELETE) and also provides security, throttling and scaling capabilities. Event Hubs can be invoked using a POST request, which has the following three main parts (see MSDN):

  • URL: Where to send the message
  • Headers: Authentication/Authorization information using Shared Access Signatures (SAS)
  • Body: The actual message

Steps:

1. Create an API and specify the web service URL of the Event Hub: http{s}://{serviceNamespace}{eventHubPath}/publishers/{deviceId}/messages
2. (Optional) Create a product and add the API to the product.
3. Create a policy for the API. This is needed to add the SAS security headers.

Make sure sr and sig are hex enc[...]
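The policy's SAS header computation, elided in this feed, can be sketched in Python. The namespace, hub, policy name and key below are placeholders:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

def generate_sas_token(uri, key_name, key, ttl_seconds=3600):
    # Builds an Event Hubs Shared Access Signature of the form
    # SharedAccessSignature sr=<uri>&sig=<sig>&se=<expiry>&skn=<key name>,
    # signing "<url-encoded uri>\n<expiry>" with HMAC-SHA256.
    expiry = str(int(time.time() + ttl_seconds))
    string_to_sign = quote_plus(uri) + "\n" + expiry
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest())
    return ("SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}"
            .format(quote_plus(uri), quote_plus(signature.decode()),
                    expiry, key_name))

# Placeholder namespace, hub, policy and key for illustration.
token = generate_sas_token(
    "https://mynamespace.servicebus.windows.net/myeventhub",
    "SendPolicy", "c2VjcmV0a2V5")
print(token)
```

The resulting token goes into the Authorization header that the policy adds to each POST.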

API Management - Plays Well with Other Azure Services

Thu, 11 Jun 2015 13:35:47 GMT

In the first post of the series, we took a quick look at API Management for layering a very basic authentication scheme over an existing service. That works great when you have an existing service API with an HTTP interface, but what do you do when your existing service listens on a queue? If it's an Azure Queue or a ServiceBus Queue, which expose REST APIs, then you can layer a developer-friendly, strongly typed API over that generic queue API and, as a side benefit, get usage analytics, enforce call limits and add user management beyond the Shared Access Signature keys you get from the Azure Storage Service.

Let's look at the Azure Storage API for a second. The REST interface exposed by the Azure Storage Service authenticates based on a Shared Access Signature and the APIs are, as we would expect, generic queueing APIs. API Management helps simplify programming against these queues since we can "translate" the security approach based on shared keys to user accounts for each caller of the API.

The sample below front-ends Azure queues with an API Management API. The "translation" is implemented via API operations with a pair of policies. The inbound policy transforms the queue-specific API call into an Azure queue REST request. The outbound policy in this example simply extracts the message from the XML-formatted queue message and returns it to the caller. The steps for front-ending a ServiceBus queue are very similar. I'll post a sample within the next few days.

The approach with policies is very straightforward. There are only a few things related to the Azure queue interface in the sample below. The Azure Queues API requires a signature over the headers of the HTTP request. That signature is comprised of:

  • The standard HTTP headers
  • The time header
  • Canonical resource and canonical query parameters

All signed with the shared key for the storage account.
In addition, the timestamp header in the signature has to match the actual header, which also has to be within 15 minutes of the current time to prevent replay attacks. The policy below shows how to create the signature and the timestamp headers.

@{
    string accountName = "";
    string sharedKey = "";
    string sig = "";
    string canonical = String.Format(
        "GET\n\n\n\nx-ms-date:{0}\n/{1}/myqueue/messages",
        context.Variables.GetValueOrDefault("UTCNow"), accountName);

    using (HMACSHA256 hmacSha256 = new HMACSHA256(Convert.FromBase64String(sharedKey)))
    {
        Byte[] dataToHmac = Encoding.UTF8.GetBytes(canonical);
        sig = Convert.ToBase64String(hmacSha256.ComputeHash(dataToHmac));
    }

    return String.Format("SharedKey {0}:{1}", accountName, sig);
}

The timestamp header is set from the same variable:

@( context.Variables.GetValueOrDefault("UTCNow") )

Note that you're not fully qualifying .NET types when referencing static methods or in variable declarations. If you do, you'd get an error, for example:

Usage of type 'System' is not supported within expressions.

[...]
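For reference, the same canonicalization and signature can be reproduced outside the policy engine. A Python sketch with placeholder account name and key:

```python
import base64
import hashlib
import hmac

def shared_key_header(account_name, shared_key_b64, utc_now):
    # Reproduces the policy expression above: build the canonical string
    # for a GET against /<account>/myqueue/messages, HMAC-SHA256 it with
    # the base64-decoded storage key, and format the SharedKey header.
    canonical = "GET\n\n\n\nx-ms-date:{0}\n/{1}/myqueue/messages".format(
        utc_now, account_name)
    key = base64.b64decode(shared_key_b64)
    sig = base64.b64encode(
        hmac.new(key, canonical.encode("utf-8"), hashlib.sha256).digest())
    return "SharedKey {0}:{1}".format(account_name, sig.decode())

# Placeholder account and key; x-ms-date must match the actual header.
header = shared_key_header("mystorageacct",
                           base64.b64encode(b"demo key").decode(),
                           "Tue, 11 Aug 2015 15:00:54 GMT")
print(header)
```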

Azure API Management - Who's calling?

Fri, 05 Jun 2015 14:20:30 GMT

API Management is one of the lesser explored areas of Azure. Regardless, it's a very powerful service to layer onto your service to add:

  • Governance
  • Throttling
  • Analytics
  • Authentication
  • Developer documentation
  • And more stuff

… just like that. It's a fantastic example to illustrate how Azure can take care of the work that's not as interesting as the code that your boss wants you to write to add new features to your service. Since I've been playing with it a little bit on behalf of a few service providers now, I thought I'd share some of my learnings. So here we go.

Imagine you have an internal service API you'd like to expose to the rest of the world over the big, bad internet - securely. Azure and API Management make this daunting task a rather simple endeavor. Yes. Really! Secure … as in security infrastructure for DDoS prevention, intrusion detection, etc. is in place, as well as application security like authentication and authorization. All that without running a single cable or having to punch holes in your firewall.

Security Infrastructure

API Management saves you from having to expose your service to the internet directly. Instead, it provides your endpoint on the internet, which runs in Azure and therefore piggy-backs on the defense mechanisms in place for all Azure services. You connect your Azure API to your internal service via a secure VPN or an ExpressRoute connection, which means your server is not directly exposed to the internet. If you don't want to or can't set up a VPN connection, you could also make your service listen on a queue and then layer API Management in front of the queue - which will be the topic of the next post.

Authentication / Authorization

Now what about letting only known users use the API? It's an internal API … it's probably not built for OAuth or SAML authentication. API Management will help you avoid writing authN or authZ code as well.
First off, you secure the API with an access token, which maps to a user in the management portal. You can also assign group memberships to those users. Those groups can be handy to implement authorization in your code, since you can check groups or roles fairly easily. API Management can do the translation from user to group using policies, which means you don't have to change your API. Authorization can occur using HTTP headers; policies can translate the API Management user and its groups into an HTTP header. For example, this policy will add a header with the username of the user associated with the credential used to authenticate the call:

@(context.User.Email)

While this policy will construct a string listing the user's group memberships:

@(string.Join(";", (from item in context.User.Groups select item.Name)))

Your API implementation can now check these headers to authorize against your group system, e.g. Active Directory, without having to change API methods and their implementation. If you really wanted to, you could also add policies to modify the actual API call with a set-body policy. We'll explore that policy in the next post.

Policies

Both policies were written using single-line .NET expressions and the context object provided by the policy engine. The policy can work with a subset of .NET classes and most language constructs. As you can see above, even Linq queries work. You can even write complex multi-line .NET expressions, which will be part of the examples in the next article. [...]
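On the backend side, checking the injected headers is a one-liner. A sketch, where the header names X-User-Email and X-User-Groups are assumptions - pick your own names in the policy:

```python
def authorize(headers, required_group="Administrators"):
    # The policies above join group names with ';', so the backend only
    # has to split the header and look for the group it requires.
    groups = headers.get("X-User-Groups", "").split(";")
    return required_group in groups

# Example request headers as the gateway would forward them.
headers = {"X-User-Email": "user@contoso.com",
           "X-User-Groups": "Developers;Administrators"}
print(authorize(headers))
```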

The Third Industrial Revolution - Fueled by The Cloud

Mon, 27 Aug 2012 04:11:00 GMT

It's so early in the days of cloud computing. Many haven't even started to realize the magnitude of what this shift could be. It's the change from custom to standardized. Yes, there's the often cited analogy to the evolution of power supply: from having your own water wheel to plugging into the electric grid. My current favorite analogy is clothing. People had clothing for the longest time, they still do, but lots of things have changed since a few hundred years ago, when people either made their own clothes or could afford a tailor to have them tailor-made. Because of that, most people didn't own more than a few pieces of clothing, but the few tailored ones they owned fit very well. With increasing industrialization came clothes in standard sizes. It was more economical for manufacturers to make just a few sizes. Over time, even the number of sizes went down. Some companies only make S, M, L and possibly XL. Why? Because it's even more economical to make as few sizes as possible.

Cloud computing is going to put the world of computing through the same evolution. It takes computing to the industrial age. We're moving past the times where environments were made from scratch and purpose built. In the industrial age, it's about buying mass-market products, manufactured to fit "good enough" for many but not fitting perfectly for just one use. Clothing brands that have more differentiated sizing today are usually more expensive. They have to pass on the higher manufacturing cost and cater to customers that prefer better fitting clothes over cheap ones. You can also still go to a tailor and get custom made clothes; they fit great, but they are far more expensive than buying a standard size. The analogy seems to hold when you explore deeper. Custom clothes never went away. Special purpose clothing is still not mass-produced and sold at Walmart. In fact, department stores, high-end boutiques and custom tailors are still around.
Computing will undergo a similar evolution and thus fuel the Third Industrial Revolution. On-prem computing is not going away, but there are many cases where "good enough, easy and cheap will do". Those cases will go to the cloud. In the cases where you need something special - just as you go to Nordstrom for clothes - you'll stay in your own computing environment. What's uncertain is the mix between on-prem and the cloud. I would think that Kohl's or Target sell a whole lot more clothes than Nordstrom or Saks. Target's price points and the ubiquity of their stores make buying clothes cheap and easy. Overall, Target's annual revenue is about 70x the revenue of Nordstrom. While they're both retailers, the experience and the products don't compare. They address different scenarios. Sometimes cheap and easy is sufficient. In fact, I may sacrifice some nice-to-haves when I get something cheap with a 5 minute trip around the corner. In the world of cloud computing, I don't get the same high-performance, highly tuned servers I can put into my own data center - BUT I don't need those in every scenario. In many cases, an IaaS environment with all the constraints of a standardized environment will be good enough. With the right architecture, a cloud environment can still deliver well performing and highly scalable solutions. In some cases, however, the cloud doesn't meet performance or scale requirements, and then I have the option to engineer an environment that will meet my needs. If the analogy holds, then cloud computing is just as transformational as the introduction of standardized clothing sizes. When S, M and L came around, it wasn't about giving people the ability to express themselves by wearing different styles! It was about meeting the scalability needs of war that needed to put uniforms on so[...]

CFOs, Fiscal Cliff, Cost Transparency and The Cloud

Fri, 13 Jul 2012 16:03:48 GMT

CFOs probably look at the business very differently than IT … that's what I thought joining our CFO John Rex for the Breakfast at the CFO Summit a couple of weeks ago. I was eager to listen and learn what's on a CFO's mind and how Cloud Strategy plays a role in their thinking. A couple of things shouldn't come as a big surprise:

1. CFOs mostly face the same problems we're all facing. Only few problems are finance-centric, but many of the bigger business problems have a finance perspective, just like they have an IT perspective.

2. CFOs don't think about technology, or about The Cloud in particular, all that much, even though cloud computing can ease or even solve some of the problems they are thinking about.

Familiar topics that came up during the discussion were the shortage of skilled labor, understanding the true cost of different areas of the business, and the need to be prepared for change. The exclusive group of financial leaders of Dallas also discussed more finance-centric topics, for example the fiscal cliff and cost control measures. The surprise was the realization that The Cloud can play a vital role addressing some of the challenges.

Escape the Labor Shortage

Take the labor shortage for starters. It was interesting to hear that finding skilled workers isn't a problem unique to IT. It affects all business areas. Just in IT, the shortage was cited as a blocker for growth. It's ironic that the resistance to moving to the cloud in IT often stems from fear of losing work, when at the same time business leaders complain about trouble finding skilled workers. We've been promoting shifting workloads to the cloud as a way to increase focus. Free up the skilled workers you've already hired to focus them on what differentiates your business instead of deploying them on tasks that are merely keeping the lights on - like running your email system, for example. Rely less on hiring new workers to manage growth.
Instead, focus (re-using the word) your existing workforce on enabling growth and new business. Moving to the cloud is definitely a way to escape the labor shortage in IT.

Cost Awareness

Another, surprisingly open discussion was about cost hidden in different areas of the business. There were some examples about how the country's obesity problems were contributing to higher cost for employers due to increases in healthcare cost, but also loss of productivity, etc. The attendees didn't think about the cost of IT -- yet. With attention shifting to cost in all areas of the business, it's just a matter of time until we get asked for more transparent accounting in IT. Public cloud offerings make accounting for the true operational cost of an app extremely transparent. If you want to see the full operational cost of running your app every month, just take a look at your Amex bill. For a more detailed view, you can go to your cloud provider's portal and get cost broken out by compute hour, storage, bandwidth, etc. Those numbers include all the cost that's so easily hidden in shared cost buckets, e.g. power, cooling, data center real estate, shared support teams, etc. Not many companies have invested in infrastructure or processes to support this kind of transparency because it requires equipment deeply embedded in the data center infrastructure. With workloads moving to the cloud they don't have to make that investment and they still get all the accounting transparency benefits of the cloud. The monthly report Windows Azure customers receive breaks out true cost by application, location and resource utilization.

Cloud Computing to Avoid Falling off the Fiscal Cliff

Then there was the conversation about the fiscal cliff that the US is heading towards and other factors that cause a lot of uncertainty. The respons[...]

Cloud is the Next iPhone (for IT)

Mon, 07 May 2012 14:27:00 GMT

It was the year 2006. The year Google acquired YouTube for a mere $1.65B, Pavarotti opened the Winter Olympics and Germany hosted the World Cup. After successfully branching out into music players, Apple is hinting at releasing a phone. The excitement is building, but the smartphone market is dominated by Blackberry. Microsoft's Windows Mobile has been in the market for a few years and is steadily growing in popularity because it's a more accessible developer platform. Then on January 9th, 2007 the world changed. Not just the technology world, but the world as we knew it. Yes, Steve Jobs only showed a product, the first iPhone, but what he really showed the world was what it's like to be connected and have access to the internet at all times.

The iPhone wasn't the first of its kind. Far from it, actually. Microsoft had been toying with the idea of Smartphones for almost 10 years at the time. Mobile powerhouses like Blackberry and Nokia had products in the market as well, but the iPhone had two new things going for it: 1) it was beautiful and desirable and 2) it made things that mattered easy. It was no longer about piling on features. It was about making the important things easy and hiding the complexity of common tasks. Those two factors made the iPhone an overnight success. The philosophy of beauty and simplicity was the perfect recipe to form a strong emotional connection between users and their devices.

It wasn't the apps. Those came later - a year and a half later, when Apple opened the AppStore. At that point there was no holding back; the success of the iPhone seemed unstoppable. It no longer was about being cool. You simply didn't participate without one.

So what about the cloud? The cloud is about the same thing. Making things that matter easy. For consumers, the cloud makes keeping things in sync easy.
Keeping your appointments, your contacts and files in sync is a frustrating problem that was worth solving. For the IT Crowd it's about making it easy to run apps. No longer do you have to spend time and effort on things you or, more importantly, your users(!), don't care about. The users don't care how good you are at racking, stacking and cabling. They don't even want to know how much you know about maintaining and patching an OS image. You may argue, but they think you're "wasting too much time" on such "unimportant" things. They care about one thing. They want their apps, fast, consistently and everywhere. The cloud makes those things easy, because they're done in different ways that mean users don't wait for things to show up. Gratification is instant. Apps show up. New features show up while they're still new and exciting. They can show off their new toys before others have them … like people showed off their new iPhones. That's what forms the emotional bond, but there were good business reasons behind it. Let's look at some other aspects:

iPhone: hides complexity to accomplish the important tasks users care about.
Cloud: hides complexity to accomplish the important tasks users care about.

iPhone: required upfront investment, but the investment paid off quickly in productivity gains.
Cloud: the transition is not seamless, but adopters confirm cost savings and transformational capabilit[...]

Silverlight 3 / Expression Lab Posted

Wed, 09 Dec 2009 17:09:52 GMT

Arturo was kind enough to post a hands-on lab I created for a training event in Dallas.

It’s an introduction to Silverlight that showcases some Silverlight 3 features, such as:

but it’s also written as an introduction to Silverlight with basics such as:

while you build a rotating Zune.


The lab comes with a Lab Manual for you to work through and Visual Studio projects to keep things simple and help you out if you get stuck.


Silverlight 3 – How to access peripherals

Tue, 08 Dec 2009 17:27:00 GMT

The previous post on Silverlight 3 for kiosk apps outlined some architecture options for how you can build Silverlight applications with access to peripherals. This follow-up post goes into more detail on the implementation.

I strongly encourage you to consider WPF, ClickOnce and POS for .NET as the foundation of your application before you go ahead building these types of applications. The major benefits of Silverlight are the small run-time and the cross-platform availability. Chances are that cross-platform is not necessary in a kiosk environment. The small runtime may not be a big advantage if you are deploying a local application; if you're deploying anyway, you may as well deploy the full .NET runtime. In fact, the lean runtime may be a disadvantage since it's lacking functionality available in the .NET libraries. ClickOnce may provide similar benefits as a web deployment of Silverlight.

Now … if you're still reading, you probably determined that Silverlight is the way to go. The solution is based on several key features in Silverlight:

- Hosting Silverlight in a custom container via the hosting API – introduced with Silverlight 1
- Scripting Silverlight applications – introduced with Silverlight 1
- Local messaging between Silverlight applications – even across different containers, first introduced with Silverlight 3

Let's take a look at how we can string these together to build a solution that maintains the integrity and protection of the Silverlight sandbox, but also lets us get to local computing resources and maintains the benefits of Silverlight development and deployment.

First, you can host Silverlight applications not only in a web browser, but also in a custom application container via the Hosting COM API. You're in for a blast from the past since some experience with C++ and COM is definitely required to get this to work. There's a sample for Silverlight Alternative Hosting on MSDN.
You save yourself a lot of time and effort compared to implementing the various COM interfaces if you just download the sample and start from there.

Custom Silverlight Hosting

First, you decide how you're going to load the Silverlight application in the custom container. You could load a .xap from the local disk, or you could load the .xap application over the web. Loading from a URL over the web preserves some of the deployment flexibility of a browser app, but if you're running an application without a network connection, loading the application from an http URL may not be an option. Loading a XAP from a URL, either a file:/// URL or an http:// URL, will work.

To load the Silverlight application, you pass the URL to the application as a named Source property in the PropertyBag during control activation (I didn't think I would ever have to write about ActiveX control activation again). The TutorialXcpHost application from the MSDN sample makes this very easy. It includes a XcpControlHost helper class with the SetSource method. You call SetSource before activating the control:

    int WINAPI _tWinMain(HINSTANCE, HINSTANCE, LPTSTR lpCmdLine, int nShowCmd)
    {
        CXcpControlHost::SetSource(L"http://localhost/AppsComms/ClientBin/SenderApp.xap");
        return _AtlModule.WinMain(nShowCmd);
    }

Even when running in a custom host, Silverlight is still validating zones and enforces some cross-domain security. If you're loading additional application data or resources, the base URL and zone (file:/// or http://) need to match. You can play some trickery by implementing the container's IXcpControlHost::GetBaseUrl() method to return a matching zone, but it's safer to play by the rules and comply with Silverlight's security policies. After all, they've been put in place[...]
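The third key feature, local messaging, is what lets two Silverlight applications talk to each other even when they run in different containers. A minimal sketch using the Silverlight 3 messaging API follows; the receiver name "kiosk-bridge" and the helper class are illustrative assumptions, not from the post, while LocalMessageReceiver and LocalMessageSender are the actual System.Windows.Messaging types:

```csharp
using System.Windows.Messaging;

public class KioskMessaging
{
    // Receiving side (e.g. the app hosted in the custom container):
    public static LocalMessageReceiver StartReceiver()
    {
        var receiver = new LocalMessageReceiver("kiosk-bridge");
        receiver.MessageReceived += (s, e) =>
        {
            // e.Message carries the payload sent by the other application.
            System.Diagnostics.Debug.WriteLine("Received: " + e.Message);
        };
        receiver.Listen();
        return receiver;
    }

    // Sending side (e.g. a second Silverlight app on the same machine):
    public static void SendScan(string barcode)
    {
        var sender = new LocalMessageSender("kiosk-bridge");
        sender.SendAsync(barcode);
    }
}
```

Both sides must agree on the receiver name; since messaging works across containers, the sender can be a browser-hosted app while the receiver runs in the custom host.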

What’s so old-school about text based programming?

Mon, 07 Dec 2009 15:20:53 GMT

Computerworld posted this piece that Microsoft developers are using text editors for development. What's so old-school about that? Really? Coding in text editors is not a trend among the grey-hairs, like Lewis suggested on an internal thread. Text based tools are all the rage with the next generation of developers. I mean people that look like the Mac guy in the Apple commercials.

Lots of today’s developers are all fired up about Ruby, RoR, PHP and even javascript – they’re all about text programming. Those developers are going to be the thought-leaders for the next generation, and they are programming with very similar tools and dev models to the ones many of us started with. You may say that those developer icons at Microsoft are downright on the cutting edge.

Aside from that … are the development tools really the right thing to look at to judge the state of development at Microsoft (or any other shop)? What about code quality? Innovation? Shouldn’t Microsoft developers be judged by that?

Some of my personal conclusions are that the developer(!) community doesn’t crave graphical tools – maybe because they favor power and flexibility over hiding complexity, since that’s a higher priority for the job they are doing. Architects, who deal with additional dimensions, i.e. cross-system dependencies, deployment, etc., tend to prefer other levels of abstraction that lend themselves better to a visual representation.

Furthermore, frameworks like RoR are following the trend started with Java and .NET. They are raising the level of abstraction for developers – without going to graphical development models. I just spent time experimenting with some COM work. The productivity problem isn’t working with text-based languages like C++. The much bigger productivity problem is that COM interfaces were designed for late bound environments and are extremely low level. That problem was solved either with graphical environments or text based environments like VB.

Again, use the right tool for the job. Libraries that raise the level of abstraction have been very successful at boosting productivity for developers. Enhancements to text editors to speed up development have been around far longer than Visual Studio. Pretty much everybody had their Emacs or vi rigged with all sorts of fancy macros to keep code clean. Graphical tools, at least today, are much more helpful for visualizing architecture and the high-level flow of a program. Those are different from executable code and are intended for a different audience.

What am I missing?

Silverlight 3 for Kiosk Apps? Of Course!

Tue, 17 Nov 2009 15:48:36 GMT

Several of the customers I work with are looking to build kiosk or point-of-sale applications with Silverlight. The ease of deployment with browser-based Silverlight applications is definitely appealing. Sharing applications or components between customers’ kiosks and web sites is another appealing reason to go with Silverlight. This post outlines the architecture decisions between Silverlight and WPF and presents architecture options for Silverlight based solutions. A follow-up post will discuss the Silverlight implementation details.

However, POS systems or kiosks often need integration with local peripherals, such as credit card readers, barcode scanners, printers, etc. Since Silverlight browser applications run in a sandbox, access to these devices isn’t immediately available. Therefore we need to find a way to insert a bridge between the peripherals and the Silverlight application to read data from the devices and forward the data to the Silverlight app.

To get started, let’s look at applications that can communicate with local peripherals. Desktop applications can communicate with local devices. Devices usually ship with C++ or .NET libraries to read data or sink events from devices. Therefore desktop applications are usually preferred for POS systems. Microsoft has a POS for .NET framework to simplify development of applications that need access to a wide array of peripherals. WPF offers a very compelling option to build the application UI, and building the UI in WPF is a great step towards sharing assets between the kiosk and the Web. The following table summarizes the decision points to decide between a full desktop application or a Silverlight app.
Full desktop application:

Pro:
- Full POS Framework for peripheral integration
- Full access to local resources (files, registry, printers, peripherals)
- Hardware accelerated graphics
- Richest graphics with WPF and XNA
- Full .NET Framework (WCF, WPF, WF, SxS versioning, …)

Con:
- Requires high-touch deployment
- Some re-development to share assets between desktop and web applications
- Windows specific

If the cons weigh too heavily and you really need a browser-based app, for example when you’re running in shared kiosk environments or if ease of deployment is much more important than peripheral integration, then you have a couple of options with Silverlight.

First, you can simply load the Silverlight application with a control hosted in a desktop application via the COM hosting interfaces. The host application can load the Silverlight application from a web URL, i.e. once you install the host application, you can still download the Silverlight application from the web. The Silverlight hosting interfaces even allow managing the download process and customizing caching of .xap files and other resources through the IXcpControlHost interface.

For communication between the Silverlight app and the device manager running in the host application, the Silverlight application can expose an interface via the scriptable object bridge. That bridge is intended for communicating with the javascript engine of a web browser, but it works in other containers as well. The scriptable object is accessible to the Win32 host via COM IDispatch interfaces, which the host application can invoke to send data to th[...]
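On the Silverlight side, the scriptable object bridge boils down to a few attributes and one registration call. The sketch below is a hypothetical example: the DeviceBridge class, the OnBarcodeScanned member and the "deviceBridge" key are illustrative and not from the post; [ScriptableType], [ScriptableMember] and HtmlPage.RegisterScriptableObject are the actual Silverlight APIs:

```csharp
using System.Windows.Browser;

// Hypothetical bridge object the host container can reach via IDispatch.
[ScriptableType]
public class DeviceBridge
{
    // The Win32 host invokes this through the scriptable object bridge
    // to push data read from a peripheral into the Silverlight app.
    [ScriptableMember]
    public void OnBarcodeScanned(string barcode)
    {
        // Forward the scan to the application's UI / view model here.
    }
}

// Registered once at startup, e.g. in Application_Startup:
//   HtmlPage.RegisterScriptableObject("deviceBridge", new DeviceBridge());
```

Once registered, the host application resolves the object by its key and calls OnBarcodeScanned through IDispatch whenever the device manager reads data from a peripheral.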

WcfTestClient with Windows Azure

Fri, 30 Oct 2009 16:25:13 GMT

One of my customers is working on an Azure WCF service. We wanted to test the service with WcfTestClient, but we ran into some issues. We started the dev fabric and had the WebRole running on port 81. When we went to the WCF service metadata page at http://mybox:5101/ProdKService.svc, we got the expected web page, which states:

    To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe http://mybox:5101/ProdKService.svc?wsdl This will generate a configuration file and a code file that contains the client class. Add the two files to your client application and use the generated client class to call the Service.

Note that the instructions point you to port 5101 in the service URL. That’s the port where the Azure instance is running in my local development fabric. It is not, as we would expect, the address of the Azure dev fabric, which is running on port 81. We tried to follow the instructions and point WcfTestClient to the address on the page, but instead of testing the service, we got this not-so-friendly error message:

    Error: Cannot obtain Metadata from http://mybox:5101/ProdKService.svc If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation. Exchange Error URI: http://mybox:5101/ProdKService.svc Metadata contains a reference that cannot be resolved: 'http://mybox:5101/ProdKService.svc'. There was no endpoint listening at http://mybox:5101/ProdKService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
    The remote server returned an error: (400) Bad Request. HTTP GET Error URI: http://mybox:5101/ProdKService.svc There was an error downloading 'http://mybox:5101/ProdKService.svc'. The request failed with HTTP status 400: Bad Request.

Now this is weird. We know that metadata publishing is enabled, because we got the instructions for the WSDL address from the metadata page in the first place. If you look at the address line in the browser when the Azure dev fabric launches, you note that the local fabric is listening on localhost port 81, not mybox port 5101. localhost is IP address 127.0.0.1, whereas mybox is bound to the IP address on my corporate network. As instructed, we tried to get the WSDL, and we got a different error in WcfTestClient:

    Error: Cannot obtain Metadata from http://mybox:5101/ProdKService.svc If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation. Exchange Error URI: http://mybox:5101/ProdKService.svc Metadata contains a reference that cannot be resolved: 'http://mybox:5101/ProdKService.svc'. Could not connect to http://mybox:5101/ProdKService.svc. TCP error code 10061: No connection could be made because the target machine actively refused it. Unable to connect to the remote server. No connection could be made because [...]

Need Help Migrating to SQL Azure?

Tue, 20 Oct 2009 21:47:21 GMT

Migrating an existing SQL Server database to SQL Azure is a compelling story, but it’s not a trivial task … unless you start with the SQL Azure migration wizard put together by my team mates George and Wade.

They just updated the wizard to support SQL Azure CTP 2. Details at:

WinMo App Store Questions? I got answers

Mon, 05 Oct 2009 18:09:37 GMT

The Windows Mobile app store is about to launch. The team launched a number of videos on the Windows Mobile Dev Channel on YouTube to walk you through the submission process for your application, but as I’m working with the team over at TripCase, we had a few more questions that we got answered over the past weeks. I thought they were worth sharing:


Q: My app runs on Windows Phone Standard (i.e. SmartPhone, SP, no-touch) and Windows Phone Professional (Pocket PC, PPC, with touch screen). The application is packaged in a single cab file. The AppStore submission UI doesn’t let me select both platforms. How do I make sure I reach both groups of users?

A: Submit the app twice. Once for Standard, once for Professional. You can submit with the same name.



Q: I want to make an update to my application to fix a bug or a spelling error, change the description, etc. What do I do?

A: Fix your cab, don’t change the version number of the application. Re-submit – it’s free.



Q: I’m getting an error: Unable to enable shim engine on device when running the AppVerifier. Help! How do I fix this?

A: Most commonly, the Windows CE 5.0 Test Kit is missing on your machine or you haven’t replaced the libraries as outlined in the release docs. Replace the Application verifier binaries in the processor folder for the device type. For example, replace C:\Program Files\Windows CE platform builder\5.00\CEPB\wcetk\DDTK\ARMV4I with C:\Program Files\Application Verifier for Mobile 5.0\Armv4i.

If you’re testing with retail devices, then you also need to sign with a Privileged Certificate. Make sure your app meets the requirements for the cert.

Steve has more details.



Q: Do my customers need to install the .NET CF versions that my application needs?

A: No. The Marketplace client takes care of installing the required version of the Compact Framework.



Q: I want to distribute my application through other channels than the Windows Mobile Marketplace. Can I distribute the certified and signed cab from Marketplace?

A: No. You need to sign the cab with your own certificate. Applications signed with the Marketplace certificate can only be distributed from Marketplace.


Looking forward to lots of cool WinMo apps on Marketplace.

Win7 Multi-touch. Why wait until WPF4?

Wed, 23 Sep 2009 13:30:23 GMT

WPF 4 is going to fully integrate Win 7’s multi-touch capabilities. With Windows 7 at RTM, you don’t have to wait for WPF 4 to be released to start developing multi-touch demos. You can get started today with the native Win7 APIs, or with the WindowsTouch library for .NET 3.5SP1. The .NET library is much easier to work with since the native APIs are rather low level and based on the existing tablet APIs.

Check out the link to the Win 7 .NET Interop Sample or the Channel9 video:

Lynn’s blog post discusses the question of multi-touch with .NET 3.5 vs. 4.0, and it has more useful links to multi-touch development resources on Win7 as well.


At least until WPF4 ships, this is a viable option to build production apps with multi-touch. There are some differences in how WindowsTouch and WPF4 implement things, and you’ll probably encounter a little bit of re-work, but both options are based on WPF; most of your UI and your code-behind should stay the same, and you learn the UI Design Guidelines for Touch right away.