
Simply a programmer

// a software development blog by Tim Jones

Updated: 2017-03-24T10:21:47.309-05:00


Angular message bus and smart watch


In my Angular apps, I typically add a couple services to $rootScope so they're available to all of my controllers. The first is a message bus for any inter-component messaging, and the other is my replacement for $watch which doesn't fire during registration.

Please note that this code is ES6, so either use an online transpiler or the transpiler of your choice. I would suggest you try Babel if you're not using it already.
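As a rough illustration of the two services (a hedged sketch with my own names, not the post's actual code), a minimal ES6 message bus and a "smart watch" helper that doesn't fire at registration time might look like this:

```javascript
// Minimal pub/sub message bus (hypothetical sketch, not the post's actual service).
class MessageBus {
  constructor() {
    this.handlers = new Map();
  }
  subscribe(topic, handler) {
    if (!this.handlers.has(topic)) this.handlers.set(topic, []);
    this.handlers.get(topic).push(handler);
    // Return an unsubscribe function so controllers can clean up on $destroy.
    return () => {
      const list = this.handlers.get(topic);
      list.splice(list.indexOf(handler), 1);
    };
  }
  publish(topic, payload) {
    (this.handlers.get(topic) || []).forEach(h => h(payload));
  }
}

// "Smart watch": like Angular's $watch, except the listener is NOT invoked
// with the initial value when it's registered.
function smartWatch(getter, listener) {
  let last = getter();
  // In a real Angular app this would hook into the digest cycle; here we
  // just expose a check() to call whenever state may have changed.
  return function check() {
    const current = getter();
    if (current !== last) {
      listener(current, last);
      last = current;
    }
  };
}
```

In the setup the post describes, both of these would be attached to $rootScope so every controller can reach them without extra injection ceremony.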

Why Bower is better than NuGet


More specifically, Bower is a better package manager than NuGet for client-side (JavaScript, CSS, etc.) packages, even for ASP.NET web applications where the incumbent is very well entrenched.

Apples vs. oranges? Is this a fair comparison? Let me offer my disclaimer: the only reason I’m making this particular comparison is that my background is in ASP.NET web applications, where NuGet was the only package manager option for quite a while. (Which means you’re probably not going to see a Rails developer blogging about this :). These days, however, your average .NET web app developer is highly likely to run into a dilemma between package managers during the course of application construction. The happy path of finding all the packages you need from just one package manager quickly degrades into a dirt road bumpy enough to get your apple cart all jacked up.

Let me start by acknowledging that NuGet is and will continue to be the standard for .NET package management. I don’t anticipate that Bower will wholesale replace NuGet, as they don’t set out to achieve the same goals. Overlap between these package managers does exist, however. NuGet hosts a variety of packages that do not add .NET assemblies to your project’s reference list, JavaScript frameworks and CSS libraries being the most popular in this category. Not only that, but some of these are actually among the most popular packages hosted there, such as the jQuery package with now over 6 million downloads. The case I’ll be making here is that where NuGet and Bower overlap in the realm of client-side packages, Bower is a better choice.

NuGet encourages polluting global scope

What’s wrong with NuGet’s client-side packaging implementation, then? In three words: global scope pollution. Ever since NuGet first launched and we started using the jQuery package included in our ASP.NET web application templates, the infamous Scripts directory has always been the dumping ground of every JavaScript package.
This directory, as well as the Content directory for CSS, has essentially become a global namespace of sorts. As package authors recognize this “convention” contained in several of the most popular packages, they replicate the pattern and create their packages to deploy their JavaScript files into Scripts and their CSS files into Content. For example, here’s the Scripts directory after creating a new project in Visual Studio 2013 and adding a couple of Angular packages.

What if 2 packages each want to deliver foo to Scripts? Well, I guess the last guy in wins in that case. The responsibility of avoiding name collisions is left solely with the package author. In defense of package authors continuing this practice, it seems to me that they are merely following Microsoft’s lead, since we have officially sanctioned ASP.NET web application project templates installed in Visual Studio that contain pre-installed NuGet packages which install their files directly into the Scripts directory. Does Microsoft own these packages? No. They are quick to point out that you assume all risk of using 3rd party packages. But what happens when said 3rd party packages are pre-installed in the project template? While we could all agree this is a gray area, ultimately I place the responsibility of enforcing a polite ecosystem on the package manager. Ideally, a package manager should discourage these types of collisions to whatever extent is practical and appropriate.

It’s also worth pointing out that the Scripts and Content folders are merely secondary delivery locations. Behind the scenes, as shown in the following screenshot, all packages are isolated quite nicely in the packages folder.

The Bower alternative

Now let’s take a look at what Bower has to offer. Run the following in your console of choice.

bower install

What? Did you get an error? bower is not a recognized command? You need to install it first, then. :) Once installed, this command will produce a bo[...]
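To give a taste of how Bower keeps dependencies isolated (a hypothetical manifest; package names and version ranges here are illustrative, not from the post), a bower.json for a similar Angular scenario might look like:

```json
{
  "name": "my-aspnet-app",
  "private": true,
  "dependencies": {
    "jquery": "~2.1.0",
    "angular": "~1.3.0"
  }
}
```

Running bower install against this manifest places each package in its own directory, e.g. bower_components/jquery and bower_components/angular, so two packages can never clobber each other's files the way they can in a shared Scripts folder.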

Using Fiddler to implement wildcard DNS lookups


Wow, this is now the third post I've done on Fiddler rules. And sequentially at that (HttpOnly cookies and CORS were the previous 2 posts). What's up with that?  Seriously?  Yep.  Stuff happens.  And sometimes you have to use Fiddler to help out with that stuff.  In this case, I'm trying to set up multiple development environments to build apps for SharePoint 2013.  Anyways, this one is short and sweet compared to the other rules I've posted about, but it does have a wider application than just debugging web applications. 

What problem does this solve?

  1. You're building web applications with a hosting system that creates dynamic sub domains for your app (like the SharePoint app model)
  2. In production, you will use DNS techniques such as wildcard CNAME records against a purchased domain, but during development this doesn't work as easily.
    1. You're working with a team of developers, and you may not always be able to share the same development domain name
    2. Your network admin doesn't want to maintain forward lookup zones for development in the production DNS
      1. Even if you had a development DNS, you'd have to manage it and then jack with your TCP/IP settings and detach your normal DHCP configuration to add a new primary DNS server (and who needs all that noise?)
    3. Your hosts file does not support wildcard entries
  3. Or, perhaps you've always wanted wildcard entries in your hosts file for subdomains? 

How do I set this bad boy up?

UPDATE - 12/10/13: Please see the comment below from Eric regarding a more appropriate solution that avoids the rule definition.

First, you're going to create the rule definition.  Notice how the default is true for the rule instead of it normally being false/disabled?  I did this to simulate how your hosts file is "always on", but dude, change it if it's getting in your way. 

I named this one for what I'm using it for (SharePoint app domains), but you would create a separate rule for each domain that you want redirected.  Can you store all of these rules in a database and look them up at run-time instead of creating all these rules?  Yes, you can find an example of using a database if you want to do that, but JScript.NET is "not the modern programming environment you're looking for" to help you out, so caveat emptor.

public static RulesOption("SharePoint App Domain Override")
var m_SharePointAppDomain: boolean = true;

Then, add the following code to the OnBeforeRequest handler.

// replaces need of wildcard CNAME in DNS
if (m_SharePointAppDomain) {
    if (oSession.hostname.EndsWith("")) {  // put your app domain suffix in the EndsWith argument
        oSession.bypassGateway = true;
        oSession["x-overrideHost"] = "mydevserver";  // DNS name or IP address of target server
    }
}
Hope this helps that fraction of you out there needing a good solution to this.  Perhaps my next post will be "Using Fiddler to make bacon pancakes"!

Using Fiddler to force a web service to support CORS for debugging


It started with a hybrid native app developer's Declaration of Independence, then came the liberation of HttpOnly cookies, so now we need to deal with the issue of cross-domain restrictions that browsers put on XHR because of previous XSS exploitations.

Problem: Debugging in a browser

This post applies to you if:

  1. You are building some sort of native application based on HTML and JavaScript (PhoneGap in my case), where all calls to a backend web service will always be treated as a cross domain request from your XHR code.
  2. You are working with a web service that you don't control, and it doesn't support CORS, which modern browsers use as a policy for allowing cross domain calls.
  3. You have come to terms with this reality and you're using native web libraries on actual devices to do your integration testing, or you're using a mock service in the browser with faked responses.
  4. You are no longer able to run and debug your application in a browser with the actual service, and now you feel like you've been tossed back about 20 years in the past of application development, and maybe you've even resorted to using console.log for your debugging and feel defeated (I know I did!).

Solution: Proxy FTW!

Surprise, Fiddler is again our proxy tool of choice to help us out. Instead of repeating all the steps to set up a custom rule from the HttpOnly cookies post here, please go and read that first. Once you use the following code snippet, you should be able to use the Force CORS Response rule as pictured here.

The Fiddler Custom Rule Implementation

Just like in the HttpOnly cookie rule, you should just copy and paste the following code to the bottom of the method, OnBeforeResponse, which should already be in the default rules script.
if (m_ForceCORS &&
    (oSession.oRequest.headers.HTTPMethod == "OPTIONS" ||
     oSession.oRequest.headers.Exists("Origin"))) {

    if (!oSession.oResponse.headers.Exists("Access-Control-Allow-Origin"))
        oSession.oResponse.headers.Add("Access-Control-Allow-Origin", "*");

    if (!oSession.oResponse.headers.Exists("Access-Control-Allow-Methods"))
        oSession.oResponse.headers.Add("Access-Control-Allow-Methods", "POST, GET, OPTIONS");

    if (oSession.oRequest.headers.Exists("Access-Control-Request-Headers")) {
        if (!oSession.oResponse.headers.Exists("Access-Control-Allow-Headers"))
            oSession.oResponse.headers.Add("Access-Control-Allow-Headers",
                oSession.oRequest.headers["Access-Control-Request-Headers"]);
    }

    if (!oSession.oResponse.headers.Exists("Access-Control-Max-Age"))
        oSession.oResponse.headers.Add("Access-Control-Max-Age", "1728000");

    if (!oSession.oResponse.headers.Exists("Access-Control-Allow-Credentials"))
        oSession.oResponse.headers.Add("Access-Control-Allow-Credentials", "true");

    oSession.responseCode = 200;
}

Unlike the HttpOnly cookie rule, forcing CORS is a bit more challenging. There is a chance you may need to tweak this, but I trust this example rule should get you past most of the hurdles of not having adequate documentation for this in the search engine(s). Happy debugging! [...]

Using Fiddler to emancipate HttpOnly cookies for web app debugging


In light of the hybrid native app developer's Declaration of Independence, this post might very well be the first shot of the revolution.

Problem: Debugging in a browser

This post applies to you if:

  1. You are building some sort of native application based on HTML and JavaScript (PhoneGap in my case).
  2. You are working with a web service that you don't control, and it uses HttpOnly cookies.
  3. Your application requires the values of said cookies to work (most likely related to the authentication pipeline).
  4. You have come to terms with this reality and you're using native web libraries to work around this fact.
  5. You are no longer able to run and debug your application in a browser because of #4, and now you feel like you've been tossed back about 20 years in the past of application development, and maybe you've even resorted to using console.log for your debugging and feel defeated (I know I did!).

Solution: Proxy FTW!

In order to restore debugging support, we're going to need to sanitize the set-cookie HTTP header on its way down to our application, and the easiest way to do this across all browsers is to use a proxy tool. The most popular tool for this type of operation is Fiddler, so this post will cover how to set this up in Fiddler. To best explain what we're doing here, here is a before and after view of a cookie that we need to work with.

Before: Set-Cookie: Flavor=chocolatechip; secure; HttpOnly
After: Set-Cookie: Flavor=chocolatechip;

For example, if you had your web application open from http://localhost/myapp, your page's JavaScript would now be able to access the Flavor cookie. So, super easy, right? Following is my 3-step plan.

1: Install the syntax highlighting add-on bundle

This step is just to make things easier on you and is not required.
Installing this add-on adds a tab to your Fiddler interface which gives you one-click access to the FiddlerScript custom rules.

2: Create a custom rule definition using FiddlerScript

To avoid going into too much detail here about how to create custom rules, see the official documentation and the Fiddler book site, which have good examples. I will say that when you first crack open the rules file, you're immediately greeted with a couple of lines of code like this:

public static RulesOption("Hide 304s")
var m_Hide304s: boolean = false;

What is that var declaration with a colon and a type? Well, good reader, that is likely the first and the last time you'll work with a language called JScript (also referred to as JScript.NET). I'll leave the speculation (or research) up to you on how JScript ended up being the language of choice for custom rules. Let's just say it's a bit out of the way to locate a good set of documentation on this language. Hopefully you'll find examples that accomplish everything you need for your custom rule requirements either through a search or in the example below.

Add the following 2 lines of code in the file and save it in order to create a new rule definition.

public static RulesOption("Free the Cookies!")
var m_FreeTheCookies: boolean = false;

You should now see the rule in the Rules menu. Clicking this item will enable it, so all that's left to do is add the code for the implementation. It's helpful to remember that your rule will need to be enabled every time you save the script file or close and reopen Fiddler (by default, at least).

3: Write the custom rule implementation

In order to sanitize the cookies, we need to add our implementation into the response handler method, OnBeforeResponse, which should already be in the default rules script.
Just add the following code to the bottom of the method.

if (m_FreeTheCookies) {
    for (var x:int = 0; x < oSession.oResponse.headers.Count(); x++) {
        if (oSession.oResponse.headers[x].Name.Contains("Set-Cookie")) {
            var cookie : Fiddler.HTTPHeaderItem = oSession.oResponse.headers[x];
            if (cookie.Value.Contains("Http[...]
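The rule above is cut off, but the transformation it performs is just string surgery on the header value, as the before/after example shows. As a framework-free illustration (plain JavaScript with an invented function name, not Fiddler's JScript.NET API), stripping the flags might look like:

```javascript
// Hypothetical sketch of the cookie "emancipation" transform: remove the
// HttpOnly and secure flags from a Set-Cookie header value so that page
// JavaScript can read the cookie during debugging.
function freeTheCookie(setCookieValue) {
  return setCookieValue
    .split(';')                                   // break into attribute parts
    .map(part => part.trim())
    .filter(part => !/^(httponly|secure)$/i.test(part))  // drop the two flags
    .join('; ');
}
```

For example, freeTheCookie('Flavor=chocolatechip; secure; HttpOnly') yields 'Flavor=chocolatechip', matching the before/after view above.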

A Declaration of Independence from Tyrannical Web Services


When in the course of human events it becomes necessary for an HTML-based app development team to dissolve the technical bands (XHR) which have connected them with a web service (due to such security concerns as HttpOnly cookies) and to assume among the powers of the earth, the separate and equal station (native web libraries) to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of the web development community requires that we can still run and debug our application in a browser.

We hold these truths to be self-evident: that all web developers are created equal; that they are endowed by their Creator with certain unalienable Rights; that among these are an internet connection, code editors, XHR, Fiddler, coffee, and the pursuit of engineering excellence; that HTML is increasingly becoming an architecture that allows applications to be created on a variety of platforms, not the least of these being delivered in arguably the most popular app delivery mechanisms on the planet: the app stores of the iOS and Android platforms in the form of hybrid native apps; that to secure these rights, ISPs and ISVs are instituted among Men, deriving their just powers from the consent of the developer consumer in the free market; that whenever any Form of web service becomes destructive of these ends, it is the Right of the developer to alter or to abolish it, and to institute a layer of abstraction, laying its foundation on such principles and organizing its capabilities in such form, as to them shall seem most likely to re-establish XHR to its rightful place as a testing harness for all HTML-based application architectures.
Prudence, indeed, will dictate that web services securely established should not be changed for light and transient causes; and accordingly all experience hath shewn, that developers are more disposed to suffer, while security restrictions are sufferable, than to right themselves by abolishing the development environments to which they are provided. But when a long train of console.log() statements, pursuing invariably the same Object, evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such a configuration, and to provide a platform for their application's testability.

Such has been the patient sufferance of these developers; and such is now the necessity which constrains them to alter their web service interfaces and restrictions. The history of web application security has been a longsuffering struggle of developers against hackers, resulting in reduced capabilities of JavaScript and the introduction of security sandboxes, having in direct object the establishment of an absolute Tyranny over such things as simple as an HTTP header. To prove this, let Facts be submitted to a candid development community.

Some providers of web services have produced APIs which seem to ignore the fact that HTML applications can be and increasingly are being created outside of a standard web browser.
Some web services do not support CORS, going so far as to nullify the use of XHR in some cases.
Some web services only allow authentication through HttpOnly cookies, nullifying XHR in all cases.

Nor have We been wanting in attempts at workarounds. We created JSONP as a temporary solution. We have stood up facade web services.
We have in some cases just given up on entire platforms and walked away in the shame of defeat.

We, therefore, the developers of web platform-based applications, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People on our teams and on behalf of the employers, clients and customers we serve, solemnly publish and declare, That security restrictions should not trump the need for a proper debugging environment. That we ought to be Free to use the same tools we've grown to depend on for web applications development ev[...]

How to build a jQuery world clock in SharePoint


Update - 9/12/12: This post was deemed worthy of publishing on Nothing But SharePoint's site. May the SEO battle begin (it's a hobby to watch Google's relevancy algorithm in action).

Requirements

On a recent project, my client asked me to produce a world clock web part on his portal’s home page. He requested a list of office locations and the time in each location, honoring Daylight Savings Time rules if applicable (’s world clock page is a good working example). The following screenshot is an example of the layout requested.

My first reaction was, “there must be an existing world clock web part I can use, because how could something this common not already be out there?”. So after searching for “sharepoint world clock”, I downloaded and installed the 4 most popular solutions (all server-side web parts), both community and commercial. I was shocked that none of them had this capability out of the box. Some had this functionality embedded with weather features, and some output HTML that required serious DOM manipulation to extract the data and place it into the tabular layout I needed. When I found one that looked like a close match, there was always something that caused it not to fit, such as the lack of support for the exact time format that I needed, or not being able to override the location name after executing a web service call. Coming to the realization that I was going to have to do something custom, I set out to write my own web part, because it seems simple enough, right? :)

After a bit of reflection, I arrived at the following list of requirements.

  1. No external web service dependencies.
  2. Powered by JavaScript using currently installed web parts. Try to avoid deploying a solution with compiled code.
  3. Clocks should be based on a more reliable time source than the time on a client machine.
  4. Customized clock format (date and time).
  5. Adjusts for Daylight Savings Time when observed.
Storing the data

First of all, the question may be raised here as to why we’d want to store this data ourselves. Won’t this make the data out of date as soon as a locality updates their Daylight Savings Time or decides, gasp, to change their GMT offset one day? In my humble opinion, the rate of change of this data is low enough to justify caching it in a list somewhere on your SharePoint farm. When you combine this with the fact that using an external web service API to fetch this information dynamically would come with a fee attached, it becomes a little easier to justify the infrequent edits required to keep your local storage up to date with the time zones of the world. (If you happen to be that edge case which produces a maintenance burden updating cached time zone and DST data, perhaps consider automating the import of the time zone database?)

So if you’re still with me, let’s do this. The first order of business is to build out the data source. In order to store the data required to render clocks, I created a list in SharePoint to hold GMT offsets and Daylight Savings Time date ranges. If you’d like, you can download my list template to get started with your list. My list has the following design.

I have to admit I was on the fence regarding the use of separate columns for storing date components of the Daylight Savings Time data (so I therefore reserve the right to change my mind later). The primary reason for this is that SharePoint adjusts date fields based on local time zones, even though it actually stores the date as UTC/GMT in the content database. What would be really cool here is a custom field type in SharePoint called “UTC Date” that doesn’t try to do any time zone adjustments. Then, you could just use that for UI date validation and ignore the year upon retrieval when checking for Daylight Savings Time. I’ve built some custom fields[...]
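To sketch the core calculation such a web part needs (a simplified model of my own for illustration; the post's actual list schema and code differ), the local time for a location can be derived from a reliable UTC timestamp, a base GMT offset, and a DST window with an extra offset:

```javascript
// Hypothetical data model: compute a location's local time from a UTC
// timestamp, a base GMT offset in hours, and an optional DST window
// expressed as day-of-year boundaries with an extra hour offset.
function localTime(utcMillis, offsetHours, dst) {
  let totalOffset = offsetHours;
  if (dst) {
    const d = new Date(utcMillis);
    const startOfYear = Date.UTC(d.getUTCFullYear(), 0, 1);
    const dayOfYear = Math.floor((utcMillis - startOfYear) / 86400000) + 1;
    // Naive check: DST in effect between startDay and endDay
    // (northern-hemisphere style window).
    if (dayOfYear >= dst.startDay && dayOfYear < dst.endDay) {
      totalOffset += dst.extraHours;
    }
  }
  // Return a Date whose UTC fields read as the location's wall-clock time.
  return new Date(utcMillis + totalOffset * 3600000);
}
```

Caching the offsets and DST windows in a SharePoint list, as the post describes, means this function needs no external web service call at render time.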

Yes, you can do recursion in a Nintex workflow


Requirements

I was introduced to the Nintex Workflow 2010 product several months ago, and during construction of one of my workflows for a client, I found myself needing recursion. Why, you ask? Well, this problem always seems to surface whenever a hierarchy is involved, such as folders in a file system that have subfolders. In my case, I had a list in SharePoint that contained a template of tasks that were to occur in a certain order, and each task had a relationship to another task. For example, “Set up utilities” must occur 30 days before “Move in to my new house”. What I knew ahead of time was only the “Move-in Date”, and I wanted my workflow to walk through the hierarchy of related tasks and calculate dates for all of my tasks.

If I were coding this in a programming language, I’d just create a recursive method that called itself with the base date and passed that to all related tasks so they could do their calculations. However, in order to hand this calculation procedure over to a client that had no coders on staff, Nintex Workflow was the chosen vehicle for delivery. When I first used Nintex, I didn’t easily spot a construct that would allow me to implement recursion. Sure, workflows can call other workflows, but the number of simultaneously executing workflows could get out of hand, causing SharePoint to rain on that parade rather quickly.
Prerequisite Knowledge

It’s helpful at this point to step back a bit and think about what we’re trying to do. The feature I want in Nintex is basically the ability to call a function (“method” is more of an object-oriented programming label), but Nintex doesn’t provide the concept of a function. But what is a function, and how does recursion within functions normally work in a standard programming environment, you ask? (You ask a lot of questions, but that’s a good thing.)

Computer Science 101 and 201

When I was in college getting my bachelor’s degree in Computer Science, there were a couple courses I took that proved helpful when determining how to set this up. The first is Assembly Programming 101, where one learns exactly what happens behind the scenes when a function is called. The short answer is to use a data structure called a stack, which I learned about in another class, Data Structures 101. In assembly, when you want to call a function that requires a parameter, you “push” the parameter (using a register to store the value) onto the stack first. Then, once inside the function, you “pop” the value off the stack to use it. So, the next time you’re writing methods in C#, just think about all those stack operations going on behind the scenes on your behalf. :)

Helpful animation explaining stacks

So, now that you understand how a function call might work with a stack, how does recursion work? Instead of scaring you away with terms like “depth-first preorder tree traversal” (also from Data Structures 101), just know that we’ll need to keep a running list of tasks that have related tasks, and we’ll use a stack for that. Here’s the algorithm:

  1. Every time you find a task (parent) that has related tasks (children), push the parent to the stack.
  2. Loop over the stack, pop an item off, and get its children.
  3. Repeat steps 1 and 2 until the list is empt[...]
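In code form, the stack-based algorithm described above replaces recursion with an explicit stack. This is a hedged sketch in plain JavaScript (task names and fields are invented for illustration, not the Nintex implementation):

```javascript
// Iterative depth-first traversal with an explicit stack, mirroring the
// push/pop algorithm above. Each task stores an offset in days relative to
// its parent; dates are computed from a single known base date.
function calculateDates(tasks, rootName, baseDate) {
  // Index children by parent name for quick lookup.
  const byParent = new Map();
  for (const t of tasks) {
    if (!byParent.has(t.parent)) byParent.set(t.parent, []);
    byParent.get(t.parent).push(t);
  }
  const dates = { [rootName]: baseDate };
  const stack = [rootName];               // step 1: push the root "parent"
  while (stack.length > 0) {
    const parent = stack.pop();           // step 2: pop an item off
    for (const child of byParent.get(parent) || []) {
      // Apply the day offset in milliseconds to stay timezone-neutral.
      dates[child.name] = new Date(dates[parent].getTime() + child.offsetDays * 86400000);
      stack.push(child.name);             // children become parents in turn
    }
  }
  return dates;                           // step 3: loop ends when stack is empty
}
```

With the move-in example from the post, a "Set up utilities" task with offsetDays of -30 relative to "Move in" lands 30 days before the move-in date, and its own children chain off it in turn.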

Ext Js 4 presentation at VNext Dallas


I had the privilege of presenting a couple months ago at the VNext Dallas user group on my experiences with Ext Js 4 during a recent engagement at a client. The slide deck is embedded below. If you’d like a little more context than what you can get in the slides (I make the slides like this for a reason, you know), I would point you towards the video of the presentation, courtesy of the guys who make the existence of the video possible.

I’d like to thank Shawn Weisfeld especially for recording this session, and for always showing up at the user group meetings no matter the size!  His dedication to the community is unlike anything I’ve seen, folks.


Ext Js 4 - Slides (embedded slide deck)

“Prism is not a MVVM framework!” and other goodness from the MVVM Smackdown


A couple months ago I participated in a friendly “smackdown” of MVVM frameworks at the North Texas Silverlight User Group.  To sum up the exchange, I represented the Prism offering from the Microsoft Patterns and Practices group (which is awesome by the way for composite applications), Dave Yancey presented Jounce, Bonas Khanal presented MVVM Light, and Justin Weinberg presented Caliburn Micro. If you want more information about my provocative post title, you’ll need to get the context from my video in the list below.

WCF BasicHttpBinding does not always mean W3C SOAP 1.1


I spent a couple hours tonight troubleshooting why a vendor's SOAP 1.1 service couldn't talk to WCF's BasicHttpBinding. In this case I was getting the exception:

Server returned an invalid SOAP Fault.

Inner Exception: XmlException: Message=End element 'Fault' from namespace '' expected. Found element 'tla:billybob' from namespace ''. Line 2, position 568.

Was the service sending bad XML? What’s the deal? Well, the exception goes into more detail, complaining about an unexpected element.

The W3C says regarding SOAP 1.1 faults:

Other Fault subelements MAY be present, provided they are namespace-qualified.

Turns out the service is just fine according to the W3C, since these extra fault elements were namespace-qualified. However, BasicHttpBinding is more restrictive than the SOAP spec. Man, this is giving me a flashback of .NET being picky with date parsing! The issue is that .NET only supports SOAP 1.1 in this binding if it's also WS-I BP 1.1 compliant:

R1000: When an ENVELOPE is a Fault, the soap:Fault element MUST NOT have element children other than faultcode, faultstring, faultactor and detail.
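To make the mismatch concrete, here is a hypothetical fault of the kind described (the element name is taken from the exception above; the namespace URI is invented, since the real one isn't shown). It is legal under SOAP 1.1 because the extra child is namespace-qualified, but BasicHttpBinding rejects it under BP 1.1's R1000:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Something went wrong</faultstring>
      <!-- A namespace-qualified extra child: permitted by the SOAP 1.1 spec,
           forbidden by WS-I BP 1.1 R1000, hence the WCF XmlException. -->
      <tla:billybob xmlns:tla="urn:example:vendor">vendor-specific details</tla:billybob>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
```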

I calmly pointed out to said vendor that I thought it would be a good idea to update their SOAP endpoint to be a little more friendly with .NET. We’ll see!


Using SharePoint for Extranets


My article, Using SharePoint for Extranets, was just published in the September issue of Windows IT Pro magazine. In my post about importing files into SharePoint, I hinted that I had replaced a custom extranet with a WSS 3.0 extranet. This article is a brief summary of what I learned about configuring SharePoint as an extranet platform during that project.

Only after this experience did I decide that something needed to be written about this subject. Specifically, I spend quite a bit of time advocating zone address consistency (having 2 or more authentication zones use the same site address). I was a bit disappointed that SharePoint ignored IIS's IP address binding feature, which makes collaboration much easier when multiple zones are used, or when SSL is needed (especially for extranets).

It's brief because it had to be. As you can tell by reading some of my posts, I lean towards using more words rather than fewer to get my points across. When I first submitted the article, it was over 4,000 words. I had to cut 1,500 words to meet the submission requirements. Of course this almost required a complete rewrite, and it's most likely the reason that the online edition of the article contains an extra sidebar and 2 graphics that wouldn't fit in the printed edition.

The gracious editorial staff (thanks Gayle!) over at Windows IT Pro agreed to allow the public access to the online edition of the article for about 2 weeks so I can share it with you.


Integrating LexisNexis InterAction with Portals


Printer and reader-friendly version | Slide deck

This white paper discusses how to integrate the LexisNexis InterAction CRM into a Web-based portal alongside data from other systems. The reason for this paper’s existence is my presentation at the 2008 International Legal Technology Association (ILTA) conference. I was one of 3 panelists for the session titled “InterAction - A Different Perspective Using Your Firm’s Portal”. This paper and my presentation answer the How question, assuming that you already know about the What (InterAction) and the Why (the value gained by contextualizing business entity information in a portal). In order to best answer the How question, I’ve broken down the process into 3 steps: prepare, design and build. Specifically, I’m referring to preparing the integration, designing the presentation and building the solution.

Prepare the Integration

Before we can discuss how to integrate InterAction into a portal, we need to know some background information. First, we need to know what portals are and what integration is.

Portals

Originally, portals were simply entryways, such as doors or gates. More recently, information technologists have used the word to indicate a digital entry point for multiple information sources. Portals exist on a variety of platforms, but the ones most popular today are web sites. The popularity of portals has exploded recently because of their ability to present data from diverse sources in a unified, yet customized, way. Instead of requiring the user to access each data source individually, a portal can surface all of them on a single page, allowing a dashboard-type overview. Portals are typically centered on a topic or entity of importance. For example, portals can currently be found that bring together information related to news, sports and fitness. For public portals such as Yahoo or iGoogle, the entity is you.
As an example, imagine an enterprise portal site within your organization that, instead of displaying information customized to a user, displays information about one of your clients. A client's documents from your document management system might be displayed alongside their contacts from your contact relationship manager (CRM) and revenue information from your billing system. In order to present a page with all of this information, a portal needs to be intelligent enough to inquire about this client from each system one at a time. This information is then gathered and displayed on the screen in its assigned locations. The portal is able to do this only because it knows how to ask each system for the right data. And the reason it knows how to ask is because these systems are related to each other, or integrated in some way.

Integration

Systems integration could be explained as relationships between systems. Systems can integrate with one another only after they share something in common. This common information is usually in the form of entities, such as people, companies or projects. For example, a billing system contains the client name and its billing contact information. The CRM also contains this information about the client. In order to make these systems talk, a translation needs to be created to map one entity to another. This mapping information is usually stored in one of the systems, but it's possible to store it externally in an independent system.

Unique Identifiers

Within each system, data structures are usually in place to easily manage entity data. For example, in a billing system's database, instead of copying a client's name into every corner of the system related to a client, a unique identification (ID) number is created for the client and referenced. The use of an ID allows updated information to b[...]
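The entity mapping just described can be sketched in a few lines of code (shown here in Python purely for illustration; the system names and ID values are hypothetical, not from InterAction or any real billing system):

```python
# Hypothetical cross-system entity map: billing system client IDs -> CRM contact IDs.
# In practice this mapping lives in one of the systems or in an independent store.
ENTITY_MAP = {
    "BILL-1042": "CRM-77310",  # Acme Corp.
    "BILL-1047": "CRM-77355",  # Globex Inc.
}

def crm_id_for_billing_client(billing_id):
    """Translate a billing system ID into the matching CRM ID, or None if unmapped."""
    return ENTITY_MAP.get(billing_id)

# A portal page for a client uses the mapping to ask each system for the right data:
print(crm_id_for_billing_client("BILL-1042"))  # -> CRM-77310
```

The portal only needs the mapping and each system's ID scheme; it never has to compare client names across systems.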

The presentation of technical content


My right brain is sore, but I'm told that means it's working. If you are called upon to present technical content, and aren't sure of the best way to go about doing it, this lengthy blog post will hopefully save you hours of research and inspiration-searching. If you'd prefer to skip all the background information, feel free to jump to the top 3 things I learned.

My history with technical presentations

I've sat through many software development-related presentations over the years, and have even given a handful to small audiences. However, I've never given a presentation at anything like a conference before. That's about to change in a couple weeks, however, when I'll be speaking about portals, systems integration and web development at the 2008 ILTA conference. In preparation, I decided to throw out what little knowledge I had about how to create a good presentation and started researching the topic.

I was at the 2005 Microsoft Patterns and Practices Summit and saw Harry Pierson's architecture presentation. His presentation seemed a lot simpler than all the others since it lacked a page full of bullet points. During the presentation he mentioned that he was trying out a new presentation approach he had recently read about in Cliff Atkinson's book Beyond Bullet Points (BBP). I didn't run out and buy the book back then, but I did get an idea about the concept from the free material available through Cliff's Webcasts and sample chapters.

Researching for the presentation

Collecting information

I started my research by looking up BBP to see what Cliff's been up to the last couple years. Since he recently published an updated edition of BBP for PowerPoint 2007, I decided it was a good time to buy. I created a new web feed folder for presentations in Snarfer and added his blog.

Next, I let Google guide me to the most popular presentation blogs, which immediately led me to Garr Reynolds's blog Presentation Zen.
I discovered that Garr recently wrote a book about presentations, also titled Presentation Zen (PZ), so I added it to the cart. If you'd like a quick overview of his points, watch the video of the metapresentation he made to Google employees. His post on technical presentations was very relevant to my needs, as was a link he provided to an awesome booklet (partially funded by our federal tax dollars) on the subject by the Oceanography Society.

From these first two sources, I found references to Nancy Duarte's presentation work and her blog. Her team was the one that helped out Al Gore with An Inconvenient Truth. Nancy did a webinar on how to create powerful presentations for VizThink, a new visual thinking community that loves thinking visually and creating mind maps. Nancy also recently finished working on her book, slide:ology, but it wasn't published until this month, so I couldn't get it in time.

Around the time I was doing my primary research in July, Dan Roam's book, The Back of the Napkin (BOTN), was still fresh off the press. It created quite a buzz in the presentation circles, so I thought it was worth looking into. Almost everyone has something good to say about it, and the cover just looks cool. It wasn't a book on presentations specifically, but it was right up my alley in terms of relevance. Considering my topics, I was intrigued by the possibility of being able to communicate them using simple pictures.

Garr's book begins with a presentation-style foreword (a one-page series of slides) by Guy Kawasaki. Guy is a former Apple fellow and is considered by his peers to have this whole speaking-presenting thing down. Guy's site was featured in his blog (of course you'd expect some self-promotion) as a service that makes it e[...]

ASP.NET Enterprise Single Sign-On with BlackBerry Smartphones


Update 7/27/13: BlackBerry has removed the developer journal from their web site (it's quite old by now), but for the sake of removing broken links and keeping this here for sentimental reasons, the download links are below.
My article about single sign-on (SSO) using ASP.NET and BlackBerry devices was just published in the August 2008 BlackBerry Developer Journal.

Download the article
Download the sample code

I was able to work a lot of concepts into the article, the following being a small sample of the variety.

One of the cool takeaways included in the sample code is a method in C# that simulates ASP.NET's Active Directory authorization check.
Writing is so much more time consuming than I originally imagined. It was definitely a learning process. I now have much more of an appreciation for those that set out to write an entire book.

Extracting assemblies from the Global Assembly Cache in Windows Explorer


Here's a quick tip I accidentally discovered about extracting assemblies from .NET's Global Assembly Cache (GAC). In the past, when I wanted to extract .NET assemblies from the GAC, I was accustomed to opening a command window and executing copy commands. It's tedious, but it works. This is necessary because of shfusion (shfusion.dll), a Windows Explorer extension that controls the interaction with the %windir%\assembly folder and prevents extraction of its assemblies. You can add assemblies into the GAC from Windows Explorer, but you can't extract them.

I've had to call Microsoft support in the past for one issue or another, and on one call I watched them unregister shfusion.dll (regsvr32 /u shfusion.dll) in order to extract multiple assemblies out of the GAC. However, I remember trying this on another machine one time and it didn't quite work.

I recently experienced the joy of needing to extract a handful of assemblies out of the GAC to manually reinstall an application. After using the command console a couple times, I thought I'd try to open one of the paths from the run box and to my amazement it worked.  It opened the GAC in Windows Explorer and bypassed shfusion. I've confirmed this works on XP, Vista and Server 2003.

Finally, the tip

Open the run command box (or quick search box in Vista) and enter the following.
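The path itself did not survive the feed extraction, so what follows is an assumption based on the context above rather than the author's original text: the GAC's physical subdirectories can be opened directly, which bypasses shfusion. The commonly cited example is:

```
%windir%\assembly\GAC_MSIL
```

From there you can browse and copy assemblies in plain Windows Explorer.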



Parsing dates for web feeds in .NET


During the initial development of FeedFly last year, I was exposed to the evils of date parsing for web feeds. While working on release 0.2.1, I revisited this issue when fixing a date parsing bug. Before I forget all of this, it helps to write it down.

Feed formats and dates

If you decide to start syndicating a web feed these days, you're basically limited to the two most popular feed formats: RSS or Atom. RSS seems to have the upper hand right now in terms of popularity, but some believe Atom will eventually take over because it's being pushed as an official standard. For the purposes of a web feed, both of these formats need to be able to store and transmit dates and times. Each format has its own date and time format, however.

RSS

RSS feeds are supposed to use the date format outlined in section 5 of RFC 822 (published in 1982). Here are some example RFC 822 dates:

Wed, 28 May 08 07:00:00 EDT
Thu, 10 Apr 08 02:30:00 UT
Tue, 10 Jun 08 05:15:00 -0600
Thu, 17 Apr 08 22:45:00 GMT

In 1989, RFC 1123 was created to update the RFC 822 format to use 4-digit years (section 5.2.14). Also included was a recommendation that all dates should be limited to GMT (universal time) or include a numerical offset. So, for RFC 1123's purposes, only the last date in the above list should be used. But it's supposed to be backwards compatible. Don't take my word for it, though:

There is a strong trend towards the use of numeric timezone indicators, and implementations SHOULD use numeric timezones instead of timezone names. However, all implementations MUST accept either notation. If timezone names are used, they MUST be exactly as defined in RFC-822.

I interpret this to mean that while RFC 1123 strongly encourages you to use numerical offsets or GMT, you could ignore the recommendation and still be in compliance. Because of this, the following dates should be considered to be in both RFC 822 and RFC 1123 format.
However, only the bottom 2 are recommended by RFC 1123.

Wed, 28 May 2008 07:00:00 EDT
Thu, 10 Apr 2008 02:30:00 UT
Tue, 10 Jun 2008 05:15:00 -0600
Thu, 17 Apr 2008 22:45:00 GMT

Researching the history of RSS led to some dark places on the web. This format was evidently born out of a lot of heated "collaboration". All versions of RSS history (controversial or not) I could dig up stated it began in 1997. The reason I looked up the history was to figure out why its flavor of RFC 822 did not follow the recommendations in RFC 1123, or make any references to it. Well, I didn't find an answer for that, and it's even more perplexing given that the HTTP specification was recommending RFC 1123 exclusively as early as 1996. Here's a revealing excerpt:

Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123... [this date format] is preferred as an Internet standard and represents a fixed-length subset of that defined by RFC 1123 [6] (an update to RFC 822 [7]).

Why am I spending so much precious time on this? .NET only parses RFC 822 dates according to RFC 1123's recommendations, that's why. They (the .NET team) are in safe territory here (I guess) since they can simply point to HTTP's exclusive use of RFC 1123, but more on this later.

Atom

The Atom format specifies that its dates must conform to RFC 3339, which defines a simple profile of the ISO 8601 date format. Just looking at the number of the RFC indicates it's more recent. Proposed in 2002, RFC 3339 is 20 years younger than RFC 822, so you'd expect it to include some "lessons learned" from previous date formats. Here is an example date in RFC 3339 format: 1985-04-12T23:20:50.52Z

The benefit of this date format i[...]

Icon formats for Windows Mobile


About 6 months ago, I created a custom icon for my Windows Mobile application, FeedFly. I think I used one of those "We'll make your icon for free from an image! It's totally FREE!!!" web sites. I chased that down with a trial edition of an icon editor (it might have been Microangelo) to make a couple modifications.

My icon turned out great, or at least I thought it did until I deployed it to my device. The transparent background wasn't transparent. It was either white (as shown on the start menu graphic below) or this weird smoky gray color (shown on HTC's Home Today Screen plug-in next to the fixed one). I tried opening the icon in several different applications (Visual Studio, IrfanView, and even old MS Paint). Each one told me I actually had a transparent background (in more or less words).

Since I'm working on a new release for FeedFly, I thought I'd once again look for a solution to my icon transparency woes. A couple days ago, I stumbled upon IcoFX and installed it to see if it could be of any help. Well, it was. It turns out that the handy web site I used to generate the icon provided me with a 32-bit color icon along with the alpha transparency included within the original image. Using IcoFX, I converted my icon to 256 colors (8 bit) and removed the alpha transparency. After deploying to my device, it is shown correctly now.

On a related note, let me start by saying I realize that the target audience for a tool such as an icon editor is a small slice of the general user population. However, for an operating system that uses icons so extensively, I still can't believe an icon editor is not included in Windows. I'm on a budget, and I can't justify purchasing something like Photoshop just to edit an icon.
I'm nowhere near an artist, so I definitely won't make icon editing a habit (I get pretty frustrated as it is), but I do end up fiddling with an icon once every other year it seems (mostly favicons lately).

Finally, this post isn't meant to imply that higher bit depths won't ever work for icons (read this for more information), but to hopefully help you avoid some frustration trying to track this issue down. Just make sure your icon is not 32 bit with alpha.

Technorati Tags: Windows Mobile,Icon[...]

Importing files into a SharePoint document library using regular expressions and WebDAV


I just finished writing a utility to export a folder hierarchy of files from my existing custom extranet to a SharePoint document library. The custom extranet was database-driven and allowed the user to name a file or folder whatever he or she wished, up to a maximum of 500 characters. When I wrote this extranet 6 years ago in classic ASP, I'd just HTML encode whatever name the user wished and store it in the database. Whenever a folder or file was retrieved, it was always by using the ever-so-not-user-friendly URL parameter "id=".

I already knew I would need to remove restricted characters that SharePoint does not allow from my folder and file names. Furthermore, SharePoint's document libraries actually display the full folder path in the URL, which means I'll need to be concerned about the total path length. My migration plan was to build a physical folder hierarchy for staging the files, then use WebDAV (SharePoint's explorer view for document libraries) for importing the hierarchy into SharePoint within Windows. This method allows me to keep the utility focused on a simpler task than actually importing the files into SharePoint, and makes sure I don't have to worry about server timeouts.

Naming restrictions

SharePoint has naming restrictions for sites, groups, folders and files. Since I'm only interested in folders and files, only the following restrictions will be considered.

Invalid characters: \ / : * ? " ' < > | # { } % ~ &
You cannot use the period character consecutively in the middle of the name
You cannot use the period character as the first or the last character

Someone already familiar with this topic will notice that I added the apostrophe to the official restricted character list.
During my own testing, SharePoint complained when I uploaded a file with an apostrophe, so I added it to the list.

Length restrictions

Besides naming restrictions, SharePoint also has the following length restrictions (from KB 894630).

A file or a folder name cannot be longer than 128 characters.
The total URL length cannot be longer than 260 characters.

128 character limit for folders and files

Regarding the 128 character limit, you can't use SharePoint's UI to get to this limit. The text box's maxlength property is set to 123 for both folders and files. I don't have any inside sources, but my guess is that the SharePoint team did this to make sure the total file name would not exceed 128 characters if the extension was 4 characters (as is the case with Office 2007 file formats like docx and xlsx). The odd thing is that the folder text box is limited to 123 characters as well. However, if you put the document library into Explorer view, you can rename a folder to use the full 128 characters. I bet there's some reuse going on between the data entry screens for the file and the folder in this case (also something a programmer on the SharePoint team might want to do).

260 character limit for URLs

I've done some WebDAV importing to this particular SharePoint farm in the past, and I'm pretty sure I ran into paths close to the 260 character limit, so I investigated this. I found several instances where the total URL exceeded 260 characters. KB 894630 mentioned above also says:

To determine the length of a URL, .... convert the string in the URL to Universal Character Set (UCS) Transformation Format 16 (UTF-16) format, and then count the number of 16-bit characters in the string.

However, it should probably say something like "decode the URL first, then count the characters" to make it easier to understand. I created a folder hierarch[...]
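The title mentions regular expressions, but the extracted post cuts off before showing any. As a hedged sketch only (in Python rather than the original utility's language, with a function name of my own invention), the naming rules listed above could be applied like this:

```python
import re

# Characters SharePoint rejects in folder/file names (apostrophe added per
# the testing described above).
INVALID_CHARS = re.compile(r'[\\/:*?"\'<>|#{}%~&]')

def sanitize_sharepoint_name(name):
    """Apply the naming and length restrictions listed above to one name."""
    name = INVALID_CHARS.sub("", name)   # strip invalid characters
    name = re.sub(r"\.{2,}", ".", name)  # collapse consecutive periods
    name = name.strip(".")               # no leading/trailing period
    return name[:128]                    # 128-character name limit

print(sanitize_sharepoint_name('Q3 Report: "Final"...v2.docx'))  # Q3 Report Final.v2.docx
```

The 260-character URL limit still has to be checked separately against the full decoded path, since it depends on where the staged hierarchy lands in the document library.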

Creating a batch of Active Directory accounts for SharePoint with the help of Excel


I recently deployed an extranet using SharePoint and Active Directory (AD). At my company, when a new extranet is requested, the request is typically accompanied by a list of new user accounts that should be created as well. Sometimes this list can contain over 20 accounts.

Using the Active Directory Users and Computers applet is not my favorite interface for creating accounts, and especially not for a large batch of them. I found myself having to edit an account 3 times to get it to appear the way I wanted in AD and in SharePoint. I looked into my scripting options, and ran across dsadd. This command has everything I need in order to create AD accounts in one step. However, it's very tedious to type the command given the customization I wanted, and to remember the syntax rules.

Using Excel to create dsadd commands

To make this process easier, I created an Excel spreadsheet to generate the dsadd command based on a collection of cells. I've used Excel several times in the past to generate batches of commands or SQL statements. Even if you don't need a dsadd command, this spreadsheet is a useful reference for building other batches of commands.

After downloading the spreadsheet, you'll need to fill in the data for your company's Active Directory and the OU you'll be creating the accounts in. You'll then probably want to extend the cell formulas beyond the single row that's included. After you've filled in everything, copy the cells in the script column and paste them into a text editor like Notepad. Save the result as a cmd file, and you can double-click it to execute all the scripts. You can add or remove columns to accommodate the other dsadd parameters. Regarding additional information fields, I was only interested in company and department.

Tip: If you'd like to save the results of the commands, add an append redirection operator to the call to the script file from a console screen. This would be helpful for finding out what went wrong if some of your commands failed.
For example,

createaccounts.cmd >> c:\dsadd.log 2>&1

will send all the commands and any errors to the dsadd.log file instead of the console. It's important to remember to add 2>&1 at the end, since dsadd sends errors to stderr, not stdout.

Adding a batch of user accounts to a SharePoint site collection

After the accounts have been added to AD, you can reuse the spreadsheet to add multiple accounts to SharePoint. Following are the steps I've used to translate a column of user account names into a semicolon-delimited list for the add user screen in SharePoint.

1. Copy the column of cells that have the account name (Domain\username) and Paste Special into Word to avoid it creating a table. You only want unformatted text.
2. Replace each line feed character (type ^p into the find box) with a semicolon.
3. Copy and paste into the SharePoint add user screen.

Technorati Tags: SharePoint,Excel,Active Directory[...]
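If you'd rather not maintain a spreadsheet, the same command generation can be scripted. Here's a rough sketch in Python (not the author's spreadsheet; the OU, domain and account values are made-up placeholders, and only the dsadd parameters named above are shown):

```python
# Generate one dsadd command per user, mirroring the spreadsheet's script column.
# All names, the OU and the domain below are hypothetical placeholders.
users = [
    {"first": "Jane", "last": "Doe", "samid": "jdoe"},
    {"first": "John", "last": "Roe", "samid": "jroe"},
]
OU = "OU=Extranet Users,DC=example,DC=com"

def dsadd_line(u):
    dn = f'"CN={u["first"]} {u["last"]},{OU}"'
    return (f'dsadd user {dn} -samid {u["samid"]} -fn {u["first"]} '
            f'-ln {u["last"]} -display "{u["first"]} {u["last"]}" '
            f'-company "Example Co" -dept "Legal"')

script = "\n".join(dsadd_line(u) for u in users)
print(script)  # paste into Notepad and save as createaccounts.cmd

# The semicolon-delimited list for SharePoint's add user screen, without the
# Word find-and-replace step:
sharepoint_list = ";".join(f"EXAMPLE\\{u['samid']}" for u in users)
print(sharepoint_list)  # EXAMPLE\jdoe;EXAMPLE\jroe
```

The same loop extends naturally to other dsadd parameters (just as the spreadsheet's columns do) by adding keys to each user dictionary.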

A Windows Mobile feed reader: FeedFly


Yesterday I published my first open source project, FeedFly. FeedFly was the focus of my senior project course last semester. This class is required to graduate with a bachelor's in Computer Science at my university.

We were given the choice to write any kind of software application, but it had to get enough votes in order to form a team. I was lucky since my idea got a couple votes. About 300 man hours later, we gave our final presentation and I nervously gave a demo. Besides ActiveSync messing with my device, the demo was fine. I had to leave it on, since I was relying on it for my data connection. The room we were presenting in was like a steel box with no windows.

Including myself, we were a team of 3 programmers. I attempted to manage the timeline of the project using a modified version of Scrum. It was as effective as it could be given we couldn't do daily stand-ups. Plus, it made the Gantt chart something very easy to look at: this chart shows our 1-week architecture sprint at the beginning, 3 2-week development sprints, and a 1-week documentation and presentation sprint. However, generating the schedule wasn't easy. I had to enter our weird college work hours and account for our school holidays for Microsoft Project to get the end date just right.

I'm very happy with how FeedFly turned out. If I wasn't, it wouldn't have been published as an open source project. I learned a lot about the .NET Compact Framework during development, and I think the project serves as a good example application that implements some best practices. If you're a compact framework developer, I'd love to get your feedback about it.

Oh, and I got an A on the project by the way.

Technorati Tags: .NET Compact Framework,Windows Mobile,Feed reader,Feed,RSS,Atom[...]

RSS can't catch up to email yet


When I first learned about XML, I thought it sounded like a great idea. I had a hard time coming up with an excuse to use it in my earlier programming days (it would have involved a rewrite, or I couldn't cost-justify the implementation of that cool, new self-describing configuration system), but .NET changed all of that (for me, being a Microsoft-experienced dev) and now XML is extremely easy to work with. Even Microsoft is using it in Office 2007 for all their new file formats. For example, did you know you could rename a docx file to zip, then unpack and inspect it? What you'll see is a folder hierarchy of XML files, which could be edited in Notepad if you're so inclined. If you eat XML for breakfast, you don't even need Office to create Office documents. I think we would all agree that XML has definitely arrived and is here to stay.

I mention RSS in the title because this particular XML web feed standard has become so popular it has become synonymous with "web feed" itself. The following video pretty much sums up the goals of using web feeds instead of the typical models of gathering your information from the internet.

In short, web feeds are a perfect application of the theory of XML. They make personal blogs just as "subscribe-able" as our magazines and radio shows, available at any time via podcasts. I should probably spend an entire post praising podcasts since I'm always listening to them now. Ever since my phone became my mp3 player, my car stereo hasn't been turned on (I have one of those after-market ones that plays mp3 disks, too. It always reads "Standby"). I fear this introduction was a bit lengthy, but I'm trying to make everyone in my audience happy (and may it never change).

Why the negative title about RSS? My point is that even though web feeds are more popular now than ever before, the barrier to entry is still too high to participate in all of their glory. I'm sure you know plenty of non-technical computer users out there. Are they subscribing to feeds?
Probably not. Last semester, I graduated with my Computer Science bachelor's degree (finally - it took me 7 years with my day job). My senior project was a Windows Mobile feed reader named FeedFly (I'm in the process of creating an open source project for this - stay tuned). During our final presentation, in a room full of technical experts and industry advisors, we asked how many of them actually subscribed to blogs (indicating they regularly use a feed reader). I would say the response was less than 10 percent, and I was one of those with a raised hand.

I created a training session on blogging for my company 3 years ago. I was able to convince one of the teams to replace their email newsletter with a blog, and boy did it take off. It's still on the first page of a Google search without having to pay any SEO "specialist" vendors (I don't like most of these - another post). In this training, I predicted that web feeds would take off once Internet Explorer 7 and Outlook 2007 started shipping with built-in feed readers. I was wrong. Once I used the Outlook 2007 RSS client, I understood why. It's nowhere near as integrated as it needs to be in order to get everyone to use it. Plus, it's not easy to work with once you have a lot of subscriptions. If you're curious what reader I use, it's Snarfer with the Bloglines synchronization feature. Oh, and Doppler for the podcasts.

What's the solution for the lack of adoption so far? The best solution would be to purchase your own domain name for hosting your blog, so you [...]