.Net Core App Error: Can not find runtime target for framework

Thu, 01 Dec 2016 08:43:17 GMT

Originally posted on:

This has been annoying me for a while, but as usual the solution was simple. It turns out Visual Studio 2015 removes some key entries from project.json when you update Microsoft.NETCore.App using the NuGet Package Manager.

Before updating to Microsoft.NETCore.App v1.1.0 project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

After updating to Microsoft.NETCore.App v1.1.0 project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": "1.1.0"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Which causes the error:

Can not find runtime target for framework '.NETCoreApp,Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'.

So to fix it, just add the missing "type": "platform" wrapper back into project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}


Keyboard short-cut to get your lost Google Chrome tabs back

Wed, 30 Nov 2016 08:51:18 GMT

Originally posted on:


You just accidentally closed your Google Chrome browser, and oh boy, you lost all your open tabs and windows. Google Chrome has a really helpful short-cut to get them all back. You also have similar options in Firefox and other browsers, but you have to use their menu system to “Reopen closed tabs”.

The Solution:

In Google Chrome simply use this short-cut keyboard hotkey:

Ctrl + Shift + T (Cmd + Shift + T on a Mac) reopens the most recently closed tab; press it repeatedly to restore more.

If you’re not using Google Chrome, my question is: “Why not?”


Mapper vs Mapper: Performance Revisited

Tue, 29 Nov 2016 13:44:53 GMT

Originally posted on:

I recently wrote a blog on the performance of various object-object mappers, but it turns out my tests weren't quite fair and equal at that point. Specifically:

Creating Empty Collections vs... Not

ExpressMapper, Mapster and ValueInjecter were leaving target collections as null if their source value was null, which means AgileMapper, AutoMapper and the manual mappers I wrote were creating millions (and millions) of collection objects the other three mappers weren't. No fair!

Mapping Objects vs Copying Object References

In some instances, Mapster and ValueInjecter were copying references to the source objects instead of mapping the objects themselves. The ValueInjecter usage was more my mistake than anything (its Mapper.Map(source) doesn't perform a deep clone by default), but I raised a bug for the behaviour in Mapster, and it's going to be fixed in an upcoming release.

The Updated Results

So I've updated the relevant mapper classes and re-measured. As before, these are the total number of seconds to perform one million mappings:

                      Constructor  Complex      Flattening  Unflattening  Deep
Manual                0.00890      1.76716     0.05548     0.05300       0.50052
AgileMapper 0.8       0.17826      4.33683     0.38902     0.57726       1.15797
AutoMapper 5.1.1      0.15132      7.28190     0.36671     -             0.95540
ExpressMapper 1.8.3   0.21580      15.48691 ^  0.52166     -             6.56550 ^
Mapster 2.5.0         0.12014      2.50889 *   0.29629     -             1.69947
ValueInjecter         0.47637      101.70602 ^ 12.67350    13.71370      28.05925

I've marked the major changes with a ^, and put a * next to Mapster's deep cloning result, because some of the objects in that test are having their references copied instead of being cloned; it's likely to be significantly slower if that were not the case.

Points to Note

  • Mapster is still the fastest at churning out simple objects - it just gets slower when mapping nested object members
  • ValueInjecter's reflection-heavy approach is very costly, but as I understand it that is how it's intended to be used - as before, I'm happy to be corrected
  • I'm pleased to have improved AgileMapper's performance on every test - the complex type / deep-cloning is nearly 20% faster than in v0.6 :)

[...]
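The measurement approach above (total elapsed seconds for one million mappings) can be sketched in JavaScript. This is an illustrative harness only, not the author's actual C# benchmark, and `manualMap` is a made-up stand-in for a real mapper:

```javascript
// Illustrative timing harness: run a mapping function one million times
// and report the total elapsed seconds, as the tables in this post do.
function measureSeconds(mapFn, source, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    mapFn(source);
  }
  return (Date.now() - start) / 1000; // total elapsed seconds
}

// A trivial "constructor mapping": build a target object from a source property.
// `manualMap` is a hypothetical stand-in for a real object-object mapper.
const manualMap = (src) => ({ value: src.value });

const seconds = measureSeconds(manualMap, { value: 42 }, 1000000);
console.log(`1,000,000 mappings took ${seconds}s`);
```

The same loop, swapped over each mapper's map call, is how per-cell numbers like those above get produced.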


Sun, 27 Nov 2016 15:56:36 GMT

Originally posted on:

This weekend…..


I’ve realised the truth that Docker is the future.   Please take a few moments to inhale


I’ve built an MS SQL Server 2016 instance in Docker, connected to from another container running Swift.


HTTP logging using CustomTraceListener from Enterprise library

Sun, 27 Nov 2016 06:12:13 GMT

Originally posted on:

This was more like a debugging-related topic for me. The overall knowledge is helpful to understand how the CustomTraceListener class can be used to build your own tracing mechanism. My use-case was to trace all the HTTP requests in and out of my application. One way to do that was using an HttpModule, but as I never intended to do any re-routing or change the processing etc., I did not find an HttpModule necessary. I was looking for a more silent way of doing things in the background.

So here it is:

1. You need to have the Enterprise Library for logging.
2. For writing to a file, better use log4net.
3. Develop a simple library project with the following code inside it:

using log4net;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners;
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace TraceLib
{
    //[ConfigurationElementType(typeof(CustomTraceListenerData))]
    public class DebugTraceListener : CustomTraceListener
    {
        private static readonly ILog log = LogManager.GetLogger(
            System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        public DebugTraceListener()
            : base()
        { }

        public override void Write(string message)
        {
            log.Debug(message);
        }

        public override void WriteLine(string message)
        {
            log.Debug(message);
        }

        public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data)
        {
            if (data is LogEntry && this.Formatter != null)
            {
                this.WriteLine(this.Formatter.Format(data as LogEntry));
            }
            else
            {
                this.WriteLine(data.ToString());
            }
        }
    }
}

This will be your tracing code; build it and reference it in your web project like a normal "add reference". The configuration to add within your web.config is as follows:

......

Mostly you're done; configure the log4net part in the web.config so that you can write the data to a file. Run the web app and fire a web-service call or some code that triggers an HTTP call. You should be able to see all the relevant data inside the "[...]

Can’t connect to Oracle XE on Windows 7

Sat, 26 Nov 2016 09:29:48 GMT

Originally posted on:

The install went fine. I can connect when I use Start > Oracle XE > Run SQL Command Line; however, I cannot connect when using a third-party client.

Running tnsping xe or lsnrctl status also shows things in a failed state.

I think the solution has to do with some funky networking stuff, where sometimes I’m VPN-ed into a work network and sometimes not. If I execute the hostname command I see MyMachine. If I ping localhost I see the full DNS name, but if I ping the full DNS name, it is not found.


  • Update network/ADMIN/tnsnames.ora and replace the full DNS name with localhost
  • Ditto for /ADMIN/listener.ora
  • Update drivers\etc\hosts and add entries for localhost, MyMachine and
  • Reboot


      Roslyn XsltCompile issues

      Wed, 23 Nov 2016 15:56:05 GMT

      Originally posted on:

      I recently was in the process of upgrading a VS2013 project to VS2015.  I created a new VB Web Application project and it added the Roslyn compiler components.

      In the project were several xsltCompiledTransform calls. There were no compile errors, but during runtime when the xslt.Load method was called, an error was raised:

      Exception Details: System.Xml.Xsl.XslLoadException: Type 'Global.System.Web.HttpRequest' is not defined.

      It took me a while to figure out that the xslt sheet contained an msxsl:script element that was compiled at runtime. I was able to track down the compiled files in %user%/AppData/temp.  The compilation was using the Roslyn compilers, but for some reason the System.Web assembly was not loaded during the compilation.

      So how to fix this error?  I found several ways.
      1. Remove the Roslyn compiler from the web.config.  Comment out the element and the runtime compiler will use the old .NET Framework compiler.
      2. Add   to the msxsl:script element
      3. Add "/R System.Web.dll" to the web.config compiler compilerOptions.


      .NET Standard and Testing Partially Trusted Code

      Wed, 23 Nov 2016 08:24:49 GMT

      Originally posted on:

      Libraries written for .NET Standard can run on multiple platforms – with more to come in the future – so it's worth considering how (or if) they run in partially trusted environments. Having implemented support in both ReadableExpressions and AgileMapper, here are some pointers on how.

      What is Partial Trust?

      Managed .NET code runs with a set of permissions, used to determine the actions it can perform. If there are no restrictions, the code runs in Full Trust; if there are restrictions, it's Partial Trust. For example, the following class uses reflection to access the underlying _items array in a List<T>:

      public static class ListUnwrapper<T>
      {
          public static T[] GetItems(List<T> list)
          {
              var underlyingArray = typeof(List<T>)
                  .GetField(
                      "_items",
                      BindingFlags.NonPublic | BindingFlags.Instance)
                  .GetValue(list);

              return (T[])underlyingArray;
          }
      }

      Accessing the value of this private field requires ReflectionPermission, so when the above code is executed, the environment in which it is executing must be granted that permission. Without it, a MemberAccessException is thrown.

      Testing in Partial Trust

      There are a few steps to testing code in a Partial Trust environment:

      Enable Partially-Trusted Callers

      Add the assembly-level AllowPartiallyTrustedCallersAttribute to the AssemblyInfo.cs of the assembly under test - this declares that the assembly can be used from Partial Trust environments:

      [assembly: AllowPartiallyTrustedCallers]

      Write a Test Helper

      Tests can be executed in partial trust by setting up a partially-trusted AppDomain, having that AppDomain create an instance of a class containing your test methods (your test helper), and executing those methods via a remote proxy from a Full Trust test class.

      For example, a basic test helper for the ListUnwrapper would look like this - it derives from MarshalByRefObject to enable remoting from the Full Trust domain into the Partially-Trusted one:

      public class ListUnwrapperTestHelper : MarshalByRefObject
      {
          public void TestGetItems()
          {
              var list = new List<int> { 1, 2, 3 };

              var unwrapped = ListUnwrapper<int>.GetItems(list);

              Assert.True(list.SequenceEqual(new[] { 1, 2, 3 }));
          }
      }

      Write a Partial Trust Helper Method

      Your full-trust test class needs a helper method with which to create a Partially-Trusted domain and execute the test helper's test methods. Something like this:

      public static class PartialTrustHelper<THelper>
          where THelper : MarshalByRefObject
      {
          public static void Execute(Action<THelper> testAction)
          {
              // Just a wrapper method for void tests:
              Execute(helper =>
              {
                  testAction.Invoke(helper);
                  return default(object);
              });
          }

          public static TResult Execute<TResult>(Func<THelper, TResult> test)
          {
              AppDomain partialTrustDomain = null;

              try
              {
                  // Use the untrusted Internet Zone:
                  var evidence = new Evidence();
                  evidence.AddHostEvidence(new Zone(SecurityZone.Internet));

                  // Setup a permission set from the same untrusted Zone:
                  var permissions = new NamedPermissionSet(
                      "PartialTrust",
                      SecurityManager.GetStandardSandbox(evidence));

                  // Setup the new AppDomain with the same root directory
                  // as the current one:
                  var domainSetup = new AppDomainSetup { ApplicationBase = "." };

                  // Create a partially-trusted domain:
                  partialTrustDomain = AppDomain.CreateDomain(
                      "PartialTrust",
                      evidence,
                      domainSetup,
                      permissions);
      [...]

      Mapper vs Mapper: Performance

      Sun, 20 Nov 2016 10:20:35 GMT

      Originally posted on:

      The set of test results in this blog didn't turn out to all be based on fair comparisons - I've updated them and re-measured. I'm leaving this blog in place as it describes the tests.

      This is the first in a series of posts comparing a subset of the available mappers. I'm going to compare performance, features and ease of use, and I'll turn those words into links as I write the blogs. This blog is on that favourite thing we're not supposed to obsess too much about (in programming) - performance.

      The Mappers

      The mappers I'll be comparing are:

      • AgileMapper - my mapper project, which I first published last month. AgileMapper focuses on ease of use, flexibility and transparency.
      • AutoMapper - 343 thousand downloads of the latest version; pretty safe to say you all already know about AutoMapper, right? Not much else to say, then :)
      • ExpressMapper - a ‘lightweight' mapper, first written as a faster alternative to the AutoMapper 4.x.x series.
      • Mapster - another ‘lightweight' mapper, written to be “kind of like AutoMapper, just simpler and way, way faster” (quoted from their NuGet page).
      • ValueInjecter - written for flexibility, and supports unflattening as well as flattening.

      The Tests

      The performance test project is a console project based on the AutoMapper benchmark which performs each of the following, for each mapper, 1 million times each:

      • Constructor mapping - creating a POCO with a single constructor parameter from a POCO with a matching property
      • Complex mapping - deep cloning a Foo POCO with various kinds of value type properties, multiply-recursive Foo, List and Foo[] properties, and IEnumerable and int[] properties
      • Flattening - mapping from a POCO with nested POCO properties to a POCO with all value type (and string) properties
      • Unflattening - mapping from a POCO with all value type (and string) properties to an object with nested POCO properties
      • Deep mapping - mapping a POCO with nested POCO and POCO collection properties onto a differently-typed POCO with corresponding properties

      An aside about Unflattening

      As it turned out, only AgileMapper and ValueInjecter supported the unflattening scenario. AutoMapper has limited configurable support for unflattening, but can't - or at least I couldn't figure out how it could - map to two different nested POCO properties of the same type. Not a knock against AutoMapper - it's just not what it's made for.

      The Results!

      I bet at least some of you skipped to this bit :) That's ok! :) Here are the results I got - the numbers are the total elapsed seconds for 1 million mappings of each type. I used the latest versions of each mapper.

                            Constructor  Complex   Flattening  Unflattening  Deep
      Manual                0.00909      1.74734   0.05565     0.05247       0.50256
      AgileMapper 0.6       0.19113      5.13823   0.38389     0.58581       1.17215
      AutoMapper 5.1.1      0.15574      6.24280   0.36955     -             0.95390
      ExpressMapper 1.8.3   0.20984      5.11046   0.52418     -             1.58197
      [...]

      Gunnar spectacles, no Warranty, no sales support.

      Fri, 18 Nov 2016 09:51:08 GMT

      Originally posted on:

      If you use your phone, PC and laptop for long hours at work, you may be searching for a solution to ease eye strain.


      For some people f.lux works fine. I have it installed on both my desktop and laptop. If I can’t get it on a new device, I can feel that something is missing from my system.


      I bought Gunnar spectacles from Amazon India. First of all, the package arrived covered in dust, but it was packed well (Amazon packs everything well). I didn’t have any problem with the packaging because the product was safe inside it.


      After a few days I noticed small spots on the Gunnars. I reported this to them on Facebook. They disabled ratings just after mine. They have paid for ratings; I can see 39 people gave them ratings. So this company pays for Facebook ratings, which is totally fake.


      I emailed them but got no reply. Messaged them on Facebook, but no reply. Tweeted at them, but still nothing in my inbox.


      1. If you look closely, you will find that all the people talking it up received free copies of the Gunnars. Gunnar gives them away free for marketing.

      Plenty of marketing, but no support at all. No one replies when you see a spot on your Gunnar spectacles.


      Why is there no comment on Facebook? Why remove all those comments? Do you see any other company doing this with comments? They disabled the rating option right after my rating. After I bought my spectacles, they discontinued that listing on


      Be careful with a company like this: too much money is spent on marketing, but it’s no good if you find a problem with the spectacles.


      JavaScript–Adding months or days to a date

      Fri, 18 Nov 2016 09:50:06 GMT

      Originally posted on:


      Add months or days (or other increments) to a date in JavaScript.


      var current = new Date();

      var threeMonthsInTheFuture =  new Date(new Date(current).setMonth(current.getMonth() + 3));

      for days use:


      for hours use:



      and so on….
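The elided "days" and "hours" snippets presumably follow the same pattern as the months example; here is a sketch using the standard Date setter/getter pairs (the variable names are my own):

```javascript
// Same pattern as the months example, using the matching
// setter/getter pairs on Date (setDate/getDate, setHours/getHours).
var current = new Date();

var threeDaysInTheFuture =
    new Date(new Date(current).setDate(current.getDate() + 3));

var threeHoursInTheFuture =
    new Date(new Date(current).setHours(current.getHours() + 3));
```

The inner `new Date(current)` copy matters: the set* methods mutate the date they are called on, so copying first leaves `current` untouched.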


      EF Code First Migrations–Create Index with INCLUDE

      Wed, 16 Nov 2016 12:09:50 GMT

      Originally posted on:

      The problem:

      Using Entity Framework Code First migrations to create a complex index that defines "INCLUDE" columns as part of the index. Yeah, good luck doing that using Entity Framework's fluent notation via the LINQ ModelBuilder. Btw, the example below is an ASP.NET Core 1.0 implementation.

      The solution:

      Plug into the Code First migrations pipeline by simply defining your own custom sql.

      Create a “dummy” migration file and add the code below, which just uses the migration builder’s Sql method to run any custom SQL. You should have an empty Up and Down method unless you made other mapping changes, which is OK - just add the SQL statement below. Of course you need to rename the index and the fields you want to include in the index.

      To create migrations file via command line, make sure you are in the project folder that contains the EF DbContext.

      dotnet ef migrations add CreateIndexForInvoice -c

      It will create the empty migrations file, now add your custom index as in the example below. Note that if you already have an index with that name you will not be able to apply the migration successfully to the DB. A safer implementation would be to first check if the index already exists inside the SQL statement.

      public partial class CreateIndexForInvoice : Migration
      {
          protected override void Up(MigrationBuilder migrationBuilder)
          {
              migrationBuilder.Sql(
                  @"CREATE NONCLUSTERED INDEX [IX_Invoice_DateTime]
                    ON [dbo].[Invoice] ([DateTime])
                    INCLUDE ([Id],[Number])");
          }

          protected override void Down(MigrationBuilder migrationBuilder)
          {
          }
      }


      (image) (image)

      How to enumerate an ENUM (C#)

      Tue, 15 Nov 2016 15:18:45 GMT

      Originally posted on:


      You have a C# enum and for some reason you would like to enumerate it. Perhaps you would like to return the enumerated list to the client’s AJAX call rather than querying the DB table that represents the enum. For example, you may have invoice statuses and want to build an anonymous list.


      C# class:

      public enum InvoiceStatus
      {
          [Display(Name = "OPEN")]
          Open = 1,
          [Display(Name = "CANCELLED")]
          Cancel = 2,
          [Display(Name = "POSTED")]
          Posted = 3,
          [Display(Name = "VOIDED")]
          Void = 4,
          [Display(Name = "SUBMITTED")]
          Submitted = 5
      }

      MVC Controller:



      public IList GetInvoiceStatuses()
      {
          var list = new List<object>();

          foreach (var status in Enum.GetValues(typeof(InvoiceStatus)))
          {
              list.Add(new
              {
                  Id = (int)status,
                  Description = status.ToString()
              });
          }

          return list;
      }









      WinForms and MVP: Making a testable application

      Tue, 15 Nov 2016 08:35:55 GMT

      Originally posted on:

      With modern frameworks available that were built with loose coupling and separation of concerns in mind, working in WinForms may seem like a testability wasteland. But there are times when the options of WPF with MVVM or MVC on the web are not available, and you’re stuck with WinForms. But fear not, the Model-View-Presenter pattern is here to save the day!

      If you are not familiar with the MVP pattern, the idea is that you have a presenter, which handles all the interactions between the user and the model. The presenter “talks” to the UI through an abstraction. There are a couple of ways of doing the MVP pattern. I’m going to explore the Passive View style, meaning all logic is taken out of the view and moved into the presenter. I’m not going to go into the differences between the styles, but if you’re interested, Martin Fowler has in-depth descriptions of the Passive View and Supervising Controller patterns, which are both derivatives of MVP.

      I’m going to do a walkthrough for creating a simple MVP application. We need a premise for our demo. I’ll go with something simple: an application that organizes notes. Here a note will be a bit of textual data with a title. Let’s implement it.

      The Model

      We need the note and title, as well as created and last-edited dates for tracking. We’ll want to validate our model; let’s make sure we actually have a note, and I’m going to use files for storage, so we need to make sure we can use our title as a filename:

      public class NoteCard
      {
          [Required]
          [RegularExpression(@"^[_0-9a-zA-Z\s\-_]+$",
              ErrorMessage = "Title cannot contain special characters.")]
          public string Title { get; set; }

          [Required]
          public string NoteText { get; set; }

          public DateTime? CreateDate { get; set; }

          public DateTime? EditDate { get; set; }
      }

      I also created a repository that will handle the persistence:

      public interface INoteCardRepository
      {
          IEnumerable NoteCards { get; }

          NoteCard GetNote(string title);
          bool NoteExists(string title);
          void Save(NoteCard note);
          void Delete(NoteCard note);
      }

      So, we have a nice way to represent our notes; now we need a way to display them.

      The View

      Our main form has a file menu, a status bar for displaying the number of notes, a list of notes, and a display/edit area for a note. Keep in mind, this form IS our view. We need to abstract away our view by creating an interface that defines the capabilities of our UI:

      public interface IMainView
      {
          event EventHandler NewNote;
          event EventHandler NoteSelected;
          event EventHandler SaveNote;
          event EventHandler DeleteNote;

          string SelectedNote { get; set; }
          string StatusText { get; set; }
          string Title { get; set; }

          void DisplayMessage(string title, string message);
          void ClearSelectedNote();
          void LoadNotes(IEnumerable notes);
          void LoadNote(NoteCard note);
          NoteCard GetNote();
      }

      IMainView is a contract that defines the UI; we need to make sure our form fulfills that contract. Since we’re using the Passive View style, the implementation will be very anemic. The only job of MainForm will be translating messages into UI actions (databinding, manipulating controls, etc.) and notifying interested parties (the presenter) of user interaction:

      public partial class MainForm : Form, IMainView

      The Presenter

      Our presenter is going to be doing all the real work, so it will need access to the view and the repository. Since our goal here is loose coupling, we’re going to inject those dependencies via the constructor:

      public MainPresenter(I[...]
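The wiring the post is building toward can be illustrated with a small sketch - JavaScript here rather than the post's C#, and every name below is invented for illustration: the presenter is handed an abstract view and a repository, subscribes to view events, and keeps every decision out of the form.

```javascript
// Illustrative Passive View wiring (names invented; the post's real
// code is C#): the presenter reacts to view events and drives the view.
function createMainPresenter(view, repository) {
  view.onSaveNote(() => {
    const note = view.getNote();           // pull data from the abstract view
    repository.save(note);                 // all logic lives in the presenter
    view.setStatusText(`${repository.count()} notes`);
  });
}
```

Because the presenter only sees the view abstraction, a fake view and repository are enough to test it - which is the whole point of MVP: no WinForms required.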

      Aurelia Links

      Mon, 14 Nov 2016 10:25:11 GMT

      Originally posted on:

      I’ve been following AureliaJs for quite a while now and I’ve collected a lot of links. Here they are, in no particular order, for you to enjoy. Please follow me @alignedDev and check out my articles on Gooroo.

      • My Aurelia Code for a presentation
      • The Aurelia JavaScript Framework with Rob Eisenberg - Hanselminutes
      • Discover Aurelia with CEO Rob Eisenberg - YouTube
      • JSChannel 2015 - Angular 2.0 Vs Aurelia
      • "Aurelia provides the developer the ability to use whichever data-binding libraries you wish, including but not limited to the default aurelia-binding, handlebars, knockout, etc..."
      • Aurelia CLI
      • Aurelia Docs
      • StackOverflow Documentation on Aurelia
      • A comparison of a bunch of frameworks
      • DotNetRocks with Steve Sanderson on JavaScriptServices - Rob Eisenberg added many comments related to Aurelia for this talk.

      [...]

      ProcessAdd bug in AMO 2016

      Mon, 14 Nov 2016 02:26:15 GMT

      Originally posted on:

      I saw the question below on the MSDN forum about processAdd not working in AMO 2016 and I thought it sounded strange so I did some investigation:

      When I ran Redgate Reflector over the Microsoft.AnalysisServices.Core.dll I came across this little gem:


      Where it checks if the object is of type IQueryBinding from the Microsoft.AnalysisServices.Core namespace - and as far as I can see, nothing currently implements this interface. What this means is that if you pass any of the built-in binding classes to the Process method, it will always throw a NotImplementedException.

      I've posted a bug here and apparently it's a known issue and a fix is already in the pipeline. However there is also a relatively simple workaround which involves creating a class which implements IQueryBinding in your own project.

      The IQueryBinding interface is thankfully not that complicated and a full implementation is outlined below:

      public class MyBinding : Microsoft.AnalysisServices.Core.IQueryBinding
      {
          public MyBinding(string dataSourceID, string queryDefinition)
          {
              DataSourceID = dataSourceID;
              QueryDefinition = queryDefinition;
          }

          public string DataSourceID { get; set; }
          public string QueryDefinition { get; set; }
          public ISite Site { get; set; }

          public event EventHandler Disposed;

          public void Dispose() { }
      }

      You can then simply create a MyBinding instance and pass it in to the Process method for your partition:

      var qb = new MyBinding("Adventure Works DW", "SELECT * FROM table");
      partition.Process(ProcessType.ProcessAdd, qb);


      Powered By WordPress

      Thu, 10 Nov 2016 10:26:33 GMT

      Originally posted on:


      For a number of reasons, it was decided to migrate the blog content from this site to a new platform. For earlier and newer blog entries, please find this blog located at

      Application Insights is not showing browser exceptions when window.onerror is set

      Thu, 10 Nov 2016 10:09:08 GMT

      Originally posted on:

      We are using Application Insights from Azure to gain insight into our production database. We noticed that the Browsers tab was not showing browser exceptions. This should be working by default, as discussed in the Application Insights for web pages walkthrough, but ours was empty. I was certain we didn’t have perfect JavaScript code, so I dug deeper.

      First I made a quick File > New Project in Visual Studio, checked "use Application Insights", added a throw new Error('bad dev!'), published it to Azure, and had a working test in a few minutes. I really enjoy how quick it is to spin that up with Azure! The exceptions started streaming in. However, once I realized we were capturing window.onerror so that we could log through our API, I added this to the test, published again, and they stopped. To get both custom logging and App Insights logging, you’ll need to add some code. Here is my TypeScript code example (I apologize if you’re only using JavaScript and you’ll have to pretend logger exists).

      You can get the App Insights d.ts from npm install @types/applicationInsights --save

      import {ILogger} from './logger';
      import {AppInsightsTelemetryClient} from './appInsightsTelemetryClient';

      export class Shell {
          constructor(private logger: ILogger,
                      telemetryClient: AppInsightsTelemetryClient) {
              this.setupGlobalErrorHandler(telemetryClient);
          }

          private setupGlobalErrorHandler(telemetryClient: AppInsightsTelemetryClient) {
              // log all uncaught exceptions and log to the db
              const handleGlobalErrors = (message: string, url: string,
                                          lineNumber: number, colno: number,
                                          error: Error) => {
                  const errorInfo: any[] = [];
                  errorInfo.push(`message: ${message}`);
                  errorInfo.push(`url: ${url}`);
                  errorInfo.push(`location: ${window.location}`);
                  errorInfo.push(`lineNumber: ${lineNumber}`);
                  errorInfo.push(`col no: ${colno}`);
                  errorInfo.push(`stack: ${error.stack}`);
                  errorInfo.push(`browser info: ${navigator.userAgent}|${navigator.vendor}|${navigator.platform}`);

                  this.logger.logError('Unhandled JavaScript Exception',
                      errorInfo.join('\n'), 'window.onerror');
                  telemetryClient.trackException(error);
              };

              window.onerror = handleGlobalErrors;
          }
      }

      // simplified version of our wrapping code, put in a different file
      // named appInsightsTelemetryClient
      export class AppInsightsTelemetryClient {
          protected appInsights: Microsoft.ApplicationInsights.AppInsights;

          constructor(appInsights: Microsoft.ApplicationInsights.AppInsights) {
              this.appInsights = appInsights;
          }

          public trackEvent(name: string, properties?: any, measurements?: any): void {
              this.appInsights.trackEvent(name, properties, measurements);
          }
      }

      After getting it working, I’m considering removing our custom logging, but we’ll see. I hope this helps! [...]
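A related trick, sketched generically here (this is not the post's code, and `chainOnError` and `win` are illustrative names): instead of replacing whatever handler is already on window.onerror, chain it, so an existing handler such as the one Application Insights registers keeps running alongside your own.

```javascript
// Chain a custom handler with any previously-registered window.onerror
// (for example, one a telemetry library installed), so both still run.
// `win` stands in for the browser's window object to keep this testable.
function chainOnError(win, customHandler) {
  const previous = win.onerror;
  win.onerror = function (message, url, line, col, error) {
    if (typeof previous === 'function') {
      previous(message, url, line, col, error); // keep the existing telemetry
    }
    return customHandler(message, url, line, col, error);
  };
}
```

In a browser you would call it as `chainOnError(window, myHandler)` before any code can throw.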

      Some Git help

      Tue, 08 Nov 2016 13:29:28 GMT

      Originally posted on:

      I shared some information at a Lunch and Learn along with a demo and now I’m sharing it with you.

      "branching as a tool vs branching as a strategy" ~ Scott
      a branch is a playground

      new task = new branch, use Pull Requests to get the code into master
      - or allow direct check-in to master (for 1 or 2 person teams?)
      checkout a branch with git checkout myBranch

      from VS UI
      VS Code UI

      #taskNumber in comment will link up to a TFS task.

      Pull Request
      - gated
      - code reviews

      Command Line
      get latest = git pull
      check in = git add -A (stage), git commit -m "commit message" (repeat), git push
      git checkout is switching the branch

      git checkout master
      git pull

      git checkout myBranch
      git rebase master
      git push --force // VS doesn't do this

      git diff ..master karma.conf (compare your branched code against master)

      rebase or merge?

      get away from VIM:   



      XKCD is relevant here:



      Entity Framework DbSet Attach() vs Add()

      Mon, 07 Nov 2016 14:41:09 GMT

      Originally posted on:


      Two options are available in EF to let it know an object exists, “Attach” and “Add”, and it's not always clear when to use which.


      “Add”, as the verb suggests, is for adding new objects to the EF context so they are created/inserted in the database when calling “SaveChanges”. Without “Add” they float around as orphans, as it were. The goal of “Add” is NOT to tell EF about an object that already exists in the DB. That is what “Attach” is used for: “Hey EF, you didn’t retrieve this object from the DB, you lazy bum, but I am going to let you know that this object now exists, as I programmatically instantiated it in custom code”.

      So why on earth would you manually/programmatically instantiate an object that you could just as well have retrieved from the DB? Because it’s cheaper, and if you are like my wife you’re going to love saving money. (If you’re the government, you will make this pattern illegal.) I quite often use enums for the keys of reference tables, like 1 = Male and 2 = Female. For example, I have two Gender classes: one is part of the EF DbSet and the other is just a plain old enum. Instead of making a DB call to get the object when I need it as a navigation property on another object (like a Person class that has a navigation property of Gender), when I create a new Person object I don’t have to retrieve the Gender set from the DB; all I have to do is new up a “Gender” and assign enum value 1 to the ID to indicate “Male”. Attach the new Gender object to the context with _dbSet.Attach(gender), then assign it to the Person object’s Gender property. EF will not try to insert it in the DB, and it sets the FK on the owning class (Person) relationship. Nice!

var male = new Gender();
male.ID = (int)Enums.Gender.Male;
_dbSet.Attach(male); // tell EF this Gender already exists in the DB
var person = new Person();
person.Gender = male;
_dbSet.Add(person); // insert Person into the DB on SaveChanges
_context.SaveChanges(); // inserts the person but leaves the Gender table untouched


      Remember, behind the scenes when EF retrieves an object from the DB it calls “Attach” on its own context.

      Cause-effect of SaveChanges:

      - Attach: updates DB if DetectChanges in ChangeTracker observes a modification to the object.

      - Add: insert into DB
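The Attach/Add distinction boils down to the entity state the change tracker records. Here is a toy model of that bookkeeping (plain Python, purely to illustrate the states; this is not EF code and the class and method names are made up):

```python
# Toy change tracker: Add marks an entity Added (-> INSERT on SaveChanges),
# Attach marks it Unchanged (-> no statement unless it is later modified).

class Context:
    def __init__(self):
        self.entries = []     # [entity, state] pairs
        self.statements = []  # what SaveChanges would emit

    def add(self, entity):
        self.entries.append([entity, "Added"])

    def attach(self, entity):
        self.entries.append([entity, "Unchanged"])

    def mark_modified(self, entity):
        for entry in self.entries:
            if entry[0] is entity:
                entry[1] = "Modified"

    def save_changes(self):
        for entity, state in self.entries:
            if state == "Added":
                self.statements.append(f"INSERT {entity}")
            elif state == "Modified":
                self.statements.append(f"UPDATE {entity}")
            # Unchanged entities produce no SQL

ctx = Context()
ctx.attach("Gender(Male)")  # already exists in the DB -> no SQL
ctx.add("Person(John)")     # new row -> INSERT
ctx.save_changes()
```

Running the model emits only the INSERT for the person, mirroring how EF leaves the attached Gender row alone.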


      Accessing SalesForce Data Using SQL with Enzo Unified

      Mon, 07 Nov 2016 10:34:05 GMT

Originally posted on:

In this article I will show you how to access SalesForce data to simplify data access and analysis, either in real-time or through a local cache, using SQL commands through the Enzo Unified data virtualization server. Most customers using SalesForce will need to access their data stored in SalesForce tables to run custom reports, or display the information in custom applications. For example, customers may want to build a custom monitoring tool to be notified when certain data changes are detected, while other customers want to keep a local copy of the SalesForce data for performance reasons.

High Level Overview

You may need to access SalesForce data for a variety of reasons: reports, data synchronization, real-time data access, business orchestrations... Most organizations facing the need to extract SalesForce data are looking for a way to get real-time data access into SalesForce, or access a local copy of the data for faster processing. Enzo Unified is a data virtualization server that gives you the ability to access SalesForce data in real-time, or automatically cached locally, either through SQL or REST commands. Enzo Unified offers three data access and synchronization methods:

- Virtual Table: A virtual table provides a view into a SalesForce object. The virtual table doesn't actually store any data; it provides a simple way to get to the data through REST and/or native SQL requests.

- Snapshot: A snapshot is created on top of a Virtual Table. Snapshots create a local copy of the underlying SalesForce data and make the data available to both REST and SQL requests. Snapshots buffer the data locally inside Enzo Unified, which provides a significant performance boost when querying data. Snapshots can be configured to be refreshed periodically.

- Integration: When SalesForce data needs to be copied to an external system, such as SQL Server or SharePoint, Enzo Unified provides an Integration adapter that is designed to copy data changes to a destination. For example, with the Integration adapter, Enzo Unified can detect changes to an Account table in SalesForce and replicate changes made to that table in near real-time to a SharePoint list. The Integration adapter will be covered in a future article.

For a broader overview of Enzo Unified, read this whitepaper. For a brief justification for this technology, see this article.

SalesForce Virtual Table

Let's first create a Virtual Table in our SalesForce adapter; the virtual table is called Account1, and points to the Account table in SalesForce. Note that the adapter is already configured with my SalesForce credentials. SalesForce credentials are assigned to an Enzo login; in the example below, the credentials to connect to my SalesForce environment are attached to the 'sa' account in Enzo. Because multiple configuration settings can be saved, they are named; I am showing you the 'bscdev' configuration, and the (*) next to it means that it's the default setting for this login.

To create a virtual table, let's select the Virtual Tables tab. A virtual table in SalesForce is defined by a SOQL definition, which Enzo Unified runs behind the scenes to fetch the data. Clicking on the ellipses next to the SOQL statement allows you to edit the command. The edit window allows you to test your SOQL command. The columns of the virtual table are automatica[...]

      SQL Server Statistics Parser

      Fri, 04 Nov 2016 08:38:31 GMT

Originally posted on:

Working as a Data Engineer, one of my key roles is query and server optimization, and one of the weird things we see is this nerdy data. Of course, it does not show up automatically unless you turn these two settings ON:

SET STATISTICS IO ON
SET STATISTICS TIME ON

Whole Query Sample

The data is pretty readable and understandable but can still be improved, hopefully soon by Microsoft via SSMS (SQL Server Management Studio). Luckily Richie Rump created a site that lets us paste our data and parse it, so if we want to compare before/after stats from queries that we are optimizing, the improvements become much more READABLE. And if you are wondering whether the site collects that data, including the table names: NO, the site does not collect anything, and the author is kind enough to share his source code on GitHub in case you want to set up your own under your web server. [...]

My Best Player Isn't My Hardest Worker

      Thu, 03 Nov 2016 01:59:20 GMT

Originally posted on:

At some point every athletic coach will have a player that has the mindset to do whatever is necessary to disrupt any team cohesion that you are trying to create. You have basic team standards that you expect everyone to follow and adhere to, but this one player will always do the minimum or assume those standards are for the other people on the team. The player comes to practices, games, and meetings late, is the last one on the floor, and is the first to leave when the practice is over. He walks to and from timeouts, and when things get hard and the team faces adversity he is the one that starts blaming and pointing fingers at the rest of the team. Nothing is ever his fault, and the coaches never do enough to cater to his ego and vision of himself. Unfortunately, as disruptive as he is to the development of your program and to the other members of your team, he happens to be your best player.

Some version of this scenario happens on most athletic teams every year, but it also happens in our IT development and business teams. A team member with exceptional skills, knowledge, and tenure happens to be our worst teammate, and because of his prominence on the team, he is also our worst leader. As agile coaches we have the task of working with people that have been grouped into teams; each team is unique and is comprised of team members that come from a variety of backgrounds and skills, and all of them happen to have an agenda. It is our hope that their agenda will be centered on the business, the product being developed, and a unity with the group to adhere to the principles that best accomplish the business objectives.

At some point, you will have a team member that doesn't show up for meetings, is late to meetings, or is on his phone during meetings, delivers sub-standard work, talks poorly of leaders or other members of the team, has no adherence to company or team standards, or hoards information, making it difficult for other people to efficiently do or complete their work.

Below are four things to consider when your best player/employee isn't your best worker/leader.

1. Ownership

As the coach or leader of this group you first have to take complete ownership of everything that is happening within it. The situation where you have a player being lazy, disloyal, disruptive, and unproductive should initially be on you as the leader. I'm not suggesting that you run to your team and this person and apologize for the behavior of this one person. When you reflect on this issue, you need to first ask yourself: what did I do to allow this to happen? What did I do to allow it to get this far? What did I do to build five great team members and one bad one? What have I allowed to slip that would have caught this earlier? Was the employee brought onto your team with these traits and challenges? If they had these characteristics from day one, you need to evaluate your recruiting/hiring strategies. If the team member developed those characteristics while on your team, you have to look at your team vision, team communication strategies, and coaching philosophies to determine where along the line you allowed this to seep into your team. In the book 'Extreme Ownership' by Jocko Willink and Leif Babin they spend an entire chapter discussing that there "are no bad teams, only bad leaders." Below are some quotes from that book and chapter. [...]

      Restore SQL Database–Restoring security user settings

      Wed, 02 Nov 2016 06:04:52 GMT

Originally posted on:

The issue: I just restored a database backup file from somewhere. The users had a problem in a production environment and I want access to the same data to debug. The user (service account) created to access the DB is "Developer". Why am I telling you this? Well, context can be useful sometimes. So the restore is successful, yippee. I run my web app and it blows up with SQL login failed exceptions. Fail! I read the exception carefully, and oh, it mentions "Developer" failed to log in. Aha, that should be a hint. I need to remap my current "Developer" service account on my local box (or wherever you restored your DB) to give it access rights to my DB.

How to fix it: Expand the Security tab in SQL Server Management Studio. Right-click and select Properties. Note the DB I am pointing to with the green arrow (if you're color-blind just look for the arrows), it's the one I just restored. In the screen-shot below you will land on the General tab. Click/select the User Mapping option. In the User Mapping window, it lists all the attached databases. The DB with the green arrow is not selected, so go ahead and check it as in the example below. Don't forget, the row with the DB you just restored must be checked, but also make sure the row is highlighted, then look at the Database Roles section below. Make sure db_owner is checked (or whatever access privileges you desire). Click OK to save the changes and you're set!

If you would prefer a script for restoring the user role to the database rather than using the wizard:

USE [HydroChemFieldSystem_00031LCA]
GO
CREATE USER [Developer] FOR LOGIN [Developer]
GO
USE [HydroChemFieldSystem_00031LCA]
GO
ALTER ROLE [db_owner] ADD MEMBER [Developer]
GO

If you want a script to restore the database without having to use the wizard, use the script below. Of course you will need to tweak the DB name "HydroChemFieldSystem" to suit your DB name and also the physical location from where the .bak file is restored.

USE [master]
BACKUP LOG [HydroChemFieldSystem_00031LCA] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQLEXPRESS\MSSQL\Backup\HydroChemFieldSystem_00031LCA.bak'
WITH NOFORMAT, NOINIT, NAME = N'HydroChemFieldSystem_00031LCA', NOSKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 5
RESTORE DATABASE [HydroChemFieldSystem_00031LCA] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQLEXPRESS\MSSQL\Backup\HydroChemFieldSystem.bak'
WITH FILE = 1,
MOVE N'HydroChemFieldSystem' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQLEXPRESS\MSSQL\DATA\HydroChemFieldSystem_00031LCA.mdf',
MOVE N'HydroChemFieldSystem_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQLEXPRESS\MSSQL\DATA\HydroChemFieldSystem_00031LCA_log.ldf',
NOUNLOAD, REPLACE, STATS = 5
GO
[...]

      Extreme Ownership : Prioritize & Execute

      Mon, 31 Oct 2016 03:14:19 GMT

Originally posted on:

Notes from 'Extreme Ownership' by Jocko Willink & Leif Babin

Prioritize and Execute

- How could we possibly tackle so many problems at once? Prioritize and execute - even the greatest of battlefield leaders could not handle an array of challenges simultaneously without being overwhelmed.
- Determine the greatest priority for the team. Then rapidly direct the team to attack that priority - the full resources of the team were engaged in that highest priority effort. I could then determine the next priority, focus the team's efforts there, and then move on to the next priority.
- I could not allow myself to be overwhelmed. That training was designed to overwhelm us - to push us far outside our comfort zone, and force us to make critical decisions under pressure.

Principle

- A leader must remain calm and make the best decisions possible. SEAL combat leaders utilize Prioritize and Execute - we verbalize this principle with this direction: 'Relax, look around and make a call'.
- Even the most competent of leaders can be overwhelmed if they try to tackle multiple problems or a number of tasks simultaneously. Leaders must determine the highest priority task, and execute. When overwhelmed, fall back on this principle: prioritize and execute.
- A particularly effective means to help prioritize and execute under pressure is to stay at least a step or two ahead of real-time problems. Leaders at the top of the organization must 'pull themselves off the firing lines', step back, and maintain the strategic picture. This is essential to help correctly prioritize for the team.
- Just as in combat, priorities can rapidly shift and change. When this happens, communication of that shift to the rest of the team, both up and down the chain of command, is critical.

To implement prioritize and execute - in any business, team, or organization - a leader must:

- Evaluate the highest priority.
- Lay out in simple, clear and concise terms the highest priority effort for the team.
- Develop and determine a solution.
- Direct the execution of that solution - focusing all efforts and resources toward this priority task.
- Move on to the next highest priority.
- When priorities shift within the team, pass situational awareness both up and down the chain.
- Don't let the focus on one priority cause target fixation - see other problems developing.

Application to Business

- The CEO went into great detail through a multitude of very impressive sounding plans.
- Have you ever heard the term 'decisively engaged'? It is a term used to describe a battle in which a unit locked in a tough combat situation cannot maneuver or extricate themselves - in other words, they cannot retreat, they must win.
- Of all the initiatives, which one do you feel is the most important, which one is the highest priority?
- With all that you have planned, do you think your team is clear that this is your highest priority?
- With all your other efforts - all your other focuses - how much actual attention is being given to ensuring your fron[...]

      Extreme Ownership : Simple

      Mon, 31 Oct 2016 02:51:25 GMT

Originally posted on:

Notes from 'Extreme Ownership' by Jocko Willink & Leif Babin

Simple

- I appreciate your motivation to get out there and get after it - but perhaps for these first few patrols we need to simplify this a bit.
- Simplify? It's just a patrol, how complex can it get? There are some risks that can compound when working in an environment like this. Let's just keep this simple to start and we can expand as we get more experience.
- Just as he had been taught: simple, clear, concise information.

Principle

- Combat, just like anything in life, has inherent layers of complexities. Simplifying as much as possible is crucial to success.
- When plans and orders are too complicated, people may not understand them. When things go wrong, complexity compounds those issues and can spiral out of control into total disaster. Plans and orders must be communicated in a manner that is simple, clear and concise.
- As a leader, it doesn't matter how well you feel you have presented the information or communicated an order, plan, tactic or strategy - if your team doesn't get it, you have not kept things simple and you have failed.
- Team members must ask questions to clarify when they do not understand the mission or key task to be performed. Leaders must encourage this communication and take the time to explain so that every member of the team understands.
- It is critical to keep plans and communication simple.

Application to Business

- People weren't sure what they should be focused on. It was essential to keep things simple so that everyone on the team understood.
- That is an extremely complex plan - too complex. I think you need to simplify it. It doesn't matter if I understand it - what matters is that they understand it. They need to understand it to a point that they don't need to be thinking about it to understand it. Your plan is so complex that there is no way they can mindfully move in the direction that would increase their bonus.
- If there is not a strong enough correlation between the behavior and the reward or the punishment, then the behavior will never be modified. People need to see the connection between action and consequence in order to learn or react appropriately. People take the path of least resistance.
- Your plan violates one of the most important principles we adhered to in combat - simplicity. Our standard operating procedures were always kept as simple as possible, and our communication plans were as simple and direct as possible. With all this simplicity embedded in the way we worked, our troops clearly understood what they were doing and how that tied to the mission. That core understanding allowed us to adapt quickly without stumbling over ourselves. [...]

      Sketchnotes: Microsoft Windows 10 Creator Update Event

      Wed, 26 Oct 2016 05:15:03 GMT

      Originally posted on:

      On October 26, 2016 Microsoft had an event to show off the future of Windows 10 and some new hardware.  The following sketchnotes summarize the announcements from that event.






      My First Amazon Echo Skill

      Fri, 21 Oct 2016 15:49:16 GMT

      Originally posted on:

      So excited.   I've done it.   First Amazon Echo Skill, all up and running.   Will report back with full implementation details.   Super easy and really great result.


      DAX Studio 2.5.0 Release

      Wed, 19 Oct 2016 13:46:18 GMT

Originally posted on:

The next version of DAX Studio has just been released. You can download this release and read the release notes here.

Note: In this release we have updated the versions of the Microsoft ADOMD.Net and AMO libraries we reference to use the SQL 2016 versions. This gives us scope to access some of the new functionality in Power BI and SQL Server 2016, but may mean that you are prompted to download these when you upgrade from a previous version.

Some of the highlights of this release are:

New Features

- Added an option to trace Direct Query events. There is now an option under File > Options where you can enable extra events in Server Timings for Direct Query based models. These events add extra overhead, so you should only enable this option before you start tracing a Direct Query model and disable it once you are finished.

- Added dynamic syntax highlighting. Earlier versions of DAX Studio contained a hard-coded list of syntax highlighting keywords and functions. The lists of keywords and functions used for syntax highlighting are now dynamically discovered from the data source. This has advantages when dealing with Power BI in particular, which can get new functionality added from one month to the next.

- Added rows and KB to the Server Timings tab. Analysis Services 2016 and Power BI have added information to the server timing events that includes the number of rows and the size of data returned from each of the timing events. If this information is found, it is now surfaced in the Server Timings tab.

- Optimized DaxFormatter calls. The old version of the API required a second call if there was an error, to find out the details of the error. The nice people at DaxFormatter have updated their API so that this is no longer necessary.

- Added an option to specify the default separator style. In the 2.4 release we introduced an option where you could convert on demand between the two different separator styles, but all queries had to be executed using the UK/US style. The UK/US style is where a comma (,) is used as the list and thousands separator and the period (.) is used as the decimal separator, e.g.

EVALUATE FILTER( 'Product' , 'Product'[List Price] > 1.25 )

The European/Other style is where a semi-colon (;) is used as the list separator, the thousands separator is a period (.) and the comma (,) is used as the decimal separator, e.g.

EVALUATE FILTER( 'Product' ; 'Product'[List Price] > 1,25 )

Now you can choose which style you want to use as your default in the File > Options menu.

- Added an error message when you attempt to open a .dax file that no longer exists. Prior to this version, if you clicked on an entry in your recent file list which pointed to a file that had been renamed or deleted, you would just get a blank window with no idea what went wrong. Now an error is posted to the output window telling you what went wrong.

Bug Fixes

- Fixed a bug where server timing traces were also listening for query plan events.
- Fixed incorrect removal of square brackets from MDX results.
- Fixed a race condition that happened sometimes when trying to capture both Query Plans and Server Timings.
[...]
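The conversion between the two separator styles amounts to swapping the separator characters. A deliberately naive sketch (Python used purely as illustration - it ignores string literals, which a real tokenizer-based converter must handle):

```python
def uk_to_european(dax: str) -> str:
    """Convert UK/US separators to European/Other style:
    ',' (list separator) -> ';' and '.' (decimal separator) -> ','.
    A placeholder keeps the two swaps from colliding."""
    return dax.replace(",", "\x00").replace(".", ",").replace("\x00", ";")

query = "EVALUATE FILTER( 'Product' , 'Product'[List Price] > 1.25 )"
converted = uk_to_european(query)
```

Running this on the UK/US example above yields the European/Other form shown earlier.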

      TFS 2015 Build quick Issues and Fix

      Tue, 18 Oct 2016 19:34:06 GMT

Originally posted on:

TFS 2015 vNext build gives you the option of running a Visual Studio build, which will typically include these steps:

1) Get Source

This step is hidden, but as you would expect, TFS downloads source files from TFS based on your Repository settings for mappings to source. There is a chance it will complain about files that already exist:

Warning C:\Agent\_Work\4\s\Tests\project.xml - Unable to perform the get operation because the file already exists locally
Warning C:\Agent\_Work\4\s\Tests\project - Gateway - .xml - Unable to perform the get operation because the file already exists locally
One or more errors occurred while performing a get operation for workspace ws_4_9;Build\5a940eb8-a66f-4dc3-a742-ad23cee5dc01

The quick solution is to always clean the workspace before starting a new build.

2) NuGet Restore

If you are using NuGet restore in your build and you have a local repository for your NuGet packages, the TFS NuGet restore step will fail for any local NuGet package download:

2016-10-12T05:30:08.1016689Z Set workingFolder to default: C:\Agent\tasks\NuGetInstaller\0.1.18
2016-10-12T05:30:08.1173069Z Executing the powershell script: C:\Agent\tasks\NuGetInstaller\0.1.18\NuGetInstaller.ps1
2016-10-12T05:30:08.4766837Z C:\Agent\agent\worker\tools\NuGet.exe restore "C:\Agent\_Work\4\s\Services.sln" -NoCache -NonInteractive
2016-10-12T05:30:08.8204175Z MSBuild auto-detection: using msbuild version '14.0' from 'C:\Program Files (x86)\MSBuild\14.0\bin'.
2016-10-12T05:30:22.4923248Z ##[error]Unable to find version '2.0.5939.19993' of package 'FOO.Logging'.
2016-10-12T05:30:22.5860716Z ##[error]Unexpected exit code 1 returned from tool NuGet.exe

To avoid this issue, when we set up the build agent we should run it with a domain account which has access to this NuGet repository, and not with the default account NT Authority\Network Service.

3) Copy and Publish Build Artifacts

This step fails with "TF14044: Access Denied: User Project Collection Build Service needs the Admin Workspaces global permission(s).". To avoid this issue we need to install TFS 2015 Update 3.

4) TFS 2015 Build Clean-up

TFS 2015 build clean-up runs on a daily basis and not on a build count. I am not a fan of this logic because we have 4-5 projects running simultaneously and continuous builds will occupy lots of space. Plus, I like to retain some baseline release builds for a longer period. So I have created a quick PowerShell script for clean-up and scheduled it as a command line step in the TFS build. I am using 2 to keep the latest 2 builds and delete anything older than that. You can retain more builds by increasing that number.

Get-ChildItem '\\dev-exe-002\drop\*' -Directory | Sort-Object CreationTime -Descending | Select-Object -Skip 2 | Remove-Item -Recurse -Force

Add a command line step in your build and provide the parameter like [...]
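The keep-the-latest-N retention logic in the PowerShell one-liner can be sketched in Python as well (the drop path and function name here are hypothetical, for illustration only):

```python
import shutil
from pathlib import Path

def clean_drop_folder(drop_root: str, keep: int = 2) -> list:
    """Delete all build directories except the `keep` most recently created,
    mirroring: Get-ChildItem | Sort-Object CreationTime -Descending |
    Select-Object -Skip 2 | Remove-Item -Recurse -Force."""
    dirs = [d for d in Path(drop_root).iterdir() if d.is_dir()]
    # newest first, like Sort-Object CreationTime -Descending
    dirs.sort(key=lambda d: d.stat().st_ctime, reverse=True)
    removed = []
    for stale in dirs[keep:]:   # skip the newest `keep`, delete the rest
        shutil.rmtree(stale)
        removed.append(stale.name)
    return removed
```

Scheduling this as a build step gives the same count-based retention the daily clean-up lacks.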

      Central Pennsylvania Dot Net user group Code Camp 2016

      Mon, 17 Oct 2016 21:02:13 GMT

      Originally posted on:

Code Camp 2016 will be in a different venue from last year. This year we will be at Harrisburg University. We are extremely indebted to the wonderful people at Harrisburg University.

      Effective Chat

      Mon, 17 Oct 2016 15:44:25 GMT

Originally posted on:

I thought I'd write down some tips for using a chat application effectively on a development team. Chat applications can revolutionize how a development team works and even its culture. I'll go through some of the goals of implementing a chat application, some best practices, and pitfalls.

Goals

Obviously the goal of a chat application is to improve communication amongst the team. It is meant to remove "silos". Silos refers to an organizational structure where members of the same team do not communicate directly with each other. The image of a silo is a very tall structure with a very small opening (for communication) at the top or bottom. Developers aren't given good opportunities to communicate with each other except through their manager or what they deliver.

Silos are bad for development environments because different members of the team will often duplicate work. They aren't aware that others have already solved the same problem. Or worse yet, the work of different team members can be divergent, pulling the products and development strategies in different, conflicting directions.

The answer to this in the Agile and Scrum movements is "The Daily Stand-up". This is a brief meeting at the start of the day where everyone gets together to let the team know what they are working on. The typical structure of the meeting is to answer "3 questions": What did I do yesterday, what will I do today, and what impediments are getting in my way. Just answering these questions publicly to the team can be very effective in breaking down silos.

One of the problems with the Daily Stand-up is that it does take some coordination. Everyone has to get together at the same time and in the same physical space. The meeting can't start until everyone arrives. This can be problematic if different people are on different schedules. Remote developers, different time zones, client phone calls, and traffic delays can all interfere with this.

A chat application can be a very low-friction way to allow this meeting to happen with flexibility for people's schedules and locations. It also allows for updates to be posted multiple times a day. If someone runs into an impediment they don't have to wait until the next morning to seek the team's help.

The chat application is well sized as a communication medium for this problem. Phone calls or face-to-face meetings can be highly disruptive. Developers are forced to drop what they are doing immediately. It can take from 10 to 25 minutes or more to recover from the interruption. Email on the other hand can become too long-winded. It lacks the brevity that chat encourages. When email is used as chat, the large number of chat emails can drown out the important emails. Chat seems to be a happy medium between these extremes. It is brief, and it can be easily deferred and non-interruptive when desired.

Chat can also improve the interpersonal relationships on a team. Instead of the only team communication being at a biweekly manager-supervised sprint planning meeting, communication occ[...]

      HttpModule event execution order for multiple modules

      Mon, 17 Oct 2016 17:06:29 GMT

Originally posted on:

I did some tests to find out the event execution order for multiple HttpModules in ASP.NET. Here is what I found:

When there are multiple HttpModules, and the modules are registered in Web.config, all event handlers are executed in the SAME order in which the modules were registered, except PreSendRequestHeaders and PreSendRequestContent, which are executed in REVERSE order.

Here is an example. Suppose we have 3 modules (Module1, Module2, Module3) registered in Web.config in that order. For an event such as BeginRequest, Module1's BeginRequest handler is executed first, then Module2's, and then Module3's. This is true for all events except PreSendRequestHeaders and PreSendRequestContent, which run in the following order:

Module3's PreSendRequestContent, then PreSendRequestHeaders
Module2's PreSendRequestContent, then PreSendRequestHeaders
Module1's PreSendRequestContent, then PreSendRequestHeaders

So they are in REVERSE order, and PreSendRequestContent and PreSendRequestHeaders in the same module are executed one after the other before moving to the next module.

In terms of events in one module, this is the execution order:

BeginRequest
AuthenticateRequest
PostAuthenticateRequest
AuthorizeRequest
PostAuthorizeRequest
ResolveRequestCache
PostResolveRequestCache
MapRequestHandler
PostMapRequestHandler
AcquireRequestState
PostAcquireRequestState
PreRequestHandlerExecute
PostRequestHandlerExecute
ReleaseRequestState
PostReleaseRequestState
UpdateRequestCache
PostUpdateRequestCache
LogRequest
PostLogRequest
EndRequest
PreSendRequestContent
PreSendRequestHeaders
RequestCompleted

This result pretty much confirms what the MSDN documentation states.

Here is the full result from my tests: for every event from BeginRequest through EndRequest, the handlers fired as Module1, Module2, Module3. Then:

PreSendRequestContent: Module3
PreSendRequestHeaders: Module3
PreSendRequestContent: Module2
PreSendRequestHeaders: Module2
PreSendRequestContent: Module1
PreSendRequestHeaders: Module1
RequestCompleted: Module1, Module2, [...]
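The observed ordering can be modeled with a small sketch (Python used purely as illustration; this is a toy pipeline, not ASP.NET, and the module names are just labels):

```python
# Toy model of the observed ASP.NET pipeline ordering: every event fires
# handlers in module registration order, except the PreSendRequestContent/
# PreSendRequestHeaders pair, which fires per module in REVERSE order.

MODULES = ["Module1", "Module2", "Module3"]
NORMAL_EVENTS = ["BeginRequest", "AuthenticateRequest", "EndRequest"]  # abbreviated

def fire_pipeline(modules, events):
    log = []
    # normal events: registration order
    for event in events:
        for module in modules:
            log.append((event, module))
    # PreSend* pair: reverse registration order, Content then Headers per module
    for module in reversed(modules):
        log.append(("PreSendRequestContent", module))
        log.append(("PreSendRequestHeaders", module))
    # RequestCompleted returns to registration order
    for module in modules:
        log.append(("RequestCompleted", module))
    return log

log = fire_pipeline(MODULES, NORMAL_EVENTS)
```

Inspecting `log` reproduces the test results above: Module1 leads every normal event, while Module3 leads the PreSend* pair.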

      RabbitMQ vs MSMQ (High Level Differences)

      Mon, 17 Oct 2016 03:15:10 GMT

Originally posted on:

RabbitMQ                                   | MSMQ
Centralized queuing                        | Decentralized queuing
Multiplatform (Linux, Windows, Mac, etc.)  | Windows only
Standards-based (AMQP)                     | No standard

Centralized vs Decentralized: A message broker like RabbitMQ is a centralized message broker where messages are stored on a central or clustered server, and clients/subscribers publish and subscribe through this central server. MSMQ is decentralized and each machine has its own queue. A client can send messages to a particular queue and the subscriber can retrieve the message from that particular queue.

Multiplatform vs Windows only: RabbitMQ is a multiplatform message broker, so clients from any platform can read/write messages to/from RabbitMQ. It also has client libraries written in .NET, Java, Erlang, Ruby, Python, etc. Integration is easy. MSMQ is a Windows-only messaging system.

Standards vs No Standards: RabbitMQ follows a standard called AMQP (Advanced Message Queuing Protocol). If you have multiple platforms talking with each other, then RabbitMQ is the better option. MSMQ uses its own proprietary messaging format. If your use case is Windows machines talking with Windows machines, then MSMQ can suffice. [...]
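The centralized-versus-decentralized distinction can be sketched as a toy model (plain Python, nothing broker-specific; the class, machine, and queue names are made up for illustration):

```python
from collections import defaultdict, deque

class CentralBroker:
    """RabbitMQ-style: one broker process owns every named queue."""
    def __init__(self):
        self.queues = defaultdict(deque)
    def publish(self, queue_name, message):
        self.queues[queue_name].append(message)
    def consume(self, queue_name):
        return self.queues[queue_name].popleft()

class Machine:
    """MSMQ-style: each machine hosts its own local queues."""
    def __init__(self, name):
        self.name = name
        self.queues = defaultdict(deque)
    def send_to(self, other, queue_name, message):
        # the message lands in a queue owned by the *destination* machine
        other.queues[queue_name].append(message)
    def receive(self, queue_name):
        return self.queues[queue_name].popleft()

# Centralized: both parties talk to the broker.
broker = CentralBroker()
broker.publish("orders", "order-1")

# Decentralized: the sender addresses a queue on the receiver's machine.
server = Machine("app-server")
worker = Machine("worker-01")
server.send_to(worker, "jobs", "job-1")
```

In the centralized case the broker is the single point of storage (and of failure, unless clustered); in the decentralized case each machine stores its own messages, which is exactly the trade-off in the table above.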

      Extreme Ownership : Cover & Move

      Thu, 13 Oct 2016 02:47:18 GMT

      Originally posted on:

      Notes from 'Extreme Ownership' by Jocko Willink & Leif Babin

      Cover and Move

      I was so focused on our own squad's dilemma that I didn't think to coordinate with the other team, OP1, to work together. This was the first rule in Jocko's Laws of Combat: Cover and Move. We had operated independently, failing to support or help each other. It was foolish not to work together; we were all trying to accomplish the same mission. We should have utilized every strength and tactical advantage possible, and the most important tactical advantage we had was working together as a team, always supporting each other. I had become so immersed in the details, decision points, and immediate challenges of my own team that I had forgotten about the other team, what they could do for us, and how we might help them.

      Principle

      Cover and Move: put simply, cover and move means teamwork, mutually supporting one another for that singular purpose. Departments and groups within the team must break down silos, depend on each other, and understand who depends on them. Often, when smaller teams within the team get too focused on their immediate tasks, they forget about what others are doing or how they depend on the other teams. The focus must always be on how best to accomplish the mission. When the team succeeds, everyone within and supporting the team succeeds.

      Application to Business

      While he was right that they were a different company, both companies fell under the leadership of the same parent company. What you just called the worst part should be the best part: you are both owned by the same corporation, so you have the same mission, and that is what this is all about. The overall mission, the overall team. Not just your team, but the whole team, the entire corporation: all departments within your company must work together and support each other as one team. The enemy is all the other competing companies in your industry that are vying for your customers.

      You are all on the same team; you have to overcome the 'us versus them' mentality and work together, mutually supporting one another. The production manager must now be willing to take a step back and see how this production team's mission fits into the overall plan. It's about the bigger strategic mission. How can you help this subsidiary company do their job more effectively, so they can help you accomplish your mission and you can all win? Engage with them. Build a personal relationship with them. Explain to them what you need from them and why. Make them part of your team, not an excuse for your team. We depended on them and they depended on us, so we formed relationships with them and worked together to accomplish the overall mission. Work together to win. [...]