Benefits of customized .NET Application Development

Mon, 19 Mar 2018 22:03:57 GMT

Websites are among the most desired promotional tools for any brand or company that wants to reach a wider audience and increase its lead-generation potential. There are many options to choose from when it comes to developing a website, but custom .NET development services are an obvious choice. Companies are now aware that without a unique, innovative website a business can't flourish to its maximum and earn large profits, so there is more demand than ever for website development services. People across the world can reach your brand through a business web app that reflects your business vision and goals.

Why choose professional ASP.NET development services?

The topmost reason ASP.NET is made for you is that it helps develop scalable and versatile applications. ASP.NET is a server-side web framework owned by Microsoft, one of the most reliable industry giants, and the fact that the .NET framework comes from an industry leader raises the accountability factor considerably.

We can't emphasize enough that developing customized web portals is a breeze with the .NET platform. Further to this, publishing, editing, and modifying content on ASP.NET websites is a cakewalk if you hire .NET developers to create and manage your website.

.NET Application Development: Skill Sets and Prerequisites

  • WinForms (Windows Forms): GUI class library
  • .NET programming concepts
  • OOP (Object-Oriented Programming)
  • The .NET framework
  • WPF (Windows Presentation Foundation)

Diversified and Multi-level Web Apps with .NET Application Development

Since the arrival of the all-new Microsoft Visual Studio, there has been no looking back.
Web development companies are closely watching and adopting the newly launched tools to develop reliable, high-class web apps for their clients. This is the result of the constant improvement in the technologies and tools introduced by the industry leader, Microsoft.

Skill Sets to Look for in a .NET Developer

If you have already made the wise decision to go for .NET application development and get a top-notch web app developed, you may choose to approach a .NET developer. It can be an uphill task to choose the best developer or .NET development company, but if you make it a point to check the following points, you will rarely fail to get a good web app. The following guidelines are meant to make your task less tricky and worry-free:

1. The major programming languages used to create innovative and creative web apps are VB.NET, J#, and C#. A developer should be well versed in these languages. Before you invest in a developer or a company, check the developer's certifications.

2. Visual Studio is Microsoft's development environment, and the reliability and trust factor is high because it comes from an industry leader. A .NET developer should be well versed in Visual Studio; cutting-edge, robust web apps are possible with this tooling.

3. Experience is one of the most significant factors. The developer you hire for your web app development should be experienced enough, and this is not just a numbers game: experience brings the ability to handle unexpected situations in the workflow of a project, and creativity and customization also come with experience.

4. Another vital factor that a client should never ignore is the developer's portfolio. If you wish to hire a .NET development company, make sure you check the company's authentic portfolio.
You will come to know from the portfolio whether the developer is just beating around the bush or worth your investment.

Why invest in .NET development services?

  • If a[...]

Free Course: Hands on ASP.NET Core 2

Fri, 16 Mar 2018 12:57:44 GMT



Hands on ASP.NET Core 2

I've just finished up my latest course, Hands on ASP.NET Core 2, at Udemy!

To celebrate, I thought I'd give a free coupon to Geeks with Blogs readers!

Yes, that's 100% off: free, for a limited time.

Here's your free coupon code:

If you've ever been interested in learning .NET Core, this course goes in depth, showing you the ropes to learn this awesome new technology.

We cover:

  • Installation of .NET Core Framework and SDK (in Windows, Mac, Linux)
  • Configuring your Development Environment (in Windows, Mac, Linux)
  • Building Console Applications
  • Building ASP.NET MVC Applications
  • Building ASP.NET Web API Applications
  • Installing .NET Core on Windows and Linux Servers

And more! By the time you take this course, you'll be able to confidently build ASP.NET Core applications in Windows, Mac or Linux. You'll be able to scaffold out applications, and work with Entity Framework Core to get database connectivity.
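As a taste of the scaffolding workflow the course covers, creating and running a new MVC app from the .NET Core CLI looks roughly like this (the project name here is just an example):

```shell
# Scaffold a new ASP.NET Core MVC project into a folder named HelloMvc
dotnet new mvc -o HelloMvc
cd HelloMvc

# Restore packages, build, and start the app (serves on localhost by default)
dotnet run
```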

So check it out, and let me know what you think!


Which Distribution of Linux Should I Use?

Tue, 13 Mar 2018 11:10:49 GMT

I'm often asked this question: "hey, you're a Linux guy, right? What Linux should I use? I have this friend who recommends _____ and I want to know what you think?" I usually reply with the same question: what do you want to do? So I decided to write a blog post about it that I can send people to instead.

My History with Linux

I should probably preface this article with a little bit of my history with Linux, in case you're reading this and you don't know me (very likely). You can skip this if you don't care.

I started out using Linux around 1996. My first Linux was Slackware, a set of CDs I purchased at Egghead Software (yep, I'm old). A friend of mine told me about this Unix-like thing that was so great, that I just had to try, and that he thought I would love. I had read a lot about Unix and was very curious about it. I had a shell account at my internet provider and I'd tinkered around, yet at first I was a bit hesitant. "Why would I need this?" His reply was simply: "Because you hate Windows 95 so much and love DOS, you'll love this." So I bought it. He was right.

I took an old hard drive I had and installed it. I fought with it for hours, then days. I finally got a desktop up and running. I have no idea what drove me at the time, but I had to figure out how to make this system work, and it was difficult. I had to know so much about my hardware! Simple things were suddenly hard again. But I pushed through, and I got my desktop up, and I started building some silly scripts for fun. The system was fast, and I could change nearly everything about it. It had a built-in C compiler? I had just bought some really expensive Borland package I could barely figure out, but this OS had a compiler built in? A free image editor? I was hooked! For years after that, I experimented with tons of distributions, even BSD Unix ones. My "main computer" was always a dual boot, and some of them were pure Linux.
For most of the early 2000s, I avoided Windows completely. By year, my "main machine" breaks down like this:

  • 1996-1999: Slackware
  • 1999-2002: Redhat (and FreeBSD)
  • 2003-2005: FreeBSD / Knoppix
  • 2005-2009: Gentoo
  • 2009-2011: Linux Mint
  • 2011-2018: Arch Linux / Debian

Of course I've used probably 50 or more distributions in my time, but this is what was running on the "main machine" I used for work, browsing, development, or whatever. Arch has had the longest run so far, mainly because I could configure it and forget about it for long periods of time. But the main distro for my "real work" the last few years has been Debian. Enough about me though; let's talk about what you should use.

So What Do You Want to Do?

I'm going to put these in categories based on common needs. There is some overlap here, and with enough work any of these Linux distributions will work for your desired needs. One of the great things about Linux is that you can make it whatever you want, but some distributions do a lot of that work for you, or have a design that works better toward certain goals. I'll present these in categories based on the easiest path to reach your goals.

I'm a Linux Newbie Just Getting Started

For a long time I recommended Ubuntu for this. As far as ease of use and compatibility, it was great. But I pretty much hate Ubuntu now. I still use it for demos in my courses and articles because so many people use it, but I am not a fan of the way they run the distribution, the built-in Amazon adware, or Unity. So if you're just starting out, I recommend:

  • Linux Mint
  • Debian

It's kind of a cheat because Linux Mint is built off Debian, but Mint looks prettier and has some nice cross-platform touches. Use these distributions if you want:

  • A Windows-like experience
  • Something simple to install
  • Something reliable
  • Something "Linux-like" that doesn't deviate from the norm
  • Something that "just work[...]

How to open URL in browser in C# ?

Sat, 10 Mar 2018 05:00:04 GMT


In C#, we sometimes need to open a URL in the browser; the following code can be used to do that.
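A minimal sketch of one common approach (the helper names here are my own): hand the URL to `Process.Start`. On .NET Core, `UseShellExecute` defaults to `false`, so it must be set explicitly for the OS to resolve the default browser.

```csharp
using System;
using System.Diagnostics;

static class BrowserLauncher
{
    // Prepend a scheme when the caller passes a bare host like "example.com",
    // since shell execution needs a full URI to pick the browser handler.
    public static string EnsureScheme(string url) =>
        url.StartsWith("http://", StringComparison.OrdinalIgnoreCase) ||
        url.StartsWith("https://", StringComparison.OrdinalIgnoreCase)
            ? url
            : "https://" + url;

    public static void OpenUrl(string url)
    {
        // UseShellExecute = true lets the OS pick the default browser.
        Process.Start(new ProcessStartInfo
        {
            FileName = EnsureScheme(url),
            UseShellExecute = true
        });
    }
}
```

Usage is simply `BrowserLauncher.OpenUrl("example.com");`.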

SQL Server: How do I pull the ASCII value for each character in a column name?

Tue, 27 Feb 2018 16:22:45 GMT


This is a handy script to cycle through every character in a column value to determine each character's ASCII value.  It's especially useful when a string match isn't matching: often there is a hidden space or other invisible character.

 DECLARE @counter int = 1;
--DECLARE @asciiString varchar(10) = 'AA%#&    ';
 DECLARE @asciiString varchar(100);
 SELECT @asciiString = [ColumnName]
   FROM schema.TableName
  WHERE ColumnName LIKE '%Something%';

-- BEGIN/END is required here: without it only the SELECT repeats
-- and @counter is never incremented, so the loop never ends.
WHILE @counter <= DATALENGTH(@asciiString)
BEGIN
   SELECT CHAR(ASCII(SUBSTRING(@asciiString, @counter, 1))) AS [Character],
          ASCII(SUBSTRING(@asciiString, @counter, 1)) AS [ASCIIValue];
   SET @counter = @counter + 1;
END

Common Code Smells

Tue, 27 Feb 2018 13:35:22 GMT

You may have heard the term "code smells" lately; it seems it's being talked about frequently again. In this short post I'll explain what they are, and a few of them you may run across.

What is a Code Smell?

A code smell is just a fancy term for an indicator of a bigger problem with your code. It's language agnostic, because you can have code smells in any application. It's simply a sign of bad construction that you can spot fairly quickly. The biggest problem with code smells is not that programmers are ignorant of them; it's that they choose to ignore them. They'll jump into someone's code, or their own, see the problems, and make the application work with the intention of fixing it later. This rarely happens.

Checking for code smells

Generally you find code smells when examining code or doing refactoring. Small-cycle refactoring is something you should be doing quite frequently. Often you can take a small method and make it a little better, build some better tests, and make it more solid. Eventually you'll get through enough of the application to make it solid, or decide on a full rewrite. The best ways to find code smells are:

  • Casual code inspection
  • Refactoring
  • Heuristic analysis
  • Tools such as PMD, CheckStyle, or ReSharper

Common code smells

So what can you expect to find that might indicate a bigger problem? The list is very long and depends on how deeply you choose to inspect your software, but here are some very common ones:

Repetition - Easily one of the most common. Do you have sections of code repeated all over the place? This is a sure sign of amateur work and deserves a deeper look. When refactoring repeated code, you have to search for all instances of that code to get the results you want. While abstracting it into a method is a good fix, it doesn't mean you're in the clear.

Needless Complexity - This is somewhat subjective, but most of the time you know it when you see it.
The code is more complex than it needs to be to solve a given problem and is difficult to comprehend. Needless complexity stems from only a few things:

  • A beginning programmer making copy-and-pasted code work to solve a problem.
  • A show-off who enjoys creating complex solutions to simple problems in order to appear smarter.
  • A programmer seeking job security: if they are the only one who understands it, you can't get rid of them (or so they think).
  • A programmer who has inherited some nasty code and was forced to build in place (arguably the worst case scenario).

Rigidity - While usually considered a good thing, the wrong kind of rigidity can turn your software into a house of cards. When you have methods with too many dependencies that are strictly enforced by breakage, you have a system that can't be touched or modified. Two solutions to this are loose coupling and high cohesion. Make methods as independent as they can be, yet closely tied to the objects that use them. Break up your objects if they get too big, or have too many dependencies on other unrelated objects.

Configuration scattered within the code - This is a basic design flaw that shows up frequently in the wild. The most common example is keeping database credentials in a database class. This can become a disaster when it gets out of control. Keep all configurable data (anything anyone can change) at a very high level, in a centralized location. This is one of the reasons you see huge config.php or similar files: by having everything in one place you make changes simple and easy. By scattering configuration through multiple files and locations you end up creating more time (and technical debt) for the next programmer.

Immobile code - Everyone has seen this one. Classes or methods that are marked "don't touch" are bad and a sign of a much larger problem. If you have something that cannot be modi[...]
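The repetition smell described above can be shown in a few lines of C# (the domain and names are hypothetical): the same tax-and-rounding arithmetic was pasted into every report method, and extracting it gives each caller a single definition to depend on.

```csharp
using System;

static class PriceReports
{
    // Before the fix, this expression was duplicated in every report method.
    // Extracted once, a change to the rounding or tax rule happens in one place.
    static decimal GrossPrice(decimal net, decimal taxRate) =>
        Math.Round(net * (1 + taxRate), 2);

    public static decimal InvoiceTotal(decimal net) => GrossPrice(net, 0.20m);
    public static decimal QuoteTotal(decimal net)   => GrossPrice(net, 0.20m);
}
```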

7 Essential Principles to know: iPhone App Development

Sun, 25 Feb 2018 23:30:22 GMT

iPhones are definitely among the most in-demand smartphones; their unique features and dynamic UI/UX make every iPhone user go in awe. Thanks to the amazing features of the iPhone, coupled with the awesome apps one can download, the productivity and ease of the iPhone reach the next level.

Boom in the demand for iPhone applications for businesses:

With the booming demand in the iPhone app development industry, every iPhone development company is striving to build apps with the most agile technologies available for the iOS app development process. Swift is the primary language behind iOS applications, and handling it is not a cakewalk. In fact, the whole process of iOS app development can be tangled and overwhelming if not handled by professionals in the industry. In this write-up, we will spotlight 7 essential pointers every iOS app developer, freelancer, or company should know before diving into the deep sea of development. After all, you never find rare stones or diamonds on the surface!

Core Principles of iPhone application development:

Let's get to the checklist of the technicalities of app development for iOS. The phases of iOS development are:

1. Discovery
2. Concept
3. Wireframe
4. Design/UX
5. Architecture
6. Coding
7. Testing
8. Launching

The above phases should be conducted only by an expert iPhone development company.

  • Choose Developers after Scrutiny: The best way to make an app a big success is to start right. Choose the developer carefully after strict scrutiny; checking the developer's portfolio will show you the real picture, because the developer will be responsible for all the development phases of the app: creating a wireframe, choosing the architectural pattern, coding, testing, and finally launching the app in the App Store.

  • MVC - The Groundwork of any iOS app: What is MVC?
It is the foundational architectural design pattern of most iOS apps. It stands for Model-View-Controller, and it is based on the idea that when a user takes an action in an app's UI, the View asks the Controller for the information to show. It is a building block, or backbone, of a solid iPhone app; the actual flow between the View and the Model is managed by the Controller.

  • UI/UX of an app: First impressions matter! After the creation of the wireframe, which gives us a rough idea of what the app will look like (a user roadmap), it's time to create a dynamic UI for the app. Designing the best UX is a challenging task, but if the best elements are integrated into the app's UX, it will surely become a huge success. Even before the user moves on to the app's functionality, it's the UX a user will notice, and if it manages to let the user navigate smoothly and seamlessly, your app will pass the test.

  • Action-Oriented Programming/Coding: Though it's basically MVC that controls the sending and receiving of information when an event is triggered by the app's user, it is the interaction between the Model, the View, and the Controller that is responsible for this. An event is triggered by a request from the user, and the code defines these events and actions.

  • Front-end and Back-end of the app: After the above phases of building an app for clients, it is with Objective-C, Swift, and the Cocoa Touch framework that the iOS developer constructs the controller layer of the app. An engineer responsible for creating the back end of the app, with full functionality, is called a back-end systems engineer; it is important to hire one with complete knowledge and experience.

  • iOS Development Environment & Frameworks: The frameworks required[...]

My Latest Virtualization Setup

Tue, 20 Feb 2018 17:56:02 GMT

Like many geeks of the time, I spent the 90s and 2000s with at least 2 or 3 old computers in a closet, connected by a switch, running various operating systems with various services on them. Giant, loud, clunky machines whirring away. This very website was hosted for years on an old Pentium machine running FreeBSD, connected to the DSL line in my bedroom. It was just the way you did things then. I had stacks of hard drives with labels on them: eBay specials I'd purchased for the sole purpose of putting different operating systems on them. Everything from Redhat, Gentoo, or FreeBSD to various versions of Windows; I would just swap the drive and go. For years I lived in a condo that had a coat closet near the front door with an electrical outlet in it. I ran Ethernet through the ceiling and filled it to the top with machines, all for development, file-serving, and web-hosting purposes. It all seems a little silly now.

2018 is a little different...

These days I have an Azure account, AWS, and Digital Ocean. Virtualization has made progress even the most die-hard geeks didn't expect. Even development itself is a bit abstracted from the bare metal these days, at least for most people; things like Docker make the OS and its configuration almost irrelevant. Yet I still have a server in my house, and I want to show you the setup. This is an old HP Xeon workstation I picked up that I was going to use for course production, but even with 2 Xeons and 32 GB of RAM it fails to outperform my Mac Mini for video and audio rendering. So I decided to use it for some remote virtualization work, and stuffed it in a corner of our daylight basement.

Why do I need this?

Lately I've been doing a lot of random software development and building courses online. For those courses I need a fresh operating system and development environment.
Obviously I could record some of these things on my desktop, but with the frequent configuration changes and software installs I could compromise the accuracy of my courses. So I want a machine with a dedicated operating system for each course. That motivated me to set this up.

Getting the server set up

To start out, I originally had Windows 10 on this server. It ran well, and the performance was not too bad, but if I were to use it only as a virtualization machine, the overhead wasn't great. With the machine at an idle, I was using up 2 GB of memory just loading a desktop with nothing else running. Surely I could do better than that.

It should come as a surprise to nobody that I decided to go with a Linux base, and I installed Debian 9 on it. Why Debian? I like it because it's lean, simple, and well supported. It doesn't have bleeding-edge packages (I use Arch on the desktop for that), but it's stable. Incredibly stable. I wanted something I could install and forget about, so Debian was my choice. I went with a barebones Linux installation, a custom kernel, and no window manager, and found that the idle usage was significantly better. This gives me a little more headroom and CPU cycles to work with.

I ended up installing LXDE, because it's easier to configure and manage VMs with a window system, but I needed the server to boot to just a console, so that I can start and stop the windowing environment when I want to. To do that, I had to set my runlevel, which works a little differently these days than in the past: you check the current default with systemctl get-default (in my case it reported the graphical default, which is not what I want) and change it with systemctl set-default, so that I can fire up the server at a prompt and not have the windowing system loaded in memory. Yep, it's that e[...]
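The exact target names were lost from this copy of the post; assuming a standard systemd setup, where graphical.target is the desktop default and multi-user.target is the text-console one, the commands look like this:

```shell
# Show the current default boot target
# (a desktop install typically reports graphical.target)
systemctl get-default

# Boot to a plain console from now on
sudo systemctl set-default multi-user.target

# Start the windowing environment by hand only when it's needed
sudo systemctl isolate graphical.target
```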

Free Book Chapter

Sun, 18 Feb 2018 15:10:11 GMT


Here’s a free chapter from Programming the Microsoft Bot Framework, "Fine-Tuning Your Chatbot":



DAX Studio 2.7.2 Released

Tue, 13 Feb 2018 00:54:25 GMT


The latest update for DAX Studio is now live at

This release includes a number of small enhancements and fixes including the following:

  • Enhancement: Allowing "Unlimited" Dataset sizes from PowerPivot – previously results were buffered through an internal memory structure that had a 2Gb limit which resulted in most typical queries failing at around the 2 million row mark. We've now implemented a new streaming interface which removes this limit and have run tests exporting over 6 million records and creating a 6Gb csv file.
  • Fix: Default Separators so that the option run queries with non-US separators is used correctly when set in the Options screen
  • Fix: Tracing Query Plans for PowerPivot
  • Fix: Setting Focus to the Find box after typing Ctrl-F
  • Fix: "Linked Excel" output when connected to an Analysis Services Multi-Dimensional cube so that it always includes Cube=[CubeName] in the connection string
  • Fix: Crash in "Define and Expand Measure" when run against a Power BI model with a measure with the same name as one of the column names

In addition to the specific issues fixed above, numerous stability enhancements have been made as a result of crash reports that have been logged. Thanks to those of you who have submitted these reports when the program crashes, and especially those who have taken the extra time to note down some information about what you were doing when the crash occurred. With some of the crash reports it's easy to figure out what happened from the stack trace and screenshot, but in other cases it's quite difficult. We have also seen some reports that appear to be from .NET faults or issues in some of the third-party libraries that we use.


Load Testing Your IIS Web Server

Sun, 11 Feb 2018 12:25:02 GMT

All the theory, calculations, and estimations in the world aren't going to tell you how your website will truly perform under load. If you're deploying a new server or doing any kind of performance enhancement, you don't want to test your results in production; it's always a good idea to see how your system behaves before your visitors do. To do that, you can use a load-testing tool, and here are a few I use quite frequently.

Update: I've featured these tools in my latest IIS course on Pluralsight, IIS Administration in Depth. Check it out!

Netling

One of the "quick and dirty" applications I use is Netling. This is a super simple tester written in C#. You will need to compile it with Visual Studio, but you don't necessarily have to be a developer to do it: I've been able to load it up and select Build to create it, with no modifications, in many versions of Visual Studio. Netling is super simple to operate and about as easy as it gets.

You select how many threads you want to run. This is entirely up to you: more threads will put more load on your machine, and depending on how many cores your CPU has, more may not necessarily be better. Experiment with it and see what works best for you.

It has a feature for "pipelining". This is when multiple requests go through a single socket without waiting for a response. Setting this higher will generate a higher load, but again, this is something to adjust for best results. There will be a physical limit to pipelining depending on your hardware and connection speed.

This is a handy tool and is extremely simple to use. One issue I've had with Netling is that it sends raw requests that aren't much like a real browser.
To emulate real traffic more accurately, I have another tool I like to use.

Netling Pros:
  • Free
  • Open source, can be easily modified
  • Extremely simple

Netling Cons:
  • Doesn't simulate real transactions well
  • Can't do authentication or other simulations
  • Only tests one URL at a time

WebSurge

WebSurge is by far one of my favorites. It's a great application that simulates load on your server in a very realistic fashion. With this program you create sessions, which means you can use more than one URL for the test. Each of the URLs will be run in the session, which can make the test more random and realistic. It has a ton of great options as well: it gives you quick results, and you can "drill down" to get more detailed data. You can also export these results in several formats: the WebSurge proprietary format, as well as XML or JSON. You could parse these results for future analysis work. Overall, WebSurge is among my favorites for load testing because it's closer to real-world traffic. If you put in a list of all your pages and randomize the test, it can provide some solid information.

WebSurge Pros:
  • Free for personal use; the professional version is reasonably priced
  • Fast, and generates a large load
  • Simple to use, yet powerful
  • Simulates "real world" traffic very well
  • Extremely configurable

WebSurge Cons:
  • None that I can think of

Apache JMeter

The next application we'll look at is JMeter. This is an extremely powerful program that can do very thorough testing in addition to generating load. In fact, load testing with JMeter is just a very small part of its overall functionality. With JMeter you have scenarios to run, and because it's more of a testing-oriented application, you can run through a longer set of steps and processes as part of your test. I would encourage you to really dig into this application and learn as much as you can about it to get its full benefits.

Apache JMeter Pros:
  • Free
  • Very powerful
  • Detailed tests [...]
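The thread-count knob these tools expose can be approximated in a few lines of C# with HttpClient; this is only a rough sketch of the idea (a bounded number of concurrent GETs, counting successes), not a substitute for the tools above, and the type and parameter names are my own.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class MiniLoadTester
{
    // Fire totalRequests GETs at url, with at most `concurrency` in flight,
    // and report how many came back with a success status code.
    public static async Task<int> RunAsync(string url, int concurrency, int totalRequests)
    {
        using var client = new HttpClient();
        using var gate = new SemaphoreSlim(concurrency);

        var tasks = Enumerable.Range(0, totalRequests).Select(async _ =>
        {
            await gate.WaitAsync();
            try
            {
                using var response = await client.GetAsync(url);
                return response.IsSuccessStatusCode ? 1 : 0;
            }
            catch (HttpRequestException)
            {
                return 0; // connection refused, DNS failure, etc.
            }
            finally
            {
                gate.Release();
            }
        });

        var results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}
```

Raising `concurrency` plays the same role as the thread setting in Netling: more simultaneous sockets, more load on both ends.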

DAX Studio recent Win7 SP1 crashes

Tue, 06 Feb 2018 00:08:24 GMT


We've just found out that a January 2018 security update to the .NET Framework for Windows 7 SP1 has been causing crashes in DAX Studio when accessing the File menu. Unfortunately, this issue is outside of our control and affects any WPF-based Windows app that references the Windows font collection (which DAX Studio does in the Options window).

If this issue is affecting you, the following link outlines the cause of the issue and some possible fixes.

This fault typically manifests as a fatal DAX Studio crash with a CrashReporter dialog which reports an "MS.Internal.FontFace.CompositeFontParser.Fail" exception and a message saying "No FontFamily element found in FontFamilyCollection that matches current OS or greater: Windows7SP1"

Hopefully a follow-up patch for this will be released soon that will remove the need for people to apply manual fixes for this.


C# Collection part 2

Sun, 04 Feb 2018 10:24:59 GMT


Learn C# Collection part 1

Sun, 04 Feb 2018 08:58:06 GMT


SQL Server: Moving DATA and LOG file to different location

Thu, 01 Feb 2018 04:36:49 GMT

The default location of the "data" and "log" files for a SQL Server database is "C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\DATA" for version 2016, unless it is set differently during the installation process.

The core issue discussed here is moving the database files from one drive to another when your current drive is getting exhausted. In my case the C: drive was running out of space and I wanted to move the files to the F: drive. So start SQL Server Management Studio and run the following query to find the current file locations for a specific DB:

 SELECT name, physical_name FROM <DatabaseName>.sys.database_files

Once you have verified the locations, copy the .mdf and .ldf files from the location listed in the output of the above query to the desired drive (location). At this point you have the database DATA and LOG files on 2 drives, and you want them at just the new location. So before you are ready to delete the files from the old location, you must first change the file registration at the server level. You can edit the file locations using the following commands:

 ALTER DATABASE <DatabaseName> MODIFY FILE (NAME = <LogicalName>, FILENAME = 'F:\DATA\my_db.mdf');
 ALTER DATABASE <DatabaseName> MODIFY FILE (NAME = <LogicalName>_Log, FILENAME = 'F:\DATA\my_db.ldf');

Note: the values in angle brackets are placeholders; do replace them with your respective values.

The final part is to take the DB offline, delete the files in the old location, and bring the DB online again so that it starts using the files from the new location. [...]

Unit Test for EF LINQ queries using Mocked DbSet

Tue, 30 Jan 2018 08:11:40 GMT

Originally posted on: Create a unit test for an EF repository query. As we all know, unit tests are infrastructure agnostic and therefore have no knowledge of, for example, external services, databases, etc. So how can we create a unit test for a repository query without hitting the DB? A mocked DbSet to the rescue.

Solution: Create an extension method on collections that can be used to mock the data/collections returned by EF instead of hitting the DB.

```csharp
// A list of allocations for a gas system that we want to return
// when the EF LINQ query is executed
var allocationVOMockDbSet = new List<AllocationVO>
{
    new AllocationVO
    {
        LocationId = -1,
        ContractId = -1,
        NominationId = null,
        AllocatedQuantity = 1000,
        AdjustmentIndicatorType = AdjustmentIndicatorType.Measurement,
        GasDayId = -100,
        GasDay = new GasDay(null, DateTime.Today),
        AccountingPeriod = new YearMonth(2016, 10)
    },
    new AllocationVO
    {
        LocationId = -1,
        ContractId = -1,
        NominationId = null,
        AllocatedQuantity = 1500,
        AdjustmentIndicatorType = AdjustmentIndicatorType.Other,
        GasDayId = -100,
        GasDay = new GasDay(null, DateTime.Today),
        AccountingPeriod = new YearMonth(2016, 10)
    },
}.AsMockDbSet(); // here we define the list as a mocked set

// Now we use the StructureMap container to create and inject the DbContext of IAfterFlowContext
var afterFlowContextMock = thisContainer.InstantiateAndInjectMock<IAfterFlowContext>();

// Now we add the setup for the mocked context on Set of AllocationVO (a domain entity)
// so that it returns the mocked set
afterFlowContextMock.Setup(s => s.Set<AllocationVO>()).Returns(allocationVOMockDbSet.Object);
```

Here is the extension for collections on DbSet that makes it possible to mock all the different implementations of the DbSet:

```csharp
public static Mock<DbSet<T>> AsMockDbSet<T>(this ICollection<T> source) where T : class
{
    var queryable = source.AsQueryable();
    var dbSet = new Mock<DbSet<T>>();
    dbSet.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    dbSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    dbSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    dbSet.As<IEnumerable<T>>().Setup(m => m.GetEnumerator()).Returns(queryable.GetEnumerator);
    dbSet.Setup(s => s.Add(It.IsAny<T>())).Callback<T>(source.Add);
    dbSet.Setup(s => s.Include(It.IsAny<string>())).Returns(dbSet.Object);
    dbSet.Setup(x => x.AsNoTracking()).Returns(dbSet.Object);
    return dbSet;
}
```

[...]
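Since the mocked DbSet simply wraps an in-memory collection, the query logic a repository runs against it can also be checked directly against a plain list; a minimal self-contained sketch (the AllocationVO stand-in below is my own simplification, reduced to the fields the query needs):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MockedSetQuerySketch
{
    // Simplified stand-in for the AllocationVO entity above.
    class AllocationVO
    {
        public int GasDayId { get; set; }
        public decimal AllocatedQuantity { get; set; }
    }

    static void Main()
    {
        // The same two allocations the mocked DbSet returns.
        var allocations = new List<AllocationVO>
        {
            new AllocationVO { GasDayId = -100, AllocatedQuantity = 1000 },
            new AllocationVO { GasDayId = -100, AllocatedQuantity = 1500 },
        };

        // A repository-style LINQ query behaves identically against the in-memory
        // list and the mocked set, because both expose the same IQueryable surface.
        decimal total = allocations.AsQueryable()
            .Where(a => a.GasDayId == -100)
            .Sum(a => a.AllocatedQuantity);

        Console.WriteLine(total); // 2500
    }
}
```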

Having trouble using GitHub with BI projects

Fri, 26 Jan 2018 09:34:02 GMT

Originally posted on:

I had existing solution built.  I've attached a screenshot of local repository folder structure.

So to get started in GitHub, I created a company repository called "DataServices", and uploaded the entire code structure through the code >> Upload files functionality.

I was happy to see that my SQL project was showing a connection.  However, whenever I make a change, it shows the red checkmark for checkout while the file is being edited.  As soon as I save the file, that red checkmark goes away and is replaced by a blue lock sign, even though the change has not been committed or synced to the master branch.  There is no option on any of these files to right-click and commit.  When I go to Team Explorer and click on "Changes", it does not detect any changes.  I can disconnect and reconnect to the repository, and it is still the same.  However, after I close Visual Studio and reload, it shows the changes (see screenshot).

In my SSIS project, it's even worse.  It looks like only the solution file was connected to GitHub, and nothing else.  See the second screenshot.

Just to make sure, I created a blank solution and selected "Add to repository" through Visual Studio, creating the code in a brand new repository, then added some packages and checked in the code.  The result is the same.

So at this time, I can't check in the code without having to use the website and "upload files".  Our team cannot sync up as we don't know who has what checked out.

I need an expert to tell us what's wrong, what to try, and how to solve this.
(image) (image)

C# Local Function

Wed, 24 Jan 2018 23:18:25 GMT

Originally posted on:

```csharp
static void Main(string[] args)
{
    // Sum is a local function declared inside Main
    int Sum(int x, int y)
    {
        return x + y;
    }

    int res = Sum(5, 2); // res == 7
}
```

This short snippet shows how a local function works in C#.
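A slightly fuller sketch (my own illustration, not from the original post) showing that local functions can also capture enclosing locals and recurse:

```csharp
using System;

class LocalFunctionDemo
{
    static void Main()
    {
        int offset = 10; // captured by the local function below

        // Local functions can read variables of the enclosing method.
        int SumWithOffset(int x, int y) => x + y + offset;

        // They can also be recursive.
        int Factorial(int n) => n <= 1 ? 1 : n * Factorial(n - 1);

        Console.WriteLine(SumWithOffset(5, 2)); // 17
        Console.WriteLine(Factorial(5));        // 120
    }
}
```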

ICYMI: Programming the Microsoft Bot Framework

Wed, 10 Jan 2018 14:13:54 GMT

Originally posted on:

My latest book, Programming the Microsoft Bot Framework: A Multiplatform Approach to Building Chatbots, is now available. You can find more details on the Microsoft Press site.



Hello World in Xamarin

Wed, 27 Dec 2017 15:01:41 GMT

Originally posted on:

(embedded video)

Released LINQ to Twitter v4.2.0

Tue, 19 Dec 2017 10:03:20 GMT

Originally posted on:, I released the latest version of LINQ to Twitter. In addition to fixing bugs, the highlighted features of this release include support for DM Events, Extended Tweets, and .NET Core. Here’s a demo of using extended mode tweets in a Search query:

```csharp
static async Task DoSearchAsync(TwitterContext twitterCtx)
{
    string searchTerm = "\"LINQ to Twitter\" OR Linq2Twitter OR LinqToTwitter OR JoeMayo";

    Search searchResponse =
        await
        (from search in twitterCtx.Search
         where search.Type == SearchType.Search &&
               search.Query == searchTerm &&
               search.IncludeEntities == true &&
               search.TweetMode == TweetMode.Extended
         select search)
        .SingleOrDefaultAsync();

    if (searchResponse?.Statuses != null)
        searchResponse.Statuses.ForEach(tweet =>
            Console.WriteLine(
                "\n  User: {0} ({1})\n  Tweet: {2}",
                tweet.User.ScreenNameResponse,
                tweet.User.UserIDResponse,
                tweet.Text ?? tweet.FullText));
    else
        Console.WriteLine("No entries found.");
}
```

Notice that the query now has a TweetMode property. You can set this to the enum value TweetMode.Extended to request tweets that go beyond 140 characters. To handle the difference between Classic and Extended tweets, the Console.WriteLine statement uses either tweet.Text or tweet.FullText. A null tweet.Text tells you that the tweet is extended, which is consistent with the Twitter API convention. Here are a few API queries associated with Direct Message Events:

Show:

```csharp
DirectMessageEvents dmResponse =
    await
    (from dm in twitterCtx.DirectMessageEvents
     where dm.Type == DirectMessageEventsType.Show &&
           dm.ID == 917929712638246916
     select dm)
    .SingleOrDefaultAsync();

MessageCreate msgCreate = dmResponse?.Value?.DMEvent?.MessageCreate;

if (dmResponse != null && msgCreate != null)
    Console.WriteLine(
        "From ID: {0}\nTo ID: {1}\nMessage Text: {2}",
        msgCreate.SenderID ?? "None",
        msgCreate.Target.RecipientID ?? "None",
        msgCreate.MessageData.Text ?? "None");
```

List:

```csharp
int count = 10; // intentionally set to a low number to demo paging
string cursor = "";
List<DMEvent> allDmEvents = new List<DMEvent>();

// you don't have a valid cursor until after the first query
DirectMessageEvents dmResponse =
    await
    (from dm in twitterCtx.DirectMessageEvents
     where dm.Type == DirectMessageEventsType.List &&
           dm.Count == count
     select dm)
    .SingleOrDefaultAsync();

allDmEvents.AddRange(dmResponse.Value.DMEvents);
cursor = dmResponse.Value.NextCursor;

while (!string.IsNullOrWhiteSpace(cursor))
{
    dmResponse = await (from dm in twitte[...]
```
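The cursor-paging pattern the List query follows can be sketched independently of the Twitter API; in this self-contained illustration the page shape and the Fetch function are hypothetical stand-ins for DirectMessageEvents.Value and the LINQ to Twitter query:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CursorPagingSketch
{
    // Hypothetical page shape: items plus a cursor for the next page ("" when exhausted).
    record Page(List<int> Items, string NextCursor);

    // Hypothetical data source holding 10 items, served in pages.
    static Page Fetch(string cursor, int count)
    {
        var all = Enumerable.Range(1, 10).ToList();
        int start = cursor == "" ? 0 : int.Parse(cursor);
        var items = all.Skip(start).Take(count).ToList();
        string next = start + count < all.Count ? (start + count).ToString() : "";
        return new Page(items, next);
    }

    static void Main()
    {
        var allItems = new List<int>();

        // The first query establishes the cursor, just as in the DM Events demo.
        var page = Fetch("", 4);
        allItems.AddRange(page.Items);
        string cursor = page.NextCursor;

        // Keep fetching until the source reports no further pages.
        while (!string.IsNullOrWhiteSpace(cursor))
        {
            page = Fetch(cursor, 4);
            allItems.AddRange(page.Items);
            cursor = page.NextCursor;
        }

        Console.WriteLine(allItems.Count); // all 10 items collected across 3 pages
    }
}
```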

importing ssis flat files in redshift using copy

Mon, 18 Dec 2017 16:59:51 GMT

Originally posted on:

After trying a million combinations, I finally figured out how to export data in SSIS using an OLE DB source (SQL Server) and a flat file destination. In the end, all I really needed to do was use ENCODING UTF16 in the COPY command in Redshift.  None of the settings I changed in SSIS actually helped, aside from making sure the Unicode box was checked in the General tab of the Flat File Destination settings.
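A hedged sketch of the COPY command in question; the table, bucket path, IAM role, and delimiter below are placeholders, and the key point is only the ENCODING UTF16 option:

```sql
COPY my_table
FROM 's3://my-bucket/exports/my_flat_file.txt'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '|'
ENCODING UTF16;
```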

creating a t-sql script to enable query store and auto tuning for each user database

Tue, 12 Dec 2017 11:02:18 GMT

Originally posted on:

The following generates the enabling statements for each user database:

```sql
-- build ALTER DATABASE statements for every user database
select 'alter database ' + quotename([name]) + ' set query_store = on; ' +
       'alter database ' + quotename([name]) + ' set automatic_tuning (force_last_good_plan = on);'
from sys.databases
where name not in ('master','model','msdb','tempdb')
```


SEO, Personas, and Analytics - Bridging the IT / Marketing Gap

Mon, 11 Dec 2017 09:11:27 GMT

Originally posted on:


As marketing teams and IT teams start to cross paths more often, one area where they both could use some collaborative education is performance analytics.  In this episode of Marketer-to-Marketer, Ardath Albee and Andy Crestodina discuss the topics of buyer personas, SEO, and data analytics.

In Visual Studio, how do I pass arguments in debug to a console program?

Thu, 30 Nov 2017 18:09:09 GMT

Originally posted on:

  1. Go to your project properties, either by right-clicking on the project and picking "Properties" or by picking Properties from the Project menu.

  2. Click on Debug, then enter your arguments into the "Command line arguments" field.

  3. Save.
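The arguments entered above then arrive in the args parameter of Main; a minimal sketch:

```csharp
using System;

class ArgsDemo
{
    static void Main(string[] args)
    {
        // With "alpha beta" entered in the arguments field,
        // args is { "alpha", "beta" }.
        foreach (string arg in args)
            Console.WriteLine(arg);
    }
}
```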


Generating a SAS token for Service Bus in .Net Core

Wed, 29 Nov 2017 15:16:18 GMT

Originally posted on: Microsoft has decided to separate the queue/topic send/receive functionality from the queue/topic management functionality.  Some of these separations make sense, while others, like the inability to auto-provision new queues/topics, do not. In any event, we can still create these objects using the Service Bus REST API, but it requires some special handling, especially for authorization. The send/receive client library uses a connection string for authentication.  This is great.  It's easy to use and can be stored as a secret. No fuss, no muss.  The REST API endpoints require a SAS token for authorization.  You would think there would be a provider to produce a SAS token for a resource, given the resource path and connection string.  You would be wrong.  Finding working samples of the token generation using .NET Core 2.x was surprisingly difficult.  In any event, after (too) much researching, I’ve come up with this:

```csharp
public interface ISasTokenCredentialProvider
{
    string GenerateTokenForResourceFromConnectionString(
        string resourcePath, string connectionString, TimeSpan? expiresIn = null);
}

public class SasTokenCredentialProvider : ISasTokenCredentialProvider
{
    private readonly ILogger logger;

    public SasTokenCredentialProvider(ILogger logger)
    {
        this.logger = logger;
    }

    public string GenerateTokenForResourceFromConnectionString(
        string resourcePath, string connectionString, TimeSpan? expiresIn = null)
    {
        if (string.IsNullOrEmpty(resourcePath))
        {
            throw new ArgumentNullException(nameof(resourcePath));
        }
        if (string.IsNullOrEmpty(connectionString))
        {
            throw new ArgumentException(nameof(connectionString));
        }

        // parse the connection string into useful parts
        var connectionInfo = new ServiceBusConnectionStringInfo(connectionString);

        // concatenate the service bus uri and resource paths to form the full resource uri
        var fullResourceUri = new Uri(new Uri(connectionInfo.ServiceBusResourceUri), resourcePath);

        // ensure it's URL encoded
        var fullEncodedResource = HttpUtility.UrlEncode(fullResourceUri.ToString());

        // default to a 10 minute token
        expiresIn = expiresIn ?? TimeSpan.FromMinutes(10);
        var expiry = this.ComputeExpiry(expiresIn.Value);

        // generate the signature hash
        var signature = this.GenerateSignedHash($"{fullEncodedResource}\n{expiry}", connectionInfo.KeyValue);

        // assemble the token
        var keyName = connectionInfo.KeyName;
        var token = $"SharedAccessSignature sr={fullEncodedResource}&sig={signature}&s
```

[...]
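The signing step the truncated code relies on is an HMAC-SHA256 over the URL-encoded resource URI and the expiry, Base64-encoded and then URL-encoded into the token. A minimal self-contained sketch; the key, resource URI, and key name below are made-up illustration values, not real credentials:

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class SasSignatureSketch
{
    static void Main()
    {
        // Made-up inputs standing in for the parsed connection string parts.
        string encodedResource = WebUtility.UrlEncode("https://example.servicebus.windows.net/myqueue");
        string keyValue = "not-a-real-key";
        long expiry = DateTimeOffset.UtcNow.AddMinutes(10).ToUnixTimeSeconds();

        // HMAC-SHA256 over "<encoded resource>\n<expiry>", as in the provider above.
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(keyValue));
        string signature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes($"{encodedResource}\n{expiry}")));

        // Assemble the SharedAccessSignature header value.
        string token = $"SharedAccessSignature sr={encodedResource}" +
                       $"&sig={WebUtility.UrlEncode(signature)}&se={expiry}&skn=RootManageSharedAccessKey";

        Console.WriteLine(token);
    }
}
```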

A Short Deep Learning Study Guide

Mon, 27 Nov 2017 22:34:36 GMT

Originally posted on:

Machine Learning pre-requisites:
- python charts
- intro to Neural nets
- MLLib

Foundations of Machine Learning:
- neural networks refresher - tutorial - theano code, briefly covers all topics

TensorFlow:
- Brownlee's blog - all Keras / LSTM tutorials
- docs
- Cloud Machine Learning - TensorFlow scale out
- Cookbook

Deep Learning:
- textbook
- online study guide

Reinforcement Learning:
- blog series

Bleeding edge:
- OpenAI blogs

[...]

Swashbuckle Swagger UI– Prompt for Access Token (.net Core)

Wed, 22 Nov 2017 10:21:17 GMT

Originally posted on: I use swagger to document my API endpoints.  I like the descriptive nature, and find the swagger UI to be a great place for quick testing and discovery.   The swagger UI works great out of the box for unsecured API endpoints, but doesn’t seem to have any built-in support for requiring users to supply an access token if it's required by the endpoint. Based on my research, it appears we can add an operation filter to inject the parameter into the swagger UI.  Using the code at as a guide, I’ve ported the filter to .NET Core (2.0) as:

```csharp
/// <summary>
///     This swagger operation filter
///     inspects the filter descriptors to look for authorization filters
///     and if found, will add a non-body operation parameter that
///     requires the user to provide an access token when invoking the api endpoints
/// </summary>
public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    #region Implementation of IOperationFilter

    public void Apply(Operation operation, OperationFilterContext context)
    {
        var descriptor = context.ApiDescription.ActionDescriptor;
        var isAuthorized = descriptor.FilterDescriptors
            .Any(i => i.Filter is AuthorizeFilter);
        var allowAnonymous = descriptor.FilterDescriptors
            .Any(i => i.Filter is AllowAnonymousFilter);

        if (isAuthorized && !allowAnonymous)
        {
            if (operation.Parameters == null)
            {
                operation.Parameters = new List<IParameter>();
            }

            operation.Parameters.Add(new NonBodyParameter
            {
                Name = "Authorization",
                In = "header",
                Description = "access token",
                Required = true,
                Type = "string"
            });
        }
    }

    #endregion
}
```

and add it to the Swagger middleware:

```csharp
services.AddSwaggerGen(c =>
{
    …
    c.OperationFilter<AddAuthorizationHeaderParameterOperationFilter>();
});
```

That’s it!  Now when an endpoint requires an access token, the swagger UI will render a parameter for it: [...]

Beginning Azure Machine Learning

Mon, 13 Nov 2017 18:20:57 GMT

Originally posted on:

Free version of Azure ML Studio for learning

Azure ML Cheat Sheet for Selecting ML Algorithm for your Experiments


Announcing Enzo Online: IoT and Mobile Development Made Easier

Mon, 13 Nov 2017 08:30:56 GMT

Originally posted on: If you are a software developer and would like to build a mobile application or an IoT system, it is likely that you will experience a steep learning curve for two reasons: development languages, and lack of SDKs.  Indeed, every platform has its own programming language variation: Objective-C on iOS, C++ on Arduino boards, Python on Raspberry Pis (or other supported languages), .NET/JavaScript for MVC Web applications, PowerShell for DevOps... and so forth. The second learning curve is around the limited (or complex) support for Software Development Kits (SDKs) on certain platforms, such as Microsoft Azure, or technologies that do not have formal SDKs (such as sending a text message from an Arduino board), or that introduce breaking changes when upgrading. To simplify an already complex ecosystem of languages and platforms, I created a new kind of cloud technology: Enzo Online. Enzo Online is an HTTP Protocol Bridge that makes it very easy to access other services. With Enzo Online, you can easily configure access to some of your key cloud services, and call them from your mobile apps and IoT devices without the need to download an SDK. In other words, Enzo Online allows you to configure your services once and reuse them from any language/platform through HTTPS calls. During the Preview phase of Enzo Online, you can query SQL Server/Azure and MySQL databases, send SMS messages and emails, and access other Microsoft Azure services (Service Bus, Azure Storage, Azure Key Vault) by sending simple HTTPS commands. At the time of this writing, Enzo Online is in preview, and many more services will be added over time. Visit for more information. This is a cross post from

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified ( Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies.
Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association. [...]

Octopus Deploy : Format exception unpacking Windows NuGet package onto Linux box

Fri, 10 Nov 2017 11:00:15 GMT

Originally posted on:

There appears to be an issue with OD's "Deploy NuGet package" task (tested in Octopus Deploy 3.1.5). We encountered the problem while unpacking a .NET Core 2.0 application on a Windows tentacle and copying the resultant assets to an Ubuntu box. We noticed that if the assets were extracted using OD's in-built "Deploy package" task, the application would fail to launch on the Linux box with an "incorrect format" exception. If we unpacked the NuGet package some other way, however (e.g. by extracting it manually), the application would run without error.

As a workaround, there are quite a few options. We have tried both of the following successfully:

1. Share the Octopus Packages folder, and then replace the "Deploy NuGet Package" step with a call to the "[System.IO.Compression.ZipFile]::ExtractToDirectory()" method, using the Octopus.Project.Name and Octopus.Release.Number properties to identify the package to extract and the Octopus.Tentacle.Agent.ApplicationDirectoryPath and Octopus.Environment.Name properties to identify the desired extraction path.

2. Install mono on the Linux box in question and unpack the package there using the "Deploy NuGet Package" step.
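Workaround 1 might look something like the following custom script step. This is a hypothetical sketch: the share path and the nupkg naming layout are assumptions, while the Octopus variable names are the ones mentioned above.

```powershell
# Hypothetical sketch of workaround 1: extract the package ourselves
# instead of using the "Deploy NuGet Package" step.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$packageShare = "\\octopusserver\Packages"   # assumed share of the Octopus Packages folder
$package = Join-Path $packageShare ("{0}.{1}.nupkg" -f `
    $OctopusParameters["Octopus.Project.Name"], `
    $OctopusParameters["Octopus.Release.Number"])

$destination = Join-Path $OctopusParameters["Octopus.Tentacle.Agent.ApplicationDirectoryPath"] `
    $OctopusParameters["Octopus.Environment.Name"]

[System.IO.Compression.ZipFile]::ExtractToDirectory($package, $destination)
```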

I have raised a Support ticket with Octopus Deploy:

Starting An Umbraco Project

Fri, 10 Nov 2017 07:08:50 GMT

Originally posted on: As I have been documenting Umbraco development, I realized that people need a starting point.  This post will cover how to start an Umbraco project using an approach suitable for ALM development processes. The criteria I feel a maintainable solution must meet include a customizable development project that can easily be kept in source control, with a robust and replicable database.  Of course, this has to fall within the options available with Umbraco.  For me, this means an ASP.NET web application and a SQL Server database.  Let’s take a look at the steps required to get started with this architecture.

Create The Database

I prefer a standard SQL Server database instance over SQL Server Express due to its manageability.  For each Umbraco instance we need to create an empty database, and then a SQL Server login and a user with permissions to alter the database structure.  You will need the login credentials when you first start your site.

Create The Solution

This is the easiest part of an Umbraco project.  The base of each Umbraco solution I create starts with an empty ASP.NET Web Application.  Once that is created, open the NuGet package manager and install the UmbracoCms package.  After that it is simply a matter of building and executing the application.

Finish Installation

As the ASP.NET application starts, it will present the installation settings.  The first prompt you will get is to create your admin credentials, as shown below.  Fill these fields in, but don’t press any buttons yet. The key is to be sure to click the Customize button before the Install button, because the installer doesn't ask whether you want to use an existing database before running the install; it will simply create a SQL Server Express instance on its own.  Pressing the Customize button will show the configuration screen shown below.  Fill in your SQL Server connection information and click Continue.

Conclusion

Once you start the install, sit back and relax.  In a few minutes you will have an environment that is ready for your Umbraco development.  This will be the starting point for other future posts.  Stay tuned. [...]
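The database step above can be sketched in T-SQL; the database name, login name, and password are placeholders, and granting db_owner is one simple way to give the installer rights to create the schema:

```sql
-- Create the empty database, a login, and a user able to alter the schema
CREATE DATABASE UmbracoSite;
GO
CREATE LOGIN UmbracoUser WITH PASSWORD = 'ChangeMe!123';
GO
USE UmbracoSite;
GO
CREATE USER UmbracoUser FOR LOGIN UmbracoUser;
ALTER ROLE db_owner ADD MEMBER UmbracoUser;
GO
```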

Relating Umbraco Content With the Content Picker

Wed, 08 Nov 2017 14:36:09 GMT

Originally posted on: After addressing Umbraco team development in my previous post, I want to explore maintaining relationships between pieces of content in Umbraco and accessing them programmatically here. For those of us who have a natural tendency to think of data entities and their relationships, working within a CMS hierarchy can be challenging.  Add to that the fact that users don’t only want to query within that hierarchy, and things get even more challenging.  Fortunately, we will see here that by adding the Content Picker to your document type definition and a little bit of LINQ to your template, you can deliver on all these scenarios.

Content Picker

Adding the Content Picker to your document type definition is the easiest part of the process, but make sure that you use the new version and not the one that is marked as obsolete.  You will then be presented with a content tree that allows you to navigate to and select any node in your site.

Querying Associated Content

The field in your content will return the ID of the content instance you associated using the Content Picker.  Unfortunately, it actually returns it as an HtmlString, so you need to use the ToString method before utilizing it or you will get unexpected results and compile errors. In the example below I am looking for the single piece of content selected in the Content Picker.  The LINQ query shows a more complicated approach, but this also gives you an idea of how you could get a list of all nodes of a certain content type and use a lambda expression to filter it.  It requires that you first back up to the ancestors of the content you are displaying and find the root.  The easier way is to use the Content or TypedContent methods of the UmbracoHelper.  In future posts I will show alternate methods for finding the root node as well.

```csharp
var contentId = Umbraco.Field("contentField");
var associatedContent = Model.Content.Ancestors().FirstOrDefault()
    .Children()
    .Where(x => x.Id == int.Parse(contentId.ToString()))
    .FirstOrDefault();
```

Conclusion

While the Umbraco team needs to create some better documentation for this feature, it is extremely useful for building and using relationships between content in your Umbraco site. [...]

How Did I Become An IT Consultant Curmudgeon?

Thu, 02 Nov 2017 01:25:29 GMT

Originally posted on:


I have been accused of being a curmudgeon by more than one co-worker.  The short, pithy answers to the question of how I got to this point would be “experience” or “it comes with age”.  But what is the real reason, and does it have any benefit?

Firstly, I was raised in an Irish-German family, which by default makes me surly and sarcastic.  After almost half a century of this habit, I don’t see any change coming there.  I also find that most developers have similar traits, along with a dry sense of humor.

The main thing that gained me the label of curmudgeon is the knack for identifying issues that could adversely affect a project.  Things like scope creep and knowledge voids that could push a project beyond its budget and deadlines.  This tendency has come from 20 years of being a technical lead and architect.  It is an attribute that has served me well.

The place where this becomes a thorn in some team members’ sides is that with the years I think I have become more blunt with my assessments.  I am still capable of tact, but probably need to employ it a little more liberally.

Ultimately, I think being a self-aware curmudgeon is a good thing.  As long as we continue to learn and strive to work with people, a little surliness is just the spice of life.
