

How to study effectively?

Tue, 18 Jul 2017 05:06:50 GMT

Originally posted on:


mysqli_connect(): (HY000/1045): Access denied for user (using password: YES)

Sun, 16 Jul 2017 04:58:18 GMT

Originally posted on:

Here is the solution for the error:

mysqli_connect(): (HY000/1045): Access denied for user (using password: YES)

Solution: make sure that your password doesn't have special characters; just keep a plain password (for example, 12345) and it will work. This is the strangest thing that I have ever seen. I spent about 2 hours figuring it out.

Note: the 12345 mentioned below is the plain password that you would like to set for the given username.

GRANT ALL PRIVILEGES ON dbname.* TO 'yourusername'@'%' IDENTIFIED BY '12345';
FLUSH PRIVILEGES; [...]
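As a sketch, the same grant can be applied from the command line with the mysql client. The database name, username, and password are the placeholders from the post, and this assumes a running MySQL server you can reach as root:

```shell
# Placeholder names from the post; requires a MySQL server and root credentials.
# IDENTIFIED BY in GRANT is pre-MySQL-8 syntax, matching the post.
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON dbname.* TO 'yourusername'@'%' IDENTIFIED BY '12345';
FLUSH PRIVILEGES;
SQL
```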

Pi Zero W - Streamer - Gains A Lego Case

Fri, 07 Jul 2017 13:26:14 GMT

Originally posted on:

So I’ve added a Lego Case To My Music Streamer 




It's working great. Our boys can wander in with their tablets/phones and just wirelessly play what they like. Music should be just like this: accessible and easy. It sounds awesome too.


The sum of its parts: Raspberry Pi Zero W, HiFiBerry DAC board, some Lego, Cambridge Audio amp and speakers... plus great kids.


Legacy Projects, Technical Debt and NDepend

Thu, 29 Jun 2017 13:44:51 GMT

Originally posted on:

Unless every project you've worked on has been green field and/or built with no time pressure, you'll have found yourself working on a legacy project at some point. Unwieldy methods, mystery sections of code, ancient technologies, wholesale duplication... it's not much fun, but it's a large percentage of the code that's out there. Projects to replace or rewrite these systems are commonplace, but where do you begin? What if you want to make a case to the business that such a system needs to be replaced?

Technical debt can be a useful metaphor to make that case, but while it's easy to explain in the abstract, it's difficult to come up with anything concrete to justify the expense of an update to someone with an eye on their bottom line. Thankfully, the folks at NDepend have now built technical debt computation on top of their code analysis tools, giving you a much easier way to have these sorts of discussions. This is doubly powerful - as well as putting a concrete cost on choosing not to refactor, the data it presents has the authority of having been produced by a tool. Tools don't lobby for nice tidy-up projects for academic reasons - they impartially detect problems in code. Someone (I think Erik Dietrich, but I can't find the blog) recently pointed out the advantage of an automated critique of this sort - there are no politics or personal opinions involved, and that automatically means everyone takes it more seriously.

A Real-World Example

I'm currently working with a legacy project, so when I heard about NDepend's new technical debt capabilities, I was eager to fire it up and see what it said. With all the default settings, it said this! The main takeaways are:

- Based on the number of lines of code, the project took an estimated 2,536 days of development.
- The code had 19,486 issues (!) of various severity - 2,736 were Major issues or worse.
- Based on the number and types of issues, the project's technical debt will take 944 development days to fix; i.e. we are currently 944 days in the hole if we are going to sort this out completely. That's approximately 3.5 developers for a year!
- The debt cost was 37.23% (944 technical debt days / 2,536 development days); i.e. 37.23% of the total cost of developing the software now exists as technical debt. Sad face.

As it was a legacy project, it predictably had no automated tests, which would have enabled NDepend to more precisely calculate the total annual interest incurred by the debt. Double sad face. You can still see NDepend's total interest calculation in the Debt and Issues explorer, though (see below) - it was 481 development days; i.e. an additional 481 days of development time needed for every year the issues in the code base go unfixed - that's about 2 whole developers! These numbers make a powerful financial argument for refactoring and cleaning up the code. Which is exactly what we're doing :)

But there's more - Debt and Issues

As usual with NDepend, you can explore the issues it finds in great detail. Selecting from the Explore debt menu, you can check out Debt and Issues on a rule-by-rule basis. The main offenders here are the aforementioned unwieldy methods and direct use of data access code in the UI layer. You see the debt and annual interest on a per-rule basis, with the annual interest sum in the bottom row. You can click into a particular rule to see more details, as well as the query used to calculate the debt and interest. For the 'Methods too complex' rule, debt is calculated directly from the Cyclomatic Complexity measurement - the number of paths through the method. Interest is calculated as 10 minutes per year if the method is 100% covered by tests, and 2 hours per year otherwise.
Again, as usual with NDepend, if you think these numbers don't sound quite right, or you'd like a[...]

SQL Azure as part of availability group, but not really doing any real work

Thu, 29 Jun 2017 04:38:30 GMT

Originally posted on:

Wouldn't it be nice to have SQL Azure as part of a high availability group, but not really doing any real work? I'm looking for a setup of Active/ReadOnly/DR-Azure. So far I have had no luck finding information on how to implement this.

SSMS 17.1 missing features

Thu, 29 Jun 2017 03:30:24 GMT

Originally posted on:

Just noticed that SSMS 17.1 (the current version) does not support drag and drop of a SQL file. I reverted back to SSMS 17 RC3 and that feature is back. Going to check on other features.

How Do I Open a Port in Windows 10 (refused connection) ?

Tue, 27 Jun 2017 11:00:49 GMT

Originally posted on:

If you are seeing the "connection refused" message when attempting to set up a localhost access port, the chances are good that the port is blocked from allowing connections through Windows. To open the port, follow these instructions:

1. Navigate to Control Panel, System and Security, and Windows Firewall.
2. Select Advanced settings and highlight Inbound Rules in the left pane.
3. Right-click Inbound Rules and select New Rule.
4. Add the port you need to open and click Next.
5. Add the protocol (TCP or UDP) and the port number into the next window and click Next.
6. Select Allow the connection in the next window and hit Next.
7. Select the network type as you see fit and click Next.
8. Name the rule something meaningful and click Finish. [...]
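The same inbound rule can also be created in one line from an elevated Windows command prompt with netsh. This is a sketch: the port number (8080) and rule name are example values, not from the post:

```shell
netsh advfirewall firewall add rule name="Open localhost port 8080" dir=in action=allow protocol=TCP localport=8080
```

Deleting the rule later is `netsh advfirewall firewall delete rule name="Open localhost port 8080"`.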

WordPress 101 - Plugins!

Wed, 21 Jun 2017 02:28:23 GMT

Originally posted on:

This blog series covers just a few of the many features of WordPress from a developer's perspective:

WordPress 101 - Setup/Configuration!
WordPress 101 - Plugins!
WordPress 101 - Themes!

"WordPress powers over 25% of the internet..."

If you've never heard of WordPress, you can think of it as the king of content management systems (CMS), allowing individuals and businesses to build and maintain robust websites with relative ease and with very little to no developer experience required. WordPress has a very flexible framework that allows 3rd party developers to create "themes" and "plugins" that website owners can download and install, most of them freely available. Themes allow you to completely change the look and feel of your website with the click of a button, and plugins add functionality ranging from full blown online stores to hooking into Google Analytics... really just about anything you can think of.

This article will focus on creating a simple plugin that will display a modal dialog when a user first visits your WordPress site.

NOTE: You may want to download the full sample to be able to more easily follow along.

What is a WordPress plugin?

"A WordPress Plugin is a program or a set of one or more functions written in the PHP scripting language, that adds a specific set of features or services to the WordPress site. You can seamlessly integrate a plugin with the site using access points and methods provided by the WordPress Plugin API."

From a traditional developer's view, what this really means is that we get the ability to hook into the framework and extend the base functionality to do nearly anything we want. So for example, say you have a client that needs their Google Analytics data displayed on an admin area of their website for the regional managers to review. Lucky for you, there's a wonderful plugin already created for that, and all you have to do is click a few buttons to install and configure it - now you're looking like a boss!

Or say your client tells you they want to capture more leads by asking site visitors to register with their newsletter when they first come to the site. I'm sure there's already a plugin that exists to do exactly that; however, we'll use this requirement as the basis for our example (minus the newsletter part).

If you have downloaded the sample plugin, let's go ahead and get it installed on your development instance of WordPress; otherwise you can skip ahead to the "Files and Locations" section.

1. In WordPress, click on the "Plugins" option, or hover and then select "Add New".
2. Click "Choose File", navigate to where you downloaded the sample .zip file, and then click "Install Now".
3. After a few moments, you'll see a confirmation message... let's go ahead and activate the plugin while we're here.
4. Now you'll see our new plugin listed.

Files and Locations

All WordPress plugins and associated files are located in the wp-content/plugins/ folder. For our example, you'll find our files in wp-content/plugins/holisticsTG. In there you'll find one file and one directory that contains 2 more files:

holisticsTG.php - our main 'entry point' file, if you will, for our plugin
includes/settings.php - handles saving the options that our plugin uses
includes/showpopup.php - does the work of displaying the modal dialog

holisticsTG.php

This file contains the required documentation/metadata as well as the initial code to get our plugin up and running. You'll notice at the top of the file, in the comments section, various information that is required by WordPress.

WordPress allows developers to wire into existing "actions", which are simply PHP functions that are executed at specific points throughout the page load/life-cycle. For example there is an action ("wp_[...]
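As a side note, the manual upload-and-activate steps above can also be scripted with the third-party WP-CLI tool (not covered in the post). A sketch, assuming WP-CLI is installed and you run it from the WordPress root; the zip file name is hypothetical:

```shell
# Install a plugin from a local zip and activate it in one step
wp plugin install ./holisticsTG.zip --activate

# Confirm it shows up among the active plugins
wp plugin list --status=active
```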

Creating a SharePoint DataLake with SQL Server using Enzo Unified

Mon, 19 Jun 2017 09:16:37 GMT

Originally posted on:

In this blog post I will show how you can easily copy a SharePoint list to a SQL Server table, and keep the data updated on a specific frequency, allowing you to easily create a DataLake for your SharePoint lists. This will work with SharePoint 2013 and higher, and with SharePoint Online. While you can spend a large amount of time learning the SharePoint APIs and their many subtleties, it is far more efficient to configure simple replication jobs that will work under most scenarios. The information provided in this post will help you get started in setting up a replication of SharePoint lists to a SQL Server database, so that you can query the local SQL Server database from Excel, reporting tools, or even directly against the database. You should also note that Enzo Unified provides direct real-time access to SharePoint lists through native SQL commands, so you can view, manage and update SharePoint list items.

Installing Enzo Unified

To try the steps provided in this lab, you will need the latest version of Enzo Unified (1.7 or higher) provided here: The download page also contains installation instructions.

Enzo Unified Configuration

Once Enzo Unified has been installed, start Enzo Manager (located in the Enzo Manager directory where Enzo was installed). Click on File -> Connect and enter the local Enzo connection information. NOTE: Enzo Unified is a Windows Service that looks like SQL Server; you must connect Enzo Manager to Enzo Unified, which by default is running on port 9550. The password should be the one you specified during the installation steps. The following screen shows typical connection settings against Enzo Unified.

Create Connection Strings

Next, you will need to create "Central Connection Strings" so that Enzo will know how to connect to the source system (SharePoint) and the destination database (SQL Server). You manage connection strings from the Configuration -> Manage Connection Strings menu. In the screen below, you can see that a few connection strings have been created. The first one is actually a connection string to Enzo Unified itself, which we will need later.

The next step is to configure the SharePoint adapter by specifying the credentials used by Enzo Unified. Configuring the SharePoint adapter is trivial; three parameters are needed: a SharePoint login name, the password for the login, and the URL of your SharePoint site. You should make sure the login has enough rights to access SharePoint lists and SharePoint fields. Once the configuration of the SharePoint site is complete, you can execute commands against Enzo Unified using SQL Server Management Studio.

Fetch records from SharePoint using SQL Server Management Studio

To try the above configuration, open SQL Server Management Studio and connect to Enzo Unified (not SQL Server). From the same machine where Enzo is running, a typical connection screen looks like this: Once you are connected to Enzo Unified, and assuming your SharePoint site has a list called Enzo Test, you can run simple SQL commands like this:

SELECT * FROM SharePoint.[list@Enzo Test]

Create a Virtual Table

You will also need to create a Virtual Table in Enzo so that the SharePoint list looks like a table in Enzo Unified. A Virtual Table is made of columns that match the SharePoint list you want to replicate. To do this, open Enzo Manager, select the SharePoint adapter, and create a new Virtual Table by clicking on the NEW icon; provide a name for the Virtual Table, and select the columns to create through a picker. In the example below, I am creating a Virtual Table called vEnzoTest, which mirrors a SharePoint List called [...]

WordPress: changing permalinks to Post name causes Page Not Found

Fri, 16 Jun 2017 00:38:13 GMT

Originally posted on:

When you select Post name in the permalinks section of the WordPress admin page, it might not work sometimes. The following are the reasons.

If the admin permalinks page itself shows a message along the lines of "if you had given us permission to write to .htaccess we could have done this ourselves", do the following:

1. Go to the WordPress installation folder over FTP, or if you are connected over SSH, navigate to the folder.
2. Ensure that you have permissions 644 for the .htaccess file and the wp-config.php file. (If you use FileZilla, you can right-click on the file and ensure that these are checked - Owner permissions: Read and Write; Group permissions: Read; Public permissions: Read.)
3. Also ensure that permission 755 is given for all the subfolders under your installation directory.
4. Go to the admin permalinks page again, choose the post name option and save it; the message mentioned above should go away.
5. If you still see the message, try setting the permissions for the .htaccess file to 777 and do step 4 again.
6. Now you should not see the message anymore.
7. Try to navigate to the post you have created.
8. If it works, go back to FileZilla and change the permissions back to 644; everything should still work fine.
9. If it still doesn't work and it says 404 page not found, do the following.
10. SSH to the server.
11. vim /etc/apache2/apache2.conf (might be httpd.conf in some cases).
12. Find the word Directory.
13. You can see a couple of Directory sections (ex: ....
14. Insert an additional section next to what you have, like below. Ensure that you mention your complete WordPress installation path; the path given below is just an example. For the given path, AllowOverride is what matters, so ensure that AllowOverride is set to All for the folder that you mention in the Directory tag.

<Directory /path/to/your/wordpress>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>

15. Restart your Apache server. In my case it's "service apache2 restart"; it can also be "service httpd restart".
16. Refresh the sample blog post that you have created, or create a new blog post from the admin page and try navigating to it; it should all work fine.

If it still doesn't work, it means your rewrite module may not be enabled. Ensure that this line is uncommented (remove the # in front of it) in /etc/apache2/apache2.conf (or httpd.conf):

LoadModule rewrite_module modules/mod_rewrite.so

Restart the Apache server, refresh the page and try again. Remember to do step 8 at the end if you have not done it already. [...]
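On a Debian/Ubuntu-style Apache install, the Directory section, the rewrite module, and the restart can be sketched as below. The WordPress path is a placeholder, and this assumes apache2 with the a2enmod helper; run as root:

```shell
# Append the override for the (placeholder) WordPress directory
cat >> /etc/apache2/apache2.conf <<'EOF'
<Directory /path/to/your/wordpress>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
EOF

# Enable mod_rewrite instead of editing the LoadModule line by hand
a2enmod rewrite

service apache2 restart
```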

If I Made a New Web App over the Last 12 Years

Wed, 14 Jun 2017 09:33:53 GMT

Originally posted on:

Have you thought about how you've built web applications in the past and how you want to do it now?

Please visit me at for my reflections.

Show off your API with a little Swagger...

Wed, 14 Jun 2017 07:36:33 GMT

Originally posted on:

So you've built yourself a ground-breaking RESTful .NET Web API that's effectively going to change the world... awesome! You've even sold and marketed the idea to maybe your boss or a potential client... fantastic! And then they ask you for documentation... oops!

You're not alone; developers are notorious for "saving the world" with their code and yet always seem to overlook the obvious. But fear not! There are a few easy solutions you can implement that will actually do all the documentation work for you. Microsoft has their own built-in version of this; however, I'm going to focus today on a specification called "Swagger." More specifically, the implementation technology we'll be using to generate the Swagger documentation with the Web API framework is called "Swashbuckle."

Why Swashbuckle?

Not only does Swashbuckle allow you to have very nicely formatted documentation, it will also handle auto-generating interactive test cases for you, so your consumers can easily play around with your API while they're familiarizing themselves with its overall usage. Here's a couple of screenshots of our Demo API found here: Here is an example of one of those interactive test cases I mentioned earlier...

"Ok, I'm sold. How do I set this up for my Web API?"

There are actually just a few simple steps to get this up and running:

1. Open the NuGet package manager within Visual Studio and install "Swashbuckle.Net45".
2. After the NuGet package is installed, you'll notice a new file has been added to your "App_Start" folder called "SwaggerConfig.cs". This file allows you to customize various aspects of your generated Swagger documentation; we'll dive more into that in just a bit.
3. For now, you can simply use your browser and point to the root URL of your API, appended with "/swagger".

So here is my controller code for our Demo API:

/// <summary>
/// Holistics Technology Group Demo API
/// </summary>
public class DemoApiController : ApiController
{
    /// <summary>
    /// Simple string example
    /// </summary>
    /// <returns>string</returns>
    [HttpGet]
    [Route("api/demoapi/helloworld")]
    public async Task<string> StringExample()
    {
        return await Task.Factory.StartNew(() => "Hello from Holistics Technology Group!");
    }

    /// <summary>
    /// Get weather summary
    /// </summary>
    /// <param name="zipCode">zip code</param>
    /// <returns>string</returns>
    [HttpPost]
    [Route("api/demoapi/todaysweather")]
    public async Task<string> Summary(string zipCode)
    {
        return await Task.Factory.StartNew(() => zipCode.LookupWeather().TodaySummary);
    }

    /// <summary>
    /// Get weather forecast object
    /// </summary>
    /// <param name="zipCode">zip code</param>
    /// <returns>WeatherData</returns>
    [HttpPost]
    [Route("api/demoapi/threedayforecast")]
    [ResponseType(typeof(WeatherData))]
    public async Task<WeatherData> Forecast(string zipCode)
    {
        return await Task.Factory.StartNew(() => zipCode.LookupWeather());
    }
}

Now that we have Swashbuckle wired up properly, we'll want to configure it to read in our XML code comments. This is where the SwaggerConfig.cs file comes into play. But first we'll need to tell Visual Studio to actually save our XML comments into our bin directory. Then open up SwaggerConfig.cs and un-comment the line that contains "c.IncludeXmlComments", supplying the path to the actual XML file name you set just above.

Swashbuckle is very configurable and we have only scratched the surface of what it can do, but this s[...]
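Once the site is running, you can sanity-check the generated documentation from the command line as well as the browser. A sketch with curl; the host and port are placeholders, and the paths are Swashbuckle's defaults for its generated document and UI:

```shell
# Placeholder host/port; assumes Swashbuckle's default routes
curl http://localhost:5000/swagger/docs/v1        # the raw Swagger JSON document
curl -I http://localhost:5000/swagger/ui/index    # the interactive documentation page
```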

The TL;DR on easily managing .NET concurrent asynchronous threads

Mon, 12 Jun 2017 10:04:38 GMT

Originally posted on:

Ever had the requirement for your code to perform some series of background actions - say, to query an API for weather data for various locations - but your provider caps you at having only 2 max concurrent connections at any given time? If so, then you've already gone through the wealth of information found online for a solution. I've seen quite a few myself and yes, they all work just fine! For me, I like to keep things as simple as possible... so with that in mind, and knowing that you're most likely here looking for quick code samples, here is my TL;DR for handling this type of scenario:

var mockActions = new List<Action>
{
    { () => { Console.WriteLine("Get Weather for Zip 1"); Thread.Sleep(1500); Console.WriteLine("Save Results for Zip 1"); } },
    { () => { Console.WriteLine("Get Weather for Zip 2"); Thread.Sleep(1500); Console.WriteLine("Save Results for Zip 2"); } },
    { () => { Console.WriteLine("Get Weather for Zip 3"); Thread.Sleep(1500); Console.WriteLine("Save Results for Zip 3"); } },
    { () => { Console.WriteLine("Get Weather for Zip 4"); Thread.Sleep(1500); Console.WriteLine("Save Results for Zip 4"); } },
    { () => { Console.WriteLine("Get Weather for Zip 5"); Thread.Sleep(1500); Console.WriteLine("Save Results for Zip 5"); } }
};

//Currently only allowing 2 concurrent threads to run at any given time
var threadBroker = new SemaphoreSlim(2);

Task.WhenAll(mockActions.Select(iAction => Task.Run(() =>
{
    try
    {
        threadBroker.WaitAsync().Wait();
        iAction();
    }
    finally
    {
        threadBroker.Release();
    }
}))).Wait();

NOTE: You'll want to implement your own exception handling and logging inside your try/finally block, which I have left out for brevity.

Now before we actually get into what's going on, here's the above wrapped into a nice little extension method...

public static class AsyncExtensions
{
    public static void ExecuteAsync(this IEnumerable<Action> actions, int maxCurrentThreads = 1)
    {
        var threadBroker = new SemaphoreSlim(maxCurrentThreads);
        Task.WhenAll(actions.Select(iAction => Task.Run(() =>
        {
            try
            {
                threadBroker.WaitAsync().Wait();
                iAction();
            }
            finally
            {
                threadBroker.Release();
            }
        }))).Wait();
    }
}

Then, using the same mockActions object above, here is how we consume the extension method:

mockActions.ExecuteAsync(2);

This really simplifies most of the more common concurrent asynchronous threading tasks that I run into on a day-to-day basis... So what's going on here? The key component we're utilizing is SemaphoreSlim, which allows us to manage the total number of concurrent threads that we want executing at any given time. This is ideal for situations where the thread wait times are "expected to be very short" (that's Microsoft's official documentation). The reason is that SemaphoreSlim implements something called SpinWait, which is fast but keeps the CPU active while waiting for a thread resource to free up. In situations where the wait times might be on the longer side, you can use SemaphoreSlim's big brother, Semaphore. Semaphore isn't quite as fast as its little brother, but it is a bit more robust.

Now let's look at a more realistic example utilizing our extension method while sticking with our weather API scenario:

var locker = new object();
var results = new List();
var zipCodes = new[] { "1111111", "222[...]
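For readers outside .NET, the same "cap the number of concurrent jobs" idea can be sketched in the shell with xargs -P. This is an analogy to the pattern above, not the author's code; the zip numbers simply mirror the mock actions:

```shell
# -P 2 caps the number of jobs running at once, like SemaphoreSlim(2) above;
# each job prints a "get" line, waits briefly, then prints a "save" line.
printf '%s\n' 1 2 3 4 5 |
  xargs -P 2 -I{} sh -c 'echo "Get Weather for Zip {}"; sleep 0.2; echo "Save Results for Zip {}"'
```

With -P 2 the third job does not start until one of the first two finishes, which is exactly the throttling behavior the semaphore provides.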

The 10 Commandments of Good SEO - Hipster Version

Sun, 04 Jun 2017 23:16:13 GMT

Originally posted on:

I don't know about you, but for those of us dedicated to online marketing, doesn't Google feel ever more like a religion? At least a little, right? Catholics have the figure of Moses, who met God on Mount Sinai and received the tablets with the commandments; it doesn't seem so far-fetched to imagine a hipster sent from Google, carrying not two heavy stones but a heavy tablet with the commandments of good SEO in epub or PDF version, according to the god Google, who sees everything, knows everything, and is ubiquitous.

And what would those 10 commandments of good SEO be? I believe, without doubt, something very much like what follows; and if you fail to fulfill them, expect many bad things for your website - more plagues than in a dark and humid basement. Here we go with the 10 commandments (a jingle in the style of the Top 40 charts would be great right about here).

You will love Google above all other search engines. Its power, and its way of deciding - like an emperor at the Roman circus - what is good and what is bad, may amuse you more or less; but whether you like it or not, you have to love everything about marketing 2.0, treat it with love and "respect" (as Ali G would say), and listen very, very closely to everything it has to say, because Google is like a mother who gives you advice - advice which is not advice at all, but the imperative orders of Sergeant O'Neil. And in case you need more reasons: here in Spain it has about a 90% share of search, and the missing 10% is those who use Internet Explorer and those who have some type of malware installed.

You will not take the name of your tags in vain. It is not worth cheating by putting tags that have nothing to do with your content; Google will see it and smack you behind the ears. You cannot put all the cities of Spain in your tags if you only serve one province, because that is very ugly; Google will catch you and kick you back to page 4 of the results - a blow that will make you laugh less than your YouTube videos of people falling over.

Sanctify the headings. Do not make a web page that does not respect the HTML heading structure, H1 to H6. You do not have to use them all, but make good use of at least H1, H2 and H3 to indicate what that content is about and to rank the importance of those words within it, so that it can be positioned well. Otherwise Google, which is always in a hurry, will choose for itself what you are talking about, and if you do not keep this commandment, it will put in whatever it wants.

You will honor the Penguin and the Panda. These two creatures of Mr. Google you must honor, because Google cares deeply about its children and will do anything for them: give quality content to the Panda and quality links to the Penguin. Or else... better not to know what Google will do if you dishonor either of the two; it will give you such a slap that you will be dancing with no orchestra to follow the rhythm.

You will not kill creativity. Google is a creative god, and if there is something it does not like at all, it is uncreative, empty and simplistic content that is of no use to your followers and fans. Every time you kill the creativity of your content, Google drops you down a page of results, so you had better not take it as a joke, or you will soon appear at the foot of the monitor, down there nex[...]

Get Custom Fields using Quickbooks SDK

Sun, 28 May 2017 13:41:19 GMT

Originally posted on:

I was attempting to get a custom field that a customer had added to Items. It took me a while to figure out; it is simple but obscure, and basically just requires adding a child element of OwnerID = 0. Once I found it, the QB SDK documentation was clear, after looking at this article for getting Customer custom info, described here:

One thing to note is that the DataExtRet element contains the custom fields, and only exists if that Item has data for the custom field(s) present.

Request for the item query:

<ItemQueryRq>
  <OwnerID>0</OwnerID>
</ItemQueryRq>

Environment: the customer is running Enterprise 16. The SDK type library I'm using is the QBXMLRP2 1.0 Type Library, file version 13.0.R4000 + 10. The app is a WPF/C# desktop billing application, currently built with Visual Studio 2015 Enterprise. [...]

SSIS project foreach loop editor does not show configuration for ADO or ADO.NET enumerator

Fri, 26 May 2017 07:39:22 GMT

Originally posted on:

UPDATE on 2017-05-31:

It seems to be a bug with version 17.1 (build 14.0.61705.170). I uninstalled that version (including some prerequisites and other components from the programs list), did a repair on Visual Studio 2015, then installed build 14.0.61021.0. I was able to see the ADO components working again (but lost the ability to deploy to SQL 2017, as expected). I will report this to the SSDT team.


I set up Visual Studio 2017, but the SSDT for SQL 2016 did not integrate, so I am in the Visual Studio 2015 Shell. I just created a Foreach Loop Container and tried to get to the configuration screen, but I only see blanks (see below; both ADO enumerators do not work, but the file one works). Anybody else seeing the same issue?




Writing a Voice Activated SharePoint Todo List - IoT App on RPi

Tue, 16 May 2017 09:49:09 GMT

Originally posted on:

Ever wanted to write a voice activated system on an IoT device that keeps track of your "todo list", plays your commands back to you, and sends you a text message with your todo list when it's time to walk out the door? Well, I did. In this blog post, I will provide a high level overview of the technologies I used, why I used them, a few things I learned along the way, and partial code to assist with your learning curve if you decide to jump on this. I also had the pleasure of demonstrating this prototype at Microsoft's Community Connections in Atlanta in front of my colleagues.

How It Works

I wanted to build a system using 2 Raspberry Pis (one running Windows 10 IoT Core, and another running Raspbian) that achieved the following objectives:

* Have 2 RPis that communicate through the Azure Service Bus. This was an objective of mine, not necessarily a requirement; the intent was to have two RPis running different operating systems communicate asynchronously without sharing the same network.
* Learn about the Microsoft Speech Recognition SDK. I didn't want to send data to the cloud for speech recognition, so I needed an SDK on the RPi to perform this function; I chose the Microsoft Speech Recognition SDK for this purpose.
* Communicate with multiple cloud services without any SDK, so that I could program the same way on Windows and Raspbian (Twilio, Azure Service Bus, Azure Table, SharePoint Online). I also wanted to minimize the learning curve of finding which SDK could run on Windows 10 IoT Core and Raspbian (Linux), so I used Enzo Unified to abstract the APIs and instead send simple HTTPS commands, allowing me to have an SDK-less development environment (except for the Speech Recognition SDK). Seriously... go find an SDK for SharePoint Online for Raspbian and UWP (Windows 10 IoT Core).

The overall solution looks like this:

Technologies

In order to achieve the above objectives, I used the following bill of materials:

* 2x Raspberry Pi 2 Model B - note that one RPi runs Windows 10 IoT Core, and the other runs Raspbian
* Microphone - I tried a few, but the best one I found for this project was the Mini AKIRO USB Microphone
* Speaker - I also tried a few; while there is a problem with this speaker on RPi and Windows, the Logitech Z50 was the better one
* USB Keyboard - I needed a simple way to have a keyboard and mouse while traveling, so I picked up the iPazzPort Mini Keyboard; awesome...
* Monitor - you can use an existing monitor, but I used the portable ATian 7 inch display. A bit small, but it does the job.
* IoT Dashboard - a utility that allows you to manage your RPis running Windows; make absolutely sure you run the latest build; it should automatically upgrade, but mine didn't.
* Windows 10 IoT Core - the Microsoft O/S used on one of the RPis; use the latest build (mine was 15063); if you are looking for instructions on how to install Windows from a command prompt, the link provided proved useful
* Raspbian - your RPi may be delivered with an SD card prel[...]

MS SQL - bcp to export varbinary to image file

Mon, 15 May 2017 22:56:27 GMT

Originally posted on:

I don't do much SQL on a regular day-to-day basis, but when it comes to it, it gets really exciting. Here is one of the odd days where I wanted to push out an image file from SQL that was stored as varbinary(MAX) in the database. As you all know, bcp is a very handy utility when it comes to dumping data out. So I made that my first choice, but soon realized it was difficult to handle varbinary with the default arguments. Reading the internet, here is what I could learn...

You need a format (.fmt) file for such an export. To generate the format file, first go to the command prompt and perform the following:

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout d:\test.png -T

Enter the file storage type of field annotation [varbinary(max)]:{press enter}
Enter prefix-length of field annotation [8]: 0
Enter length of field annotation [0]:{press enter}
Enter field terminator [none]:{press enter}

Do you want to save this format information in a file? [Y/n] y {press enter}
Host filename [bcp.fmt]: annotation.fmt {press enter}

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 1      Average : (1000.00 rows per sec.)

This will help you generate your format file, which can then be used to easily export the images.

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout "d:\test.png" -T -f annotation.fmt

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 15     Average : (66.67 rows per sec.)

I hope this helps someone...
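To see why answering 0 to the prefix-length prompt matters, consider what the exported file looks like either way. This is an illustrative sketch (not bcp itself): with the default 8-byte length prefix, the file would no longer start with the PNG signature, so image viewers would reject it.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"            # every PNG file starts with these 8 bytes
payload = PNG_SIG + b"...image data..."   # stand-in for the varbinary(max) column value

# Default varbinary(max) export: an 8-byte little-endian length prefix precedes the data
with_prefix = struct.pack("<q", len(payload)) + payload

# prefix-length 0 (the value entered above): the raw bytes only
raw_export = payload

assert not with_prefix.startswith(PNG_SIG)  # prefixed file is not a valid PNG
assert raw_export.startswith(PNG_SIG)       # raw file opens as an image
```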

This blog has been moved to

Sat, 13 May 2017 10:39:04 GMT

Originally posted on:

Hello everyone,

Good evening!

It’s my pleasure to use this blog to write about technical topics and the things that I do or use in my daily life. This blog has had a lot of technical issues in the past; it did not work properly.


Many times I clicked the publish button and it did not work. On those days I could see that nothing had been published on the whole website for many hours.


I am moving my blog to for my future posts; subscribe to me at my new address. I am currently focusing on .NET web & desktop programming.

I hope we can do more in the future. If you have any feedback, don’t hesitate to tweet me.


Happy coding


What’s New in C# 7.0 Article in Code Magazine

Thu, 11 May 2017 08:49:59 GMT

Originally posted on:

My What’s New in C# 7.0 article in Code Magazine is out now.


You can find it online, at:

Or you can download the PDF of the entire issue, at:

You can also get a print subscription, at:


Getting SQL Server MetaData the EASY Way Using SSMS

Thu, 11 May 2017 04:41:57 GMT

Originally posted on:

So, you are asked to find out which table uses up the most index space in a database.  Or how many rows are in a given table.  Or any other question about the metadata of the server.  Many people will immediately jump into writing a T-SQL or PowerShell script to get this information.  However, SSMS can give you all the information you need about any object on the server.  Enter OBJECT EXPLORER DETAILS.  Simply go to View –> Object Explorer Details OR hit F7 to open the window.  Click on any level of the Object Explorer tree, and see all of the objects below it.

So, let's answer the first question above… which table uses the most index space.  We'll check the AdventureWorks2012 database and see:

Dammit Jim, that doesn't tell us anything!  Well, hold on Kemosabe, there's more.  Just like every other Microsoft tool, you can right-click in the column bar of the Object Explorer Details pane, and add the column[s] you want to see.  So, I'll right-click on the column bar and select the "Index Space Used" column.  The columns can be sorted by clicking on the column name.  So, that makes our job a lot easier:

And, as we might have guessed, the dreaded SalesOrderDetail table uses the most index space.  And we've found that out without writing a single line of code, and we can be sure the results are accurate.

I guess it's possible that you are never asked about your metadata.  Maybe you're thinking "That don't confront me, as long as I get my paycheck next Friday" (George Thorogood reference… I couldn't help it).  But wait, don't answer yet… did I mention that the OED window can help us with bulk scripting?  Let's say we want to make changes to several stored procedures, and we want to script them off before doing so.
We have a choice to make:

1. Right-click on each procedure and choose "Script Stored Procedure As"
2. Right-click on the database, select Tasks –> Generate Scripts, and then use the wizard to whittle down the list of objects to just the stored procedures you want
3. Highlight the stored procedures you want to script in the OED window, right-click, and choose "Script Stored Procedure As"

Yeah, ok, that's all well and good, but there really isn't much time savings between the 3 options in that scenario.  I'll concede that point, but consider this: without using the OED window, there is no way to script non-database objects like Logins, Jobs, Alerts, Extended Events, etc, etc, etc without right-clicking on each individual object and choosing "Script As".  In the OED, just highlight all the objects you want to script and right-click.  You're done.  If you have 50 users you need to migrate to a new instance, or you want to move all of your maintenance jobs to a new server, that's going to save a lot of time.

I've seen lots of people much smarter than I am go hog-wild writing scripts to answer simple questions like "which table has the most rows".  Or, where is the log file for that table stored in the file system (you can find that out too).  Or what are the database collations on your server?  I've seen them opening multiple windows and cutting and pasting objects onto a new server.  While they were busy coding and filling in wizards, I[...]

Raspberry Pi Zero W - Media Streamer

Mon, 08 May 2017 04:35:23 GMT

Originally posted on:

So I’ve been wanting to update my media streamer for a while.

In our kitchen  I have an amp and speakers,  and I have a Raspberry Pi One acting as a media streamer,  primarily for use with Air Play.



It sounds truly amazing.    I did the build for around £30.     


Here is my build list -


Software -


Pi Zero W - Board only


Hammer on - Header Board - this is genius


DAC Board - Gives the Pi Decent Sound capability


Metal stand-offs - These just look nice



Parts I already had - 

8 Gb Micro SD Card

Power Supply - Used an old phone charger

Micro USB Cable

RCA Cable

Amp + Speakers





Video Editing PC Build - Ryzen 8-Core Processor

Fri, 05 May 2017 13:02:49 GMT

Originally posted on:

Hello everyone! It's an exciting day. The first shipment of parts has arrived for the new AMD Ryzen 7 1700X Processor Video Editing PC Build! This fresh new shiny piece of technology grabbed my attention as soon as it was originally released on 03/02/2017. It's worth checking out if you're considering upgrading. The 1700X clocks @ 3.4GHz with 8 Cores and currently delivers a 14717 benchmark score for $365.37! If you're interested in seeing how this stacks up to the competitors, check out PassMark's New Desktop CPU benchmarks this month.

I've always been a huge fan of Intel, but it makes my bank account sad planning an entire PC Build around the Intel Core i7-6950X or Intel Core i7-6900K. If you take a close look at the chart below, you can see me in the red. This is an accurate representation of my thought process when choosing between these Intel and AMD processors.

Looks outstanding, right? I'm anxiously awaiting the rest of the parts, which will arrive within the next couple of days. I picked out each one of these for my needs and budget, strictly for Video Editing and a little bit of Gaming. I'm currently operating on a Surface Pro 3, which is amazing for everything but those two hobbies! If you're thinking about doing a build similar to this but are concerned about the price, trimming down the SSD and RAM could help with the finances. Be decisive. If you're not careful, you may end up spending a week researching a single component. In a world full of constant innovation, it's always a good time for an upgrade!

I'm going to throw together a quick build guide with performance results soon for anyone interested. We'll see how well it functions! Hope this helps someone consider a Ryzen upgrade! Big shoutout to Amazon for mastering the logistics of fast shipping!
Good luck!

Here's the build:

* CPU + Motherboard: AMD YD170XBCAEWOF Ryzen 7 1700X Processor & ASUS PRIME X370-PRO Motherboard Bundle: $515.39
* CPU Cooler: Noctua NH-D15 SE-AM4 Premium-Grade 140mm Dual Tower CPU Cooler for AMD AM4: $89.90
* Memory: G.SKILL Ripjaws V Series 32GB (2 x 16GB) 288-Pin DDR4 SDRAM 2133 (PC4 17000) Z170/X99 Desktop Memory F4-2133C15D-32GVR: $237.00
* Storage: Samsung 850 EVO 1TB 2.5-Inch SATA III Internal SSD (MZ-75E1T0B/AM): $349.99
* Video Card: MSI VGA Graphic Cards RX 580 GAMING X 8G: $279.99
* Case: NZXT H440 Mid Tower Computer Case, Matt Black/Red w/ Window (CA-H442W-M1): $114.99
* Power Supply: EVGA SuperNOVA 750 G2, 80+ GOLD 750W, Fully Modular, EVGA ECO Mode, 10 Year Warranty, Includes FREE Power On Self Tester Power Supply 220-G2-0750-XR: $99.99
* Total: $1,687.25  [...]

MyGet Command Line Authentication for Windows NuGet

Thu, 04 May 2017 06:57:42 GMT

Originally posted on:

When using Windows and the NuGet package manager:

nuget.exe sources Add|Update -Name feedName -UserName user -Password secret

Adopt a SQL orphan

Thu, 04 May 2017 03:08:45 GMT

Originally posted on:

DBAs have been asked to copy a database from one SQL instance to another.  Even if the instances are identical, the relationship (SID) between the logins and table users is broken.  Microsoft includes sp_change_users_login to help mitigate this problem, but the stored procedure must be run database by database, or even user by user, which can take a significant amount of time.  Consider the time it takes to migrate an entire server, or create a secondary instance.  Why not automate the process?  Instead of taking hours, the following PowerShell script will fix the orphans first, and then drop any table users that do not have a corresponding login.  You'll be done in seconds instead of hours… but that will be our little secret.

Import-Module -Name SQLPS -DisableNameChecking
$Server = 'Myserver'

$ProdServer = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $Server
$ProdLogins = $ProdServer.Logins.Name

foreach ($database in $ProdServer.Databases)
{
    $db = $database.Name.ToString()
    $IsSystem = $database.IsSystemObject
    $ProdSchemas = $database.Schemas.Name

    #######################################
    # Auto-fix orphaned users with a login
    #######################################
    foreach ($user in $database.Users | select Name, Login | where {$_.Name -in $ProdLogins -and $_.Login -eq ""})
    {
        $sqlcmd = "USE [" + $db + "] EXEC sp_change_users_login 'Auto_Fix','" + $user.Name + "'"
        Invoke-Sqlcmd -Query $sqlcmd -ServerInstance $Server
    }

    #######################################
    # Drop table users that have no login
    #######################################
    foreach ($user in $database.Users | select Name, Login | where {$_.Name -notin $ProdLogins -and $_.Name -notin $ProdSchemas -and $_.Login -eq "" -and $db -notin ('SSISDB','ReportServer','ReportServerTempDb') -and $IsSystem -eq $false})
    {
        $sqlcmd = "USE [" + $db + "]
            IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'" + $user.Name + "')
            DROP USER [" + $user.Name + "]"
        Invoke-Sqlcmd -Query $sqlcmd -ServerInstance $Server
    }
}

In this example, I've excluded system databases and the SSRS and SSIS databases.  You can include/exclude any databases you want.  A version of this script has helped me with numerous server migrations, and continues to be a way to keep production instances in sync.  I've also used it to do nightly production refreshes to a development environment without causing problems for the users.  Give it a try, but don't forget to add some logging (I removed mine in the example to simplify the code).  So get going… adopt an orphan!
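The heart of the script is the pair of where clauses, which partition each database's users into "fixable" (name matches a server login) and "droppable" (no matching login or schema). A small sketch of that decision logic, with made-up names:

```python
# Hypothetical logins, schemas, and database users for illustration only
server_logins = {"alice", "bob"}
db_schemas = {"dbo", "guest", "sys"}
db_users = ["alice", "charlie", "dbo"]

# Users with a matching login get sp_change_users_login 'Auto_Fix'
to_fix = [u for u in db_users if u in server_logins]

# Users with no matching login and no schema of the same name get dropped
to_drop = [u for u in db_users
           if u not in server_logins and u not in db_schemas]

assert to_fix == ["alice"]
assert to_drop == ["charlie"]
```

Schema names are excluded from the drop list because a database principal that owns a schema (like dbo) must never be dropped by a cleanup pass.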

Deleting old AZURE backups with Powershell

Mon, 01 May 2017 08:26:55 GMT

Originally posted on:


As companies begin migrating to Azure, DBAs are tempted to save terrestrial (non-Azure) backup files to “the cloud” to leverage the unlimited amount of space that is available.  No more alarms in the middle of the night due to a lack of disk space.  Ola Hallengren has even added this option to his maintenance solution (  Others have also followed suit.

The trouble is that most backup solutions do not have a way to clean up old blob backup files.  This little script will clean up files older than the “$daystokeep” parameter. 


import-module AzureRM

$daysToKeep = 7
$storageAccount = 'sqlbackups'
$storageAccessKey = '0JqxsxUOb9kJQa2oAqlGFjalB78vieOqySMdValN+fpYfrSu2XtZz8VNaS/iEb5KqdVySsSFYtLKQe+/imDnEw=='
$storageContainer = 'prodbackups'

$context = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageAccessKey
New-AzureStorageContainer -Name $storageContainer -Context $context -Permission Blob -ErrorAction SilentlyContinue

$EGBlobs = Get-AzureStorageBlob -Container $storageContainer -Context $context | sort-object LastModified | select lastmodified, name

foreach ($blob in $EGBlobs)
{
    if ($blob.LastModified -lt (Get-Date).AddDays($daysToKeep * -1))
    {
        Remove-AzureStorageBlob -Blob $blob.Name -Container $storageContainer -Context $context
    }
}

You must of course set your own parameter values, but the script is pretty simple and blindly removes every file older than 7 days from the storage container named “prodbackups”.  With a little tweaking, you can filter by extension type or particular file name or pattern. 
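The retention check itself is just a LastModified comparison against a cutoff date, and filtering by extension or name pattern is one more condition in the same loop. A sketch of that logic, with made-up blob names and dates:

```python
from datetime import datetime, timedelta

days_to_keep = 7

# Hypothetical blob listing: (name, LastModified)
blobs = [
    ("prod_full_20170420.bak", datetime(2017, 4, 20)),
    ("prod_full_20170430.bak", datetime(2017, 4, 30)),
    ("readme.txt", datetime(2017, 4, 1)),
]

now = datetime(2017, 5, 1)
cutoff = now - timedelta(days=days_to_keep)

# Only .bak files older than the cutoff get deleted
to_delete = [name for name, modified in blobs
             if modified < cutoff and name.endswith(".bak")]

assert to_delete == ["prod_full_20170420.bak"]
```

The extension filter is the "little tweaking" mentioned above: it keeps a blind retention sweep from deleting non-backup files that happen to live in the same container.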


I have a version of this script in my maintenance plan, and run it every night.  Although there is an unlimited amount of space available in the cloud, leaving all of your backup files on a storage account without a little cleanup can get very expensive.  All of that storage comes at a price.


Query TFS/VSTS for past build history

Mon, 01 May 2017 01:23:21 GMT

Originally posted on:

I wrote a post recently about how to query TFS/VSTS for past build history and I wanted to share it here as well.

SQL server AlwaysOn Availability Group data latency issue on secondary replicas and synchronous state monitoring

Sun, 30 Apr 2017 15:58:38 GMT

Originally posted on:

This article explains how data synchronization works in a SQL Server AlwaysOn Availability Group, with some details about how to use the sys.dm_hadr_database_replica_states table to check replica states.

I borrowed this diagram from this article, which explains the data synchronization process in a SQL Server AlwaysOn Availability Group. The article has full details about the process. The things worth noting here are steps 5 and 6. When a transaction log is received by a secondary replica, it will be cached, and then:

5. The log is hardened to disk. Once the log is hardened, an acknowledgement is sent back to the primary replica.
6. The redo process writes the change to the actual database.

So for a synchronous-commit secondary replica, after step 5 the primary replica is acknowledged to complete the transaction, as “no data loss” has been confirmed on the secondary replica. This means that after a transaction is completed, SQL Server only guarantees the update has been written to the secondary replica's transaction log files, not the actual data file. So there will be some data latency on the secondary replicas even though they are configured as synchronous-commit. This means that after you make some changes on the primary replica, if you try to read them immediately from a secondary replica, you might find your changes are not there yet. This is why, in our project, we need to monitor the data synchronization states on the secondary replicas.
To get the replica states, I use this query:

select
    r.replica_server_name,
    rs.is_primary_replica IsPrimary,
    rs.last_received_lsn,
    rs.last_hardened_lsn,
    rs.last_redone_lsn,
    rs.end_of_log_lsn,
    rs.last_commit_lsn
from sys.availability_replicas r
inner join sys.dm_hadr_database_replica_states rs on r.replica_id = rs.replica_id

The fields ending with “_lsn” are the last Log Sequence Numbers at different stages:

* last_received_lsn: the last LSN the secondary replica has received
* end_of_log_lsn: the last LSN that has been cached
* last_hardened_lsn: the last LSN that has been hardened to disk
* last_redone_lsn: the last LSN that has been redone
* last_commit_lsn: not sure what exactly this one is. From my test, most of the time it equals last_redone_lsn, with very rare cases where it is a little smaller than last_redone_lsn. So I guess it happens a little bit after redo.

If you run the query on the primary replica, it returns the information for all replicas. If you run the query on a secondary replica, it returns the data for itself.

Now we need to understand the format of those LSNs. As this article explained, an LSN is in the following format:

* the VLF (Virtual Log Files) sequence number 0x14 (20 decimal)
* in the log block starting at offset 0x61 in the VLF (measured in units of 512 bytes)
* the slot number 1

As the LSN values returned by the query are in decimal, we will have to break them into parts like this:

Now we can compare the LSN values to figure out some information about the state of the replica. For example, we can tell how far the redo process is behind the logs that have been hardened on Replica-2: last_hardened_lsn - last_redone_lsn = 5137616 - 5137608 = 8. Note, we don't need to include the slot number (the last 5 digits) in the calculation. In fact, most of the LS[...]
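Since the decimal LSNs end with a 5-digit slot number, dropping the last five digits before subtracting gives the block-level difference used in the comparison above. A sketch of that arithmetic (the LSN values are made up so that the block parts match the example's 5137616 - 5137608 = 8):

```python
def drop_slot(lsn: int) -> int:
    # A decimal LSN ends in a 5-digit slot number; remove it before comparing
    return lsn // 100_000

# Hypothetical decimal LSN values, chosen so the block parts are 5137616 and 5137608
last_hardened_lsn = 513761600001
last_redone_lsn = 513760800001

redo_lag_blocks = drop_slot(last_hardened_lsn) - drop_slot(last_redone_lsn)
assert redo_lag_blocks == 8  # redo is 8 log-block units behind the hardened log
```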

So you want to go Causal Neo4j in Azure? Sure we can do that

Wed, 26 Apr 2017 08:54:22 GMT

Originally posted on:

As you might have noticed, in the Azure marketplace you can install an HA instance of Neo4j – Awesomeballs! But what about if you want a Causal cluster? Hello Manual Operation!

Let's start with a clean slate. Typically in Azure you've probably got a dashboard stuffed full of other things, which can be distracting, so let's create a new dashboard:

Give it a natty name:

Save and you now have an empty dashboard. Onwards!

To create our cluster, we're gonna need 3 (count 'em) 3 machines, the bare minimum for a cluster. So let's fire up one. I'm creating a new Windows Server 2016 Datacenter machine. NB. I could be using Linux, but today I've gone Windows, and I'll probably have a play with docker on them in a subsequent post… I digress.

At the bottom of the 'new' window, you'll see a 'deployment model' option – choose 'Resource Manager'. Then press 'Create' and start to fill in the basics!

Name: Important to remember what it is. I've optimistically gone with 01, allowing me to expand all the way up to 99 before I rue the day I didn't choose 001.

User name: Important to remember how to login!

Resource group: I'm creating a new resource group; if you have an existing one you want to use, then go for it, but this gives me a good way to ensure all my Neo4j cluster resources are in one place.

Next, we've got to pick our size – I'm going with DS1_V2 (catchy) as it's pretty much the cheapest, and well – I'm all about being cheap. You should choose something appropriate for your needs, obvs.

On to settings… which is the bulk of our workload. I'm creating a new Virtual Network (VNet) and I've set the CIDR to the lowest I'm allowed to on Azure ( which gives me 8 internal IP addresses – I only need 3, so… waste.
I'm leaving the public IP as it is, no need to change that, but I am changing the Network Security Group (NSG) as I intend on using the same one for each of my machines, and so having '01' on the end (as is default) offends me. Feel free to rename your diagnostics storage stuff if you want. The choice, as they say, is yours.

Once you get the 'ticks' you are good to go:

It even adds it to the dashboard… awesomeballs!

Whilst we wait, let's add a couple of things to the dashboard. Well, one thing: the Resource group. So view the resource groups (menu down the side), press the ellipsis on the correct Resource group, and Pin to the Dashboard:

So now I have:

After what seems like a lifetime – you'll have a machine all setup and ready to go – well done you! Now, as it takes a little while for these machines to be provisioned, I would recommend you provision another 2 now. The important bits to remember are:

* Use the existing resource group
* Use the same disk storage
* Use the same virtual network
* Use the same Network Security Group

BTW, if you don't, you're only giving yourself more work, as you'll have to move them all to the right place eventually; may as well do it in one!

Whilst they are doing their thing, let's setup Neo4j on the first machine, so let's co[...]

MS Dynamics CRM Adapter now available on Enzo Unified

Tue, 25 Apr 2017 09:38:50 GMT

Originally posted on:

I am proud to present our first third party adapter created by Aldo Garcia. With this adapter, you can communicate to Microsoft Dynamics CRM Online directly from SQL Server Management Studio, and from stored procedures/views/functions. You can also programmatically access MS Dynamics CRM records through this adapter using basic HTTP commands and headers, without the need to download/use/install the Dynamics CRM SDK. This provides a simple, light-weight programmatic environment for both SQL and lightweight clients (such as IoT and mobile devices).

Aldo’s adapter allows you to issue simple SQL EXEC or SELECT/INSERT/UPDATE/DELETE commands to manage records in MS Dynamics. For example, this command retrieves all the records found in the account table:

SELECT * FROM DynamicsCRM.RetrieveMultiple@account

And this command does the same thing using an EXEC request:

EXEC DynamicsCRM.RetrieveMultiple 'account'

The SELECT command supports simple WHERE clauses, TOP N, NULL field filters, and ORDER BY operations.

To view how Aldo is able to read/write MS Dynamics data through simple SQL commands, visit his blog post here.

And to see how to use the MS Dynamics adapter to load data in bulk, please visit this blog post.


Redirect Standard Error to Output using PowerShell

Tue, 25 Apr 2017 00:02:48 GMT

Originally posted on:

We use LiquiBase for Database change control and Octopus for deployment to downstream environments in our CICD pipeline. Unfortunately, the version of LiquiBase we use writes information messages to standard error. Octopus then interprets this as an error and marks the deployment with warnings when in fact there were no warnings or errors. Newer versions of LiquiBase may have corrected this.

This statement in the update-database function of the liquibase.psm1 file will publish information level messages as errors in the Octopus task log:

..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update

As a work-around, you can call the statement as a separate process and redirect standard error to standard out as follows:

&  cmd /c "..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update 2>&1" | out-default

Now the messages are published to Octopus as standard output and display appropriately.
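The same merge can be demonstrated outside Octopus with any process that logs to standard error. In this illustrative Python sketch, a child process stands in for liquibase.bat; redirecting its stderr into stdout (which is what the 2>&1 in the cmd invocation above does) turns the informational message into ordinary output:

```python
import subprocess
import sys

# Child process that, like our LiquiBase version, logs an INFO message to stderr
child = [sys.executable, "-c",
         "import sys; sys.stderr.write('INFO: schema is up to date\\n')"]

# Without redirection the message arrives on stderr and would be flagged as an error
plain = subprocess.run(child, capture_output=True, text=True)
assert plain.stderr == "INFO: schema is up to date\n"
assert plain.stdout == ""

# With stderr redirected to stdout (the 2>&1 equivalent) it is ordinary output
merged = subprocess.run(child, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
assert merged.stdout == "INFO: schema is up to date\n"
```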


BizTalk Server best articles

Mon, 24 Apr 2017 04:36:10 GMT

Originally posted on:

To simplify navigation of the BizTalk articles, I've selected only the best articles.

Study Big Things
* BizTalk Internals: Publishers and Subscribers
* Part 1: Zombie, Instance Subscription and Convoys: Details
* Part 2: Suspend shape and Convoy
* Ordered Delivery
* Internals: the Partner Direct Ports and the Orchestration Chains
* Compensation Model

Study Small Things
* Internals: Namespaces
* Mapping Empty or Missing attributes and elements with different combinations of parameters: Required/Optional, Min/MaxOccurs, Default, Fixed
* Mapping, Logical functoids, and Boolean values
* Schema: Odds: Generate Instance
* BizTalk Messaging Model
* Internals: Mapping: Script functoid: Type of the Input and output parameters
* Internals: Enlisted/Unenlisted; Started/Stopped; Enabled/Disabled. Errors and Suspended messages generated by Subscribers
* Accumulating messages in MessageBox, Lifespan of the messages
* xpath: How to work with empty and Null elements in the Orchestration
* Internals: Schema Uniqueness Rule

Exams
* Advanced Questions
* Questions for interview without answers
* Interview questions and principles

Some Architecture
* BizTalk Integration Development Architecture
* Naming Conventions
* Artifact Composition
* Naming Conventions in Examples
* Domain Standards and Integration Architecture
* BizTalk Server and Agile. Can they live together?
* Complex XML schemas. How to simplify?

From Field
* Sample: Context routing and Throttling with orchestration
* BizTalk and RabbitMQ
* Mapping with Xslt
* Custom API: Promoted Properties
* Sample: Error Handling

Diagrams
* Timeline: Platform Support
* Timeline: Development Tools [...]

Nested Enumerable::Any in C++/CLI (difficult++)

Wed, 19 Apr 2017 03:20:20 GMT

Originally posted on:

I frequently use the following construct in C# (C-Sharp) to cross-reference the contents of two repositories simultaneously with one compound command:

IEnumerable<string> arr_strSources = new string[]{"Alpha", "Bravo", "Charlie", "Delta"};
IEnumerable<string> arr_strPieces = new string[] { "phha", "lie", "zelt" };
bool blnRetVal = arr_strPieces.Any(strPiece => arr_strSources.Any(strSource => strSource.Contains(strPiece)));
System.Diagnostics.Debug.WriteLine("Found = {0}", blnRetVal);

That code searches for the presence of ANY string in arr_strSources that contains ANY substring in arr_strPieces.
If found, it simply returns TRUE.
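For comparison, here is the equivalent nested-any check sketched in Python with the same strings as the C# snippet; like the C# version, the language handles the scoping of the lambda variables automatically:

```python
sources = ["Alpha", "Bravo", "Charlie", "Delta"]
pieces = ["phha", "lie", "zelt"]

# True if ANY source string contains ANY of the piece substrings
found = any(any(piece in source for source in sources) for piece in pieces)

assert found is True  # "lie" is a substring of "Charlie"
```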
In C++, this task is not syntactically as easy.
The biggest problem is the SCOPE and type of the anonymous variable.
Where the C# framework dynamically handles the conversion, it must be explicit in C++.

Here is the example:

To handle the scope of the second IEnumerable::String^, I created a class that (when instantiated) has a method that returns a Func that tests the contents.
The same thing happens with the individual Strings (another class used to handle the scoping).

This is nowhere as "cute" as the C# code, but it satisfies my curiosity.



How to use Bower to install packages

Fri, 31 Mar 2017 21:10:51 GMT

Originally posted on:

In VS 2017, you have the choice to install UI components using Bower.  If you worked previously on an MVC project in Visual Studio, you know all we used was NuGet to install anything from jQuery to Newtonsoft.Json.


To use Bower, right-click on the project and look for Manage Bower Packages; this option is listed next to Manage NuGet Packages.

Everything works just like the NuGet window. For library stuff you still need NuGet.


So, is there any way, like in NuGet, that I can just type and install the package?


The good thing with Bower is that it creates a bower.json file in your project's root directory, and you can just edit it.  For example, I need to install moment in my .NET Core project; now see how easy it is.


Open bower.json and start writing moment under dependencies. Now, when you type after the colon, it will show you all the versions. Doesn't that sound cool and much easier?


You'll see one version number starting with ~ and another with ^. If you want to know what those prefixes mean and how they work, please follow this Stack Overflow question.
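In short, ~2.18.0 allows only patch-level updates (>= 2.18.0 and < 2.19.0), while ^2.18.0 allows minor and patch updates (>= 2.18.0 and < 3.0.0). Here is a simplified sketch of those two rules (illustrative only, not the full semver range grammar Bower uses):

```python
def satisfies(version: str, spec: str) -> bool:
    # spec is "~X.Y.Z" (patch updates only) or "^X.Y.Z" (minor + patch updates)
    major, minor, patch = (int(x) for x in version.split("."))
    base = tuple(int(x) for x in spec[1:].split("."))
    if (major, minor, patch) < base:
        return False          # below the stated minimum version
    if spec[0] == "~":
        return (major, minor) == base[:2]   # same major.minor only
    if spec[0] == "^":
        return major == base[0]             # same major only
    raise ValueError("unsupported spec prefix")

assert satisfies("2.18.1", "~2.18.0")       # patch bump allowed
assert not satisfies("2.19.0", "~2.18.0")   # minor bump rejected
assert satisfies("2.19.0", "^2.18.0")       # minor bump allowed
assert not satisfies("3.0.0", "^2.18.0")    # major bump rejected
```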


Thanks for reading my post. Happy coding!


SQL Database is in use and cannot restore

Fri, 31 Mar 2017 08:17:39 GMT

Originally posted on:

USE master

-- Put the database in single-user mode; this rolls back all uncommitted transactions in the db.
ALTER DATABASE mydatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE mydatabase FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydatabase.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10

-- Return the database to multi-user mode once the restore completes.
ALTER DATABASE mydatabase SET MULTI_USER
