Get Custom Fields using Quickbooks SDK

Sun, 28 May 2017 13:41:19 GMT

Originally posted on:

I was attempting to get a custom field that a customer had added to Items. It took me a while to figure out; it is simple but obscure, and basically just requires adding a child element of OwnerID = 0. Once I found it, the QB SDK documentation was clear, after looking at this article for getting Customer custom info. The key thing is that the DataExtRet element contains the custom fields, and only exists if that Item has data for the custom field(s) present.

Request for the item query: [...]

Environment:
The customer is running Enterprise 16. The SDK type library I'm using is the QBXMLRP2 1.0 Type Library, File Version 13.0.R4000 + 10. The app is a WPF/C# desktop billing application, currently built with Visual Studio 2015 Enterprise. [...]
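The request XML in the original post did not survive formatting (only the OwnerID value of 0 is left). A minimal sketch of an item query carrying the OwnerID child element might look like the following; the qbxml version and the onError attribute are assumptions, not the author's exact request:

```xml
<?xml version="1.0" encoding="utf-8"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <ItemQueryRq>
      <!-- OwnerID 0 asks QuickBooks to return the public custom fields
           in a DataExtRet element for each item that has values -->
      <OwnerID>0</OwnerID>
    </ItemQueryRq>
  </QBXMLMsgsRq>
</QBXML>
```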

My leap of faith...

Sun, 28 May 2017 07:14:26 GMT

Originally posted on:

I have been a professional software engineer since the early 90's and have always enjoyed the comfort and security of working as a full-time employee.

SSIS project foreach loop editor does not show configuration for ADO or ADO.NET enumerator

Fri, 26 May 2017 07:39:22 GMT

Originally posted on:

I set up Visual Studio 2017, but SSDT for SQL 2016 did not integrate, so I am in the Visual Studio 2015 Shell.  I just created a Foreach Loop Container and tried to get to the configuration screen, but I only see blanks (see below; neither ADO enumerator works, but the Foreach File enumerator works).  Has anybody else seen the same issue?




Writing a Voice Activated SharePoint Todo List - IoT App on RPi

Tue, 16 May 2017 09:49:09 GMT

Originally posted on:

Ever wanted to write a voice activated system on an IoT device to keep track of your "todo list", hear your commands being played back, and have the system send you a text message with your todo list when it's time to walk out the door?  Well, I did. In this blog post, I will provide a high-level overview of the technologies I used, why I used them, a few things I learned along the way, and partial code to assist with your learning curve if you decide to jump on this.  I also had the pleasure of demonstrating this prototype at Microsoft's Community Connections in Atlanta in front of my colleagues.

How It Works

I wanted to build a system using 2 Raspberry Pis (one running Windows 10 IoT Core, and another running Raspbian) that achieved the following objectives:

* Have 2 RPis that communicate through the Azure Service Bus. This was an objective of mine, not necessarily a requirement; the intent was to have two RPis running different operating systems communicate asynchronously without sharing the same network.

* Learn about the Microsoft Speech Recognition SDK. I didn't want to send data to the cloud for speech recognition, so I needed an SDK on the RPi to perform this function; I chose the Microsoft Speech Recognition SDK for this purpose.

* Communicate with multiple cloud services without any SDK, so that I could program the same way on Windows and Raspbian (Twilio, Azure Service Bus, Azure Table, SharePoint Online). I also wanted to minimize the learning curve of finding which SDK could run on Windows 10 IoT Core and Raspbian (Linux), so I used Enzo Unified to abstract the APIs and instead send simple HTTPS commands, allowing me to have an SDK-less development environment (except for the Speech Recognition SDK). Seriously… go find an SDK for SharePoint Online for Raspbian and UWP (Windows 10 IoT Core).
The overall solution looks like this:

Technologies

In order to achieve the above objectives, I used the following bill of materials:

* 2x Raspberry Pi 2 Model B - Note that one RPi runs on Windows 10 IoT Core, and the other runs Raspbian.
* Microphone - I tried a few, but the best one I found for this project was the Mini AKIRO USB Microphone.
* Speaker - I also tried a few, and while there is a problem with this speaker on RPi and Windows, the Logitech Z50 was the better one.
* USB Keyboard - I needed a simple way to have keyboard and mouse while traveling, so I picked up the iPazzPort Mini Keyboard; awesome…
* Monitor - You can use an existing monitor, but I also used the portable ATian 7 inch display. A bit small, but does the job.
* IoT Dashboard - Utility that allows you to manage your RPis running Windows; make absolutely sure you run the latest build; it should automatically upgrade, but mine didn't.
* Windows 10 IoT Core - The Microsoft O/S used on one of the RPis; use the latest build; mine was 15063. If you are looking for instructions on how to install Windows from a command prompt, the link provided proved useful.
* Raspbian - Your RPi may be delivered with an SD card preloaded with the necessary utilities to install Raspbian; connecting to a wired network makes the installation a breeze.
* Visual Studio 2015 - I used VS2015, C#, to build the prototype for the Windows 10 IoT Core RPi.
* Python 3 - On the Raspbian RPi, I use[...]

MS SQL - bcp to export varbinary to image file

Mon, 15 May 2017 22:56:27 GMT

Originally posted on:

I don't do much SQL on a regular day-to-day basis, but when it comes to it, it gets really exciting. Here is one of those odd days where I wanted to push an image file out of SQL that was stored as varbinary(MAX) in the database. As you all know, bcp is a very handy utility when it comes to dumping data out. So I made that my first choice, but soon realized it was difficult to handle varbinary with the default arguments. Reading the internet, here is what I could learn...

You need a format (.fmt) file for such an export. To generate the format file, you first need to go to the command prompt and perform the following:

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout d:\test.png -T

Enter the file storage type of field annotation [varbinary(max)]:{press enter}
Enter prefix-length of field annotation [8]: 0
Enter length of field annotation [0]:{press enter}
Enter field terminator [none]:{press enter}

Do you want to save this format information in a file? [Y/n] y {press enter}
Host filename [bcp.fmt]: annotation.fmt {press enter}

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 1      Average : (1000.00 rows per sec.)

This will help you to generate your format file which then can be used to export out the images easily.
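For reference, the generated annotation.fmt should look roughly like this - a sketch of a non-XML bcp format file describing a single varbinary column with prefix length 0 and no terminator (the exact version number and column spacing vary by SQL Server release):

```
13.0
1
1       SQLBINARY       0       0       ""      1     annotation      ""
```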

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout "d:\test.png" -T -f annotation.fmt

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 15     Average : (66.67 rows per sec.)

I hope this helps someone...

This blog has been moved to

Sat, 13 May 2017 10:39:04 GMT

Originally posted on:

Hello everyone,

Good evening !

It's my pleasure to use this blog to write about technical topics and the things that I do or use in my daily life. This blog has had a lot of technical issues in the past; it has not been working properly.


Many times I clicked the publish button and it did not work. On those days I could see that nothing had been published on the whole website for many hours.


I am moving my blog; to follow my future posts, subscribe at my new address. I am currently focusing on .NET web & desktop programming.

I hope we can do more in the future. If you have any feedback, don't hesitate to tweet me.


Happy coding


What’s New in C# 7.0 Article in Code Magazine

Thu, 11 May 2017 08:49:59 GMT

Originally posted on:

My What’s New in C# 7.0 article in Code Magazine is out now.


You can find it online, at:

Or you can download the PDF of the entire issue, at:

You can also get a print subscription, at:


Getting SQL Server MetaData the EASY Way Using SSMS

Thu, 11 May 2017 04:41:57 GMT

Originally posted on:

So, you are asked to find out which table uses up the most index space in a database.  Or how many rows are in a given table.  Or any other question about the metadata of the server.  Many people will immediately jump into writing a T-SQL or Powershell script to get this information.  However, SSMS can give you all the information you need about any object on the server.  Enter, OBJECT EXPLORER DETAILS.

Simply go to View -> Object Explorer Details OR hit F7 to open the window.  Click on any level of the Object Explorer tree, and see all of the objects below it.  So, let's answer the first question above… which table uses the most index space.  We'll check the AdventureWorks2012 database and see:

Dammit Jim, that doesn't tell us anything!  Well, hold on Kemosabe, there's more.  Just like every other Microsoft tool, you can right-click in the column bar of the Object Explorer Details pane, and add the column[s] you want to see.  So, I'll right-click on the column bar and select the "Index Space Used" column.  The columns can be sorted by clicking on the column name.  So, that makes our job a lot easier:

And, as we might have guessed, the dreaded SalesOrderDetail table uses the most index space.  And we've found that out without writing a single line of code, and we can be sure the results are accurate.  I guess it's possible that you are never asked about your metadata.  Maybe you're thinking "That don't confront me, as long as I get my paycheck next Friday" (George Thorogood reference… I couldn't help it).  But wait, don't answer yet… did I mention that the OED window can help us with bulk scripting?  Let's say we want to make changes to several stored procedures, and we want to script them off before doing so.
We have a choice to make:

1. Right-click on each procedure and choose "Script Stored Procedure As"
2. Right-click on the database, select Tasks -> Generate Scripts, and then use the wizard to whittle down the list of objects to just the stored procedures you want
3. Highlight the stored procedures you want to script in the OED window, right-click, and choose "Script Stored Procedure As"

Yeah, ok, that's all well and good, but there really isn't much time savings between the 3 options in that scenario.  I'll concede that point, but consider this: without using the OED window, there is no way to script non-database objects like Logins, Jobs, Alerts, Extended Events, etc., without right-clicking on each individual object and choosing "Script As".  In the OED, just highlight all the objects you want to script and right-click.  You're done.  If you have 50 users you need to migrate to a new instance, or you want to move all of your maintenance jobs to a new server, that's going to save a lot of time.

I've seen lots of people much smarter than I am go hog-wild writing scripts to answer simple questions like "which table has the most rows".  Or, where is the log file for that table stored in the file system (you can find that out too).  Or, what are the database collations on your server?  I've seen them opening multiple windows and cutting and pasting objects onto a new server.  While they were busy coding and filling in wizards, I hit F7 and got the job done in a few seconds.  Work smarter, not harder.  I hope this helps you in some way… thanks for reading. [...]

Raspberry Pi Zero W - Media Streamer

Mon, 08 May 2017 04:35:23 GMT

Originally posted on:

So I’ve been wanting to update my media streamer for a while.

In our kitchen I have an amp and speakers, and a Raspberry Pi One acting as a media streamer, primarily for use with AirPlay.



It sounds truly amazing. I did the build for around £30.


Here is my build list -


Software -


Pi Zero W - Board only


Hammer on - Header Board - this is genius


DAC Board - Gives the Pi Decent Sound capability


Metal stand-offs - These just look nice



Parts I already had - 

8 Gb Micro SD Card

Power Supply - Used an old phone charger

Micro USB Cable

RCA Cable

Amp + Speakers





Video Editing PC Build - Ryzen 8-Core Processor

Fri, 05 May 2017 13:02:49 GMT

Originally posted on:

Hello everyone! It's an exciting day. The first shipment of parts has arrived for the new AMD Ryzen 7 1700X Processor Video Editing PC Build! This fresh new shiny piece of technology grabbed my attention as soon as it was originally released on 03/02/2017. It's worth checking out if you're considering upgrading. The 1700X clocks in at 3.4GHz with 8 cores and currently delivers a 14717 benchmark score for $365.37! If you're interested in seeing how this stacks up against the competitors, check out PassMark's New Desktop CPU benchmarks this month.

I've always been a huge fan of Intel, but it makes my bank account sad planning an entire PC build around the Intel Core i7-6950X or Intel Core i7-6900K. If you take a close look at the chart below, you can see me in the red. This is an accurate representation of my thought process when choosing between these Intel and AMD processors. Looks outstanding, right?

I'm anxiously awaiting the rest of the parts, which will arrive within the next couple of days. I picked out each one of these for my needs and budget, strictly for video editing and a little bit of gaming. I'm currently operating on a Surface Pro 3, which is amazing for everything but those two hobbies! If you're thinking about doing a build similar to this but are concerned about the price, trimming down the SSD and RAM could help with the finances. Be decisive. If you're not careful, you may end up spending a week researching a single component. In a world full of constant innovation, it's always a good time for an upgrade!

I'm going to throw together a quick build guide with performance results soon for anyone interested. We'll see how well it functions! Hope this helps someone consider a Ryzen upgrade! Big shoutout to Amazon for mastering the logistics of fast shipping!
Good luck! Here's the build:

Component         | Selection                                                                                                          | Price
CPU + Motherboard | AMD YD170XBCAEWOF Ryzen 7 1700X Processor & ASUS PRIME X370-PRO Motherboard Bundle                                 | $515.39
CPU Cooler        | Noctua NH-D15 SE-AM4 Premium-Grade 140mm Dual Tower CPU Cooler for AMD AM4                                         | $89.90
Memory            | G.SKILL Ripjaws V Series 32GB (2 x 16GB) 288-Pin DDR4 SDRAM 2133 (PC4 17000) Z170/X99 Desktop Memory F4-2133C15D-32GVR | $237.00
Storage           | Samsung 850 EVO 1TB 2.5-Inch SATA III Internal SSD (MZ-75E1T0B/AM)                                                 | $349.99
Video Card        | MSI VGA Graphic Cards RX 580 GAMING X 8G                                                                           | $279.99
Case              | NZXT H440 Mid Tower Computer Case, Matt Black/Red w/ Window (CA-H442W-M1)                                          | $114.99
Power Supply      | EVGA SuperNOVA 750 G2, 80+ GOLD 750W, Fully Modular, EVGA ECO Mode, 10 Year Warranty, Includes FREE Power On Self Tester Power Supply 220-G2-0750-XR | $99.99
Total             |                                                                                                                    | $1,687.25

[...]

MyGet Command Line Authentication for Windows NuGet

Thu, 04 May 2017 06:57:42 GMT

Originally posted on:

When using Windows and the NuGet package manager:

nuget.exe sources Add|Update -Name feedName -Source feedUrl -UserName user -Password secret

Adopt a SQL orphan

Thu, 04 May 2017 03:08:45 GMT

Originally posted on:

DBAs have been asked to copy a database from one SQL instance to another.  Even if the instances are identical, the relationship (SID) between the logins and database users is broken.  Microsoft includes sp_change_users_login to help mitigate this problem, but the stored procedure must be run database by database, or even user by user, which can take a significant amount of time.  Consider the time it takes to migrate an entire server, or create a secondary instance.  Why not automate the process?  Instead of taking hours, the following Powershell script will fix the orphans first, and then drop any database users that do not have a corresponding login.  You'll be done in seconds instead of hours… but that will be our little secret.

IMPORT-MODULE -name SQLPS -DisableNameChecking
$Server = 'Myserver'

$ProdServer = new-object ('Microsoft.SqlServer.Management.Smo.Server') $Server
$ProdLogins = $ProdServer.Logins.Name

foreach ($database in $ProdServer.Databases)
{
    $db = $database.Name.ToString()
    $IsSystem = $database.IsSystemObject
    $ProdSchemas = $database.Schemas.Name

    #######################################
    # Auto-Fix Orphaned user with a login #
    #######################################
    foreach ($user in $database.Users | select Name, Login | where {$_.Name -in $ProdLogins -and $_.Login -eq ""})
    {
        $sqlcmd = "USE [" + $db + "] EXEC sp_change_users_login 'Auto_Fix','" + $user.Name + "'"
        invoke-sqlcmd -query $sqlcmd -serverinstance $Server
    }

    #######################################
    # Drop table users that have no login #
    #######################################
    foreach ($user in $database.Users | select Name, Login | where {$_.Name -notin $ProdLogins -and $_.Name -notin $ProdSchemas -and $_.Login -eq "" -and $db -notin ('SSISDB','ReportServer','ReportServerTempDb') -and $IsSystem -eq $false})
    {
        $sqlcmd = "USE [" + $db + "]
        IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'" + $user.Name + "')
        DROP USER [" + $user.Name + "]
        GO"
        invoke-sqlcmd -query $sqlcmd -serverinstance $Server
    }
}

In this example, I've excluded the system databases and the SSRS and SSIS databases.  You can include/exclude any databases you want.  A version of this script has helped me with numerous server migrations, and continues to be a way to keep production instances in sync.  I've also used it to do nightly production refreshes to a development environment without causing problems for the users.  Give it a try, but don't forget to add some logging (I removed mine in the example to simplify the code).  So get going… adopt an orphan! [...]

Deleting old AZURE backups with Powershell

Mon, 01 May 2017 08:26:55 GMT

Originally posted on:


As companies begin migrating to AZURE, DBAs are tempted to save terrestrial (non-AZURE) backup files to "the cloud" to leverage the unlimited amount of space that is available.  No more alarms in the middle of the night due to a lack of disk space.  Ola Hallengren has even added this option to his maintenance solution.  Others have also followed suit.

The trouble is that most backup solutions do not have a way to clean up old blob backup files.  This little script will clean up files older than the "$daysToKeep" parameter.


import-module AzureRM

$daysToKeep = 7
$storageAccount = 'sqlbackups'
$storageAccessKey = '0JqxsxUOb9kJQa2oAqlGFjalB78vieOqySMdValN+fpYfrSu2XtZz8VNaS/iEb5KqdVySsSFYtLKQe+/imDnEw=='
$storageContainer = 'prodbackups'

$context = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageAccessKey
New-AzureStorageContainer -Name $storageContainer -Context $context -Permission Blob -ErrorAction SilentlyContinue

$EGBlobs = Get-AzureStorageBlob -Container $storageContainer -Context $context | sort-object LastModified | select lastmodified, name

foreach($blob in $EGBlobs)
{
    if($blob.LastModified -lt (get-date).AddDays($daysToKeep*-1))
    {
        Remove-AzureStorageBlob -Blob $blob.Name -Container $storageContainer -Context $context
    }
}

You must of course set your own parameter values, but the script is pretty simple and blindly removes every file older than 7 days from the storage container named “prodbackups”.  With a little tweaking, you can filter by extension type or particular file name or pattern. 


I have a version of this script in my maintenance plan, and run it every night.  Although there is an unlimited amount of space available in the cloud, leaving all of your backup files on a storage account without a little cleanup can get very expensive.  All of that storage comes at a price.


Query TFS/VSTS for past build history

Mon, 01 May 2017 01:23:21 GMT

Originally posted on:

I wrote a post recently about how to query TFS/VSTS for past build history and I wanted to share it here as well.

SQL server AlwaysOn Availability Group data latency issue on secondary replicas and synchronous state monitoring

Sun, 30 Apr 2017 15:58:38 GMT

Originally posted on:

This article explains how data synchronization works on SQL Server AlwaysOn Availability Groups, and gives some details about how to use the sys.dm_hadr_database_replica_states table to check replica states.

I borrowed this diagram from this article, which explains the data synchronization process on a SQL Server AlwaysOn Availability Group. The article has full details about the process. The things worth noting here are steps 5 and 6. When a transaction log block is received by a secondary replica, it is cached, and then:

5. The log is hardened to disk. Once the log is hardened, an acknowledgement is sent back to the primary replica.
6. The redo process writes the change to the actual database.

So for a Synchronous-Commit secondary replica, after step 5 the primary replica is acknowledged to complete the transaction, as "no data loss" has been confirmed on the secondary replica. This means that after a transaction is completed, SQL Server only guarantees the update has been written to the secondary replica's transaction log files, not to the actual data file. So there will be some data latency on the secondary replicas even though they are configured as "Synchronous-Commit". This means that after you make some changes on the primary replica, if you try to read them immediately from a secondary replica, you might find your changes are not there yet. This is why, in our project, we need to monitor the data synchronization state on the secondary replicas.
To get the replica states, I use this query:

select
    r.replica_server_name,
    rs.is_primary_replica IsPrimary,
    rs.last_received_lsn,
    rs.last_hardened_lsn,
    rs.last_redone_lsn,
    rs.end_of_log_lsn,
    rs.last_commit_lsn
from sys.availability_replicas r
inner join sys.dm_hadr_database_replica_states rs on r.replica_id = rs.replica_id

The fields ending with "_lsn" are the last Log Sequence Numbers at different stages:

last_received_lsn: the last LSN the secondary replica has received
end_of_log_lsn: the last LSN that has been cached
last_hardened_lsn: the last LSN that has been hardened to disk
last_redone_lsn: the last LSN that has been redone
last_commit_lsn: I'm not sure what exactly this one is. From my tests, most of the time it equals last_redone_lsn; in rare cases it is a little smaller than last_redone_lsn, so I guess it happens a little after redo.

If you run the query on the primary replica, it returns the information for all replicas. If you run it on a secondary replica, it returns the data for that replica only.

Now we need to understand the format of those LSNs. As this article explained, an LSN has the following parts:

- the VLF (Virtual Log File) sequence number, e.g. 0x14 (20 decimal)
- the offset of the log block within the VLF, e.g. 0x61 (measured in units of 512 bytes)
- the slot number, e.g. 1

As the LSN values returned by the query are in decimal, we have to break them into these parts. Then we can compare the LSN values to figure out the state of a replica. For example, we can tell how far the redo process is behind the logs that have been hardened on Replica-2: last_hardened_lsn - last_redone_lsn = 5137616 - 5137608 = 8. Note that we don't need to include the slot number (the last 5 digits) in the calculation. In fact, most of the LSNs in the dm_hadr_database_replica_states table do not carry a real slot number. As this document says: they are not actual log sequence numbers (LSNs); rather, each of these values reflects a log-block ID padded with zeroes.

We can tell this by looking at the last 5 digits of an LSN value: if it always ends with 00001, it is not an actual LSN. As from this art[...]
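Assuming the decimal layout described above (last 5 digits are the slot number, the preceding 10 digits the log-block offset, and the leading digits the VLF sequence number - my reading of the referenced article, not an official format), breaking an LSN into its parts can be sketched as:

```python
def split_lsn(lsn: int):
    """Split a decimal LSN value from sys.dm_hadr_database_replica_states
    into (vlf_sequence, block_offset, slot_number)."""
    slot = lsn % 100_000                        # last 5 digits
    block = (lsn // 100_000) % 10_000_000_000   # next 10 digits
    vlf = lsn // 10**15                         # remaining leading digits
    return vlf, block, slot

# The article's example: VLF 0x14 (20), block offset 0x61 (97), slot 1
print(split_lsn(20000000009700001))  # → (20, 97, 1)

# Redo lag on a replica, comparing block-offset-level values
# (slot digits already dropped, as in the text):
print(5137616 - 5137608)             # → 8
```

This also makes the "padded log-block ID" remark easy to check: a value whose slot part is always 1 is a padded block ID, not a true LSN.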

So you want to go Causal Neo4j in Azure? Sure we can do that

Wed, 26 Apr 2017 08:54:22 GMT

Originally posted on:

As you might have noticed in the Azure marketplace, you can install an HA instance of Neo4j - Awesomeballs! But what if you want a Causal cluster? Hello, manual operation!

Let's start with a clean slate. Typically in Azure you've probably got a dashboard stuffed full of other things, which can be distracting, so let's create a new dashboard. Give it a natty name, save, and you now have an empty dashboard. Onwards!

To create our cluster, we're gonna need 3 (count 'em) 3 machines, the bare minimum for a cluster. So let's fire up one; I'm creating a new Windows Server 2016 Datacenter machine. NB. I could be using Linux, but today I've gone Windows, and I'll probably have a play with Docker on them in a subsequent post… I digress.

At the bottom of the 'new' window, you'll see a 'deployment model' option - choose 'Resource Manager'. Then press 'Create' and start to fill in the basics!

Name: Important to remember what it is. I've optimistically gone with 01, allowing me to expand all the way up to 99 before I rue the day I didn't choose 001.

User name: Important to remember how to login!

Resource group: I'm creating a new resource group; if you have an existing one you want to use, then go for it, but this gives me a good way to ensure all my Neo4j cluster resources are in one place.

Next, we've got to pick our size - I'm going with DS1_V2 (catchy) as it's pretty much the cheapest, and well - I'm all about being cheap. You should choose something appropriate for your needs, obvs.

On to settings… which is the bulk of our workload. I'm creating a new Virtual Network (VNet) and I've set the CIDR to the lowest I'm allowed to on Azure, which gives me 8 internal IP addresses - I only need 3, so… waste.
I'm leaving the public IP as it is, no need to change that, but I am changing the Network Security Group (NSG), as I intend on using the same one for each of my machines, and so having '01' on the end (as is default) offends me. Feel free to rename your diagnostics storage stuff if you want. The choice, as they say - is yours. Once you get the 'ticks' you are good to go. It even adds it to the dashboard… awesomeballs!

Whilst we wait, let's add a couple of things to the dashboard. Well, one thing: the Resource group. So view the resource groups (menu down the side), press the ellipsis on the correct Resource group, and Pin to the Dashboard.

After what seems like a lifetime - you'll have a machine all setup and ready to go - well done you! Now, as it takes a little while for these machines to be provisioned, I would recommend you provision another 2 now. The important bits to remember are:

- Use the existing resource group
- Use the same disk storage
- Use the same virtual network
- Use the same Network Security Group

BTW, if you don't, you're only giving yourself more work, as you'll have to move them all to the right place eventually; may as well do it in one!

Whilst they are doing their thing, let's setup Neo4j on the first machine, so let's connect to it: firstly click on the VM and then the 'connect' button. We need two things on the machine:

- Neo4j Enterprise
- Java

The simplest way I've found (provided your interwebs is up to it) is to Copy the file on your local machine, and Right-Click Paste onto the VM desktop - and yes - I've found it works way better using the mo[...]

MS Dynamics CRM Adapter now available on Enzo Unified

Tue, 25 Apr 2017 09:38:50 GMT

Originally posted on:

I am proud to present our first third party adapter created by Aldo Garcia. With this adapter, you can communicate to Microsoft Dynamics CRM Online directly from SQL Server Management Studio, and from stored procedures/views/functions. You can also programmatically access MS Dynamics CRM records through this adapter using basic HTTP commands and headers, without the need to download/use/install the Dynamics CRM SDK. This provides a simple, light-weight programmatic environment for both SQL and lightweight clients (such as IoT and mobile devices).

Aldo’s adapter allows you to issue simple SQL EXEC or SELECT/INSERT/UPDATE/DELETE commands to manage records in MS Dynamics. For example, this command retrieves all the records found in the account table:

SELECT * FROM DynamicsCRM.RetrieveMultiple@account

And this command does the same thing using an EXEC request:

EXEC DynamicsCRM.RetrieveMultiple 'account'

The SELECT command supports simple WHERE clauses, TOP N, NULL field filters, and ORDER BY operations.
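For illustration, a query combining those options might look like the following (the field names and values here are hypothetical, not taken from the post):

```sql
SELECT TOP 10 name, accountnumber
FROM DynamicsCRM.RetrieveMultiple@account
WHERE name = 'Contoso'
ORDER BY name
```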

To view how Aldo is able to read/write MS Dynamics data through simple SQL commands, visit his blog post here.

And to see how to use the MS Dynamics adapter to load data in bulk, please visit this blog post.


Redirect Standard Error to Output using PowerShell

Tue, 25 Apr 2017 00:02:48 GMT

Originally posted on:

We use LiquiBase for Database change control and Octopus for deployment to downstream environments in our CICD pipeline. Unfortunately, the version of LiquiBase we use writes information messages to standard error. Octopus then interprets this as an error and marks the deployment with warnings when in fact there were no warnings or errors. Newer versions of LiquiBase may have corrected this.

This statement in the update-database function of the liquibase.psm1 file will publish information level messages as errors in the Octopus task log:

..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update

As a work-around, you can call the statement as a separate process and redirect standard error to standard out as follows:

&  cmd /c "..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update 2>&1" | out-default

Now the messages are published to Octopus as standard output and display appropriately.
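The 2>&1 merge itself is not specific to PowerShell or LiquiBase; a minimal POSIX shell sketch of the same idea (emit is a hypothetical stand-in for a tool that logs to standard error) shows how the redirection changes what gets captured:

```shell
# A stand-in for a tool that writes informational messages to stderr
emit() { echo "INFO: update ran" 1>&2; }

# Without the merge, command substitution captures only stdout - nothing:
captured=$(emit 2>/dev/null)
echo "without merge: [$captured]"

# With 2>&1, stderr is folded into stdout and captured normally:
captured=$(emit 2>&1)
echo "with merge: [$captured]"
```

The same principle applies inside the cmd /c string above: cmd performs the redirection before PowerShell ever sees the output stream.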


BizTalk Server best articles

Mon, 24 Apr 2017 04:36:10 GMT

Originally posted on:

To simplify navigation on the BizTalk articles, I've selected only the best articles.

Study Big Things
- BizTalk Internals: Publishers and Subscribers
- Part 1: Zombie, Instance Subscription and Convoys: Details
- Part 2: Suspend shape and Convoy
- Ordered Delivery
- Internals: the Partner Direct Ports and the Orchestration Chains
- Compensation Model

Study Small Things
- Internals: Namespaces
- Mapping Empty or Missing attributes and elements with different combinations of parameters: Required/Optional, Min/MaxOccurs, Default, Fixed
- Mapping, Logical functoids, and Boolean values
- Schema: Odds: Generate Instance
- BizTalk Messaging Model
- Internals: Mapping: Script functoid: Type of the Input and output parameters
- Internals: Enlisted/Unenlisted; Started/Stopped; Enabled/Disabled. Errors and Suspended messages generated by Subscribers
- Accumulating messages in MessageBox, Lifespan of the messages
- xpath: How to work with empty and Null elements in the Orchestration
- Internals: Schema Uniqueness Rule

Exams
- Advanced Questions
- Questions for interview without answers
- Interview questions and principles

Some Architecture
- BizTalk Integration Development Architecture
- Naming Conventions
- Artifact Composition
- Naming Conventions in Examples
- Domain Standards and Integration Architecture
- BizTalk Server and Agile. Can they live together?
- Complex XML schemas. How to simplify?

From Field
- Sample: Context routing and Throttling with orchestration
- BizTalk and RabbitMQ
- Mapping with Xslt
- Custom API: Promoted Properties
- Sample: Error Handling

Diagrams
- Timeline: Platform Support
- Timeline: Development Tools

[...]

Nested Enumerable::Any in C++/CLI (difficult++)

Wed, 19 Apr 2017 03:20:20 GMT

Originally posted on:

I frequently use the following construct in C# (C-Sharp) to cross-reference the contents of two repositories simultaneously with one compound command:

IEnumerable<string> arr_strSources = new string[]{"Alpha", "Bravo", "Charlie", "Delta"};
IEnumerable<string> arr_strPieces = new string[] { "phha", "lie", "zelt" };
bool blnRetVal = arr_strPieces.Any(strPiece => arr_strSources.Any(strSource => strSource.Contains(strPiece)));
System.Diagnostics.Debug.WriteLine("Found = {0}", blnRetVal);

That code searches for the presence of ANY string in arr_strSources that contains ANY substring in arr_strPieces.
If found, it simply returns TRUE.
In C++, this task is not syntactically as easy.
The biggest problem is the SCOPE and type of the anonymous variable.
Where the C# framework dynamically handles the conversion, it must be explicit in C++.

Here is the example:

To handle the scope of the second IEnumerable::String^, I created a class that (when instantiated) has a method that returns a Func that tests the contents.
The same thing happens with the individual Strings (another class used to handle the scoping).

This is nowhere as "cute" as the C# code, but it satisfies my curiosity.



How to use Bower to install packages

Fri, 31 Mar 2017 21:10:51 GMT

Originally posted on:

In VS 2017, you have the choice of installing UI components using Bower. If you have previously worked on an MVC project in Visual Studio, you know we used NuGet to install everything from jQuery to Newtonsoft.Json.

To use Bower, right-click the project and look for Manage Bower Packages; this option is listed next to Manage NuGet Packages.

Everything works just like the NuGet window. For .NET library packages you still need NuGet.


So, just like NuGet, is there a way I can simply type a name and install the package?

The good thing with Bower is that it creates a bower.json file in your project's root directory, which you can edit directly. For example, I need to install moment in my .NET Core project; check how easy it is.

Open bower.json and start typing moment under dependencies. After the colon, IntelliSense shows you all the available versions. Doesn't that sound much easier?


You will see one version number starting with ~ and another with ^. If you want to know what those prefixes are and how they work, please follow this Stack Overflow question.
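In short: ~1.2.3 allows patch updates only (>=1.2.3 and <1.3.0), while ^1.2.3 also allows minor updates (>=1.2.3 and <2.0.0). A bower.json using both might look like this (package versions are illustrative):

```json
{
  "name": "my-app",
  "dependencies": {
    "moment": "^2.17.1",
    "jquery": "~3.1.1"
  }
}
```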


Thanks for reading my post. Happy coding!


SQL Database is in use and cannot restore

Fri, 31 Mar 2017 08:17:39 GMT

Originally posted on:

USE master

-- This rolls back all uncommitted transactions in the db and drops other connections
-- (reconstructed: the original post's comment refers to this statement)
ALTER DATABASE [mydatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE [mydatabase] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydatabase.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10

.NET Core as a Windows service

Fri, 31 Mar 2017 12:46:48 GMT

Originally posted on:

Posting this to remind myself that nssm just doesn't suck.
If you want to run a .NET Core app as a Windows service, do this.

Publish your app to a folder on Windows.
Make sure dotnet 1.1 or later is installed on the box.
Get nssm.
Run nssm install MyService.
In the first box put dotnet.
In the second put the folder where you just published your app.
In the third put the name of your app's .dll (don't forget the .dll): MyService.dll.
Then it's done. That simple, folks.
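If you'd rather script it than click through the GUI, nssm's command line can do the same thing; a sketch (the service name and path are illustrative):

```
nssm install MyService dotnet
nssm set MyService AppDirectory C:\apps\MyService
nssm set MyService AppParameters MyService.dll
nssm start MyService
```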

Zoho: Is it possible to get GeoLocation of user?

Sat, 25 Mar 2017 11:20:00 GMT

Originally posted on:

Problem: Many times we want the latitude and longitude of the user who is currently filling in a Zoho form. Zoho doesn't provide any object that exposes this information, and it doesn't support custom JavaScript or any third-party tool to obtain it. In other words: is it possible to get geolocation in Zoho Creator forms?

Solution: Here is how you can achieve this:

a) You need to create two fields, LAT & LON, on the Zoho form where you want this information to be stored.

b) You then need to create one PHP page hosted on an external server. As Zoho doesn't provide any space to host custom pages, you have to host this page on an external domain.

c) On this PHP page, you use the HTML5 Geolocation object to get the user's current location.

d) In my current example I pass the newly generated record ID to this PHP page via the query string while calling it from the Zoho form. I have divided my Zoho form into two steps: in the first step I collect all the user's details, and after the submit button on step 1 is clicked, I call this external PHP page.

e) After getting the user's latitude & longitude, I call Zoho's form update API and update the record (for the ID received on this PHP page via the query string) with LAT & LON. In this way you can get the latitude & longitude of a user in Zoho.
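A sketch of the browser side of steps (c)-(e); the `update.php` endpoint and `id` query-string parameter are illustrative and not part of Zoho's API:

```html
<script>
// Record ID is assumed to arrive on the query string, e.g. geo.php?id=12345
var recordId = new URLSearchParams(window.location.search).get("id");

navigator.geolocation.getCurrentPosition(
    function (pos) {
        // Hand lat/lon back to the server side, which calls Zoho's update API
        fetch("update.php?id=" + encodeURIComponent(recordId) +
              "&lat=" + pos.coords.latitude +
              "&lon=" + pos.coords.longitude);
    },
    function (err) {
        console.log("Geolocation failed: " + err.message);
    });
</script>
```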

Please note that SSL must be installed on the domain where you host this PHP page; without SSL you will never get the latitude & longitude information (browsers only allow geolocation on secure origins).

You can check the screenshots of the solution above. This solution has been tested on web, Android & iPhone, and works everywhere without issue.




Integrating ASP.NET Core With Webforms Using IIS URL Rewrite

Sat, 25 Mar 2017 06:16:02 GMT

Originally posted on:

I'm currently updating a legacy ASP.NET WebForms application to ASP.NET Core. Because big rewrites (almost) never work, it's a case of migrating sections of the site one at a time, having WebForms pass specific requests to ASP.NET Core, with no change to the end user's experience. How, you ask? With a couple of IIS tools and a sprinkle of web.config entries.

ASP.NET Core can be served via IIS, with IIS acting as a reverse proxy. Requests come into IIS, the ASPNetCoreModule routes them to Kestrel, and returns the results. In my scenario, the ASP.NET Core application is only ever accessible via WebForms, so it takes a little bit of setting up. Here's how.

Setting up IIS

AspNetCoreModule

Firstly, you need the AspNetCoreModule. Luckily, you probably already have it - Visual Studio installs it into IIS for you. To check, open IIS Manager, and at the server level open Modules in the IIS section - you should see it listed there. If not, you can install it via the ASP.NET Core Server Hosting Bundle - here's a direct link to download the installer: download!

Application Request Routing

Next, you need the Application Request Routing module to route requests rewritten by the URL Rewrite module (try saying that ten times fast). You can install this via IIS Manager - click Get New Web Platform Components in the right-hand column to open the Web Platform Installer, then search for ARR, and look for version 3.0.

Once that's installed, open Application Request Routing in the server-level IIS section (you may need to close and re-open IIS to see the icon), click Server Proxy Settings, check Enable proxy, and click Apply.

URL Rewrite

Finally, you need the URL Rewrite module. This you can also install via the Web Platform Installer - just search for rewrite, and look for version 2.0.

Setting up ASP.NET Core

Firstly, you need IIS integration in your application. This is super easy, and you probably already have it - it's simply a call to UseIISIntegration() on the WebHostBuilder in your Program.cs. If you're missing it, UseIISIntegration is an extension method from the Microsoft.AspNetCore.Server.IISIntegration NuGet package.

That one line is all you need in your ASP.NET Core application - now you just publish the project. You can use a File System publish, go via WebDeploy, or whatever you prefer. Finally, set up an IIS website pointing to your ASP.NET Core publish directory. Because this website will be accessed via WebForms only, bind it to a non-public port number - I'll use 1234 for our example.

Setting up WebForms

Finally, you need to tell WebForms to send the appropriate requests to ASP.NET Core. You can do this with rules in your web.config which configure the URL Rewrite module. For example, say you've migrated your news pages to an ASP.NET Core NewsController, the following rules tell IIS what to do with requests for the ASP.NET Core 'news' section:
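A typical URL Rewrite rule for this setup looks like the sketch below; the rule name is illustrative, and port 1234 matches the example binding above:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Proxy /news requests to the ASP.NET Core site bound to port 1234 -->
      <rule name="NewsToAspNetCore" stopProcessing="true">
        <match url="^news(/.*)?$" />
        <action type="Rewrite" url="http://localhost:1234/news{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

With ARR's proxy enabled, a Rewrite action to an absolute URL like this is forwarded rather than redirected, so the end user never sees the internal port.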

AppDynamics Disk Space Alert Extension: Windows support

Fri, 24 Mar 2017 05:41:48 GMT

Originally posted on:

I've added a Powershell port of this shell script to support Windows servers, and Derrek has merged the PR back into the trunk:

The port is pretty faithful to the original shell script; it uses the same configuration file and reports and alerts in the same way. So all you need to do is swap in the PowerShell rather than the Bash script to monitor your Windows-based estate. Hope you find it useful!

Custom Role Claim Based Authentication on SharePoint 2013

Tue, 21 Mar 2017 07:08:58 GMT

Originally posted on:

We had a requirement in a project to authenticate users in a site collection based on Country claim presented by a User.

The PowerShell sample below adds a US country claim value to the Visitors group of a site collection, allowing any user from the US to be authorized to view the site.

$web = Get-SPWeb "https://SpSiteCollectionUrl"
$claim = New-SPClaimsPrincipal -TrustedIdentityTokenIssuer "users" -ClaimValue "US" -ClaimType "
$group = $web.Groups["GDE Home Visitors"]
$group.AddUser($claim.ToEncodedString(), "", "US", "")

Note: for this solution to work, we have used an ADFS solution which pulls our claim values from the enterprise directory and sends a SAML claim to SharePoint.


Powershell to monitor server resources and make an HTML report for a set number of iterations and intervals

Tue, 21 Mar 2017 07:00:18 GMT

Originally posted on:

While building a SharePoint farm for a project, I did load testing with VSTS. Since my SharePoint farm was in a different domain than that of my load test controllers, I had to monitor a few performance counters to make a well-informed decision while tuning the farm capacity. The PowerShell below was developed by me to generate an HTML report for all the different servers at specified intervals. The output HTML file will be generated on the D drive.

#Array which contains the different server names
$ServerList = @('server01', 'server02', 'server03', 'Server04', 'Server05')
#Array which represents the role of the server
$ServerRole = @('WFE1', 'WFE2', 'Central Admin', 'WorkFlow1', 'WorkFlow2')
#Number of times this powershell should be executed
$runcount = 15
#Report header (the HTML markup was stripped by the blog engine; reconstructed here from the surviving text)
$Outputreport = "<html><body><h2>Server Health Report</h2><table border='1'><tr><th>Server Name</th><th>Server Role</th><th>Avrg. CPU Utilization</th><th>Memory Utilization</th><th>C Drive Utilization</th><th>Iteration</th></tr>"

for($i=1; $i -le $runcount; $i++)
{
    $ArrayCount = 0
    ForEach($computername in $ServerList)
    {
        $role = $ServerRole[$ArrayCount]
        $ArrayCount = $ArrayCount + 1
        Write-Host $i $computername $role
        $AVGProc = Get-WmiObject -computername $computername win32_processor |
            Measure-Object -property LoadPercentage -Average | Select Average
        $OS = gwmi -Class win32_operatingsystem -computername $computername |
            Select-Object @{Name = "MemoryUsage"; Expression = {"{0:N2}" -f ((($_.TotalVisibleMemorySize - $_.FreePhysicalMemory)*100)/ $_.TotalVisibleMemorySize) }}
        $vol = Get-WmiObject -Class win32_Volume -ComputerName $computername -Filter "DriveLetter = 'C:'" |
            Select-Object @{Name = "C PercentFree"; Expression = {"{0:N2}" -f (($_.FreeSpace / $_.Capacity)*100) } }
        $result += [PSCustomObject] @{
            ServerName = "$computername"
            CPULoad = "$($AVGProc.Average)%"
            MemLoad = "$($OS.MemoryUsage)%"
            CDrive = "$($vol.'C PercentFree')%"
        }
        Foreach($Entry in $result)
        {
            if(($Entry.CpuLoad) -or ($Entry.memload) -ge "80")
            {
[...]

Distributed TensorFlow Pipeline using Google Cloud Machine Learning Engine

Fri, 17 Mar 2017 08:01:33 GMT

Originally posted on:

TensorFlow is an open source software library for Machine Learning across a range of tasks, developed by Google. It is currently used for both research and production in Google products, and released under the Apache 2.0 open source license. TensorFlow can run on multiple CPUs and GPUs (via CUDA) and is available on Linux, macOS, Android and iOS. TensorFlow computations are expressed as stateful dataflow graphs, whereby neural networks perform on multidimensional data arrays referred to as "tensors". Google has developed the Tensor Processing Unit (TPU), a custom ASIC built specifically for machine learning and tailored for accelerating TensorFlow. TensorFlow provides a Python API, as well as C++, Java and Go APIs.

Google Cloud Machine Learning Engine is a managed service that enables you to easily build TensorFlow machine learning models that work on any type of data, of any size. The service works with Cloud Dataflow (Apache Beam) for feature processing, Cloud Storage for data storage and Cloud Datalab for model creation. HyperTune performs cross-validation, automatically tuning model hyperparameters. As a managed service, it automates all resource provisioning and monitoring, allowing devs to focus on model development and prediction without worrying about the infrastructure. It provides a scalable service to build very large models using managed distributed training infrastructure that supports CPUs and GPUs. It accelerates model development by training across many nodes, or running multiple experiments in parallel. It is possible to create and analyze models using Jupyter notebook development, with integration to Cloud Datalab. Models trained using GCML-Engine can be downloaded for local execution or mobile integration.

Why not Spark?
- Deep Learning has eclipsed Machine Learning in accuracy
- Google Cloud Machine Learning is based upon TensorFlow, not Spark
- The Machine Learning industry is trending in this direction - it's advisable to follow the conventional wisdom
- TensorFlow is to Spark what Spark is to Hadoop

Why Python?
- Dominant language in the field of Machine Learning / Data Science
- It is the Google Cloud Machine Learning / TensorFlow core language
- Ease of use (very!)
- Large pool of developers
- Solid ecosystem of Big Data scientific & visualization tools - Pandas, Scipy, Scikit-Learn, XgBoost, etc.

Why Deep Learning?
The development of Deep Learning was motivated in part by the failure of traditional Machine Learning algorithms to generalize well, because generalization becomes exponentially more difficult when working with high-dimensional data - the mechanisms used to achieve generalization in traditional machine learning are insufficient to learn complicated functions in high-dimensional spaces. Such spaces also often impose high computational costs. Deep Learning was designed to overcome these obstacles.

The curse of dimensionality: as the number of relevant dimensions of the data increases, the number of computations may grow exponentially. It is also a statistical challenge, because the number of possible configurations of x is much larger than th[...]

SQL Server - Kill any live connections to the DB

Fri, 17 Mar 2017 08:47:43 GMT

Originally posted on:

Often you need to restore a DB or take it offline, only to find out the process aborts due to active sessions connected to the DB. Here is a quick script that will kill all active sessions to the DB.

USE [master];

DECLARE @kill varchar(8000) = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'  
FROM sys.dm_exec_sessions
WHERE database_id = db_id('myDataBase');

EXEC (@kill);  -- the SELECT above only builds the KILL commands; they must be executed


Keras QuickRef

Fri, 17 Mar 2017 07:26:26 GMT

Originally posted on:

Keras QuickRef

Keras is a high-level neural networks API, written in Python, that runs on top of the Deep Learning framework TensorFlow. In fact, tf.keras will be integrated directly into TensorFlow 1.2!

Here are my API notes:

Model API
- summary()
- get_config()
- from_config(config)
- get_weights()
- set_weights(weights)
- to_json()
- to_yaml()
- save_weights(filepath)
- load_weights(filepath, by_name)
- layers

Model Sequential / Functional APIs
- add(layer)
- compile(optimizer, loss, metrics, sample_weight_mode)
- fit(x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight)
- evaluate(x, y, batch_size, verbose, sample_weight)
- predict(x, batch_size, verbose)
- predict_classes(x, batch_size, verbose)
- predict_proba(x, batch_size, verbose)
- train_on_batch(x, y, class_weight, sample_weight)
- test_on_batch(x, y, class_weight)
- predict_on_batch(x)
- fit_generator(generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe)
- evaluate_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
- predict_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
- get_layer(name, index)

Layers

Core

Dense - vanilla fully connected NN layer
  IO: (nb_samples, input_dim) --> (nb_samples, output_dim)
  params: output_dim/shape, init, activation, weights, W_regularizer, b_regularizer, activity_regularizer, W_constraint, b_constraint, bias, input_dim/shape

Activation - applies an activation function to an output
  IO: TN --> TN
  params: activation

Dropout - randomly set fraction p of input units to 0 at each update during training time --> reduce overfitting
  IO: TN --> TN
  params: p

SpatialDropout2D/3D - dropout of entire 2D/3D feature maps to counter pixel / voxel proximity correlation
  IO: (samples, rows, cols, [stacks,] channels) --> (samples, rows, cols, [stacks,] channels)
  params: p

Flatten - flattens the input to 1D
  IO: (nb_samples, D1, D2, D3) --> (nb_samples, D1xD2xD3)

Reshape - reshapes an output to a different factorization, e.g. (None, 3, 4) --> (None, 12) or (None, 2, 6)
  params: target_shape

Permute - permutes dimensions of input; output_shape is the same as the input shape, but with the dimensions re-ordered, e.g. (None, A, B) --> (None, B, A)
  params: dims

RepeatVector - repeats the input n times
  IO: (nb_samples, features) --> (nb_samples, n, features)
  params: n

Merge - merge a list of tensors into a single tensor
  IO: [TN] --> TN
  params: layers, mode, concat_axis, dot_axes, output_shape, output_mask, node_indices, tensor_indices, name

Lambda - TensorFlow expression
  IO: flexible
  params: function, output_shape, arguments

ActivityRegularization - regularize the cost function
  IO: TN --> TN
  params: l1, l2

Masking - identify timesteps in D1 to be skipped
  IO: TN --> TN
  params: mask_value

Highway - LSTM for FFN?
  IO: (nb_samples, input_dim) --> (nb_samples, output_dim)
  params: same as Dense + transform_bias

MaxoutDense - takes the element-wise maximum of the previous layer, to learn a convex, piecewise linear activation function over the inputs
  IO: (nb_samples, input_dim) --> (nb_samples, output_dim)
  params: same as Dense + nb_feature

TimeDistributed - apply a Dense layer for each D1 time_dimension
  IO: (nb_sample, time_dimension, input_dim) --> (nb_sample, time_dimension, output_dim)
  params: Dense

Convo[...]
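As a quick illustration of how the Sequential API methods above fit together, here is a minimal sketch (Keras 1.x-era argument names such as nb_epoch, matching the notes; X_train, y_train, X_test, y_test are assumed to be your own NumPy arrays):

```python
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

model = Sequential()
model.add(Dense(64, input_dim=20))   # vanilla fully connected layer
model.add(Activation('relu'))
model.add(Dropout(0.5))              # reduce overfitting
model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=32, nb_epoch=10, validation_split=0.1)
loss, acc = model.evaluate(X_test, y_test, batch_size=32)
```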


Wed, 15 Mar 2017 14:38:22 GMT

Originally posted on:

Compete for a chance to win $2,000!

That’s right. We are offering a $2,000 first prize for the most innovative use of our technology. Within just a few hours you will be surprised how much you can accomplish with our .NET Developer SDK. In addition, your work could be advertised nationally and promoted at customer locations worldwide as part of our solutions and earn significant additional revenue if you so desire.





- First Place Prize: $2,000
- Second Place Prize: $750
- Third Place Prize: $250

All submissions should be made by midnight, May 1st 2017 (US Eastern Time)

Contact us at for more information

Additional Information

How to write your first adapter:

Download the Enzo Unified SDK at this location:

Download the help file for the Enzo Unified SDK here:

Download the installation doc for the Enzo Unified SDK here:


Using Measure-Object to sum file sizes in Powershell

Wed, 15 Mar 2017 09:56:58 GMT

Originally posted on:

Be aware that this line:

gci -r "C:\temp\test" | measure-object -property length -sum

will throw an error if it encounters a folder whose only contents is another (empty) folder; this is because measure-object tries in this case to measure an object which does not have a "length" property defined:

PS C:\Projects> Get-ChildItem -Recurse "C:\temp\test" | measure-object -property length -sum
measure-object : The property "length" cannot be found in the input for any objects.
At line:1 char:42
+ ... dItem -Recurse "C:\temp\test"  | measure-object -property length -sum
+                                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Measure-Object], PSArgumentException
    + FullyQualifiedErrorId : GenericMeasurePropertyNotFound,Microsoft.PowerShell.Commands.MeasureObjectCommand

There are various ways to achieve the correct effect, based on the idea that objects returned by gci that are directories will have the PSIsContainer property set to $true. For example:

$root = "c:\Oracle"

$total_files = 0
$total_size = [int]0

[System.Collections.Stack]$stack = @()
$stack.Push($root)

while ($stack.Length -gt 0)
{
    $folder = $stack.Pop()

    gci $folder |% `
    {
        $item = $_

        if ($item.PSIsContainer)
        {
            $stack.Push($item.FullName)
        }
        else
        {
            $total_size += $_.Length
            $total_files++
        }
    }
}

Write-Host "Total size: $([Math]::Round($total_size / 1Mb, 2)) Mb over $total_files files"

[...]
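Worth noting: on PowerShell 3.0 and later, Get-ChildItem's -File switch returns only files, which sidesteps the empty-directory problem without a manual walk:

```powershell
# -File (PowerShell 3.0+) excludes containers, so every object has a Length
Get-ChildItem -Recurse -File "C:\temp\test" | Measure-Object -Property Length -Sum
```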

Writing your first adapter for Enzo Unified

Tue, 14 Mar 2017 01:57:45 GMT

Originally posted on:

This blog post shows you how to write your first adapter using the Enzo Unified SDK, so that you can extend the capabilities of SQL Server and/or create a light-weight HTTP proxy for applications, IoT devices or mobile applications. This post assumes you have a high-level understanding of Enzo Unified and that you have installed and configured the Enzo Unified SDK (download details below).

In this post, you will learn how to create an adapter to get the current UTC time using the official US NIST time servers. You will write this logic in a new adapter called TimeServers and expose a method called GetUSTime(). Once coded, you will be able to access this method through SQL Server as a stored procedure, or using an HTTP call from any client. Because Enzo Unified exposes these methods as HTTP calls, any client running on any operating system capable of making HTTP calls will be able to run this command.

Pre-Requisites

To successfully build this adapter, you will need the following:
- Visual Studio 2012 or higher
- SQL Server 2014 Express Edition or higher
- Enzo Unified SDK (see download links below)

Download the Enzo Unified SDK here:
Download the help file for the Enzo Unified SDK here:
Download the installation doc for the Enzo Unified SDK here:

Create an Empty Adapter Project

After downloading and installing the SDK, and registering the Enzo Unified project template per installation instructions, you will be able to create a new Adapter project.
Start Visual Studio 2012, and select New Project. Select Enzo Unified DataAdapter (Annotated) under Templates -> Visual C# -> Enzo Unified. Enter the adapter name: TimeServers. Verify the location of your adapter (it should be under the namespace you specified during installation) and click OK.

You will need to change two settings:
- Open TimeServers.cs; on line 5, replace the namespace with your namespace (MY is the namespace I entered)
- Change the project configuration from Any CPU to x64

Make sure the project compiles successfully. Press F5 to start the adapter (at this point, the adapter does nothing yet). The Enzo Unified Development Host will start and a user interface will be displayed allowing you to test your adapter. Type this command, and press F5 to execute it:  exec

Stop the project. We are ready to add the method to this adapter. Note that while you can use the Development Host for simple testing, it is recommended to use SQL Server Management Studio for a better experience. We will review how to connect to Enzo Unified in more detail later.

Create the GetUSTime Method

Let's replace the entire content of the TimeServers.cs file; we will review the implementation next.

using BSC.Enzo.Unified;
using BSC.Enzo.Unified.Logging;
using System;
using System.Collections.Generic;
using System.IO;
[...]

How To Suppress DHCP Popups In NETUI

Sun, 12 Mar 2017 05:37:47 GMT

Originally posted on:

Working on a CE device that presented a kiosk application, I wanted to find a way to suppress the following Windows CE Networking DHCP popups:

"DHCP was unable to obtain an IP address. If the netcard is removable, then you can remove/reinsert it to have DHCP make another attempt to obtain an IP address for it. Otherwise, you can statically assign an address."

"Your IP address lease has expired. DHCP was unable to renew your lease."

"A DHCP Server could not be contacted. Using cached lease information."

These popups were obtrusive and the end user of the device had no idea how to resolve the issues that these popups were bringing to light. The device depends upon a reliable network infrastructure and these popups should really never ... pop up ... but when they did they did nothing more than confuse the user and get in the way of the kiosk application / user interface the user was trying to use. Typically the popups were quickly ignored by the user and closed, but in some cases a call or ticket was created with their IT department for resolution, wasting everyone's time.

The network attached device is smart enough to buffer the readings it was responsible for taking, simply sending them when the networking issue was resolved naturally by being carried to where there was network coverage. So the popups only served as an annoyance to the user, not to mention the IT department, who would have to deal with any users bringing up the fact that this strange popup popped up, making them explain to the user what they should do about it.

This post alluded to the solution but did not follow through with the specifics, so I am doing that here. If you want to suppress these or any of the other of the multitude of popups the NETUI supports, start by following the instructions in the blog post by Michel Verhagen of GuruCE, Cloning public code: An example.

Add the following new function, SuppressNetMsgBox, in between the two functions you'll find at:

netui.c(139): GetNetString (UINT uID, LPTSTR lpBuffer, int nBufferMax)

and

netui.c(165): BOOL WINAPIV NetMsgBox (HWND hParent, DWORD dwFlags, TCHAR *szStr)

BOOL SuppressNetMsgBox(DWORD dwId, TCHAR *szStr)
{
#define MAXCMP 76
    TCHAR szBuffer[MAXCMP] = {0};
    DWORD dwMaxCount = sizeof(szBuffer)/sizeof(szBuffer[0]);

    if (!GetNetString(dwId, szBuffer, (int)dwMaxCount))
        return FALSE;

    if (!_tcsncmp(szBuffer, szStr, (size_t)(dwMaxCount-1)))
    {
        NKDbgPrintfW(_T("Networking Pop-up: \"%s\"... suppressed.\r\n"), szBuffer);
        return TRUE;
    }

    return FALSE;
}

Add calls to SuppressNetMsgBox inside the NetMsgBox function, where shown, as shown:

BOOL WINAPIV NetMsgBox (HWND hParent, DWORD dwFlags, TCHAR *szStr)
{
    TCHAR szTitle[200];
    DWORD dwStyle, dwId;
    int iRet;
    HCURSOR hCur;

    // verify the incoming window handle and fail if not null and invalid
    if (hParent && !IsWindow(hParent)) {
        SetLastError(ERROR_INVALID_WINDOW_HANDLE);
        return FALSE;
    }

    if ( (SuppressNetMsgBox(NETUI_GETNETSTR_NO_IPADDR, szStr)) // "DHCP was unable to obtain an IP address. If the n[...]
