MS Dynamics CRM Adapter now available on Enzo Unified

Tue, 25 Apr 2017 09:38:50 GMT

Originally posted on:

I am proud to present our first third-party adapter, created by Aldo Garcia. With this adapter, you can communicate with Microsoft Dynamics CRM Online directly from SQL Server Management Studio, and from stored procedures/views/functions. You can also programmatically access MS Dynamics CRM records through this adapter using basic HTTP commands and headers, without the need to download/install the Dynamics CRM SDK. This provides a simple, light-weight programmatic environment for both SQL clients and lightweight clients (such as IoT and mobile devices).

Aldo’s adapter allows you to issue simple SQL EXEC or SELECT/INSERT/UPDATE/DELETE commands to manage records in MS Dynamics. For example, this command retrieves all the records found in the account table:

SELECT * FROM DynamicsCRM.RetrieveMultiple@account

And this command does the same thing using an EXEC request:

EXEC DynamicsCRM.RetrieveMultiple 'account'

The SELECT command supports simple WHERE clauses, TOP N, NULL field filters, and ORDER BY operations.
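For instance, a query combining these options might look like the following sketch (name and accountnumber are standard Dynamics CRM account attributes, used here for illustration; they are not taken from the adapter's documentation):

```sql
-- Top 10 accounts that have an account number, ordered by name
SELECT TOP 10 name, accountnumber
FROM DynamicsCRM.RetrieveMultiple@account
WHERE accountnumber IS NOT NULL
ORDER BY name
```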

To view how Aldo is able to read/write MS Dynamics data through simple SQL commands, visit his blog post here.

And to see how to use the MS Dynamics adapter to load data in bulk, please visit this blog post.


Redirect Standard Error to Output using PowerShell

Tue, 25 Apr 2017 00:02:48 GMT

Originally posted on:

We use LiquiBase for Database change control and Octopus for deployment to downstream environments in our CICD pipeline. Unfortunately, the version of LiquiBase we use writes information messages to standard error. Octopus then interprets this as an error and marks the deployment with warnings when in fact there were no warnings or errors. Newer versions of LiquiBase may have corrected this.

This statement in the update-database function of the liquibase.psm1 file will publish information level messages as errors in the Octopus task log:

..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update

As a work-around, you can call the statement as a separate process and redirect standard error to standard out as follows:

&  cmd /c "..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/ --changeLogFile=$changeLogFile $url --logLevel=$logLevel update 2>&1" | out-default

Now the messages are published to Octopus as standard output and display appropriately.


BizTalk Server best articles

Mon, 24 Apr 2017 04:36:10 GMT

Originally posted on:

To simplify navigation on the BizTalk articles, I've selected only the best articles.

Study Big Things
- BizTalk Internals: Publishers and Subscribers
- BizTalk: Part 1: Zombie, Instance Subscription and Convoys: Details
- BizTalk: Part 2: Suspend shape and Convoy
- BizTalk: Ordered Delivery
- BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains
- BizTalk: Compensation Model

Study Small Things
- BizTalk: Internals: Namespaces
- BizTalk: Mapping Empty or Missing attributes and elements with different combinations of parameters: Required/Optional, Min/MaxOccurs, Default, Fixed
- BizTalk: Mapping, Logical functoids, and Boolean values
- BizTalk: Schema: Odds: Generate Instance
- BizTalk Messaging Model
- BizTalk: Internals: Mapping: Script functoid: Type of the Input and output parameters
- BizTalk: Internals: Enlisted/Unenlisted; Started/Stopped; Enabled/Disabled. Errors and Suspended messages generated by Subscribers
- BizTalk: Accumulating messages in MessageBox, Lifespan of the messages
- BizTalk: xpath: How to work with empty and Null elements in the Orchestration
- BizTalk: Internals: Schema Uniqueness Rule

Exams
- BizTalk: Advanced Questions
- BizTalk: Questions for interview without answers
- BizTalk: Interview questions and principles

Some Architecture
- BizTalk Integration Development Architecture
- Naming Conventions
- Artifact Composition
- BizTalk: Naming Conventions in Examples
- Domain Standards and Integration Architecture
- BizTalk Server and Agile. Can they live together?
- Complex XML schemas. How to simplify?

From Field
- BizTalk: Sample: Context routing and Throttling with orchestration
- BizTalk and RabbitMQ
- BizTalk: mapping with Xslt
- BizTalk: Custom API: Promoted Properties
- BizTalk: Sample: Error Handling [...]

Nested Enumerable::Any in C++/CLI (difficult++)

Wed, 19 Apr 2017 03:20:20 GMT

Originally posted on:

I frequently use the following construct in C# (C-Sharp) to cross-reference the contents of two repositories simultaneously with one compound command:

IEnumerable<string> arr_strSources = new string[]{"Alpha", "Bravo", "Charlie", "Delta"};
IEnumerable<string> arr_strPieces = new string[] { "phha", "lie", "zelt" };
bool blnRetVal = arr_strPieces.Any(strPiece => arr_strSources.Any(strSource => strSource.Contains(strPiece)));
System.Diagnostics.Debug.WriteLine("Found = {0}", blnRetVal);

That code searches for the presence of ANY string in arr_strSources that contains ANY substring in arr_strPieces.
If found, it simply returns TRUE.
In C++, this task is not syntactically as easy.
The biggest problem is the SCOPE and type of the anonymous variable.
Where the C# framework dynamically handles the conversion, it must be explicit in C++.

Here is the example:

To handle the scope of the second IEnumerable::String^, I created a class that (when instantiated) has a method that returns a Func that tests the contents.
The same thing happens with the individual Strings (another class used to handle the scoping).

This is nowhere as "cute" as the C# code, but it satisfies my curiosity.



How to use Bower to install packages

Fri, 31 Mar 2017 21:10:51 GMT

Originally posted on:

In VS 2017, you have the choice of installing UI components using Bower. If you have previously worked on an MVC project in Visual Studio, you know we used NuGet to install everything from jQuery to Newtonsoft.Json.

To use Bower, right-click on the project and look for Manage Bower Packages; this option is listed next to Manage NuGet Packages.

The window works just like the NuGet window. For library packages you still need NuGet.


So, is there a way, as with NuGet, to simply type a package name and install it?


The good thing about Bower is that it creates a bower.json file in your project's root directory, which you can edit directly. For example, suppose I need to install moment in my .NET Core project; watch how easy it is.
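A minimal bower.json along these lines might look like the following (the name and version range are illustrative):

```json
{
  "name": "myapp",
  "private": true,
  "dependencies": {
    "moment": "~2.17.1"
  }
}
```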


Open bower.json and start typing moment under dependencies. When you type past the colon, IntelliSense shows you all the available versions. Doesn't that sound much easier?


You will see version numbers starting with ~ and with ^. To learn what those prefixes mean and how they work, see this Stack Overflow question.


Thanks for reading my post. Happy coding!


SQL Database is in use and cannot restore

Fri, 31 Mar 2017 08:17:39 GMT

Originally posted on:

USE master

-- Force the database into single-user mode; this rolls back all
-- uncommitted transactions in the db and disconnects other sessions.
ALTER DATABASE mydatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE mydatabase FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydatabase.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10

-- Return the database to normal multi-user access.
ALTER DATABASE mydatabase SET MULTI_USER

.NET Core as a Windows service

Fri, 31 Mar 2017 12:46:48 GMT

Originally posted on:

Posting this to remind myself that nssm just doesn't suck. If you want to run a .NET Core app as a Windows service, do this:

1. Publish your app to a folder on Windows.
2. Make sure dotnet 1.1 or later is installed on the box.
3. Get nssm.
4. Run nssm install MyService.
5. In the first box, put dotnet.
6. In the second, put the folder where you just published your app.
7. In the third, put the name of your app's .dll (don't forget the .dll): MyService.dll.

Then it's done. That simple, folks.
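For reference, the same setup can also be scripted with nssm's command line instead of its GUI (the service name and folder below are illustrative):

```bat
nssm install MyService dotnet
nssm set MyService AppDirectory C:\apps\MyService
nssm set MyService AppParameters MyService.dll
nssm start MyService
```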

Zoho: Is it possible to get GeoLocation of user?

Sat, 25 Mar 2017 11:20:00 GMT

Originally posted on:

Problem: We often want the latitude and longitude of the user who is currently filling in a Zoho form. Zoho doesn't provide any object to get this information, and it doesn't support JavaScript or any third-party tool for it either. In other words: is it possible to get geolocation in Zoho Creator forms?

Solution: Here is how you can achieve this:

a) You need to create two fields LAT & LON on the Zoho form where you want this information to be stored.

b) You then need to create a PHP page hosted on an external server. Since Zoho doesn't provide any space to host custom pages, this page has to live on an external domain.

c) On this PHP page, use the HTML5 Geolocation object to get the user's current location.

d) In my example, I pass the newly generated record ID to this PHP page via the query string when calling it from the Zoho form. I have divided my Zoho form into two steps: in the first step the user fills in all their details, and on clicking the submit button of step 1, the external PHP page is called.

e) After getting the user's latitude and longitude, I call Zoho's form update API and update the record (identified by the ID received via the query string) with the Lat and Lon values. In this way you can get the latitude and longitude of a user in Zoho.

Please note that SSL must be enabled on the domain hosting this PHP page; without SSL you will never get the Lat and Lon information.

You can check the screenshots of the above solution. This solution is tested on web, Android, and iPhone, and works everywhere without any issue.
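As a sketch of step (e), the hosted page might assemble the update payload like this (the buildUpdatePayload helper and the LAT/LON parameter names are illustrative; the actual request goes to Zoho's form update API):

```javascript
// Build the query string the page would send back to Zoho's form update
// API (parameter names match the LAT/LON fields created in step (a)).
function buildUpdatePayload(recordId, lat, lon) {
  return new URLSearchParams({
    ID: recordId,
    LAT: lat.toFixed(6),
    LON: lon.toFixed(6),
  }).toString();
}

// In the browser, the coordinates come from the HTML5 Geolocation object:
// navigator.geolocation.getCurrentPosition(pos =>
//   send(buildUpdatePayload(recordId, pos.coords.latitude, pos.coords.longitude)));

console.log(buildUpdatePayload("12345", 40.712776, -74.005974));
```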




Integrating ASP.NET Core With Webforms Using IIS URL Rewrite

Sat, 25 Mar 2017 06:16:02 GMT

Originally posted on:

I'm currently updating a legacy ASP.NET WebForms application to ASP.NET Core. Because big rewrites (almost) never work, it's a case of migrating sections of the site one at a time, having WebForms pass specific requests to ASP.NET Core, with no change to the end user's experience. How, you ask? With a couple of IIS tools and a sprinkle of web.config entries.

ASP.NET Core can be served via IIS, with IIS acting as a reverse proxy. Requests come into IIS, the ASPNetCoreModule routes them to Kestrel, and the results are returned. In my scenario, the ASP.NET Core application is only ever accessible via WebForms, so it takes a little bit of setting up. Here's how.

Setting up IIS

AspNetCoreModule

Firstly, you need the AspNetCoreModule. Luckily, you probably already have it - Visual Studio installs it into IIS for you. To check, open IIS Manager, and at the server level open Modules in the IIS section - you should see it listed there. If not, you can install it via the ASP.NET Core Server Hosting Bundle.

Application Request Routing

Next, you need the Application Request Routing module to route requests rewritten by the URL Rewrite module (try saying that ten times fast). You can install this via IIS Manager - click Get New Web Platform Components in the right-hand column to open the Web Platform Installer, then search for ARR, and look for version 3.0. Once that's installed, open Application Request Routing in the server-level IIS section (you may need to close and re-open IIS Manager to see the icon), click Server Proxy Settings, check Enable proxy, and click Apply.

URL Rewrite

Finally, you need the URL Rewrite module. This you can also install via the Web Platform Installer - just search for rewrite, and look for version 2.0.

Setting up ASP.NET Core

Firstly, you need IIS integration in your application. This is super easy, and you probably already have it - it's simply a call to UseIISIntegration() on the WebHostBuilder in your Program.cs. If you're missing it, UseIISIntegration is an extension method from the Microsoft.AspNetCore.Server.IISIntegration NuGet package. That one line is all you need in your ASP.NET Core application - now you just publish the project. You can use a File System publish, go via WebDeploy, or whatever you prefer. Finally, set up an IIS website pointing to your ASP.NET Core publish directory. Because this website will be accessed via WebForms only, bind it to a non-public port number - I'll use 1234 for our example.

Setting up WebForms

Finally, you need to tell WebForms to send the appropriate requests to ASP.NET Core. You can do this with rules in your web.config which configure the URL Rewrite module. For example, say you've migrated your news pages to an ASP.NET Core NewsController, the following rules tell IIS what to do with requests for the ASP.NET Core 'news' section. Both rules have the same pattern: they both capture requests with URLs beginning with particular strings (content/ and news, respectively), and rewrite them to requests on port 1234 on the local machine. The {R:1} reference in the rewritten URLs is replaced with the captured group [...]
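Reconstructed from the description above, the web.config rewrite rules would look something like this (rule names and exact patterns are assumptions, not the post's original markup):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Content" stopProcessing="true">
        <match url="^content/(.*)" />
        <action type="Rewrite" url="http://localhost:1234/content/{R:1}" />
      </rule>
      <rule name="News" stopProcessing="true">
        <match url="^news(.*)" />
        <action type="Rewrite" url="http://localhost:1234/news{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```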

AppDynamics Disk Space Alert Extension: Windows support

Fri, 24 Mar 2017 05:41:48 GMT

Originally posted on:

I've added a Powershell port of this shell script to support Windows servers, and Derrek has merged the PR back into the trunk:

The port is pretty faithful to the original shell script; it uses the same configuration file, and reports and alerts in the same way. So all you need to do is swap in the Powershell rather than the Bash script to monitor your Windows-based estate. Hope you find it useful!

Custom Role Claim Based Authentication on SharePoint 2013

Tue, 21 Mar 2017 07:08:58 GMT

Originally posted on:

We had a requirement in a project to authenticate users in a site collection based on Country claim presented by a User.

The PowerShell sample below adds a US country-claim value to the Visitors group of a site collection, so that any user presenting a US claim is authorized to view the site.

$web = Get-SPWeb "https://SpSiteCollectionUrl"
$claim = New-SPClaimsPrincipal -TrustedIdentityTokenIssuer "users" -ClaimValue "US" -ClaimType "
$group = $web.Groups["GDE Home Visitors"]
$group.AddUser($claim.ToEncodedString(), "", "US", "")

Note: for this solution to work, we used an ADFS solution which pulls claim values from the enterprise directory and sends SAML claims to SharePoint.


Powershell to monitor server resources and produce an HTML report for a set number of iterations at a given interval

Tue, 21 Mar 2017 07:00:18 GMT

Originally posted on:

While building a SharePoint farm for a project, I did load testing with VSTS. Since my SharePoint farm was in a different domain than that of my load test controllers, I had to monitor a few performance counters to take a well-informed decision while tuning the farm capacity. I developed the PowerShell below to generate an HTML report for all the different servers at specified intervals. The output HTML file is generated on the D drive; its report columns are Server Name, Server Role, Avrg. CPU Utilization, Memory Utilization, C Drive Utilization, and Iteration.

#Array which contains the different server names
$ServerList = @('server01', 'server02', 'server03', 'Server04', 'Server05')
#Array which represents the role of each server
$ServerRole = @('WFE1', 'WFE2', 'Central Admin', 'WorkFlow1', 'WorkFlow2')
#Number of times this powershell should be executed
$runcount = 15
$Outputreport = "Server Health Report"

for ($i = 1; $i -le $runcount; $i++)
{
    $ArrayCount = 0
    ForEach ($computername in $ServerList)
    {
        $role = $ServerRole[$ArrayCount]
        $ArrayCount = $ArrayCount + 1
        Write-Host $i $computername $role

        $AVGProc = Get-WmiObject -ComputerName $computername win32_processor |
            Measure-Object -Property LoadPercentage -Average | Select Average
        $OS = gwmi -Class win32_operatingsystem -ComputerName $computername |
            Select-Object @{Name = "MemoryUsage"; Expression = {"{0:N2}" -f ((($_.TotalVisibleMemorySize - $_.FreePhysicalMemory)*100) / $_.TotalVisibleMemorySize)}}
        $vol = Get-WmiObject -Class win32_Volume -ComputerName $computername -Filter "DriveLetter = 'C:'" |
            Select-Object @{Name = "C PercentFree"; Expression = {"{0:N2}" -f (($_.FreeSpace / $_.Capacity)*100)}}

        $result += [PSCustomObject] @{
            ServerName = "$computername"
            CPULoad = "$($AVGProc.Average)%"
            MemLoad = "$($OS.MemoryUsage)%"
            CDrive = "$($vol.'C PercentFree')%"
        }

        Foreach ($Entry in $result)
        {
            if (($Entry.CpuLoad) -or ($Entry.memload) -ge "80")
            {
                $Outputreport += ""
            }
            else
            {
                $Outputreport += ""
            }
            $Outputreport += "$($Entry.Servername) $role $($Entry.CPULoad) $($Entry.MemLoad) $($Entry.Cdrive)"
        }
    }
}

Distributed TensorFlow Pipeline using Google Cloud Machine Learning Engine

Fri, 17 Mar 2017 08:01:33 GMT

Originally posted on:

TensorFlow is an open source software library for Machine Learning across a range of tasks, developed by Google. It is currently used for both research and production in Google products, and is released under the Apache 2.0 open source license. TensorFlow can run on multiple CPUs and GPUs (via CUDA) and is available on Linux, macOS, Android and iOS. TensorFlow computations are expressed as stateful dataflow graphs, whereby neural networks perform on multidimensional data arrays referred to as "tensors". Google has developed the Tensor Processing Unit (TPU), a custom ASIC built specifically for machine learning and tailored for accelerating TensorFlow. TensorFlow provides a Python API, as well as C++, Java and Go APIs.

Google Cloud Machine Learning Engine is a managed service that enables you to easily build TensorFlow machine learning models that work on any type of data, of any size. The service works with Cloud Dataflow (Apache Beam) for feature processing, Cloud Storage for data storage and Cloud Datalab for model creation. HyperTune performs cross-validation, automatically tuning model hyperparameters. As a managed service, it automates all resource provisioning and monitoring, allowing devs to focus on model development and prediction without worrying about the infrastructure. It provides a scalable service to build very large models using a managed distributed training infrastructure that supports CPUs and GPUs. It accelerates model development by training across many nodes, or running multiple experiments in parallel. It is possible to create and analyze models using Jupyter notebook development, with integration to Cloud Datalab. Models trained using GCML-Engine can be downloaded for local execution or mobile integration.

Why not Spark?
- Deep Learning has eclipsed Machine Learning in accuracy.
- Google Cloud Machine Learning is based upon TensorFlow, not Spark.
- The Machine Learning industry is trending in this direction; it's advisable to follow the conventional wisdom.
- TensorFlow is to Spark what Spark is to Hadoop.

Why Python?
- Dominant language in the field of Machine Learning / Data Science.
- It is the Google Cloud Machine Learning / TensorFlow core language.
- Ease of use (very!).
- Large pool of developers.
- Solid ecosystem of Big Data scientific and visualization tools: Pandas, Scipy, Scikit-Learn, XgBoost, etc.

Why Deep Learning?

The development of Deep Learning was motivated in part by the failure of traditional Machine Learning algorithms to generalize well: generalization becomes exponentially more difficult when working with high-dimensional data, and the mechanisms used to achieve generalization in traditional machine learning are insufficient to learn complicated functions in high-dimensional spaces. Such spaces also often impose high computational costs. Deep Learning was designed to overcome these obstacles.

The curse of dimensionality: as the number of relevant dimensions of the data increases, the number of computations may grow exponentially. It is also a statistical challenge, because the number of possible configurations of x is much larger than the number of training examples; in high-dimensional spaces, most configurations will have no training example associated with them.

Local Constancy and Smoothness Regularization: ML algorithms need to be guided by prior beliefs about what kind of function they should learn. Priors are firstly expressed by choosing the algorithm class, but are also explicitly incorporated as probability distributions over model parameters - directl[...]

SQL Server - Kill any live connections to the DB

Fri, 17 Mar 2017 08:47:43 GMT

Originally posted on:

Often you need to restore a DB or take it offline, only to find that the process aborts due to active sessions connected to the DB. Here is a quick script that will kill all active sessions to the DB.

USE [master];

DECLARE @kill varchar(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'
FROM sys.dm_exec_sessions
WHERE database_id = db_id('myDataBase');

-- Run the generated KILL statements
EXEC (@kill);


Keras QuickRef

Fri, 17 Mar 2017 07:26:26 GMT

Originally posted on:

Keras QuickRef

Keras is a high-level neural networks API, written in Python, that runs on top of the Deep Learning framework TensorFlow. In fact, tf.keras will be integrated directly into TensorFlow 1.2! Here are my API notes:

Model API

summary(), get_config(), from_config(config), get_weights(), set_weights(weights), to_json(), to_yaml(), save_weights(filepath), load_weights(filepath, by_name), layers

Sequential / Functional Model APIs

add(layer)
compile(optimizer, loss, metrics, sample_weight_mode)
fit(x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight)
evaluate(x, y, batch_size, verbose, sample_weight)
predict(x, batch_size, verbose)
predict_classes(x, batch_size, verbose)
predict_proba(x, batch_size, verbose)
train_on_batch(x, y, class_weight, sample_weight)
test_on_batch(x, y, class_weight)
predict_on_batch(x)
fit_generator(generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe)
evaluate_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
predict_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
get_layer(name, index)

Layers

Core (Layer | description | IO | params)

Dense | vanilla fully connected NN layer | (nb_samples, input_dim) --> (nb_samples, output_dim) | output_dim/shape, init, activation, weights, W_regularizer, b_regularizer, activity_regularizer, W_constraint, b_constraint, bias, input_dim/shape
Activation | applies an activation function to an output | TN --> TN | activation
Dropout | randomly sets fraction p of input units to 0 at each update during training time --> reduces overfitting | TN --> TN | p
SpatialDropout2D/3D | dropout of entire 2D/3D feature maps to counter pixel/voxel proximity correlation | (samples, rows, cols, [stacks,] channels) --> (samples, rows, cols, [stacks,] channels) | p
Flatten | flattens the input to 1D | (nb_samples, D1, D2, D3) --> (nb_samples, D1xD2xD3) | -
Reshape | reshapes an output to a different factorization | eg (None, 3, 4) --> (None, 12) or (None, 2, 6) | target_shape
Permute | permutes dimensions of input; output shape is the same as the input shape, but with the dimensions re-ordered | eg (None, A, B) --> (None, B, A) | dims
RepeatVector | repeats the input n times | (nb_samples, features) --> (nb_samples, n, features) | n
Merge | merges a list of tensors into a single tensor | [TN] --> TN | layers, mode, concat_axis, dot_axes, output_shape, output_mask, node_indices, tensor_indices, name
Lambda | TensorFlow expression | flexible | function, output_shape, arguments
ActivityRegularization | regularizes the cost function | TN --> TN | l1, l2
Masking | identifies timesteps in D1 to be skipped | TN --> TN | mask_value
Highway | LSTM for FFN? | (nb_samples, input_dim) --> (nb_samples, output_dim) | same as Dense + transform_bias
MaxoutDense | takes the element-wise maximum of the previous layer, to learn a convex, piecewise linear activation function over the inputs | (nb_samples, input_dim) --> (nb_samples, output_dim) | same as Dense + nb_feature
TimeDistributed | applies a Dense layer for each D1 time_dimension | (nb_sample, time_dimension, input_dim) --> (nb_sample, time_dimension, output_dim) | Dense

Convolutional (Layer | description | IO | params)

Convolution1D | filters neighborhoods of 1D inputs | (samples, steps, input_dim) --> (samples, new_steps, nb_filter) | nb_filter, filter_length, init, activation, weights, border_mode, subsample_length, W_regularizer, b_regularizer, activity_regularizer, W_constraint, b_constraint, bias, input_dim, input_length
Convolution2D | filters neighborhoods of 2D inputs | (samples, rows, cols, channels) --> (samples, new_r[...]
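A minimal runnable sketch of the Sequential workflow summarized in these notes (this sketch uses argument names that work on current Keras installs; parameter names such as nb_epoch and nb_worker in the notes are from the Keras 1.x era):

```python
# Build and compile a tiny two-layer classifier with the Sequential API.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu'))     # hidden layer; input shape is inferred on first call
model.add(Dense(10, activation='softmax'))  # 10-way classifier output
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
```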


Wed, 15 Mar 2017 14:38:22 GMT

Originally posted on:

Compete for a chance to win $2,000!

That’s right. We are offering a $2,000 first prize for the most innovative use of our technology. Within just a few hours you will be surprised how much you can accomplish with our .NET Developer SDK. In addition, your work could be advertised nationally and promoted at customer locations worldwide as part of our solutions and earn significant additional revenue if you so desire.





- First Place Prize: $2,000
- Second Place Prize: $750
- Third Place Prize: $250

All submissions must be received by midnight on May 1st, 2017 (US Eastern Time).

Contact us at for more information

Additional Information

How to write your first adapter:

Download the Enzo Unified SDK at this location:

Download the help file for the Enzo Unified SDK here:

Download the installation doc for the Enzo Unified SDK here:


Using Measure-Object to sum file sizes in Powershell

Wed, 15 Mar 2017 09:56:58 GMT

Originally posted on:

Be aware that this line:

gci -r "C:\temp\test" | measure-object -property length -sum

will throw an error if it encounters a folder whose only contents is another (empty) folder; this is because measure-object tries in this case to measure an object which does not have a "length" property defined:

PS C:\Projects> Get-ChildItem -Recurse "C:\temp\test" | measure-object -property length -sum
measure-object : The property "length" cannot be found in the input for any objects.
At line:1 char:42
+ ... dItem -Recurse "C:\temp\test"  | measure-object -property length -sum
+                                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Measure-Object], PSArgumentException
    + FullyQualifiedErrorId : GenericMeasurePropertyNotFound,Microsoft.PowerShell.Commands.MeasureObjectCommand

There are various ways to achieve the correct effect, based on the idea that objects returned by gci that are directories will have the PSIsContainer property set to $true. For example:

$root = "c:\Oracle"

$total_files = 0
$total_size = [int]0

[System.Collections.Stack]$stack = @()
$stack.Push($root)

while ($stack.Length -gt 0)
{
    $folder = $stack.Pop()

    gci $folder |% `
    {
        $item = $_

        if ($item.PSIsContainer)
        {
            $stack.Push($item.FullName)
        }
        else
        {
            $total_size += $_.Length
            $total_files ++
        }
    }
}

Write-Host "Total size: $([Math]::Round($total_size / 1Mb, 2)) Mb over $total_files files" [...]
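As an aside (not from the original post): on PowerShell 3.0 and later, Get-ChildItem's -File switch excludes directories up front, so Measure-Object only ever sees objects that have a Length property:

```powershell
# -File filters out containers, avoiding the missing-property error entirely.
$m = Get-ChildItem -Recurse -File "C:\temp\test" | Measure-Object -Property Length -Sum
Write-Host ("Total size: {0} Mb over {1} files" -f [Math]::Round($m.Sum / 1Mb, 2), $m.Count)
```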

Writing your first adapter for Enzo Unified

Tue, 14 Mar 2017 01:57:45 GMT

Originally posted on:

This blog post shows you how to write your first adapter using the Enzo Unified SDK, so that you can extend the capabilities of SQL Server and/or create a light-weight HTTP proxy for applications, IoT devices or mobile applications. This post assumes you have a high-level understanding of Enzo Unified, and that you have installed and configured the Enzo Unified SDK (download details below).

In this post, you will learn how to create an adapter that gets the current UTC time from the official US NIST time servers. You will write this logic in a new adapter called TimeServers and expose a method called GetUSTime(). Once coded, you will be able to access this method through SQL Server as a stored procedure, or using an HTTP call from any client. Because Enzo Unified exposes these methods as HTTP calls, any client running on any operating system capable of making HTTP calls will be able to run this command.

Pre-Requisites

To successfully build this adapter, you will need the following:

- Visual Studio 2012 or higher
- SQL Server 2014 Express Edition or higher
- Enzo Unified SDK (see download links below)

Download the Enzo Unified SDK here:
Download the help file for the Enzo Unified SDK here:
Download the installation doc for the Enzo Unified SDK here:

Create an Empty Adapter Project

After downloading and installing the SDK, and registering the Enzo Unified project template per the installation instructions, you will be able to create a new Adapter project.

- Start Visual Studio 2012, and select New Project
- Select Enzo Unified DataAdapter (Annotated) under Templates -> Visual C# -> Enzo Unified
- Enter the adapter name: TimeServers
- Verify the location of your adapter (it should be under the namespace you specified during installation) and click OK

You will need to change two settings:

- Open TimeServers.cs; on line 5, replace the namespace with your namespace (MY is the namespace I entered)
- Change the project configuration from Any CPU to x64

Make sure the project compiles successfully, then press F5 to start the adapter (at this point, the adapter does nothing yet). The Enzo Unified Development Host will start, and a user interface will be displayed allowing you to test your adapter. Type this command, and press F5 to execute it: exec

Stop the project. We are ready to add the method to this adapter. Note that while you can use the Development Host for simple testing, it is recommended to use SQL Server Management Studio for a better experience. We will review how to connect to Enzo Unified in more detail later.

Create the GetUSTime Method

Let's replace the entire content of the TimeServers.cs file; we will review the implementation next.

using BSC.Enzo.Unified;
using BSC.Enzo.Unified.Logging;
using System;
using System.Collections.Generic;
using System.IO;

namespace MY
{
    public class TimeServers : DataAdapter
    {
        // this is the main call to register your functionality with the Enzo instance
        public override void RegisterAdapter()
        {
            #region Descriptive details
            Title = "NIST Time Server";            // a summary of the a[...]

How To Suppress DHCP Popups In NETUI

Sun, 12 Mar 2017 05:37:47 GMT

Originally posted on:

Working on a CE device that presented a kiosk application, I wanted to find a way to suppress the following Windows CE Networking DHCP popups:

"DHCP was unable to obtain an IP address. If the netcard is removable, then you can remove/reinsert it to have DHCP make another attempt to obtain an IP address for it. Otherwise, you can statically assign an address."
"Your IP address lease has expired. DHCP was unable to renew your lease."
"A DHCP Server could not be contacted. Using cached lease information."

These popups were obtrusive, and the end user of the device had no idea how to resolve the issues that they were bringing to light. The device depends upon a reliable network infrastructure and these popups should really never ... pop up ... but when they did, they did nothing more than confuse the user and get in the way of the kiosk application / user interface the user was trying to use. Typically the popups were quickly ignored by the user and closed, but in some cases a call or ticket was created with their IT department for resolution, wasting everyone's time. The network-attached device is smart enough to buffer the readings it is responsible for taking, simply sending them when the networking issue is resolved naturally by being carried to where there is network coverage, so the popups only served as an annoyance to the user, not to mention the IT department who would have to explain to the user what to do about them.

This post alluded to the solution but did not follow through with the specifics, so I am doing that here. If you want to suppress these, or any of the multitude of other popups NETUI supports, start by following the instructions in the blog post by Michel Verhagen of GuruCE, "Cloning public code: An example."

Add the following new function, SuppressNetMsgBox, between the two functions you'll find at netui.c(139): GetNetString(UINT uID, LPTSTR lpBuffer, int nBufferMax) and netui.c(165): BOOL WINAPIV NetMsgBox(HWND hParent, DWORD dwFlags, TCHAR *szStr):

BOOL SuppressNetMsgBox(DWORD dwId, TCHAR *szStr)
{
#define MAXCMP 76
    TCHAR szBuffer[MAXCMP] = {0};
    DWORD dwMaxCount = sizeof(szBuffer)/sizeof(szBuffer[0]);

    if (!GetNetString(dwId, szBuffer, (int)dwMaxCount))
        return FALSE;

    if (!_tcsncmp(szBuffer, szStr, (size_t)(dwMaxCount-1)))
    {
        NKDbgPrintfW(_T("Networking Pop-up: \"%s\"... suppressed.\r\n"), szBuffer);
        return TRUE;
    }

    return FALSE;
}

Add calls to SuppressNetMsgBox inside the NetMsgBox function, where shown, as shown:

BOOL WINAPIV NetMsgBox (HWND hParent, DWORD dwFlags, TCHAR *szStr)
{
    TCHAR szTitle[200];
    DWORD dwStyle, dwId;
    int iRet;
    HCURSOR hCur;

    // verify the incoming window handle and fail if not null and invalid
    if (hParent && !IsWindow(hParent)) {
        SetLastError(ERROR_INVALID_WINDOW_HANDLE);
        return FALSE;
    }

    if ( (SuppressNetMsgBox(NETUI_GETNETSTR_NO_IPADDR, szStr)) // "DHCP was unable to obtain an IP address. If the netcard is removable, then you can remove/reinsert it to have DHCP make another attempt to obtain an IP address for it. Otherwise, you can statically assign an address."
      || (SuppressNetMsgBox(NETUI_GETNETSTR_LEASE_EXPIRED, szStr)) // "Your IP address lease has expired. DHCP was unable to renew your lease."
      || (SuppressNetMsgBox(NETUI_GETNETSTR_CACHED_LEASE, szStr)) // "A DHCP Server could not be contacted. Using cached lease informati[...]

Counting pages in Confluence

Fri, 10 Mar 2017 08:05:12 GMT

Originally posted on:

We're currently migrating our Confluence installation, and I was looking around for resources to provide some basic metrics, e.g. the total number of (non-private) pages we are currently hosting. It appears that the simplest way to get this information is to connect directly to the database and run a query:

# Get count of all pages in all global spaces (each page is counted as a single version)
select count(*)
from SPACES as A
inner join CONTENT as B
  on B.SPACEID = A.SPACEID  -- join each page to its space
where A.SPACETYPE='global'
and B.PREVVER is null
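To see the shape of the query on toy data, here is a self-contained sqlite3 sketch; sqlite stands in for the real Confluence database, the rows are invented, and the SPACEID join key is an assumption about the schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal stand-ins for the Confluence SPACES and CONTENT tables.
cur.execute("CREATE TABLE SPACES (SPACEID INTEGER, SPACETYPE TEXT)")
cur.execute("CREATE TABLE CONTENT (CONTENTID INTEGER, SPACEID INTEGER, PREVVER INTEGER)")

cur.execute("INSERT INTO SPACES VALUES (1, 'global'), (2, 'personal')")
# Space 1 holds two current pages plus one old version (PREVVER set);
# space 2 is personal and should not be counted.
cur.executemany("INSERT INTO CONTENT VALUES (?, ?, ?)",
                [(10, 1, None), (11, 1, None), (12, 1, 10), (20, 2, None)])

cur.execute("""
    SELECT count(*)
    FROM SPACES AS A
    INNER JOIN CONTENT AS B ON B.SPACEID = A.SPACEID
    WHERE A.SPACETYPE = 'global'
      AND B.PREVVER IS NULL
""")
count = cur.fetchone()[0]
print(count)  # current pages in global spaces only
```

Old versions of a page carry a pointer to their predecessor in PREVVER, which is why the NULL filter counts each page only once.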


SQL Server: How do I split a comma delimited column into multiple columns or rows?

Thu, 09 Mar 2017 15:32:50 GMT

Originally posted on:

-- converting column with commas to multiple columns --

declare @col1 varchar(500)
set @col1 = 'I,Hate,Broccoli'

DECLARE @Tmp TABLE ( Id int, Element VARCHAR(20))
INSERT @Tmp SELECT 1,@col1

SELECT PARSENAME(REPLACE(Element,',','.'),2) Name,
       PARSENAME(REPLACE(Element,',','.'),1) Surname
FROM @Tmp

-- converting column with commas to multiple rows --

declare @col1 varchar(500)
set @col1 = 'I,am,really,a,smart person, one of the smartest'

DECLARE @Tmp TABLE ( Id int, Element VARCHAR(200))
INSERT @Tmp SELECT 1,@col1

SELECT A.[Id],
    Split.a.value('.', 'VARCHAR(100)') AS String
  FROM (SELECT [Id],
      CAST ('<M>' + REPLACE(Element, ',', '</M><M>') + '</M>' AS XML) AS String
    FROM @Tmp) AS A CROSS APPLY String.nodes ('/M') AS Split(a);
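To make the intended results concrete, here is the same pair of transformations sketched in Python (illustrative only; the variable names are mine, not from the post):

```python
# Columns: a fixed three-part value split into named fields,
# mirroring the REPLACE + PARSENAME trick (PARSENAME numbers parts
# from the right, so part 2 is the middle element, part 1 the last).
col1 = 'I,Hate,Broccoli'
parts = col1.split(',')
name, surname = parts[1], parts[2]
print(name, surname)  # -> Hate Broccoli

# Rows: a comma-delimited value turned into one (Id, String) row
# per element, mirroring the XML + CROSS APPLY trick.
col2 = 'I,am,really,a,smart person, one of the smartest'
rows = [(1, element) for element in col2.split(',')]
for row in rows:
    print(row)
```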


MonoGame – v3.6 now available

Tue, 07 Mar 2017 12:38:10 GMT

Originally posted on:

MonoGame 3.6 now available!

Pardon the interruption of the regular MonoGame blog series, but this is important.  There’s a new version of MonoGame now available!!

Grab MonoGame 3.6 here:

There are a bunch of new features, and a small mountain of bugfixes and enhancements. 

Read the entire changelog here:


Handling Exceptions when Calling a Web Service in BizTalk

Mon, 06 Mar 2017 04:36:38 GMT

Originally posted on:

Many of us have faced the issue of calling a web service from BizTalk where, intermittently, either the service is not available or there is some problem connecting to it. The service can return a response error code other than HTTP 200, i.e. HTTP 400 or HTTP 500. BizTalk doesn't like that: the send port treats it as a warning, writes the warning to the event viewer, and retries. But if the problem and error persist, the error is eventually thrown from the send port and received in the orchestration. You can set an exception handler block to catch the exception, i.e. System.Exception or System.EntryPointNotFoundException, and handle it gracefully. But the send port instance will get suspended, and it will not go away unless manually terminated. Since the exception was thrown from the send port and it is probably a valid response, there is no point in resuming it either; if you try, the orchestration which called the service has already handled the exception and exited.

This scenario needs to be handled by a separate orchestration, which we can call "SendPortExceptionHandler". This orchestration has only one receive shape, connected to a receive port, and it receives messages of type raw XML. A filter needs to be created like the one below:

ErrorReport.FailureCode Exists
And
ErrorReport.MessageType == schema_namespace#root_node

You can add multiple filters, so that all exceptions at send ports are handled by a single orchestration.

-----------------------------------------------------------------------------------------------------------------------------

If we get an exception, say System.EntryPointNotFoundException, and we want to handle it manually (we might get this error if the endpoint is not available or the network is down), we might want to suspend the orchestration and retry manually. To do this, add a loop, add a scope in that loop, and catch the specific exception in that scope's exception handler. Please see below.

In the first expression shape we initialize the values for the retry:

// Set retry to true
ServiceRetry = true;
ServiceRetryCount = 0;

In the internal scope (Mule ESB Scope) I am catching System.EntryPointNotFoundException. You can add another exception handler block to throw other errors. In this block I set the values as below:

// Set retry to true
ServiceRetry = true;
ServiceRetryCount = ServiceRetryCount + 1;

In the decide shape I check the retry count: if ServiceRetryCount <= 3, the flow is suspended; else the exception is thrown. This creates a finite number of retries, and the orchestration exits gracefully. In the outer scope all other exceptions are caught and thrown back to the calling orchestration.

This way, we can handle service endpoint related and web exceptions in BizTalk. [...]
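The retry logic described above (catch the connection exception, count attempts, give up after three) is the classic bounded-retry pattern. A minimal Python sketch of the same idea, where EndpointNotFound and call_service are hypothetical stand-ins for the BizTalk exception and the send port:

```python
class EndpointNotFound(Exception):
    """Stand-in for System.EntryPointNotFoundException."""

def call_service(attempt, fail_times):
    # Hypothetical service call: fails the first `fail_times` attempts.
    if attempt <= fail_times:
        raise EndpointNotFound("endpoint unavailable")
    return "HTTP 200"

def invoke_with_retry(fail_times, max_retries=3):
    # Mirrors the orchestration loop: retry counter plus a decide shape.
    retry_count = 0
    while True:
        try:
            return call_service(retry_count + 1, fail_times)
        except EndpointNotFound:
            retry_count += 1
            if retry_count > max_retries:
                # Outer scope: rethrow to the calling orchestration.
                raise

print(invoke_with_retry(fail_times=2))  # succeeds on the third attempt
```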

Capture HTTPS traffic from iOS devices using Charles

Fri, 03 Mar 2017 18:02:58 GMT

Originally posted on:

This article will show you how to set up Charles and your iOS device so that you can capture HTTPS traffic from iOS devices using Charles.

Step 0: Download and install Charles.

Step 1: Find the IP address of your Mac:

ifconfig | grep "inet "

Step 2: Find the port number Charles listens on. It should be 8888 if you haven't changed the default value. You can find the number from Settings --> Proxy Settings.

Step 3: Set up the proxy on the iOS device. On the iOS device, go to Settings --> Wi-Fi and tap the little i icon next to the Wi-Fi network. Change HTTP PROXY to Manual, and type in the IP address and port number of your Mac.

Step 4: Go back to Charles and set filters, so that Charles only records the connections you are interested in. You don't have to do this step, but if you skip it Charles will be flooded with records you are not interested in, as it records everything. Just include the URLs you want to record. Now, if you trigger HTTPS connections from your iOS app, you will find they are recorded by Charles, but the messages are encrypted; you can't find any useful information in them.

Step 5: Enable SSL proxying. Right-click the URL and check "Enable SSL Proxying". You still won't be able to get any useful information at this stage, as iOS does not trust the certificate Charles provides.

Step 6: Install Charles' certificate. Open Safari on your iOS device and browse to the certificate download page; your iOS device will ask you to install the certificate. Just do it.

That is it. Now you should be able to see all the details of the connections. [...]

Execute AsyncTask like jquery’s $.ajax in Kotlin on Android

Fri, 03 Mar 2017 16:16:13 GMT

Originally posted on:

Last time I showed how to execute an AsyncTask in jQuery style. Now I will show how to do it in Kotlin. Kotlin has much better support for functional programming: functions/lambdas are first-class members of the language, and a parameter of a method can simply be typed as a function rather than having to be an interface. So I don't have to create interfaces for Func and Action.

This is how MyAsyncTask looks in Kotlin; note the types of task and completionHandler are () -> T and ((T) -> Unit)?.

class MyAsyncTask<T>(private val task: () -> T) : AsyncTask<Void, Void, T>() {
   private var completionHandler: ((T) -> Unit)? = null
   private var errorHandler: ((Exception) -> Unit)? = null
   private var exception: Exception? = null

   fun setCompletionHandler(action: (T) -> Unit) {
       this.completionHandler = action
   }

   fun setErrorHandler(action: (Exception) -> Unit) {
       this.errorHandler = action
   }

   override fun doInBackground(vararg voids: Void): T? {
       try {
           return task.invoke()
       } catch (ex: Exception) {
           exception = ex
       }
       return null
   }

   override fun onPostExecute(result: T) {
       if (exception != null && errorHandler != null) {
           errorHandler!!.invoke(exception!!)
       } else if (completionHandler != null) {
           completionHandler!!.invoke(result)
       }
   }
}

And here is Async:

class Async<T> private constructor() {
   private var task: MyAsyncTask<T>? = null

   fun whenComplete(action: (T) -> Unit): Async<T> {
       task!!.setCompletionHandler(action)
       return this
   }

   fun onError(action: (Exception) -> Unit): Async<T> {
       task!!.setErrorHandler(action)
       return this
   }

   fun execute() {
       try {
           task!!.execute()
       } catch (ex: Exception) {
           ex.printStackTrace()
       }
   }

   companion object {
       fun <T> run(task: () -> T): Async<T> {
           val runner = Async<T>()
           runner.task = MyAsyncTask(task)
           return runner
       }
   }
}

Here is an example of how it is used:

override fun onCreate(savedInstanceState: Bundle?) {
   super.onCreate(savedInstanceState)
   setContentView(R.layout.activity_main)

   Async.run {
       val api = MyAPI()
       api.signInUser("user_name", "password")
   }.whenComplete {
       //it.displayName
   }.onError {
       //it.message
   }.execute()
} [...]

Free Licenses for MVPs

Thu, 02 Mar 2017 06:43:16 GMT

Originally posted on:

I was interested to see a tweet from Liquid Technologies promoting free licenses of its XML products for Microsoft Most Valuable Professionals (MVPs).

So this got me wondering what else is available for free, and guess what, lots…

You can request a free Liquid XML Studio 2017 license here:



Execute AsyncTask like jquery’s $.ajax in Java on Android

Thu, 02 Mar 2017 04:06:43 GMT

Originally posted on:

On Android, if you want to make an async call, AsyncTask is the way to go. Basically you will have to implement something like this:

private class LongOperation extends AsyncTask<String, Void, String> {
   @Override
   protected String doInBackground(String... params) {
       //execute a long running operation
   }

   @Override
   protected void onPostExecute(String result) {
       //back to UI thread, can update UI
   }
}

Then you can call new LongOperation().execute("");

I don't like having to write so much boilerplate code just to invoke a method asynchronously, and I hate those mystical type parameters on AsyncTask. I want something simple, something like jQuery's $.ajax. I want to write async code like this:

Async.run(() -> {
   //execute a long running operation
}).whenComplete(result -> {
   //back to UI thread, can update UI
}).onError(ex -> {
   //handle error, on UI thread
})

Here is how I do it. First I need to create 2 interfaces:

public interface Action<T> {
   void execute(T param);
}

public interface Func<T> {
   T execute() throws Exception;
}

Then I need a wrapper on AsyncTask:

public class MyAsyncTask<T> extends AsyncTask<Void, Void, T> {
   private Func<T> task;
   private Action<T> completionHandler;
   private Action<Exception> errorHandler;
   private Exception exception;

   public MyAsyncTask(Func<T> task) {
       this.task = task;
   }

   public void setCompletionHandler(Action<T> action) {
       this.completionHandler = action;
   }

   public void setErrorHandler(Action<Exception> action) {
       this.errorHandler = action;
   }

   @Override
   protected T doInBackground(Void... voids) {
       try {
           return task.execute();
       } catch (Exception ex) {
           exception = ex;
       }
       return null;
   }

   @Override
   protected void onPostExecute(T result) {
       if (exception != null && errorHandler != null) {
           errorHandler.execute(exception);
       } else if (completionHandler != null) {
           completionHandler.execute(result);
       }
   }
}

Last, I need a small class to enable the fluent interface:

public class Async<T> {
   private MyAsyncTask<T> task;

   private Async() {
   }

   public static <T> Async<T> run(Func<T> task) {
       Async<T> runner = new Async<>();
       runner.task = new MyAsyncTask<>(task);
       return runner;
   }

   public Async<T> whenComplete(Action<T> action) {
       task.setCompletionHandler(action);
       return this;
   }

   public Async<T> onError(Action<Exception> action) {
       task.setErrorHandler(action);
       return this;
   }

   public void execute() {
       try {
           task.execute();
       } catch (Exception ex) {
           ex.printStackTrace();
       }
   }
}

That is all I need! Here is an example usage of it:

private void loadUser() {
   final Dialog progress = showProgressDialog();
   Async.run(new Func<UserInfo>() {
       @Override
       public UserInfo [...]

angular it's just angular now

Wed, 01 Mar 2017 18:07:47 GMT

Originally posted on:

It's just angular, man. Using angular-cli with webpack is the way to go.
Use gitter and the angular/angular channel if you have questions. They are friendly and responsive.

rxjs of course rocks it.
and of course built with and in typescript makes for less buggy code. 

I still have issues with git. I used git, but I think my compile breaks the git pull on the server side.
There should be a way to ignore that and just get the damn latest, but NOOOO. reset --hard didn't fix it either.
Boy, they should have a command called git just (just get the damn code). I would be happy with that.

Otherwise it's great!  This nearly completes my third project in angular.  Don't use unless you're making a web-api; it's all it's good for anymore.  Webstorm still rocks as the #1 angular tool, and has good debugging.

gimmie some of that fishing time.


Reasons for Automated Testing

Fri, 24 Feb 2017 00:47:15 GMT

Originally posted on:

Recently I gave a talk at the Edmonton .NET User Group on an introduction to automated testing, and of course in this introduction I gave a list of reasons why unit testing is a good idea. Although there are already many blog posts out there on this subject, I thought I'd record my reasons here. While all of these reasons may appeal to your hipster, idealistic sense of craftsmanship and professionalism, each of them also has a sound business foundation: automated testing saves/makes you money.

1. Product Quality

Of course the most obvious reason for automated testing is to improve product quality. Although automated testing will not guarantee that you'll never have another bug, it will lower the bug rate. And having working software is the #1 design goal. Telling customers about the automated testing that is in place can be a strong marketing point; nobody will buy software that doesn't work.

2. Because you are doing it anyway

If you aren't doing automated testing, then you are doing manual testing. Manual testing (and all testing) is a thankless job that doesn't get the credit it deserves; a lot of people assume it just happens for free. Manual testing takes a lot of time and effort. Running a test takes a lot of time just to set up. For example, consider confirming how the system behaves for a customer with bad credit: you have to create or find a customer record and change its credit status. Once the setup is done comes the actual testing. You have to run the program, log in as the customer, and attempt to make a purchase. This all takes time, several minutes.

Setting up an automated test also takes quite a bit of time, especially for the first test. However, the good news is you only have to do this once. Once the test is written you can run it as often as you want, several times an hour typically. Running the test is also much faster: no having to start the program, log in, and do all the other typing and button clicks. And the code you write to set up your first test will be reused for all the other tests. Although the first test is the most difficult to write, before the first production build of your program the time spent building an automated test suite will be less than the time it takes to manually test the same program once. And once the automation is in place, it can be run whenever you want, virtually instantaneously. Automated testing takes less time than manual testing. This is certainly true over the full lifecycle of the program, but it is even true before the first production release.

3. Enables Refactoring

Refactoring is the practice of going back over existing code and improving its readability; it removes technical debt. A common practice on my team is doing code reviews, especially with new hires. The result of a code review is often a list of things that should have been done differently. On receiving these suggestions the programmer asks: should I implement those changes? The answer is, it depends. If the testing has been done manually then the answer is often "no". The code review is at the end of the project and we are up against a deadline. Although the ch[...]

How To Hire Top iOS App Developers For The Company?

Thu, 23 Feb 2017 06:38:36 GMT

Originally posted on:

Our lives have become so easy these days, and all the credit goes to smartphones and mobile applications. People can just tap on a smartphone and do virtually everything, from booking air tickets to paying bills online. All of this has been made possible by mobile applications. The mobile app development world has gone through dramatic changes in the past few years, and there is a huge pool of applications available in the App Store. But interactive and useful features define the success of an iOS mobile application, because users always look for a simple, unique application equipped with quality services.

As mobile applications have become a lifeline for users, companies now need to hire top iOS app developers to build robust applications. But hiring application developers is not an easy job. Companies need to consider different things, like:

When and where to start the hiring process?
Criteria for candidates
Experienced or fresher candidates

It can be a waste of effort and time if a company fails to find a valid application developer. That is why companies need to focus on a set of expectations defined for hiring application developers. The following are important tips for iOS app development companies on hiring the right candidate for their project:

Project Requirements: A company hires new candidates based on the nature of the project. For a highly advanced project the company requires experienced developers, while for a simpler one it can hire a fresher candidate. The company should decide whether to hire an experienced or a fresher candidate; if it is familiar with the difficulty of the project, it can hire developers with the appropriate experience level.

Technical Knowledge: iOS applications are creating a buzz in the mobile market. Candidates should be technically strong enough to handle Objective-C and Swift coding efficiently, because these two languages carry the deep concepts of iOS application development. Dealing with core APIs and other features is a really challenging thing for developers. The company should also look for other technical knowledge in a candidate, like Ionic, Xamarin, HTML, CSS, etc.

Test For Candidates: Conducting a test is a good method for hiring the right candidates. It makes it easier for companies to know a candidate's expertise in different areas. Along with technical knowledge, the company can determine the candidate's knowledge of logical reasoning, quantitative aptitude, general awareness, etc. A test can also help the company filter out unqualified candidates who are sneaking through the hiring process, because app development is a profitable profession and a company can encounter any number of cheating applicants. The company can conduct paper-based or online tests for its potential application developers.

Candidate's References: No hiring process is complete without this important step. Checking references helps reveal more details about the candidates. The company can actually call a candidate's ex-client and ask about their performance. A positive response can help the company determine the reliability and trustworth[...]

Multithreading with PowerShell using RunspacePool

Wed, 22 Feb 2017 14:29:12 GMT

Originally posted on:

If you are working with PowerShell, you may want to start running certain scripts in parallel for improved performance. While doing some research on the topic, I found excellent articles discussing Runspaces and how to execute scripts dynamically. However, I needed to run custom scripts already stored on disk, and I wanted a mechanism to specify the list of scripts to run in parallel as an input parameter. In short, my objectives were:

Provide a variable number of scripts, as an input parameter, to execute in parallel
Execute the scripts in parallel
Wait for completion of the scripts, displaying their output
Have a mechanism to obtain success/failure from individual scripts

This article provides two scripts: a "main" script, which contains the logic for executing other scripts in parallel, and a test script that simply pauses for a few seconds.

Async Script

The following script is the main script that calls other scripts in separate Runspace Pools in PowerShell. It accepts an input array of individual scripts, or script files; you might need to provide the complete path for script files. The script then creates a new PowerShell instance for each script and stores a reference object in the $jobs collection; the BeginInvoke operation starts the script execution immediately. Last but not least, the script waits for completion of all scripts before displaying their output and result.

It is important to note that this main script expects each script being called to return a specific property called Success to determine whether the script was successful or not. Depending on the script you call, this property may not be available, and as a result the main script may report false negatives. There are many ways to detect script failures, so adapt the method to your needs. A test script is provided further down to show you how to return a custom property.

You can call the below script like this, assuming you want to execute two scripts (script1.ps1 and script2.ps1):

./main.ps1 @( {& "c:\myscripts\script1.ps1" }, {& "c:\myscripts\script2.ps1" })

Param(
    [String[]] $toexecute = @()
)

Write-Output ("Received " + $toexecute.Count + " script(s) to execute")

$rsp = [RunspaceFactory]::CreateRunspacePool(1, $toexecute.Count)
$rsp.Open()
$jobs = @()

# Start all scripts
Foreach($s in $toexecute) {
    $job = [Powershell]::Create().AddScript($s)
    $job.RunspacePool = $rsp
    Write-Output $("Adding script to execute... " + $job.InstanceId)
    $jobs += New-Object PSObject -Property @{
        Job = $job
        Result = $job.BeginInvoke()
    }
}

# Wait for completion
do {
    Start-Sleep -seconds 1
    $cnt = ($jobs | Where {$_.Result.IsCompleted -ne $true}).Count
    Write-Output ("Scripts running: " + $cnt)
} while ($cnt -gt 0)

Foreach($r in $jobs) {
    Write-Output ("Result for Instance: " + $r.Job.InstanceId)
    $result = $r.Job.EndInvoke($r.Result)
  [...]
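The same fan-out, poll, and collect pattern translates directly to other languages. As a rough analogy only, here is a Python sketch using concurrent.futures, where run_script is a hypothetical stand-in for the scripts and the Success property they return:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def run_script(name, seconds, succeed=True):
    # Stand-in for one script: pause, then report a Success flag,
    # mirroring the property the main script inspects.
    time.sleep(seconds)
    return {"Name": name, "Success": succeed}

to_execute = [("script1", 0.1), ("script2", 0.2)]

with ThreadPoolExecutor(max_workers=len(to_execute)) as pool:
    # BeginInvoke analog: submission starts execution immediately.
    jobs = [pool.submit(run_script, n, s) for n, s in to_execute]
    # EndInvoke analog: collect each result as it completes.
    results = [j.result() for j in as_completed(jobs)]

for r in results:
    print(r["Name"], "->", "OK" if r["Success"] else "FAILED")
```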

7. MonoGame - Putting Text Onscreen With SpriteFonts

Wed, 22 Feb 2017 14:00:28 GMT

Originally posted on:

MonoGame – Putting Text Onscreen With SpriteFonts

In MonoGame, text is displayed on the screen in much the same way as sprites, or 2D images. The difference is in how you prepare the font and how you work with it. To display text onscreen, you could create your own images of words, letters and numbers, and display them just like regular sprites, but this is incredibly time consuming and not very flexible. Instead we use a SpriteFont, which is a bitmap containing all the characters in a given font, pre-rendered at a specific size. Think of it as a SpriteSheet, only for letters, numbers and symbols, and you won't be far off.

Creating the SpriteFont

In your Solution Explorer, double-click the Content.mgcb icon. (You may have to associate it with a program. If so, select MonoGame Content Builder.) Once you have the Content Builder open, click the Add New Item button, select SpriteFont Description, and enter a name for your SpriteFont. This will create an XML file which you will need to add to your project, but first, let's take a look inside. You can open up your copy by right-clicking the file and selecting Open File. If you open it in Notepad, it's a bit of a mess, so I recommend using Notepad++ or Visual Studio so you can really see what's going on.

For now, just focus on a couple of key areas: FontName and Size. You'll notice they're currently set to Arial and 12, respectively. Just for fun, change them to "Verdana" and "36", and then save the file. Go back to the Content Builder and hit the F6 key to build your SpriteFont. This is where the MonoGame Content Pipeline reads in your XML file, looks up the specified font on your system, and generates a spritesheet containing images of all of the characters in your font, at the specified size. Assuming you didn't introduce any typos, you'll get a message saying the build was successful.

Go back to Visual Studio, right-click on the Content folder again and select Add --> Existing Item. You'll probably have to change the file filter to "All Files (*.*)" in order to see your SpriteFont file, so once you find it (in the Content folder), select it and add it to your project. Now to just add a couple of lines of code, and we're all set.

Displaying the SpriteFont

At the class level, in your Game1.cs file, right after the redMushroom variable, add this:

SpriteFont verdana36;

(If you didn't follow the previous post, just add it right before the constructor.) And in the LoadContent() method, add this right after the redMushroom line:

verdana36 = Content.Load<SpriteFont>("demo");

(Again, if you jumped in here, just put it at the end of the LoadContent() method, before the closing curly brace.) I called mine demo.spritefont, but you DON'T put the extension in here or it will throw an error. If you named yours something different, be sure to change it.

Finally, inside the Draw() method, put this line in between the spriteBatch.Begin() and .End() calls:

spriteBatch.DrawString(verdana36, "I PUT TEXT ONSCREEN", new Vector2(0, 200), Color.White);

And if you didn[...]

Migrating from NUnit to XUnit–SetupFixture fun

Wed, 22 Feb 2017 04:00:19 GMT

Originally posted on:

Recently I was limited to a specific version of NUnit for various reasons, and needed to start being able to run 'core' tests, so I migrated to XUnit. Generally not a problem, until I hit 'SetupFixture'. XUnit has no concept of 'SetupFixture' and, from what I can gather, won't either. So, in order to get the same approach, I need to implement 'IClassFixture' on every test class. I could do this by going through each one by one, but then how can I ensure that another developer (or even me!) remembers to do it the next time, for example, when creating a new test class? In fact, the reason for the SetupFixture was entirely because you can't assume someone will derive or implement an interface.

In the end, I took the approach of adding another test to my code to 'test the tests', ensuring that each test class implements IClassFixture. The code is below; tweak it to your own needs!

[Fact]
public void AllClassesImplementIUseFixture()
{
    var typesWithFactsNotImplementingIClassFixture =
        //Get this assembly and its types.
        Assembly.GetExecutingAssembly().GetTypes()
        //Get the types with their interfaces and methods
        .Select(type => new { type, interfaces = type.GetInterfaces(), methods = type.GetMethods() })
        //First we only want types that have a method which is a 'Fact'
        .Where(t => t.methods.Select(Attribute.GetCustomAttributes).Any(attributes => attributes.Any(a => a.GetType() == typeof(FactAttribute))))
        //Then check whether that type implements IClassFixture<> (with any fixture type)
        .Where(t => t.interfaces.All(i => !i.IsGenericType || i.GetGenericTypeDefinition() != typeof(IClassFixture<>)))
        //Select the name
        .Select(t => t.type.FullName)
        .ToList();

    if (typesWithFactsNotImplementingIClassFixture.Any())
        throw new InvalidOperationException(
            $"All test classes must implement IClassFixture{Environment.NewLine}These don't:{Environment.NewLine} * {string.Join($"{Environment.NewLine} * ", typesWithFactsNotImplementingIClassFixture)}");
} [...]

Why Should iOS Developers Choose Swift Programming For Applications?

Wed, 22 Feb 2017 07:07:06 GMT

Originally posted on:

When developers decide to start a new iOS app project, they may ponder whether to choose Swift or Objective-C. Apple introduced Swift in 2014 after thorough research and development. Before Swift, Objective-C was the most widely used language for iOS application development.

Some Amazing Features Of Objective-C

Objective-C is considered the core of iOS application development, and has been the top choice for Mac OS development since the 1980s. Though Swift app developers write code in Swift, they are virtually dealing with fundamental concepts inherited from Objective-C. The implementation of Internet protocols in mobile applications became possible thanks to Objective-C. The provision of third-party frameworks and libraries is a key feature of Objective-C, and many tools used for iOS development are not optimized for Swift but are better compatible with Objective-C. Objective-C offers a very robust language runtime environment; hence, for many app development companies, Objective-C is still the favorite choice. Every iOS application uses the core foundation APIs, and Objective-C offers a strong set of APIs with wonderful features; the APIs used in Swift for memory management and data wrapping are generally Objective-C-based APIs.

Why Are iOS Developers Moving Towards Swift?

According to experienced iOS app developers, Swift has provided amazing implications and created eagerness among new app developers about iOS app development. Generally, Swift and Objective-C are both good options for building iPhone applications, but each language has its own features and benefits. Swift looks friendly to app developers who have worked with Objective-C before, as Swift has inherited the dynamic object model found in Objective-C. Mobile applications developed with Swift are easily scaled to cloud services.

Swift has adopted safe patterns for iOS application development, delivering more features to make coding easier and more flexible. It is equipped with an advanced compiler, debugger and structural framework. The Automatic Reference Counting (ARC) feature is used in Swift to simplify memory management. The modern, standard Cocoa Foundation is used to enhance the Swift framework stack, and cross-platform compatibility makes Swift a more suitable choice for iOS application development.

Features Of The Swift Language

Swift has the ability to create responsive, immersive, consumer-facing iOS applications, which has become possible because of its advanced features. The following are some advantages of Swift programming:

1. Safe: Type inference is one of the key features of Swift programming that makes it safe. Type inference reduces coding length; it uses a default setting unless specified by a special keyword, which helps developers avoid false coding due to false input values. Moreover, Swift has eliminated the null-pointer concept used in Objective-C. However, null-pointer ensur[...]

Is Your Developer Team Designing for a Disaster?

Tue, 21 Feb 2017 11:38:06 GMT

Originally posted on:

Development teams work at top speed, and the environment in which they work is demanding and complex. Software is no longer considered done until it's shipped, and there's been a tendency to view shipped software as preferable to perfect software. These philosophies, created in large part by agile and lean methodologies, celebrate deployments and meeting release cycle deadlines. But have our standards of working software quality been trumped by the rapid pace of delivery? We may be designing faster, but are we designing disaster?

We practice Agile development at Stackify, and are advocates of the methodology as a system to build and deploy better software more frequently. What we don't subscribe to is the notion that the process we take to create better performing, high-quality software is more important than the software itself. If the process gets in the way of the product, it's time to re-evaluate it. The beauty of a process like Agile is that you can modify it to suit your team and what's most important in your delivery. Here's a bit of what we've done to optimize, and some of the things we've learned along the way.

Don't let quality draw the short straw.

I've yet to see a project that doesn't have a problem when it gets past development and heads into the final push towards "done." Primarily, I see one of two things happen:

Testing cycles are compressed or incomplete due to time constraints. A major contributor to this is that code often isn't ready to be tested until it's all ready. As the sprint burns down, more and more code tasks are complete, but they've yet to be reviewed, merged, and deployed to test environments. The code is rushed through QA, and ultimately, issues are found in production. Fixing those issues robs time away from the next sprint.

Read full article at [...]