AppDynamics Disk Space Alert Extension: Windows support

Fri, 24 Mar 2017 05:41:48 GMT

Originally posted on:

I've added a PowerShell port of this shell script to support Windows servers, and Derrek has merged the PR back into the trunk:

The port is pretty faithful to the original shell script: it uses the same configuration file, and reports and alerts in the same way. So all you need to do is swap in the PowerShell rather than the Bash script to monitor your Windows-based estate. Hope you find it useful!

Custom Role Claim Based Authentication on SharePoint 2013

Tue, 21 Mar 2017 07:08:58 GMT

Originally posted on:

We had a requirement in a project to authenticate users in a site collection based on the Country claim presented by a user.

The PowerShell sample below adds a US country claim value to the Visitors group of a site collection, so that any user presenting a US country claim is authorized to view the site.

$web = Get-SPWeb "https://SpSiteCollectionUrl"
$claim = New-SPClaimsPrincipal -TrustedIdentityTokenIssuer "users" -ClaimValue "US" -ClaimType "
$group = $web.Groups["GDE Home Visitors"]
$group.AddUser($claim.ToEncodedString(), "", "US", "")

Note: for this solution to work, we used an ADFS setup which pulls our claim values from the enterprise directory and sends a SAML claim to SharePoint.


PowerShell to monitor server resources and generate an HTML report for a set number of iterations at a set interval

Tue, 21 Mar 2017 07:00:18 GMT

Originally posted on:

While building a SharePoint farm for a project, I did load testing with VSTS. Since my SharePoint farm was in a different domain than that of my load test controllers, I had to monitor a few performance counters to make a well-informed decision while tuning the farm capacity. I developed the PowerShell below to generate an HTML report for all the different servers at specified intervals. The output HTML file is generated in the D drive.

# Array which contains the different server names
$ServerList = @('server01', 'server02', 'server03', 'Server04', 'Server05')
# Array which represents the role of each server
$ServerRole = @('WFE1', 'WFE2', 'Central Admin', 'WorkFlow1', 'WorkFlow2')
# Number of times this PowerShell should be executed
$runcount = 15;
# The HTML markup in the report string was stripped when this post was published;
# the tags below are a minimal reconstruction of the report table
$Outputreport = "<html><body><h2>Server Health Report</h2>
<table border='1'><tr><th>Server Name</th><th>Server Role</th><th>Avrg. CPU Utilization</th><th>Memory Utilization</th><th>C Drive Utilization</th><th>Iteration</th></tr>"

for ($i = 1; $i -le $runcount; $i++)
{
    $ArrayCount = 0
    ForEach ($computername in $ServerList)
    {
        $role = $ServerRole[$ArrayCount]
        $ArrayCount = $ArrayCount + 1
        Write-Host $i $computername $role

        $AVGProc = Get-WmiObject -computername $computername win32_processor |
            Measure-Object -property LoadPercentage -Average | Select Average
        $OS = gwmi -Class win32_operatingsystem -computername $computername |
            Select-Object @{Name = "MemoryUsage"; Expression = {"{0:N2}" -f ((($_.TotalVisibleMemorySize - $_.FreePhysicalMemory)*100)/ $_.TotalVisibleMemorySize) }}
        $vol = Get-WmiObject -Class win32_Volume -ComputerName $computername -Filter "DriveLetter = 'C:'" |
            Select-object @{Name = "C PercentFree"; Expression = {"{0:N2}" -f (($_.FreeSpace / $_.Capacity)*100) } }

        $result += [PSCustomObject] @{
            ServerName = "$computername"
            CPULoad = "$($AVGProc.Average)%"
            MemLoad = "$($OS.MemoryUsage)%"
            CDrive = "$($vol.'C PercentFree')%"
        }

        Foreach ($Entry in $result)
        {
            # flag the row red when CPU or memory utilization reaches 80%
            if ([double]($Entry.CPULoad -replace '%') -ge 80 -or [double]($Entry.MemLoad -replace '%') -ge 80)
            {
                $Outputreport += "<tr bgcolor='red'>"
            }
            else
            {
                $Outputreport += "<tr>"
            }
            $Outputreport += "<td>$($Entry.ServerName)</td><td>$role</td><td>$($Entry.CPULoad)</td><td>$($Entry.MemLoad)</td><td>$($Entry.CDrive)</td><td>$i</td></tr>"
        }
        $result = $null
    }
}
$Outputreport += "</table></body></html>"
$time = $(get-date -f MM-dd-yyyy_HH_mm_ss)
$file = 'D:\' + $time + '.htm'
$Outp[...]

Distributed TensorFlow Pipeline using Google Cloud Machine Learning Engine

Fri, 17 Mar 2017 08:01:33 GMT

Originally posted on:

TensorFlow is an open source software library for machine learning across a range of tasks, developed by Google. It is currently used for both research and production in Google products, and is released under the Apache 2.0 open source license. TensorFlow can run on multiple CPUs and GPUs (via CUDA) and is available on Linux, macOS, Android and iOS. TensorFlow computations are expressed as stateful dataflow graphs, whereby neural networks operate on multidimensional data arrays referred to as "tensors". Google has developed the Tensor Processing Unit (TPU), a custom ASIC built specifically for machine learning and tailored for accelerating TensorFlow. TensorFlow provides a Python API, as well as C++, Java and Go APIs.

Google Cloud Machine Learning Engine is a managed service that enables you to easily build TensorFlow machine learning models that work on any type of data, of any size. The service works with Cloud Dataflow (Apache Beam) for feature processing, Cloud Storage for data storage and Cloud Datalab for model creation. HyperTune performs cross-validation, automatically tuning model hyperparameters. As a managed service, it automates all resource provisioning and monitoring, allowing developers to focus on model development and prediction without worrying about the infrastructure. It provides a scalable service to build very large models using managed distributed training infrastructure that supports CPUs and GPUs. It accelerates model development by training across many nodes, or by running multiple experiments in parallel. It is possible to create and analyze models using Jupyter notebook development, with integration to Cloud Datalab.

Models trained using GCML-Engine can be downloaded for local execution or mobile integration.

Why not Spark?
- Deep Learning has eclipsed traditional Machine Learning in accuracy
- Google Cloud Machine Learning is based upon TensorFlow, not Spark
- The Machine Learning industry is trending in this direction; it's advisable to follow the conventional wisdom
- TensorFlow is to Spark what Spark is to Hadoop

Why Python?
- Dominant language in the field of Machine Learning / Data Science
- It is the Google Cloud Machine Learning / TensorFlow core language
- Ease of use (very!)
- Large pool of developers
- Solid ecosystem of Big Data scientific & visualization tools: Pandas, SciPy, Scikit-Learn, XGBoost, etc.

Why Deep Learning?
The development of Deep Learning was motivated in part by the failure of traditional Machine Learning algorithms to generalize well, because generalization becomes exponentially more difficult when working with high-dimensional data: the mechanisms used to achieve generalization in traditional machine learning are insufficient to learn complicated functions in high-dimensional spaces. Such spaces also often impose high computational costs. Deep Learning was designed to overcome these obstacles.

The curse of dimensionality
As the number of relevant dimensions of the data increases, the number of computations may grow exponentially. It is also a statistical challenge, because the number of possible configurations of x is much larger than the number of training examples: in high-dimensional spaces, most configurations will have no training example associated with them.

Local Constancy and Smoothness Regularization
ML algorithms need to be guided by prior beliefs about what kind of function they should learn. Priors are firstly expressed by choosing the algorithm class, but are also explicitly incorporated as probability distributions over model parameters, directly influencing the learned function.
The main prior in ML is the smoothness (local constancy) prior which states that the learned function should not change very much within a small region. ML algorithms rely exclusively on this prior to general[...]
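To make the curse of dimensionality concrete, here is a small Python sketch of my own (not from the post): discretizing each dimension into just 10 bins, the number of distinct configurations grows as 10^d, so even a million training examples leaves almost every cell empty once d is modest.

```python
# Curse of dimensionality: number of grid cells vs. a fixed training-set size.
# Discretize each of d dimensions into `bins` intervals; an example covers at
# most one cell, so most cells are empty once bins**d >> n_examples.
def empty_cell_fraction(d, bins=10, n_examples=1_000_000):
    cells = bins ** d
    covered = min(n_examples, cells)  # best case: every example lands in a distinct cell
    return (cells - covered) / cells

for d in (2, 6, 9):
    print(d, empty_cell_fraction(d))
```

With two dimensions every cell can be covered; at nine dimensions 99.9% of configurations have no training example at all, which is exactly the statistical challenge described above.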

SQL Server - Kill any live connections to the DB

Fri, 17 Mar 2017 08:47:43 GMT

Originally posted on:

Often you need to restore a DB or take it offline, only to find that the process aborts due to active sessions connected to the DB. Here is a quick script that will kill all active sessions to the DB.

USE [master];

DECLARE @kill varchar(8000) = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'  
FROM sys.dm_exec_sessions
WHERE database_id = db_id('myDataBase')
  AND session_id <> @@SPID;  -- don't try to kill our own session

EXEC (@kill);  -- without this, the script only builds the string and kills nothing


Keras QuickRef

Fri, 17 Mar 2017 07:26:26 GMT

Originally posted on:

Keras is a high-level neural networks API, written in Python, that runs on top of the Deep Learning framework TensorFlow. In fact, tf.keras will be integrated directly into TensorFlow 1.2! Here are my API notes:

Model API
- summary()
- get_config()
- from_config(config)
- get_weights()
- set_weights(weights)
- to_json()
- to_yaml()
- save_weights(filepath)
- load_weights(filepath, by_name)
- layers

Model Sequential / Functional APIs
- add(layer)
- compile(optimizer, loss, metrics, sample_weight_mode)
- fit(x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight)
- evaluate(x, y, batch_size, verbose, sample_weight)
- predict(x, batch_size, verbose)
- predict_classes(x, batch_size, verbose)
- predict_proba(x, batch_size, verbose)
- train_on_batch(x, y, class_weight, sample_weight)
- test_on_batch(x, y, class_weight)
- predict_on_batch(x)
- fit_generator(generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe)
- evaluate_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
- predict_generator(generator, val_samples, max_q_size, nb_worker, pickle_safe)
- get_layer(name, index)

Layers

Core (layer: description; I/O; params)
- Dense: vanilla fully connected NN layer; (nb_samples, input_dim) --> (nb_samples, output_dim); output_dim/shape, init, activation, weights, W_regularizer, b_regularizer, activity_regularizer, W_constraint, b_constraint, bias, input_dim/shape
- Activation: applies an activation function to an output; TN --> TN; activation
- Dropout: randomly set fraction p of input units to 0 at each update during training time --> reduce overfitting; TN --> TN; p
- SpatialDropout2D/3D: dropout of entire 2D/3D feature maps to counter pixel / voxel proximity correlation; (samples, rows, cols, [stacks,] channels) --> (samples, rows, cols, [stacks,] channels); p
- Flatten: flattens the input to 1D; (nb_samples, D1, D2, D3) --> (nb_samples, D1xD2xD3); -
- Reshape: reshapes an output to a different factorization, e.g. (None, 3, 4) --> (None, 12) or (None, 2, 6); target_shape
- Permute: permutes dimensions of input; output shape is the same as the input shape, but with the dimensions re-ordered, e.g. (None, A, B) --> (None, B, A); dims
- RepeatVector: repeats the input n times; (nb_samples, features) --> (nb_samples, n, features); n
- Merge: merge a list of tensors into a single tensor; [TN] --> TN; layers, mode, concat_axis, dot_axes, output_shape, output_mask, node_indices, tensor_indices, name
- Lambda: TensorFlow expression; flexible; function, output_shape, arguments
- ActivityRegularization: regularize the cost function; TN --> TN; l1, l2
- Masking: identify timesteps in D1 to be skipped; TN --> TN; mask_value
- Highway: LSTM for FFN?; (nb_samples, input_dim) --> (nb_samples, output_dim); same as Dense + transform_bias
- MaxoutDense: takes the element-wise maximum of the previous layer, to learn a convex, piecewise linear activation function over the inputs; (nb_samples, input_dim) --> (nb_samples, output_dim); same as Dense + nb_feature
- TimeDistributed: apply a Dense layer for each D1 time_dimension; (nb_sample, time_dimension, input_dim) --> (nb_sample, time_dimension, output_dim); Dense

Convolutional (layer: description; I/O; params)
- Convolution1D: filter neighborhoods of 1D inputs; (samples, steps, input_dim) --> (samples, new_steps, nb_filter); nb_filter, filter_length, init, activation, weights, border_mode, subsample_length, W_regularizer, b_regularizer, activity_regularizer, W_constraint, b_constraint, bias, input_dim, input_length
- Convolution2D: filter neighborhoods of 2D inputs; (samples, rows, cols, channels) --> (samples, new_rows, new_cols, nb_filter); like Convolution1D + nb_row, nb_col instead of filter_length, subsample, dim_ordering
- AtrousConvolution1/2D: dilated convolution with holes; same as Convolution2D; same as Convolution1/2D + atrous_rate
- SeparableConvolut[...]
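As a sanity check on the Dense row of the table, here is a pure-Python sketch (mine, not from the post) of what a Dense layer computes: output = activation(x . W + b), mapping (nb_samples, input_dim) to (nb_samples, output_dim). A real model would of course use keras.layers.Dense.

```python
# Minimal illustration of the Dense layer's shape contract:
# (nb_samples, input_dim) --> (nb_samples, output_dim), via x.W + b then activation.
def dense_forward(x, W, b, activation=lambda v: max(0.0, v)):  # ReLU by default
    input_dim = len(x[0])
    output_dim = len(W[0])
    return [[activation(sum(row[i] * W[i][j] for i in range(input_dim)) + b[j])
             for j in range(output_dim)]
            for row in x]

x = [[1.0, 2.0, 3.0], [0.5, 0.0, -1.0]]    # (nb_samples=2, input_dim=3)
W = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]  # (input_dim=3, output_dim=2)
b = [0.0, 0.1]
y = dense_forward(x, W, b)
print(len(y), len(y[0]))  # 2 2 -> shape (nb_samples, output_dim)
```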


Wed, 15 Mar 2017 14:38:22 GMT

Originally posted on:

Compete for a chance to win $2,000!

That’s right. We are offering a $2,000 first prize for the most innovative use of our technology. Within just a few hours you will be surprised how much you can accomplish with our .NET Developer SDK. In addition, your work could be advertised nationally and promoted at customer locations worldwide as part of our solutions, and could earn you significant additional revenue if you so desire.





- First Place Prize: $2,000
- Second Place Prize: $750
- Third Place Prize: $250

All submissions must be made by midnight (US Eastern Time) on May 1st, 2017.

Contact us at for more information

Additional Information

How to write your first adapter:

Download the Enzo Unified SDK at this location:

Download the help file for the Enzo Unified SDK here:

Download the installation doc for the Enzo Unified SDK here:


Using Measure-Object to sum file sizes in PowerShell

Wed, 15 Mar 2017 09:56:58 GMT

Originally posted on:

Be aware that this line:

gci -r "C:\temp\test" | measure-object -property length -sum

will throw an error if it encounters a folder whose only contents is another (empty) folder; this is because measure-object tries in this case to measure an object which does not have a "length" property defined:

PS C:\Projects> Get-ChildItem -Recurse "C:\temp\test"  | measure-object -property length -sum
measure-object : The property "length" cannot be found in the input for any objects.
At line:1 char:42
+ ... dItem -Recurse "C:\temp\test"  | measure-object -property length -sum
+                                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Measure-Object], PSArgumentException
    + FullyQualifiedErrorId : GenericMeasurePropertyNotFound,Microsoft.PowerShell.Commands.MeasureObjectCommand

There are various ways to achieve the correct effect, based on the idea that objects returned by gci that are directories will have the PSIsContainer property set to $true. For example:

$root = "c:\Oracle"

$total_files = 0
$total_size = [int]0

[System.Collections.Stack]$stack = @()
$stack.Push($root)

while ($stack.Length -gt 0)
{
    $folder = $stack.Pop()

    gci $folder |% `
    {
        $item = $_

        if ($item.PSIsContainer)
        {
            $stack.Push($item.FullName)
        }
        else
        {
            $total_size += $_.Length
            $total_files ++
        }
    }
}

Write-Host "Total size: $([Math]::Round($total_size / 1Mb, 2)) Mb over $total_files files" [...]
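The same "only measure files, never directories" guard applies in any language; as a cross-check, here is a Python sketch of my own (not from the post) of the equivalent traversal, where os.walk hands back directories and files separately, so an empty directory tree simply contributes nothing rather than raising an error.

```python
import os

# Recursive file-size sum, the Python analogue of the PowerShell stack traversal:
# os.walk separates dirnames from filenames, so only files are measured.
def tree_size(root):
    total_size = 0
    total_files = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            total_size += os.path.getsize(os.path.join(dirpath, name))
            total_files += 1
    return total_size, total_files
```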

Writing your first adapter for Enzo Unified

Tue, 14 Mar 2017 01:57:45 GMT

Originally posted on:

This blog post shows you how to write your first adapter using the Enzo Unified SDK, so that you can extend the capabilities of SQL Server and/or create a light-weight HTTP proxy for applications, IoT devices or mobile applications. This post assumes you have a high-level understanding of Enzo Unified and that you have installed and configured the Enzo Unified SDK (download details below).

In this post, you will learn how to create an adapter to get the current UTC time using the official US NIST time servers. You will write this logic in a new adapter called TimeServers and expose a method called GetUSTime(). Once coded, you will be able to access this method through SQL Server as a stored procedure, or using an HTTP call from any client. Because Enzo Unified exposes these methods as HTTP calls, any client running on any operating system capable of making HTTP calls will be able to run this command.

Pre-Requisites

To successfully build this adapter, you will need the following:
- Visual Studio 2012 or higher
- SQL Server 2014 Express Edition or higher
- Enzo Unified SDK (see download links below)

Download the Enzo Unified SDK here:
Download the help file for the Enzo Unified SDK here:
Download the installation doc for the Enzo Unified SDK here:

Create an Empty Adapter Project

After downloading and installing the SDK, and registering the Enzo Unified project template per the installation instructions, you will be able to create a new Adapter project.

- Start Visual Studio 2012, and select New Project
- Select Enzo Unified DataAdapter (Annotated) under Templates –> Visual C# –> Enzo Unified
- Enter the adapter name: TimeServers
- Verify the location of your adapter (it should be under the namespace you specified during installation) and click OK

You will need to change two settings:
- Open TimeServers.cs
- On line 5, replace the namespace by your namespace (MY is the namespace I entered)
- Change the project configuration from Any CPU to x64

Make sure the project compiles successfully, then press F5 to start the adapter (at this point, the adapter does nothing yet). The Enzo Unified Development Host will start, and a user interface will be displayed allowing you to test your adapter. Type this command, and press F5 to execute it: exec

Stop the project. We are ready to add the method to this adapter. Note that while you can use the Development Host for simple testing, it is recommended to use SQL Server Management Studio for a better experience. We will review how to connect to Enzo Unified in more detail later.

Create the GetUSTime Method

Let's replace the entire content of the TimeServers.cs file; we will review the implementation next.

using BSC.Enzo.Unified;
using BSC.Enzo.Unified.Logging;
using System;
using System.Collections.Generic;
using System.IO;

namespace MY
{
    public class TimeServers : DataAdapter
    {
        // this is the main call to register your functionality with the Enzo instance
        public override void RegisterAdapter()
        {
            #region Descriptive details
            Title = "NIST Time Server";            // a summary of the adapter's operations
            Description = "Access the NIST time servers";    // more detailed info what your adapter does
            Author = "Me";                            //[...]

How To Suppress DHCP Popups In NETUI

Sun, 12 Mar 2017 05:37:47 GMT

Originally posted on:

Working on a CE device that presented a kiosk application, I wanted to find a way to suppress the following Windows CE Networking DHCP popups:

"DHCP was unable to obtain an IP address. If the netcard is removable, then you can remove/reinsert it to have DHCP make another attempt to obtain an IP address for it. Otherwise, you can statically assign an address."

"Your IP address lease has expired. DHCP was unable to renew your lease."

"A DHCP Server could not be contacted. Using cached lease information."

These popups were obtrusive, and the end user of the device had no idea how to resolve the issues that these popups were bringing to light. The device depends upon a reliable network infrastructure, and these popups should really never ... pop up ... but when they did, they did nothing more than confuse the user and get in the way of the kiosk application / user interface the user was trying to use. Typically the popups were quickly ignored by the user and closed, but in some cases a call or ticket was created with their IT department for resolution, wasting everyone's time. The network-attached device is smart enough to buffer the readings it is responsible for taking, simply sending them when the networking issue is resolved naturally by being carried to where there is network coverage, so the popups only served as an annoyance to the user, not to mention the IT department, who would have to explain to the user what to do about them.

An earlier post alluded to the solution but did not follow through with the specifics, so I am doing that here. If you want to suppress these, or any of the multitude of other popups NETUI supports, start by following the instructions in the blog post by Michel Verhagen of GuruCE, "Cloning public code: An example".

Add the following new function, SuppressNetMsgBox, in between the two functions you'll find at:

netui.c(139): GetNetString (UINT uID, LPTSTR lpBuffer, int nBufferMax)

and

netui.c(165): BOOL WINAPIV NetMsgBox (HWND hParent, DWORD dwFlags, TCHAR *szStr)

BOOL SuppressNetMsgBox(DWORD dwId, TCHAR *szStr)
{
#define MAXCMP 76
    TCHAR szBuffer[MAXCMP] = {0};
    DWORD dwMaxCount = sizeof(szBuffer)/sizeof(szBuffer[0]);

    if (!GetNetString(dwId, szBuffer, (int)dwMaxCount))
        return FALSE;

    if (!_tcsncmp(szBuffer, szStr, (size_t)(dwMaxCount-1)))
    {
        NKDbgPrintfW(_T("Networking Pop-up: \"%s\"... suppressed.\r\n"), szBuffer);
        return TRUE;
    }
    return FALSE;
}

Add calls to SuppressNetMsgBox inside the NetMsgBox function, where shown, as shown:

BOOL WINAPIV NetMsgBox (HWND hParent, DWORD dwFlags, TCHAR *szStr)
{
    TCHAR szTitle[200];
    DWORD dwStyle, dwId;
    int iRet;
    HCURSOR hCur;

    // verify the incoming window handle and fail if not null and invalid
    if (hParent && !IsWindow(hParent)) {
        SetLastError(ERROR_INVALID_WINDOW_HANDLE);
        return FALSE;
    }

    if ( (SuppressNetMsgBox(NETUI_GETNETSTR_NO_IPADDR, szStr))     // "DHCP was unable to obtain an IP address. ..."
      || (SuppressNetMsgBox(NETUI_GETNETSTR_LEASE_EXPIRED, szStr)) // "Your IP address lease has expired. ..."
      || (SuppressNetMsgBox(NETUI_GETNETSTR_CACHED_LEASE, szStr))  // "A DHCP Server could not be contacted. ..."
       )
    {
        return FALSE;
    }

    // Default title is "Windows CE Networking"
    if (dwFlags & NMB_FL_TITLEUSB)
        dwId = IDS_USB_CAPTION;
    else
        dwId = IDS_NETMSGBOX_TITLE;
...
}

Adapt this to your needs. My needs have been [...]

Counting pages in Confluence

Fri, 10 Mar 2017 08:05:12 GMT

Originally posted on:

We're currently migrating our Confluence installation, and I was looking around for resources to provide some basic metrics, e.g. the total number of (non-private) pages we are currently hosting. It appears that the simplest way to get this information is to connect directly to the database and run a query:

# Get count of all pages in all global spaces (each page is counted as a single version)
select count(*)
from SPACES as A
inner join CONTENT as B on B.SPACEID = A.SPACEID
where A.SPACETYPE='global'
and B.PREVVER is null


SQL Server: How do I split a comma delimited column into multiple columns or rows?

Thu, 09 Mar 2017 15:32:50 GMT

Originally posted on:

-- converting column with commas to multiple columns --

declare @col1 varchar(500)
set @col1 = 'I,Hate,Broccoli'

DECLARE @Tmp TABLE ( Id int, Element VARCHAR(20)) 
INSERT @Tmp SELECT 1,@col1

SELECT PARSENAME(REPLACE(Element,',','.'),3) [First],
       PARSENAME(REPLACE(Element,',','.'),2) Name, 
       PARSENAME(REPLACE(Element,',','.'),1) Surname 
FROM @Tmp

-- converting column with commas to multiple rows --

declare @col1 varchar(500)
set @col1 = 'I,am,really,a,smart person, one of the smartest'

DECLARE @Tmp TABLE ( Id int, Element VARCHAR(200)) 
INSERT @Tmp SELECT 1,@col1

SELECT A.[id],
    Split.a.value('.', 'VARCHAR(100)') AS String
  FROM (SELECT [id],
      CAST ('<M>' + REPLACE(Element, ',', '</M><M>') + '</M>' AS XML) AS String
    FROM @Tmp) AS A CROSS APPLY String.nodes ('/M') AS Split(a);
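For a quick sanity check of what the XML row-split should return, the same operation is a one-liner outside SQL Server; a Python sketch of my own (not from the post):

```python
# What the SQL above emulates: splitting one comma-delimited value into rows,
# each paired with the source row's id.
col1 = 'I,am,really,a,smart person, one of the smartest'
rows = [(1, element) for element in col1.split(',')]
for row in rows:
    print(row)
```

Note that, like the SQL version, this keeps any leading spaces after the commas; trim separately if you need to.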


MonoGame – v3.6 now available

Tue, 07 Mar 2017 12:38:10 GMT

Originally posted on:

MonoGame 3.6 now available!

Pardon the interruption of the regular MonoGame blog series, but this is important.  There’s a new version of MonoGame now available!!

Grab MonoGame 3.6 here:

There are a bunch of new features, and a small mountain of bugfixes and enhancements. 

Read the entire changelog here:


Handling exceptions when calling a web service in BizTalk

Mon, 06 Mar 2017 04:36:38 GMT

Originally posted on:

Many of us have faced the issue of calling a web service from BizTalk where, intermittently, either the service is not available or there is some problem connecting to it. The service can return a response error code other than HTTP 200, i.e. HTTP 400 or HTTP 500. BizTalk doesn't like that: the send port treats it as a warning, writes the warning to the event viewer and retries, but eventually, if the problem persists, an error is thrown from the send port. This error is received in the orchestration. You can set an exception handler block to catch the exception, i.e. System.Exception or System.EntryPointNotFoundException, and handle it gracefully. But the send port instance will get suspended, and it will not go away unless manually terminated. Since the exception is thrown from the send port and is probably a valid response, there is no point in resuming it either; if you try, the orchestration which called the service has already handled the exception in its exception handler block and exited.

This scenario needs to be handled by a separate orchestration, which we can call "SendPortExceptionHandler". This orchestration will have only one receive shape, connected to a receive port. It will receive a message of type raw XML. A filter needs to be created like below:

ErrorReport.FailureCode Exists
And
ErrorReport.MessageType == schema_namespace#root_node

You can add multiple filters, so that all exceptions at send ports are handled by a single orchestration.

If we get an exception, say System.EntryPointNotFoundException, and we want to handle it manually (we might get this error if the end point is not available or the network is down), then we might want to suspend the orchestration and retry manually. To do this, we need to add a loop, add a scope in that loop, and catch the specific exception in that scope's exception handler. Please see below:

In the first expression box we initialize values for the retry:

// Set retry to true
ServiceRetry = true;
ServiceRetryCount = 0;

In the internal scope (Mule ESB Scope) I am catching System.EntryPointNotFoundException. You can add another exception handler block to throw other errors. In this block I am setting values as below:

// Set retry to true
ServiceRetry = true;
ServiceRetryCount = ServiceRetryCount + 1;

In the decide shape I check the retry count: if ServiceRetryCount <= 3, the flow is suspended, else the exception is thrown. This creates a finite number of retries, and the orchestration exits gracefully. In the outer scope, all other exceptions are caught and thrown back to the calling orchestration. This way, we can handle service end-point-related and web exceptions in BizTalk.
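The retry control flow described above (catch a specific connection error, increment a counter, give up after three attempts) is generic; here is a minimal Python sketch of the same pattern, where call_service and on_suspend are hypothetical stand-ins for the BizTalk send-port call and suspend shape:

```python
# Generic sketch of the orchestration's retry loop: try the call, catch the
# specific "endpoint unreachable" error, suspend and retry up to max_retries
# times, then rethrow to the caller (the outer exception handler).
def call_with_retry(call_service, on_suspend, max_retries=3):
    retry_count = 0
    while True:
        try:
            return call_service()
        except ConnectionError:       # the EntryPointNotFoundException case
            retry_count += 1
            if retry_count <= max_retries:
                on_suspend(retry_count)   # suspend/wait, then loop and retry
            else:
                raise                     # retries exhausted: rethrow
```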

Capture HTTPS traffic from iOS devices using Charles

Fri, 03 Mar 2017 18:02:58 GMT

Originally posted on:

This article will show you how to set up Charles and iOS devices so that you can capture HTTPS traffic from iOS devices using Charles.

Step 0: Download and install Charles.

Step 1: Find the IP address of your Mac:

ifconfig | grep "inet "

Step 2: Find the port number Charles listens on. It should be 8888 if you haven't changed the default value. You can find the number in Settings -> Proxy Settings.

Step 3: Set up the proxy on the iOS device. On the iOS device, go to Settings -> Wi-Fi and tap the little i icon next to the Wi-Fi network. Change HTTP PROXY to Manual, and type in the IP address and port number of your Mac.

Step 4: Go back to Charles and set filters, so that Charles only records the connections you are interested in. You don't have to do this step, but if you don't, Charles will be flooded with records you are not interested in, as it records everything. Just include the URLs you want to record. Now, if you trigger HTTPS connections from your iOS app, you will find they are recorded by Charles, but the messages are encrypted; you can't find any useful information in them.

Step 5: Enable SSL Proxying. Right-click the URL and check "Enable SSL Proxying". You still won't be able to get any useful information at this stage, as iOS does not trust the certificate Charles provides.

Step 6: Install Charles' certificate. Open Safari from your iOS device and browse to the certificate page; your iOS device will ask you to install the certificate. Just do it.

That is it. Now you should be able to see all the details of the connections. [...]

Execute AsyncTask like jquery’s $.ajax in Kotlin on Android

Fri, 03 Mar 2017 16:16:13 GMT

Originally posted on:

Last time I showed how to execute an AsyncTask in jQuery style. Now I will show how to do it in Kotlin. Kotlin has much better support for functional programming; functions/lambdas are first-class members of the language. A parameter of a method can be a typed function rather than having to be an interface, so I don't have to create interfaces for Func and Action.

This is how MyAsyncTask looks in Kotlin; note the types of task and completionHandler are () -> T and ((T) -> Unit)?. (The generic type parameters below were stripped when this post was published; they are reconstructed here.)

class MyAsyncTask<T>(private val task: () -> T) : AsyncTask<Void, Void, T>() {
   private var completionHandler: ((T) -> Unit)? = null
   private var errorHandler: ((Exception) -> Unit)? = null
   private var exception: Exception? = null

   fun setCompletionHandler(action: (T) -> Unit) {
       this.completionHandler = action
   }

   fun setErrorHandler(action: (Exception) -> Unit) {
       this.errorHandler = action
   }

   override fun doInBackground(vararg voids: Void): T? {
       try {
           return task.invoke()
       } catch (ex: Exception) {
           exception = ex
       }
       return null
   }

   override fun onPostExecute(result: T) {
       if (exception != null && errorHandler != null) {
           errorHandler!!.invoke(exception!!)
       } else if (completionHandler != null) {
           completionHandler!!.invoke(result)
       }
   }
}

And here is Async:

class Async<T> private constructor() {
   private var task: MyAsyncTask<T>? = null

   fun whenComplete(action: (T) -> Unit): Async<T> {
       task!!.setCompletionHandler(action)
       return this
   }

   fun onError(action: (Exception) -> Unit): Async<T> {
       task!!.setErrorHandler(action)
       return this
   }

   fun execute() {
       try {
           task!!.execute()
       } catch (ex: Exception) {
           ex.printStackTrace()
       }
   }

   companion object {
       fun <T> run(task: () -> T): Async<T> {
           val runner = Async<T>()
           runner.task = MyAsyncTask(task)
           return runner
       }
   }
}

Here is an example of how it is used:

override fun onCreate(savedInstanceState: Bundle?) {
   super.onCreate(savedInstanceState)
   setContentView(R.layout.activity_main)

   Async.run {
       val api = MyAPI()
       api.signInUser("user_name", "password")
   }.whenComplete {
       //it.displayName
   }.onError {
       //it.message
   }.execute()
} [...]

Free Licenses for MVPs

Thu, 02 Mar 2017 06:43:16 GMT

Originally posted on:

I was interested to see a tweet from Liquid Technologies promoting free licenses of its XML products for Microsoft Most Valuable Professionals (MVPs).

So this got me wondering what else is available for free, and guess what, lots…

You can request a free Liquid XML Studio 2017 license here:



Execute AsyncTask like jquery’s $.ajax in Java on Android

Thu, 02 Mar 2017 04:06:43 GMT

Originally posted on:

On Android, if you want to make an async call, AsyncTask is the way to go. Basically you will have to implement something like this:

private class LongOperation extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... params) {
        //execute a long running operation
    }

    @Override
    protected void onPostExecute(String result) {
        //back to UI thread, can update UI
    }
}

Then you can call new LongOperation().execute("");

I don’t like having to write so much boilerplate code to just invoke a method asynchronously and I hate those mystical type parameters with AsyncTask. I want something simple, something like jquery’s $.ajax. I want to write async code like this:

Async.run(() -> {
    //execute a long running operation
}).whenComplete(result -> {
    //back to UI thread, can update UI
}).onError(ex -> {
    //handle error, on UI thread
})

Here is how I do it. First I need to create 2 interfaces:

public interface Action<T> {
    void execute(T param);
}

public interface Func<T> {
    T execute() throws Exception;
}

Then I need a wrapper on AsyncTask:

public class MyAsyncTask<T> extends AsyncTask<Void, Void, T> {
    private Func<T> task;
    private Action<T> completionHandler;
    private Action<Exception> errorHandler;
    private Exception exception;

    public MyAsyncTask(Func<T> task) {
        this.task = task;
    }

    public void setCompletionHandler(Action<T> action) {
        this.completionHandler = action;
    }

    public void setErrorHandler(Action<Exception> action) {
        this.errorHandler = action;
    }

    @Override
    protected T doInBackground(Void... voids) {
        try {
            return task.execute();
        } catch (Exception ex) {
            exception = ex;
        }
        return null;
    }

    @Override
    protected void onPostExecute(T result) {
        if (exception != null && errorHandler != null) {
            errorHandler.execute(exception);
        } else if (completionHandler != null) {
            completionHandler.execute(result);
        }
    }
}

Last, I need a small class to enable the fluent interface:

public class Async<T> {
    private MyAsyncTask<T> task;

    private Async() {
    }

    public static <T> Async<T> run(Func<T> task) {
        Async<T> runner = new Async<>();
        runner.task = new MyAsyncTask<>(task);
        return runner;
    }

    public Async<T> whenComplete(Action<T> action) {
        task.setCompletionHandler(action);
        return this;
    }

    public Async<T> onError(Action<Exception> action) {
        task.setErrorHandler(action);
        return this;
    }

    public void execute() {
        try {
            task.execute();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

That is all I need! Here is an example usage of it:

private void loadUser() {
    final Dialog progress = showProgressDialog();

    Async.run(new Func<UserInfo>() {
        @Override
        public UserInfo execute() {
            APIServer api = new APIServer();
            return api.loadUser();
        }
    })
    .whenComplete(new Action<UserInfo>() {
        @Override
        p[...]
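Nothing about the fluent surface above actually requires Android. Below is a minimal, Android-free sketch of the same Func/Action/Async shape, with a plain Thread standing in for AsyncTask; the demo blocks until completion, whereas on Android onPostExecute would marshal the result back to the UI thread. The class and method names mirror the post's API but this is an illustrative analogue, not the post's code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class FluentAsyncDemo {
    interface Action<T> { void execute(T param); }
    interface Func<T> { T execute() throws Exception; }

    static class Async<T> {
        private final Func<T> task;
        private Action<T> completionHandler;
        private Action<Exception> errorHandler;

        private Async(Func<T> task) { this.task = task; }

        static <T> Async<T> run(Func<T> task) { return new Async<>(task); }

        Async<T> whenComplete(Action<T> action) { completionHandler = action; return this; }
        Async<T> onError(Action<Exception> action) { errorHandler = action; return this; }

        void execute() {
            CountDownLatch done = new CountDownLatch(1);
            new Thread(() -> {
                try {
                    // run the task off the calling thread, then dispatch
                    T result = task.execute();
                    if (completionHandler != null) completionHandler.execute(result);
                } catch (Exception ex) {
                    if (errorHandler != null) errorHandler.execute(ex);
                } finally {
                    done.countDown();
                }
            }).start();
            try {
                done.await();   // block only for this demo; Android would return immediately
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        AtomicReference<String> out = new AtomicReference<>();
        Async.run(() -> "signed in")
             .whenComplete(out::set)
             .onError(ex -> out.set("error: " + ex.getMessage()))
             .execute();
        System.out.println(out.get()); // prints "signed in"
    }
}
```

The lambdas work here because Func and Action are single-method interfaces; the anonymous-class form shown in the post is equivalent on pre-Java-8 Android toolchains.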

angular it's just angular now

Wed, 01 Mar 2017 18:07:47 GMT

Originally posted on:

It's just angular, man. Using angular-cli with webpack is the way to go.
Use gitter and the angular/angular channel if you have questions. They are friendly and responsive.

rxjs of course rocks it.
and of course built with and in typescript makes for less buggy code. 

I still have issues with git. I used git but my compile I think breaks the git pull on the server side.
There should be a way to ignore that and just get the damn latest, but NOOOO. git reset --hard didn't fix it either.
boy they should have a command called git just (just get the damn code). I would be happy with that.
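For what it's worth, something close to that "git just" can be had with three commands, wrapped here in a hypothetical git_just helper; this sketch assumes the branch you want is origin/master and that local edits and untracked build output are all disposable:

```shell
# Hypothetical "just get the damn code" helper (assumes origin/master).
git_just() {
    git fetch origin || return 1            # update remote-tracking refs only
    git reset --hard origin/master || return 1  # make HEAD and the work tree match the remote
    git clean -fd                           # delete untracked files/dirs (e.g. compile output)
}
```

A plain `git reset --hard` only resets to the local HEAD, which is why it doesn't help; resetting to origin/master after a fetch does. Adding the compile output directory to .gitignore avoids the broken-pull problem in the first place.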

Otherwise it's great!  This nearly completes my third project in angular.  don't use unless you're making a web-api, it's all it's good for anymore.  Webstorm still rocks as the #1 angular tool, and has good debugging.

gimmie some of that fishing time.


Reasons for Automated Testing

Fri, 24 Feb 2017 00:47:15 GMT

Originally posted on:

I gave a talk at the Edmonton .NET User Group on an introduction to automated testing.  And of course in this introduction I gave a list of reasons why unit testing is a good idea.  Although there are already many blog posts out there on this subject, I thought I’d record my reasons here. Although all of these reasons may appeal to your hipster, idealistic sense of craftsmanship and professionalism, each of these reasons also has a sound business foundation.  Automated testing saves/makes you money.

1. Product Quality

Of course the most obvious reason for automated testing is to improve product quality.  Although automated testing will not guarantee that you’ll never have another bug, it will lower the bug rate.  And having working software is the #1 design goal.  Telling customers about the automated testing that is in place can be a strong marketing point. Nobody will buy software that doesn’t work.

2. Because you are doing it anyway

If you aren’t doing automated testing, then you are doing manual testing.  Manual testing (and all testing) is a thankless job that doesn’t get the credit it deserves.  A lot of people assume it just happens for free. Manual testing does take a lot of time and effort.  Running a test takes a lot of time just to set up.  For example, confirming how the system behaves for a customer with bad credit: you have to create or find a customer record and change its credit status.  Once the setup is done comes the actual testing.  You have to run the program, log in as the customer, and attempt to make a purchase.  This all takes time, several minutes.

Setting up an automated test also takes quite a bit of time.  This is especially true for the first test.  However the good news is you only have to do this once.  Once the test is written you can run it as often as you want, several times an hour typically. Also, running the test is much faster; no having to start the program, log in, and all the other typing and button clicks.  And the code you write to set up your first test will be reused for all the other tests. Although the first test is the most difficult to write, before the first production build of your program the time spent building an automated test suite will be less than the time it takes to manually test the same program once.  And once the automation is in place, it can be run whenever you want, virtually instantaneously. Automated testing takes less time than manual testing.  This is certainly true over the full lifecycle of the program, but it is even true before the first production release.

3. Enables Refactoring

Refactoring is the practice of going back over existing code and improving its readability.  It removes technical debt. A common practice on my team is doing code reviews, especially with new hires.  The result of a code review is often a list of things that should have been done differently.  On receiving these suggestions the programmer asks: should I implement those changes?  The answer is: it depends.

If the testing has been done manually then the answer is often “no”.  The code review is at the end of the project and we are up against a deadline.  Although the changes may be fairly simple, the effort to manually retest the program does not outweigh the benefits of the cleaner code.  It is usually just too risky to change the program. This is true not only for the first release, but future maintenance cyc[...]
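The bad-credit scenario described above, written as an automated test, can look something like this sketch; the Customer model and canPurchase rule are invented for illustration, and plain assertions stand in for a real test framework:

```java
// Hypothetical model for the bad-credit scenario; names are invented
// for this sketch, not taken from any real system.
public class CreditCheckTest {
    enum CreditStatus { GOOD, BAD }

    static class Customer {
        final String name;
        final CreditStatus status;
        Customer(String name, CreditStatus status) {
            this.name = name;
            this.status = status;
        }
    }

    // Business rule under test: customers with bad credit may not purchase.
    static boolean canPurchase(Customer c) {
        return c.status == CreditStatus.GOOD;
    }

    public static void main(String[] args) {
        // Setup is code, not clicking through screens: the bad-credit
        // customer is created directly, in milliseconds.
        Customer badCredit = new Customer("Acme", CreditStatus.BAD);
        Customer goodCredit = new Customer("Globex", CreditStatus.GOOD);

        // The whole "log in and attempt a purchase" flow collapses to two checks.
        if (canPurchase(badCredit)) throw new AssertionError("bad credit purchased");
        if (!canPurchase(goodCredit)) throw new AssertionError("good credit blocked");
        System.out.println("all tests passed");
    }
}
```

The point is the shape, not the rule: the several-minutes manual setup becomes one constructor call that can be rerun on every build.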

How To Hire Top iOS App Developers For The Company?

Thu, 23 Feb 2017 06:38:36 GMT

Originally posted on:

Life has become so easy these days, and all the credit goes to smartphones and mobile applications. People can just tap on their smartphones and virtually do everything, right from booking air tickets to paying bills online. All of this has been made possible by mobile applications. The mobile app development world has gone through dramatic changes in the past few years, and the pool of applications available in the App Store is huge. But interactive and useful features define the success of an iOS mobile application, because users always look for simple, unique applications equipped with quality services. As mobile applications have become a lifeline for users, companies now need to hire top iOS app developers to build a robust application. But hiring application developers is not an easy job. Companies need to consider different things like:

- When and where to start the hiring process?
- Criteria for candidates
- Experienced or fresher candidates

It might be a waste of effort and time if a company fails to find the right application developer. That is why companies need to focus on a set of expectations defined for hiring application developers. Following are important tips for iOS app development companies to hire the right candidate for their project:

Project Requirements: The company hires new candidates based on the nature of the project. For a highly advanced project, the company requires experienced developers, and for a simpler one it can hire a fresher candidate. The company should decide whether to hire an experienced or a fresher candidate. If a company is familiar with the difficulty of the project, it can hire developers with the appropriate experience level.

Technical Knowledge: iOS applications are creating a buzz in the mobile market. Candidates should be technically strong enough to handle Objective-C and Swift coding efficiently, because these two languages embody the deep concepts of iOS application development. Dealing with core APIs and other features is a really challenging thing for developers. The company should also look for other technical knowledge in a candidate, like Ionic, Xamarin, HTML, CSS etc.

Test For Candidates: Conducting a test is a good method to hire the right candidates for the company. It makes it easier for companies to know the expertise of a candidate in different areas. Along with technical knowledge, the company can determine the candidate’s knowledge of logical reasoning, quantitative aptitude, general awareness etc. A test can also help the company screen out unqualified candidates sneaking through the hiring process, because app development is a profitable profession and a company can encounter any number of cheating applicants. The company can conduct paper-based or online tests for their potential application developers.

Candidate’s References: No hiring process is complete without this important step. Checking references helps reveal more details about the candidates. The company can actually call the candidate’s ex-clients and ask about their performance. A positive response can help the company determine the reliability and trustworthiness of the candidate.

Previously Developed Apps: Checking previously developed apps will give a strong idea about the developer’s involvement in a specific project. It helps to determine whether the developer completed the project within give[...]

Multithreading with PowerShell using RunspacePool

Wed, 22 Feb 2017 14:29:12 GMT

Originally posted on:

If you are working with PowerShell, you may want to start running certain scripts in parallel for improved performance. While doing some research on the topic, I found excellent articles discussing Runspaces and how to execute scripts dynamically. However I needed to run custom scripts already stored on disk and wanted a mechanism to specify the list of scripts to run in parallel as an input parameter. In short, my objectives were:

- Provide a variable number of scripts, as an input parameter, to execute in parallel
- Execute the scripts in parallel
- Wait for completion of the scripts, displaying their output
- Have a mechanism to obtain success/failure from individual scripts

This article provides two scripts: a “main” script, which contains the logic for executing other scripts in parallel, and a test script that simply pauses for a few seconds.

Async Script

The following script is the main script that calls other scripts in separate runspaces using a Runspace Pool in PowerShell. This script accepts an input array of individual scripts, or script files; you might need to provide the complete path for script files. Then this script starts a new PowerShell instance for each script and creates a reference object in the $jobs collection; the BeginInvoke operation starts the script execution immediately. Last but not least, this script waits for completion of all scripts before displaying their output and result.

It is important to note that this main script expects each script being called to return a specific property called Success to determine whether the script was successful or not. Depending on the script you call, this property may not be available and as a result this main script may report false negatives. There are many ways to detect script failures, so you can adapt the proper method according to your needs. A Test script is provided further down to show you how to return a custom property.

You can call the below script like this, assuming you want to execute two scripts (script1.ps1 and script2.ps1):

./main.ps1 @( {& "c:\myscripts\script1.ps1" }, {& "c:\myscripts\script2.ps1" })

Param(
    [String[]] $toexecute = @()
)

Write-Output ("Received " + $toexecute.Count + " script(s) to execute")

$rsp = [RunspaceFactory]::CreateRunspacePool(1, $toexecute.Count)
$rsp.Open()
$jobs = @()

# Start all scripts
Foreach($s in $toexecute) {
    $job = [Powershell]::Create().AddScript($s)
    $job.RunspacePool = $rsp
    Write-Output $("Adding script to execute... " + $job.InstanceId)
    $jobs += New-Object PSObject -Property @{
        Job = $job
        Result = $job.BeginInvoke()
    }
}

# Wait for completion
do {
    Start-Sleep -seconds 1
    $cnt = ($jobs | Where {$_.Result.IsCompleted -ne $true}).Count
    Write-Output ("Scripts running: " + $cnt)
} while ($cnt -gt 0)

Foreach($r in $jobs) {
    Write-Output ("Result for Instance: " + $r.Job.InstanceId)
    $result = $r.Job.EndInvoke($r.Result)

    # Display complete output of script
    #Write-Output ($result)

    # We are assuming the scripts executed return an object
    # with a property called Success
    if ($re[...]
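For readers outside Windows, the same start-everything/wait/collect-failures shape can be sketched as a small POSIX shell function. This is an analogue only (the real implementation here is the PowerShell Runspace Pool script): each script runs as a background process, and exit codes play the role of the Success property.

```shell
# Analogue of the main script's pattern: launch all scripts, wait for all,
# count failures via exit codes (the stand-in for the Success property).
run_parallel() {
    fail=0
    pids=""
    for s in "$@"; do
        sh "$s" &                      # start each script in the background
        pids="$pids $!"
    done
    for p in $pids; do
        wait "$p" || fail=$((fail + 1))  # collect per-script success/failure
    done
    echo "scripts failed: $fail"
}
```

Usage: `run_parallel /path/script1.sh /path/script2.sh`. Unlike runspaces, these are separate processes, so there is no shared session state between the scripts.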

7. MonoGame - Putting Text Onscreen With SpriteFonts

Wed, 22 Feb 2017 14:00:28 GMT

Originally posted on:

In MonoGame, text is displayed on the screen in much the same way as sprites, or 2D images. The difference is in how you prepare the font and how you work with it. To display text onscreen, you can create your own images of words, letters and numbers, and display them just like regular sprites, but this is incredibly time consuming and not very flexible, so instead we use a SpriteFont, which is a bitmap containing all the characters in a given font, pre-rendered at a specific size. Think of it as a SpriteSheet, only for letters, numbers and symbols, and you won’t be far off.

Creating the SpriteFont

In your Solution Explorer (below), double-click the Content.mgcb icon. (You may have to associate it with a program. If so, select MonoGame Content Builder.) Once you have the MG Content Builder open, click the Add New Item button (below), then select SpriteFont Description and enter a name for your SpriteFont. This will create an XML file which you will need to add to your project, but first, let’s take a look inside. You can open up your copy by right-clicking the file and selecting Open File. If you open it in Notepad, it’s a bit of a mess, so I recommend using Notepad++ or Visual Studio so you can really see what’s going on. For now, just focus on a couple of key areas: FontName and Size. You’ll notice they’re currently set to Arial and 12, respectively. Just for fun, change them to “Verdana” and “36”, and then save the file.

Go back to the Content Builder, and hit the F6 key to build your SpriteFont. This is where the MonoGame Content Pipeline reads in your XML file, looks up the specified font on your system, and generates a spritesheet containing images of all of the characters in your font, at the specified size. Assuming you didn’t introduce any typos, you’ll get a message saying the build was successful.
Go back to Visual Studio, right-click on the Content folder again and select Add –> Existing Item. You’ll probably have to change the file filter to “All Files (*.*)” in order to see your SpriteFont file, so once you find it (in the Content folder), select it and add it to your project. Now to just add a couple of lines of code, and we’re all set.

Displaying the SpriteFont

At the class level, in your Game1.cs file, right after the redMushroom variable, add this:

SpriteFont verdana36;

(If you didn’t follow the previous post, just add it right before the constructor.) And in the LoadContent() method, add this right after the redMushroom line:

verdana36 = Content.Load<SpriteFont>("demo");

(Again, if you jumped in here, just put it at the end of the LoadContent() method, before the closing curly brace.) I called mine demo.spritefont, but you DON’T put the extension in here or it will throw an error. If you named yours something different, be sure to change it. Finally, inside the Draw() method, put this line in between the spriteBatch.Begin() and .End() methods:

spriteBatch.DrawString(verdana36, "I PUT TEXT ONSCREEN", new Vector2(0, 200), Color.White);

And if you didn’t follow from the previous post, add these lines instead:

spriteBatch.Begin();
spriteBatch.DrawString(verdana36, "I PUT TEXT ONSCREEN!!", new Vector2(50, 275), Color.White);
spriteBatch.End();

That’s it! You’re done.  Just hit [...]

Migrating from NUnit to XUnit–SetupFixture fun

Wed, 22 Feb 2017 04:00:19 GMT

Originally posted on:

I was limited to a specific version of NUnit for various reasons, and needed to start being able to run ‘core’ tests, so I migrated to XUnit. Generally not a problem until I hit ‘SetupFixture’. XUnit has no concept of ‘SetupFixture’ and, from what I can gather, won’t either. So, in order to get the same approach, I need to implement ‘IClassFixture’ on every test class. I could do this by going through each one by one, but then how can I ensure that another developer (or even me!) remembers to do it the next time, for example when creating a new test class? In fact the reason for the SetupFixture was entirely because you can’t assume someone will derive or implement an interface.

In the end, I took the approach of adding another test to my code to ‘test the tests’, ensuring that each test class implements IClassFixture. The code is below, tweak to your own needs!

[Fact]
public void AllClassesImplementIUseFixture()
{
    var typesWithFactsNotImplementingIClassFixture =
        // Get this assembly and its types.
        Assembly.GetExecutingAssembly().GetTypes()
        // Get the types with their interfaces and methods
        .Select(type => new { type, interfaces = type.GetInterfaces(), methods = type.GetMethods() })
        // First we only want types which have a method that is a 'Fact'
        .Where(t => t.methods.Select(Attribute.GetCustomAttributes)
                             .Any(attributes => attributes.Any(a => a.GetType() == typeof(FactAttribute))))
        // Then keep only the types that do NOT implement IClassFixture<T>
        .Where(t => t.interfaces.All(i => !i.IsGenericType || i.GetGenericTypeDefinition() != typeof(IClassFixture<>)))
        // Select the name
        .Select(t => t.type.FullName)
        .ToList();

    if (typesWithFactsNotImplementingIClassFixture.Any())
        throw new InvalidOperationException(
            $"All test classes must implement IClassFixture<T>{Environment.NewLine}These don't:{Environment.NewLine} * {string.Join($"{Environment.NewLine} * ", typesWithFactsNotImplementingIClassFixture)}");
} [...]
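The "test the tests" idea is language-neutral: reflect over the test classes and fail if any class containing a fact-style method misses the required fixture interface. Here is a sketch of that technique in plain Java with stand-in Fact/ClassFixture types (hypothetical names; the post's real version is the C#/xUnit code above):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FixtureGuard {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Fact {}              // stand-in for xUnit's [Fact]

    interface ClassFixture {}       // stand-in for IClassFixture<T>

    // Hypothetical classes under inspection:
    static class GoodTests implements ClassFixture { @Fact void t() {} }
    static class BadTests { @Fact void t() {} }

    static List<String> offenders(Class<?>... candidates) {
        return Arrays.stream(candidates)
            // keep only classes that actually contain a @Fact method...
            .filter(c -> Arrays.stream(c.getDeclaredMethods())
                               .anyMatch(m -> m.isAnnotationPresent(Fact.class)))
            // ...and of those, the ones that do NOT implement the fixture interface
            .filter(c -> !ClassFixture.class.isAssignableFrom(c))
            .map(Class::getName)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> bad = offenders(GoodTests.class, BadTests.class);
        if (!bad.isEmpty())
            System.out.println("Missing fixture: " + bad);
    }
}
```

The guard itself runs as just another test, so a forgetful developer (or future you) gets a red build instead of silently unfixtured tests.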

Why Should iOS Developers Choose Swift Programming For Applications?

Wed, 22 Feb 2017 07:07:06 GMT

Originally posted on:

When developers decide to start a new iOS app project, they might ponder whether to choose Swift or Objective-C. Apple introduced Swift in 2014 after thorough research and development. Before Swift, Objective-C was the most widely used language for iOS application development.

Some Amazing Features Of Objective-C

Objective-C is considered the core of iOS application development. It has been the top choice for Mac OS development since the 1980s. Though Swift app developers write code in Swift, they are virtually dealing with fundamental concepts inherited from Objective-C. The implementation of Internet protocols in mobile applications became possible due to Objective-C. The provision of third-party frameworks and libraries is a key feature of Objective-C. Many tools used for iOS development are not optimized for Swift but are better compatible with Objective-C. Objective-C also offers a very robust language runtime environment. Hence, for many app development companies, Objective-C is still a favorite choice.

Every iOS application builds on a core foundation of APIs. Basically, Objective-C offers a strong set of APIs with wonderful features. The APIs that are used in Swift for memory management and data wrapping are generally Objective-C based APIs.

Why Are iOS Developers Moving Towards Swift Programming?

As per experienced iOS app developers, Swift has provided amazing capabilities and created eagerness about iOS app development among new developers. Generally, Swift and Objective-C are both good options for building iPhone applications, but each language has its own features and benefits. Swift looks friendly to app developers who have worked with Objective-C before, since Swift has inherited the dynamic object model found in Objective-C. Mobile applications developed with Swift are easily scaled to cloud services.

Swift has adopted safe patterns for iOS application development, delivering more features in order to make coding easier and more flexible. It is equipped with an advanced compiler, debugger and structural framework. The Automatic Reference Counting (ARC) feature is used in Swift to simplify memory management. The modern, standard Cocoa Foundation is used to enhance the Swift framework stack. Cross-platform compatibility makes Swift an even more suitable choice for iOS application development.

Features Of The Swift Language

The Swift language has the ability to create responsive, immersive and consumer-facing iOS applications, all made possible by its advanced features. Following are some advantages of Swift programming:

1. Safe: Type inference is one of the key features of Swift programming which makes it safe. Type inference reduces coding length, and Swift uses inferred defaults unless a type is specified explicitly. This helps developers avoid incorrect code caused by wrong input values. Moreover, Swift has eliminated the null pointer concept used in Objective-C. Messaging a null pointer ensures a crash-free application, but it can generate bugs, as that line of code becomes non-operational (no-op) in nature. Swift instead generates a compiler error whenever developers use nil with non-optional variables in source code. It also generates [...]

Is Your Developer Team Designing for a Disaster?

Tue, 21 Feb 2017 11:38:06 GMT

Originally posted on:

Developer teams work at top speed, and the environment in which they work is demanding and complex. Software is no longer considered done until it’s shipped, and there’s been a tendency to view shipped software as preferable to perfect software. These philosophies, created in large part by agile and lean methodologies, celebrate deployments and meeting release cycle deadlines. But have our standards of working software quality been trumped by the rapid pace of delivery? We may be designing faster, but are we designing for disaster?

We practice Agile development at Stackify, and are advocates of the methodology as a system to build and deploy better software more frequently. What we don’t subscribe to is the notion that the process we take to create better performing, high-quality software is more important than the software itself. If the process gets in the way of the product, it’s time to re-evaluate it.

The beauty of a process like Agile is that you can modify it to suit your team and what’s most important in your delivery. Here’s a bit of what we’ve done to optimize, and some of the things we’ve learned along the way.

Don’t let quality draw the short straw.

I’ve yet to see a project that doesn’t have a problem when it gets past development and heads into the final push towards “done.” Primarily, I see one of two things happen:

Testing cycles are compressed or incomplete due to time constraints. A major contributor to this is that code often isn’t ready to be tested until it’s all ready. As the sprint burns down, more and more code tasks are complete but they’ve yet to be reviewed, merged, and deployed to test environments. The code is rushed through QA, and ultimately, issues are found in production. Fixing those issues robs time away from the next sprint.

Read full article at [...]

Architecting a Developer Team Turnaround

Tue, 21 Feb 2017 11:37:15 GMT

Originally posted on:

A couple years ago, I accepted a position as the “director of digital” at a small marketing firm. And yes, there was still a divide between digital and traditional work there. I was mostly hired for my strategy capabilities and my successes with analyzing digital web properties and finding areas for growth. The title was just that, a title, as I had no direct reports and was only responsible for my own work.

Like many people in leadership positions, I found myself there accidentally. After several months as the resident strategist, I was appointed the leader over the tech team, which consisted of a few developers, a BA, a handful of digital technologists, and the interns.

At the time, the agency was going through a rough patch. In general, the culture was quite toxic. The company’s leaders were at odds with each other, and this created what I can only describe as “camps” of employees who positioned themselves with one of the three founders. As you can imagine, this put most of the employees at odds and created a lot of distrust. The only thing that was consistently working across the company was the gossip machine. Things weren’t much better with the tech group. My team had an “us versus them” mentality because “they don’t get us.”

Read full article at


Developer Teams Must Work Differently in the Future

Tue, 21 Feb 2017 11:36:21 GMT

Originally posted on:

As we inject more new technology into our everyday lives, software development continues to be one of the most popular and fastest growing job fields. But have you ever really thought about the core essence of a developer’s work?

Software developers tend to think and work only inside the tasks they are assigned to. If they are asked to write code for a specific project, they only work on the coding process. As the CEO of a software development company that has been in the industry for more than a decade, I believe that this attitude from developers results only in a loss of career growth.

Why do we call them “developers”? Why not “coders” or “programmers”? Because the word develop implies that you are helping something to grow. Developers are supposed to help the whole project move forward and succeed, not just write code and let it go. Take a look at this average software development workflow:

Requirements -> Design (UX/UI/Graphic) -> Coding -> Testing -> Deployment -> Support

Developers must feel responsible for the successful execution of each step of the workflow. The overall success of their code depends on it. When they put themselves solely in the coding “box,” the quality of their work becomes invisible. It may look like they did a great job even if as a team they failed to deliver the solution the company needed.

Read full article at [...]

Continuous Improvement is Critical for Developer Success

Tue, 21 Feb 2017 11:35:31 GMT

Originally posted on:

From the beginning, it became clear that Kanbanize would need more than just cool perks and a nice office to build a dream team equipped to deal with the challenges of a growing startup. A productive and healthy company culture is not something that will happen on its own—you have to build it. Negative culture, though, somehow does evolve on its own, often as a result of disengaged management or a lack of effort toward nurturing a good working environment. Of course, even if you have the best of intentions there will be mistakes and poor decisions. Your job is to not let them drag you down. Act fast and fix what you can.

A quintessential part of the growth and success of Kanbanize was, and still is, our internal culture. Our culture philosophies are value-centered, with attention paid to efficiency and effectiveness. We’re all about promoting continuous improvement by nurturing personal responsibility and mutual support within our team. We’ve learned many hard lessons, and from them we’ve been able to put together what we believe makes our company a great place to work better, build better, and deliver faster.

1. Live what you preach and don’t compromise your values.

From day one, Kanbanize followed the principles of Lean methodology and Kanban in all of our departments. Every employee actively applies these lean values, and the product we develop, in all of their work. We don’t just preach and pitch Kanbanize to others, we live it every day.

Based on the combination of the five principles of Lean and the visual nature of a Kanban board with its cards and just-in-time delivery principles, we’ve managed to establish a consistent, reliable, and efficient flow of work across the organization. We admit that we are far from perfect, but we believe perfection is in the process, not the destination.

Read full article at [...]

Changing the World with Agile and Lean Startup Methods

Tue, 21 Feb 2017 11:34:31 GMT

Originally posted on:

I can’t tell you how many companies I’ve seen, of all sizes, who start out with an innovative mission to change the world. They acquire a budget, rally a team together, and get to work for months or years with a grand vision. As time goes on, the scope of the project evolves and grows as complex designs are handed to software engineers from on high. Pressure comes down on the software engineers to pick up the pace, and the scope continues to grow. After all, we’ve waited so long to release this product, what’s another few weeks? But the longer we wait, the more funds are expended, and the more teams become stressed.

In teams that claim to be agile, sprint meetings become focused on point velocity and “requirements,” rather than actual user stories. Tasks are assigned with very little understanding of the big picture. Someone argues that scrum points should be associated with hours, and someone else says days. In the end, both arguments prove equally worthless as software engineers begin to feel that pointing is just a mechanism for micromanaging how much they accomplished in a week.

Finally the product is released! But virtually no one cares. Customers do not emerge. Frantically, every failure becomes someone else’s fault, every success the result of one’s own work.

“Let’s never do that again,” we say. But how do we avoid it?

Concierge Auctions, an online luxury real estate marketplace founded by Laura Brady, was created in order to better facilitate the auctioning of elite properties around the world. Our purpose has always been to solve problems and make processes easier, faster, and better, and that requires a team culture that operates efficiently and with purpose. Here are the cornerstones of how Concierge Auctions innovated on our own process and built a profitable online real estate marketplace with over $1 billion in sales.

Read full article at [...]

Past the Ping Pong Table for Better Dev Team Culture

Tue, 21 Feb 2017 11:33:43 GMT

Originally posted on:

There’s quite a bit of press paid to the quirky perks of startup offices. From ping pong tables to beer kegs to pet-friendly work spaces, sometimes it seems like offices are more into having fun than getting work done. But employees who are healthy and appreciated are going to be more inspired, more creative, and more productive.

In order to stay ahead in this competitive industry, startups need team members who have drive, stamina, and, most importantly, a constant stream of creativity. And it takes a lot more to inspire creativity than just a laptop. It takes a culture of support, trust, and, yes, a few extra perks, to bring out the best in employees.

Put people first.

At Xivic, we never underestimate the importance of a strong, positive workplace culture, and we see the benefits of our efforts in the variables and outputs from our employees. We consider the maintenance of our company culture to be a task worthy of our full-time attention. It’s easy to push culture aside when the projects and deadlines are piling up, but without it, you’re not inspiring creativity or accountability, and you’re not going to be producing the best work possible. We’re constantly asking individuals what kind of output and feedback they’re getting and updating our plans based on what they have to say.

We have two offices—one in Los Angeles, and the other in Romania—and we go to great lengths to keep both offices informed and in communication. Our employees are brilliant thought leaders who not only work together, but also genuinely like each other. The community is formed with a clear understanding of everyone’s roles and how these roles should work together. We’re all on the same page. It’s important for all employees to be aware of what everyone else is doing, learn about each other’s tasks, and think holistically about projects. For companies who are just learning to implement this, I suggest starting small. Try to get people within individual departments to work together on one task, and then start expanding to other departments. It’s always easier to collaborate within a department and then grow from there.

Read full article at [...]

Inspire Your Developer Team in 5 Simple Ways

Tue, 21 Feb 2017 11:33:15 GMT

Originally posted on:

We have a great developer team. We make software. We also invest a lot of time and energy in ensuring that the work we do is excellent. It goes without saying, then, that our development team is a crucial part of our business.

As a team, how we work gets just as much attention as the work we do. This mindset recently played an integral role when we rebranded our company to better align with our values, the way we work, and our thoughts about software.

I’ve always regarded my job as a team leader to be a facilitator of great decision making, not just the one responsible for making good decisions all the time. Since I’m nowhere near technical enough to code anymore, I take a similar approach to enabling our development team to do great work. I help create an environment that inspires success—the rest is up to them.

Over the years, first with WooCommerce/WooThemes and now with Conversio, my approach to working with development teams has evolved. Here are a few pillars that dictate the way my team works and collaborates today.

1. Offer flexible, remote working.

Our team is fully remote, which means that no one can be that pesky manager trying to backseat pair programming or micro-manage projects. All members of the team have the space to tackle challenges in their own way and (mostly) at their own pace. It’s true that diamonds are crafted under immense pressure, but that approach generally does not bode well for software teams. Fixing the odd bug under pressure is probably okay, but being creative and solving a problem in the most efficient way possible requires time and space.

Read full article at [...]

5 Principles for a Developer Team Culture that Wins

Tue, 21 Feb 2017 11:32:25 GMT

Originally posted on:

Of all the overloaded, over-used, buzzword-bingo, five-dollar, corporate-jingoistic catchphrases of leadership jargon out there, are there any that are more hyped yet seemingly less tangible than this one? Developer teams are not immune to cultural risks and benefits.

“They offer catered lunches, free snacks, a kegerator, and foosball!” Don’t get me wrong—I like those things! But once you’re in the door and working at a company, what really matters is feeling like you’re part of something special; that you “fit” there.

Whether you’re leading a small dev team, are in charge of a large cross-functional crew that spans the entire technology ecosystem, or are a founder who is building everything from scratch, there are real, tangible, practical steps you can take to improve your work environment without having to clear out a conference room for another foosball table.

Let me start with a few admissions. First, although I have helped craft the cultures at a number of organizations, I’m by no means the foremost authority. Second, Stackify (my current organization) doesn’t have it all figured out. We’re as imperfect as the next company. To be a part of a company is to embrace a certain level of imperfection. An important corollary is that no culture is perfect for all people. Nor should it be. What a culture can be is a vital instrument that propels the success of an organization—or, on the flip side, propels its undoing.

Read full article at [...]

Coded to Teach

Tue, 21 Feb 2017 11:30:31 GMT

Originally posted on:

You will be hard-pressed to find someone who doesn’t enjoy or engage with video games in this modern age. From mobile apps and Facebook games to everyday gamification, we are always at play. Developer turned founder Stephen Foster found his success in the gaming world by creating a way to help others learn to code. ThoughtSTEM was created to help people, particularly those under the age of 18, learn to write and develop their own code.

A challenge many founders experience is finding a unique sell, and Foster does just that with ThoughtSTEM. “We want to teach as many people as possible how to code, and the secret sauce is video games.” With over a million programs written through their software, Foster used his passion to build a successful business. While there were ups and downs over his coding lifetime, his passion for teaching and learning computer science drove the creation of video game-based learning. “We just kind of lucked into it because we were bored in our lectures.” Deciding not to go to college and falling into a get-rich-quick scheme left him with one positive takeaway: a drive to become his own boss. He did eventually return to complete his education, and Foster now holds a Ph.D.

While many developers-turned-founders cite that it can be a struggle to have a technical background in a founder role, Foster sees it as an asset. “I’ve seen other founders who didn’t have a technical background. They struggled to manage and communicate with their developers.” The ability to speak your developers’ language is essential at a tech company. While other founders have mentioned marketing and sales as areas they may struggle with, technical communication is still key.

Read full article at


Developing for Change

Tue, 21 Feb 2017 11:29:23 GMT

Originally posted on:

John Negron, Founder / Be a Doer

The road from developer to founder takes people down many different paths. In the process of creating a non-profit volunteer pipeline and event management platform, Be a Doer founder John Negron discovered the experience to be a lot harder than expected. Negron grew up in an immigrant family with little initial interest in computers. While working at NASA and starting out in C++, Perl, and the .Net space, Negron was an avid volunteer in his community. During this time, he found that while organizations are great at serving people or communities in need, they often struggle to engage volunteers in a meaningful and valuable way. Later, this experience would become the foundation for Be a Doer.

Throughout his career as a developer, Negron dabbled in start-ups and product launches before his own journey began. “As a dev, the markets I’ve been in, there are a lot of people with great ideas. Every now and then, I get a pitch to come be the CTO of ‘this’. I wanted to help, and I did here and there.” However, after taking a hard look at his own passions and skill sets, he decided to focus on something that would feel less like work. After volunteering at a soup kitchen in NYC, the idea for Be a Doer really came to life. “It really just became a hobby, like, wouldn’t this be cool.” While working full time as a developer, he slowly began to transition into the founder role. “As a business person, it’s not just about the product, or about a hobby. Getting into an accelerator program really helped me realize this.” After a successful soft launch and seeing traction with the product, he decided to go all in.

Read full article at
