Geekswithblogs.net
http://geekswithblogs.net/MainFeed.aspx

ICYMI: Programming the Microsoft Bot Framework

Wed, 10 Jan 2018 14:13:54 GMT

Originally posted on: http://geekswithblogs.net/WinAZ/archive/2018/01/10/icymi-programming-the-microsoft-bot-framework.aspx

My latest book, Programming the Microsoft Bot Framework: A Multiplatform Approach to Building Chatbots, is now available. You can find more details on the Microsoft Press site.

@JoeMayo




Hello World in Xamarin

Wed, 27 Dec 2017 15:01:41 GMT

Originally posted on: http://geekswithblogs.net/anirugu/archive/2017/12/28/hello-world-in-xamarin.aspx

width="560" height="315" src="https://www.youtube.com/embed/MTxkvRdzfOg" frameborder="0" gesture="media" allow="encrypted-media" allowfullscreen=""> (image) (image)



Released LINQ to Twitter v4.2.0

Tue, 19 Dec 2017 10:03:20 GMT

Originally posted on: http://geekswithblogs.net/WinAZ/archive/2017/12/19/released-linq-to-twitter-v4.2.0.aspx

Today, I released the latest version of LINQ to Twitter. In addition to fixing bugs, the highlighted features of this release include support for DM Events, Extended Tweets, and .NET Core. Here's a demo of using extended mode tweets in a Search query:

static async Task DoSearchAsync(TwitterContext twitterCtx)
{
    string searchTerm = "\"LINQ to Twitter\" OR Linq2Twitter OR LinqToTwitter OR JoeMayo";

    Search searchResponse = await
        (from search in twitterCtx.Search
         where search.Type == SearchType.Search &&
               search.Query == searchTerm &&
               search.IncludeEntities == true &&
               search.TweetMode == TweetMode.Extended
         select search)
        .SingleOrDefaultAsync();

    if (searchResponse?.Statuses != null)
        searchResponse.Statuses.ForEach(tweet =>
            Console.WriteLine(
                "\n  User: {0} ({1})\n  Tweet: {2}",
                tweet.User.ScreenNameResponse,
                tweet.User.UserIDResponse,
                tweet.Text ?? tweet.FullText));
    else
        Console.WriteLine("No entries found.");
}

Notice that the query now has a TweetMode property. You can set this to the enum value TweetMode.Extended to request tweets that go beyond 140 characters. To handle the difference between Classic and Extended tweets, the Console.WriteLine statement uses either tweet.Text or tweet.FullText. A null tweet.Text indicates that the tweet is extended, which is consistent with the Twitter API convention.

Here are a few API queries associated with Direct Message Events:

Show:

DirectMessageEvents dmResponse = await
    (from dm in twitterCtx.DirectMessageEvents
     where dm.Type == DirectMessageEventsType.Show &&
           dm.ID == 917929712638246916
     select dm)
    .SingleOrDefaultAsync();

MessageCreate msgCreate = dmResponse?.Value?.DMEvent?.MessageCreate;

if (dmResponse != null && msgCreate != null)
    Console.WriteLine(
        "From ID: {0}\nTo ID: {1}\nMessage Text: {2}",
        msgCreate.SenderID ?? "None",
        msgCreate.Target.RecipientID ?? "None",
        msgCreate.MessageData.Text ?? "None");

List:

int count = 10; // intentionally set to a low number to demo paging
string cursor = "";
List<DMEvent> allDmEvents = new List<DMEvent>();

// you don't have a valid cursor until after the first query
DirectMessageEvents dmResponse = await
    (from dm in twitterCtx.DirectMessageEvents
     where dm.Type == DirectMessageEventsType.List &&
           dm.Count == count
     select dm)
    .SingleOrDefaultAsync();

allDmEvents.AddRange(dmResponse.Value.DMEvents);
cursor = dmResponse.Value.NextCursor;

while (!string.IsNullOrWhiteSpace(cursor))
{
    dmResponse = await
        (from dm in twitterCtx.DirectMessageEvents
         where dm.Type == DirectMessageEventsType.List &&
               dm.Count == count &&
               dm.Cursor == cursor
         select dm)
        .SingleOrDefaultAsync();

    allDmEvents.AddRange(dmResponse.Value.DMEvents);
    cursor = dmResponse.Value.NextCursor;
}

if (!allDmEvents.Any())
{ [...]



Importing SSIS Flat Files in Redshift Using COPY

Mon, 18 Dec 2017 16:59:51 GMT

Originally posted on: http://geekswithblogs.net/influent1/archive/2017/12/18/244635.aspx

After trying a million combinations, I finally figured out how to export data in SSIS using an OLE DB source (SQL Server) and a flat file destination. In the end, all I really needed to do was use "ENCODING UTF16" in the COPY command in Redshift. None of the settings I changed in SSIS actually helped, aside from making sure the Unicode box was checked in the General tab of the Flat File Destination settings.



Creating a T-SQL Script to Enable Query Store and Auto Tuning for Each User Database

Tue, 12 Dec 2017 11:02:18 GMT

Originally posted on: http://geekswithblogs.net/influent1/archive/2017/12/12/244633.aspx

select 'ALTER DATABASE [' + name + '] SET QUERY_STORE = ON; ALTER DATABASE [' + name + '] SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );'
from sys.databases 
where name not in ('master','model','msdb','tempdb')






SEO, Personas, and Analytics - Bridging the IT / Marketing Gap

Mon, 11 Dec 2017 09:11:27 GMT

Originally posted on: http://geekswithblogs.net/jjulian/archive/2017/12/11/seo-personas-and-analytics-bridging-the-it--marketing.aspx


As marketing teams and IT teams cross paths more often, one area where they could both use some collaborative education is performance analytics. In this episode of Marketer-to-Marketer, Ardath Albee and Andy Crestodina discuss buyer personas, SEO, and data analytics.



In Visual Studio, how do I pass arguments in debug to a console program?

Thu, 30 Nov 2017 18:09:09 GMT

Originally posted on: http://geekswithblogs.net/AskPaula/archive/2017/11/30/in-visual-studio--how-do-i-pass-arguments-in.aspx

  1. Go to your project properties, either by right-clicking on the project and picking "Properties" or by picking Properties from the Project menu.

  2. Click on the Debug tab, then enter your arguments into the "Command line arguments" field (your program receives them through Main's args parameter; see the sketch below).

  3. Save.
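For reference, here's a minimal sketch of how those arguments arrive in a console program (the echo loop is just for illustration):

using System;

class Program
{
    static void Main(string[] args)
    {
        // With: alpha beta "gamma delta" in the arguments field,
        // args contains ["alpha", "beta", "gamma delta"]
        for (int i = 0; i < args.Length; i++)
            Console.WriteLine($"args[{i}] = {args[i]}");
    }
}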




Generating a SAS token for Service Bus in .Net Core

Wed, 29 Nov 2017 15:16:18 GMT

Originally posted on: http://geekswithblogs.net/SoftwareDoneRight/archive/2017/11/29/generating-a-sas-token-for-service-bus-in-.net-core.aspx

Microsoft has decided to separate the queue/topic send/receive functionality from the queue/topic management functionality. Some of these separations make sense, while others, like the inability to auto-provision new queues/topics, do not. In any event, we can still create these objects using the Service Bus REST API, but it requires some special handling, especially for authorization.

The send/receive client library uses a connection string for authentication. This is great. It's easy to use and can be stored as a secret. No fuss, no muss. The REST API endpoints require a SAS token for authorization. You would think there would be a provider to produce a SAS token for a resource, given the resource path and connection string. You would be wrong. Finding working samples of the token generation using .NET Core 2.x was surprisingly difficult. In any event, after (too) much researching, I've come up with this:

public interface ISasTokenCredentialProvider
{
    string GenerateTokenForResourceFromConnectionString(string resourcePath, string connectionString, TimeSpan? expiresIn = null);
}

public class SasTokenCredentialProvider : ISasTokenCredentialProvider
{
    private readonly ILogger logger;

    public SasTokenCredentialProvider(ILogger logger)
    {
        this.logger = logger;
    }

    public string GenerateTokenForResourceFromConnectionString(string resourcePath, string connectionString, TimeSpan? expiresIn = null)
    {
        if (string.IsNullOrEmpty(resourcePath))
        {
            throw new ArgumentNullException(nameof(resourcePath));
        }
        if (string.IsNullOrEmpty(connectionString))
        {
            throw new ArgumentException(nameof(connectionString));
        }

        // parse the connection string into useful parts
        var connectionInfo = new ServiceBusConnectionStringInfo(connectionString);

        // concatenate the service bus uri and resource paths to form the full resource uri
        var fullResourceUri = new Uri(new Uri(connectionInfo.ServiceBusResourceUri), resourcePath);

        // ensure it's URL encoded
        var fullEncodedResource = HttpUtility.UrlEncode(fullResourceUri.ToString());

        // default to a 10 minute token
        expiresIn = expiresIn ?? TimeSpan.FromMinutes(10);
        var expiry = this.ComputeExpiry(expiresIn.Value);

        // generate the signature hash
        var signature = this.GenerateSignedHash($"{fullEncodedResource}\n{expiry}", connectionInfo.KeyValue);

        // assemble the token
        var keyName = connectionInfo.KeyName;
        var token = $"SharedAccessSignature sr={fullEncodedResource}&sig={signature}&se={expiry}&skn={keyName}";

        this.logger.LogDebug($"Generated SAS Token for resource: {resourcePath} service bus:{connectionInfo.ServiceBusResourceUri} token:{token}");
        return token;
    }

    private long ComputeExpiry(TimeSpan expiresIn)
    {
        return DateTimeOffset.UtcNow.Add(expiresIn).ToUnixTimeSeconds();
    }

    private string GenerateSignedHash(strin[...]
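The listing above is truncated in the feed. For reference, a minimal sketch of what the missing GenerateSignedHash body presumably does, assuming the standard HMAC-SHA256 scheme that Service Bus SAS tokens use (this is a reconstruction, not the author's original code; it assumes the same usings as the class above plus System.Text and System.Security.Cryptography):

private string GenerateSignedHash(string stringToSign, string key)
{
    // sign the UTF-8 bytes of "<encoded resource>\n<expiry>" with the shared access key
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
        return HttpUtility.UrlEncode(Convert.ToBase64String(hash));
    }
}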



A Short Deep Learning Study Guide

Mon, 27 Nov 2017 22:34:36 GMT

Originally posted on: http://geekswithblogs.net/JoshReuben/archive/2017/11/28/a-short-deep-learning-study-guide.aspx

Basic Machine Learning pre-requisites:
- Python: http://docs.python-guide.org/en/latest/intro/learning/
- Jupyter: http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/
- Scipy: http://www.scipy-lectures.org/
- Pandas: https://pandas.pydata.org/pandas-docs/stable/10min.html
- Scikit-Learn: http://scikit-learn.org/stable/user_guide.html
- Seaborn charts: https://seaborn.pydata.org/tutorial.html
- Brief intro to neural nets: https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks
- Spark MLlib: https://spark.apache.org/mllib/
- Mathematical foundations of Machine Learning: https://www.youtube.com/playlist?list=PLD0F06AA0D2E8FFBA

Deep Learning:
- Neural networks refresher: https://www.youtube.com/playlist?list=PLiaHhY2iBX9hdHaRr6b7XevZtgZRa1PoU
- DL tutorial (Theano code, briefly covers all topics): http://deeplearning.net/tutorial/
- Learning TensorFlow: https://learningtensorflow.com/
- Jason Brownlee's blog (all Keras / LSTM tutorials): http://machinelearningmastery.com/blog/
- TensorFlow docs: https://www.tensorflow.org/api_docs/python/
- Keras docs: https://keras.io/
- Google Cloud Machine Learning (TensorFlow scale-out): https://cloud.google.com/ml-engine/docs/
- TensorFlow Cookbook: https://github.com/nfmcclure/tensorflow_cookbook
- Deep learning textbook online: http://www.deeplearningbook.org/
- A study guide: http://yerevann.com/a-guide-to-deep-learning/

Beyond Deep Learning - Reinforcement Learning:
- Blog series: https://github.com/dennybritz/reinforcement-learning
- Summary of the bleeding edge, OpenAI blogs: https://blog.openai.com/ [...]



Swashbuckle Swagger UI– Prompt for Access Token (.net Core)

Wed, 22 Nov 2017 10:21:17 GMT

Originally posted on: http://geekswithblogs.net/SoftwareDoneRight/archive/2017/11/22/swashbuckle-swagger-uindash-prompt-for-access-token-.net-core.aspx

I use Swagger to document my API endpoints. I like the descriptive nature, and find the Swagger UI to be a great place for quick testing and discovery. The Swagger UI works great out of the box for unsecured API endpoints, but doesn't seem to have any built-in support for requiring users to supply an access token if it's required by the endpoint.

Based on my research, it appears we can add an operation filter to inject the parameter into the Swagger UI. Using the code at https://github.com/domaindrivendev/Swashbuckle/issues/290 as a guide, I've ported the filter to .NET Core (2.0) as:

/// <summary>
///     This swagger operation filter
///     inspects the filter descriptors to look for authorization filters
///     and, if found, will add a non-body operation parameter that
///     requires the user to provide an access token when invoking the api endpoints
/// </summary>
public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    #region Implementation of IOperationFilter

    public void Apply(Operation operation, OperationFilterContext context)
    {
        var descriptor = context.ApiDescription.ActionDescriptor;
        var isAuthorized = descriptor.FilterDescriptors
            .Any(i => i.Filter is AuthorizeFilter);
        var allowAnonymous = descriptor.FilterDescriptors
            .Any(i => i.Filter is AllowAnonymousFilter);

        if (isAuthorized && !allowAnonymous)
        {
            if (operation.Parameters == null)
            {
                operation.Parameters = new List<IParameter>();
            }

            operation.Parameters.Add(new NonBodyParameter
            {
                Name = "Authorization",
                In = "header",
                Description = "access token",
                Required = true,
                Type = "string"
            });
        }
    }

    #endregion
}

and add it to the Swagger middleware:

services.AddSwaggerGen(c =>
{
    …
    c.OperationFilter<AddAuthorizationHeaderParameterOperationFilter>();
});

That's it! Now when an endpoint requires an access token, the Swagger UI will render a parameter for it: [...]



Beginning Azure Machine Learning

Mon, 13 Nov 2017 18:20:57 GMT

Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2017/11/14/beginning-azure-machine-learning.aspx

Free version of Azure ML Studio for learning:

https://studio.azureml.net

Azure ML cheat sheet for selecting an ML algorithm for your experiments:

https://docs.microsoft.com/en-us/azure/machine-learning/studio/algorithm-cheat-sheet




Announcing Enzo Online: IoT and Mobile Development Made Easier

Mon, 13 Nov 2017 08:30:56 GMT

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/11/13/announcing-enzo-online-iot-and-mobile-development-made-easier.aspx

If you are a software developer who would like to build a mobile application or an IoT system, it is likely that you will experience a steep learning curve for two reasons: development languages and lack of SDKs. Indeed, every platform has its own programming language variation: Objective-C on iOS, C++ on Arduino boards, Python on Raspberry Pis (or other supported languages), .NET/JavaScript for MVC web applications, PowerShell for DevOps... and so forth. The second learning curve is around the limited (or complex) support for Software Development Kits (SDKs) on certain platforms, such as Microsoft Azure, or other technologies that either do not have formal SDKs (such as sending a text message from an Arduino board) or introduce breaking changes when upgrading.

To simplify an already complex ecosystem of languages and platforms, I created a new kind of cloud technology: Enzo Online. Enzo Online is an HTTP Protocol Bridge that makes it very easy to access other services. With Enzo Online, you can easily configure access to some of your key cloud services, and call them from your Mobile apps and IoT devices without the need to download an SDK. In other words, Enzo Online allows you to configure your services once, and reuse from any language/platform through HTTPS calls.

During the Preview phase of Enzo Online, you can query SQL Server/Azure and MySQL databases, send SMS messages, Emails, and access other Microsoft Azure services (Service Bus, Azure Storage, Azure Key Vault) by sending simple HTTPS commands.
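To make the idea concrete, here is a hypothetical sketch of what calling such an HTTP bridge could look like from C# (the host name, path, and header below are illustrative placeholders, not Enzo Online's actual API):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class EnzoSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // hypothetical auth header and endpoint that relays an SMS send
            client.DefaultRequestHeaders.Add("authToken", "<your-token>");
            var response = await client.GetAsync(
                "https://<your-instance>.enzounified.com/sms/send?phone=5551234567&message=hello");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}

The point is that any platform capable of issuing an HTTPS request can use the service; no SDK is involved.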

At the time of this writing, Enzo Online is in preview, and many more services will be added over time.

Visit https://portal.enzounified.com for more information.

This is a cross post from https://www.herveroggero.com/single-post/2017/11/07/IoT-and-Mobile-Development-Made-Easier

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.




Octopus Deploy : Format exception unpacking Windows NuGet package onto Linux box

Fri, 10 Nov 2017 10:59:07 GMT

Originally posted on: http://geekswithblogs.net/alexhildyard/archive/2017/11/10/octopus-deploy--format-exception-unpacking-windows-nuget-package-onto.aspx

There appears to be an issue with OD's "Deploy NuGet package" task (tested in Octopus Deploy 3.1.5). We encountered the problem while unpacking a .NET Core 2.0 application on a Windows Tentacle and copying the resultant assets to an Ubuntu box. We noticed that if the assets were extracted using OD's built-in "Deploy package" task, the application would fail to launch on the Linux box with an "incorrect format" exception. If we unpacked the NuGet package some other way, however (e.g. manually extracting it), the application would run without error.

As a workaround, there are quite a few options. We have tried both of the following successfully:

1. Share the Octopus Packages folder, and then replace the "Deploy NuGet Package" step with a call to the "[System.IO.Compression.ZipFile]::ExtractToDirectory()" method, using the Octopus.Project.Name and Octopus.Release.Number properties to identify the package to extract and the Octopus.Tentacle.Agent.ApplicationDirectoryPath and Octopus.Environment.Name properties to identify the desired extraction path.

2. Install mono on the Linux box in question and unpack the package there using the "Deploy NuGet Package" step.

I have raised a Support ticket with Octopus Deploy:





Starting An Umbraco Project

Fri, 10 Nov 2017 07:08:50 GMT

Originally posted on: http://geekswithblogs.net/tmurphy/archive/2017/11/10/starting-an-umbraco-project.aspx

As I have been documenting Umbraco development I realized that people need a starting point. This post will cover how to start an Umbraco project using an approach suitable for ALM development processes. The criteria I feel a maintainable solution includes are a customizable development project which can easily be kept in source control, and a robust and replicable database. Of course this has to fall within the options available with Umbraco. For me this means an ASP.NET web application and a SQL Server database. Let's take a look at the steps required to get started with this architecture.

Create The Database

I prefer a standard SQL Server database instance over SQL Server Express due to its manageability. For each Umbraco instance we need to create an empty database, and then a SQL Server login and a user with permissions to alter the database structure. You will need the login credentials when you first start your site.

Create The Solution

This is the easiest part of an Umbraco project. The base of each Umbraco solution I create starts with an empty ASP.NET Web Application. Once that is created, open the NuGet package manager and install the UmbracoCms package. After that it is simply a matter of building and executing the application.

Finish Installation

As the ASP.NET application starts it will present the installation settings. The first prompt you will get is to create your admin credentials. Fill these fields in but don't press any buttons. The key is to be sure to click the Customize button before the Install button, as the installer doesn't verify whether you want to use an existing database before running the install. It will simply create a SQL Server Express instance on its own. Pressing the Customize button will show the configuration screen. Fill in your SQL Server connection information and click Continue.

Conclusion

Once you start the install, sit back and relax. In a few minutes you will have an environment that is ready for your Umbraco development. This will be the starting point for other future posts. Stay tuned. [...]



Relating Umbraco Content With the Content Picker

Wed, 08 Nov 2017 14:36:09 GMT

Originally posted on: http://geekswithblogs.net/tmurphy/archive/2017/11/08/relating-umbraco-content-with-the-content-picker.aspx

After addressing Umbraco team development in my previous post, I want to explore maintaining relationships between pieces of content in Umbraco and accessing them programmatically. For those of us who have a natural tendency to think of data entities and their relationships, working within a CMS hierarchy can be challenging. Add to that the fact that users don't only want to query within that hierarchy, and things get even more challenging. Fortunately we will see here that by adding the Content Picker to your document type definition and a little bit of LINQ to your template, you can deliver on all these scenarios.

Content Picker

Adding the Content Picker to your document type definition is the easiest part of the process, but make sure that you use the new version and not the one that is marked as obsolete. You will then be presented with a content tree that allows you to navigate to and select any node in your site.

Querying Associated Content

The field in your content will return the ID of the content instance you associated using the Content Picker. Unfortunately it actually returns it as an HtmlString, so you need to use the ToString method before utilizing it or you will get unexpected results and compile errors.

In the example below I am looking for the single piece of content selected in the Content Picker. The LINQ query shows a more complicated approach, but it also gives you an idea of how you could get a list of all nodes of a certain content type and use a lambda expression to filter it. It requires that you first back up to the ancestors of the content you are displaying and find the root. The easier way is to use the Content or TypedContent methods of the UmbracoHelper. In future posts I will show alternate methods for finding the root node as well.

var contentId = Umbraco.Field("contentField");
var associatedContent = Model.Content.Ancestors().FirstOrDefault()
    .Children()
    .Where(x => x.Id == int.Parse(contentId.ToString()))
    .FirstOrDefault();

Conclusion

While the Umbraco team needs to create some better documentation for this feature, it is extremely useful for building and using relationships between content in your Umbraco site. [...]
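As a side note, the simpler UmbracoHelper route mentioned above might look like this (a sketch assuming Umbraco 7's Umbraco.TypedContent helper; "contentField" is the same example alias used above):

var contentId = Umbraco.Field("contentField");
// TypedContent resolves a node by ID and returns IPublishedContent (or null)
var associatedContent = Umbraco.TypedContent(int.Parse(contentId.ToString()));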



How Did I Become An IT Consultant Curmudgeon?

Thu, 02 Nov 2017 01:25:29 GMT

Originally posted on: http://geekswithblogs.net/tmurphy/archive/2017/11/02/how-did-i-become-an-it-consultant-curmudgeon.aspx


I have been accused of being a curmudgeon by more than one co-worker. The short, pithy answers to the question of how I got to this point would be "experience" or "it comes with age". But what is the real reason, and does it have any benefit?

Firstly, I was raised in an Irish-German family, which by default makes me surly and sarcastic. At almost half a century of this habit, I don't see any change coming there. I also find that most developers have similar traits, along with a dry sense of humor.

The main thing that gained me the label of curmudgeon is the knack for identifying issues that could adversely affect a project.  Things like scope creep and knowledge voids that could push a project beyond its budget and deadlines.  This tendency has come from 20 years of being a technical lead and architect.  It is an attribute that has served me well.

The place where this becomes a thorn in some team members' sides is that with the years I think I have become more blunt with my assessments. I am still capable of tact, but probably need to employ it a little more liberally.

Ultimately, I think being a self-aware-curmudgeon is a good thing.  As long as we continue to learn and strive to work with people a little surly is just the spice of life.




Umbraco Team Development

Fri, 27 Oct 2017 05:20:40 GMT

Originally posted on: http://geekswithblogs.net/tmurphy/archive/2017/10/27/umbraco-team-development.aspx


The Umbraco CMS platform gives you the ability to create a content-managed site with the familiar development process of ASP.NET MVC. If you are the only developer things don't get too complicated, but the moment you share your solution with a team you gain a few wrinkles that have to be addressed.

Syncing Content and Document Types

Umbraco saves its content partially to the file system and partially to the database. This complicates sharing document types, templates, and content between developers. While Courier allows you to sync these elements between your local machine and your stage and production servers, it doesn't do well between two localhost instances.

We addressed this problem using the uSync package, which is available on the Umbraco package page under the developer section of your site. It allows you to automatically record changes every time you make a save in the back office. They are saved to files that can then be transferred to another system, where they will automatically be imported when the application pool restarts.

Other Special Cases

Another area that you will have to address is the content indexing files and the location of the auto-generated classes. Files like "all.dll.path" will need to be ignored in your source control, since they contain a fully qualified directory location which will cause problems as you pull source code to multiple machines.

Of course you will also need to manage your web.config files as you would with any ASP.NET based solution to make sure that you don’t step on each developer’s local settings.

Summary

If you follow these couple of guidelines you will overcome some of the more annoying aspects of developing an Umbraco solution. Ultimately, the lives of your developers will be much easier.




.NET Core–Push Nuget Package After Build

Thu, 26 Oct 2017 07:38:56 GMT

Originally posted on: http://geekswithblogs.net/SoftwareDoneRight/archive/2017/10/26/.net-corendashpush-nuget-package-after-build.aspx

You can configure .NET Core to automatically push your NuGet package to the package server of your choice by adding a Target to your project file.

1) If your package server requires an API key, you can set it by calling:

nuget.exe SetApiKey <your-api-key>

2) Add the following target to your csproj file. This sample is configured to only fire on Release builds:

<Target Name="PushPackage" AfterTargets="Pack" Condition="'$(Configuration)' == 'Release'">
  <Exec Command="nuget.exe push $(ProjectDir)bin\$(Configuration)\$(PackageId).$(PackageVersion).nupkg -Source https://www.nuget.org/api/v2/package" />
</Target>

OR

Here’s a version that will ensure releases with a .0 revision number are properly pushed.


<Target Name="PushPackage" AfterTargets="Pack" Condition="'$(Configuration)' == 'Release'">
  <GetAssemblyIdentity AssemblyFiles="$(TargetPath)">
    <Output TaskParameter="Assemblies" ItemName="AssemblyVersion" />
  </GetAssemblyIdentity>
  <PropertyGroup>
    <vMajor>$([System.Version]::Parse(%(AssemblyVersion.Version)).Major)</vMajor>
    <vMinor>$([System.Version]::Parse(%(AssemblyVersion.Version)).Minor)</vMinor>
    <vBuild>$([System.Version]::Parse(%(AssemblyVersion.Version)).Build)</vBuild>
    <vRevision>$([System.Version]::Parse(%(AssemblyVersion.Version)).Revision)</vRevision>
  </PropertyGroup>
  <Exec Command="nuget.exe push $(ProjectDir)bin\$(Configuration)\$(PackageId).$(vMajor).$(vMinor).$(vBuild).$(vRevision).nupkg -Source https://www.nuget.org/api/v2/package" />
</Target>




ETH security feature distribution

Mon, 23 Oct 2017 15:40:34 GMT

Originally posted on: http://geekswithblogs.net/foxjazz/archive/2017/10/23/eth-security-feature-distribution.aspx

Ethereum sidechain feature...

ETH nodes would have a private key to decrypt the code they run. This code is copied x times based on the keys that would have access to the data: one encryption for the sender, one for the receiver, one for the computer, and more for any 3rd parties listed.
The way to seed this process would be hardware access for the secret key owners; nodes would have to carry a hardware wallet copy for all running nodes, which ETH would use to decrypt and process code.
This ensures ultimate privacy of the contract.

Currently contracts are open, not private; they may contain things like listings of properties or trade volume.
 



NET USE in WSL

Wed, 18 Oct 2017 07:03:23 GMT

Originally posted on: http://geekswithblogs.net/WinAZ/archive/2017/10/18/net-use-in-wsl.aspx

I'm still getting into Windows Subsystem for Linux (WSL). I had been using PowerShell to execute NET USE commands to access remote shares. Having tried this on WSL, it wasn't immediately obvious how to get it to work, though I knew it should be possible. For example, here was my first try:

$ net use \\sharename password /USER:domain\username

Which resulted in:

Invalid command: net use
 
  Usage: 
  net rpc             Run functions using RPC transport
  net rap             Run functions using RAP transport 
  …
  net help            Print usage information

  

Clearly, WSL is not impressed. After some trial and error, I landed on the following command line:

$ net.exe use '\\sharename' password '/USER:domain\username'

First, notice that I used net.exe instead of just net. That's because we're talking about two different tools: net.exe is a Windows tool for working with users, groups, file shares, and more, while net is a Linux tool for working with Samba and CIFS servers (type man net in WSL for more details).

Next, I added quotes around the options with backslashes. This keeps you from needing to escape the backslashes. Alternatively, you could write:

$ net.exe use \\\\sharename password /USER:domain\\username

That explicitly escapes the backslashes. Now I'm able to access secure file shares through WSL.

@JoeMayo on Twitter




I'm back!

Sat, 14 Oct 2017 08:31:26 GMT

Originally posted on: http://geekswithblogs.net/paul/archive/2017/10/15/244610.aspx

Wow... just wow... the posts before this are from when I was at TAFE, just beginning my studies. Oh, how far I have come since then. I completed my studies and got my Bachelor's in Game Design/Computer Science, worked for a small council initiative doing some PHP work, and now I am a .NET developer at a stockbroking firm working on in-house products. Anyway, I am back and not going to post cringe-worthy stuff like I have previously, haha!

The reason for being back is that I am exploring .NET Core 2.0 and React. I am working on a little project that I thought up a long time ago, and I plan on blogging about my problems, journey, and experiences here. The project could really just be a WordPress site, which would be much easier. But then I wouldn't get to do what I love: learn, and actually make it scalable, as I have bigger plans for it later, not limited to but including a mobile phone app that will make calls to the database.

Anyway tomorrow will be Day 1 and here is my list:
- Create VS2017 project and install React
- Create work item cards and mockups for home page
- Research local database setup

If anyone has any tips or helpful links regarding that list, please feel free to link in comments :D



Fix for slow iPhone 5s after iOS 11.0.1 / 11.0.2

Thu, 12 Oct 2017 05:00:43 GMT

Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2017/10/12/244608.aspx

I love my iPhone 5s, but it was brutal after the iOS 11 upgrade. I couldn't even swipe to answer the phone. If you're a fan-boy you can probably do this yourself, but I took it into the Apple Store and they restored it back to a base 11.0.3 using the iTunes app on a Mac, wiping everything. And bingo, we're back in business. Every couple of years, going back to metal on my laptop has been a good thing too.




Installing a Kubernetes Cluster on CentOS 7

Thu, 12 Oct 2017 02:05:12 GMT

Originally posted on: http://geekswithblogs.net/alexhildyard/archive/2017/10/12/installing-a-kubernetes-cluster-on-centos-7.aspx


Having played around with the GKE and MiniKube "one stop" clusters, I wanted to build a multi-node K8 cluster from a bunch of CentOS VMs. The experience was pleasantly straightforward, following the instructions at https://kubernetes.io/docs/setup/independent/install-kubeadm/ with one or two caveats.

First of all, after the initial K8 installation, you will need to disable swap and remove the certificate keys on all nodes before you can either initialise the cluster on the master node or join a slave node to the cluster:

swapoff -a
service kubelet stop
rm -rf /var/lib/kubelet/pki

Also pay attention to the final output from "kubeadm init"; you will not be able to install a pod network until you have manually copied and permissioned the admin.conf file as described; instead, at least with K8 1.8, you'll get confusing W102 warnings about K8 falling back on localhost:8080:

scp root@<master>:/etc/kubernetes/admin.conf ~/.kube/config

And you'll need to do the same thing on the slave nodes if you want to be able to run cluster commands from them in addition. 

Finally, if you want to schedule pods on your master node, check that it's taint-free (the command returns no taint) with:

kubectl describe node | grep -i taint

So far as pod networks go, I installed Calico without any issues. You can then continue with the Sock Shop sample installation, and check all the expected services are running with:

kubectl get services --all-namespaces




DAX Studio 2.7.0 Released

Mon, 02 Oct 2017 03:57:10 GMT

Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2017/10/02/dax-studio-2.7.0-released.aspx

The major change in this version is to the tracing engine. We've introduced a new trace type, made some changes to the way the tracing windows operate, and incorporated some enhancements to crash reporting and logging. We've also finished moving off our old CodePlex home onto http://daxstudio.org

Changes to the way trace windows work

Previously, when you clicked on a trace button the window opened and the trace was started, and when you switched off the trace the window closed. The running of the trace and the visibility of the window were closely linked. In v2.7 we have removed that tight linkage: when you click on a trace button the window opens and the trace still starts as it used to, but when you switch off the trace the window now remains open. The table below shows the two new states (**) that trace windows now have.

v2.6 and earlier                 | v2.7 or later
---------------------------------|-----------------------------------
Window Visible – Trace Running   | Window Visible – Trace Running
N/A                              | Window Visible – Trace Paused **
N/A                              | Window Visible – Trace Stopped **
Window Closed – Trace Stopped    | Window Closed – Trace Stopped

All trace windows now have a number of additional controls in their title area:
- a button that starts a paused or stopped trace
- a button that pauses a running trace
- a button that stops a running trace
- a button that clears any information captured in the current trace window

The tabs for the traces also have an indicator to show their state, so you can see the state of a given trace at a glance - for example, that the All Queries trace is stopped while the Query Plan trace is running and the Server Timings trace is paused. Note that while a trace is paused the server-side trace is still active; it's just the DAX Studio UI that is paused, so expensive trace events like Query Plans can still have an impact on the server.

The other side effect of this change is that if a .dax file is saved while a trace window is open, when that file is re-opened the trace window will also re-open with the saved trace information, but now the trace will be in a stopped state (previously the trace would open and re-start). This prevents accidentally overwriting the saved information, and also means that the saved trace information will open even if you cancel the connection dialog (which would not happen in v2.6 or earlier; cancelling the connection would cause the saved trace information not to open).

The "All Queries" trace

The new trace type is called "All Queries". It captures all queries against the current connection, regardless of the client application. This is useful for capturing queries from other client tools so that you can examine them. When the trace is active it will capture all events from any client tool; for example, a capture session can run against a Power BI Desktop file. When you hover over a query, the tooltip shows you a larger preview of the query, and double-clicking on the query text copies it to the editor.

The "All Queries" trace has a few additional buttons in the title bar area: a button that copies all the queries matching the current filter to the editor pane, a Filter button that shows and hides the filter controls, and a clear-filter button that clears any filter criteria from the filter controls. Filters can be set for a specific type of qu[...]



Huge XML document to CSV

Wed, 27 Sep 2017 02:35:48 GMT

Originally posted on: http://geekswithblogs.net/bconlon/archive/2017/09/27/huge-xml-document-to-csv.aspx

I have a huge XML document (over 2GB) exported from an automotive company system, and I wanted to export data from it to a CSV file to import into a legacy reporting system. Yes, I know this is a bit back to front, but a job is a job.

My first thought was to write some C# code to extract the data, but this quickly became difficult as the XML document's schema was non-trivial due to its use of substitution groups. I may have been able to use XSLT, but it is not a strong point for me, and I think I would have had the same complexities with the XSD.

So instead I looked at using the Liquid Data Mapper, which is part of Liquid Studio 2017. There is a video explaining how to go from CSV to XML, so I just reversed the Source and Target, and this seemed to work OK.

In fact it was quite easy as a lot of the field names matched so the automatic ‘Connect Child Nodes’ connection matching function connected many of these for me.

However, when I clicked the run button I received an ‘Out of Memory’ error. Not so good.

I contacted the Liquid Technologies support team and they responded very quickly suggesting that I try running the generated transform .exe file from the command line rather than from inside Liquid Studio as shown in the video ‘Generate an Executable (.exe) Data Mapping Transform’.

The size of file that can be processed is ultimately dependent on the memory of the PC, but they say they have tested with files over 5GB on a 32GB PC and it works OK.

Anyway, I followed the instructions on the video and it worked straight away. This is very impressive and I would highly recommend using this approach for mapping data.
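For simpler schemas, a streaming XmlReader approach can side-step the out-of-memory problem entirely; here's a minimal sketch (the element and field names are placeholders, CSV escaping is omitted for brevity, and this doesn't address the substitution-group complexity mentioned above):

using System;
using System.IO;
using System.Xml;
using System.Xml.Linq;

class XmlToCsv
{
    static void Main()
    {
        using (var reader = XmlReader.Create("huge.xml"))
        using (var csv = new StreamWriter("out.csv"))
        {
            csv.WriteLine("Id,Name");

            // stream record by record instead of loading the whole document
            while (reader.ReadToFollowing("Record")) // placeholder element name
            {
                // materialize only the current record
                var el = XElement.Load(reader.ReadSubtree());
                csv.WriteLine($"{el.Element("Id")?.Value},{el.Element("Name")?.Value}");
            }
        }
    }
}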





Mesh WiFi 101

Sat, 23 Sep 2017 04:03:06 GMT

Originally posted on: http://geekswithblogs.net/MikeParks/archive/2017/09/23/244603.aspx

If you're tired of dealing with WiFi connectivity headaches, dead zones, and weak signals from your old, outdated traditional router, upgrading to mesh WiFi for your home network is worth checking out.

What is Mesh WiFi?

The best home network upgrade I've ever made. I immediately gained:
- Max WiFi signal across the entire house
- No more WiFi dead zones
- No more range extenders
- Gigabit WiFi speed capability
- Automatic security updates
- Parental controls to pause WiFi connections and filter content

The very first speed test I ran from my iPhone (I pay for 200Mbps through Spectrum): https://www.youtube.com/embed/LfBiR-mtvSM

What are my options?

A few of the top options for mesh WiFi on the market right now are Eero, Google Wifi, and Orbi.

How easy is it to set up?

It took me 10 minutes to replace my old traditional router.
- Eero: https://www.youtube.com/embed/YysjcLqinHs
- Google Wifi: https://www.youtube.com/embed/z7PPYNs5Xao
- Orbi: https://www.youtube.com/embed/Y-IgLQJFj0U

What do people think about it?

They love it. Check the reviews.
- Eero: https://www.youtube.com/embed/zMgWzJSKcuM
- Google Wifi: https://www.youtube.com/embed/PM9nV91XgAA
- Orbi: https://www.youtube.com/embed/jCquEAKCUXg

What do tech experts think about it?
- Eero: https://www.tomsguide.com/us/eero-mesh-wifi-router,review-4302.html
- Google Wifi: https://www.tomsguide.com/us/google-wifi,review-4307.html
- Orbi: https://www.tomsguide.com/us/netgear-orbi,review-4263.html

Where can I buy it?

Search Amazon for mesh WiFi. A few of the listings:
- eero Home WiFi System (1 eero + 2 eero Beacons) - TrueMesh Network Technology, Gigabit Speed, WPA2 Encryption, Replaces Wireless Router, Works with Alexa (2nd Gen.)
- Google Wifi system (set of 3) - Router replacement for whole home coverage
- NETGEAR Orbi Home WiFi System: AC3000 Tri Band Home Network with Router & Satellite Extender for up to 5,000 sq ft of WiFi coverage (RBK50), Works with Amazon Alexa

Hope that helps! Enjoy!

- Mike [...]



Simplify WMI

Fri, 22 Sep 2017 04:14:27 GMT

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/09/22/simplify-wmi.aspx

Windows Management Instrumentation (WMI) is a key component of any Windows-based infrastructure. WMI helps companies prepare for disaster recovery, audit patch compliance, and assist with security management and general server inventory. However, using WMI can be challenging for many reasons.

My blog has moved! Check out the rest of this article here: https://www.herveroggero.com/single-post/2017/09/21/Simplify-WMI

Thank you!




PowerShell: A curse in disguise

Fri, 22 Sep 2017 04:12:57 GMT

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/09/22/powershell-a-curse-in-disguise.aspx

I rarely think of technology as a problem, as most of the time people or processes are the root of organizational concerns. Many times developers will blame technology for being badly documented, testers will blame developers for not writing proper code, or database administrators will blame IT for not having enough memory on a server. But there are exceptions; some technologies can become severe burdens on an organization, and PowerShell, in my opinion, could be one of them. But as you will see later, perhaps there is nothing wrong with the technology itself.

My blog has moved! To continue reading this post, please visit:  https://www.herveroggero.com/single-post/2017/09/22/PowerShell-A-Curse-in-Disguise 

Thank you




Database Enums

Wed, 20 Sep 2017 16:38:39 GMT

Originally posted on: http://geekswithblogs.net/TimothyK/archive/2017/09/20/database-enums.aspx

So is it better to store enumerated types in a database as a string/varchar or an integer? Well, it depends, but in general a string is your best bet. In this post I explore the pros and cons of each.

Lists versus Enums

Before we get to that, let's first be clear that I'm talking about enums here, not lists. Let me explain the difference.

For example, let's say you have a list of available weight units: pounds, kilograms, grams, short tons, metric tons, long tons, stones, ounces, etcetera. You might be able to design your database so that the list of possible weight units is in a table. Your application should not have any advanced knowledge of the weight units that are defined in this table. The application reads this dynamic list at run time. Any information needed about these weight units must come from the database. This includes conversion factors, display names with multilingual support, flags to indicate in which situations it is appropriate to offer these units as a choice, mapping rules to external systems, and anything else the application may need. There should be no requirement that pounds must exist as a row or that kilograms must always be ID #1.

If you can soft code every behaviour of the weight unit required by applications in the database, then the best design is to soft code this in a table. Installers may add or remove rows from this table as needed. This is a "list" of available values, which is defined in a database table.

If the available weight units are hard coded into the application as an enum or strongly typed enum, then this is an "enum", not a "list". The available values cannot change without changes to the programs. Therefore the available values should not be a database table. Adding support for new weight units requires code changes, so it is to the code that you must go to make this change. Having a database table falsely implies that the values are easy to change.

Lookup Times

The main argument for storing enum values as strings instead of integers is human readability. People looking at the database can easily see what the values mean. Which of these two tables is easier to read?

OrderID | Weight | WeightUnits
1       | 14     | kg
2       | 23     | lb
3       | 25     | kg
4       | 11     | lb
5       | 18     | kg

OrderID | Weight | WeightUnitID
1       | 14     | 1
2       | 23     | 2
3       | 25     | 1
4       | 11     | 2
5       | 18     | 1

Storing the values as a string makes it immediately obvious to anyone looking at the database what the units are. There is no need to have external lookup tables, either as database tables, database functions, or external documentation. It simply makes the system faster and easier to use. The principle is similar to how applications should be developed so that users can get around with a minimum number of clicks. Adding just a few hundred milliseconds to the load time of a w[...]
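Building on the distinction above, here's a minimal sketch of keeping the readable string in the varchar column while still using a hard-coded enum in the application (the names are illustrative, not from the post):

using System;

public enum WeightUnit { kg, lb }

public static class WeightUnitMapper
{
    // enum -> varchar column value
    public static string ToDbValue(WeightUnit unit) => unit.ToString();

    // varchar column value -> enum; throws on unknown values, which is what
    // you want when the set of units is hard-coded in the application
    public static WeightUnit FromDbValue(string value) =>
        (WeightUnit)Enum.Parse(typeof(WeightUnit), value);
}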



Azure App Service Tools VSCode Extension

Tue, 19 Sep 2017 23:47:07 GMT

Originally posted on: http://geekswithblogs.net/KyleBurns/archive/2017/09/20/azure-app-service-tools-vscode-extension.aspx

Microsoft made their new Azure App Service Tools extension available today in the Visual Studio Marketplace. I had the opportunity to preview this extension and was very pleased. The process of provisioning and deploying the app service from VSCode was quite intuitive. I was able to "guess" my way through the process with my only wrong guess being how to start. I also very much appreciated that it generated reusable scripts (and opened them to make sure that you discovered them) as part of the process because I rarely work with projects where manual deployment from an IDE is desired. 

The one thing I would like to see changed is that creation of the website did not subsequently deploy, or ask me if I wanted to do so. Since I triggered the tool in the context of working with a project in VSCode, it's not likely that I had just decided to create an app service unrelated to that project, but after going through the wizard and being offered a link to the provisioned site, I was presented with default content instead of my deployed app. I would like to have seen the wizard generate both a provisioning script and a deploy script and then execute both. With only one thing that I would prefer to see implemented differently, I'd say: overall, great job, and thank you Microsoft for continuing to make my job easier!



Fix VS2008 DPI issues on 4K display

Wed, 13 Sep 2017 08:47:50 GMT

Originally posted on: http://geekswithblogs.net/DougMoore/archive/2017/09/13/fix-vs2008-dpi-issues-on-4k-display.aspx

I don't know if you use a 4K display for your dev work (I sure do), but using VS2008 on a 4K display was driving me crazy!

Every time I attached to the target to download an image, the VS window and dialogs shrunk to a size where the text was readable but extremely small and very painful [for me] to read.

Here is the fix, just in case you've encountered this as well.

Regards,
Doug

---

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\Common7\\IDE\\devenv.exe"="~ DPIUNAWARE"





How to use dapper for MySQL in C#

Fri, 08 Sep 2017 08:24:25 GMT

Originally posted on: http://geekswithblogs.net/anirugu/archive/2017/09/08/how-to-use-dapper-for-mysql-in-c.aspx

A few days ago I looked for a solution to save myself time writing CRUD code, and I found one. It's called Dapper.

To create a C# class from your database schema, you can use this code:

https://gist.github.com/anirugu/9fb82ce773c45578f42f7a6d899f3221

Later, add Dapper to your project, along with Dapper.Contrib.

now you don’t need to write a open Datareader and write some reading writing code again and again.

Dapper.Contrib gives you some cool functionality like Insert and Update. You still need to write your SQL queries, but the code is going to be cleaner and easier to maintain. Last year I was working on a C# project that became a full mess of this kind of code: one line of SELECT, INSERT, or UPDATE, and 100 lines of code just to read those things from the DataReader.

Dapper can save you a lot of time doing those same repeated things, and it does them pretty well.
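Here's a minimal sketch of the pattern (the Product class, table name, and connection string are made up for illustration; Query comes from Dapper, Insert/Update from Dapper.Contrib):

using System;
using System.Collections.Generic;
using System.Linq;
using Dapper;
using Dapper.Contrib.Extensions;
using MySql.Data.MySqlClient;

[Table("Products")]
public class Product
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

class Demo
{
    static void Main()
    {
        using (var db = new MySqlConnection("Server=localhost;Database=shop;Uid=user;Pwd=pass;"))
        {
            // Dapper maps the result set straight onto the class -- no DataReader loop
            List<Product> cheap = db.Query<Product>(
                "SELECT Id, Name, Price FROM Products WHERE Price < @max",
                new { max = 10m }).ToList();

            // Dapper.Contrib generates the INSERT and UPDATE statements for you
            long newId = db.Insert(new Product { Name = "Widget", Price = 4.99m });
            db.Update(new Product { Id = (int)newId, Name = "Widget", Price = 5.49m });
        }
    }
}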

https://github.com/StackExchange/Dapper

https://stackoverflow.com/questions/tagged/dapper

Happy Coding!



Scaffolding failed, failed to build the project

Wed, 06 Sep 2017 11:03:25 GMT

Originally posted on: http://geekswithblogs.net/anirugu/archive/2017/09/06/scaffolding-failed-failed-to-build-the-project.aspx

If you are adding a view and you see this error, your project is not in a state where it can be compiled. If you try to build the project, the build will fail.

To fix the issue, resolve the errors in the Error List in your current MVC project, then add the view again; it will work.

Happy coding!



Responsive Select2 in HTML

Tue, 05 Sep 2017 12:07:24 GMT

Originally posted on: http://geekswithblogs.net/anirugu/archive/2017/09/05/responsive-select2-in-html.aspx

For the last few years I have used select2 to make effective dropdowns in Bootstrap. Lately I have been trying to make it responsive, but it doesn't perform so well.

Here is a nice thread on making it work:

https://github.com/select2/select2/issues/3278

Using these tricks you can make it responsive, which is quite awesome. Still, it's missing something.

In my implementation I need the control to sit in a small space, but use the full width when someone interacts with it.

So I looked at the HTML generated by the plugin. The select2 plugin generates a div just after the select element. If you try to set a width on the select itself, whether before or after applying select2, things will not work well.

The solution is to write CSS against the generated div. For example, if you write:

.select2-container{

width:90px;

}

it will make the select2 90px wide. That's how we modify the CSS of the generated HTML. But wait: what about when I click on the select2 and want to use more of the width available on the screen? I inspected further and found that the plugin also generates a div for the search bar and result list that you see when the select2 is open.

The dropdown that you see on the HTML page has these classes:

select2-dropdown select2-dropdown--below

So to give it more width (the width you want when someone opens the select), you need to set the width on this dropdown container div, for example:

.select2-dropdown{

width:180px;

}

So in this implementation I have a select2 that is 90px wide, but that uses 180px of width when someone opens it to type or select something.

Here is a quick demo for the post. https://codepen.io/anirugu/pen/YxMaqR

Thanks for reading my post.

Happy coding!