
OdeToCode by K. Scott Allen

Copyright: (c) 2004 to 2018 OdeToCode LLC

Byte Arrays and ASP.NET Core Web APIs

Thu, 22 Feb 2018 10:03:00 Z

I’ve decided to write down some of the steps I just went through in showing someone how to create and debug an ASP.NET Core controller. The controller is for an API that needs to accept a few pieces of data, including one piece of data as a byte array. The question asked specifically was how to format data for the incoming byte array. Instead of only showing the final solution, which you can find if you read various pieces of documentation, I want to show the evolution of the code and a thought process to use when trying to figure out the solution. While the details of this post are specific to sending byte arrays to an API, I think the general process is one to follow when trying to figure out what works for an API, and what doesn’t work.

To start, collect all the information you want to receive into a single class. The class will represent the input model for the API endpoint.

public class CreateDocumentModel
{
    public byte[] Document { get; set; }
    public string Name { get; set; }
    public DateTime CreationDate { get; set; }
}

Before we use the model as an input to an API, we’ll use the model as an output. Getting output from an API is usually easy. Sending input to an API can be a little bit trickier, because we need to know how to format the data appropriately and fight through some generic error messages. With that in mind, we’ll create a simple controller action to respond to a GET request and send back some mock data.

[HttpGet]
public IActionResult Get()
{
    var model = new CreateDocumentModel()
    {
        Document = new byte[] { 0x03, 0x10, 0xFF, 0xFF },
        Name = "Test",
        CreationDate = new DateTime(2017, 12, 27)
    };
    return new ObjectResult(model);
}

Now we can use any tool to see what our data looks like in a response. The following image is from Postman. What we see in the response is a string of characters for the “byte array” named document. This is one of those situations where having a few years of experience can help. To someone new, the characters look random.
To someone who has worked with this type of data before, the trailing “=” on the string is a clue that the byte array was base64 encoded into the response. I’d like to say this part is easy, but there is no substitute for experience. Beginners also have to see how C# properties in PascalCase map to JSON properties in camelCase, which is another non-obvious hurdle to formatting the input correctly. Once you’ve figured out how to use base64 encoding, it’s time to try to send this data back into the API. Before we add any logic, we’ll create a simple echo endpoint we can experiment with.

[HttpPost]
public IActionResult CreateDocument([FromBody] CreateDocumentModel model)
{
    return new ObjectResult(model);
}

With the endpoint in place, we can use Postman to send data to the API and inspect the response. We’ll make sure to set a Content-Type header to application/json, and then fire off a POST request by copying data from the previous response. Voilà! The model the API returns looks just like the model we sent to the API. Being able to roundtrip the model is a good sign, but we are only halfway through the journey. We now have a piece of code we can experiment with interactively to understand how the code will behave in different circumstances. We want a deeper understanding of how the code behaves because our clients might not always send the model we expect, and we want to know what can go wrong before the wrong things happen. Here are some questions to ask.

Q: Is a base64 encoded string the only format we can use for the byte array?

A: No. The ASP.NET Core model binder for byte[] also understands how to process a JSON array.

{
  "document": [1, 2, 3, 254],
  "name": "Test input",
  "creationDate": "2017-12-27T00:00:00"
}

Q: What happens if the document property is missing in the POST request?

A: The Document property on the input model will be null.

Q: What happens if the base64 encoding is corrupt, or when u[...]
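As an aside, the base64 clue is easy to verify outside the API with Convert.ToBase64String. This standalone sketch (not part of the original post) encodes the same four bytes the mock Get action returns:

```csharp
using System;

class Base64Demo
{
    static void Main()
    {
        // The same bytes the sample Get action returns.
        var document = new byte[] { 0x03, 0x10, 0xFF, 0xFF };

        // JSON serializers turn byte[] properties into base64 strings.
        var encoded = Convert.ToBase64String(document);
        Console.WriteLine(encoded); // AxD//w== -- note the trailing "=" padding

        // A client POSTing a document must apply the same encoding,
        // and the model binder reverses it on the way in.
        var roundTripped = Convert.FromBase64String(encoded);
        Console.WriteLine(roundTripped.Length); // 4
    }
}
```

The trailing “=” characters are base64 padding, which is exactly the clue an experienced eye spots in the Postman response.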

Managing Azure AD Group Claims in ASP.NET Core

Wed, 21 Feb 2018 10:03:00 Z

In a previous post we looked at using Azure AD groups for authorization. I mentioned in that post how you need to be careful when pulling group membership claims from Azure AD. In this post we’ll look at the default processing of claims in ASP.NET Core and see how to avoid the overhead of carrying around too many group claims. The first issue I want to address in this post is the change in claims processing with ASP.NET Core 2.

Missing Claims in the ASP.NET Core 2 OIDC Handler

Dominick Baier has a blog post about missing claims in ASP.NET Core. This is a good post to read if you are using the OIDC services and middleware. The post covers a couple of different issues, but I want to call out the “missing claims” issue specifically. The OIDC options for ASP.NET Core include a property named ClaimActions. Each object in this property’s collection can manipulate claims from the OIDC provider. By manipulate, I mean that all the claim actions installed by default will remove specific claims. For example, there is an action to delete the ipaddr claim, if present. Dom’s post includes the full list. I think ASP.NET Core is removing claims to reduce cookie bloat. In my experiments, the dozen or so claims dropped by the default settings reduce the size of the authentication cookies by 1,500 bytes, or just over 30%. Many of the claims, like IP address, don’t have any ongoing value to most applications, so there is no need to store the value in a cookie and pass the value around in every request. If you want the deleted claims to stick around, there is a hard way and a straightforward way to achieve the goal.

The Top Answer on Stack Overflow Isn’t Always the Best

I’ve seen at least two software projects with the same 20 to 25 lines of code inside. The code originates from a Stack Overflow answer to solve the missing claims issue and explicitly parses all the claims from the OIDC provider. If you want all the claims, you don’t need 25 lines of code.
You just need a single line of code.

services.AddAuthentication()
        .AddOpenIdConnect(options =>
        {
            // this one:
            options.ClaimActions.Clear();
        });

However, make sure you really want all the claims saved in the auth cookie. In the case of AD group membership, the application might only need to know about 1 or 2 groups while the user might be a member of 10 groups. Let’s look at approaches to removing the unused group claims.

Removing Group Claims with a Claims Action

My first thought was to use the collection of ClaimActions on the OIDC options to remove group claims. The collection holds ClaimAction objects, where ClaimAction is an abstract base class in the ASP.NET OAuth libraries. None of the built-in concrete types do exactly what I’m looking for, so here is a new ClaimAction derived class to remove unused groups.

public class FilterGroupClaims : ClaimAction
{
    private string[] _ids;

    public FilterGroupClaims(params string[] groupIdsToKeep) : base("groups", null)
    {
        _ids = groupIdsToKeep;
    }

    public override void Run(JObject userData, ClaimsIdentity identity, string issuer)
    {
        var unused = identity.FindAll(GroupsToRemove).ToList();
        unused.ForEach(c => identity.TryRemoveClaim(c));
    }

    private bool GroupsToRemove(Claim claim)
    {
        return claim.Type == "groups" && !_ids.Contains(claim.Value);
    }
}

Now we just need to add a new instance of this class to the ClaimActions, and pass in a list of groups we want to use.

options.ClaimActions.Add(new FilterGroupClaims(
    "c5038c6f-c5ac-44d5-93f5-04ec697d62dc",
    "7553192e-1223-0109-0310-e87fd3402cb7"
));

ClaimAction feels like an odd abstraction, however. It makes no sense for the base class constructor to need both a claim type and a claim value type when these parameters go unused in the derived class logic. A ClaimAction is also specific to the OIDC handler in Core. Let’s try this again with a more generic claims transformation in .NET Core.

Removing Group Cla[...]
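The excerpt cuts off before showing the generic approach, but the built-in IClaimsTransformation interface is the natural candidate in ASP.NET Core 2. Here is a hedged sketch of what such a transformation could look like; the class name and group IDs are illustrative, not code from the original post:

```csharp
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;

// A sketch only: keeps the "groups" claims the app cares about and
// removes the rest. Runs on every request after authentication.
public class FilterGroupClaimsTransformation : IClaimsTransformation
{
    private static readonly string[] _idsToKeep =
    {
        "c5038c6f-c5ac-44d5-93f5-04ec697d62dc"  // illustrative group ID
    };

    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        if (principal.Identity is ClaimsIdentity identity)
        {
            var unused = identity
                .FindAll(c => c.Type == "groups" && !_idsToKeep.Contains(c.Value))
                .ToList();
            unused.ForEach(c => identity.TryRemoveClaim(c));
        }
        return Task.FromResult(principal);
    }
}

// Registration, typically in ConfigureServices:
// services.AddSingleton<IClaimsTransformation, FilterGroupClaimsTransformation>();
```

One trade-off worth noting: unlike a ClaimAction, a transformation runs on every request rather than once at sign-in, so it does not shrink the auth cookie by itself.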

Role Based Authorization in ASP.NET Core with Azure AD Groups

Tue, 20 Feb 2018 10:03:00 Z

Authenticating users in ASP.NET Core using OpenID Connect and Azure Active Directory is straightforward. The tools can even scaffold an application to support this scenario. In this post I want to go one step further and define authorization rules based on a user’s group membership in Azure AD.

Those Tired Old Intranet Apps

While the authentication picture is clear, authorization can be blurry. Authorization is where specific business rules meet software, and authorization requirements can vary from application to application even in the same organization. Not only will different applications need different types of authorization rules, but the data sources needed to feed data into those rules can vary, too. Over the years, however, many applications have used group membership in Windows Active Directory (AD) as a source of information when making authorization decisions. Group membership in AD is reliable and static. For example, a new employee in the sales department who is placed into the “Sales” group will probably remain in the sales group for the rest of their term. Basing authorization rules on AD group membership was also easy in these apps. For ASP.NET developers building applications using IIS and Windows Authentication, checking a user’s group membership only required calling an IsInRole method.

These New-Fangled Cloud Apps

Cloud native applications trade Windows Active Directory for Azure Active Directory and move away from Windows authentication protocols like NTLM and Kerberos to Internet-friendly protocols like OpenID Connect. In this scenario, an organization typically synchronizes its Windows Active Directory into Azure AD with a tool like ADConnect. The synchronization allows users to have one identity that works inside the firewall for intranet resources, as well as outside the firewall with services like Office 365.
Windows Active Directory and Azure Active Directory are two different creatures, but both directories support the concepts of users, groups, and group membership. With synchronization in place, the group memberships behind the firewall are the same as the group memberships in the cloud. Imagine we have a group named “sales” in Azure AD. Imagine we want to build an application like the old days, where only users in the sales group are authorized to use the application.

Application Setup

I’m going to assume you already know how to register an application with Azure AD. There is plenty of documentation on the topic. Unlike the old days, group membership information does not magically appear in an application when using OIDC. You either need to use the Graph API to retrieve the groups for a specific user after authenticating, which we can look at in a future post if there is interest, or configure Azure AD to send back claims representing a user’s group membership. We’ll take the simple approach for now and configure Azure AD to send group claims. There is a limitation to this approach I’ll mention later. Configuring Azure AD to send group claims requires a change in the application manifest. You can change the manifest using the AD graph API, or in the portal. In the portal, go to App registrations => All apps => select the app => click the manifest button on the top action bar. The key is the “groupMembershipClaims” property you can see in the bottom screenshot. Set the value to “SecurityGroup” for Azure to return group information as claims. The app manifest includes a number of settings that you cannot reach through the UI of the portal, including appRoles. You'll probably want to define appRoles if you are building a multi-tenant app.

Testing Claims

With the above manifest in place, you should see one or more claims named “groups” in the collection of claims Azure AD will return.
An easy way to see the claims for a user is to place the following code into a Razor page or Razor view:

@foreach (var claim in User.Claims)
{
    <div>@claim.Type : @claim.Value</div>
}
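The excerpt ends here, but wiring a group claim into an authorization rule typically goes through the standard ASP.NET Core policy API. A sketch, assuming the “groups” claim arrives as configured above; the GUID is a placeholder for the object ID of the “sales” group, not a value from the post:

```csharp
// In Startup.ConfigureServices -- a sketch, not the author's exact code.
// The GUID below is a placeholder for the Azure AD object ID of the
// "sales" group; group claims carry object IDs, not group names.
services.AddAuthorization(options =>
{
    options.AddPolicy("SalesOnly", policy =>
        policy.RequireClaim("groups", "00000000-0000-0000-0000-000000000000"));
});

// Then protect a controller or action with the policy:
// [Authorize(Policy = "SalesOnly")]
// public class SalesDashboardController : Controller { ... }
```

RequireClaim matches on claim type and value, which is why the raw object ID appears here; a later post in this series shows how to trim unused group claims.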

PDF Generation in Azure Functions V2

Wed, 14 Feb 2018 10:03:00 Z

PDF generation. Yawn. But every enterprise application has an “export to PDF” feature. There are obstacles to overcome when generating PDFs from Azure Web Apps and Functions. The first obstacle is the sandbox Azure uses to execute code. You can read about the sandbox in the “Azure Web App sandbox” documentation. This article explicitly calls out PDF generation as a potential problem. The sandbox prevents an app from using most of the kernel’s graphics API, which many PDF generators use either directly or indirectly. The sandbox document also lists a few PDF generators known to work in the sandbox. I’m sure the list is not exhaustive (a quick web search will also find solutions using Node), but one library listed indirectly is wkhtmltopdf (open source, LGPLv3). The wkhtmltopdf library is interesting because it is a cross-platform library. A solution built with .NET Core and wkhtmltopdf should work on Windows, Linux, or Mac.

The Azure Functions Project

For this experiment I used the Azure Functions 2.0 runtime, which is still in beta and has a few shortcomings. However, the ability to use precompiled projects and build on .NET Core are both appealing features for v2. To work with the wkhtmltopdf library from .NET Core I used the DinkToPdf wrapper. This package hides all the P/Invoke messiness, and has friendly options to control margins, headers, page size, etc. All an app needs to do is feed a string of HTML to a Dink converter, and the converter will return a byte array of PDF bits. Here’s an HTTP triggered function that takes a URL to convert and returns the bytes as application/pdf.
using DinkToPdf;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using System;
using System.Net.Http;
using System.Threading.Tasks;
using IPdfConverter = DinkToPdf.Contracts.IConverter;

namespace PdfConverterYawnSigh
{
    public static class HtmlToPdf
    {
        [FunctionName("HtmlToPdf")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] ConvertPdfRequest request,
            TraceWriter log)
        {
            log.Info($"Converting {request.Url} to PDF");
            var html = await FetchHtml(request.Url);
            var pdfBytes = BuildPdf(html);
            var response = BuildResponse(pdfBytes);
            return response;
        }

        private static FileContentResult BuildResponse(byte[] pdfBytes)
        {
            return new FileContentResult(pdfBytes, "application/pdf");
        }

        private static byte[] BuildPdf(string html)
        {
            return pdfConverter.Convert(new HtmlToPdfDocument()
            {
                Objects =
                {
                    new ObjectSettings { HtmlContent = html }
                }
            });
        }

        private static async Task<string> FetchHtml(string url)
        {
            var response = await httpClient.GetAsync(url);
            if (!response.IsSuccessStatusCode)
            {
                throw new InvalidOperationException($"FetchHtml failed {response.StatusCode} : {response.ReasonPhrase}");
            }
            return await response.Content.ReadAsStringAsync();
        }

        static HttpClient httpClient = new HttpClient();
        static IPdfConverter pdfConverter = new SynchronizedConverter(new PdfTools());
    }
}

What to Worry About

Notice the converter class has the name SynchronizedConverter. The word synchronized is a clue that the converter is single threaded. Although the library can buffer conversion requests until a thread is free to process those requests, it would be safer to trigger the function with a message queue to avoid losing conversion requests in case of a restart. You should also know that the function will not execute successfully in a consumptio[...]
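DinkToPdf also exposes global settings for paper size, orientation, and margins on HtmlToPdfDocument. A hedged sketch of how the conversion could be configured; the property names follow the DinkToPdf package as I understand it and are worth verifying against its documentation:

```csharp
// A sketch, assuming DinkToPdf's GlobalSettings, MarginSettings,
// PaperKind, and Orientation types; verify names against the package docs.
var document = new HtmlToPdfDocument
{
    GlobalSettings =
    {
        PaperSize = PaperKind.A4,
        Orientation = Orientation.Portrait,
        Margins = new MarginSettings { Top = 10, Bottom = 10 }
    },
    Objects =
    {
        new ObjectSettings { HtmlContent = "<h1>Quarterly Report</h1>" }
    }
};

// byte[] pdfBytes = pdfConverter.Convert(document);
```

Settings like these are the main reason to reach for the Dink wrapper instead of shelling out to the wkhtmltopdf binary directly.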

When to Create a New C# Class Definition

Tue, 13 Feb 2018 10:03:00 Z

A recurring question in my C# workshops and videos sounds like: "How do you know when to define a new class?" This question is a quintessential question for most object-oriented programming languages. The answer could require a 3-day workshop or a 300-page book. I'll try to distill some of my answers to the question into this blog post.

The Scenario

The question for this post regularly pops up in my grade book scenario. In the scenario I create a GradeBook class to track homework grades for a fictional class of students. The GradeBook starts simple and only offers the ability to add a new grade or fetch existing grades. Eventually we reach the point where we need to compute some statistics on the grades stored inside the grade book. The statistics include the average grade, lowest grade, and highest grade. Later in the course we use the stats to compute a letter grade. It is the statistics part where I show how to create a new class to encapsulate the statistics. Why? Why not just add some properties to the existing GradeBook with the statistical values? Wouldn't it be better to have the statistics computed live when the program adds a new grade? I'm always thrilled with these questions. Asking these questions means a student is progressing beyond the opening struggles of learning to program and is no longer just trying to make something work. They've grown comfortable with the tools and have fought off a few compiler errors to gain confidence. They've internalized some of the basic language syntax and are beginning to think about how to make things work the right way. It’s difficult to explain how the right way is never perfectly obvious. Most of us make software design decisions based on a combination of instincts, heuristics, and our own biases, because there is no strict formula to follow. There can be more than one right way to solve every problem, and the right way for the same problem can change depending on the setting.
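To make the grade book scenario concrete, here is a minimal sketch of encapsulating the statistics in their own class. The member names are my own illustrations of the idea, not code from the course:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A sketch of the scenario: statistics live in their own type
// instead of as extra properties on GradeBook.
public class GradeStatistics
{
    public double Average { get; }
    public double Lowest { get; }
    public double Highest { get; }

    public GradeStatistics(IEnumerable<double> grades)
    {
        var list = grades.ToList();
        Average = list.Average();
        Lowest = list.Min();
        Highest = list.Max();
    }
}

public class GradeBook
{
    private readonly List<double> _grades = new List<double>();

    public void AddGrade(double grade) => _grades.Add(grade);

    // Statistics are computed on demand rather than kept "live"
    // and mutated every time a grade arrives.
    public GradeStatistics ComputeStatistics() => new GradeStatistics(_grades);
}
```

The design keeps GradeBook focused on storing grades and gives the statistics a single, testable home, which is one answer to "why a new class?" in this context.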
Remembering Who and Where You Are

There are many different types of developers, applications, and business goals. All these different contexts influence how you write code. Some developers write code for risk-averse companies where application updates are a major event, and they make slow, deliberate decisions. Other developers write code for fast-moving businesses, so having something delivered by next week is of utmost importance. Some developers write code in the inner loop of a game engine, so the code must be as fast as possible. Other developers write code protecting private data, so the code must be as secure as possible. Some developers pride themselves on craftsmanship. The quality of the code base is as important as the quality of the application itself. Other developers pride themselves on getting stuff done. Putting software in front of a user is the only measure of success. Exitus ācta probat. Code that is good in one context might not be as good in one of the other contexts. Blog posts and tweets about design principles and best practices often neglect context. It is easy for an author to assume all the readers work in the same context as the author, and all the readers carry the same personal values about how to construct software. Trying to avoid assumptions about the reader’s context makes posts like this more difficult to write. But, like the Smashing Pumpkins song with the lyrics ‘paperback scrawl your hidden poems’, let me try, try, try. But first, some background.

Transaction Scripts Are One Extreme

In a language like C# we have classes. Classes allow us to use an object-oriented approach to solving problems. The antithesis of object-oriented programming is the transaction script. In a transaction script you write code inside a function from top to bottom. Object-oriented aristocrats will frown on transaction scripts as an anti-pattern because a transaction script is a procedural way of thinking. My code does step 1, step 2, ... step n.
There is little to no enc[...]

Working with Azure Management REST APIs

Tue, 06 Feb 2018 10:03:00 Z

In previous posts we looked at how to choose an approach for working with the management APIs, and how to set up a service principal name to authenticate an application that invokes the APIs. In that first post we decided (assuming "we" are .NET developers) that we want to work with the APIs using an SDK instead of building our own HTTP messages using HttpClient. However, even here there are choices in which SDK to use. In this post we will compare and contrast two SDKs, and I’ll offer some tips I’ve learned in figuring out how the SDKs work. Before we dig in, I will say that having the REST API reference readily available is still useful even when working with the higher level SDKs. It is often easier to find the available operations for a given resource, and what the available parameters control, by looking at the reference directly. The SDKs for working with the management APIs from C# can be broadly categorized into either generated SDKs or fluent SDKs. Generated SDKs cover nearly all operations in the management APIs, and Microsoft creates these libraries by generating C# code from metadata (OpenAPI specs, formerly known as Swagger). In the other category, human beings craft the fluent version of the libraries to make code readable and operations discoverable, although you won’t find a fluent package for every API area.

The Scenario

In this post we’ll work with the Azure SQL management APIs. Imagine we want to programmatically change the pricing tier of an Azure SQL instance to scale a database up and down. Scaling up to a higher pricing tier gives the database more DTUs to work with. Scaling down gives the database fewer DTUs, but is also less expensive. If you've worked with Azure SQL, you'll know DTUs are the frustratingly vague measurement of how many resources an Azure SQL instance can utilize to process your workload. More DTUs == more powerful SQL database.
The Generated SDKs

The Azure SQL management SDK is in the Microsoft.Azure.Management.Sql NuGet package, which is still in preview. I prefer this package to the package with the word Windows in the name, as this package is actively updated. The management packages support both .NET Core (netstandard 1.4) and the .NET framework. The first order of business is to generate a token that will give the app an identity and authorize the app to work with the management APIs. You can obtain the token using raw HTTP calls, or use the Microsoft.IdentityModel.Clients.ActiveDirectory package, also known as ADAL (Active Directory Authentication Library). You’ll need your application’s ID and secret, which were set up in the previous post when registering the app with Azure AD, as well as your tenant ID, also known as the directory ID, which is the ID of your Azure AD instance. By the way, have you noticed the recurring theme in these posts of having two names for every important object? Take the above ingredients and cook them in an AuthenticationContext to produce a bearer token:

public async Task<TokenCredentials> MakeTokenCredentials()
{
    var appId = "798dccc9-....-....-....-............";
    var appSecret = "8a9mSPas....................................=";
    var tenantId = "11be8607-....-....-....-............";
    var authority = $"https://login.microsoftonline.com/{tenantId}";
    var resource = "https://management.azure.com/";

    var authContext = new AuthenticationContext(authority);
    var credential = new ClientCredential(appId, appSecret);
    var authResult = await authContext.AcquireTokenAsync(resource, credential);

    return new TokenCredentials(authResult.AccessToken, "Bearer");
}

In the above example, I’ve hard coded all the pieces of information to make the code easy to read, but you’ll certainly make a parameter object for flexibility. Note the authority will be for [...]
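The excerpt stops before the credentials get used. With the generated SDK, scaling the database would look roughly like the sketch below. This is an assumption-laden outline against the preview Microsoft.Azure.Management.Sql package; the resource names are placeholders, and the exact method and property names deserve a check against the SDK reference:

```csharp
// A sketch only -- verify Databases.GetAsync / CreateOrUpdateAsync and
// RequestedServiceObjectiveName against the Microsoft.Azure.Management.Sql
// preview package before relying on this.
var credentials = await MakeTokenCredentials();

var client = new SqlManagementClient(credentials)
{
    SubscriptionId = "your-subscription-id"   // placeholder
};

var database = await client.Databases.GetAsync(
    "resource-group", "server-name", "db-name");  // placeholders

// Request a new pricing tier / service objective, e.g. Standard S2.
database.RequestedServiceObjectiveName = "S2";

await client.Databases.CreateOrUpdateAsync(
    "resource-group", "server-name", "db-name", database);
```

The generated clients all follow this shape: construct the client with token credentials, set the subscription, then drive resource-specific operation groups like Databases.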

Setting Up Service Principals to Use the Azure Management APIs

Thu, 01 Feb 2018 10:03:00 Z

In a previous post, I wrote about choosing an approach to work with the Azure Management APIs (the REST APIs, as they call them). Before you can make calls to the API from a program, you’ll want to create a service account in Azure for authentication and authorization. Yes, you could authenticate using your own identity, but there are a few good reasons not to use your own identity. For starters, the management APIs are generally invoked from a non-interactive environment. Also, you can give your service the least set of privileges necessary for the job to help avoid accidents and acts of malevolence. This post details the steps you need to take, and tries to clear up some of the confusion I’ve encountered in this space.

Terminology

The various terms you’ll encounter in using the management APIs are a source of confusion. The important words you’ll see in documentation are different from the words you’ll see in the UI of the portal, which can be different from what you’ll read in a friendly blog post. Even the same piece of writing or speaking can transition between two different terms for the same object, because there are varying perspectives on the same abstract concept. For example, an “application” can morph into a “service principal name” after just a few paragraphs. I’ll try not to add to the confusion, and in a few cases try to explain why we have different terms, but I fear this is not entirely possible. To understand the relationship between an application and a service principal, the "Application and service principal objects in Azure Active Directory" article is a good overview. In a nutshell, when you register an application in Azure AD, you also create a service principal. An application will only have a single registration in a single AD tenant. Service principals can exist for the single application in multiple tenants, and it is a service principal that represents the identity of an application when accessing resources in Azure.
If this paragraph makes any sense (and yes, it can take some time to internalize), then you'll begin to see why it is easy to interchange the terms "application" and "service principal" in some scenarios.

Setup

There are three basic steps to follow when setting up the service account (a.k.a. application, a.k.a. service principal name):

1. Create an application in the Azure Active Directory containing the subscription you want the program to interact with.
2. Create a password for the application (unlike a user, a service can have multiple passwords, which are also referred to as keys).
3. Assign role-based access control to the resources, resource groups, or subscriptions your service needs to interact with.

If you want to work through this setup using the portal, there is a good piece of Microsoft documentation with a sentence case title here: “Use portal to create an Azure Active Directory application and service principal that can access resources”. Even if you use other tools to set up the service account, you might occasionally come to the portal and try to see what is happening. Here are a couple of hiccups I’ve seen encountered in the portal. First, if you navigate in the Azure portal to Azure Active Directory –> App registrations, you would probably expect to see the service you’ve registered. This used to be the case, I believe, but I’m also certain that even applications that I personally register do not appear in the list of app registrations until I select the “All apps” option on this page. And yes, the portal will list your service as a “Web app / API” type, even if your application is a console application. This is normal behavior. You don’t want to register your service as a “native application”, no matter how tempting that may be. The confusing terminolo[...]

Choosing an Approach to Work with Azure REST APIs

Tue, 30 Jan 2018 10:03:00 Z

The Azure REST APIs allow us to interact with nearly every type of resource in Azure programmatically. We can create virtual machines, restart a web application, and copy an Azure SQL database using HTTP requests. There are a few choices to make when deciding how to interact with these resource manager APIs, and some potential areas of confusion. In this post and future posts I hope to provide some guidance on how to work with the APIs effectively and avoid some uncertainties.

Pure HTTP

If you can send HTTP messages, you can interact with the resource manager APIs at a low level. The Azure REST API Reference includes a list of all possible operations categorized by resource. For example, backing up a web site. Each endpoint gives you a URI, the available HTTP methods (GET, PUT, POST, DELETE, PATCH), and a sample request and response. All these HTTP calls need to be authenticated and authorized, a topic for a future post, but the home page describes how to send the correct headers for any request. These low level APIs are documented and available to use, but generally you want to write scripts and programs using a slightly higher level of abstraction and only know about the underlying API for reference and debugging. Fortunately, specifications for all resource manager APIs are available in OpenAPI / Swagger format. You can find these specifications in the azure-rest-api-specs GitHub repository. With a codified spec in hand, we can generate wrappers for the API. Microsoft has already generated wrappers for us in several different languages.

Using a Client Library SDK

Microsoft provides Azure management libraries that wrap these underlying APIs for a number of popular languages. You can find links on the Microsoft Azure SDKs page. When looking for a management SDK, be sure to select a management SDK instead of a service SDK.
A blob storage management SDK is an SDK for creating and configuring a storage account, whereas the service SDK is for reading and writing blobs inside the storage account. A management SDK generally has the name "management" or "arm" in the name (where arm stands for Azure Resource Manager), but the library names are not consistent across different languages. Instead, the names match the conventions for the ecosystem, and Node packages follow a different style than .NET and Java. As an example, the service SDK for storage in Node is azure-storage-node, whereas the management package is azure-arm-storage.

Using the Command Line

In addition to SDKs, there are command line utilities for managing Azure. PowerShell is one option. In my experience, PowerShell provides the most complete coverage of the management APIs, and over the years I've seen a few operations that you cannot perform in the Azure portal, but can perform with PowerShell. However, my favorite command line tool is the cross-platform Azure CLI. Not being a regular user of PowerShell, I find the CLI easier to work with and the available commands easier to discover. That being said, the Azure CLI doesn't cover all of Azure, although new features arrive on a regular cadence. In general, stick with the command line tools if you have quick, simple scripts to run. Some applications, however, require more algorithms, logic, heuristics, and cooperation with other services. For those scenarios, I'd prefer to work with an SDK in a programming language like C#. Speaking of which ...

Choices for C# Developers

If you are a C# developer who wants to manage Azure using C# code, you have the option of going with raw HTTP messages using a class like HttpClient, or using the SDK. Use the SDK. There is enough flexibility in the SDKs to do everything you need, and you don't need to build your own encapsulation of the APIs. You do need to choose the correct version of the SDKs.
If you search the web for examples of managing Azure from C# code, you'll run across NuGet package[...]
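For contrast with the SDK advice, here is a small sketch of the "pure HTTP" option described earlier: listing subscriptions with HttpClient. The api-version value is an assumption to verify against the Azure REST API Reference, and acquiring the bearer token is out of scope here:

```csharp
// A sketch of the raw-HTTP approach; the SDKs wrap exactly this kind of call.
// Check the Azure REST API Reference for the current api-version value.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class RawManagementApi
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> ListSubscriptions(string bearerToken)
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);

        var url = "https://management.azure.com/subscriptions?api-version=2016-06-01";
        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();

        // The body is JSON describing each subscription the token can see.
        return await response.Content.ReadAsStringAsync();
    }
}
```

Even a call this simple shows what the SDKs save you: header management, URL construction, versioning, and deserialization.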

New Pluralsight Course on Packaging and Deploying ASP.NET Core

Mon, 29 Jan 2018 10:03:00 Z

Recorded many months ago in my previous life, this new course shows how to deploy ASP.NET Core into Azure using a few different techniques. We'll start by using Git, then progress to using a build and release pipeline in Visual Studio Team Services. We'll also demonstrate how to use Docker and containers for deployment, and how to use Azure Resource Manager Templates to automate the provisioning and updates of all Azure resources. 

Lars has a blog post with a behind the scenes look, and you'll find the new course on Pluralsight.


Interacting with Azure SQL Using All Command Line Tools

Tue, 23 Jan 2018 15:05:00 Z

Microsoft's collection of open source command line tools built on Python continues to expand. Let's take the scenario where I need to execute a query against an Azure SQL database. The first step is poking a hole in the firewall for my current IP address. I'll use the Azure CLI 2.0:

λ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin
and enter the code DRAMCY103 to authenticate.
     ... subscription 1 ...
     ... subscription 2 ...

For the firewall settings, az has a firewall-rule create command:

λ az sql server firewall-rule create -g resourcegroupname 
   -s mydbserver -n watercressip --start-ip-address 
  "endIpAddress": "",
  "type": "Microsoft.Sql/servers/firewallRules"

Now I can launch the mssql-cli tool.

λ mssql-cli -S 
            -U appusername  -d appdbname

Auto-complete for columns works well when you have a FROM clause in place (maybe LINQ had it right after all).


If I'm in transient mode, I'll clean up and remove the firewall rule.

λ az sql server firewall-rule delete 
   -g resourcegroupname -s mydbserver -n watercressip               

The mssql-cli tool has a roadmap, and I'm looking forward to future improvements.