Winterdom

by dæmons be driven - a site by Tomas Restrepo
Deploying a Key Vault-based TDE protector for Azure SQL

Thu, 07 Sep 2017 00:00:00 +0000

Azure SQL now supports setting up Transparent Data Encryption while bringing your own encryption keys. This is easy to set up in the Azure Portal, but I wanted to try setting it up in an automated manner, preferably leveraging ARM templates as much as possible. This turned out to be a bit more complex than I expected, but totally doable. Here are the necessary steps:

- Creating the Key
- Deploying the Server
- Granting Key Permissions
- Creating the Protector

The ARM templates for this post can be found on GitHub.

Creating the Key

When using the BYOK (bring your own key) feature, you need to store your encryption key in an Azure Key Vault. So let's first create one:

- In the Azure Portal, locate your Key Vault
- Click on the Keys icon
- Click the + Add button
- Select the Generate option, the name, and other required properties
- Click the Create button to complete the process

You could do the same in a script with an Azure CLI command like the following:

```bash
az keyvault key create --vault-name $vaultName --name $keyName --size 2048 --protection software
```

Once you've created the key, make sure to grab the name of the current version of the key. You can do this in the portal, or by using the command line:

```bash
az keyvault key show --vault-name $vaultName --name $keyName --query key.kid
```

The URI returned will be something similar to https://<vaultName>.vault.azure.net/keys/<keyName>/<version>. We're interested in the last segment.

Deploying the Server

The next step is to use an ARM template to deploy the Azure SQL Server resource. At this point, we are not actually creating the TDE protector, as we need to do something else first. Here's the core of the template to create the server:

```json
{
  "name": "[parameters('sqlServerName')]",
  "type": "Microsoft.Sql/servers",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [],
  "tags": { "displayName": "SQL Logical Server" },
  "identity": { "type": "SystemAssigned" },
  "properties": {
    "administratorLogin": "[parameters('sqlServerAdminLogin')]",
    "administratorLoginPassword": "[parameters('sqlServerAdminLoginPassword')]"
  }
}
```

The interesting bit here is the identity property. When we set it to SystemAssigned, SQL will go to the Azure Active Directory tenant associated with the Azure subscription and create a new Service Principal named RN_<serverName>. This principal will be set up with X509 certificate credentials for authentication. I'm unsure at this point if it's possible to create this Service Principal manually, which would simplify things somewhat. Once the server is created, we'll see this identity reflected in the resource properties.
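The portal shows the identity on the server blade; if you'd rather script it, something like this should work (a sketch on my part, not verified end to end):

```bash
# Grab the object id of the server's new managed identity; we'll need it
# shortly to grant Key Vault permissions.
principalId=$(az sql server show \
  --resource-group $resourceGroup \
  --name $sqlServerName \
  --query identity.principalId -o tsv)
echo $principalId
```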
While we're at it, let's create a new database on the server and enable TDE. The latter is done by creating a nested resource of type transparentDataEncryption:

```json
{
  "name": "[parameters('sqlDbName')]",
  "type": "databases",
  "location": "[resourceGroup().location]",
  "apiVersion": "2014-04-01-preview",
  "dependsOn": [
    "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
  ],
  "tags": { "displayName": "SQL DB" },
  "properties": {
    "collation": "[parameters('sqlDbCollation')]",
    "edition": "[parameters('sqlDbEdition')]",
    "maxSizeBytes": "1073741824",
    "requestedServiceObjectiveName": "[parameters('sqlDbRequestedServiceObjectiveName')]"
  },
  "resources": [
    {
      "comments": "Transparent Data Encryption",
      "name": "current",
      "type": "transparentDataEncryption",
      "apiVersion": "2014-04-01-preview",
      "properties": { "status": "Enabled" },
      "dependsOn": [ "[parameters('sqlDbName')]" ]
    }
  ]
}
```

At this point, TDE will be enabled on the new database using the default Server Managed encryption key.

Granting Key Permissions

Once the new Azure SQL Ser[...]
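In short, this step boils down to granting the server's identity access to keys in the vault. A CLI sketch of my own (TDE BYOK requires the get, wrapKey, and unwrapKey key permissions):

```bash
# Allow the SQL server's identity to use (but not manage) keys in the vault.
# get/wrapKey/unwrapKey are the permissions TDE BYOK needs.
az keyvault set-policy \
  --name $vaultName \
  --object-id $principalId \
  --key-permissions get wrapKey unwrapKey
```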



Authenticating to SQL Azure with delegated tokens

Thu, 31 Aug 2017 00:00:00 +0000

In a previous post, I discussed how to authenticate to an Azure SQL database from a Web Application (running in Azure App Service) using an Azure Active Directory Service Principal. For this I used a certificate stored in Key Vault to authenticate the principal and obtain a token I could present to SQL. You can find the updated code for this post on GitHub.

In this post, let's expand this a bit further: I will add authentication support to the Web Application by federating through OpenId Connect to Azure Active Directory, and then delegate the user credentials all the way to the database.

Configure new users in SQL

Since I want to allow other Azure AD users to connect to the database, I need to grant them permissions to it. To do this, I just follow the steps from my previous post, using these T-SQL commands:

- Create the user: CREATE USER [user@tenant] FROM EXTERNAL PROVIDER
- Grant permissions: ALTER ROLE

Setting up the AD Application

So far, I've been using an Azure Active Directory Service Principal to authenticate to SQL. In order to set up OpenId Connect on our Web App, I also need an Azure AD application to go with it. So what I want is to configure my existing Service Principal so that I can use it for this purpose as well. Note: using a separate AD application registration won't work, because the On_Behalf_Of delegation would fail.

To do this, I'm going to open the Azure AD portal and find the Service Principal in the App Registrations section of the portal. Here, I open the Reply URLs page and add the application host (i.e. https://<appname>.azurewebsites.net/).

Granting Sign-In permission

For users to be able to sign in, I need to give the application the right permissions. I do this by opening the Required permissions page and clicking on the + Add button. Here, I select Windows Azure Active Directory (Microsoft.Azure.ActiveDirectory) as the API, and check the "Sign in and read user profile" option under "Delegated Permissions". Then I click save and confirm all changes.

Granting Access to SQL

There is one other permission that I need to grant to the application, so that we can delegate credentials to SQL Server. Again, click on the + Add button, and select Azure SQL Database as the API. Then, check the "Access Azure SQL DB and Data Warehouse" option under the "Delegated Permissions" section, and save all the changes.

As a final step, I locate the Application Id property in the application properties; we'll need this value in a moment.

Enabling Authentication

Now I can enable federated authentication. I could do this by leveraging the Easy Authentication feature in App Service, but since I'll need to change the application code later on, it's just as easy to do in code.
First, let's add references to the following NuGet packages to the project:

- Microsoft.Owin.Host.SystemWeb
- Microsoft.Owin.Security.OpenIdConnect
- Microsoft.Owin.Security.Cookies

Now I can write our startup code to enable authentication:

```csharp
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;
using System;
using System.Configuration;

namespace UsingTokenAuthApp
{
    public class AuthConfig
    {
        public static void RegisterAuth(IAppBuilder app)
        {
            var clientId = ConfigurationManager.AppSettings["APP_CLIENT_ID"];
            var tenant = ConfigurationManager.AppSettings["AAD_TENANT_ID"];
            var authority = $"https://login.microsoftonline.com/{tenant}";
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());
            var options = new OpenIdConnectAuthenticationOptions
            {
                Authority = authority,
                ClientId = clientId,
            };
            app.UseOpenIdConnectAuthentication(options);
        }
    }
}
```

This is obviously a very minimal im[...]
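For what it's worth, the delegation this all builds toward is the standard OAuth2 On_Behalf_Of grant against the AAD token endpoint. Stripped of library helpers, it amounts to a request like this (a hand-rolled sketch; the real code uses a certificate assertion via client_assertion parameters instead of the client_secret shown here):

```bash
# Exchange the signed-in user's access token for a token scoped to Azure SQL,
# using the OAuth2 on-behalf-of grant.
curl -s -X POST "https://login.microsoftonline.com/$tenantId/oauth2/token" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
  -d "client_id=$appClientId" \
  -d "client_secret=$appClientSecret" \
  -d "assertion=$userAccessToken" \
  -d "resource=https://database.windows.net/" \
  -d "requested_token_use=on_behalf_of"
```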



Token authentication to SQL Azure with a Key Vault Certificate

Tue, 29 Aug 2017 00:00:00 +0000

In a previous post, I presented a PowerShell script to create a new Service Principal in Azure Active Directory, using a self-signed certificate generated directly in Azure Key Vault for authentication. Now, let's try using it for something useful. All the code and samples for this article can be found on GitHub.

We can use the Key Vault certificate in a Web Application deployed to Azure App Service to authenticate to Azure Active Directory using our Service Principal, and then obtain a token to connect to SQL Azure. This saves us from having to store passwords anywhere in our configuration, since Key Vault and App Service will provide us with easy, secure access to our authentication certificate. In order to do this, we need a few things:

- Our Service Principal identity in AAD, with the Key Vault certificate
- A SQL Azure Database
- A SQL Server (Azure) login based on our AAD Service Principal, with permissions on the database in question
- A Web App deployed with our Key Vault certificate
- An ASP.NET app that will use the certificate to authenticate to AAD, then use the token to connect to SQL

Let's get started!

Creating the Database

For this sample, I'm going to create a new Azure SQL Server logical server, then deploy a new, blank database on it. We'll also set up the server firewall to allow connections from other Azure resources. Since we want to use Azure Active Directory authentication, we also need to set up our new server with an Azure AD admin user. For this we need both the username (user@domain) and the object id of the account in the domain. As usual, let's use Azure Resource Manager (ARM) templates for this, by creating a resource of type Microsoft.Sql/servers/administrators:

```json
{
  "name": "[parameters('sqlServerName')]",
  "type": "Microsoft.Sql/servers",
  "location": "[resourceGroup().location]",
  "apiVersion": "2014-04-01-preview",
  "dependsOn": [],
  "tags": { "displayName": "SQL Logical Server" },
  "properties": {
    "administratorLogin": "[parameters('sqlServerAdminLogin')]",
    "administratorLoginPassword": "[parameters('sqlServerAdminLoginPassword')]"
  },
  "resources": [
    {
      "name": "activedirectory",
      "type": "administrators",
      "location": "[resourceGroup().location]",
      "apiVersion": "2014-04-01-preview",
      "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
      ],
      "properties": {
        "administratorType": "ActiveDirectory",
        "login": "[parameters('sqlServerAdAdmin')]",
        "sid": "[parameters('sqlServerAdAdminObjectId')]",
        "tenantId": "[subscription().tenantId]"
      }
    },
    {
      "name": "AllowAllWindowsAzureIps",
      "type": "firewallrules",
      "location": "[resourceGroup().location]",
      "apiVersion": "2014-04-01-preview",
      "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
      ],
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "0.0.0.0"
      }
    },
    {
      "name": "[parameters('sqlDbName')]",
      "type": "databases",
      "location": "[resourceGroup().location]",
      "apiVersion": "2014-04-01-preview",
      "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
      ],
      "tags": { "displayName": "SQL DB" },
      "properties": {
        "collation": "[parameters('sqlDbCollation')]",
        "edition": "[parameters('sqlDbEdition')]",
        "maxSizeBytes": "1073741824",
        "requestedServiceObjectiveName": "[parameters('sqlDbRequestedServiceObjectiveName')]"
      }
    }
  ]
}
```

Creating the SQL Login

To define our [...]
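The gist of the login step is connecting as the Azure AD admin and creating a contained database user for the Service Principal. A rough sketch using sqlcmd's -G (Azure AD authentication) flag; the exact user-creation syntax for a service principal may vary by service version, so double-check against the docs:

```bash
# Connect as the Azure AD admin and create a contained user for the
# service principal, then grant it a role. Names are placeholders.
sqlcmd -S myserver.database.windows.net -d mydb -G \
  -U admin@mytenant.com -P '<password>' -Q "
    CREATE USER [my-principal-name] FROM EXTERNAL PROVIDER;
    ALTER ROLE db_datareader ADD MEMBER [my-principal-name];
  "
```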



Azure AD Service Principal with a Key Vault Certificate

Mon, 28 Aug 2017 00:00:00 +0000

It is often useful to create Azure Active Directory Service Principal objects for authenticating applications and automating tasks in Azure. While you can authenticate a Service Principal using a password (client secret), it might be better to use an X509 certificate as an alternative. You still need to find a way to keep the certificate secure, though.

That's where Azure Key Vault comes in, allowing you to store the authentication certificate in a secure manner. An application could then obtain the certificate from Key Vault as needed, or, if it's running in Azure, there might be ways to provision the certificate automatically so that we don't need to copy stuff around.

You could obtain a certificate from any valid certification authority and store it safely in Key Vault. However, Key Vault can also generate self-signed certificates, which might be good enough for many scenarios. Here is a useful PowerShell script that will create a new self-signed certificate directly in Key Vault, then create a new service principal in the subscription tenant, with the new certificate for authentication:

```powershell
[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)]
    [String]$keyVaultName,
    [Parameter(Mandatory = $true)]
    [String]$principalName,
    [Parameter()]
    [int]$validityInMonths = 12
)

function New-KeyVaultSelfSignedCert {
    param($keyVault, $certificateName, $subjectName, $validityInMonths, $renewDaysBefore)
    $policy = New-AzureKeyVaultCertificatePolicy `
        -SubjectName $subjectName `
        -ReuseKeyOnRenewal `
        -IssuerName 'Self' `
        -ValidityInMonths $validityInMonths `
        -RenewAtNumberOfDaysBeforeExpiry $renewDaysBefore
    $op = Add-AzureKeyVaultCertificate `
        -VaultName $keyVault `
        -CertificatePolicy $policy `
        -Name $certificateName
    while ( $op.Status -ne 'completed' ) {
        Start-Sleep -Seconds 1
        $op = Get-AzureKeyVaultCertificateOperation -VaultName $keyVault -Name $certificateName
    }
    (Get-AzureKeyVaultCertificate -VaultName $keyVault -Name $certificateName).Certificate
}

$certName = "SPCert-$principalName"
$cert = New-KeyVaultSelfSignedCert -keyVault $keyVaultName `
    -certificateName $certName `
    -subjectName "CN=$principalName" `
    -validityInMonths $validityInMonths `
    -renewDaysBefore 1
Write-Verbose "Certificate generated $($cert.Thumbprint)"

$certString = [Convert]::ToBase64String($cert.GetRawCertData())
New-AzureRmADServicePrincipal -DisplayName $principalName `
    -CertValue $certString `
    -EndDate $cert.NotAfter.AddDays(-1)
```

The script assumes you've already signed in to your Azure subscription using Login-AzureRmAccount. Let's try executing the script. If we go into the Key Vault in the Azure Portal, we can see the new certificate generated. We can also query the new Service Principal and verify that it is indeed set up with certificate-based authentication:

```powershell
Get-AzureRmADSpCredential -ObjectId 37800b1f-5d17-461b-80a3-c4a8df10b319

StartDate            EndDate              KeyId                                Type
---------            -------              -----                                ----
8/28/2017 2:29:54 AM 8/27/2018 2:29:49 AM 8a3250a4-4383-4a5f-acdc-4deafd930e6d AsymmetricX509Cert
```

I'll show some useful scenarios for this in a follow-up post. [...]
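As an aside, newer Azure CLI releases can do roughly the same thing in one shot. I haven't tested this against the scenario above, so treat it as a sketch:

```bash
# Create a service principal whose self-signed certificate is generated and
# stored directly in Key Vault. Names are placeholders.
az ad sp create-for-rbac --name my-principal \
  --create-cert --cert SPCert-my-principal --keyvault myvault

# Verify the new principal carries certificate credentials.
az ad sp credential list --id <appId> --cert
```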



Creating Event Grid Subscriptions

Mon, 21 Aug 2017 00:00:00 +0000

A few days ago, I wrote about using Azure Resource Manager (ARM) templates to deploy Azure Event Grid. That sample showed how to create a new Event Grid Topic resource. This basically gives you a URL you can publish custom events to and have them routed to one or more event subscribers. However, one of the very powerful features in Event Grid is not custom topics, but subscribing to events published by the Azure fabric itself; that is, events published by Resource Manager providers. As of this writing, only a few providers support Event Grid, but this number is sure to grow in the coming months.

Supported Event Publishers

Which Azure Resource Manager providers support Event Grid? An easy way to find out is to ask Azure itself. To do this, we can leverage the excellent ArmClient tool. Resource Managers that support publishing events through Event Grid are called Topic Types, and we can query these:

```bash
armclient get /providers/Microsoft.EventGrid/topicTypes?api-version=2017-06-15-preview
```

If the command succeeds, we should see something like this:

```json
{
  "value": [
    {
      "properties": {
        "provider": "Microsoft.Eventhub",
        "displayName": "EventHubs Namespace",
        "description": "Microsoft EventHubs service events.",
        "resourceRegionType": "RegionalResource",
        "provisioningState": "Succeeded"
      },
      "id": "providers/Microsoft.EventGrid/topicTypes/Microsoft.Eventhub.Namespaces",
      "name": "Microsoft.Eventhub.Namespaces",
      "type": "Microsoft.EventGrid/topicTypes"
    },
    ...
  ]
}
```

You can also use the Azure CLI command az eventgrid topic-type list on version 2.0.14 or later.

Knowing which event publishers exist is only half the story, though. We also want to know what types of events a publisher supports. These are called Event Types in Event Grid, and we can query those as well. For example, let's say we want to find the events supported by the Microsoft.Resources.ResourceGroups topic type:

```bash
armclient get /providers/Microsoft.EventGrid/topicTypes/Microsoft.Resources.ResourceGroups/eventTypes?api-version=2017-06-15-preview
```

If the command succeeds, we should see output similar to the following:

```json
{
  "value": [
    {
      "properties": {
        "displayName": "Resource Write Success",
        "description": "Raised when a resource create or update operation succeeds.",
        "schemaUrl": "TBD"
      },
      "id": "providers/Microsoft.EventGrid/topicTypes/Microsoft.Resources.ResourceGroups/eventTypes/Microsoft.Resources.ResourceWriteSuccess",
      "name": "Microsoft.Resources.ResourceWriteSuccess",
      "type": "Microsoft.EventGrid/topicTypes/eventTypes"
    },
    ...
  ]
}
```

The equivalent Azure CLI command would be az eventgrid topic-type list-event-types --name Microsoft.Resources.ResourceGroups.

Now let's see how we can subscribe to events published by the Azure fabric.

Event Hub Namespaces

Currently, you can only subscribe to events published at the Event Hub Namespace level, not to an individual Event Hub itself.
For this we'd use the Microsoft.EventHub.Namespaces topic type to create a nested resource of type Microsoft.EventGrid/eventSubscriptions:

```json
{
  "apiVersion": "2017-06-17-preview",
  "name": "[concat(parameters('eventHubNamespaceName'), '/Microsoft.EventGrid/', parameters('subscriptionName'))]",
  "type": "Microsoft.EventHub/namespaces/providers/eventSubscriptions",
  "tags": { "displayName": "Webhook Subscription" },
  "dependsOn": [
    "[concat('Microsoft.EventHub/Namespaces/', parameters('eventHubNamespaceName'))]"
  ],
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "[parameters('webhookUrl')]"
      }
    },
    "filter": {
      "includedEventTypes": [ "All" ],
      "subjectBeginsWith": "",
      "subjectEndsWith": "",
      "subjectIsCaseSensitive": false
    }
  }
}
```
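If you prefer doing this imperatively rather than through a template, current Azure CLI releases expose the same operation. Roughly (modern syntax, which differs from the 2017 preview CLI, so verify against your installed version):

```bash
# Subscribe a webhook to all events from an Event Hub namespace.
az eventgrid event-subscription create \
  --name mysubscription \
  --source-resource-id "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>" \
  --endpoint "https://example.com/api/events"
```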



Deploying an Event Grid + WebHook with ARM

Thu, 17 Aug 2017 00:00:00 +0000

Azure Event Grid was announced a couple of days ago as a new service for building event-driven architectures. While the initial preview is a bit limited in the number of supported event publishers, it has tons of promise and I was immediately intrigued by the possibilities! Being a fan of Azure Resource Manager templates, I soon tried to figure out how you could automate the creation of Event Grid resources. After some trial and error and some research, I was able to come up with an initial template to create:

- An Event Grid topic resource
- A WebHook-based event subscription

You can find the complete sample template here. The sample template first declares some simple parameters and variables:

```json
{
  "parameters": {
    "eventGridName": { "type": "string", "minLength": 1 },
    "webhookName": { "type": "string", "minLength": 1 },
    "webhookUrl": { "type": "string", "minLength": 1 },
    "webhookEventTypes": { "type": "array", "defaultValue": [ "All" ] },
    "webhookPrefixFilter": { "type": "string", "defaultValue": "" },
    "webhookSuffixFilter": { "type": "string", "defaultValue": "" },
    "webhookCaseSensitive": { "type": "bool", "defaultValue": false },
    "webhookLabels": { "type": "array", "defaultValue": [ "" ] }
  },
  "variables": {
    "apiVersion": "2017-06-15-preview"
  }
}
```

Creating the Event Grid topic is relatively simple; it's a resource of type Microsoft.EventGrid/topics. We just need to supply the right API version (2017-06-15-preview in this release), the grid name, and one of the supported Azure regions:

```json
{
  "apiVersion": "[variables('apiVersion')]",
  "name": "[parameters('eventGridName')]",
  "type": "Microsoft.EventGrid/topics",
  "location": "[resourceGroup().location]",
  "tags": { "displayName": "EventGrid" },
  "properties": {
    "name": "[parameters('eventGridName')]"
  }
}
```

Creating the WebHook subscription took a bit of work to figure out. I eventually realized this needed to be a nested resource of type Microsoft.EventGrid/topics/providers/eventSubscriptions, with a special name following the pattern <topicName>/Microsoft.EventGrid/<subscriptionName>. You can then specify the WebHook URL, and a filter for which events you want delivered to it:

```json
{
  "apiVersion": "[variables('apiVersion')]",
  "name": "[concat(parameters('eventGridName'), '/Microsoft.EventGrid/', parameters('webhookName'))]",
  "type": "Microsoft.EventGrid/topics/providers/eventSubscriptions",
  "tags": { "displayName": "Webhook Subscription" },
  "dependsOn": [
    "[concat('Microsoft.EventGrid/topics/', parameters('eventGridName'))]"
  ],
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "[parameters('webhookUrl')]"
      }
    },
    "filter": {
      "includedEventTypes": "[parameters('webhookEventTypes')]",
      "subjectBeginsWith": "[parameters('webhookPrefixFilter')]",
      "subjectEndsWith": "[parameters('webhookSuffixFilter')]",
      "subjectIsCaseSensitive": "[parameters('webhookCaseSensitive')]"
    },
    "labels": "[parameters('webhookLabels')]"
  }
}
```

The end result of applying the template in the portal shows the webhook created correctly. I also added some outputs to the ARM template so that it returns the topic endpoint URL of the created Event Grid, as well as the first access key:

```json
"outputs": {
  "eventGridUrl": {
    "type": "string",
    "value": "[reference(parameters('eventGridName')).endpoint]"
  },
  "eventGridKey": {
    "type": "string",
    [...]
```
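With those two outputs in hand, publishing a test event to the new topic is a single HTTP call. A quick sketch of mine, following the Event Grid event schema:

```bash
# Publish a custom event to the new topic, authenticating with the access key.
curl -s -X POST "$eventGridUrl" \
  -H "aeg-sas-key: $eventGridKey" \
  -H "Content-Type: application/json" \
  -d '[{
        "id": "1",
        "eventType": "recordInserted",
        "subject": "myapp/vehicles/motorcycles",
        "eventTime": "2017-08-17T00:00:00Z",
        "data": { "make": "Ducati", "model": "Monster" },
        "dataVersion": "1.0"
      }]'
```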



VSTS Build hanging with XUnit tests

Tue, 15 Aug 2017 00:00:00 +0000

I was setting up a simple demo on Visual Studio Team Services (VSTS) today and ran into an oddity. The sample project I was using had a mixture of MSTest and XUnit-based tests. It would run just fine in Visual Studio, but after setting up a hosted build in VSTS, I noticed that the build would seem to hang after apparently running all tests, so I had to cancel the builds.

Looking at the logs, I eventually found this line, which was not expected:

```
2017-08-15T16:48:44.4074176Z Information: [xUnit.net 00:00:00.8781599]   Starting:    Microsoft.VisualStudio.QualityTools.UnitTestFramework
```

Suspecting this could be related, I edited the build definition: in the Test Assemblies task, I changed the Test assemblies property to include the following line:

```
!**\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
```

This tells the test runner not to attempt to discover or run tests in this DLL.
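For context, assuming the task's default inclusion pattern (yours may differ), the full property would end up looking something like this:

```
**\*test*.dll
!**\obj\**
!**\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
```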

Surprisingly, this worked and allowed builds to complete normally after running all tests.




Trying out PowerShell 6.0 in containers

Mon, 14 Aug 2017 00:00:00 +0000

I've been meaning to give the PowerShell Core 6.0 builds on Linux a try recently. The PowerShell team offers some nice pre-built Docker images you can use to test it, so that's great. I thought this would be a simple but cool scenario to try out the new Azure Container Instances service!

Of course, you cannot use the pre-built image directly for this, as you wouldn't have a direct way to connect to the hosted container, so I decided to create a new image that not only had PowerShell Core 6.0 in it, but also had SSH built in. This would be useful to quickly spin up a PowerShell-on-Linux environment for demos and simple tests.

So the first thing to do is to create a new empty directory for our new container files:

```bash
mkdir powershell
cd powershell
```

Then, using a text editor (Vim is my poison of choice), I created a new dockerfile in this directory, with the following contents:

```dockerfile
FROM microsoft/powershell
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd

ENV newuser tomasr

# add new user
RUN useradd -m ${newuser}
RUN usermod -aG sudo ${newuser}

# copy SSH keys
RUN mkdir -p /home/${newuser}/.ssh
ADD authorized_keys /home/${newuser}/.ssh/authorized_keys

# set right permissions
RUN chown ${newuser}:${newuser} /home/${newuser}/.ssh
RUN chown ${newuser}:${newuser} /home/${newuser}/.ssh/authorized_keys
RUN chmod 700 /home/${newuser}/.ssh/authorized_keys

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

This dockerfile does a few simple things:

- Inherits from the official PowerShell docker image
- Installs SSHD
- Creates a new user called tomasr
- Copies my authorized_keys file to the new user's directory and sets the right permissions
- Starts SSHD

Now, let's build ourselves a new docker image:

```bash
docker build -t powershell .
```

If everything works, you should see a message similar to "Successfully built 34f996c46a23". Now, before I can use this image with Azure Container Instances, I need to push it to an image repository. I'll use Azure Container Registry, which I had already provisioned on my Azure account by this point. So let's provide docker with the credentials to my registry, tag the image, and then push it to the registry:

```bash
sudo docker login blogtests.azurecr.io -u <username> -p <password>
sudo docker tag powershell blogtests.azurecr.io/tomasr/powershell
sudo docker push blogtests.azurecr.io/tomasr/powershell
```

Once this is complete, we can use the Azure CLI to create a new container instance called p4 using our new image. We'll give it a public IP address listening on port 22:

```bash
az container create --name p4 --image blogtests.azurecr.io/tomasr/powershell \
  --resource-group blogtopics --ip-address public --port 22 \
  --registry-username <username> --registry-password <password>
```

After a few minutes, we can check if our deployment is complete and our container is running, using the following command:

```bash
az container show --name p4 --resource-group blogtopics
```

If the container is ready, we should see something like:

```json
"ipAddress": {
  "ip": "52.168.164.152",
  "ports": [
    {
      "port": 22,
      "protocol": "TCP"
    }
  ]
},
"location": "eastus",
"name": "p4",
"osType": "Linux",
"provisioningState": "Succeeded",
```

When the container is fully deployed, we just need to use an SSH client with the right private key to connect and run powershell on it. Once we're done playing with our environment, we can just delete the container instance:

```bash
az container delete --name p4 --resource-group blogtopics
```

Conclusion

Using containers, we've built a simple way to quickly provision a new machine to do simple PowerShell demonstrations. It's not terribly useful, but still a fun use of technology! [...]
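For reference, the "connect and run powershell" step above is plain SSH. With the IP address from the earlier output and the matching private key (the key path here is hypothetical), it's just:

```bash
# Connect as the user baked into the image, then start PowerShell Core.
ssh -i ~/.ssh/id_rsa tomasr@52.168.164.152
powershell
```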



Using Azure Functions to create AppInsights release annotations

Tue, 08 Aug 2017 00:00:00 +0000

I recently had the opportunity to discuss one cool feature in Application Insights with a customer: Release Annotations. As the article above shows, you can easily create release annotations using Visual Studio Team Services release pipelines. In this post, I'd like to provide an alternative way to implement release annotations for applications hosted in Azure App Service, using the WebHooks support in Kudu together with Azure Functions.

Context

Let's assume that you already have a Web application deployed to Azure App Service, and have hooked up some sort of automated deployment story. For example, you may be using the local git repository to deploy a new version of your app through a git push command. Let's also assume you already have an Application Insights resource you're using to monitor this application.

Step 1: Obtain Application Key

The first step in automating the creation of release annotations is to prepare our Application Insights resource by obtaining a new Application Key. We can do this from the Azure Portal, by finding our Application Insights resource and selecting the API Access option on the left-hand menu:

- Press the Create API key option at the top-left
- Provide a description for your new key (such as "Create Annotations")
- Select the "Write annotations" permission
- Press the "Generate key" button

Once the key has been generated, copy the value. If you don't, you'll have to create a brand new key! Also, note the Application ID value for your Application Insights resource, displayed alongside your new API key. We will need both to create a new annotation later on.

Step 2: Create the Function App

Create a brand new Function App in your Azure subscription. For this demo, I'll create a new one in a Consumption Plan. Now, create a new Function. For simplicity, I'm going to use the "Webhook + API" template in JavaScript.

The request sent by Kudu is going to contain some useful properties for our new function:

- id: the deployment ID
- siteName: the name of our Web App. You could use this, for example, to have a single function support deployments for multiple applications
- message: the deployment message. For a Git-based deployment, this could be part of the commit message
- authorEmail: for a Git-based deployment, this would be the author of the last commit

You can find a complete list of the properties here.

Creating the release annotation requires a few things besides the Application Insights application id and key:

- The name you want to give to the release annotation. We'll use a combination of the site name and the deployment ID
- The time the release happened. We'll just use the current date/time
- A list of arbitrary properties: we'll store the commit message, as well as the author
Our function code would look something like this:

```javascript
var request = require('request');

module.exports = function (context, req) {
    var deploymentId = req.body.id;
    var siteName = req.body.siteName;
    var releaseName = siteName + '_' + deploymentId;
    var appId = process.env.APPINSIGHTS_APPID;
    var apiKey = process.env.APPINSIGHTS_APIKEY;
    var releaseProperties = {
        ReleaseName: releaseName,
        Message: req.body.message.trim(),
        By: req.body.authorEmail
    };
    context.log('Creating a new release annotation: ' + releaseName);
    var body = {
        Id: deploymentId,
        AnnotationName: releaseName,
        EventTime: (new Date()).toISOString(),
        Category: 'Deployment',
        // the Properties part contains a JSON object as a string
        Properties: JSON.stringify(releaseProperties)
    };
    var options = {
        url: 'https://aigs1.aisvc.visualstudio.com/applications/' + appId + '/Annotations?api-version=2015-11',
        method: 'PUT',
        [...]
```
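The other half of the wiring is pointing Kudu at this function. You can add the hook through the portal, or by calling Kudu's hooks API directly with the site's deployment credentials. A sketch (the placeholders are mine; check the Kudu wiki for the exact contract):

```bash
# Register the function's URL as a Kudu PostDeployment webhook.
curl -u '<deployUser>:<deployPassword>' \
  -H "Content-Type: application/json" \
  -X POST "https://<site>.scm.azurewebsites.net/api/hooks" \
  -d '{ "url": "https://<functionapp>.azurewebsites.net/api/<function>?code=<key>", "event": "PostDeployment" }'
```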



Viasfora v3.6 Released

Sun, 06 Aug 2017 00:00:00 +0000

Today I pushed version 3.6 of my Viasfora Visual Studio extension to the Visual Studio Gallery.

This version includes a few minor bugfixes, and some new features, such as:

  • Export/Import settings
  • Export/Import color themes
  • Support for JavaScript template literals

If you run into any issues, or have any feature suggestions, please create a new issue on GitHub!
