
OdeToCode by K. Scott Allen






Copyright: (c) 2004 to 2017 OdeToCode LLC
 



New Course on Building Secure Services with Microsoft Azure

Fri, 28 Jul 2017 07:12:00 Z

My latest course on Pluralsight covers a range of topics centered around building secure services.

The first topic is software containers. I’ll show you how to work with Docker tools to run .NET Core software in Windows and Linux containers, and how to deploy containers into Azure App Services.


We’ll also look at automating Azure using Resource Manager templates. Automation is crucial for repeatable, secure deployments. In this section we’ll also see how to work with Azure Key Vault as a place to store cryptographic keys and secrets.


The third module focuses on Service Fabric. We’ll see how to write stateful and stateless services, including an ASP.NET Core Web application, and host those services in a service fabric cluster.


Finally, we’ll use Azure Active Directory with protocols like OIDC and OAuth 2 to secure both web applications and web APIs. We’ll also make secure calls to an API by obtaining a token from Azure AD and passing the token to the secure API, and look at using Azure AD B2C as a replacement for ASP.NET Identity and membership. 


With this course I now have over 14 hours of Azure videos available. You can see the entire list here: https://www.pluralsight.com/authors/scott-allen




Scaling Directions and Partitioning

Thu, 27 Jul 2017 05:23:27 Z

When a system needs more resources, we should favor horizontal scale over vertical scale. In this document, we’ll look at scaling with some Microsoft Azure specifics.

#Define

When we scale a system, we add more compute, storage, or networking resources so the system can handle more load. For a web application with customer growth, we’ll hopefully reach a point where the number of HTTP requests overwhelms the system, causing delays and errors for our customers. For systems running in Azure App Services, we can scale up, or we can scale out.

Scaling up means replacing instances of our existing virtual hardware with more powerful hardware. With the click of a button in the portal, or a simple script, we can move from a machine with 1 CPU and 1.75 GB of memory to a machine with double the number of cores and memory, or more. We sometimes refer to scaling up (and down) as vertical scaling.

Scaling out means adding more instances of your existing virtual hardware. Instead of running 2 instances of an App Service plan, you can run 4, or 10; the maximum number depends on the pricing tier you select. Scaling out and in is what we call horizontal scaling.

Advantages to Horizontal Scale

Horizontal scaling is the preferred scale technique for several reasons. First, you can program Azure with rules to automatically scale out, and likewise apply rules to scale in when the load is light. Second, horizontal scale adds redundancy to a system. Commodity hardware in the cloud can fail, and when there is a failure, the load balancer in front of an App Service plan can send traffic to the other available instances until a replacement instance comes online. We want to test a system using at least two instances from the start, to ensure the system can scale in the horizontal direction without running into problems, like those caused by in-memory session state.
Make sure to turn off Application Request Routing (ARR) in the app service to make effective use of multiple instances. ARR is on by default, meaning the load balancer in front of the app service will inject cookies to make a user’s browser session sticky to a specific instance.

Third, compared to vertical scaling, horizontal scale offers more headroom. A scale up strategy only lasts until we reach the largest machine size, at which point the only choice is to scale out. Although the maximum number of instances in an App Service plan is limited, using a DNS load balancer like Azure Traffic Manager allows a system to process requests across multiple App Service plans. The plans can live in different data centers, providing additional availability in the face of disaster and practically infinite horizontal scale.

Advantages to Vertical Scale

Some types of systems will benefit more from a scale up approach. A CPU bound system using multiple threads to execute heavy processing algorithms might benefit from having more cores available on every machine. A memory bound system manipulating large data structures might benefit from having more memory available on every machine. The only way to know the best approach for a specific system is to run tests and record benchmarks and metrics.

Other Scale Strategies

Scale up and scale out are two strategies to consider for scaling specific components of a system; generally, we consider them for stateless front-end web servers. Other areas of a system might require different strategies, particularly networking and data storage components. These components often require a partitioning strategy.

Partitioning comes into play when an aspect of the system outgrows the defined physical limits of a resource. For example, an Azure Service Bus namespace has defined limits on the number of concurrent connections. Azure Storage has defined limits for IOPS on an account.
Azure SQL databases, servers, and pools have defined limits on the number of concurrent logins and on database transactions. To understand how partitioning works, and t[...]
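To make the partitioning idea concrete, here is a minimal sketch (the key names and shard count are illustrative, not from the post) that routes a key to one of N partitions with a stable hash, so the same customer always lands on the same storage account, namespace, or database shard:

```csharp
using System;

public static class Partitioner
{
    // Stable FNV-1a hash: unlike string.GetHashCode in .NET Core, the result
    // does not change between processes, so a key always maps to the same partition.
    public static uint StableHash(string key)
    {
        uint hash = 2166136261;
        foreach (char c in key)
        {
            hash ^= c;
            hash *= 16777619;
        }
        return hash;
    }

    // Map a key (customer id, message key, ...) to one of partitionCount resources.
    public static int PartitionFor(string key, int partitionCount)
    {
        return (int)(StableHash(key) % (uint)partitionCount);
    }
}

public class Program
{
    public static void Main()
    {
        // The same key deterministically selects the same partition.
        Console.WriteLine(Partitioner.PartitionFor("customer-42", 4));
        Console.WriteLine(Partitioner.PartitionFor("customer-42", 4));
    }
}
```

A real system would also need a plan for re-balancing when the partition count changes, which is why techniques like consistent hashing exist.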



Patterns for the Cloud and Microsoft Azure Now Available on Pluralsight

Tue, 11 Jul 2017 12:58:00 Z

Do you want to design a resilient, scalable system in the cloud? Have you wondered what platforms and services Microsoft Azure offers to make building resilient, scalable systems easier?

Last month I released a new course on design patterns for the cloud with a focus on Microsoft Azure. The first of the four modules in this course provides an overview of the various platforms and services available in Azure, and also demonstrates a few different system designs to show how the various resources can work together.


The second module of the course is all about building resilient systems – systems that are highly available and recover from disasters small and large. You’ll see how to use resources like Azure Service Bus and Azure Traffic Manager.


The third module is all about scalability. We will talk about partitioning, sharding, caching, and CDNs. We’ll also take a look at the Azure API Management service and see how to build an API gateway.


The last module demonstrates how to use the load testing tools in Visual Studio and VSTS Online. It’s important to stress your systems and prove their scalability and resiliency features.


I’m pleased with the depth and breadth of the technical content in this course. I hope you enjoy watching it!




Thoughts on Azure Functions and Serverless Computing

Mon, 10 Jul 2017 09:11:00 Z

For the last 10 months I’ve been working with Azure Functions and mulling over the implications of computing without servers. If Azure Functions were an entrée, I’d say the dish layers executable code over unspecified hardware in the cloud with a sprinkling of input and output bindings on top. However, Azure Functions are not an entrée, so it might be better to describe the capabilities without using culinary terms.

With Azure Functions, I can write a function definition in a variety of languages – C#, F#, JavaScript, Bash, PowerShell, and more. There is no distinction between compiled languages, interpreted languages, and languages typically associated with a terminal window. I can then declaratively describe an input to my function. An input might be a timer (I want the function to execute every 2 hours), an HTTP message (I want the function invoked when an HTTPS request arrives at functions-test.azurewebsites.net/api/foo), a new blob appearing in a storage container, or a new message appearing in a queue. There are other inputs, as well as output bindings to send results to various destinations. By using a hosting plan known as a consumption plan, I can tell Azure to execute my function with whatever resources are necessary to meet the demand on my function. It’s like saying “here is my code, don’t let me down”.

The Good

Azure Functions are cheap. While most cloud services will send a monthly invoice based on how many resources you’ve provisioned, Azure Functions only bill you for the resources you actively consume. Cost is based on execution time and memory usage for each function invocation.

Azure Functions are simple for simple scenarios. If you need a single webhook to take some data from an HTTP POST and place the data into a database, then with functions there is no need to create an entire project or provision an App Service plan.
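As a rough illustration of consumption-plan billing, here is a sketch that turns execution count, average duration, and memory into a monthly estimate. The rates below match the published 2017 consumption prices, but treat them as placeholders and note the sketch ignores the monthly free grant and memory-rounding rules:

```csharp
using System;

public static class ConsumptionPlan
{
    // HYPOTHETICAL placeholders; check the current Azure pricing page.
    const decimal PricePerGbSecond = 0.000016m;
    const decimal PricePerMillionExecutions = 0.20m;

    // executions: invocations per month; avgSeconds: average duration per
    // invocation; memoryGb: memory consumed per invocation, in GB.
    public static decimal MonthlyEstimate(long executions, double avgSeconds, double memoryGb)
    {
        decimal gbSeconds = (decimal)(executions * avgSeconds * memoryGb);
        decimal executionCharge = executions / 1_000_000m * PricePerMillionExecutions;
        return gbSeconds * PricePerGbSecond + executionCharge;
    }
}

public class Program
{
    public static void Main()
    {
        // One million half-second invocations at 128 MB each.
        Console.WriteLine(ConsumptionPlan.MonthlyEstimate(1_000_000, 0.5, 0.128));
    }
}
```

The point of the arithmetic: a million modest invocations costs on the order of a dollar, which is why the post calls functions cheap for light workloads.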
The amount of code you write in a function will probably be less than writing the same behavior outside of Azure Functions. There’s less code because the declarative input and output bindings remove boilerplate code. For example, when using Azure storage, there is no need to write code to connect to an account and find a container. The function runtime will wire up everything the function needs and pass more actionable objects as function parameters. The following is an example function.json file that defines the bindings for a function.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Azure Functions are scalable. Of course, many resources in Azure are scalable, but other resources require configuration and care to behave well under load.

The Cons

One criticism of Azure Functions, and PaaS solutions in general, is vendor lock-in. The languages and the runtimes for Azure Functions are not specialized, meaning the C# and .NET or JavaScript and NodeJS code you write will move to other environments. However, the execution environment for Azure Functions is specialized. The input and output bindings that remove the boilerplate code for connecting to other resources are a feature only the function environment provides. Thus, it requires some work to move Azure Function code to another environment.

One of the biggest drawbacks to Azure Functions is that deploying, authoring, testing, and executing a function has been difficult in any environment outside of Azure and the Azure portal, although this situation is improving (see The Future section below). There have been a few attempts at creating Visual Studio project templates and command line tools which have never progressed beyond a preview version. The experience for maintaining multiple functions in a larger scale project has been frustratin[...]



Developing with Node on Microsoft Azure

Tue, 20 Jun 2017 09:12:00 Z

Catching up on announcements …

About 3 months ago, Pluralsight released my Developing with Node on Microsoft Azure course.  Recorded on macOS and taking the perspective of a Node developer, this course shows how to use Azure App Services to host a NodeJS application, as well as how to manage, monitor, debug, and scale the application. I also show how to use Cosmos DB, which is the NoSQL DB formerly known as Document DB, by taking advantage of the Mongo DB APIs.

Other topics include how to use Azure SQL, the CLI tools, and blob storage, and some cognitive services are thrown into the mix, too. An entire module is dedicated to serverless computing with Azure Functions (implemented in JavaScript, of course).

In the finale of the course I show you how to set up a continuous delivery pipeline using Visual Studio Team Services to build, package, and deploy a Node app into Azure.

I hope you enjoy the course.





ASP.NET Configuration Options Will Understand Arrays

Mon, 24 Apr 2017 09:12:00 Z

Continuing on topics from code reviews.

Last year I saw some C# code working very hard to process an application config file like the following:

{
  "Storage": {
    "Timeout":  "25", 
    "Blobs": [
      {
        "Name": "Primary",
        "Url": "foo.com"

      },
      {
        "Name": "Secondary",
        "Url": "bar.com"

      }
    ]
  }
}

Fortunately, the Options framework in ASP.NET Core understands how to map this JSON into C#, including the Blobs array. All we need are some plain classes that follow the structure of the JSON.

public class AppConfig
{
    public Storage Storage { get; set; }            
}

public class Storage
{
    public int Timeout { get; set; }
    public BlobSettings[] Blobs { get; set; }
}

public class BlobSettings
{
    public string Name { get; set; }
    public string Url { get; set; }
}

Then, we set up our IConfiguration for the application.

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

And once you’ve brought in the Microsoft.Extensions.Options package, you can configure the IOptions service and make AppConfig available.

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddOptions();
    services.Configure<AppConfig>(config);
}

With everything in place, you can inject IOptions<AppConfig> anywhere in the application, and the object will have the settings from the configuration file.




ASP.NET Core Dependency Injection Understands Unbound Generics

Fri, 21 Apr 2017 09:11:00 Z

Continuing with topics based on ASP.NET Core code reviews.

Here is a bit of code I came across in an application’s Startup class.

public void ConfigureServices(IServiceCollection services)
{
    // entity types here are illustrative placeholders
    services.AddScoped<IStore<User>, SqlStore<User>>();
    services.AddScoped<IStore<Order>, SqlStore<Order>>();
    services.AddScoped<IStore<Invoice>, SqlStore<Invoice>>();
    // ...
}

The actual code ran for many more lines, with the general idea that the application needs an IStore<T> implementation for a number of distinguished types in the system.

Because ASP.NET Core understands unbound generics, there is only one line of code required.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(typeof(IStore<>), typeof(SqlStore<>));
}

Unbound generics are not often useful in day-to-day business programming, but if you are curious how the process works, I did show how to use unbound generics at a low level in my C# Generics course.
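As a rough sketch of how a container can satisfy a closed request from an unbound registration (this is not the actual ASP.NET Core implementation), it closes the open implementation type over the requested type argument with reflection:

```csharp
using System;

public interface IStore<T>
{
    string Describe();
}

public class SqlStore<T> : IStore<T>
{
    public string Describe() => "SqlStore of " + typeof(T).Name;
}

public static class TinyContainer
{
    // Given an open implementation (SqlStore<>) and a closed service request
    // (e.g. IStore<string>), close the implementation over the same type
    // argument and instantiate it.
    public static object Resolve(Type openImplementation, Type closedService)
    {
        Type argument = closedService.GetGenericArguments()[0];
        Type closedImplementation = openImplementation.MakeGenericType(argument);
        return Activator.CreateInstance(closedImplementation);
    }
}

public class Program
{
    public static void Main()
    {
        var store = (IStore<string>)TinyContainer.Resolve(typeof(SqlStore<>), typeof(IStore<string>));
        Console.WriteLine(store.Describe()); // prints "SqlStore of String"
    }
}
```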

One downside to this approach is the fact that you might experience a runtime error (instead of a compile error) if a component requests an implementation of IStore<T> that isn’t possible to construct. For example, if the concrete implementation of IStore<T> uses a generic constraint of class, then the following would happen:

Assert.Throws<ArgumentException>(() =>
{
    // int violates the class constraint on the implementation
    services.GetRequiredService<IStore<int>>();
});

However, this problem should be avoidable.




ASP.NET Core Middleware Components are Singletons

Wed, 19 Apr 2017 09:12:00 Z

This is the first post in a series of posts based on code reviews of systems where ASP.NET Core is involved.

I recently came across code like the following:

public class FaultyMiddleware
{
    public FaultyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // saving the context so we don't need to pass around as a parameter
        this._context = context;

        DoSomeWork();

        await _next(context);            
    }

    private void DoSomeWork()
    {
        // code that calls other private methods
    }

    // ...

    HttpContext _context;
    RequestDelegate _next;
}

The problem here is a misunderstanding of how middleware components work in the ASP.NET Core pipeline. ASP.NET Core uses a single instance of a middleware component to process multiple requests, so it is best to think of the component as a singleton. Saving state into an instance field is going to create problems when there are concurrent requests working through the pipeline.

If there is so much work to do inside a component that you need multiple private methods, a possible solution is to delegate the work to another class and instantiate the class once per request. Something like the following:

public class RequestProcessor
{
    private readonly HttpContext _context;

    public RequestProcessor(HttpContext context)
    {
        _context = context;
    }

    public void DoSomeWork()
    {
        // ... 
    }
}

Now the middleware component has the single responsibility of following the implicit middleware contract so it fits into the ASP.NET Core processing pipeline. Meanwhile, the RequestProcessor, once given a more suitable name, is a class the system can use anytime there is work to do with an HttpContext.
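To see the hazard in isolation, here is a self-contained sketch where FakeContext stands in for HttpContext and a gate task stands in for the rest of the pipeline (no ASP.NET types, all names illustrative). Two overlapping “requests” flow through one middleware instance, and the first request ends up reading the second request’s context:

```csharp
using System;
using System.Threading.Tasks;

// Stand-in for HttpContext; name and shape are illustrative only.
public class FakeContext
{
    public string User { get; }
    public FakeContext(string user) { User = user; }
}

// Mirrors the faulty middleware: one instance, per-request state in a field.
public class FaultyMiddleware
{
    private FakeContext _context;       // shared by every concurrent "request"
    private readonly Func<Task> _next;  // stands in for the rest of the pipeline

    public FaultyMiddleware(Func<Task> next) { _next = next; }

    public async Task<string> Invoke(FakeContext context)
    {
        _context = context;    // request B can overwrite this while A is awaiting
        await _next();
        return _context.User;  // may now belong to a different request
    }
}

public class Program
{
    public static async Task Main()
    {
        // A gate lets us interleave the two requests deterministically.
        var gate = new TaskCompletionSource<bool>();
        var middleware = new FaultyMiddleware(() => gate.Task);

        var requestA = middleware.Invoke(new FakeContext("alice")); // A stores its context, suspends
        var requestB = middleware.Invoke(new FakeContext("bob"));   // B overwrites the field

        gate.SetResult(true); // both requests resume

        Console.WriteLine(await requestA); // prints "bob" – request A sees B's context
        Console.WriteLine(await requestB); // prints "bob"
    }
}
```

The per-request RequestProcessor in the post avoids this entirely, because each invocation gets its own object holding the HttpContext.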




Developing with .NET on Microsoft Azure

Tue, 04 Apr 2017 09:12:00 Z

My latest Pluralsight course is alive and covers Azure from a .NET developer’s perspective. Some of what you’ll learn includes:

- How to create an app service to host your web application and API backend

- How to monitor, manage, debug, and scale an app service

- How to configure and use an Azure SQL database

- How to configure and use a DocumentDB collection

- How to work with storage accounts and blob storage

- How to take advantage of serverless computing with Azure Functions

- How to set up a continuous delivery pipeline into Azure from Visual Studio Team Services

- And much more …



Thanks for watching!




The Joy of Azure CLI 2.0

Mon, 03 Apr 2017 09:12:00 Z

The title here is based on a book I remember in my mom’s kitchen: The Joy of Cooking. The cover of her book was worn, while the inside was dog-eared and bookmarked with notes. I started reading my mom’s copy when I started working in a restaurant for spending money. In the days before TV channels dedicated to cooking, I learned quite a bit about cooking from this book and on-the-job training. The book is more than a collection of recipes. There is prose and personality inside. I have a copy in my kitchen now.

Azure CLI 2

The new Azure CLI 2 is my favorite tool for Azure operations from the command line. The installation is simple and does have a dependency on Python. I look at the Python dependency as a good thing, since Python allows the CLI to work on macOS, Windows, and Linux. You do not need to know anything about Python to use the CLI, although Python is a fun language to learn and use. I’ve done one course with Python and one day hope to do more.

The operations you can perform with the CLI are easy to find, since the tool organizes operations into hierarchical groups and sub-groups. After installation, just type "az" to see the top-level commands. You can use the ubiquitous -h switch to find additional sub-groups; for example, "az appservice web -h" lists the commands available for the "az appservice web" group.

For many scenarios, you can use the CLI instead of the Azure portal. Let’s say you’ve just used a scaffolding tool to create an application with Node or .NET Core, and now you want to create a web site in Azure with the local code. First, we place the code into a local git repository.

git init
git add .
git commit -a -m "first commit"

Now you use a combination of git and az commands to create an app service and push the application to Azure.
az group create --location eastus --name sample-app
az appservice plan create --name sample-app-plan --resource-group sample-app --sku FREE
az appservice web create --name sample-app --resource-group sample-app --plan sample-app-plan
az appservice web source-control config-local-git --name sample-app --resource-group sample-app

git remote add azure "https://[url-result-from-previous-operation]"
git push azure master

We can then have the CLI launch a browser to view the new application.

az appservice web browse --name sample-app --resource-group sample-app

To shorten the above commands, use -n for the name switch and -g for the resource group name. Joyous. [...]