
OdeToCode by K. Scott Allen






Copyright: (c) 2004 to 2017 OdeToCode LLC
 



ASP.NET Configuration Options Will Understand Arrays

Mon, 24 Apr 2017 09:12:00 Z

Continuing on topics from code reviews.

Last year I saw some C# code working very hard to process an application config file like the following:

{
  "Storage": {
    "Timeout":  "25", 
    "Blobs": [
      {
        "Name": "Primary",
        "Url": "foo.com"

      },
      {
        "Name": "Secondary",
        "Url": "bar.com"

      }
    ]
  }
}

Fortunately, the Options framework in ASP.NET Core understands how to map this JSON into C#, including the Blobs array. All we need are some plain classes that follow the structure of the JSON.

public class AppConfig
{
    public Storage Storage { get; set; }            
}

public class Storage
{
    public int Timeout { get; set; }
    public BlobSettings[] Blobs { get; set; }
}

public class BlobSettings
{
    public string Name { get; set; }
    public string Url { get; set; }
}

Then, we set up our IConfiguration for the application.

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

And once you’ve brought in the Microsoft.Extensions.Options package, you can configure the IOptions service and make AppConfig available.

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddOptions();
    services.Configure<AppConfig>(config);
}

With everything in place, you can inject IOptions<AppConfig> anywhere in the application, and the object will have the settings from the configuration file.
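
For example, here is a minimal sketch of a consumer (the controller itself is hypothetical, not from the original application):

public class StorageController : Controller
{
    private readonly AppConfig _config;

    public StorageController(IOptions<AppConfig> options)
    {
        // Value holds the bound AppConfig object
        _config = options.Value;
    }

    [HttpGet]
    public IActionResult Index()
    {
        // _config.Storage.Blobs[0].Url would be "foo.com"
        return new ObjectResult(_config.Storage.Blobs);
    }
}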




ASP.NET Core Dependency Injection Understands Unbound Generics

Fri, 21 Apr 2017 09:11:00 Z

Continuing with topics based on ASP.NET Core code reviews.

Here is a bit of code I came across in an application’s Startup class.

public void ConfigureServices(IServiceCollection services)
{
    // (the entity type names here are illustrative)
    services.AddScoped<IStore<User>, SqlStore<User>>();
    services.AddScoped<IStore<Invoice>, SqlStore<Invoice>>();
    services.AddScoped<IStore<Payment>, SqlStore<Payment>>();
    // ...
}

The actual code ran for many more lines, with the general idea that the application needs an IStore<T> implementation for a number of distinguished objects in the system.

Because ASP.NET Core understands unbound generics, there is only one line of code required.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(typeof(IStore<>), typeof(SqlStore<>));
}
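
For context, here is a minimal sketch of the abstraction being registered (only the type names come from the code above; the member signatures are assumptions):

public interface IStore<T>
{
    T FindById(int id);
    void Save(T item);
}

public class SqlStore<T> : IStore<T> where T : class
{
    public T FindById(int id)
    {
        // query the database for a T
        throw new NotImplementedException();
    }

    public void Save(T item)
    {
        // persist the T
        throw new NotImplementedException();
    }
}

Note the class constraint on SqlStore<T>, which matters below.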

Unbound generics are not useful in day-to-day business programming, but if you are curious how the process works, I show how to use unbound generics at a low level in my C# Generics course.

One downside to this approach is the fact that you might experience a runtime error (instead of a compile error) if a component requests an implementation of IStore<T> that isn’t possible. For example, since the concrete implementation SqlStore<T> uses a generic constraint of class, the following would happen:

Assert.Throws<ArgumentException>(() =>
{
    // int violates the class constraint on SqlStore<T>, so the
    // container cannot construct the closed generic type
    var provider = services.BuildServiceProvider();
    provider.GetRequiredService<IStore<int>>();
});

However, this problem should be avoidable.
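
One way to catch the problem early (my suggestion, not from the original code review) is a test that resolves every closed IStore<T> the application actually uses, so a constraint violation fails in CI rather than at runtime in production. The entity types match the illustrative registrations above:

[Fact]
public void CanResolveAllStores()
{
    var services = new ServiceCollection();
    services.AddScoped(typeof(IStore<>), typeof(SqlStore<>));
    var provider = services.BuildServiceProvider();

    Assert.NotNull(provider.GetRequiredService<IStore<User>>());
    Assert.NotNull(provider.GetRequiredService<IStore<Invoice>>());
    Assert.NotNull(provider.GetRequiredService<IStore<Payment>>());
}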




ASP.NET Core Middleware Components are Singletons

Wed, 19 Apr 2017 09:12:00 Z

This is the first post in a series of posts based on code reviews of systems where ASP.NET Core is involved.

I recently came across code like the following:

public class FaultyMiddleware
{
    public FaultyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // saving the context so we don't need to pass around as a parameter
        this._context = context;

        DoSomeWork();

        await _next(context);            
    }

    private void DoSomeWork()
    {
        // code that calls other private methods
    }

    // ...

    HttpContext _context;
    RequestDelegate _next;
}

The problem here is a misunderstanding of how middleware components work in the ASP.NET Core pipeline. ASP.NET Core uses a single instance of a middleware component to process multiple requests, so it is best to think of the component as a singleton. Saving state into an instance field is going to create problems when there are concurrent requests working through the pipeline.

If there is so much work to do inside a component that you need multiple private methods, a possible solution is to delegate the work to another class and instantiate the class once per request. Something like the following:

public class RequestProcessor
{
    private readonly HttpContext _context;

    public RequestProcessor(HttpContext context)
    {
        _context = context;
    }

    public void DoSomeWork()
    {
        // ... 
    }
}
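
With the work moved into its own class, the middleware shrinks to pipeline plumbing. A minimal sketch (the middleware name here is hypothetical):

public class ReworkedMiddleware
{
    private readonly RequestDelegate _next;

    public ReworkedMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // a new processor per request, so there is no state
        // shared between concurrent requests
        var processor = new RequestProcessor(context);
        processor.DoSomeWork();

        await _next(context);
    }
}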

Now the middleware component has the single responsibility of following the implicit middleware contract so it fits into the ASP.NET Core processing pipeline. Meanwhile, the RequestProcessor, once given a more suitable name, is a class the system can use anytime there is work to do with an HttpContext.




Developing with .NET on Microsoft Azure

Tue, 04 Apr 2017 09:12:00 Z

My latest Pluralsight course is live and covers Azure from a .NET developer’s perspective. Some of what you’ll learn includes:

- How to create an app service to host your web application and API backend

- How to monitor, manage, debug, and scale an app service

- How to configure and use an Azure SQL database

- How to configure and use a DocumentDB collection

- How to work with storage accounts and blob storage

- How to take advantage of serverless computing with Azure Functions

- How to set up a continuous delivery pipeline into Azure from Visual Studio Team Services

- And much more …


Thanks for watching!




The Joy of Azure CLI 2.0

Mon, 03 Apr 2017 09:12:00 Z

The title here is based on a book I remember in my mom’s kitchen: The Joy of Cooking. The cover of her book was worn, while the inside was dog-eared and bookmarked with notes. I started reading my mom’s copy when I started working in a restaurant for spending money. In the days before TV channels dedicated to cooking, I learned quite a bit about cooking from this book and on-the-job training. The book is more than a collection of recipes. There is prose and personality inside. I have a copy in my kitchen now.

Azure CLI 2

The new Azure CLI 2 is my favorite tool for Azure operations from the command line. The installation is simple and does have a dependency on Python. I look at the Python dependency as a good thing, since Python allows the CLI to work on macOS, Windows, and Linux. You do not need to know anything about Python to use the CLI, although Python is a fun language to learn and use. I’ve done one course with Python and one day hope to do more.

The operations you can perform with the CLI are easy to find, since the tool organizes operations into hierarchical groups and sub-groups. After installation, just type “az” to see the top-level commands. You can use the ubiquitous -h switch to discover the sub-groups and commands underneath, for example the commands available for the “az appservice web” group.

For many scenarios, you can use the CLI instead of using the Azure portal. Let’s say you’ve just used a scaffolding tool to create an application with Node or .NET Core, and now you want to create a web site in Azure with the local code. First, we’d place the code into a local git repository.

git init
git add .
git commit -a -m "first commit"

Now you use a combination of git and az commands to create an app service and push the application to Azure.

az group create --location "Eastus" --name sample-app
az appservice plan create --name sample-app-plan --resource-group sample-app --sku FREE
az appservice web create --name sample-app --resource-group sample-app --plan sample-app-plan
az appservice web source-control config-local-git --name sample-app --resource-group sample-app

git remote add azure "https://[url-result-from-previous-operation]"
git push azure master

We can then have the CLI launch a browser to view the new application.

az appservice web browse --name sample-app --resource-group sample-app

To shorten the above commands, use -n for the name switch and -g for the resource group name.

az appservice web browse -n sample-app -g sample-app

Joyous.



Notes for Getting Started with Power BI Embedded

Thu, 16 Feb 2017 09:11:00 Z

Doing some work where I thought Power BI Embedded would make for a good solution. The visuals are appealing and modern, and for customization there is the ability to use D3 behind the scenes. I was also encouraged to see support in Azure for hosting Power BI reports. There were a few hiccups along the way, so here are some notes for anyone trying to use Power BI Embedded soon.

Getting Started

The "Get started with Microsoft Power BI Embedded" document is a natural place to go first. A good document, but there are a few key points that are left unsaid, or at least understated.

The first few steps of the document outline how to create a Power BI Embedded Workspace Collection. The screen shot at the end of the section shows the collection in the Azure portal with a workspace included in the collection. However, if you follow the same steps you won’t have a workspace in your collection, you’ll have just an empty collection. This behavior is normal, but combined with some of the other points I’ll make, it did add to the confusion.

Not mentioned in the portal or the documentation is the fact that the workspace collection name you provide needs to be unique in the Azure world of collection names. Generally, in the Azure portal, the configuration blades will let you know when a name must be unique (by showing a domain the name will prefix). Power BI Embedded works a bit differently, and when it comes time to invoke APIs with a collection name it will make more sense to think of the name as unique. I’ll caveat this paragraph by saying I am deducing the uniqueness of a collection name based on behavior and API documentation.

Creating a Workspace

After creating a collection, you’ll need to create a workspace to host reporting artifacts. There is currently no UI in the portal or the Power BI desktop tool to create a workspace in Azure, which feels odd. Everything I’ve worked with in the Azure portal has at least a minimal UI for common configuration of a resource, and creating a workspace is a common task. Currently the only way to create a workspace is to use the HTTP APIs provided by Power BI. For automated software deployments, the API is a must-have, but for experimentation it would also be nice to have a more approachable workspace setup to get the feel of how everything works.

The APIs

There are two sets of APIs to know about: the Power BI REST operations, and the Power BI resource provider APIs. You can think of the resource provider APIs as the usual Azure resource provider APIs that would be attached to any type of resource in Azure – virtual machines, app services, storage, etc. You can use these APIs to create a new workspace collection instead of using the portal UI. You can also achieve common tasks like listing or regenerating the access keys. These APIs require an Azure access token from Azure AD.

The Power BI REST operations allow you to work inside a workspace collection to create workspaces, import reports, and define data sources. There is some orthogonality missing from the API, it appears, as you can use an HTTP POST to create workspaces and reports and an HTTP GET to retrieve resource definitions, but in many cases there are no HTTP DELETE operations to remove an item. These Power BI operations have a different base URL than the resource manager operations (they use https://api.powerbi.com), and they do not require a token from Azure AD. All you need for authorization is one of the access keys defined by the workspace collection.

The mental model to have here is the same model you would have for Azure Storage or DocumentDB, as two examples. There are the APIs to manage the resource, which require an AD token (like creating a storage account), and then there are the APIs to act as a client of the resource, which require only an access key (like uploading a blob into storage).

The Sample Program

To see how you can work with these APIs, Microsof[...]
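
To make the client-of-the-resource model concrete, here is a minimal sketch of creating a workspace with a collection access key. The endpoint path and the AppKey authorization scheme are assumptions based on the Power BI Embedded documentation of this era, so treat this as a starting point rather than a definitive client:

public class WorkspaceClient
{
    // requires System.Net.Http and System.Threading.Tasks
    public static async Task CreateWorkspaceAsync(string collectionName, string accessKey)
    {
        using (var client = new HttpClient())
        {
            // the collection access key, not an Azure AD token, authorizes the call
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "Authorization", $"AppKey {accessKey}");

            var url = $"https://api.powerbi.com/v1.0/collections/{collectionName}/workspaces";
            var response = await client.PostAsync(url, content: null);
            response.EnsureSuccessStatusCode();
        }
    }
}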



ASP.NET Core and the Enterprise Part 4: Data Access

Tue, 14 Feb 2017 09:11:00 Z

When creating .NET Core and ASP.NET Core applications, programmers have many options for data storage and retrieval available. You’ll need to choose the option that fits your application’s needs and your team’s development style. In this article, I’ll give you a few thoughts and caveats on data access in the new world of .NET Core.

Data Options

Remember that an ASP.NET Core application can compile against the .NET Core framework or the full .NET framework. If you choose to use the full .NET framework, you’ll have all the same data access options that you had in the past. These options include low-level programming interfaces like ADO.NET and high-level ORMs like the Entity Framework.

If you want to target .NET Core, you have fewer options available today. However, because .NET Core is still new, we will see more options appear over time. Bertrand Le Roy recently posted a comprehensive list of Microsoft and third-party .NET Core packages for data access. The list shows NoSQL support for Azure DocumentDB, RavenDB, MongoDB and Redis. For relational databases, you can connect to Microsoft SQL Server, PostgreSQL, MySQL and SQLite. You can choose NPoco, Dapper, and the new Entity Framework Core as ORM frameworks for .NET Core.

Entity Framework Core

Because the Entity Framework is a popular data access tool for .NET development, we will take a closer look at the new version, EF Core. On the surface, EF Core is like its predecessors, featuring an API with DbContext and DbSet classes. You can query a data source using LINQ operators like Where, OrderBy, and Select. Under the covers, however, EF Core is significantly different from previous versions of EF. The EF team rewrote the framework and discarded much of the architecture that had been around since version 1 of the project. If you’ve used EF in the past, you might remember there was an ObjectContext hiding behind the DbContext class plus an unnecessarily complex entity data model. The new EF Core is considerably lighter, which brings us to the discussion of pros and cons.

What’s Missing?

In the EF Core rewrite, you won’t find an entity data model or EDMX design tool. The controversial lazy loading feature is not supported for now but is listed on the roadmap. The ability to map stored procedures to entity operations is not in EF Core, but the framework still provides an API for sending raw SQL commands to the database. This feature currently allows you to map only results from raw SQL into known entity types. Personally, I’ve found this limitation makes consuming views from SQL Server too restrictive.

With EF Core, you can take a “code first” approach to database development by generating database migrations from class definitions. However, the only tooling to support a “database first” approach to development is a command-line scaffolding tool that can generate C# classes from database tables. There are no tools in Visual Studio to reverse engineer a database or update entity definitions to match changes in a database schema. Model visualization is another feature on the future roadmap.

Like EF 6, EF Core supports popular relational databases, including SQL Server, MySQL, SQLite and PostgreSQL, but Oracle is currently not supported in EF Core.

What’s Better?

EF Core is a cross-platform framework you can use on Linux, macOS and Windows. The new framework is considerably lighter than frameworks of the past and is also easier to extend and customize thanks to the application of the dependency inversion principle.

EF Core plans to extend the list of supported database providers beyond relational databases. Redis and Azure Table Storage providers are on the roadmap for the future. One exciting new feature is the in-memory database provider. The in-memory provider makes unit testing easier and is not intended as a provider you would ever use in production. In a unit test, you ca[...]
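
A minimal sketch of that kind of test follows. The context and entity types here are hypothetical, it assumes the Microsoft.EntityFrameworkCore.InMemory package, and the UseInMemoryDatabase signature varies between EF Core versions:

using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Blob
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Blob> Blobs { get; set; }
}

public class BlobQueryTests
{
    [Fact]
    public void CanSaveAndQueryBlobs()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName: "blob_test")
            .Options;

        // one context instance writes...
        using (var context = new AppDbContext(options))
        {
            context.Blobs.Add(new Blob { Name = "Primary", Url = "foo.com" });
            context.SaveChanges();
        }

        // ...and a fresh instance reads against the same named store
        using (var context = new AppDbContext(options))
        {
            Assert.Equal(1, context.Blobs.Count());
        }
    }
}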



A Train in the Night

Sun, 12 Feb 2017 23:59:00 Z

I’ve lived all my life near a town with the nickname “Hub City”. I know my town is not the only town in the 50 states with such a nickname, but we do have two major interstates, two mainline rail tracks, and one historic canal in the area. This is not Chicago, but we did have Ludacris fly through the regional airport last year.

The railroad tracks here have always piqued my interest. Trains too, but even more the mystery and history of the line itself. As a kid, I was told not to hang around railroad lines. But, being a kid, with a bike and a curiosity, I did anyway.

Where does it come from? Where does it go?

Those types of questions are easier to answer these days with all the satellite imagery and sites like OpenRailwayMap. I discovered, for example, the line closest to me now was built in the late 1800s when railroads were expanding. Back then, the more lines you built, the better chance you had of taking market share. When railroad companies consolidated in the 1970s, they abandoned most of this track. Still, there is a piece being used, albeit infrequently.

When the line is used on a cold winter night, the distant train whistle makes me hold my breath and listen. Two long, one short, one long. A B major 7th, I think. The 7th is there to tingle the hairs on your neck. It’s hard to believe how machinery and compressed air can provoke an emotional response. After all, there is the occasional horned owl in the area whose hollow cooing is always distant, lonely, and organic. Yet, the mechanical whistle is somehow more urgent, searching, and all-pervading. A proclamation.

I know where I’ve been. I know where I’m going.

Code Whistles

It’s hard to believe how code and technology can provoke an emotional response. The shape of the code, the whitespace between. The spark that lights a fire when you uncover a new secret. Now that you’ve learned it won’t go away, but you had to earn it. Idioms and idiosyncrasies pour into the brain like milk into cereal. Changing something, and it’s good.

The whistle. How quickly things change. Or, perhaps the process was slower than I thought. Your idioms impossible, your idiosyncrasies an irritation. If only we could reverse the clock to reach the point before these neurons put together that particular chemical reaction, but there are high winds tonight. I’ve lost power. There was the whistle.

I know where I’ve been, but I don’t know where I’m going.




On .NET Rocks

Fri, 10 Feb 2017 09:11:00 Z

In episode 1405 I sit down with Carl and Richard at NDC London to talk about ASP.NET Core. I hope you find the conversation valuable.





Anti-Forgery Tokens and ASP.NET Core APIs

Mon, 06 Feb 2017 09:11:00 Z

In modern web programming, you can never have too many tokens. There are access tokens, refresh tokens, anti-XSRF tokens, and more. It’s the last type of token that I’ve gotten a lot of questions about recently. Specifically, does one need to protect against cross-site request forgeries when building an API-based app? And if so, how does one create a token in an ASP.NET Core application?

Do I Need an XSRF Token?

In any application where the browser can implicitly authenticate the user, you’ll need to protect against cross-site request forgeries. Implicit authentication happens when the browser sends authentication information automatically, which is the case when using cookies for authentication, but also for applications using Windows authentication.

Generally, APIs don’t use cookies for authentication. Instead, APIs typically use bearer tokens, and custom JavaScript code running in the browser must send the token along by explicitly adding the token to a request. However, there are also APIs living inside the same server process as a web application and using the same cookie as the application for authentication. This is the type of scenario where you must use anti-forgery tokens to prevent an XSRF attack.

XSRF Tokens and ASP.NET Core APIs

There is no additional work required to validate an anti-forgery token in an API request, because the [ValidateAntiForgeryToken] attribute in ASP.NET Core will look for tokens in a posted form input, or in an HTTP header. But, there is some additional work required to give the client a token. This is where the IAntiforgery service comes in.

[Route("api/[controller]")]
public class XsrfTokenController : Controller
{
    private readonly IAntiforgery _antiforgery;

    public XsrfTokenController(IAntiforgery antiforgery)
    {
        _antiforgery = antiforgery;
    }

    [HttpGet]
    public IActionResult Get()
    {
        var tokens = _antiforgery.GetAndStoreTokens(HttpContext);

        return new ObjectResult(new
        {
            token = tokens.RequestToken,
            tokenName = tokens.HeaderName
        });
    }
}

In the above code, we can inject the IAntiforgery service for an application and provide an endpoint a client can call to fetch the token and token name it needs to use in a request. The GetAndStoreTokens method will not only return a data structure with token information, it will also issue the anti-forgery cookie the framework will use in one half of the validation algorithm. We can use a new ObjectResult to serialize the token information back to the client.

Note: if you want to change the header name, you can change the AntiforgeryOptions during startup of the application.

With the endpoint in place, you’ll need to fetch and store the token from JavaScript on the client. Here is a bit of TypeScript code using Axios to fetch the token, then configure Axios to send the token with every HTTP request.
import axios, { AxiosResponse } from "axios";
import { IGolfer, IMatchSet } from "models";
import { errorHandler } from "./error";

const XSRF_TOKEN_KEY = "xsrfToken";
const XSRF_TOKEN_NAME_KEY = "xsrfTokenName";

function reportError(message: string, response: AxiosResponse) {
    const formattedMessage = `${message} : Status ${response.status} ${response.statusText}`;
    errorHandler.reportMessage(formattedMessage);
}

function setToken({ token, tokenName }: { token: string, tokenName: string }) {
    window.sessionStorage.setItem(XSRF_TOKEN_KEY, token);
    window.sessionStorage.setItem(XSRF_TOKEN_NAME_KEY, tokenName);
    axios.defaults.headers.common[tokenName] = token;
}

function initializeXsrfToken() {
    let token = window.sessionStorage.getItem(XSRF_TOKEN_KEY);
    let tokenName = window.sessionStorage.getItem(XSRF_TOKEN_NAME_KEY);

    if (!token || !tokenName) {
        axios.get("/api/xsrf[...]
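
To illustrate the earlier note about changing the header name, here is a minimal sketch of configuring AntiforgeryOptions at startup (the header value shown is an arbitrary example, not a framework default):

public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(options =>
    {
        // clients would then send the request token in this header
        options.HeaderName = "X-XSRF-TOKEN";
    });

    services.AddMvc();
}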