
OdeToCode by K. Scott Allen


Copyright: (c) 2004 to 2017 OdeToCode LLC

Notes for Getting Started with Power BI Embedded

Thu, 16 Feb 2017 09:11:00 Z

I’ve been doing some work where I thought Power BI Embedded would make for a good solution. The visuals are appealing and modern, and for customization there is the ability to use D3 behind the scenes. I was also encouraged to see support in Azure for hosting Power BI reports. There were a few hiccups along the way, so here are some notes for anyone trying to use Power BI Embedded soon.

Getting Started

The Get started with Microsoft Power BI Embedded document is a natural place to go first. It is a good document, but there are a few key points that are left unsaid, or at least understated. The first few steps of the document outline how to create a Power BI Embedded Workspace Collection. The screen shot at the end of the section shows the collection in the Azure portal with a workspace included in the collection. However, if you follow the same steps you won’t have a workspace in your collection; you’ll have an empty collection. This behavior is normal, but combined with some of the other points below, it added to the confusion.

Not mentioned in the portal or the documentation is the fact that the workspace collection name you provide needs to be unique across all collection names in Azure. Generally, the configuration blades in the Azure portal will let you know when a name must be unique (by showing a domain the name will prefix). Power BI Embedded works a bit differently, and when it comes time to invoke APIs with a collection name it will make more sense to think of the name as unique. I’ll caveat this paragraph by saying I am deducing the uniqueness of a collection name based on behavior and API documentation.

Creating a Workspace

After creating a collection, you’ll need to create a workspace to host reporting artifacts. There is currently no UI in the portal or in the Power BI desktop tool to create a workspace in Azure, which feels odd.
Everything I’ve worked with in the Azure portal has at least a minimal UI for common configuration of a resource, and creating a workspace is a common task. Currently the only way to create a workspace is to use the HTTP APIs provided by Power BI. For automated software deployments the API is a must-have, but for experimentation it would also be nice to have a more approachable way to set up a workspace and get a feel for how everything works.

The APIs

There are two sets of APIs to know about: the Power BI REST operations, and the Power BI resource provider APIs. You can think of the resource provider APIs as the usual Azure resource provider APIs that attach to any type of resource in Azure – virtual machines, app services, storage, and so on. You can use these APIs to create a new workspace collection instead of using the portal UI. You can also perform common tasks like listing or regenerating the access keys. These APIs require an access token from Azure AD.

The Power BI REST operations allow you to work inside a workspace collection to create workspaces, import reports, and define data sources. There is some orthogonality missing from the API, it appears, as you can use an HTTP POST to create workspaces and reports and an HTTP GET to retrieve resource definitions, but in many cases there are no HTTP DELETE operations to remove an item. These Power BI operations use a different base URL than the resource manager operations, and they do not require a token from Azure AD. All you need for authorization is one of the access keys defined by the workspace collection.

The mental model to have here is the same model you would have for Azure Storage or DocumentDB, as two examples. There are APIs to manage the resource, which require an AD token (like creating a storage account), and there are APIs to act as a client of the resource, which require only an access key (like uploading a blob into storage).
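To make the access-key model concrete, here is a hypothetical sketch of building a "create workspace" request against the Power BI REST operations. The base URL, route, and AppKey authorization scheme shown here are assumptions deduced from the documentation described above, not verified API details; check the current Power BI Embedded reference before using them.

```typescript
// Hypothetical sketch only: URL, route, and "AppKey" scheme are assumptions.
const POWERBI_BASE = "https://api.powerbi.com/v1.0";

function createWorkspaceRequest(collectionName: string, accessKey: string) {
  return {
    method: "POST",
    url: `${POWERBI_BASE}/collections/${encodeURIComponent(collectionName)}/workspaces`,
    headers: {
      // One of the collection's access keys, not an Azure AD token.
      "Authorization": `AppKey ${accessKey}`,
      "Content-Type": "application/json",
    },
  };
}

const req = createWorkspaceRequest("my-collection", "key-from-portal");
console.log(req.url);
// https://api.powerbi.com/v1.0/collections/my-collection/workspaces
```

The point of the sketch is the split described above: client operations authorize with a collection access key, while resource-management operations go through Azure Resource Manager with an AD token.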
The Sample Program

To see how you can work with these APIs, Microsoft provides a sample console mode application on GitHub. After I cloned the repo I had to fix NuGet package references and assembly refere[...]

ASP.NET Core and the Enterprise Part 4: Data Access

Tue, 14 Feb 2017 09:11:00 Z

When creating .NET Core and ASP.NET Core applications, programmers have many options for data storage and retrieval. You’ll need to choose the option that fits your application’s needs and your team’s development style. In this article, I’ll give you a few thoughts and caveats on data access in the new world of .NET Core.

Data Options

Remember that an ASP.NET Core application can compile against the .NET Core framework or the full .NET framework. If you choose to use the full .NET framework, you’ll have all the same data access options that you had in the past. These options include low-level programming interfaces like ADO.NET and high-level ORMs like the Entity Framework. If you want to target .NET Core, you have fewer options available today. However, because .NET Core is still new, we will see more options appear over time. Bertrand Le Roy recently posted a comprehensive list of Microsoft and third-party .NET Core packages for data access. The list shows NoSQL support for Azure DocumentDB, RavenDB, MongoDB and Redis. For relational databases, you can connect to Microsoft SQL Server, PostgreSQL, MySQL and SQLite. You can choose NPoco, Dapper, or the new Entity Framework Core as an ORM framework for .NET Core.

Entity Framework Core

Because the Entity Framework is a popular data access tool for .NET development, we will take a closer look at the new version, EF Core. On the surface, EF Core is like its predecessors, featuring an API with DbContext and DbSet classes. You can query a data source using LINQ operators like Where, OrderBy and Select. Under the covers, however, EF Core is significantly different from previous versions of EF. The EF team rewrote the framework and discarded much of the architecture that had been around since version 1 of the project. If you’ve used EF in the past, you might remember there was an ObjectContext hiding behind the DbContext class, plus an unnecessarily complex entity data model.
The new EF Core is considerably lighter, which brings us to the discussion of pros and cons.

What’s Missing?

In the EF Core rewrite, you won’t find an entity data model or an EDMX design tool. The controversial lazy loading feature is not supported for now, but is listed on the roadmap. The ability to map stored procedures to entity operations is not in EF Core, but the framework still provides an API for sending raw SQL commands to the database. This feature currently allows you to map only results from raw SQL into known entity types. Personally, I’ve found the ability to consume views from SQL Server too restrictive. With EF Core, you can take a “code first” approach to database development by generating database migrations from class definitions. However, the only tooling to support a “database first” approach is a command line scaffolding tool that can generate C# classes from database tables. There are no tools in Visual Studio to reverse engineer a database or update entity definitions to match changes in a database schema. Model visualization is another feature on the future roadmap. Like EF 6, EF Core supports popular relational databases, including SQL Server, MySQL, SQLite and PostgreSQL, but Oracle is currently not supported.

What’s Better?

EF Core is a cross-platform framework you can use on Linux, macOS and Windows. The new framework is considerably lighter than frameworks of the past, and is also easier to extend and customize thanks to the application of the dependency inversion principle. EF Core plans to extend the list of supported database providers beyond relational databases; Redis and Azure Table Storage providers are on the roadmap. One exciting new feature is the in-memory database provider. The in-memory provider makes unit testing easier and is not intended as a provider you would ever use in production.
In a unit test, you can configure EF Core to use the in-memory provider instead of writing mock objects or fake objects around a DbContext class, which can le[...]
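The testing pattern the in-memory provider enables can be sketched in a few lines. This is illustrative TypeScript, not EF Core’s API; the store interface and the golfer entity are invented names used only to show the shape of the idea: production code depends on an abstraction, and tests swap in an in-memory double instead of writing mocks.

```typescript
// Conceptual sketch of the pattern (not EF Core's actual API).
interface GolferStore {
  add(name: string): void;
  all(): string[];
}

// Production would wrap a real database; this in-memory double keeps
// unit tests fast and free of hand-written mock objects.
class InMemoryGolferStore implements GolferStore {
  private rows: string[] = [];
  add(name: string) { this.rows.push(name); }
  all() { return [...this.rows]; }
}

// The code under test only sees the abstraction.
function registerGolfer(store: GolferStore, name: string) {
  if (!name) throw new Error("name required");
  store.add(name);
}

const store = new InMemoryGolferStore();
registerGolfer(store, "Scott");
console.log(store.all()); // [ 'Scott' ]
```

With EF Core, the in-memory provider plays the role of InMemoryGolferStore: the DbContext stays the same and only the configured provider changes between production and test.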

A Train in the Night

Sun, 12 Feb 2017 23:59:00 Z

(image) I’ve lived all my life near a town with the nickname “Hub City”. I know my town is not the only town in the 50 states with such a nickname, but we do have two major interstates, two mainline rail tracks, and one historic canal in the area. This is not Chicago, but we did have Ludacris fly through the regional airport last year.

The railroad tracks here have always piqued my interest. Trains too, but even more the mystery and history of the line itself. As a kid, I was told not to hang around railroad lines. But, being a kid, with a bike and a curiosity, I did anyway.

Where does it come from? Where does it go?

Those types of questions are easier to answer these days with all the satellite imagery and sites like OpenRailwayMap. I discovered, for example, the line closest to me now was built in the late 1800s when railroads were expanding. Back then, the more lines you built, the better chance you had of taking market share. When railroad companies consolidated in the 1970s, they abandoned most of this track. Still, there is a piece being used, albeit infrequently.

When the line is used on a cold winter night, the distant train whistle makes me hold my breath and listen. Two long, one short, one long. A B major 7th, I think. The 7th is there to tingle the hairs on your neck. It’s hard to believe how machinery and compressed air can provoke an emotional response. After all, there is the occasional horned owl in the area whose hollow cooing is always distant, lonely, and organic. Yet, the mechanical whistle is somehow more urgent, searching, and all-pervading. A proclamation.

I know where I’ve been. I know where I’m going.

Code Whistles

It’s hard to believe how code and technology can provoke an emotional response. The shape of the code, the whitespace between. The spark that lights a fire when you uncover a new secret. Now that you’ve learned it won’t go away, but you had to earn it. Idioms and idiosyncrasies pour into the brain like milk into cereal. Changing something, and it’s good.

The whistle. How quickly things change. Or, perhaps the process was slower than I thought. Your idioms impossible, your idiosyncrasies an irritation. If only we could reverse the clock to reach the point before these neurons put together that particular chemical reaction, but there are high winds tonight. I’ve lost power. There was the whistle.

I know where I’ve been, but I don’t know where I’m going.

On .NET Rocks

Fri, 10 Feb 2017 09:11:00 Z

In episode 1405 I sit down with Carl and Richard at NDC London to talk about ASP.NET Core. I hope you find the conversation valuable.


Anti-Forgery Tokens and ASP.NET Core APIs

Mon, 06 Feb 2017 09:11:00 Z

In modern web programming, you can never have too many tokens. There are access tokens, refresh tokens, anti-XSRF tokens, and more. It’s the last type of token that I’ve gotten a lot of questions about recently. Specifically, does one need to protect against cross-site request forgeries when building an API based app? And if so, how does one create a token in an ASP.NET Core application?

Do I Need an XSRF Token?

In any application where the browser can implicitly authenticate the user, you’ll need to protect against cross-site request forgeries. Implicit authentication happens when the browser sends authentication information automatically, which is the case when using cookies for authentication, but also for applications using Windows authentication. Generally, APIs don’t use cookies for authentication. Instead, APIs typically use bearer tokens, and custom JavaScript code running in the browser must send the token along by explicitly adding the token to a request. However, there are also APIs living inside the same server process as a web application and using the same cookie as the application for authentication. This is the type of scenario where you must use anti-forgery tokens to prevent XSRF attacks.

XSRF Tokens and ASP.NET Core APIs

There is no additional work required to validate an anti-forgery token in an API request, because the [ValidateAntiForgeryToken] attribute in ASP.NET Core will look for tokens in a posted form input or in an HTTP header. But there is some additional work required to give the client a token. This is where the IAntiforgery service comes in.
[Route("api/[controller]")]
public class XsrfTokenController : Controller
{
    private readonly IAntiforgery _antiforgery;

    public XsrfTokenController(IAntiforgery antiforgery)
    {
        _antiforgery = antiforgery;
    }

    [HttpGet]
    public IActionResult Get()
    {
        var tokens = _antiforgery.GetAndStoreTokens(HttpContext);
        return new ObjectResult(new
        {
            token = tokens.RequestToken,
            tokenName = tokens.HeaderName
        });
    }
}

In the above code, we inject the IAntiforgery service for the application and provide an endpoint a client can call to fetch the token and token name it needs to use in a request. The GetAndStoreTokens method will not only return a data structure with token information, it will also issue the anti-forgery cookie the framework uses in one half of the validation algorithm. We use an ObjectResult to serialize the token information back to the client. Note: if you want to change the header name, you can change the anti-forgery options during startup of the application.

With the endpoint in place, you’ll need to fetch and store the token from JavaScript on the client. Here is a bit of TypeScript code using Axios to fetch the token, then configure Axios to send the token with every HTTP request.
import axios, { AxiosResponse } from "axios";
import { IGolfer, IMatchSet } from "models";
import { errorHandler } from "./error";

const XSRF_TOKEN_KEY = "xsrfToken";
const XSRF_TOKEN_NAME_KEY = "xsrfTokenName";

function reportError(message: string, response: AxiosResponse) {
    const formattedMessage = `${message} : Status ${response.status} ${response.statusText}`;
    errorHandler.reportMessage(formattedMessage);
}

function setToken({ token, tokenName }: { token: string, tokenName: string }) {
    window.sessionStorage.setItem(XSRF_TOKEN_KEY, token);
    window.sessionStorage.setItem(XSRF_TOKEN_NAME_KEY, tokenName);
    axios.defaults.headers.common[tokenName] = token;
}

function initializeXsrfToken() {
    let token = window.sessionStorage.getItem(XSRF_TOKEN_KEY);
    let tokenName = window.sessionStorage.getItem(XSRF_TOKEN_NAME_KEY);
    if (!token || !tokenName) {
        axios.get("/api/xsrfToken")
             .then(r => setToken(r.data))
             .catch(r => reportError("Could not fetch XSRFTOKEN", r));
    } else [...]

Building Vendor and Feature Bundles with webpack

Thu, 01 Dec 2016 09:12:00 Z

The joke I’ve heard goes like this: I went to an all night JavaScript hackathon and by morning we finally had the build process configured! Like most jokes, there is an element of truth to the matter.

I’ve been working on an application that is mostly server rendered and requires minimal amounts of JavaScript. However, there are “pockets” in the application that require a more sophisticated user experience, and thus a heavy dose of JavaScript. These pockets all map to a specific application feature, like “the accounting dashboard” or “the user profile management page”. These facts led me to the following requirements:

1. All third party code should build into a single file.
2. Each application feature should build into a distinct file.

Requirement #1 requires the “vendor bundle”. This bundle contains all the frameworks and libraries each application feature depends on. By building all this code into a single bundle, the client can effectively cache the bundle, and we only need to rebuild the bundle when a framework updates.

Requirement #2 requires multiple “feature bundles”. Feature bundles are smaller than the vendor bundle, so feature bundles can re-build each time a file inside changes. In my project, an ASP.NET Core application using feature folders, the scripts for features are scattered inside the feature folders. I want to build feature bundles into an output folder and retain the same feature folder structure (example below).

I tinkered with various JavaScript bundlers and task runners until I settled on webpack. With webpack I found a solution that would support the above requirements and provide a decently fast development experience.

The Vendor Bundle

Here is a webpack configuration file for building the vendor bundle. In this case we will build a vendor bundle that includes React and ReactDOM, but webpack will examine any JS module name you add to the vendor array of the configuration file.
webpack will place the named module and all of its dependencies into the output bundle named vendor. For example, Angular 2 applications would include “@angular/common” in the list. Since this is an ASP.NET Core application, I’m building the bundle into a subfolder of the wwwroot folder.

const webpack = require("webpack");
const path = require("path");

const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    resolve: {
        extensions: ["", ""]
    },
    entry: {
        vendor: [
            "react",
            "react-dom"
            // ... and so on ...
        ]
    },
    output: {
        path: assets,
        filename: "[name]",
        library: "[name]_dll"
    },
    plugins: [
        new webpack.DllPlugin({
            path: path.join(assets, "[name]-manifest.json"),
            name: "[name]_dll"
        }),
        new webpack.optimize.UglifyJsPlugin({
            compress: { warnings: false }
        })
    ]
};

webpack offers a number of different plugins to deal with common code, like the CommonsChunk plugin. After some experimentation, I’ve come to prefer the DllPlugin for this job. For Windows developers, the DllPlugin name is confusing, but the idea is to share common code using “dynamically linked libraries”, so the name borrows from Windows.

DllPlugin will keep track of all the JS modules webpack includes in a bundle and will write these module names into a manifest file. In this configuration, the manifest name is vendor-manifest.json. When we build the individual feature bundles, we can use the manifest file to know which modules do not need to appear in those feature bundles. Important note: make sure the output.library property and the DllPlugin name property match. It is this match that allows a library to dynamically “link” at runtime. I typically place this vendor configuration into a file named webpack.vendor.config. A [...]
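The counterpart to DllPlugin on the feature-bundle side is webpack’s DllReferencePlugin, which reads the vendor manifest so shared modules are resolved from the vendor bundle instead of being re-bundled. The following is only a hedged sketch of what such a feature configuration could look like; the entry names and folder paths are illustrative assumptions, not the actual project layout.

```javascript
// Hedged sketch (not the post's actual config): a feature-bundle build
// that links against the vendor bundle via DllReferencePlugin.
const webpack = require("webpack");
const path = require("path");

const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    entry: {
        // one entry per application feature; names/paths are illustrative
        "features/home/index": "./Features/Home/index.js"
    },
    output: {
        path: assets,
        filename: "[name].js"
    },
    plugins: [
        new webpack.DllReferencePlugin({
            context: __dirname,
            // modules listed in the vendor manifest resolve against the
            // vendor bundle at runtime instead of being bundled again
            manifest: require(path.join(assets, "vendor-manifest.json"))
        })
    ]
};
```

Because the manifest is read at build time, the vendor bundle must be built before any feature bundle.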

AddFeatureFolders and UseNodeModules On Nuget For ASP.NET Core

Tue, 29 Nov 2016 09:12:00 Z

Here are a few small projects I put together last month.

AddFeatureFolders

I think feature folders are the best way to organize controllers and views in ASP.NET MVC. If you aren’t familiar with feature folders, see Steve Smith’s MSDN article: Feature Slices for ASP.NET Core MVC. To use feature folders with the OdeToCode.AddFeatureFolders NuGet package, all you need to do is install the package and add one line of code to ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders();

    // "Features" is the default feature folder root. To override, pass along
    // a new FeatureFolderOptions object with a different FeatureFolderName
}

The sample application in GitHub demonstrates how you can still use Layout views and view components with feature folders. I’ve also allowed for nested folders, which I’ve found useful in complex, hierarchical applications. Nesting allows the feature structure to follow the user experience when the UI offers several layers of drill-down.

UseNodeModules

With the OdeToCode.UseNodeModules package you can serve files directly from the node_modules folder of a web project. Install the middleware in the Configure method of Startup.

public void Configure(IApplicationBuilder app, IHostingEnvironment environment)
{
    // ...
    app.UseNodeModules(environment);
    // ...
}

I’ve mentioned using node_modules on this blog before, and the topic generated a number of questions. Let me explain when and why I find UseNodeModules useful.

First, understand that npm has traditionally been a tool to install code you want to execute in NodeJS. But, over the last couple of years, more and more front-end dependencies have moved to npm, and npm is doing a better job supporting dependencies for both NodeJS and the browser. Today, for example, you can install React, Bootstrap, Aurelia, jQuery, Angular 2, and many other front-end packages of both the JS and CSS flavor.
Secondly, many people want to know why I don’t use Bower. Bower played a role in accelerating front-end development and is a great tool. But, when I can fetch all the resources I need directly using npm, I don’t see the need to install yet another package manager.

Thirdly, many tools understand and integrate with the node_modules folder structure and can resolve dependencies using package.json files and Node’s CommonJS module standard. These are tools like TypeScript and front-end tools like webpack. In fact, TypeScript has adopted the “no tools required but npm” approach. I no longer need to use tsd or typings when I have npm and @types.

Given the above points, it is easy to stick with npm for all third-party JavaScript modules. It is also easy to install a library like Bootstrap and serve the minified CSS file directly from Bootstrap’s dist folder. Would I recommend every project take this approach? No! But, in certain conditions I’ve found it useful to serve files directly from node_modules. With the environment tag helper in ASP.NET Core you can easily switch between serving from node_modules (say, for debugging) and a CDN in production and QA. Enjoy! [...]
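To make the idea concrete, here is a small, hypothetical sketch of the path mapping such middleware performs; this is not the OdeToCode.UseNodeModules implementation, just an illustration of the concept: a request under /node_modules/ resolves to a file inside the project’s node_modules folder, with a guard against path traversal.

```typescript
import * as path from "path";

// Hypothetical sketch of the mapping UseNodeModules-style middleware
// performs; function and parameter names are invented for illustration.
function resolveNodeModulePath(projectRoot: string, urlPath: string): string | null {
  const prefix = "/node_modules/";
  if (!urlPath.startsWith(prefix)) {
    return null; // not ours; let the next middleware handle the request
  }
  const relative = urlPath.slice(prefix.length);
  const full = path.join(projectRoot, "node_modules", relative);
  // Reject attempts to escape the node_modules folder ("../..").
  if (!full.startsWith(path.join(projectRoot, "node_modules"))) {
    return null;
  }
  return full;
}

console.log(resolveNodeModulePath("/app", "/node_modules/bootstrap/dist/css/bootstrap.min.css"));
```

Real middleware would then stream the resolved file back with an appropriate content type, or pass the request along when the function returns null.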

ASP.NET Core and the Enterprise Part 3: Middleware

Tue, 22 Nov 2016 09:12:00 Z

An enterprise developer moving to ASP.NET Core must feel a bit like a character in Asimov’s “The Gods Themselves”. In the book, humans contact aliens who live in an alternate universe with different physical laws. The landscape of ASP.NET Core is familiar. You can still find controllers, views, models, DbContext classes, script files, and CSS. But, the infrastructure and the laws are different. For example, the hierarchy of XML configuration files in this new world is gone. The twin backbones of HTTP processing, HTTP Modules and HTTP Handlers, are also gone. In this post, we’ll talk about the replacement for modules and handlers, which is middleware.

Processing HTTP Requests

Previous versions of ASP.NET gave us a customizable but rather inflexible HTTP processing pipeline. This pipeline allowed us to install HTTP modules and execute logic for cross-cutting concerns like logging, authentication, and session management. Each module had the ability to subscribe to preset events raised by ASP.NET. When implementing a logger, for example, you might subscribe to the BeginRequest and EndRequest events and calculate the amount of time spent in between. One of the tricks in implementing a module was knowing the order of events in the pipeline so you could subscribe to an event and inspect an HTTP message at the right time. Catch a too-early event, and you might not know the user’s identity. Catch a too-late event, and a handler might have already changed a record in the database.

Although the old model of HTTP processing served us well for over a decade, ASP.NET Core brings us a new pipeline based on middleware. The new pipeline is completely ours to configure and customize. During the startup of our application, we’ll use code to tell ASP.NET which pieces of middleware we want in the application, and the order in which the middleware should execute.
Once an HTTP request arrives at the ASP.NET server, the server will pass the request to the first piece of middleware in our application. Each piece of middleware has the option of creating a response, or calling into the next piece of middleware.

One way to visualize the middleware is to think of a stack of components in your application. The stack builds a bi-directional pipeline. The first component will see every incoming request. If the first component passes a request to the next component in the stack, the first component will eventually see the response coming out of a component further up the stack. A piece of middleware that comes late in the stack may never see a request if the previous piece of middleware does not pass the request along. This might happen, for example, because a piece of middleware you use for authorization checks finds out that the current user doesn’t have access to the application.

It’s important to know that some pieces of middleware will never create a response and only exist to implement cross-cutting concerns. For example, there is a middleware component to transform an authentication token into a user identity, and another middleware component to add CORS headers into an outgoing response. Microsoft and other third parties provide us with hundreds of middleware components.

Other pieces of middleware will sometimes jump in to create or override an HTTP response at the appropriate time. For example, Microsoft provides a piece of middleware that will catch unhandled exceptions in the pipeline and create a “developer friendly” HTML response with a stack trace. A different piece of middleware will map the exception to a “user friendly” error page. You can configure different middleware pipelines for different environments, such as development versus production.

Another way to visualize the middleware pipeline is to think of a chain of responsibility. Each piece of middleware has a specific f[...]
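The chain-of-responsibility idea can be sketched outside of ASP.NET. Here is a minimal, illustrative model in TypeScript; all names are invented, and this is a conceptual sketch rather than ASP.NET Core’s actual API. Each middleware receives the request and a next function, and may short-circuit with its own response or pass control along.

```typescript
// Conceptual model of a middleware pipeline (not ASP.NET Core's API).
type Req = { path: string; user?: string };
type Res = { status: number; body: string };
type Next = (req: Req) => Res;
type Middleware = (req: Req, next: Next) => Res;

// Compose right-to-left so the first item in the array sees the request first.
function buildPipeline(middleware: Middleware[], terminal: Next): Next {
  return middleware.reduceRight<Next>(
    (next, mw) => (req) => mw(req, next),
    terminal
  );
}

// Cross-cutting middleware that never creates a response itself:
// it only attaches an identity before passing the request along.
const identify: Middleware = (req, next) =>
  next({ ...req, user: req.path.startsWith("/public") ? undefined : "scott" });

// An authorization check that can short-circuit the pipeline.
const authorize: Middleware = (req, next) =>
  req.user ? next(req) : { status: 401, body: "Unauthorized" };

const app = buildPipeline([identify, authorize], (req) => ({
  status: 200,
  body: `Hello ${req.user}`,
}));

console.log(app({ path: "/reports" }).status);    // 200
console.log(app({ path: "/public/doc" }).status); // 401
```

The second request never reaches the terminal handler, which is exactly the short-circuiting behavior described above for authorization middleware.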

Getting Started with Reactive Programming Using RxJS

Wed, 16 Nov 2016 09:12:00 Z

My latest course is now available on Pluralsight. From the description:

Reactive programming is more than an API. Reactive programming is a mindset. In this course, you'll see how to set up and install RxJS and work with your first Observable and Observer. You'll use RxJS to manage asynchronous data delivered from DOM events, network requests, and JavaScript promises. You'll learn how to handle errors and exceptions in asynchronous code, and learn about the RxJS operators you can use as composable building blocks in a data processing pipeline. By the end of the course, you'll have the fundamental knowledge you need to use RxJS in your own applications, and use other frameworks that rely on RxJS.
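To give a taste of the Observable and Observer contract the course covers, here is a minimal, self-contained sketch. This is illustrative code only, not the RxJS implementation or its exact API; RxJS adds operators, schedulers, subscriptions, and error handling on top of this basic shape.

```typescript
// A tiny stand-in for the Observable/Observer contract RxJS formalizes.
interface Observer<T> {
  next(value: T): void;
  complete?(): void;
}

class MiniObservable<T> {
  constructor(private subscribeFn: (observer: Observer<T>) => void) {}

  subscribe(observer: Observer<T>): void {
    this.subscribeFn(observer);
  }

  // A composable operator, in the spirit of RxJS's map.
  map<R>(project: (value: T) => R): MiniObservable<R> {
    return new MiniObservable<R>((observer) =>
      this.subscribe({
        next: (v) => observer.next(project(v)),
        complete: () => observer.complete?.(),
      })
    );
  }
}

// A producer that pushes three values and completes.
const numbers = new MiniObservable<number>((obs) => {
  [1, 2, 3].forEach((n) => obs.next(n));
  obs.complete?.();
});

const received: number[] = [];
numbers.map((n) => n * 10).subscribe({ next: (v) => received.push(v) });
console.log(received); // [ 10, 20, 30 ]
```

The mindset shift is that values are pushed to the observer over time, and operators like map compose into a pipeline before anything subscribes.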


ASP.NET Core and the Enterprise Part 2: Hosting

Tue, 25 Oct 2016 09:12:00 Z

The hosting model for ASP.NET Core is dramatically different from previous versions of ASP.NET. This is also one area where I’ve seen a fair amount of misunderstanding.

ASP.NET Core is a set of libraries you can install into a project using the NuGet package manager. One of the packages you might install for HTTP message processing is a package named Microsoft.AspNetCore.Server.Kestrel. The word server is in the name because this new version of ASP.NET includes its own web servers, and the featured server has the name Kestrel. In the animal kingdom, a kestrel is a bird of prey in the falcon family, but in the world of ASP.NET, Kestrel is a cross-platform web server. Kestrel builds on top of libuv, a cross-platform library for asynchronous I/O. libuv gives Kestrel a consistent streaming API to use across Windows and Linux. You also have the option of plugging in a server based on the Windows HTTP Server API (WebListener), or writing your own IServer implementation. Unless you have a good reason not to, you’ll want to use Kestrel by default.

An Overview of How It Works

You can configure the server for your application in the entry point of the application. There is no Application_Start event in this new version of ASP.NET, nor are there any default XML configuration files. Instead, the start of the application is a static Main method, and configuration lives in the code.

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

If you are looking at the above Program class with a static Main method and thinking the code looks like what you would see in a .NET console mode application, then you are thinking correctly. Compiling an ASP.NET Core project still produces a .dll file, but with .NET Core we launch the web server from the command line with the dotnet command line interface.
The dotnet host will ultimately call into the Main method. In this way of working, .NET Core resembles environments like Java, Ruby, and Python.

If you are working on ASP.NET Core from Visual Studio, then you might never see the command line. Visual Studio continues to do a job it has always done, which is to hide some of the lower level details. With Visual Studio, you can set the application to run with Kestrel as a direct host, or to run the application in IIS Express (the default setting). In both cases, the dotnet host and Kestrel server are in play, even when using IIS Express. This brings us to the topic of running applications in production.

ASP.NET Core Applications in Production

Once you realize that ASP.NET includes a cross-platform host and web server, you might think you have all the pieces you need to push to production. There is some truth to this line of thought. Once you’ve invoked the Run method on the WebHost object in the above code, you have a running web server that will listen for HTTP requests and can work on everything from a 32 core Linux server to a Raspberry Pi. However, Microsoft strongly suggests using a hardened reverse proxy in front of your Kestrel server in production. The proxy could be IIS on Windows, or Apache or NGINX.

Why the reverse proxy? In short, because technologies like IIS and Apache have been around for over 20 years and have seen all the evils the Internet can deliver to a network socket. Kestrel, on the other hand, is still a newborn babe. Also, reliable servers require additional infrastructure, like careful process management to restart failed applications. Outside of ASP.NET, in the world of Java, Python, Ruby, and NodeJS web apps, you’ll see tools like[...]