OdeToCode by K. Scott Allen



Copyright: (c) 2004 to 2016 OdeToCode LLC
 



Building Vendor and Feature Bundles with webpack

Thu, 01 Dec 2016 09:12:00 Z

The joke I’ve heard goes like this: I went to an all night JavaScript hackathon and by morning we finally had the build process configured! Like most jokes there is an element of truth to the matter. I’ve been working on an application that is mostly server rendered and requires minimal amounts of JavaScript. However, there are “pockets” in the application that require a more sophisticated user experience, and thus a heavy dose of JavaScript. These pockets all map to a specific application feature, like “the accounting dashboard” or “the user profile management page”. These facts led me to the following requirements:

1. All third party code should build into a single file.
2. Each application feature should build into a distinct file.

Requirement #1 calls for a “vendor bundle”. This bundle contains all the frameworks and libraries each application feature depends on. By building all this code into a single bundle, the client can effectively cache the bundle, and we only need to rebuild the bundle when a framework updates.

Requirement #2 calls for multiple “feature bundles”. Feature bundles are smaller than the vendor bundle, so feature bundles can be rebuilt each time a file inside them changes. In my project, an ASP.NET Core application using feature folders, the scripts for features are scattered inside the feature folders. I want to build feature bundles into an output folder and retain the same feature folder structure (example below).

I tinkered with various JavaScript bundlers and task runners until I settled on webpack. With webpack I found a solution that would support the above requirements and provide a decently fast development experience.

The Vendor Bundle

Here is a webpack configuration file for building the vendor bundle. In this case we will build a vendor bundle that includes React and ReactDOM, but webpack will examine any JS module name you add to the vendor array of the configuration file. webpack will place the named module and all of its dependencies into the output bundle named vendor. For example, Angular 2 applications would include “@angular/common” in the list. Since this is an ASP.NET Core application, I’m building the bundle into a subfolder of the wwwroot folder.

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    resolve: {
        extensions: ["", ".js"]
    },
    entry: {
        vendor: [
            "react",
            "react-dom"
            // ... and so on ...
        ]
    },
    output: {
        path: assets,
        filename: "[name].js",
        library: "[name]_dll"
    },
    plugins: [
        new webpack.DllPlugin({
            path: path.join(assets, "[name]-manifest.json"),
            name: "[name]_dll"
        }),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            }
        })
    ]
};

webpack offers a number of different plugins to deal with common code, like the CommonsChunk plugin. After some experimentation, I’ve come to prefer the DllPlugin for this job. For Windows developers, the DllPlugin name is confusing, but the idea is to share common code using “dynamically linked libraries”, so the name borrows from Windows.

DllPlugin will keep track of all the JS modules webpack includes in a bundle and will write these module names into a manifest file. In this configuration, the manifest name is vendor-manifest.json. When we build the individual feature bundles, we can use the manifest file to know which modules do not need to appear in those feature bundles.

Important note: make sure the output.library property and the DllPlugin name property match. It is this match that allows a library to dynamically “link” at runtime.

I typically place this vendor configuration into a file named webpack.vendor.config. A simple npm script entry of “webpack --config webpack.vendor.config” will build the bundle on an as-needed basis.

Feature Bundles [...]
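The post is truncated here, but a minimal sketch of what a matching feature-bundle configuration could look like follows, assuming the vendor manifest built above. The entry names and paths are hypothetical, not from the original post:

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    entry: {
        // one entry per application feature, mirroring the feature folder structure
        "features/home": "./Features/Home/index.js",
        "features/accounting/dashboard": "./Features/Accounting/Dashboard/index.js"
    },
    output: {
        path: assets,
        filename: "[name].js"
    },
    plugins: [
        // modules listed in vendor-manifest.json resolve against the vendor bundle
        // at runtime instead of being bundled again into each feature file
        new webpack.DllReferencePlugin({
            context: __dirname,
            manifest: require(path.join(assets, "vendor-manifest.json"))
        })
    ]
};

With this setup, a page using a feature bundle needs a script tag for the vendor bundle before the script tag for the feature bundle, since the feature code links against the vendor_dll library at runtime.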



AddFeatureFolders and UseNodeModules On Nuget For ASP.NET Core

Tue, 29 Nov 2016 09:12:00 Z

Here are a few small projects I put together last month.

AddFeatureFolders

I think feature folders are the best way to organize controllers and views in ASP.NET MVC. If you aren’t familiar with feature folders, see Steve Smith’s MSDN article: Feature Slices for ASP.NET Core MVC. To use feature folders with the OdeToCode.AddFeatureFolders NuGet package, all you need to do is install the package and add one line of code to ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders();

    // "Features" is the default feature folder root. To override, pass along
    // a new FeatureFolderOptions object with a different FeatureFolderName
}

The sample application in GitHub demonstrates how you can still use Layout views and view components with feature folders. I’ve also allowed for nested folders, which I’ve found useful in complex, hierarchical applications. Nesting allows the feature structure to follow the user experience when the UI offers several layers of drill-down.

UseNodeModules

With the OdeToCode.UseNodeModules package you can serve files directly from the node_modules folder of a web project. Install the middleware in the Configure method of Startup.

public void Configure(IApplicationBuilder app, IHostingEnvironment environment)
{
    // ...
    app.UseNodeModules(environment);
    // ...
}

I’ve mentioned using node_modules on this blog before, and the topic generated a number of questions. Let me explain when and why I find UseNodeModules useful.

First, understand that npm has traditionally been a tool to install code you want to execute in NodeJS. But, over the last couple of years, more and more front-end dependencies have moved to npm, and npm is doing a better job supporting dependencies for both NodeJS and the browser. Today, for example, you can install React, Bootstrap, Aurelia, jQuery, Angular 2, and many other front-end packages of both the JS and CSS flavor.

Second, many people want to know why I don’t use Bower. Bower played a role in accelerating front-end development and is a great tool. But, when I can fetch all the resources I need directly using npm, I don’t see the need to install yet another package manager.

Third, many tools understand and integrate with the node_modules folder structure and can resolve dependencies using package.json files and Node’s CommonJS module standard. These are tools like TypeScript and front-end tools like webpack. In fact, TypeScript has adopted the “no tools required but npm” approach. I no longer need to use tsd or typings when I have npm and @types.

Given the above points, it is easy to stick with npm for all third-party JavaScript modules. It is also easy to install a library like Bootstrap and serve the minified CSS file directly from Bootstrap’s dist folder. Would I recommend every project take this approach? No! But, in certain conditions I’ve found it useful to serve files directly from node_modules. With the environment tag helper in ASP.NET Core you can easily switch between serving from node_modules (say, for debugging) and a CDN in production and QA, as the sketch below shows. Enjoy! [...]
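As an illustration of that last point, a view could use the environment tag helper along these lines. This snippet is my own sketch, not from the post, and the CDN URL is a placeholder:

<environment names="Development">
    <link rel="stylesheet" href="~/node_modules/bootstrap/dist/css/bootstrap.min.css" />
</environment>
<environment names="Staging,Production">
    <link rel="stylesheet" href="https://cdn.example.com/bootstrap/3.3.7/css/bootstrap.min.css" />
</environment>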



ASP.NET Core and the Enterprise Part 3: Middleware

Tue, 22 Nov 2016 09:12:00 Z

An enterprise developer moving to ASP.NET Core must feel a bit like a character in Asimov’s “The Gods Themselves”. In the book, humans contact aliens who live in an alternate universe with different physical laws. The landscape of ASP.NET Core is familiar. You can still find controllers, views, models, DbContext classes, script files, and CSS. But, the infrastructure and the laws are different. For example, the hierarchy of XML configuration files in this new world is gone. The twin backbones of HTTP processing, HTTP Modules and HTTP Handlers, are also gone. In this post, we’ll talk about the replacement for modules and handlers, which is middleware.

Processing HTTP Requests

Previous versions of ASP.NET gave us a customizable but rather inflexible HTTP processing pipeline. This pipeline allowed us to install HTTP modules and execute logic for cross-cutting concerns like logging, authentication, and session management. Each module had the ability to subscribe to preset events raised by ASP.NET. When implementing a logger, for example, you might subscribe to the BeginRequest and EndRequest events and calculate the amount of time spent in between. One of the tricks in implementing a module was knowing the order of events in the pipeline so you could subscribe to an event and inspect an HTTP message at the right time. Catch a too-early event, and you might not know the user’s identity. Catch a too-late event, and a handler might have already changed a record in the database.

Although the old model of HTTP processing served us well for over a decade, ASP.NET Core brings us a new pipeline based on middleware. The new pipeline is completely ours to configure and customize. During the startup of our application, we’ll use code to tell ASP.NET which pieces of middleware we want in the application, and the order in which the middleware should execute. Once an HTTP request arrives at the ASP.NET server, the server will pass the request to the first piece of middleware in our application. Each piece of middleware has the option of creating a response, or calling into the next piece of middleware.

One way to visualize the middleware is to think of a stack of components in your application. The stack builds a bi-directional pipeline. The first component will see every incoming request. If the first component passes a request to the next component in the stack, the first component will eventually see the response coming out of a component further up the stack. A piece of middleware that comes late in the stack may never see a request if a previous piece of middleware does not pass the request along. This might happen, for example, because a piece of middleware you use for authorization checks finds out that the current user doesn’t have access to the application.

It’s important to know that some pieces of middleware will never create a response and only exist to implement cross-cutting concerns. For example, there is a middleware component to transform an authentication token into a user identity, and another middleware component to add CORS headers into an outgoing response. Microsoft and other third parties provide us with hundreds of middleware components.

Other pieces of middleware will sometimes jump in to create or override an HTTP response at the appropriate time. For example, Microsoft provides a piece of middleware that will catch unhandled exceptions in the pipeline and create a “developer friendly” HTML response with a stack trace. A different piece of middleware will map the exception to a “user friendly” error page. You can configure different middleware pipelines for different environments, such as development versus production.

Another way to visualize the middleware pipeline is to think of a chain of responsibility. Each piece of middleware has a specific focus. A piece of middleware to log every request would appear early in the chain to ensure the logging middleware sees every request. A l[...]
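The post doesn’t show code, but the request-timing idea above translates into a small middleware class. This is a minimal sketch of the conventional middleware shape in ASP.NET Core 1.0, not code from the post:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestTimingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // code before the await sees the incoming request
        var watch = Stopwatch.StartNew();

        // pass the request to the next piece of middleware;
        // skipping this call would short-circuit the pipeline
        await _next(context);

        // code after the await runs as the response travels back down the stack
        watch.Stop();
        Console.WriteLine($"{context.Request.Path} took {watch.ElapsedMilliseconds} ms");
    }
}

A pipeline would include the class with app.UseMiddleware<RequestTimingMiddleware>() in the Configure method of Startup, early in the chain so it sees every request.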



Getting Started with Reactive Programming Using RxJS

Wed, 16 Nov 2016 09:12:00 Z

My latest course is now available on Pluralsight. From the description:

Reactive programming is more than an API. Reactive programming is a mindset. In this course, you'll see how to set up and install RxJS and work with your first Observable and Observer. You'll use RxJS to manage asynchronous data delivered from DOM events, network requests, and JavaScript promises. You'll learn how to handle errors and exceptions in asynchronous code, and learn about the RxJS operators you can use as composable building blocks in a data processing pipeline. By the end of the course, you'll have the fundamental knowledge you need to use RxJS in your own applications, and use other frameworks that rely on RxJS. 
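To give a flavor of the material, here is a small example in the spirit of the description: an Observable built from DOM events, composed with operators, and consumed by an Observer. The example is mine, not from the course, and assumes RxJS 5 syntax:

const Rx = require("rxjs/Rx");

// every click on the document becomes a value pushed through the pipeline
const clicks = Rx.Observable.fromEvent(document, "click");

const subscription = clicks
    .map(event => ({ x: event.clientX, y: event.clientY }))   // project to coordinates
    .filter(point => point.x < 200)                           // keep clicks near the left edge
    .subscribe(
        point => console.log("click at", point),   // next
        error => console.error(error),             // error
        () => console.log("complete")              // complete
    );

// later, stop listening
subscription.unsubscribe();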





ASP.NET Core and the Enterprise Part 2: Hosting

Tue, 25 Oct 2016 09:12:00 Z

The hosting model for ASP.NET Core is dramatically different from previous versions of ASP.NET. This is also one area where I’ve seen a fair amount of misunderstanding. ASP.NET Core is a set of libraries you can install into a project using the NuGet package manager. One of the packages you might install for HTTP message processing is a package named Microsoft.AspNetCore.Server.Kestrel. The word server is in the name because this new version of ASP.NET includes its own web servers, and the featured server has the name Kestrel. In the animal kingdom, a kestrel is a bird of prey in the falcon family, but in the world of ASP.NET, Kestrel is a cross-platform web server. Kestrel builds on top of libuv, a cross-platform library for asynchronous I/O. libuv gives Kestrel a consistent streaming API to use across Windows and Linux. You also have the option of plugging in a server based on the Windows HTTP Server API (WebListener), or writing your own IServer implementation. Unless you have a good reason not to, you’ll want to use Kestrel by default.

An Overview of How It Works

You can configure the server for your application in the entry point of the application. There is no Application_Start event in this new version of ASP.NET, nor are there any default XML configuration files. Instead, the start of the application is a static Main method, and configuration lives in the code.

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

If you are looking at the above Program class with a static Main method and thinking the code looks like what you would see in a .NET console mode application, then you are thinking correctly. Compiling an ASP.NET project still produces a .dll file, but with .NET Core we launch the web server from the command line with the dotnet command line interface. The dotnet host will ultimately call into the Main method. In this way of working, .NET Core resembles environments like Java, Ruby, and Python.

If you are working on ASP.NET Core from Visual Studio, then you might never see the command line. Visual Studio continues to do a job it has always done, which is to hide some of the lower level details. With Visual Studio, you can set the application to run with Kestrel as a direct host, or to run the application in IIS Express (the default setting). In both cases, the dotnet host and Kestrel server are in play, even when using IIS Express. This brings us to the topic of running applications in production.

ASP.NET Core Applications in Production

Once you realize that ASP.NET includes a cross-platform host and web server, you might think you have all the pieces you need to push to production. There is some truth to this line of thought. Once you’ve invoked the Run method on the WebHost object in the above code, you have a running web server that will listen to HTTP requests and can work on everything from a 32 core Linux server to a Raspberry Pi. However, Microsoft strongly suggests using a hardened reverse proxy in front of your Kestrel server in production. The proxy could be IIS on Windows, or Apache or NGINX.

Why the reverse proxy? In short, because technologies like IIS and Apache have been around for over 20 years and have seen all the evils the Internet can deliver to a network socket. Kestrel, on the other hand, is still a newborn babe. Also, reliable servers require additional infrastructure like careful process management to restart failed applications. Outside of ASP.NET, in the world of Java, Python, Ruby, and NodeJS web apps, you’ll see tools like Phusion Passenger and PM2 work in combination with the reverse proxy. These types of tools provide the watchdog monitoring, logging, ba[...]



ASP.NET Core and the Enterprise: Part 1 Frameworks

Tue, 11 Oct 2016 09:11:00 Z

When larger companies with larger development teams ask me about ASP.NET Core, I generally frame the conversation in terms of risk and reward. Yes, there is a new architecture, and yes, there are new features, but when working on business applications with long lifecycles you want to look beyond the obvious topics for evangelism and measure the risks. There are six areas to consider in evaluating ASP.NET Core and its impact on business and operations. First is understanding ASP.NET Core’s relationship with the new .NET Core framework. That’s the topic for this post. In the future we’ll also be evaluating the new hosting model for ASP.NET Core, the HTTP processing pipeline, security features, the new data access landscape, and finally ASP.NET Core itself.

ASP.NET Core and the .NET Frameworks

We now have two distinct flavors of the .NET framework to choose from. First there is the full .NET framework. The full .NET framework is the mature .NET framework that has been around since the beginning, is pre-installed with Windows, and includes application-level frameworks like Windows Forms, Web Forms, WCF, and WPF. The most recent version at this time is 4.6.2.

We also have .NET Core, a new and modular version of .NET that runs on more than just the Windows operating system. Where ASP.NET Core fits into the picture is that ASP.NET Core is an application framework that can run on either the full .NET framework or on .NET Core. Selecting a .NET framework flavor is one of your first decisions when making a move to ASP.NET Core. Do you want to run on the full framework, or .NET Core, or will you need to support both?

When you choose to run on the full .NET framework, you are running on the framework you already know. Yes, ASP.NET Core will be a bit different from the ASP.NET frameworks of the past, but your underlying framework is the same. In order to understand why you might choose .NET Core instead of the full framework, we need to dig into the risks and rewards of .NET Core.

.NET Core Rewards

.NET Core is a cross-platform framework, meaning .NET Core will run on Windows, on the Mac, and on various flavors of Linux. .NET Core also works in a Docker container for those who are using or thinking about using Docker software containers. One must understand that ASP.NET Core does not require .NET Core as the underlying framework. But, if you do choose to use .NET Core as your underlying framework, you will be able to author and deploy ASP.NET applications and services on all of these various platforms. Linux is, of course, a big target for server-side applications. Most enterprises are heterogeneous and already have the IT expertise to run business applications on both Windows and Linux servers. There is also the opportunity to save money, as Linux servers typically run a bit cheaper, particularly when using cloud-based infrastructure. On Azure, for example, a single 4 core virtual machine running Linux is currently around $90 cheaper per month than its Windows counterpart.

What’s not immediately obvious when talking about a cross-platform .NET framework is how the necessary tooling also works across platforms. .NET Core has no hard dependency on Windows or Visual Studio. All of the low-level tools you need to do work will run from the command line. There is an entire new world of text editors and IDEs available that we can now use to develop .NET applications, including Visual Studio Code from Microsoft, Project Rider from JetBrains, as well as text editors like Sublime and Atom.
I’ve worked on more than one project over the years where there is a front-end specialist who uses a Mac. The front-end developer hasn’t been able to install Visual Studio and work with an ASP.NET project the same way a Windows developer would work with the project. This type of scenario is considerably easier with .NET Core because a developer on Apple hardware can wri[...]



Updated Videos For ASP.NET Core

Wed, 05 Oct 2016 11:12:00 Z

I’ve re-recorded my ASP.NET Core Fundamentals course for Pluralsight using the released bits of ASP.NET Core 1.0. I hope you enjoy the videos!

In the future I’d like to record videos showing my opinionated approach to using ASP.NET on larger projects. Let Pluralsight know if that’s the type of content you’d like to see!

 

 





The Troubles With JavaScript Modules

Tue, 04 Oct 2016 11:12:00 Z

This post is one in a series of posts where I describe common problems developers face using ES2015 features of JavaScript. In this post we look at modules.

The Syntax Pitfall

The first pitfall developers hit when using modules is making assumptions about the syntax. I fell into this trap myself.

import {Person, Animal} from "./lib"

Curly braces in JavaScript appear everywhere. We use them to define block statements, object literals, and more recently, use them for destructuring. Once I learned about destructuring, I looked at my next import statement and wrongly assumed JavaScript was destructuring a node-like module object into new variables. That’s all wrong! Import statements create bindings with behaviors that transcend mere variable declarations.

Immutable Bindings

Consider a module with the following export.

export let counter = 0;

And now a module that wants to consume the export.

import {counter} from "./lib/exporter";

counter = 2;

The code trying to set the counter is in error because import bindings are immutable. What sort of error will you see? The specification calls for a TypeError, however:

- we currently don’t have a runtime environment that uses ES modules natively, because the module loading spec is still a work in progress, and
- we rely on transpilers to transform ES2015 imports and exports into de-facto standards like CommonJS, where the rules are relaxed.

For those reasons, the error we will see (or not see) depends on the tools we use. For example, the TypeScript compiler will give an error on any assignment to counter – “Invalid left-hand side of assignment expression”. Babel will give us a similar build-time error.

As an aside, this is the type of scenario that worries me. Features like import bindings, variable scopes, const, and others might work differently when we transpile for newer runtimes in the future and use these features natively. I don’t foresee catastrophic problems, but there will be some headaches along the way.

Live Bindings

The behavior of bindings also surprises some developers, particularly when importing state from another module. Let’s add some additional exports to the exporting module.

export let counter = 0;

export let creature = {
    name: "Oscar"
};

export function increment() {
    counter += 1;
    return counter;
}

export function inspect() {
    return creature.name;
}

export function reset() {
    creature = { name: "Oscar" };
}

Although we can’t import and then mutate the value of the counter binding, we can call a piece of code in the exporting module that can change the value of the counter.

import {counter, increment} from "./lib/exporter";

describe("binding behavior", () => {

    it("is live", () => {
        expect(counter).toBe(0);
        increment();
        expect(counter).toBe(1);
    });

});

Notice the change to counter is visible inside the importing module. The same behavior holds for objects, too.

import {creature, inspect, reset} from "./lib/exporter";

describe("binding behavior", () => {

    it("is live", () => {
        expect(creature.name).toBe("Oscar");

        // this is legal - not trying to change the binding
        creature.name = "Scott";

        // everyone sees the change, even the exporting module
        expect(inspect()).toBe("Scott");

        // but only the exporter can change the binding value
        reset();
        expect(creature.name).toBe("Oscar");
    });

});

For developers, it’s important to understand that modules are singletons. Any module importing counter and creature will see the same values.

Static Semantics

Node developers accustomed to the flexibility of CommonJS can be disappointed by the inflexible, concrete nature of ES2015 modules. The ES specification gives tools and runtimes the ability to statically analyze module code to d[...]
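The post is cut off at this point, but the static nature it alludes to is easy to demonstrate. Unlike a CommonJS require call, an ES2015 import declaration must appear at the top level of a module, so tools can discover every dependency without executing any code. A small illustration of my own:

// CommonJS: require is just a function call, so this is legal
if (featureFlagEnabled) {
    const feature = require("./feature");
}

// ES2015: import declarations cannot be nested or conditional;
// this is a SyntaxError before the code ever runs
if (featureFlagEnabled) {
    import {feature} from "./feature";
}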



Database Migrations and Seeding in ASP.NET Core

Tue, 20 Sep 2016 09:12:00 Z

There is an instant in time when an ASP.NET application is fully alive and configured but is still held in check, waiting for a signal from the starter’s gun. This moment exists between the lines of code in Program.cs, and it is here where I’ve found a nice place to automatically run database migrations and seed a database based on command line arguments to the program.

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    ProcessDbCommands(args, host);

    host.Run();
}

ProcessDbCommands is the method I use in the above code, and the logic here can be as simple or as complicated as you need. In my case, I’m just going to look for keywords in the arguments to drop, migrate, and seed the database. For example, running “dotnet run dropdb migratedb seeddb” will execute all three options against the configured database.

private static void ProcessDbCommands(string[] args, IWebHost host)
{
    var services = (IServiceScopeFactory)host.Services.GetService(typeof(IServiceScopeFactory));

    using (var scope = services.CreateScope())
    {
        if (args.Contains("dropdb"))
        {
            Console.WriteLine("Dropping database");
            var db = GetLeagueDb(scope);
            db.Database.EnsureDeleted();
        }
        if (args.Contains("migratedb"))
        {
            Console.WriteLine("Migrating database");
            var db = GetLeagueDb(scope);
            db.Database.Migrate();
        }
        if (args.Contains("seeddb"))
        {
            Console.WriteLine("Seeding database");
            var db = GetLeagueDb(scope);
            db.Seed();
        }
    }        
}

private static LeagueDb GetLeagueDb(IServiceScope services)
{
    var db = services.ServiceProvider.GetRequiredService<LeagueDb>();
    return db;
}

A couple of notes on the above code.

IWebHost gives us access to a fully configured environment, so connection strings and services are available just as they are inside the rest of the post-startup application code.

The db.Database.EnsureDeleted and db.Database.Migrate methods are built-in APIs for EF Core. The Seed method, on the other hand, is a custom extension method.
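The Seed extension method itself isn’t shown in the post. A hypothetical sketch of its shape, assuming a Teams DbSet on the LeagueDb context (the Team entity and data below are invented for illustration):

using System.Linq;

public static class LeagueDbExtensions
{
    public static void Seed(this LeagueDb db)
    {
        // only seed an empty database, so the seeddb command stays idempotent
        if (!db.Teams.Any())
        {
            db.Teams.Add(new Team { Name = "Springfield Isotopes" });
            db.SaveChanges();
        }
    }
}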




The Troubles with JavaScript Classes

Tue, 13 Sep 2016 09:12:00 Z

Over the summer I gave a talk titled “The New Dragons of JavaScript”. The idea was to provide, like the cartographers of the Old World, a map of where the dragons and sea serpents live in the new JavaScript feature landscape. These mythological beasts have a tendency to introduce confusion or pain in software development. One area I covered was the quirks you might run into with JavaScript classes.

Some introductions explain how classes work by describing the de-sugaring a transpiler applies to transform a class into the classical constructor function and prototype manipulation we’ve used in JavaScript for many years.

class Employee {
    constructor(name) {
        this._name = name;
    }

    doWork() {
        return `${this._name} is working`;
    }
}

// above code becomes ...

let Employee = function(name) {
    this._name = name;
};

Employee.prototype = {
    doWork: function() {
        return `${this._name} is working`;
    }
};

Constructor functions and prototypes are a useful mental model to have at times, but the model also leads to trouble because classes aren’t exactly like constructor functions. For example, functions in JavaScript will hoist, but classes do not. If you ever want to push the definition of a small utility class to the bottom of a file and try to use the class in the code at the top of the file, you’ll be setting yourself up for an error.

// this code works
const e = new Employee();

function Employee() {
}

// this code produces a ReferenceError
const e = new Employee();

class Employee {
}

Technically, classes (and variables declared with let and const) do hoist themselves, but they hoist themselves into an area the early specs referred to as the “temporal dead zone”. Accessing a symbol in its TDZ creates a ReferenceError. As an aside, “temporal dead zone” is, I think, one of the greatest computer science terms ever conceived and should also be the title of a Hollywood film starring Mark Wahlberg.

Another difference between creating an object using a class and creating an object with a constructor function is in reflective code. It’s easy to discover the methods of an object instantiated with a constructor function using a for in loop.

const Human = function () { };
Human.prototype.doWork = function () { };

let names = [];
for (const p in new Human()) {
    names.push(p);
}
// ["doWork"]

The same code won’t work when using a class definition.

class Horse {
    constructor() { }
    doWork() { }
}

names = [];
for (const p in new Horse()) {
    names.push(p);
}
// []

However, it is possible to get to the methods of a class using some Object APIs.

names = [];
const prototype = Object.getPrototypeOf(new Horse());
for (const name of Object.getOwnPropertyNames(prototype)) {
    names.push(name);
}
// ["constructor", "doWork"]

Coming soon – The Troubles with Modules. Previously in this series – The Trouble with JavaScript Arrow Functions [...]