
Scott Hanselman's Blog



Scott Hanselman on Programming, User Experience, The Zen of Computers and Life in General



 



Monospaced Programming Fonts with Ligatures

Thu, 20 Jul 2017 22:13:29 GMT

Typographic ligatures are when multiple characters appear to combine into a single character. Simplistically, when you type two or more characters and they magically attach to each other, you're using ligatures that were supported by your OS, your app, and your font. I did a blog post in 2011 on using OpenType Ligatures and Stylistic Sets to make nice looking wedding invitations. Most English laypeople aren't familiar with ligatures as such and are impressed by them! However, if your language uses ligatures as a fundamental building block, this kind of stuff is old hat. Ligatures are fundamental to Arabic script, and when you're typing it up you'll see your characters/font change and ligatures be added as you type. For example, here is ل ا with a space between them, but this is لا - the same two characters with no space. Ligatures kicked in.

OK, let's talk programming. Picking a programming font is like picking a religion. No matter what you pick, someone will say you're wrong. Most people will agree at least that monospaced fonts are ideal for reading code and that both of you who use proportionally spaced fonts are destined for hell, or at the very least, purgatory.

Beyond that, there are some really interesting programming fonts that have ligature support built in. It's important that you - as programmers - understand and remember that ligatures are just a view on the bytes that are your code. If you custom-make a font that renders the = equals sign as a poop emoji, that's between you and your font. The same thing applies to ligatures. Your code is the same.

Three of the most interesting and thoughtful monospaced programming fonts with ligatures are Fira Code, Monoid, and Hasklig. I say "thoughtful" and that's what I really mean - these folks have designed these fonts with programming in mind, considering spacing, feel, density, pleasantness, glance-ability, and a dozen other things that I'm not clever enough to think of.

I'll be doing screenshots (and coding) in the free cross-platform Visual Studio Code. Go to your User Settings (Ctrl-,) or File | Preferences, and add your font name and turn on ligatures if you want to follow along. Example:

    // Place your settings in this file to overwrite the default settings
    {
        "editor.fontSize": 20,
        "editor.fontLigatures": true,
        "editor.fontFamily": "Fira Code"
    }

Most of these fonts have dozens and dozens of ligature combinations, and there is no agreement for "make this a single glyph" or "use ligatures for -> but not ==>", so you'll need to try them out with YOUR code and make a decision for yourself. My sample code example can't be complete, and how it looks and feels to you on your screen is all that matters.

Here's my little sample. Note the differences.

    // FIRA CODE
    object o;
    if (o is int i || (o is string s && int.TryParse(s, out i))) { /* use i */ }
    var x = 0xABCDEF;
    -> --> ==> != === !== && ||
    <=< http://www.hanselman.com <=>
    i++; #### ***

Fira Code

There's so much here. Look at how "www" turned into an interesting glyph. Things like != and ==> turn into arrows. HTML comments are awesome. Double ampersands join together. I was especially impressed by the redefined hex "x". See how it's higher up and smaller than var x?

Monoid

Monoid prides itself on being crisp and readable on retina displays as well as at 9pt on low-res displays. I frankly can't understand how tiny font people can function. It gives me a headache to even consider programming at anything less than 14 to 16pt, and I am usually around 20pt. And my vision is fine. ;) Monoid's goal is to be sleek and precise, and the designer has gone out of their way to make sure there's no confusion between any two characters.

Hasklig

Hasklig takes the Source Code Pro font and adds ligatures. As you can tell by the name, it's great in Haskell, as for a while a number of Haskell people were taking to using single-character (tiny) Unicode glyphs like ⇒ for things like =>. Clearly this was a problem best solved [...]
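To see that ligatures really are purely presentation, here's a tiny C# sketch (illustrative, not from the original post) showing that the operators stay plain multi-character strings no matter how your font draws them:

    using System;

    class LigatureDemo
    {
        static void Main()
        {
            // However the font renders these, the source bytes are unchanged:
            Console.WriteLine("!=".Length);   // 2 - still two characters
            Console.WriteLine("==>".Length);  // 3 - still three characters

            // The compiler sees the same bytes, so behavior is identical.
            int a = 1, b = 2;
            Console.WriteLine(a != b);        // True
        }
    }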



13 hours debugging a segmentation fault in .NET Core on Raspberry Pi and the solution was...

Tue, 18 Jul 2017 06:54:41 GMT

Debugging is a satisfying and special kind of hell. You really have to live it to understand it. When you're deep into it you never know when it'll be done. When you do finally escape, it's almost always a DOH! moment. I spent an entire day debugging an issue and the solution ended up being a checkbox.

NOTE: If you get a third of the way through this blog post and already figured it out, well, poop on you. Where were you after lunch WHEN I NEEDED YOU?

I wanted to use a Raspberry Pi in a tech talk I'm doing tomorrow at a conference. I was going to show .NET Core 2.0 and ASP.NET running on a Raspberry Pi, so I figured I'd start with Hello World. How hard could it be? You'll write and build a .NET app on Windows or Mac, then publish it to the Raspberry Pi. I'm using a preview build of the .NET Core 2.0 command line and SDK (CLI) I got from here.

    C:\raspberrypi> dotnet new console
    C:\raspberrypi> dotnet run
    Hello World!
    C:\raspberrypi> dotnet publish -r linux-arm
    Microsoft Build Engine version for .NET Core
      raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\raspberrypi.dll
      raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\publish\

Notice the simplified publish. You'll get a folder for linux-arm in this example, but you could also publish for osx-x64, etc. You'll want to take the files from the publish folder (not the folder above it) and move them to the Raspberry Pi. This is a self-contained application that targets ARM on Linux, so after the prerequisites, that's all you need.

I grabbed a mini-SD card, headed over to https://www.raspberrypi.org/downloads/ and downloaded the latest Raspbian image. I used etcher.io - a lovely image burner for Windows, Mac, or Linux - and wrote the image to the SD card. I booted up and got ready to install some prereqs. I'm only 15 min in at this point. Setting up a Raspberry Pi 2 or Raspberry Pi 3 is VERY smooth these days.

Here's the prereqs for .NET Core 2 on Ubuntu or Debian/Raspbian. Install them from the terminal, natch.

    sudo apt-get install libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu-dev liblttng-ust0 libssl-dev libstdc++6 libunwind8 libuuid1 zlib1g

I also added an FTP server and ran vncserver, so I'd have a few ways to talk to the Raspberry Pi. Yes, I could also SSH in, but I have a spare monitor, and with that monitor plus VNC I didn't see a need.

    sudo apt-get install pure-ftpd
    vncserver

Then I fire up FileZilla - my preferred FTP client - and FTP the publish output folder from my dotnet publish above. I put the files in a folder off my ~/Desktop. Then from a terminal:

    pi@raspberrypi:~/Desktop/helloworld $ chmod +x raspberrypi

(or whatever the name of your published "exe" is. It'll be the name of your source folder/project with no extension. As this is a self-contained published app, again, all the .NET Core runtime stuff is in the same folder with the app.)

    pi@raspberrypi:~/Desktop/helloworld $ ./raspberrypi
    Segmentation fault

The crash was instant...not a pause and a crash, but it showed up as soon as I pressed enter. Shoot. I ran "strace ./raspberrypi" and got this output. I figured maybe I missed one of the prerequisite libraries, and I just needed to see which one and apt-get it. I can see the ld.so.nohwcap error, but that's a historical Debian-ism and more of a warning than a fatal. I used to be able to read straces 20 years ago but, much like my Spanish, my skills are only good at Chipotle.

I can see it just getting started loading libraries, seeking around in them, checking file status, mapping files to memory, setting memory protection, then it all falls apart. Perhaps we tried to do something inappropriate with some memory that just got protected? We are dereferencing a null pointer. Maybe you can read this and you already know what is going to happen! I did not.

I run it under gdb:

    pi@raspberrypi:~/Desktop/WTFISTHISCRAP $ gdb ./raspberrypi
    GNU gdb (Raspbian 7.7.1+dfsg-5+rpi1) 7.7.1
    Copyright (C) 2014 Free Software Foundation, Inc.
    This GDB was configured as "arm-linux-gnueabihf".
    "/home/pi/Desktop/helloworl[...]
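For reference, the app at the center of all this drama is just the stock console template - dotnet new console generates a Program.cs roughly along these lines (the namespace comes from the folder name):

    using System;

    namespace raspberrypi
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");
            }
        }
    }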



Ubuntu now in the Windows Store: Updates to Linux on Windows 10 and Important Tips

Mon, 10 Jul 2017 21:43:39 GMT

I noticed this blog post about Ubuntu over at the Microsoft Command Line blog. Ubuntu is now available from the Windows Store for builds of Windows above 16215. You can run "winver" to see your build number of Windows. If you run Windows 10 you can certainly sign up for the Windows Insiders builds, or you can wait a few months until these features make their way to the mainstream. I've been running Windows 10 Insiders "Fast ring" for a while with a few issues but nothing blocking.

The addition of Ubuntu to the Windows Store may initially seem confusing or even a little bizarre. However, given a minute to understand the larger architecture, it makes a lot of sense. However, for those of us who have been beta-testing these features, the move to the Windows Store will require some manual steps in order for you to reap the benefits.

Here's how I see it. For the early betas of the Windows Subsystem for Linux you type bash from anywhere and it runs Ubuntu on Windows. Ubuntu on Windows hides its filesystem in C:\Users\scott\AppData\Local\somethingetcetc and you shouldn't go there or touch it. By moving the tar files and Linux distro installation into the Store, that allows us users to use the Store's CDN (Content Distribution Network) to get distros quickly and easily. Just turn on the feature and REBOOT:

    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

then hit the Store to get the binaries!

Ok, now this is where and why it gets interesting. Soon (later this month I'm told) we will be able to have n number of native Linux distros on our Windows 10 machines at one time. You can install as many as you like from the Store. No VMs, just fast Linux...on Windows! Windows 10 includes a utility for the Windows Subsystem for Linux called "wslconfig":

    C:\>wslconfig
    Performs administrative operations on Windows Subsystem for Linux
    Usage:
        /l, /list [/all] - Lists registered distributions.
            /all - Optionally list all distributions, including distributions
                   that are currently being installed or uninstalled.
        /s, /setdefault <DistributionName> - Sets the specified distribution as the default.
        /u, /unregister <DistributionName> - Unregisters a distribution.

    C:\WINDOWS\system32>wslconfig /l
    Windows Subsystem for Linux Distributions:
    Ubuntu (Default)
    Fedora
    OpenSUSE

At this point when I type "bash" at the regular Windows command prompt or PowerShell I will be launching my default Linux. I can also just type "Ubuntu" or "Fedora," etc. to get a specific one. If I wanted to test my Linux code (.NET, node, go, ruby, whatever) I could script it from Windows and run my tests on n number of distros. Slick for developers.

TODOs if you have WSL and Bash from earlier betas

If you already have "bash" on your Windows 10 machine and want to move to the "many distros" model, you'll just install the Ubuntu distro from the Store and then move your distro customizations out of the "legacy/beta bash" over to the "new train but beta although getting closer to release WSL." I copied my ~/ folder over to /mnt/c/Users/Scott/Desktop/WSLBackup, then opened Ubuntu and copied my .rc files and whatnot back in. Then I removed my original bash with lxrun /uninstall. Once I've done that, my distros are managed by the Store and I can have as many as I like. Other than customizations, it's really easy (like, it's not a big deal and it's fast) to add or remove Linuxes on Windows 10, so fear not. Back up your stuff and this will be a 10 min operation, plus whatever apt-get installs you need to redo.

Everything else is the same and you'll still want to continue storing and sharing files via /mnt/c. NOTE: I did a YouTube video called Editing code and files on Windows Subsystem for Linux on Windows 10 that I'd love for you to check out and share on social media! Enjoy!

Sponsor: Seq is simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Version 4 adds integrated dashboards and alerts[...]
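To make the "script it from Windows" idea above concrete, here's a minimal C# sketch (my illustration, not from the post). It assumes that typing bash launches the default distro, that each Store-installed distro registers a launcher command like ubuntu or fedora, and that each launcher accepts a -c argument the way bash does; run-tests.sh is a hypothetical test script:

    using System.Diagnostics;

    class CrossDistroTests
    {
        static void Main()
        {
            // "bash" runs the default distro; "ubuntu" and "fedora" are
            // assumed per-distro launcher commands, as described above.
            string[] launchers = { "bash", "ubuntu", "fedora" };

            foreach (var launcher in launchers)
            {
                // Run the same (hypothetical) test script in each distro.
                var psi = new ProcessStartInfo(launcher, "-c \"./run-tests.sh\"")
                {
                    UseShellExecute = false
                };
                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                }
            }
        }
    }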



URLs are UI

Fri, 07 Jul 2017 22:49:07 GMT

What a great title. "URLs are UI." Pithy, clear, crisp. Very true. I've been saying it for years. Someone on Twitter said "this is the professional quote of 2017" because they agreed with it. Except Jakob Nielsen said it in 1999. And Tim Berners-Lee said "Cool URIs don't change" in 1998. So many folks spend time on their CSS and their UX/UI but still come up with URLs that are, at best, comically long, and at worst, user hostile.

Search Results that aren't GETs - Make it easy to share

Even your non-technical parent or partner thinks URLs are UI. How do I know? How many times has a relative emailed you something like this: "Check out this house we found! https://www.somerealestatesite.com/homes/for_sale/search_results.asp" That's not meant to tease the non-technical relative! It's not their fault! The URL is the UI for them. It's totally reasonable for them to copy-paste from the box that represents where they are and give it to you so you can go there too! Make it a priority that your website supports shareable URLs.

URLs that are easy to shorten - Can you easily shorten a URL?

I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman The only thing that matters there is the 6380. Try it: https://stackoverflow.com/users/6380 works, and https://stackoverflow.com/users/6380/fancy-pants also works. SO will even support this! http://stackoverflow.com/u/6380. Genius. Why? Because they decided it matters. Here's another: https://stackoverflow.com/questions/701030/whats-the-significance-of-oct-12-1999 Again, the text after the ID doesn't matter: https://stackoverflow.com/questions/701030/ This is a great model for URLs where you want to use a unique ID but the text/title in the URL may change. I use this for my podcasts, so https://hanselminutes.com/587/brandon-bouier-on-the-defense-digital-service-and-deploying-code-in-a-war-zone is the same as https://hanselminutes.com/587.

Unnecessarily long or unintuitive URLs - Human Readable and Human Guessable

Sometimes if you want context to be carried in the URL you have to, well, carry it along. There was a little debate on Twitter recently about URLs like this: https://fabrikam.visualstudio.com/_projects. What's wrong with it? The _ is not intuitive at all. Why not https://fabrikam.visualstudio.com/projects? Because obscure technical reason. In fact, all the top-level menu items for doing stuff in VSTS start with _. Not /menu/ or /action or whatever. My code is at https://fabrikam.visualstudio.com/_git/FabrikamVSO and I clone from here: https://fabrikam.visualstudio.com/DefaultCollection/_git/FabrikamVSO. That's weird. Where did DefaultCollection come from? Why can't I just add a ".git" extension to my project's URL and clone that? Well, maybe they want the paths to be nice in the URL. Nope. https://fabrikam.visualstudio.com/_git/FabrikamVSO?path=%2Fsrc%2Fsetup%2Fcleanup.local.ps1&version=GBmaster&_a=contents is a file. Compare that to https://github.com/shanselman/TinyOS/blob/master/readme.md at GitHub. Again, I am sure there is a good, and perhaps very valid, technical reason. But another valid reason is very frank: URLs weren't a UX priority. Same with OneDrive https://onedrive.live.com/?id=CD0633A7367371152C%21172&cid=CD06A73371152C vs. Dropbox https://www.dropbox.com/home/Games As a programmer, I am sympathetic. As a user, I have zero sympathy. Now I have to remember that there is a _ and it's a thing. I proposed this.

URLs are rarely a tech problem. They are an organizational willpower problem. You care a lot about the evocative 2meg jpg hero image on your website. You change fonts, move CSS around ad infinitum, and agonize over single pixels. You should also care about your URLs.

SIDE NOTE: Yes, I am fully aware of my own hypocrisy with this issue. My blog software was written by a bunch of us in 2002 and our URLs are close to OK, but their age is showing. I need to find a balance between "Cool URLs don't change" and "should I change totally uncool UR[...]
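As an illustration of the Stack Overflow-style pattern above - the numeric ID is canonical, the trailing slug is optional and ignored - here's roughly how such a route looks in ASP.NET Core MVC (my sketch; the names are made up):

    using Microsoft.AspNetCore.Mvc;

    public class UsersController : Controller
    {
        // Matches /users/6380, /users/6380/scott-hanselman,
        // and /users/6380/fancy-pants - only the id matters.
        [Route("users/{id:int}/{slug?}")]
        public IActionResult Profile(int id, string slug = null)
        {
            // Look the user up by id; the slug is purely cosmetic.
            return Content($"User #{id}");
        }
    }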



Review: The AmpliFi HD (High-Density) Home Wi-Fi Mesh Networking System

Thu, 06 Jul 2017 04:50:52 GMT

I've been very happy with the TP-Link AC3200 router I got two years ago. It's been an excellent and solid router. However, as the kids get older and the number of mobile devices (and smart(ish) devices) in the house increases, the dead WiFi spots have become more and more noticeable. Additionally, I've found myself wanting more control over the kids' internet access.

There are a number of great WiFi survey apps, but I was impressed with the simplicity of this Windows 10 WiFi Survey app, so I used it to measure the signals around my house, superimposed with a picture of the floor plan. Here's the signal strength of the TP-Link. Note that when you're using a WiFi survey app you need to take into consideration whether you're measuring 2.4GHz, which gives you better distance at slower speeds, or 5GHz, which can give you a much faster connection at the cost of range. As a general rule, in a single room or small house 5GHz is better, and you'll absolutely notice it with video streaming like Netflix.

Below is a map of the 5GHz signal for my single TP-Link router. It's "fine" but it's not epic if you move around. You can guess from the map that the router is under the stairs in the middle. You can also guess where concrete walls are, as well as the angles of certain vectors that pass through thick walls diagonally and affect the signal. Again, it's OK, but it's starting to be annoying and I wanted to see if I could fix it.

SIDE BAR: It is certainly possible to take two routers and combine them into one network with a shared SSID. If you know how to do this kind of thing (and enjoy it) then more power to you. I tried it out in 2010 and it worked OK, but I want my network to "just work" 100% of the time, out of the box. I like the easy setup of a consumer device with minimal moving parts. Mesh networking products are reaching the consumer at a solid price point with solid tech, so I thought it was time to make the switch.

Below is the same map with the same locations, except using the AmpliFi HD (High-Density) Home Wi-Fi System from Ubiquiti Networks. This is the consumer (or "prosumer") version of the technology that Ubiquiti (UBNT) uses in their commercial products. AmpliFi HD includes the router and two "mesh points." These are extenders that use a mesh tech called 3x3 MIMO. They can transmit and receive via 3 streams at a low level. MIMO is part of the 802.11n spec. Note that this improvement is JUST using the AmpliFi main router.

When you do a WiFi survey, the "mesh points" will show up as the same SSID (the same wireless network) but they'll have different MAC addresses. That means in my list of networks in the survey tool my "HanselMesh" network appears three times. Don't worry, it's one SSID and your computers will only see ONE network - it's just advanced tools that see each point. It's that "meshing" of n number of access points that is the whole point. These two maps below are the relative strengths of just the mesh points. It's the union of all three of these maps that gives the clear picture. For example, one mesh point covers the living area fantastically (as does the router itself) while the other covers the garage (not that it needs it) and the entire office.

Between the main router and the two included mesh points there are NO dead spots in the house. I'll find the kids in odd corners with an iPad, behind a couch in the play room where they couldn't get signal before. I'm finding myself sitting in different rooms than I did before just because I can roam without thinking about it.

I suspect I could get away with buying just the AmpliFi router (around US$133) and maybe one mesh point extender, but the price for all three (router + 2 mesh points) is decent. The slick part is that you can add mesh points OR a second router. It's the second router idea that is most compelling for multi-floor buildings that also have a wired network. For example, I could add a second router (not a mesh point) upstairs and plug it int[...]



Porting a 15 year old .NET 1.1 Virtual CPU Tiny Operating System school project to .NET Core 2.0

Sun, 02 Jul 2017 01:27:08 GMT

I've had a number of great guests on the podcast lately. One topic that has come up a number of times is the "toy project." I've usually kept mine private - never putting them on GitHub - somewhat concerned that people would judge me and my code. However, hypocrite that I am (aren't we all?), I have advocated that others put their "Garage Sale Code" online. So here's some crappy code. ;)

The Preamble

While I've been working as an engineer for 25 years this year, I didn't graduate from school with a 4-year degree until 2003 - I just needed to get it done, for myself. I was poking around recently and found my project from OIT's CST352 "Operating Systems" class. One of the projects was to create a "Virtual CPU and OS." This is kind of a thought exercise. It's not really a parser/lexer - although there is both - and it's not a real OS. But it needs to be able to take in a made-up quasi-Assembly Language instruction set and execute it on a virtual CPU while managing virtual memory of arbitrary size. Again, a thought exercise made real to confirm that the student understands the responsibilities of a CPU.

Here's an example "application." Confused yet? Here's the original spec I was given in 2002 that includes the 36 instructions the "CPU" should understand. It has 10 general-purpose 32-bit registers, addressed as 1 through 10. Register 10 is the stack pointer. There are two bit flag registers - sign flag and zero flag. Instructions are "opcode arg1 arg2" with constants prefixed with "$."

    11 r8     ;Print r8
    6 r1 $10  ;Move 10 into r1
    6 r2 $6   ;Move 6 into r2
    6 r3 $25  ;Move 25 into r3
    23 r1     ;Acquire lock in r1 (currently 10)
    11 r3     ;Print r3 (currently 25)
    24 r1     ;Release r1 (currently 10)
    25 r3     ;Sleep r3 (currently 25)
    11 r3     ;Print r3 (currently 25)
    27        ;Exit

I wrote my homework assignment in 2002 in the idiomatic C# of the time on .NET 1.1. That means no Generics - I had to make my own strongly typed collections. Since then, C# has added dozens (if not a hundred) of language and syntax improvements. I didn't use a Unit Testing Framework, as TDD was just starting around 1999 during the XP (eXtreme Programming) days and NUnit was just getting started. It also uses "unsafe" to pin down memory in a few places. I'm sure there are WAY WAY WAY better and more sophisticated ways to do this today in idiomatic C# of 2017. Those are excuses; the real reasons are my own ignorance and ability, combined with some night-school laziness.

One of the more fun parts of this exercise was moving from physical memory (a byte array as I recall) to a full-on Memory Manager where each Process thought it could address a whole bunch of Virtual Memory while actual Physical Memory was arbitrarily sized. Then - as a joke - I would swap out memory pages as XML! ;) Yes, to be clear, it was a joke and I still love it.

You can run an "app" by passing in the total physical memory along with the text file containing the program, but you can also run an arbitrary number of programs by passing in an arbitrary number of text files! The "TinyOS" will handle each process thinking it has its own memory and will time-slice between them. If you are more of a visual learner, perhaps you'd prefer this 20-slide PowerPoint on this Tiny CPU that I presented in Malaysia later that year. You dig those early 2000-era slides? I KNOW YOU DO.

Updating a .NET 1.1 app to cross-platform .NET Core 2.0

Step 1 was to download the original code from my own blog. ;) This is also Reason #4134 why you should have a blog.

I decided to use Visual Studio 2017 to upgrade it, and even worse, I decided to use .NET Core 2.0, which is currently in Preview. I wanted to use .NET Core 2.0 not just because it's cross-platform but also because it promises to have a pretty large API surface area and I want this to "just work." The part about getting my old application running on Linux is going to be awesome, though. Visual Studio[...]
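To make the instruction format concrete, here's a minimal C# sketch of a fetch-decode-execute loop for a few of the opcodes above (6 = move constant into register, 11 = print register, 27 = exit). This is my illustration of the idea, not the original project's code:

    using System;

    class TinyCpuSketch
    {
        // Ten general-purpose 32-bit registers, addressed 1 through 10
        // (index 0 unused; register 10 is the stack pointer).
        static readonly int[] Registers = new int[11];

        static void Main()
        {
            string[] program =
            {
                "6 r1 $10",  // Move 10 into r1
                "6 r3 $25",  // Move 25 into r3
                "11 r3",     // Print r3
                "27",        // Exit
            };

            foreach (var line in program)
            {
                var parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                switch (int.Parse(parts[0]))
                {
                    case 6:  // Move constant into register
                        Registers[Reg(parts[1])] = Const(parts[2]);
                        break;
                    case 11: // Print register
                        Console.WriteLine(Registers[Reg(parts[1])]);
                        break;
                    case 27: // Exit
                        return;
                }
            }
        }

        // Strip the "r" or "$" prefix and parse the number.
        static int Reg(string token) => int.Parse(token.Substring(1));
        static int Const(string token) => int.Parse(token.Substring(1));
    }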



Speed of dotnet run vs the speed of dotnet for published apps (plus self-contained .NET Core apps)

Wed, 28 Jun 2017 23:18:57 GMT

The .NET Core team really prides themselves on performance. However, it's not immediately obvious (as with all systems) if you just do Hello World as a developer. Just today I was doing a Ruby on Rails app in Development Mode with mruby - but that's not what you'd go to production with. Let's look at a great question I got today on Twitter.

@shanselman @davidfowl Is it normal for dotnet run on the latest .net core 2.0 preview 2 bits to take 4 seconds to run? pic.twitter.com/wvD2aqtfi0 - Jerome Terry (@jeromeleoterry) June 28, 2017

Dotnet Run - Builds and Runs Source Code in Development

That's a great question. If you install .NET Core 2.0 Preview - this person is on a Mac, but you can use Linux or Windows as well - then do just this:

    $ dotnet new console
    $ dotnet run

It'll be about 3-4 seconds. dotnet is the SDK and dotnet run will build and run your source code. Here's a short bit from the docs:

The dotnet run command provides a convenient option to run your application from the source code with one command. It's useful for fast iterative development from the command line. The command depends on the dotnet build command to build the code. Any requirements for the build, such as that the project must be restored first, apply to dotnet run as well.

While this is super convenient, it's not totally obvious that dotnet run isn't something you'd go to production with (especially Hello World Production, which is quite demanding! ;) ).

Dotnet Publish then Dotnet YOUR.DLL for Production

Instead, do a dotnet publish, note the compiled DLL created, then run "dotnet tst.dll." For example:

    C:\Users\scott\Desktop\tst> dotnet publish
    Microsoft (R) Build Engine version 15.3 for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.
      tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\tst.dll
      tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\publish\
    C:\Users\scott\Desktop\tst> dotnet .\bin\Debug\netcoreapp2.0\tst.dll
    Hello World!

On my machine, dotnet run is 2.7s, but dotnet tst.dll is 0.04s.

Dotnet publish --self-contained

I could then publish a complete self-contained app - I'm using Windows, so I'll publish for Windows, but you could even build on a Windows machine and target a Mac runtime, etc. - and that will make a \publish folder.

    C:\Users\scott\Desktop\tst> dotnet publish --self-contained -r win10-x64
    Microsoft (R) Build Engine version 15.3 for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.
      tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\win10-x64\tst.dll
      tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\win10-x64\publish\
    C:\Users\scott\Desktop\tst> .\bin\Debug\netcoreapp2.0\win10-x64\publish\tst.exe
    Hello World!

Note in this case I have a "Self-Contained" app, so all of .NET Core is in that folder and below. Here I run tst.exe, not dotnet.exe, because now I'm an end-user. I hope this helps clear things up.

Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!

© 2017 Scott Hanselman. All rights reserved. [...]
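If you want to reproduce the comparison on your own machine, here's a rough C# harness (my sketch, not from the post; the DLL path is hypothetical) that times both invocations by launching them as child processes from the project directory:

    using System;
    using System.Diagnostics;

    class StartupTimer
    {
        static void Main()
        {
            // Run from the project directory. "tst.dll" is a hypothetical
            // output name - substitute your own published DLL.
            Time("dotnet", "run");
            Time("dotnet", @"bin\Debug\netcoreapp2.0\tst.dll");
        }

        static void Time(string fileName, string arguments)
        {
            var sw = Stopwatch.StartNew();
            var psi = new ProcessStartInfo(fileName, arguments) { UseShellExecute = false };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
            Console.WriteLine($"{fileName} {arguments}: {sw.ElapsedMilliseconds} ms");
        }
    }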



Exploring CQRS within the Brighter .NET open source project

Sun, 25 Jun 2017 09:49:00 GMT

There's a ton of cool new .NET Core open source projects lately, and I've very much enjoyed exploring this rapidly growing space. Today at lunch I was checking out a project called "Brighter." It's actually been around in the .NET space for many years and is in the process of moving to .NET Core for greater portability and performance.

Brighter is a ".NET Command Dispatcher, with Command Processor features for QoS (like Timeout, Retry, and Circuit Breaker), and support for Task Queues." Whoa, that's a lot of cool and fancy words. What's it mean? The Brighter project up on GitHub includes a bunch of libraries and examples that you can pull in to support CQRS architectural styles in .NET. CQRS stands for Command Query Responsibility Segregation. As Martin Fowler says, "At its heart is the notion that you can use a different model to update information than the model you use to read information." The Query Model reads and the Command Model updates/validates. Greg Young gives the first example of CQRS here. If you are a visual learner, there's a video from late 2015 where Ian Cooper explains a lot of this at the London .NET User Group, or an interview with Ian Cooper on Channel 9.

Brighter also supports "Distributed Task Queues" which you can use to improve performance when you're using a query or integrating with microservices. When building distributed systems, Hello World is NOT the use case. BUT, it is a valid example in that it strips aside any business logic and shows you the basic structure and concepts.

Let's say there's a command you want to send. The GreetingCommand. A command can be any write or "do this" type command.

    internal class GreetingCommand : Command
    {
        public GreetingCommand(string name) : base(new Guid())
        {
            Name = name;
        }

        public string Name { get; private set; }
    }

Now let's say that something else will "handle" these commands. This is the DoIt() method. Nowhere do we call Handle() ourselves. Similar to dependency injection, we won't be in the business of calling Handle() ourselves; the underlying framework will abstract that away.

    internal class GreetingCommandHandler : RequestHandler<GreetingCommand>
    {
        [RequestLogging(step: 1, timing: HandlerTiming.Before)]
        public override GreetingCommand Handle(GreetingCommand command)
        {
            Console.WriteLine("Hello {0}", command.Name);
            return base.Handle(command);
        }
    }

We then register a factory that takes types and returns handlers. In a real system you'd use IoC (Inversion of Control) dependency injection for this mapping as well. Our Main() has a registry that we pass into a larger pipeline where we can set policy for processing commands. This pattern may feel familiar with "Builders" and "Handlers."

    private static void Main(string[] args)
    {
        var registry = new SubscriberRegistry();
        registry.Register<GreetingCommand, GreetingCommandHandler>();

        var builder = CommandProcessorBuilder.With()
            .Handlers(new HandlerConfiguration(
                subscriberRegistry: registry,
                handlerFactory: new SimpleHandlerFactory()
            ))
            .DefaultPolicy()
            .NoTaskQueues()
            .RequestContextFactory(new InMemoryRequestContextFactory());

        var commandProcessor = builder.Build();
        ...
    }

Once we have a commandProcessor, we can Send commands to it easily and the work will get done. Again, how you ultimately make the commands is up to you.

    commandProcessor.Send(new GreetingCommand("HanselCQRS"));

Methods within RequestHandlers can also have other behaviors associated with them, as in the case of [RequestLogging] on the Handle() method above. You can add other stuff like Validation, Retries, or Circuit Breakers.
The idea is that Brighter offers a pipeline of handlers that can all operate on a Command. The Celery Project is a similar project except written in Python. The Brighter project has stated they have lo[...]
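To show the core dispatcher idea without any framework, here's a stripped-down C# sketch of my own (not Brighter's API): the sender only knows the command, a registry maps command types to handlers, and the single Send() choke point is where pipeline behaviors like logging or retries would wrap the call.

    using System;
    using System.Collections.Generic;

    // A framework-free sketch of command dispatch - not Brighter's API.
    interface ICommand { }

    class Greeting : ICommand
    {
        public string Name { get; set; }
    }

    class Dispatcher
    {
        readonly Dictionary<Type, Action<ICommand>> _handlers =
            new Dictionary<Type, Action<ICommand>>();

        public void Register<T>(Action<T> handler) where T : ICommand
            => _handlers[typeof(T)] = command => handler((T)command);

        // Every command funnels through here - this is where a real
        // dispatcher would wrap logging, retry, or circuit-breaker steps.
        public void Send(ICommand command)
            => _handlers[command.GetType()](command);
    }

    class Program
    {
        static void Main()
        {
            var dispatcher = new Dispatcher();
            dispatcher.Register<Greeting>(g => Console.WriteLine("Hello " + g.Name));
            dispatcher.Send(new Greeting { Name = "HanselCQRS" });
        }
    }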