
Scott Hanselman's Blog

Scott Hanselman on Programming, User Experience, The Zen of Computers and Life in General


Trying out new .NET Core Alpine Docker Images

Wed, 22 Nov 2017 19:27:14 GMT

I blogged recently about optimizing .NET and ASP.NET Docker file sizes. .NET Core 2.0 has previously been built on a Debian image, but today there is a preview image with .NET Core 2.1 nightlies using Alpine. You can read the announcement here about this new Alpine preview image. There's also a good rollup post on .NET and Docker.

They have added two new images:

- 2.1-runtime-alpine
- 2.1-runtime-deps-alpine

Alpine support is part of the .NET Core 2.1 release. .NET Core 2.1 images are currently provided at the microsoft/dotnet-nightly repo, including the new Alpine images. .NET Core 2.1 images will be promoted to the microsoft/dotnet repo when released in 2018.

NOTE: The -runtime-deps- image contains the dependencies needed for a .NET Core application, but NOT the .NET Core runtime itself. This is the image you'd use if your app is a self-contained application that includes a copy of the .NET Core runtime - that is, apps published with -r [runtimeid]. Most folks will use the -runtime- image that includes the full .NET Core runtime. To be clear:

- The runtime image contains the .NET Core runtime and is intended to run Framework-Dependent Deployed applications - see sample
- The runtime-deps image contains just the native dependencies needed by .NET Core and is intended to run Self-Contained Deployed applications - see sample

It's best with .NET Core to use multi-stage build files, so you have one container that builds your app and one that contains the results of that build. That way you don't end up shipping an image with a bunch of SDKs and compilers you don't need.

NOTE: Read this to learn more about image versions in Dockerfiles so you can pick the right tag and digest for your needs. Ideally you'll pick a Dockerfile that rolls forward to include the latest servicing patches.
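For illustration, a self-contained publish targeting the runtime-deps image might be sketched like this. This is an assumption-laden sketch, not the post's Dockerfile: the project name is a placeholder, and the tag names and the `linux-musl-x64` runtime identifier for Alpine are my assumptions about the nightly repo at the time.

```
# Build stage: publish a self-contained app for Alpine (musl libc).
# "myapp" is a hypothetical project name.
FROM microsoft/dotnet-nightly:2.1-sdk as builder
WORKDIR /app
COPY myapp.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -r linux-musl-x64 -o published

# Final stage: runtime-deps carries only native dependencies, no .NET runtime,
# because the self-contained publish output above brings its own runtime.
FROM microsoft/dotnet-nightly:2.1-runtime-deps-alpine
WORKDIR /app
COPY --from=builder /app/published .
CMD ["./myapp"]
```

The design point is the trade: the runtime-deps image is smaller, but every app image you build on it carries a full copy of the runtime, so framework-dependent deployment on -runtime- usually wins when many apps share a host.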
Given this Dockerfile, we build with the SDK image, then publish, and the result is about 219 MB:

```
FROM microsoft/dotnet:2.0-sdk as builder
RUN mkdir -p /root/src/app/dockertest
WORKDIR /root/src/app/dockertest
COPY dockertest.csproj .
RUN dotnet restore ./dockertest.csproj
COPY . .
RUN dotnet publish -c release -o published

FROM microsoft/dotnet:2.0.0-runtime
WORKDIR /root/
COPY --from=builder /root/src/app/dockertest/published .
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp
CMD ["dotnet", "./dockertest.dll"]
```

Then I'll save this as Dockerfile.debian and build like this:

```
> docker build . -t shanselman/dockertestdeb:0.1 -f dockerfile.debian
```

With a standard ASP.NET app this image ends up being 219 MB. Now I'll just change one line and use the 2.1 Alpine runtime:

```
FROM microsoft/dotnet-nightly:2.1-runtime-alpine
```

And build like this:

```
> docker build . -t shanselman/dockertestalp:0.1 -f dockerfile.alpine
```

and compare the two:

```
> docker images | find /i "dockertest"
shanselman/dockertestalp   0.1   3f2595a6833d   16 minutes ago   82.8MB
shanselman/dockertestdeb   0.1   0d62455c4944   30 minutes ago   219MB
```

Nice. About 83 MB now rather than 219 MB for a Hello World web app. Now the idea of a microservice is more feasible! Please do head over to the GitHub issue here and offer your thoughts and results as you test these Alpine images. Also, are you interested in a "-debian-slim"? It would be halfway to Alpine but not as heavy as just -debian.

Lots of great stuff happening around .NET and Docker. Be sure to also check out Jeff Fritz's post on creating a minimal ASP.NET Core Windows Container to see how you can squish (full) .NET Framework applications running on Windows containers as well. For example, the Windows Nano Server images are just 93 MB compressed.

Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.

© 2017 Scott Hanselman. All rights reserved. [...]

Docker and Linux Containers on Windows, with or without Hyper-V Virtual Machines

Mon, 20 Nov 2017 03:24:10 GMT

Containers are lovely, in case you haven't heard. They are a nice and clean way to get a reliable and guaranteed deployment, no matter the host system. If I want to run my ASP.NET Core application, I can just type "docker run -p 5000:80 shanselman/demos" at the command line, and it'll start up! I don't have any concerns that it won't run. It'll run, and run well.

Some container naysayers say, sure, we could do the same thing with Virtual Machines, but even today a VHD (virtual hard drive) is rather an unruly thing and includes a ton of overhead that a container doesn't have. Containers are happening and you should be looking hard at them for your deployments.

Historically on Windows, however, Linux Containers have run inside a Hyper-V virtual machine. This can be a good thing or a bad thing, depending on what your goals are. Running containers inside a VM gives you significant isolation, with some overhead. This is nice for servers but less so for my laptop. Docker for Windows hides the VM for the most part, but it's there. Your container runs inside a Linux VM that runs within Hyper-V on Windows proper.

With the latest version of Windows 10 (or 10 Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want one), so Linux Containers run on Windows itself using Windows 10's built-in container support.

For now you have to switch "modes" between Hyper-V and native containers, and you can't (yet) run Linux and Windows Containers side by side. The word on the street is that this is just a point-in-time thing, and that Docker will at some point support running Linux and Windows Containers in parallel. That's pretty sweet because it opens up all kinds of cool hybrid scenarios. I could run a Windows Server container with a .NET Framework ASP.NET app that talks to a Linux Container running Redis or Postgres.
I could then put them all up into Kubernetes in Azure, for example.

Once I've turned Linux Containers on Windows on within Docker, everything just works and has one less moving part. I can easily and quickly run busybox or real Ubuntu (although Windows 10 already supports Ubuntu natively with WSL). More useful even is to run the Azure Command Line with no install! Just run:

```
docker run -it microsoft/azure-cli
```

and it's running in a Linux Container. I can even run nyancat! (Thanks Thomas!)

```
docker run -it supertest2014/nyan
```

Speculating - I look forward to the day I can run "minikube start --vm-driver=windows" (or something) and easily set up a Kubernetes development system locally using Windows native Linux Container support rather than Hyper-V Virtual Machines, if I choose to.

Sponsor: Why miss out on version controlling your database? It's easier than you think because SQL Source Control connects your database to the same version control tools you use for applications. Find out how. [...]

Announcing Visual Studio and Kubernetes – Visual Studio Connected Environment

Wed, 15 Nov 2017 15:03:44 GMT

I've been having all kinds of fun lately with Kubernetes, exploring building my own Kubernetes cluster on the metal, as well as using a managed Kubernetes cluster in Azure with AKS. Today at the Connect() conference in NYC I was happy to announce Visual Studio Connected Environment. How would one take the best of Visual Studio and the best of managed Kubernetes and create something useful for development teams?

Ecosystem momentum behind containers is amazing right now, with support for containers across clouds, operating systems, and development platforms. Additionally, while microservices as an architectural pattern have been around for years, more and more developers are discovering their advantages every day.

You can check out videos of the Connect() conference online, but you should check out my practice video where I show a live demo of Kubernetes in Visual Studio.

The buzzword "cloud native" is thrown around a lot. It's a meaningful term, though, as it means "architecture with the cloud in mind." Applications that are cloud-native should consider these challenges:

- Connecting to and leveraging cloud services - Use the right cloud services for your app; don't roll your own DB, Auth, Discovery, etc.
- Dealing with complexity and staying cognizant of changes - Stubbing out copies of services can increase complexity and hide issues when your chain of invocations grows. K.I.S.S.
- Setting up and managing infrastructure and dealing with changing prerequisites - Even though you may have moved to containers for production, is your dev environment as representative of prod as possible?
- Establishing consistent, common environments - Setting up private environments can be challenging, and it gets messier when you need to manage your local env, your team dev, staging, and ultimately prod.
- Adopting best practices such as service discovery and secrets management - Keep secrets out of code; this is a solved problem. Service discovery and lookup should be straightforward and reliable in all environments.

A lot of this reminds us to use established and mature best practices, and to avoid re-inventing the wheel when one already exists.

The announcements at Connect() are pretty cool because they're extending both VS and the Azure cloud to work the way devs work AND the way devops works. They're extending the developers' IDE/editor experience into the cloud, with services built on top of the container orchestration capabilities of Kubernetes on Azure. Through Visual Studio, VS Code, and Visual Studio for Mac, and through a CLI (command line interface), they'll initially support .NET Core, Node, and Java on Linux. As Azure adds more support for Windows containers in Kubernetes, they'll enable .NET Full Framework applications. Given the state of Windows containers support in the platform, the initial focus is on green-field development scenarios, but lift-shift-and-modernize will come later.

It took me a moment to get my head around it (be sure to watch the video!) but it's pretty amazing. Your team has a shared development environment, with your containers living in, and managed by, Kubernetes. However, you also have your local development machine, which can then reserve its own spaces for those services and containers that you're working on. You won't break the team with the work you're doing, but you'll be able to see how your services work and interact in an environment that is close to how it will look in production. PLUS, you can F5 debug from Visual Studio or Visual Studio Code and debug, live in the cloud, in Kubernetes, as fast as you could locally.

This positions Kubernetes as the underlayment for your containers, with the backplane managed by Azure/AKS, and the development experience behaving the way it always has. You use Visual Studio, or Visual Studio Code, or the command line, and you use the languages and platforms that you prefer. In the demo[...]

Lightweight bundling, minifying, and compression, for CSS and JavaScript with ASP.NET Core and Smidge

Wed, 08 Nov 2017 21:07:56 GMT

Yesterday I blogged about WebOptimizer, a minifier that Mads Kristensen wrote for ASP.NET Core. A few people mentioned that Shannon Deminick also has a great minifier for ASP.NET Core. Shannon has a number of great libraries on his GitHub, including not just "Smidge" but also Examine, an indexing system; ClientDependency, for managing all your client-side assets; and Articulate, a blog engine built on Umbraco.

Often when there's more than one way to do things, and one of the ways is made by a Microsoft employee like Mads - even if it's in his spare time - it can feel like inside baseball or an unfair advantage. The same would apply if I made a node library but a node core committer also made a similar one. Many things can affect whether an open source library "pops," and it's not always merit. Sometimes it's locale/location, niceness of docs, marketing, word of mouth, or the website. Both Mads and Shannon and a dozen other people are all making great libraries and useful stuff. Sometimes people are aware of other projects and sometimes they aren't. At some point a community wants to "pick a winner," but even as I write this blog post, someone else we haven't met yet is likely making the next great bundler/minifier. And that's OK!

I'm going to take a look at Shannon Deminick's "Smidge" in this post. Smidge has been around as a runtime bundler since the beginning of ASP.NET Core, even back when DNX was a thing, if you remember that. Shannon's been updating the library as ASP.NET Core has evolved, and it's under active development.
Smidge supports minification, combination, and compression for JS/CSS files, and features a fluent syntax for creating and configuring bundles. I'll start from "dotnet new mvc" and then:

```
C:\Users\scott\Desktop\smidgenweb>dotnet add package smidge
  Writing C:\Users\scott\AppData\Local\Temp\tmp325B.tmp
info : Adding PackageReference for package 'smidge' into project 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.
log  : Restoring packages for C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj...
...SNIP...
log  : Installing Smidge ...
info : Package 'smidge' is compatible with all the specified frameworks in project 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.
info : PackageReference for package 'smidge' version '3.0.0' added to file 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.
```

Then I'll update appSettings.json (where logging lives) and add Smidge's config:

```
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "smidge": {
    "dataFolder": "App_Data/Smidge",
    "version": "1"
  }
}
```

Let me squish my CSS, so I'll make a bundle:

```
app.UseSmidge(bundles =>
{
    bundles.CreateCss("my-css", "~/css/site.css");
});
```

I refer to the bundle by name, and the Smidge tag helper turns the bundle reference into a link tag. Notice the generated filename with the version embedded. That bundle could be one or more files, a whole folder, whatever you need. Here you can see Kestrel handling the request. Smidge jumps in there and does its thing, then the bundle is cached for the next request!

```
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method Smidge.Controllers.SmidgeController.Bundle (Smidge) with arguments (Smidge.Models.BundleRequestModel) - ModelState is Valid
dbug: Smidge.Controllers.SmidgeController[0]
      Processing bundle 'my-css', debug? False ...
dbug: Smidge.FileProcessors.PreProcessManager[0]
      Processing file '/css/site.css', type: Css, cacheFile: C:\Users\scott\Desktop\smidgenweb\App_Data\Smidge\Cache\SONOFHEXPOWER\1\bb8368ef.css, watching? False ...
dbug: Smidge.FileProcessors.PreProcessManager[0]
      Processed file '/css/site.css' in 19ms
dbug: Smidge.Controllers.SmidgeController[0]
      Processed bundle 'my-css' in 73ms
info: Microsoft.AspNetCore.Mvc.Internal.VirtualFileResultExecutor[1]
      Executing FileResult, sendi[...]
```
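For context, the UseSmidge bundle registration assumes Smidge's services were also added at startup. A minimal sketch, assuming the standard MVC-template Startup with an injected IConfiguration (the AddSmidge overload taking a config section follows Smidge's README; treat the exact signature as an assumption for your installed version):

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Hand Smidge the "smidge" section from appSettings.json shown earlier,
    // so dataFolder and version come from config rather than code.
    services.AddSmidge(Configuration.GetSection("smidge"));
}
```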

WebOptimizer - a Bundler and Minifier for ASP.NET Core

Wed, 08 Nov 2017 09:05:12 GMT

ASP.NET Core didn't ship with a runtime bundler like previous versions of ASP.NET. This was a bummer, as I was a fan. Fortunately Mads Kristensen created one and put it on GitHub, called WebOptimizer: ASP.NET Core middleware for bundling and minification of CSS and JavaScript files at runtime, with full server-side and client-side caching to ensure high performance.

I'll try it out on a default ASP.NET Core 2.0 app. First, assuming I've installed the .NET Core SDK, I'll run:

```
C:\Users\scott\Desktop> cd squishyweb

C:\Users\scott\Desktop\squishyweb> dotnet new mvc
The template "ASP.NET Core Web App (Model-View-Controller)" was created successfully.
This template contains technologies from parties other than Microsoft, see ... for details.
...SNIP...
Restore succeeded.
```

Then I'll add a reference to the WebOptimizer package. Be sure to check the versioning and pick the one you want, or use the latest:

```
C:\Users\scott\Desktop\squishyweb> dotnet add package LigerShark.WebOptimizer.Core --version 1.0.178-beta
```

Add the service in ConfigureServices, and add the middleware (I'll do it conditionally, only when in Production) in Configure. Notice I had to put it before UseStaticFiles() because I want it to get the first chance at those requests.

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddWebOptimizer();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseWebOptimizer();
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```

After running "dotnet run" I'll request site.css as an example and see it's automatically minified.

You can control the pipeline with globbing like this:

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddWebOptimizer(pipeline =>
    {
        pipeline.MinifyJsFiles("js/a", "js/b", "js/c");
    });
}
```

If I wanted to combine some files into an output "file" that'll be held/cached only in memory, I can do that also. To be clear, it'll never touch the disk; it's just a URL. Then I can just refer to it with a link tag within my Razor page or main layout:

```
services.AddWebOptimizer(pipeline =>
{
    pipeline.AddCssBundle("/css/mybundle.css", "css/*.css");
});
```

WebOptimizer also supports automatic "cache busting" with a ?v= query string created by a TagHelper. It can even compile Scss (Sass) files into CSS. There are plugins for TypeScript, Less, and Markdown too!

WebOptimizer is open source and lives on GitHub. Go check it out, kick the tires, and see if it meets your needs! Maybe get involved and make a fix or help with the docs! There are already some open issues you could start helping with.

Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial! [...]
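To make the cache-busting behavior concrete, here's a sketch of the markup before and after the TagHelper runs. The hash value is made up for illustration; the exact query-string format depends on the WebOptimizer version you install:

```
<!-- What you write in your Razor layout -->
<link rel="stylesheet" href="/css/site.css" />

<!-- Roughly what is rendered: a version token derived from the file contents,
     so browsers re-download only when the CSS actually changes -->
<link rel="stylesheet" href="/css/site.css?v=Ynfdc1npLfLWyYvhPPXw" />
```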

The perfect Nintendo Switch travel set up and recommended accessories

Thu, 02 Nov 2017 18:02:38 GMT

I've had a Nintendo Switch since launch day and let me tell you, it's joyful. Joyous. It's a little joy device. I love 4k Xboxen and raw power as much as the next Jane or Joe Gamer, but the Switch just keeps pumping out happy games. Indie games, Metroidvania games like Axiom Verge, Legend of Zelda: Breath of the Wild (worth the cost of the system), and now Super Mario Odyssey. Even Doom and Wolfenstein 2 are coming to the Switch soon!

I've already travelled all over with my Switch. Here's what I've come up with for my travels - and my at-home Switch experience. I own and use these items personally - and I vouch for their awesomeness and utility.

"Y'all. #NintendoSwitch and #SuperMario on a plane is GLORIUS. Even better with a stand, tiny Bluetooth Adapter, and Airpods. Blog soon." - Scott Hanselman (@shanselman) November 1, 2017

Bluetooth Adapter

This TaoTronics Bluetooth adapter fixes the most obvious problem with the Switch - no Bluetooth headset support. If there is ever a Switch 1.5 release, you can bet they'll add Bluetooth. This device is great for a few reasons. It's small, it has its own rechargeable battery, it charges with micro USB, and it supports both transmit and receive. That's an added bonus in that it lets you turn any speakers with a 1/8" headphone jack into a BT speaker. Again, it's tiny and fits in my Switch case. I pair my AirPods with this device by putting the AirPods into pairing mode with the case button, then holding down the pairing button on this adapter, which promiscuously pairs. Works great.

Switch Traveler's Case

I have a Zelda version of this case. It's very roomy and I can fit a 3rd-party stand, a dozen cartridges, the BT adapter, headphones, screen wipes, and more inside. There are a number of options and styles past the link, including character cases.

Switch Joy-Con Gel Covers

These gel covers - or ones like them - are essential.
The Switch Joy-Cons are great for children's hands, but for normal/larger-sized people they are lacking something. It's not the cover, it's the extra depth these gel covers give you. I can't use the Switch without them.

HORI Compact Playstand

This is an airplane must. I want to use my Pro Controller on a plane - or at least detached Joy-Cons - so ideally I want the Switch to stand on its own. The Switch does have its own kickstand, but honestly, it's flimsy. It works when the world isn't moving, but the angle is wrong and it tips over easily on a plane. This playstand folds flat, fits in the case above, and is very adjustable. It also works great to hold your phone or small tablet for watching movies, so it ends up playing double duty. Plus, it's $12.

Switch Grip Kit

This one is optional UNLESS you have little kids and Mario Kart. When you're using Switch Joy-Cons as individual controllers, again, they are small. These turn them into tiny Xbox-style controllers. They are plastic holsters, but the kids love them.

HDMI Type C USB Hub Adapter for Switch

This can replace your not-portable Switch Dock. I didn't believe it would work, but it's great. I can also fit this tiny dongle in my Switch case, and along with an HDMI cable and the existing Switch power adapter I can plug the Switch into any hotel TV with HDMI. It's an amazing thing to be able to game in a hotel on a long business trip with minimal stuff to carry.

BASSTOP Portable Switch Dock

Another docking option that requires some assembly and disassembly on your part is this portable dock. It's not a dock itself, it's just the plastic shell. You'll need to take apart your existing giant dock and discover it's all air. The internals of the official dock then fit inside this one.

What are YOUR must-have Switch accessories? And more important, WHY HAVE YOU NO BUY SWITCH?

* My blog often uses Amazon affiliate links. I use that money for tacos and Switch games. Please click on them and support my blog!

Sponsor[...]

Optimizing ASP.NET Core Docker Image sizes

Tue, 31 Oct 2017 18:03:46 GMT

There is a great post from Steve Laster in 2016 about optimizing ASP.NET Docker image sizes. Since then Docker has added multi-stage build files, so you can do more in one Dockerfile...which feels like one step even though it's not. Containers are about easy and reliable deployment, and they're also about density. You want to use as little memory as possible, sure, but it's also nice to make them as small as possible so you're not spending time moving them around the network. The size of the image file can also affect startup time for the container. Plus, it's just tidy.

I've been building a little 6-node Raspberry Pi (ARM) Kubernetes cluster on my desk - like you do - this week, and I noticed that my image sizes were a little larger than I'd like. This is a bigger issue because it's a relatively low-powered system, but again, why carry around unnecessary megabytes if you don't have to?

Alex Ellis has a great blog on building .NET Core apps for Raspberry Pi, along with a YouTube video. In his video and blog he builds a "Console.WriteLine()" console app, which is great for OpenFaaS (an open source serverless platform), but I wanted to also have ASP.NET Core apps on my Raspberry Pi k8s cluster. He included this as a "challenge" in his blog, so challenge accepted! Thanks for all your help and support, Alex!

ASP.NET Core on Docker (on ARM)

First I make a basic ASP.NET Core app. I could do a Web API, but this time I'll do an MVC one with Razor Pages. To be clear, they are the same thing, just with different starting points. I can always add pages or add JSON to either, later. I start with "dotnet new mvc" (or dotnet new razor, etc.).
I'm going to be running this in Docker, managed by Kubernetes, and while I can always change the WebHost in Program.cs to change how the Kestrel web server starts up, like this:

```
WebHost.CreateDefaultBuilder(args)
    .UseUrls("http://*:5000;http://localhost:5001;https://hostname:5002")
```

for Docker use cases it's easier to change the listening URL with an environment variable. Sure, it could be 80, but I like 5000. I'll set the ASPNETCORE_URLS environment variable to http://+:5000 when I make the Dockerfile.

Optimized Multi-Stage Dockerfile for ASP.NET

There are a number of "right" ways to do this, so you'll want to think about your scenarios. You'll see below that I'm using ARM (because Raspberry Pi), so if you see errors running your container like "qemu: Unsupported syscall: 345" then you're trying to run an ARM image on x86/x64. I'm going to be building an ARM container from Windows, but I can't run it here. I have to push it to a container registry, then tell my Raspberry Pi cluster to pull it down and THEN it'll run, over there.

Here's what I have so far. NOTE there are some things commented out, so be conscious. This is/was a learning exercise for me. Don't copy/paste unless you know what's up! And if there's a mistake, here's a GitHub Gist of my Dockerfile for you to change and improve.

It's important to understand that .NET Core has an SDK with build tools and development kits and compilers and stuff, and then it has a runtime. The runtime doesn't have the "make an app" stuff, it only has the "run an app" stuff. There is not currently an SDK for ARM, so that's a limitation that we are (somewhat elegantly) working around with the multi-stage build file. But even if there WAS an SDK for ARM, we'd still want to use a Dockerfile like this, because it's more efficient with space and makes a smaller image.

Let's break this down. There are two stages. The first FROM is the SDK image that builds the code.
We're doing the build inside Docker - which is a lovely, reliable way to do builds.

PRO TIP: Docker is smart about making intermediate images and doing the least work, but it's useful if we (the authors) do the right thing as well to help it out. For example, see where we COPY the .csproj over and then do a "dotnet rest[...]
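The layer-caching pattern being described can be sketched like this. This is a generic example, not the post's exact (ARM) Dockerfile; the image tag and project name are placeholders:

```
FROM microsoft/dotnet:2.0-sdk as builder
WORKDIR /app

# Copy ONLY the project file first: this layer, and the restore layer below,
# stay cached and are re-used as long as the .csproj doesn't change.
COPY myapp.csproj .
RUN dotnet restore

# Copying the rest of the source afterwards means day-to-day code edits
# invalidate only the layers from here down - not the package restore.
COPY . .
RUN dotnet publish -c Release -o published
```

The design choice is ordering instructions from least- to most-frequently changing, so a one-line code change doesn't force Docker to re-download every NuGet package.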

How to Build a Kubernetes Cluster with ARM Raspberry Pi then run .NET Core on OpenFaas

Sat, 28 Oct 2017 00:32:14 GMT

First, why would you do this? Why not. It's awesome. It's a learning experience. It's cheaper to get 6 Pis than six "real computers." It's somewhat portable. While you can certainly quickly and easily build a Kubernetes cluster in the cloud within your browser using a Cloud Shell, there's something more visceral about learning it this way, IMHO. Additionally, it's a non-trivial little bit of power you've got here. This is also a great little development cluster for experimenting. I'm very happy with the result.

By the end of this blog post you'll have not just Hello World, but a Cloud Native Distributed Containerized RESTful microservice based on ARMv7 w/ k8s Hello World...as a service. (original Tweet). ;)

Not familiar with why Kubernetes is cool? Check out Julia Evans' blog and read her k8s posts and you'll be convinced!

Hardware List (scroll down for Software)

Here's your shopping list. You may have a bunch of this stuff already. I had the Raspberry Pis and SD cards already.

- 6 - Raspberry Pi 3 - I picked 6, but you should have at least 3 or 4. One Boss/Master and n workers. I did 6 because it's perfect for the power supply, perfect for the 8-port hub, AND it's a big but not unruly number.
- 6 - Samsung 32Gb Micro SDHC cards - Don't be too cheap. Faster SD cards are better.
- 2x6 - 1ft flat Ethernet cables - Flat is the key here. They are WAY more flexible. If you try to do this with regular 1ft cables you'll find them inflexible and frustrating. Get extras.
- 1 - Anker PowerPort 6 Port USB Charging Hub - Regardless of this entire blog post, this product is amazing. It's almost the same physical size as a Raspberry Pi, so it fits perfectly at the bottom of your stack. It puts out 2.4A per port AND (wait for it) it includes SIX 1ft Micro USB cables...perfect for running 6 Raspberry Pis with a single power adapter.
- 1 - 7 layer Raspberry Pi Clear Case Enclosure - I only used 6 of these layers, which is cool. I love this case, and it looks fantastic.
- 1 - Black Box USB-Powered 8-Port Switch - This is another amazing and AFAIK unique product. An overarching goal for this little stack is that it be easy to move around and set up, but also easy to power. We have power to spare, so I'd like to avoid a bunch of "wall warts" or power adapters. This is an 8-port switch that can be powered over a Raspberry Pi's USB. Because I'm giving up to 2.4A to each micro USB, I just plugged this hub into one of the Pis and it worked no problem. It's also...wait for it...the size of a Pi. It also includes magnets for mounting.
- 1 - Some Small Router - This one is a little tricky and somewhat optional. You can just put these Pis on your own WiFi and access them that way, but you need to think about how they get their IP addresses. Who doles out IPs via DHCP? Static leases? Static IPs completely? The root question is: how portable do you want this stack to be? I propose you give them their own address space and their own router that you then use to bridge to other places. The easiest way is with another router (you likely have one lying around, as I did). It could be any router...and remember hub/switch != router. Here is a bad network diagram that makes the point, I think. The idea is that I should be able to go to a hotel or another place and just plug the little router into whatever external internet is available and the cluster will just work. Again, not needed unless portability matters to you as it does to me. You could ALSO possibly get this to work with a travel router, but then the external internet it consumed would be just WiFi, and your other clients would get on your network subnet via WiFi as well. I wanted the relative predictability of wired. What I WISH existed was a small router - similar to that little 8-port hub - that was powered off USB and had an internal and an external Ethernet port. This ZyXEL Travel Router is very optional - [...]

Recovering from the Windows 10 Insiders Fast 17017 volsnap.sys reboot GSOD/BSOD

Thu, 26 Oct 2017 04:44:49 GMT

NOTE: I'm not involved with the Windows Team or the Windows Insider Program. This blog is my own and written as a user of Windows. I have no inside information. I will happily correct this blog post if it's incorrect. Remember, don't just do stuff to your computer because you read it on a random blog. Think first, back up always, then do stuff. Beta testing is always risky.

The Windows Insiders Program lets you run regular early builds of Windows 10. There are multiple "rings" like Slow and Fast, depending on your risk tolerance and bandwidth. I run Fast, and maybe twice a year there's something bad-ish that happens, like a bad video driver or an app that doesn't work, but it's usually fixed within a week. It's the price I pay for happily testing new stuff. There's the Slow ring, which is more stable and updates like once a month vs. once a week. That ring is more "baked."

This last week, as I understand it, a nasty bug made it out to Fast for some number of people (not everyone, but enough that it sucked), myself included. I don't reboot my Surface Book much, maybe twice a month, but I did yesterday while preparing for the DevIntersection conference, and suddenly my main machine was stuck in a "Repairing Windows" reboot loop. It wouldn't start, wouldn't repair. I was FREAKING out. Other people I've seen report a Green Screen of Death (GSOD/BSOD) loop with an error in volsnap.sys.

TO FIX IT

The goal is to get rid of the bad volsnap.sys from Windows 10 Insiders build 17017 and replace that one file with a non-broken version from a previous build. That's your goal. There are a few ways to do this, so you need to put some thought into how you want to do it.

NOTE: At the time of this writing, Fast Build 17025 is rolling out and fixes this, so if you can take that build you're cool, and no worries. Do it.

1. Can you boot Windows 10 off something else? USB/DVD?

Can you boot off something else, like another Windows 10 USB key or a DVD?
Boot off your recovery media as if you're re-installing Windows 10, BUT DO NOT CLICK INSTALL. You may need special keystrokes to boot off your USB key. On Lenovo it's F2 or F10. On Surfaces it's Power+Volume Down. Go search for "boot off usb MANUFACTURER NAME" for your computer! Don't have a copy of Windows 10 you can put on an 8 gig or larger USB? You can use the download tool on another machine to make a bootable Windows 10 USB.

When you've run Windows 10 Setup, instead click Repair, then Troubleshoot, then Command Prompt. It's especially important to get to the Command Prompt this way rather than pressing Shift-F10 as you enter setup, because this path will allow you to unlock your possibly BitLockered C: drive.

NOTE: If your boot drive is BitLockered, you'll need another machine or your phone to look up your computer's Recovery Key. You'll enter this after you press Troubleshoot, and it will allow you to access your now-unencrypted drive from the command prompt.

At this point all your drive letters may be weird. Take a moment and look around. Your USB key may be X: or Z:. Your C: drive may be D: or E:.

2. Do you have an earlier version of volsnap.sys? Find it.

If you've been taking Windows Insiders builds/flights, you may have a C:\Windows.old folder. Remembering to be conscious of your drive letters, you want to rename the bad volsnap.sys and copy in the old one from elsewhere. In this example, I get it from C:\Windows.old:

```
ren C:\windows\system32\drivers\volsnap.sys volsnap.sys.bak
copy C:\windows.old\windows\system32\drivers\volsnap.sys C:\windows\system32\drivers\volsnap.sys
```

Unfortunately, *I* didn't have a C:\Windows.old folder, as I had used Disk Cleanup to get more space. I found a good volsnap.sys from another machine in my house and copied it to the root of the USB key I booted off of. In th[...]